Published Time: 2024-07-04T12:00:00+00:00

Everything You Need To Know About Stalemate In Chess
The word "stalemate" has penetrated everyday usage in the English language, referring to a standoff in which neither opposing side can win. But originally, "stalemate" comes from chess! In this article, we cover everything you need to know about stalemate in chess: what it is, and how it works.

Image from chess.com

The Rules: Stalemate = A Draw In Chess

"Stalemate" is the situation where one player cannot make a legal move, but their king is not in check. An example is shown below.

Black to move: the Black king is stalemated.

Note that the Black king is not in check. However, the two White queens cover every other square it could potentially move to! The Black king cannot legally go to g8, g7, or h7. Therefore, a stalemate has arisen. According to the rules, stalemate is a draw in chess.

Related: Making Use of Draws In Chess.

In the example above, it does not matter that White has two extra queens, or that Black's king would be captured no matter where it went. The game immediately ends in a draw. No doubt, the player with the White pieces would be rather upset about this! Having two extra queens should, of course, result in a win. But the rules are the rules.

That is why it is so important to know about stalemate in chess, and to be careful right to the end! When your opponent is losing, they may look for stalemating tactics to salvage a draw from an otherwise hopeless position. After all, we all want to avoid losing in chess.

Ten Common Stalemates In Chess You Should Know

One of the best ways to master stalemate in chess is to know the important stalemating patterns. That way, once you reach the endgame, you will be better equipped to look out for stalemate ideas, so that you can:

- Avoid them, if you are winning the game.
- Seek them out, if you are trying to get a draw from a worse position.

Related: 100 Endgames You Must Know, available from the USCF store.

Here is a list of ten of the most important stalemate patterns from the chess endgame.

#1: The King And Pawn vs. King Stalemate

This is one of the most common chess endgames: a king and pawn vs. a lone king.
Black to move: the Black king has no legal squares to go to - a stalemate.

The White pawn cannot be captured because it is defended by the White king. The d8 and f8 squares are controlled by the pawn, while d7 and f7 are controlled by the White king.

#2: The King And Knight-File Pawn vs. King Stalemate

This stalemate is unique to positions where the strong side's pawn is on a knight file (i.e. the b-file or g-file).

Black to move: the Black king has no legal squares to go to - a stalemate.

The Black player has cunningly put their king in the corner, giving White the chance to fall into their stalemate trap. White has been careless, and now the Black king does not have a square it can move to. It is a stalemate, and therefore a draw.

The player with the extra pawn can usually avoid this stalemate by forcing the weak king to the open side of the board (away from the corner). Still, many wins have been squandered with this stalemate motif!

#3: The King And Rook-File Pawn vs. King Stalemate

A rook-file pawn (i.e. a pawn on the a-file or h-file) is well known to give the defending side extra drawing resources in the endgame - including this stalemate pattern.

Black to move can force a stalemate.

Here, Black can secure the draw immediately by stalemating White's king. After Kc7, White's king cannot legally move to either b7 or b8, and the White pawn is also immobilized, with its path blocked by its own king. Stalemate is reached, and the game is officially a draw.

Related: Silman's Complete Endgame Course, available from the USCF store.

#4: The Bishop + Wrong Rook-File Pawn vs. King Stalemate

This is one of the most useful chess endgames to know. To the untrained eye, the position below looks absolutely hopeless for Black, with White having an extra bishop and an extra pawn. But thanks to stalemating motifs, Black can hold the draw, so long as they keep their king in the corner.

No matter whose move it is, the game should be a draw due to the wrong-colored White bishop.

The point is that White's bishop is the opposite color to the pawn's promotion square: White has a light-squared bishop, while h8 is a dark square. This means the bishop cannot force the Black king out of the corner. If White instead had a dark-squared bishop, it would be a totally different story.

Let's say it is White's turn to move. If White advances the king with Kh6 or Kg6, the Black king is stalemated immediately. Whereas if White makes a bishop move (e.g. Bd3), the Black king can simply go to g7 - shuffling back and forth between g7 and h8 - and White cannot make progress.

#5: The Queen vs. 7th-Rank Rook-File Pawn Stalemate

This is another endgame position which looks deceptively simple, but contains stalemate ideas. A queen vs. a single pawn is usually easily winning for the player with the queen - but if:

- the pawn can reach the 7th rank,
- the weak side's king is nearby, and
- the strong side's king is far away,

then the player with the pawn should not resign just yet! Such a situation has arisen in the position below.

White to move must move their queen away to avoid stalemate, giving the Black king time to threaten pawn promotion.

With the pawn on the rook file (i.e. the a-file or h-file), the weak king can place itself in the corner - putting itself at "risk" of stalemate.
With White to move in the above position, White does not have time to bring their king closer - they must move their queen away to free a square for the Black king so that it is not stalemated right away. Let's say White plays Qh6. Now Black can play Kg1.

White to move - Black threatens to promote their pawn with h1=Q.

Now Black has a different threat: promoting the pawn to a queen, whereupon the material balance becomes equal. White's only way of preventing this is to give a check - in which case Black's king will simply go back to the corner and renew the stalemate threat. Much to White's dismay, the game will end in a draw.

#6: The Queen vs. 7th-Rank Bishop-File Pawn Stalemate

A 7th-rank pawn on the bishop's file (i.e. the c-file or f-file) can also present the weak side with stalemating ideas if their king can go to the corner and the strong side's king is far away.

White to move. They cannot play Qxf2, because Black's king would be stalemated.

White can try moving their king closer, but Black will then have time to move their king back to g1 or g2 to again threaten promotion. The player with the Black pieces still has some work to do to prove that this position is, in fact, a draw, but the ability of the Black king to go to the corner on h1 to prevent Qxf2 due to stalemate is important to know.

Related: Dvoretsky's Endgame Manual, available from the USCF store.

#7: The Rook vs. 7th-Rank Rook-File Pawn Stalemate

With a 7th-rank pawn on the rook file (i.e. the a-file or h-file), stalemate motifs can also help the weak side secure a draw against a rook. In the position below, Black currently has no legal moves.

White to move. White must move their rook away from the g-file in order to give Black's king a square to move to, so as not to deliver stalemate immediately.

As soon as White's rook vacates the g-file, Black's king can go to g2 or g1 to threaten promotion of the pawn.

White to move. Now Black threatens h1=Q.

If White's rook gives a check with Rg7+, the Black king goes back to the corner with Kh1, again threatening stalemate. There is nothing White can do other than agree to a draw.

#8: The Rook vs. King Stalemate

A rather special case, but nonetheless one which is useful to know. A sneaky opponent might tempt you into making a capture that results in a position similar to the one shown below, or you could blunder into it in time trouble if you are not on the lookout for it!

Black to move: the Black king has no legal squares to go to - a stalemate.

The White rook cannot be captured, while the a2 and b1 squares are both impossible for the Black king to move to.

#9: The Queen vs. King Stalemate

A king and queen vs. a lone king should, of course, be winning for the strong side. But in time trouble, it is possible to throw away half a point due to carelessness. It has happened countless times in bullet and blitz chess!

Black to move: the Black king has no legal squares to go to - a stalemate.

Be especially careful once your opponent's king reaches the edge of the board. When your opponent is down a whole queen, they expect to lose - and stalemate is literally their only hope of stealing a draw.

#10: The Rampant Stalemate

The final stalemate to know is not so much a "pattern" as a tactical "idea".
When a player's king is immobilized, and so are all but one of the rest of their forces, they can try to force their opponent into capturing that last remaining piece which still has legal moves. If that last mobile piece is captured, a stalemate arises. For example, the position below occurred in Sowray vs. Williams, 2011.

White to move - only the White queen has legal moves.

Down a rook and two bishops, White played the stunning Qg6+!, offering Black the chance to capture his queen. However, if Black were to capture with Kxg6, it would be stalemate. So Black instead played Kg8, whereupon White continued attempting to sacrifice his own queen with Qf7+.

Black to move. The Black king cannot escape the checks without capturing the White queen.

With the game sure to end either in a draw by stalemate or a draw by threefold repetition, Black was forced to acquiesce to sharing the point.

Related: The Complete Chess Swindler, available from the USCF store.

Final Thoughts: Stalemate In The Chess Endgame

Stalemate in chess is a theme that all strong chess players should be aware of. By studying these stalemate patterns and looking out for them in your own games, you can salvage many vital half-points from losing positions - and, just as importantly, avoid throwing away half-points in winning positions! Remember: if your opponent has not yet resigned, they may be pinning their last hopes on stalemate.

FAQ: Stalemate In Chess

What Is Stalemate In Chess?

A stalemate in chess is the situation where one side cannot make any legal move on its turn, and their king is not in check. When this happens, the game is a draw.

Is Stalemate A Win Or A Draw?

Stalemate in chess is a draw. Regardless of how many extra pieces one player has, if either player cannot make a legal move on their turn (and their king is not in check), the game immediately ends as a draw.

What Is An Example Of Stalemate?

One example is the famous king and pawn vs. king stalemate: the pawn is one square away from promoting, the defending king blocks the promotion square, and the attacking king stands immediately behind its pawn. If it is the defending player's turn to move, their king has no legal moves, and therefore stalemate has arisen. The game is declared drawn.

Is Stalemate Fair?

When one player is losing the game, going for a stalemate is a completely legitimate and fair way of attempting to earn a draw. It is not considered "unsporting" to play for a stalemate - it is all part of the game! It is up to the player winning the game to avoid their opponent's attempts to stalemate.

How Do I Avoid Stalemate?

First, be aware. If you are winning the game, always be on the lookout for your opponent's attempts to put themselves in a stalemate - especially in an endgame. If your opponent is in a losing position, stalemate may be their last, desperate hope to steal a draw. Second, study the most important stalemate patterns so that you can learn to recognize stalemate in your own games. A quick way to verify a position is shown in the sketch below.
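As an illustration of the definition above (not part of the original article), here is a minimal sketch using the python-chess library; the library itself is an assumption on our part, and any engine or GUI will report the same verdict. It sets up the king-and-pawn stalemate from pattern #1 and confirms that the side to move is not in check and has no legal moves.

```python
# Minimal check of the stalemate definition using the python-chess library
# (assumed installed: pip install python-chess).
import chess

# Pattern #1: White king e6, White pawn e7, Black king e8, Black to move.
# Black is not in check, but every square around the Black king is covered.
board = chess.Board("4k3/4P3/4K3/8/8/8/8/8 b - - 0 1")

print(board)                                                  # ASCII diagram of the position
print("Black is in check?     ", board.is_check())            # False
print("Black has legal moves? ", any(board.legal_moves))      # False
print("Stalemate (drawn)?     ", board.is_stalemate())        # True
```

Any of the other patterns in this article can be checked the same way by pasting in the position's FEN string.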
Related posts:

- Must-Know Chess Endgames for Beginners: How to Checkmate
- Pawn Endgame in Chess: Getting It Right!
- Must-Know Rook Endgames in Chess
- Make Your New Chess Tactic Mantra "Don't Lose!"

Categories: Chess Endgames
Tags: Draw, Chess tactics & chess stalemate
Posted By: Nathan Rose
Bretherton Amath 568, Section 8: Singular Perturbation Theory

Recall that a singular perturbation involves a small parameter ε which changes the order or degree of the problem so as to introduce a new class of solutions not present in the unperturbed problem. The general strategy is:

(1) Use the Method of Dominant Balance (described below, using an example) to find a 'leading-order' approximation y0(x; ε) which asymptotically approaches the true solution y(x) as ε → 0, denoted y0(x; ε) ~ y(x) as ε → 0, which means

\[ \lim_{\varepsilon \to 0} \frac{y(x;\varepsilon)}{y_0(x;\varepsilon)} = 1 \quad \text{for all } x. \]

For instance sin εx ~ εx as ε → 0. For a regular perturbation problem this would just be the solution to the unperturbed problem, but there will be additional solutions in a singular perturbation problem. If the problem has no x dependence, just ignore the x's in the above.

(2) If more accuracy is desired, seek a perturbation series solution, which usually takes the form

\[ y(x;\varepsilon) \sim y_0(x;\varepsilon)\left\{ 1 + \varepsilon^{q} y_1(x) + \varepsilon^{2q} y_2(x) + \cdots \right\}. \]

This is an asymptotic series. That is, if we let

\[ y_{\le n}(x;\varepsilon) = y_0(x;\varepsilon)\left\{ 1 + \varepsilon^{q} y_1(x) + \varepsilon^{2q} y_2(x) + \cdots + \varepsilon^{nq} y_n(x) \right\} \]

be the n'th partial sum of the series, then

\[ \lim_{\varepsilon \to 0} \frac{y(x;\varepsilon) - y_{\le n}(x;\varepsilon)}{\varepsilon^{nq}\, y_0(x;\varepsilon)\, y_n(x)} = 0, \]

i.e. the error is asymptotically smaller than the last term retained. An asymptotic series need not converge for any ε > 0! We'll see some good examples of this later when we find asymptotic series for the solutions of ODEs around irregular singular points, but for now I'll leave you to wonder how this could be, and how such series could be useful.

Example S1

Find perturbation series for the roots of

\[ p(r;\varepsilon) = \underbrace{\varepsilon r^{3}}_{A} + \underbrace{r^{2}}_{B} \underbrace{-\,1}_{C} = 0, \qquad \varepsilon \ll 1. \tag{S1} \]

This is a singular perturbation problem because the unperturbed polynomial is quadratic and has only two roots r0^(1,2) = ±1 (for which we can develop regular perturbation series), while the perturbed problem has three roots.

Exercise: Show that the perturbation series for these two roots are

\[ r^{(1,2)} = \pm 1 - \tfrac{1}{2}\varepsilon \pm \tfrac{5}{8}\varepsilon^{2} + \cdots \]

The asymptotic form of the third root r^(3) is derived using dominant balance. We label the three terms of the polynomial (S1) as A, B and C. As ε → 0, at least two of these terms have to be of the same order in ε so they can balance each other. This is a consistent dominant balance if all the other terms are much smaller as ε → 0.

If we assume a dominant balance between B and C, we deduce that r² ~ 1, which yields the unperturbed roots. This balance is consistent, since if r ~ ±1, then A = O(ε) << 1.

If we assume a dominant balance between A and C, we deduce r ~ O(ε^(-1/3)), for which both A and C are O(1). This would imply term B = r² = O(ε^(-2/3)) >> A, C, so this dominant balance is inconsistent.

If we assume a dominant balance between A and B, we deduce r ~ -ε^(-1). In this case, A and B are O(ε^(-2)) while C is O(1) << A, B, so this balance is consistent. Hence

\[ r^{(3)} \approx r_0^{(3)} = -\frac{1}{\varepsilon}. \]

In most circumstances, the dominant balance in a singular perturbation problem comes from balancing the two highest-order terms, as we found here. An exception could occur if the coefficient of the second-highest-order term also depends on ε and goes to zero as ε → 0.

This leading-order approximation may already be all we need to know about the third root. But if desired we can find an asymptotic series

\[ r^{(3)} \sim -\frac{1}{\varepsilon}\left\{ 1 + r_1 \varepsilon^{q} + r_2 \varepsilon^{2q} + \cdots \right\}. \]

Substituting this form into (S1) and sorting powers of ε, we obtain:

\[ \varepsilon \left(r^{(3)}\right)^{3} = \varepsilon \left(-\frac{1}{\varepsilon}\right)^{3} \left\{ 1 + \varepsilon^{q}(3r_1) + \varepsilon^{2q}(3r_2 + 3r_1^{2}) + \cdots \right\} \]

\[ \left(r^{(3)}\right)^{2} = \left(-\frac{1}{\varepsilon}\right)^{2} \left\{ 1 + \varepsilon^{q}(2r_1) + \varepsilon^{2q}(2r_2 + r_1^{2}) + \cdots \right\} \]

\[ 0 = \varepsilon \left(r^{(3)}\right)^{3} + \left(r^{(3)}\right)^{2} - 1 = \frac{1}{\varepsilon^{2}} \left\{ -1 - \varepsilon^{q}(3r_1) - \varepsilon^{2q}(3r_2 + 3r_1^{2}) - \cdots + 1 + \varepsilon^{q}(2r_1) + \varepsilon^{2q}(2r_2 + r_1^{2}) + \cdots \right\} - 1 \]

\[ \phantom{0} = -\varepsilon^{q-2}\, r_1 - \varepsilon^{2q-2}\, (r_2 + 2r_1^{2}) - \cdots - 1. \]

To match powers of ε, we must have q = 2 and r1 = -1. The next power of ε to balance is

\[ O(\varepsilon^{2}): \quad 0 = -(r_2 + 2r_1^{2}) \;\Rightarrow\; r_2 = -2r_1^{2} = -2. \]

Hence

\[ r^{(3)} \sim -\frac{1}{\varepsilon}\left\{ 1 - \varepsilon^{2} - 2\varepsilon^{4} - \cdots \right\}. \]

Accuracy (using the Matlab roots function to find the exact root r_ex^(3); here r_{<=2}^(3) = -(1/ε)(1 - ε² - 2ε⁴) is the partial sum given above):

ε       r_{<=2}^(3)    r_ex^(3) - r_{<=2}^(3)    (r_ex^(3) - r_{<=2}^(3)) / ε^5
0.05    -19.95         -2.2x10^-6                -7.1
0.1     -9.90          -7.3x10^-5                -7.3
0.2     -4.78          -2.7x10^-3                -8.5
0.3     -2.98          -0.03                     -11.9

We see that the partial sum given above gives a relative accuracy of better than 1% even for ε = 0.3. It is accurate to O(ε^5) (the order of the next term in the expansion), which goes to zero faster than the last term retained in the series, which is O(ε^3). This is consistent with the definition of an asymptotic series.
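The notes quote Matlab's roots function for the exact third root; as a sketch (a substitution on our part, not part of the original notes), the same check can be done in Python with numpy.roots. The script below compares the exact third root of εr³ + r² - 1 with the two-term partial sum r_{<=2}^(3) = -(1/ε)(1 - ε² - 2ε⁴) and prints the scaled error, which should reproduce the table above up to rounding.

```python
# Numerical check of the asymptotic series for the third root of
#   p(r; eps) = eps*r**3 + r**2 - 1.
# The notes use Matlab's roots(); numpy.roots is used here as an assumed substitute.
import numpy as np

def exact_third_root(eps):
    """Return the real root of eps*r^3 + r^2 - 1 = 0 closest to -1/eps."""
    roots = np.roots([eps, 1.0, 0.0, -1.0])          # coefficients of eps*r^3 + r^2 + 0*r - 1
    real_roots = roots[np.abs(roots.imag) < 1e-10].real
    return real_roots[np.argmin(np.abs(real_roots - (-1.0 / eps)))]

def partial_sum(eps):
    """Two-term asymptotic partial sum r_{<=2}^(3) = -(1/eps)*(1 - eps^2 - 2*eps^4)."""
    return -(1.0 / eps) * (1.0 - eps**2 - 2.0 * eps**4)

print(f"{'eps':>5} {'r_<=2':>10} {'error':>12} {'error/eps^5':>12}")
for eps in (0.05, 0.1, 0.2, 0.3):
    r_approx = partial_sum(eps)
    err = exact_third_root(eps) - r_approx
    print(f"{eps:5.2f} {r_approx:10.4f} {err:12.2e} {err / eps**5:12.2f}")
```

The scaled error in the last column stays O(1) as ε shrinks, which is the numerical signature of the O(ε^5) accuracy claimed above.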
Realism and Relativism: A Perspective on Kenneth Burke, by Robert Lawrence Heath (Mercer University Press). Google Books excerpts for the search "burke semantic and poetic meaning":

Page vi: CHAPTER 4, THE DANCING OF AN ATTITUDE ... Semantic and Poetic Meaning 142; Symbols: More than a Name 145; Symbols in Social Action 148; Conclusion 156. CHAPTER 5 ...

Page 142: ... Burke's theory arises from the word-action-ethics connection. With this formula he traces equations and convolutions of symbolic action. He demonstrates how many aspects of human feelings and ... Semantic and Poetic Meaning.

Page 143: The semantic ideal is "to evolve a vocabulary that gives the name and address of every event in the universe." Such a goal, he ... Poetic meaning ...

Page 144: ... Poetic meaning, Burke argues, goes beyond literalness. For instance, the statement, "New York is in Iowa," is ... semantic meaning. It requires "nothing less than the filling-out, by concrete body, of the characterizations which ...

Page 145: ... poetic is so broad that the semantic, which is narrower, is a "special kind" of poetic (PLF, 152). This position is ... Burke proposes that both are figurative to some degree. Poetic terms include semantic meaning that contains ...

Page 240: ... meaning or an inadequate meaning - but in either case it would contain a meaning" (PLF, 143). Burke has studied the semantic ideal because he fears that whenever scientific or semantic truth becomes predominant in society or poetry there ...

Page 250: ... poetic development to divert the critic's emphasis from scientistic or semantic representation. The critical standard for evaluating a single character, such as Othello, Burke ... meaning in a dramatistic sense is not representative but ...

Page 263: ... form of symbolic action. INDEX. 56 Kenneth Burke, letter to Malcolm Cowley, 30 March 1934, Burke File, Pennsylvania State University. Semantic and poetic meaning, 142-45; League of American Writers, 20.

Page 267: ... semantic and poetic meaning, 142-45; League of American Writers, 20; Lentricchia, Frank, 36; Lewisohn, Ludwig, 6; Light, James, 6 ...

Page 269: Poetic impact. See Literary appeal; Poetic motive, 234-41; Poetic ... Semantic and poetic meaning, 142-45, 151, 239-40; Shakespeare, William, 58, 61, 124; Sinclair, Upton, 14; Smith ...

Pages displayed by permission of Mercer University Press.
P. VERGILI MARONIS AENEIDOS LIBER SECVNDVS Conticuere omnes intentique ora tenebant. Inde toro pater Aeneas sic orsus ab alto: Infandum, regina, iubes renovare dolorem, Troianas ut opes et lamentabile regnum eruerint Danai, quaeque ipse miserrima vidi 5 et quorum pars magna fui. quis talia fando Myrmidonum Dolopumve aut duri miles Ulixi temperet a lacrimis? et iam nox umida caelo praecipitat suadentque cadentia sidera somnos. sed si tantus amor casus cognoscere nostros 10 et breviter Troiae supremum audire laborem, quamquam animus meminisse horret luctuque refugit, incipiam. fracti bello fatisque repulsi ductores Danaum tot iam labentibus annis instar montis equum divina Palladis arte 15 aedificant, sectaque intexunt abiete costas; votum pro reditu simulant; ea fama vagatur. huc delecta virum sortiti corpora furtim includunt caeco lateri penitusque cavernas ingentis uterumque armato milite complent. 20 est in conspectu Tenedos, notissima fama insula, dives opum Priami dum regna manebant, nunc tantum sinus et statio male fida carinis: huc se provecti deserto in litore condunt; nos abiisse rati et vento petiisse Mycenas. 25 ergo omnis longo soluit se Teucria luctu; panduntur portae, iuvat ire et Dorica castra desertosque videre locos litusque relictum: hic Dolopum manus, hic saevus tendebat Achilles; classibus hic locus, hic acie certare solebant. 30 pars stupet innuptae donum exitiale Minervae et molem mirantur equi; primusque Thymoetes duci intra muros hortatur et arce locari, sive dolo seu iam Troiae sic fata ferebant. at Capys, et quorum melior sententia menti, 35 aut pelago Danaum insidias suspectaque dona praecipitare iubent subiectisque urere flammis, aut terebrare cavas uteri et temptare latebras. scinditur incertum studia in contraria vulgus. Primus ibi ante omnis magna comitante caterva 40 Laocoon ardens summa decurrit ab arce, et procul 'o miseri, quae tanta insania, cives? creditis avectos hostis? aut ulla putatis dona carere dolis Danaum? sic notus Ulixes? aut hoc inclusi ligno occultantur Achivi, 45 aut haec in nostros fabricata est machina muros, inspectura domos venturaque desuper urbi, aut aliquis latet error; equo ne credite, Teucri. quidquid id est, timeo Danaos et dona ferentis.' sic fatus ualidis ingentem viribus hastam 50 in latus inque feri curvam compagibus alvum contorsit. stetit illa tremens, uteroque recusso insonuere cavae gemitumque dedere cavernae. et, si fata deum, si mens non laeva fuisset, impulerat ferro Argolicas foedare latebras, 55 Troiaque nunc staret, Priamique arx alta maneres. Ecce, manus iuvenem interea post terga revinctum pastores magno ad regem clamore trahebant Dardanidae, qui se ignotum venientibus ultro, hoc ipsum ut strueret Troiamque aperiret Achivis, 60 obtulerat, fidens animi atque in utrumque paratus, seu versare dolos seu certae occumbere morti. undique visendi studio Troiana iuventus circumfusa ruit certantque inludere capto. accipe nunc Danaum insidias et crimine ab uno 65 disce omnis. namque ut conspectu in medio turbatus, inermis constitit atque oculis Phrygia agmina circumspexit, 'heu, quae nunc tellus,' inquit, 'quae me aequora possunt accipere? aut quid iam misero mihi denique restat, 70 cui neque apud Danaos usquam locus, et super ipsi Dardanidae infensi poenas cum sanguine poscunt?' quo gemitu conversi animi compressus et omnis impetus. hortamur fari quo sanguine cretus, quidve ferat; memoret quae sit fiducia capto. 75 'Cuncta equidem tibi, rex, fuerit quodcumque, fatebor 77 vera,' inquit; 'neque me Argolica de gente negabo. 
hoc primum; nec, si miserum Fortuna Sinonem finxit, vanum etiam mendacemque improba finget. 80 fando aliquod si forte tuas pervenit ad auris Belidae nomen Palamedis et incluta fama gloria, quem falsa sub proditione Pelasgi insontem infando indicio, quia bella vetabat, demisere neci, nunc cassum lumine lugent: 85 illi me comitem et consanguinitate propinquum pauper in arma pater primis huc misit ab annis. dum stabat regno incolumis regumque vigebat conciliis, et nos aliquod nomenque decusque gessimus. invidia postquam pellacis Ulixi 90 (haud ignota loquor) superis concessit ab oris, adflictus vitam in tenebris luctuque trahebam et casum insontis mecum indignabar amici. nec tacui demens et me, fors si qua tulisset, si patrios umquam remeassem victor ad Argos, 95 promisi ultorem et verbis odia aspera movi. hinc mihi prima mali labes, hinc semper Ulixes criminibus terrere novis, hinc spargere voces in vulgum ambiguas et quaerere conscius arma. nec requievit enim, donec Calchante ministro— 100 sed quid ego haec autem nequiquam ingrata revoluo, quidue moror? si omnis uno ordine habetis Achivos, idque audire sat est, iamdudum sumite poenas: hoc Ithacus velit et magno mercentur Atridae.' Tum vero ardemus scitari et quaerere causas, 105 ignari scelerum tantorum artisque Pelasgae. prosequitur pavitans et ficto pectore fatur: 'Saepe fugam Danai Troia cupiere relicta moliri et longo fessi discedere bello; fecissentque utinam! saepe illos aspera ponti 110 interclusit hiems et terruit Auster euntis. praecipue cum iam hic trabibus contextus acernis staret equus, toto sonuerunt aethere nimbi. suspensi Eurypylum scitatum oracula Phoebi mittimus, isque adytis haec tristia dicta reportat: 115 "sanguine placastis ventos et virgine caesa, cum primum Iliacas, Danai, venistis ad oras; sanguine quaerendi reditus animaque litandum Argolica." vulgi quae uox ut venit ad auris, obstipuere animi gelidusque per ima cucurrit 120 ossa tremor, cui fata parent, quem poscat Apollo. hic Ithacus vatem magno Calchanta tumultu protrahit in medios; quae sint ea numina divum flagitat. et mihi iam multi crudele canebant artificis scelus, et taciti ventura videbant. 125 bis quinos silet ille dies tectusque recusat prodere voce sua quemquam aut opponere morti. vix tandem, magnis Ithaci clamoribus actus, composito rumpit vocem et me destinat arae. adsensere omnes et, quae sibi quisque timebat, 130 unius in miseri exitium conversa tulere. iamque dies infanda aderat; mihi sacra parari et salsae fruges et circum tempora vittae. eripui, fateor, leto me et vincula rupi, limosoque lacu per noctem obscurus in ulva 135 delitui dum vela darent, si forte dedissent. nec mihi iam patriam antiquam spes ulla videndi nec dulcis natos exoptatumque parentem, quos illi fors et poenas ob nostra reposcent effugia, et culpam hanc miserorum morte piabunt. 140 quod te per superos et conscia numina veri, per si qua est quae restet adhuc mortalibus usquam intemerata fides, oro, miserere laborum tantorum, miserere animi non digna ferentis.' His lacrimis vitam damus et miserescimus ultro. 145 ipse viro primus manicas atque arta levari vincla iubet Priamus dictisque ita fatur amicis: 'quisquis es, amissos hinc iam obliviscere Graios (noster eris) mihique haec edissere vera roganti: quo molem hanc immanis equi statuere? quis auctor? 150 quidve petunt? quae religio? aut quae machina belli?' dixerat. 
ille dolis instructus et arte Pelasga sustulit exutas vinclis ad sidera palmas: 'vos, aeterni ignes, et non violabile vestrum testor numen,' ait, 'vos arae ensesque nefandi, 155 quos fugi, vittaeque deum, quas hostia gessi: fas mihi Graiorum sacrata resolvere iura, fas odisse viros atque omnia ferre sub auras, si qua tegunt, teneor patriae nec legibus ullis. tu modo promissis maneas servataque serves 160 Troia fidem, si vera feram, si magna rependam. omnis spes Danaum et coepti fiducia belli Palladis auxiliis semper stetit. impius ex quo Tydides sed enim scelerumque inventor Ulixes, fatale adgressi sacrato avellere templo 165 Palladium caesis summae custodibus arcis, corripuere sacram effigiem manibusque cruentis virgineas ausi divae contingere vittas, ex illo fluere ac retro sublapsa referri spes Danaum, fractae vires, aversa deae mens. 170 nec dubiis ea signa dedit Tritonia monstris. vix positum castris simulacrum: arsere coruscae luminibus flammae arrectis, salsusque per artus sudor iit, terque ipsa solo (mirabile dictu) emicuit parmamque ferens hastamque trementem. 175 extemplo temptanda fuga canit aequora Calchas, nec posse Argolicis exscindi Pergama telis omina ni repetant Argis numenque reducant quod pelago et curvis secum auexere carinis. et nunc quod patrias vento petiere Mycenas, 180 arma deosque parant comites pelagoque remenso improvisi aderunt; ita digerit omina Calchas. hanc pro Palladio moniti, pro numine laeso effigiem statuere, nefas quae triste piaret. hanc tamen immensam Calchas attollere molem 185 roboribus textis caeloque educere iussit, ne recipi portis aut duci in moenia posset, neu populum antiqua sub religione tueri. nam si vestra manus violasset dona Minervae, tum magnum exitium (quod di prius omen in ipsum 190 convertant!) Priami imperio Phrygibusque futurum; sin manibus vestris vestram ascendisset in urbem, ultro Asiam magno Pelopea ad moenia bello venturam, et nostros ea fata manere nepotes.' Talibus insidiis periurique arte Sinonis 195 credita res, captique dolis lacrimisque coactis quos neque Tydides nec Larisaeus Achilles, non anni domuere decem, non mille carinae. Hic aliud maius miseris multoque tremendum obicitur magis atque improvida pectora turbat. 200 Laocoon, ductus Neptuno sorte sacerdos, sollemnis taurum ingentem mactabat ad aras. ecce autem gemini a Tenedo tranquilla per alta (horresco referens) immensis orbibus angues incumbunt pelago pariterque ad litora tendunt; 205 pectora quorum inter fluctus arrecta iubaeque sanguineae superant undas, pars cetera pontum pone legit sinuatque immensa volumine terga. fit sonitus spumante salo; iamque arva tenebant ardentisque oculos suffecti sanguine et igni 210 sibila lambebant linguis vibrantibus ora. diffugimus visu exsangues. illi agmine certo Laocoonta petunt; et primum parva duorum corpora natorum serpens amplexus uterque implicat et miseros morsu depascitur artus; 215 post ipsum auxilio subeuntem ac tela ferentem corripiunt spirisque ligant ingentibus; et iam bis medium amplexi, bis collo squamea circum terga dati superant capite et cervicibus altis. ille simul manibus tendit divellere nodos 220 perfusus sanie vittas atroque veneno, clamores simul horrendos ad sidera tollit: qualis mugitus, fugit cum saucius aram taurus et incertam excussit cervice securim. at gemini lapsu delubra ad summa dracones 225 effugiunt saevaeque petunt Tritonidis arcem, sub pedibusque deae clipeique sub orbe teguntur. 
tum vero tremefacta novus per pectora cunctis insinuat pavor, et scelus expendisse merentem Laocoonta ferunt, sacrum qui cuspide robur 230 laeserit et tergo sceleratam intorserit hastam. ducendum ad sedes simulacrum orandaque divae numina conclamant. dividimus muros et moenia pandimus urbis. accingunt omnes operi pedibusque rotarum 235 subiciunt lapsus, et stuppea vincula collo intendunt; scandit fatalis machina muros feta armis. pueri circum innuptaeque puellae sacra canunt funemque manu contingere gaudent; illa subit mediaeque minans inlabitur urbi. 240 o patria, o divum domus Ilium et incluta bello moenia Dardanidum! quater ipso in limine portae substitit atque utero sonitum quater arma dedere; instamus tamen immemores caecique furore et monstrum infelix sacrata sistimus arce. 245 tunc etiam fatis aperit Cassandra futuris ora dei iussu non umquam credita Teucris. nos delubra deum miseri, quibus ultimus esset ille dies, festa velamus fronde per urbem. Vertitur interea caelum et ruit Oceano nox 250 involvens umbra magna terramque polumque Myrmidonumque dolos; fusi per moenia Teucri conticuere; sopor fessos complectitur artus. et iam Argiua phalanx instructis navibus ibat a Tenedo tacitae per amica silentia lunae 255 litora nota petens, flammas cum regia puppis extulerat, fatisque deum defensus iniquis inclusos utero Danaos et pinea furtim laxat claustra Sinon. illos patefactus ad auras reddit equus laetique cavo se robore promunt 260 Thessandrus Sthenelusque duces et dirus Ulixes, demissum lapsi per funem, Acamasque Thoasque Pelidesque Neoptolemus primusque Machaon et Menelaus et ipse doli fabricator Epeos. invadunt urbem somno vinoque sepultam; 265 caeduntur vigiles, portisque patentibus omnis accipiunt socios atque agmina conscia iungunt. Tempus erat quo prima quies mortalibus aegris incipit et dono divum gratissima serpit. in somnis, ecce, ante oculos maestissimus Hector 270 visus adesse mihi largosque effundere fletus, raptatus bigis ut quondam, aterque cruento pulvere perque pedes traiectus lora tumentis. ei mihi, qualis erat, quantum mutatus ab illo Hectore qui redit exuvias indutus Achilli 275 vel Danaum Phrygios iaculatus puppibus ignis! squalentem barbam et concretos sanguine crinis vulneraque illa gerens, quae circum plurima muros accepit patrios. ultro flens ipse videbar compellare virum et maestas expromere voces: 280 'o lux Dardaniae, spes o fidissima Teucrum, quae tantae tenuere morae? quibus Hector ab oris exspectate venis? ut te post multa tuorum funera, post varios hominumque urbisque labores defessi aspicimus! quae causa indigna serenos 285 foedavit vultus? aut cur haec vulnera cerno?' ille nihil, nec me quaerentem uana moratur, sed graviter gemitus imo de pectore ducens, 'heu fuge, nate dea, teque his' ait 'eripe flammis. hostis habet muros; ruit alto a culmine Troia. 290 sat patriae Priamoque datum: si Pergama dextra defendi possent, etiam hac defensa fuissent. sacra suosque tibi commendat Troia penatis; hos cape fatorum comites, his moenia quaere magna pererrato statues quae denique ponto.' 295 sic ait et manibus vittas Vestamque potentem aeternumque adytis effert penetralibus ignem. Diverso interea miscentur moenia luctu, et magis atque magis, quamquam secreta parentis Anchisae domus arboribusque obtecta recessit, 300 clarescunt sonitus armorumque ingruit horror. 
excutior somno et summi fastigia tecti ascensu supero atque arrectis auribus asto: in segetem veluti cum flamma furentibus Austris incidit, aut rapidus montano flumine torrens 305 sternit agros, sternit sata laeta boumque labores praecipitisque trahit silvas; stupet inscius alto accipiens sonitum saxi de vertice pastor. tum vero manifesta fides, Danaumque patescunt insidiae. iam Deiphobi dedit ampla ruinam 310 Volcano superante domus, iam proximus ardet Ucalegon; Sigea igni freta lata relucent. exoritur clamorque virum clangorque tubarum. arma amens capio; nec sat rationis in armis, sed glomerare manum bello et concurrere in arcem 315 cum sociis ardent animi; furor iraque mentem praecipitat, pulchrumque mori succurrit in armis. Ecce autem telis Panthus elapsus Achivum, Panthus Othryades, arcis Phoebique sacerdos, sacra manu victosque deos parvumque nepotem 320 ipse trahit cursuque amens ad limina tendit. 'quo res summa loco, Panthu? quam prendimus arcem?' vix ea fatus eram gemitu cum talia reddit: 'venit summa dies et ineluctabile tempus Dardaniae. fuimus Troes, fuit Ilium et ingens 325 gloria Teucrorum; ferus omnia Iuppiter Argos transtulit; incensa Danai dominantur in urbe. arduus armatos mediis in moenibus astans fundit equus victorque Sinon incendia miscet insultans. portis alii bipatentibus adsunt, 330 milia quot magnis umquam venere Mycenis; obsedere alii telis angusta viarum oppositis; stat ferri acies mucrone corusco stricta, parata neci; vix primi proelia temptant portarum vigiles et caeco Marte resistunt.' 335 talibus Othryadae dictis et numine divum in flammas et in arma feror, quo tristis Erinys, quo fremitus vocat et sublatus ad aethera clamor. addunt se socios Rhipeus et maximus armis Epytus, oblati per lunam, Hypanisque Dymasque 340 et lateri adglomerant nostro, iuvenisque Coroebus Mygdonides—illis ad Troiam forte diebus venerat insano Cassandrae incensus amore et gener auxilium Priamo Phrygibusque ferebat, infelix qui non sponsae praecepta furentis 345 audierit! quos ubi confertos ardere in proelia vidi, incipio super his: 'iuvenes, fortissima frustra pectora, si vobis audentem extrema cupido certa sequi, quae sit rebus fortuna videtis: 350 excessere omnes adytis arisque relictis di quibus imperium hoc steterat; succurritis urbi incensae. moriamur et in media arma ruamus. una salus victis nullam sperare salutem.' sic animis iuvenum furor additus. inde, lupi ceu 355 raptores atra in nebula, quos improba ventris exegit caecos rabies catulique relicti faucibus exspectant siccis, per tela, per hostis vadimus haud dubiam in mortem mediaeque tenemus urbis iter; nox atra cava circumvolat umbra. 360 quis cladem illius noctis, quis funera fando explicet aut possit lacrimis aequare labores? urbs antiqua ruit multos dominata per annos; plurima perque vias sternuntur inertia passim corpora perque domos et religiosa deorum 365 limina. nec soli poenas dant sanguine Teucri; quondam etiam victis redit in praecordia virtus uictoresque cadunt Danai. crudelis ubique luctus, ubique pavor et plurima mortis imago. Primus se Danaum magna comitante caterva 370 Androgeos offert nobis, socia agmina credens inscius, atque ultro verbis compellat amicis: 'festinate, viri! nam quae tam sera moratur segnities? alii rapiunt incensa feruntque Pergama: vos celsis nunc primum a navibus itis?' 375 dixit, et extemplo (neque enim responsa dabantur fida satis) sensit medios delapsus in hostis. obstipuit retroque pedem cum voce repressit. 
improvisum aspris veluti qui sentibus anguem pressit humi nitens trepidusque repente refugit 380 attollentem iras et caerula colla tumentem, haud secus Androgeos visu tremefactus abibat. inruimus densis et circumfundimur armis, ignarosque loci passim et formidine captos sternimus; aspirat primo Fortuna labori. 385 atque hic successu exsultans animisque Coroebus 'o socii, qua prima' inquit 'Fortuna salutis monstrat iter, quaque ostendit se dextra, sequamur: mutemus clipeos Danaumque insignia nobis aptemus. dolus an virtus, quis in hoste requirat? 390 arma dabunt ipsi.' sic fatus deinde comantem Androgei galeam clipeique insigne decorum induitur laterique Argiuum accommodat ensem. hoc Rhipeus, hoc ipse Dymas omnisque iuventus laeta facit: spoliis se quisque recentibus armat. 395 vadimus immixti Danais haud numine nostro multaque per caecam congressi proelia noctem conserimus, multos Danaum demittimus Orco. diffugiunt alii ad navis et litora cursu fida petunt; pars ingentem formidine turpi 400 scandunt rursus equum et nota conduntur in alvo. Heu nihil inuitis fas quemquam fidere divis! ecce trahebatur passis Priameia virgo crinibus a templo Cassandra adytisque Minervae ad caelum tendens ardentia lumina frustra, 405 lumina, nam teneras arcebant vincula palmas. non tulit hanc speciem furiata mente Coroebus et sese medium iniecit periturus in agmen; consequimur cuncti et densis incurrimus armis. hic primum ex alto delubri culmine telis 410 nostrorum obruimur oriturque miserrima caedes armorum facie et Graiarum errore iubarum. tum Danai gemitu atque ereptae virginis ira undique collecti invadunt, acerrimus Aiax et gemini Atridae Dolopumque exercitus omnis: 415 adversi rupto ceu quondam turbine venti confligunt, Zephyrusque Notusque et laetus Eois Eurus equis; stridunt silvae saevitque tridenti spumeus atque imo Nereus ciet aequora fundo. illi etiam, si quos obscura nocte per umbram 420 fudimus insidiis totaque agitavimus urbe, apparent; primi clipeos mentitaque tela agnoscunt atque ora sono discordia signant. ilicet obruimur numero, primusque Coroebus Penelei dextra divae armipotentis ad aram 425 procumbit; cadit et Rhipeus, iustissimus unus qui fuit in Teucris et servantissimus aequi (dis aliter visum); pereunt Hypanisque Dymasque confixi a sociis; nec te tua plurima, Panthu, labentem pietas nec Apollinis infula texit. 430 Iliaci cineres et flamma extrema meorum, testor, in occasu vestro nec tela nec ullas vitavisse vices Danaum et, si fata fuissent ut caderem, meruisse manu. divellimur inde, Iphitus et Pelias mecum (quorum Iphitus aevo 435 iam gravior, Pelias et vulnere tardus Ulixi), protinus ad sedes Priami clamore vocati. hic vero ingentem pugnam, ceu cetera nusquam bella forent, nulli tota morerentur in urbe, sic Martem indomitum Danaosque ad tecta ruentis 440 cernimus obsessumque acta testudine limen. haerent parietibus scalae postisque sub ipsos nituntur gradibus clipeosque ad tela sinistris protecti obiciunt, prensant fastigia dextris. Dardanidae contra turris ac tota domorum 445 culmina convellunt; his se, quando ultima cernunt, extrema iam in morte parant defendere telis, auratasque trabes, veterum decora alta parentum, devolvunt; alii strictis mucronibus imas obsedere fores, has servant agmine denso. 450 instaurati animi regis succurrere tectis auxilioque levare viros uimque addere victis. 
Limen erat caecaeque fores et pervius usus tectorum inter se Priami, postesque relicti a tergo, infelix qua se, dum regna manebant, 455 saepius Andromache ferre incomitata solebat ad soceros et auo puerum Astyanacta trahebat. evado ad summi fastigia culminis, unde tela manu miseri iactabant inrita Teucri. turrim in praecipiti stantem summisque sub astra 460 eductam tectis, unde omnis Troia videri et Danaum solitae naves et Achaica castra, adgressi ferro circum, qua summa labantis iuncturas tabulata dabant, convellimus altis sedibus impulimusque; ea lapsa repente ruinam 465 cum sonitu trahit et Danaum super agmina late incidit. ast alii subeunt, nec saxa nec ullum telorum interea cessat genus. Vestibulum ante ipsum primoque in limine Pyrrhus exsultat telis et luce coruscus aena: 470 qualis ubi in lucem coluber mala gramina pastus, frigida sub terra tumidum quem bruma tegebat, nunc, positis novus exuviis nitidusque iuventa, lubrica convoluit sublato pectore terga arduus ad solem, et linguis micat ore trisulcis. 475 una ingens Periphas et equorum agitator Achillis, armiger Automedon, una omnis Scyria pubes succedunt tecto et flammas ad culmina iactant. ipse inter primos correpta dura bipenni limina perrumpit postisque a cardine vellit 480 aeratos; iamque excisa trabe firma cavavit robora et ingentem lato dedit ore fenestram. apparet domus intus et atria longa patescunt; apparent Priami et veterum penetralia regum, armatosque vident stantis in limine primo. 485 at domus interior gemitu miseroque tumultu miscetur, penitusque cavae plangoribus aedes femineis ululant; ferit aurea sidera clamor. tum pavidae tectis matres ingentibus errant amplexaeque tenent postis atque oscula figunt. 490 instat vi patria Pyrrhus; nec claustra nec ipsi custodes sufferre valent; labat ariete crebro ianua, et emoti procumbunt cardine postes. fit via vi; rumpunt aditus primosque trucidant immissi Danai et late loca milite complent. 495 non sic, aggeribus ruptis cum spumeus amnis exiit oppositasque evicit gurgite moles, fertur in arva furens cumulo camposque per omnis cum stabulis armenta trahit. vidi ipse furentem caede Neoptolemum geminosque in limine Atridas, 500 vidi Hecubam centumque nurus Priamumque per aras sanguine foedantem quos ipse sacraverat ignis. quinquaginta illi thalami, spes tanta nepotum, barbarico postes auro spoliisque superbi procubuere; tenent Danai qua deficit ignis. 505 Forsitan et Priami fuerint quae fata requiras. urbis uti captae casum convulsaque vidit limina tectorum et medium in penetralibus hostem, arma diu senior desueta trementibus aevo circumdat nequiquam umeris et inutile ferrum 510 cingitur, ac densos fertur moriturus in hostis. aedibus in mediis nudoque sub aetheris axe ingens ara fuit iuxtaque veterrima laurus incumbens arae atque umbra complexa penatis. hic Hecuba et natae nequiquam altaria circum, 515 praecipites atra ceu tempestate columbae, condensae et divum amplexae simulacra sedebant. ipsum autem sumptis Priamum iuvenalibus armis ut vidit, 'quae mens tam dira, miserrime coniunx, impulit his cingi telis? aut quo ruis?' inquit. 520 'non tali auxilio nec defensoribus istis tempus eget; non, si ipse meus nunc adforet Hector. huc tandem concede; haec ara tuebitur omnis, aut moriere simul.' sic ore effata recepit ad sese et sacra longaeuum in sede locavit. 525 Ecce autem elapsus Pyrrhi de caede Polites, unus natorum Priami, per tela, per hostis porticibus longis fugit et vacua atria lustrat saucius. 
illum ardens infesto vulnere Pyrrhus insequitur, iam iamque manu tenet et premit hasta. 530 ut tandem ante oculos evasit et ora parentum, concidit ac multo vitam cum sanguine fudit. hic Priamus, quamquam in media iam morte tenetur, non tamen abstinuit nec voci iraeque pepercit: 'at tibi pro scelere,' exclamat, 'pro talibus ausis 535 di, si qua est caelo pietas quae talia curet, persolvant grates dignas et praemia reddant debita, qui nati coram me cernere letum fecisti et patrios foedasti funere vultus. at non ille, satum quo te mentiris, Achilles 540 talis in hoste fuit Priamo; sed iura fidemque supplicis erubuit corpusque exsangue sepulcro reddidit Hectoreum meque in mea regna remisit.' sic fatus senior telumque imbelle sine ictu coniecit, rauco quod protinus aere repulsum, 545 et summo clipei nequiquam umbone pependit. cui Pyrrhus: 'referes ergo haec et nuntius ibis Pelidae genitori. illi mea tristia facta degeneremque Neoptolemum narrare memento. nunc morere.' hoc dicens altaria ad ipsa trementem 550 traxit et in multo lapsantem sanguine nati, implicuitque comam laeva, dextraque coruscum extulit ac lateri capulo tenus abdidit ensem. haec finis Priami fatorum, hic exitus illum sorte tulit Troiam incensam et prolapsa videntem 555 Pergama, tot quondam populis terrisque superbum regnatorem Asiae. iacet ingens litore truncus, avulsumque umeris caput et sine nomine corpus. At me tum primum saevus circumstetit horror. obstipui; subiit cari genitoris imago, 560 ut regem aequaeuum crudeli vulnere vidi vitam exhalantem, subiit deserta Creusa et direpta domus et parvi casus Iuli. respicio et quae sit me circum copia lustro. deseruere omnes defessi, et corpora saltu 565 ad terram misere aut ignibus aegra dedere. [Iamque adeo super unus eram, cum limina Vestae servantem et tacitam secreta in sede latentem Tyndarida aspicio; dant claram incendia lucem erranti passimque oculos per cuncta ferenti. 570 illa sibi infestos eversa ob Pergama Teucros et Danaum poenam et deserti coniugis iras praemetuens, Troiae et patriae communis Erinys, abdiderat sese atque aris invisa sedebat. exarsere ignes animo; subit ira cadentem 575 ulcisci patriam et sceleratas sumere poenas. 'scilicet haec Spartam incolumis patriasque Mycenas aspiciet, partoque ibit regina triumpho? coniugiumque domumque patris natosque videbit Iliadum turba et Phrygiis comitata ministris? 580 occiderit ferro Priamus? Troia arserit igni? Dardanium totiens sudarit sanguine litus? non ita. namque etsi nullum memorabile nomen feminea in poena est, habet haec victoria laudem; exstinxisse nefas tamen et sumpsisse merentis 585 laudabor poenas, animumque explesse iuvabit ultricis flammae et cineres satiasse meorum.' talia iactabam et furiata mente ferebar,] cum mihi se, non ante oculis tam clara, videndam obtulit et pura per noctem in luce refulsit 590 alma parens, confessa deam qualisque videri caelicolis et quanta solet, dextraque prehensum continuit roseoque haec insuper addidit ore: 'nate, quis indomitas tantus dolor excitat iras? quid furis? aut quonam nostri tibi cura recessit? 595 non prius aspicies ubi fessum aetate parentem liqueris Anchisen, superet coniunxne Creusa Ascaniusque puer? quos omnis undique Graiae circum errant acies et, ni mea cura resistat, iam flammae tulerint inimicus et hauserit ensis. 600 non tibi Tyndaridis facies invisa Lacaenae culpatusue Paris, divum inclementia, divum has evertit opes sternitque a culmine Troiam. 
aspice (namque omnem, quae nunc obducta tuenti mortalis hebetat visus tibi et umida circum 605 caligat, nubem eripiam; tu ne qua parentis iussa time neu praeceptis parere recusa): hic, ubi disiectas moles avulsaque saxis saxa vides, mixtoque undantem pulvere fumum, Neptunus muros magnoque emota tridenti 610 fundamenta quatit totamque a sedibus urbem eruit. hic Iuno Scaeas saevissima portas prima tenet sociumque furens a navibus agmen ferro accincta vocat. iam summas arces Tritonia, respice, Pallas 615 insedit nimbo effulgens et Gorgone saeva. ipse pater Danais animos virisque secundas sufficit, ipse deos in Dardana suscitat arma. eripe, nate, fugam finemque impone labori; nusquam abero et tutum patrio te limine sistam.' 620 dixerat et spissis noctis se condidit umbris. apparent dirae facies inimicaque Troiae numina magna deum. Tum vero omne mihi visum considere in ignis Ilium et ex imo verti Neptunia Troia: 625 ac veluti summis antiquam in montibus ornum cum ferro accisam crebrisque bipennibus instant eruere agricolae certatim, illa usque minatur et tremefacta comam concusso vertice nutat, vulneribus donec paulatim evicta supremum 630 congemuit traxitque iugis avulsa ruinam. descendo ac ducente deo flammam inter et hostis expedior: dant tela locum flammaeque recedunt. Atque ubi iam patriae perventum ad limina sedis antiquasque domos, genitor, quem tollere in altos 635 optabam primum montis primumque petebam, abnegat excisa vitam producere Troia exsiliumque pati. 'vos o, quibus integer aevi sanguis,' ait, 'solidaeque suo stant robore vires, vos agitate fugam. 640 me si caelicolae voluissent ducere vitam, has mihi servassent sedes. satis una superque vidimus excidia et captae superavimus urbi. sic o sic positum adfati discedite corpus. ipse manu mortem inveniam; miserebitur hostis 645 exuviasque petet. facilis iactura sepulcri. iam pridem invisus divis et inutilis annos demoror, ex quo me divum pater atque hominum rex fulminis adflavit ventis et contigit igni.' Talia perstabat memorans fixusque manebat. 650 nos contra effusi lacrimis coniunxque Creusa Ascaniusque omnisque domus, ne vertere secum cuncta pater fatoque urgenti incumbere vellet. abnegat inceptoque et sedibus haeret in isdem. rursus in arma feror mortemque miserrimus opto. 655 nam quod consilium aut quae iam fortuna dabatur? 'mene efferre pedem, genitor, te posse relicto sperasti tantumque nefas patrio excidit ore? si nihil ex tanta superis placet urbe relinqui, et sedet hoc animo perituraeque addere Troiae 660 teque tuosque iuvat, patet isti ianua leto, iamque aderit multo Priami de sanguine Pyrrhus, natum ante ora patris, patrem qui obtruncat ad aras. hoc erat, alma parens, quod me per tela, per ignis eripis, ut mediis hostem in penetralibus utque 665 Ascanium patremque meum iuxtaque Creusam alterum in alterius mactatos sanguine cernam? arma, viri, ferte arma; vocat lux ultima victos. reddite me Danais; sinite instaurata revisam proelia. numquam omnes hodie moriemur inulti.' 670 Hinc ferro accingor rursus clipeoque sinistram insertabam aptans meque extra tecta ferebam. ecce autem complexa pedes in limine coniunx haerebat, parvumque patri tendebat Iulum: 'si periturus abis, et nos rape in omnia tecum; 675 sin aliquam expertus sumptis spem ponis in armis, hanc primum tutare domum. cui parvus Iulus, cui pater et coniunx quondam tua dicta relinquor?' Talia vociferans gemitu tectum omne replebat, cum subitum dictuque oritur mirabile monstrum. 
680 namque manus inter maestorumque ora parentum ecce levis summo de vertice visus Iuli fundere lumen apex, tactuque innoxia mollis lambere flamma comas et circum tempora pasci. nos pavidi trepidare metu crinemque flagrantem 685 excutere et sanctos restinguere fontibus ignis. at pater Anchises oculos ad sidera laetus extulit et caelo palmas cum voce tetendit: 'Iuppiter omnipotens, precibus si flecteris ullis, aspice nos, hoc tantum, et si pietate meremur, 690 da deinde auxilium, pater, atque haec omina firma.' Vix ea fatus erat senior, subitoque fragore intonuit laevum, et de caelo lapsa per umbras stella facem ducens multa cum luce cucurrit. illam summa super labentem culmina tecti 695 cernimus Idaea claram se condere silva signantemque vias; tum longo limite sulcus dat lucem et late circum loca sulphure fumant. hic vero victus genitor se tollit ad auras adfaturque deos et sanctum sidus adorat. 700 'iam iam nulla mora est; sequor et qua ducitis adsum, di patrii; servate domum, servate nepotem. vestrum hoc augurium, vestroque in numine Troia est. cedo equidem nec, nate, tibi comes ire recuso.' dixerat ille, et iam per moenia clarior ignis 705 auditur, propiusque aestus incendia volvunt. 'ergo age, care pater, cervici imponere nostrae; ipse subibo umeris nec me labor iste gravabit; quo res cumque cadent, unum et commune periclum, una salus ambobus erit. mihi parvus Iulus 710 sit comes, et longe servet vestigia coniunx. vos, famuli, quae dicam animis advertite vestris. est urbe egressis tumulus templumque vetustum desertae Cereris, iuxtaque antiqua cupressus religione patrum multos servata per annos; 715 hanc ex diverso sedem veniemus in unam. tu, genitor, cape sacra manu patriosque penatis; me bello e tanto digressum et caede recenti attrectare nefas, donec me flumine vivo abluero.' 720 haec fatus latos umeros subiectaque colla veste super fulvique insternor pelle leonis, succedoque oneri; dextrae se parvus Iulus implicuit sequiturque patrem non passibus aequis; pone subit coniunx. ferimur per opaca locorum, 725 et me, quem dudum non ulla iniecta movebant tela neque adverso glomerati examine Grai, nunc omnes terrent aurae, sonus excitat omnis suspensum et pariter comitique onerique timentem. iamque propinquabam portis omnemque videbar 730 evasisse viam, subito cum creber ad auris visus adesse pedum sonitus, genitorque per umbram prospiciens 'nate,' exclamat, 'fuge, nate; propinquant. ardentis clipeos atque aera micantia cerno.' hic mihi nescio quod trepido male numen amicum 735 confusam eripuit mentem. namque avia cursu dum sequor et nota excedo regione viarum, heu misero coniunx fatone erepta Creusa substitit, erravitne via seu lapsa resedit, incertum; nec post oculis est reddita nostris. 740 nec prius amissam respexi animumue reflexi quam tumulum antiquae Cereris sedemque sacratam venimus: hic demum collectis omnibus una defuit, et comites natumque virumque fefellit. quem non incusavi amens hominumque deorumque, 745 aut quid in eversa vidi crudelius urbe? Ascanium Anchisenque patrem Teucrosque penatis commendo sociis et curva valle recondo; ipse urbem repeto et cingor fulgentibus armis. stat casus renovare omnis omnemque reverti 750 per Troiam et rursus caput obiectare periclis. principio muros obscuraque limina portae, qua gressum extuleram, repeto et vestigia retro observata sequor per noctem et lumine lustro: horror ubique animo, simul ipsa silentia terrent. 755 inde domum, si forte pedem, si forte tulisset, me refero: inruerant Danai et tectum omne tenebant. 
ilicet ignis edax summa ad fastigia vento voluitur; exsuperant flammae, furit aestus ad auras. procedo et Priami sedes arcemque reviso: 760 et iam porticibus vacuis Iunonis asylo custodes lecti Phoenix et dirus Ulixes praedam adservabant. huc undique Troia gaza incensis erepta adytis, mensaeque deorum crateresque auro solidi, captivaque vestis 765 congeritur. pueri et pavidae longo ordine matres stant circum. ausus quin etiam voces iactare per umbram implevi clamore vias, maestusque Creusam nequiquam ingeminans iterumque iterumque vocavi. 770 quaerenti et tectis urbis sine fine ruenti infelix simulacrum atque ipsius umbra Creusae visa mihi ante oculos et nota maior imago. obstipui, steteruntque comae et uox faucibus haesit. tum sic adfari et curas his demere dictis: 775 'quid tantum insano iuvat indulgere dolori, o dulcis coniunx? non haec sine numine divum eveniunt; nec te comitem hinc portare Creusam fas, aut ille sinit superi regnator Olympi. longa tibi exsilia et vastum maris aequor arandum, 780 et terram Hesperiam venies, ubi Lydius arva inter opima virum leni fluit agmine Thybris. illic res laetae regnumque et regia coniunx parta tibi; lacrimas dilectae pelle Creusae. non ego Myrmidonum sedes Dolopumue superbas 785 aspiciam aut Grais servitum matribus ibo, Dardanis et divae Veneris nurus; sed me magna deum genetrix his detinet oris. iamque vale et nati serva communis amorem.' haec ubi dicta dedit, lacrimantem et multa volentem 790 dicere deseruit, tenuisque recessit in auras. ter conatus ibi collo dare bracchia circum; ter frustra comprensa manus effugit imago, par levibus ventis volucrique simillima somno. sic demum socios consumpta nocte reviso. 795 Atque hic ingentem comitum adfluxisse novorum invenio admirans numerum, matresque virosque, collectam exsilio pubem, miserabile vulgus. undique convenere animis opibusque parati in quascumque velim pelago deducere terras. 800 iamque iugis summae surgebat Lucifer Idae ducebatque diem, Danaique obsessa tenebant limina portarum, nec spes opis ulla dabatur. cessi et sublato montis genitore petivi.
Published Time: Sun, 29 Jun 2025 22:54:52 GMT Fixed-point arithmetic - Wikipedia
===============

From Wikipedia, the free encyclopedia. Computer format for representing real numbers. This article is about fixed-precision fractions. For the invariant points of a function, see Fixed point (mathematics).

In computing, fixed-point is a method of representing fractional (non-integer) numbers by storing a fixed number of digits of their fractional part. Dollar amounts, for example, are often stored with exactly two fractional digits, representing the cents (1/100 of dollar). More generally, the term may refer to representing fractional values as integer multiples of some fixed small unit, e.g. a fractional amount of hours as an integer multiple of ten-minute intervals. Fixed-point number representation is often contrasted to the more complicated and computationally demanding floating-point representation.

In the fixed-point representation, the fraction is often expressed in the same number base as the integer part, but using negative powers of the base b. The most common variants are decimal (base 10) and binary (base 2). The latter is commonly known also as binary scaling. Thus, if n fraction digits are stored, the value will always be an integer multiple of b−n. Fixed-point representation can also be used to omit the low-order digits of integer values, e.g. when representing large dollar values as multiples of $1000.
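As a quick illustration of this idea, here is a minimal Go sketch, not from the article, of the dollars-and-cents case: the amount is held as an integer count of cents, i.e. an integer implicitly multiplied by the scaling factor 1/100. The type name Cents and the formatting are illustrative assumptions.

```
package main

import "fmt"

// Cents stores a dollar amount as an integer number of cents,
// i.e. a fixed-point value with scaling factor 1/100.
type Cents int64

func (c Cents) String() string {
	sign := ""
	if c < 0 {
		sign, c = "-", -c
	}
	return fmt.Sprintf("%s$%d.%02d", sign, c/100, c%100)
}

func main() {
	price := Cents(1999) // $19.99 stored exactly as the integer 1999
	tax := Cents(165)    // $1.65
	fmt.Println(price + tax) // $21.64 -- plain integer addition, no rounding error
}
```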
When decimal fixed-point numbers are displayed for human reading, the fraction digits are usually separated from those of the integer part by a radix character (usually '.' in English, but ',' or some other symbol in many other languages). Internally, however, there is no separation, and the distinction between the two groups of digits is defined only by the programs that handle such numbers.

Fixed-point representation was the norm in mechanical calculators. Since most modern processors have a fast floating-point unit (FPU), fixed-point representations in processor-based implementations are now used only in special situations, such as in low-cost embedded microprocessors and microcontrollers; in applications that demand high speed or low power consumption or small chip area, like image, video, and digital signal processing; or when their use is more natural for the problem. Examples of the latter are accounting of dollar amounts, when fractions of cents must be rounded to whole cents in strictly prescribed ways; and the evaluation of functions by table lookup, or any application where rational numbers need to be represented without rounding errors (which fixed-point does but floating-point cannot). Fixed-point representation is still the norm for field-programmable gate array (FPGA) implementations, as floating-point support in an FPGA requires significantly more resources than fixed-point support.

Representation

Fixed-point representation with scaling 1/100

| Value represented | Internal representation |
| --- | --- |
| 0.00 | 0 |
| 0.5 | 50 |
| 0.99 | 99 |
| 2 | 200 |
| −14.1 | −1410 |
| 314.160 | 31416 |

A fixed-point representation of a fractional number is essentially an integer that is to be implicitly multiplied by a fixed scaling factor. For example, the value 1.23 can be stored in a variable as the integer value 1230 with implicit scaling factor of 1/1000 (meaning that the last 3 decimal digits are implicitly assumed to be a decimal fraction), and the value 1 230 000 can be represented as 1230 with an implicit scaling factor of 1000 (with "minus 3" implied decimal fraction digits, that is, with 3 implicit zero digits at right). This representation allows standard integer arithmetic logic units to perform rational number calculations.

Negative values are usually represented in binary fixed-point format as a signed integer in two's complement representation with an implicit scaling factor as above. The sign of the value will always be indicated by the first stored bit (1 = negative, 0 = non-negative), even if the number of fraction bits is greater than or equal to the total number of bits. For example, the 8-bit signed binary integer (11110101)₂ = −11, taken with −3, +5, and +12 implied fraction bits, would represent the values −11/2^−3 = −88, −11/2^5 = −0.343 75, and −11/2^12 = −0.002 685 546 875, respectively.

Alternatively, negative values can be represented by an integer in the sign-magnitude format, in which case the sign is never included in the number of implied fraction bits. This variant is more commonly used in decimal fixed-point arithmetic. Thus the signed 5-digit decimal integer (−00025)₁₀, taken with −3, +5, and +12 implied decimal fraction digits, would represent the values −25/10^−3 = −25000, −25/10^5 = −0.00025, and −25/10^12 = −0.000 000 000 025, respectively.

A program will usually assume that all fixed-point values that will be stored into a given variable, or will be produced by a given instruction, will have the same scaling factor.
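To make the implied-fraction-bits interpretation concrete, here is a small Go sketch, not from the article, that reproduces the worked numbers above. The function name value and the use of float64 for display are my own choices.

```
package main

import (
	"fmt"
	"math"
)

// value interprets a stored two's-complement integer as a binary fixed-point
// number with the given count of implied fraction bits (possibly negative,
// meaning implicit zero digits on the right).
func value(raw int8, fractionBits int) float64 {
	return float64(raw) / math.Pow(2, float64(fractionBits))
}

func main() {
	raw := int8(-11)            // the bit pattern 11110101 from the example above
	fmt.Println(value(raw, -3)) // -88
	fmt.Println(value(raw, 5))  // -0.34375
	fmt.Println(value(raw, 12)) // -0.002685546875
}
```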
This parameter can usually be chosen by the programmer depending on the precision needed and range of values to be stored. The scaling factor of a variable or formula may not appear explicitly in the program. Good programming practice then requires that it be provided in the documentation, at least as a comment in the source code. Choice of scaling factors [edit] For greater efficiency, scaling factors are often chosen to be powers (positive or negative) of the base b used to represent the integers internally. However, often the best scaling factor is dictated by the application. Thus one often uses scaling factors that are powers of 10 (e.g. 1/100 for dollar values), for human convenience, even when the integers are represented internally in binary. Decimal scaling factors also mesh well with the metric (SI) system, since the choice of the fixed-point scaling factor is often equivalent to the choice of a unit of measure (like centimeters or microns instead of meters). However, other scaling factors may be used occasionally, e.g. a fractional amount of hours may be represented as an integer number of seconds; that is, as a fixed-point number with scale factor of 1/3600. Even with the most careful rounding, fixed-point values represented with a scaling factor S may have an error of up to ±0.5 in the stored integer, that is, ±0.5 S in the value. Therefore, smaller scaling factors generally produce more accurate results. On the other hand, a smaller scaling factor means a smaller range of the values that can be stored in a given program variable. The maximum fixed-point value that can be stored into a variable is the largest integer value that can be stored into it, multiplied by the scaling factor; and similarly for the minimum value. For example, the table below gives the implied scaling factor S, the minimum and maximum representable values V min and V max, and the accuracy δ = S/2 of values that could be represented in 16-bit signed binary fixed point format, depending on the number f of implied fraction bits. Parameters of some 16-bit signed binary fixed-point formats | f | S | δ | V min | V max | | --- | --- | --- | --- | --- | | −3 | 1/2−3 = 8 | 4 | −262 144 | +262 136 | | 0 | 1/2 0 = 1 | 0.5 | −32 768 | +32 767 | | 5 | 1/2 5 = 1/32 | < 0.016 | −1024.000 00 | +1023.968 75 | | 14 | 1/2 14 = 1/16 384 | < 0.000 031 | −2.000 000 000 000 00 | +1.999 938 964 843 75 | | 15 | 1/2 15 = 1/32 768 | < 0.000 016 | −1.000 000 000 000 000 | +0.999 969 482 421 875 | | 16 | 1/2 16 = 1/65 536 | < 0.000 008 | −0.500 000 000 000 000 0 | +0.499 984 741 210 937 5 | | 20 | 1/2 20 = 1/1 048 576 | < 0.000 000 5 | −0.031 250 000 000 000 000 00 | +0.031 249 046 325 683 593 75 | Fixed-point formats with scaling factors of the form 2 n−1 (namely 1, 3, 7, 15, 31, etc.) have been said to be appropriate for image processing and other digital signal processing tasks. They are supposed to provide more consistent conversions between fixed- and floating-point values than the usual 2 n scaling. The Julia programming language implements both versions. Exact values [edit] Any binary fraction a/2 m, such as 1/16 or 17/32, can be exactly represented in fixed-point, with a power-of-two scaling factor 1/2 n with any n ≥ m. However, most decimal fractions like 0.1 or 0.123 are infinite repeating fractions in base 2. and hence cannot be represented that way. Similarly, any decimal fraction a/10 m, such as 1/100 or 37/1000, can be exactly represented in fixed point with a power-of-ten scaling factor 1/10 n with any n ≥ m. 
This decimal format can also represent any binary fraction a/2 m, such as 1/8 (0.125) or 17/32 (0.53125). More generally, a rational numbera/b, with a and brelatively prime and b positive, can be exactly represented in binary fixed point only if b is a power of 2; and in decimal fixed point only if b has no prime factors other than 2 and/or 5. Comparison with floating-point [edit] Fixed-point computations can be faster and/or use less hardware than floating-point ones. If the range of the values to be represented is known in advance and is sufficiently limited, fixed point can make better use of the available bits. For example, if 32 bits are available to represent a number between 0 and 1, a fixed-point representation can have error less than 1.2 × 10−10, whereas the standard floating-point representation may have error up to 596 × 10−10 — because 9 of the bits are wasted with the sign and exponent of the dynamic scaling factor. Specifically, comparing 32-bit fixed-point to floating-point audio, a recording requiring less than 40 dB of headroom has a higher signal-to-noise ratio using 32-bit fixed. Programs using fixed-point computations are usually more portable than those using floating-point since they do not depend on the availability of an FPU. This advantage was particularly strong before the IEEE Floating Point Standard was widely adopted when floating-point computations with the same data would yield different results depending on the manufacturer, and often on the computer model. Many embedded processors lack an FPU, because integer arithmetic units require substantially fewer logic gates and consume much smaller chip area than an FPU; and software emulation of floating-point on low-speed devices would be too slow for most applications. CPU chips for the earlier personal computers and game consoles, like the Intel 386 and 486SX, also lacked an FPU. The absolute resolution (difference between successive values) of any fixed-point format is constant over the whole range, namely the scaling factor S. In contrast, the relative resolution of a floating-point format is approximately constant over their whole range, varying within a factor of the base b; whereas their absolute resolution varies by many orders of magnitude, like the values themselves. In many cases, the rounding and truncation errors of fixed-point computations are easier to analyze than those of the equivalent floating-point computations. Applying linearization techniques to truncation, such as dithering and/or noise shaping is more straightforward within fixed-point arithmetic. On the other hand, the use of fixed point requires greater care by the programmer. Avoidance of overflow requires much tighter estimates for the ranges of variables and all intermediate values in the computation, and often also extra code to adjust their scaling factors. Fixed-point programming normally requires the use of integer types of different widths. Fixed-point applications can make use of block floating point, which is a fixed-point environment having each array (block) of fixed-point data be scaled with a common exponent in a single word. Applications [edit] This section does not cite any sources. Please help improve this section by adding citations to reliable sources. Unsourced material may be challenged and removed.(May 2023) (Learn how and when to remove this message) A common use of decimal fixed-point is for storing monetary values, for which the complicated rounding rules of floating-point numbers are often a liability. 
For example, the open-source money management application GnuCash, written in C, switched from floating-point to fixed-point as of version 1.6, for this reason. Binary fixed-point (binary scaling) was widely used from the late 1960s to the 1980s for real-time computing that was mathematically intensive, such as flight simulation and in nuclear power plant control algorithms. It is still used in many DSP applications and custom-made microprocessors. Computations involving angles would use binary angular measurement. Binary fixed point is used in the STM32G4 series CORDIC co-processors and in the discrete cosine transform algorithms used to compress JPEG images. Electronic instruments such as electricity meters and digital clocks often use polynomials to compensate for introduced errors, e.g. from temperature or power supply voltage. The coefficients are produced by polynomial regression. Binary fixed-point polynomials can utilize more bits of precision than floating-point and do so in fast code using inexpensive CPUs. Accuracy, crucial for instruments, compares well to equivalent-bit floating-point calculations, if the fixed-point polynomials are evaluated using Horner's method (e.g. y = ((ax + b)x + c)x + d) to reduce the number of times that rounding occurs, and the fixed-point multiplications utilize rounding addends. Operations [edit] This section does not cite any sources. Please help improve this section by adding citations to reliable sources. Unsourced material may be challenged and removed.(May 2023) (Learn how and when to remove this message) Addition and subtraction [edit] To add or subtract two values with the same implicit scaling factor, it is sufficient to add or subtract the underlying integers; the result will have their common implicit scaling factor and can thus be stored in the same program variables as the operands. These operations yield the exact mathematical result, as long as no overflow occurs—that is, as long as the resulting integer can be stored in the receiving program variable. If overflow happens, it occurs like with ordinary integers of the same signedness. In the unsigned and signed-via-two's-complement cases, the overflow behaviour is well-known as a finite group. If the operands have different scaling factors, then they must be converted to a common scaling factor before the operation. Multiplication [edit] To multiply two fixed-point numbers, it suffices to multiply the two underlying integers, and assume that the scaling factor of the result is the product of their scaling factors. (p/q) (r/s) = pr/qs The result will be exact, with no rounding, provided that it does not overflow the receiving variable. (Specifically, with integer multiplication, the product is up to twice the width of the two factors.) For example, multiplying the numbers 123 scaled by 1/1000 (0.123) and 25 scaled by 1/10 (2.5) yields the integer 123×25 = 3075 scaled by (1/1000)×(1/10) = 1/10000, that is 3075/10000 = 0.3075. As another example, multiplying the first number by 155 implicitly scaled by 1/32 (155/32 = 4.84375) yields the integer 123×155 = 19065 with implicit scaling factor (1/1000)×(1/32) = 1/32000, that is 19065/32000 = 0.59578125. In binary, it is common to use a scaling factor that is a power of two. After the multiplication, the scaling factor can be divided away by shifting right. Shifting is simple and fast in most computers. 
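Here is a minimal Go sketch of that multiply-then-shift pattern. The Fix32 type, the Q16.16 format choice, and the function names are illustrative assumptions rather than anything prescribed by the article; the shift below simply truncates, and the rounding refinements are discussed next.

```
package main

import "fmt"

// Q16.16: signed 32-bit fixed point with 16 fraction bits (scaling factor 1/65536).
const fracBits = 16

type Fix32 int32

func fromFloat(x float64) Fix32 { return Fix32(x * (1 << fracBits)) } // truncates toward zero
func (a Fix32) Float() float64  { return float64(a) / (1 << fracBits) }

// Mul multiplies two Q16.16 values. The raw product of the underlying integers
// carries 32 fraction bits, so it is computed in 64 bits and shifted right by 16
// to restore the Q16.16 scaling (an arithmetic shift, rounding toward negative infinity).
func (a Fix32) Mul(b Fix32) Fix32 {
	return Fix32((int64(a) * int64(b)) >> fracBits)
}

func main() {
	x := fromFloat(1.2)
	y := fromFloat(5.6)
	fmt.Println(x.Mul(y).Float()) // ≈ 6.72, within the precision of 16 fraction bits
}
```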
When right-shifting is used, the result is a flooring division (floor(x/y)); a typical integer-division instruction (such as C integer division or x86 idiv) instead truncates toward zero, which agrees with flooring when the operands are non-negative. A method with rounding can be used to reduce the error introduced. Three variations are possible based on the choice of tie-breaking:

- Round-half-up is possible by adding a 'rounding addend' of half of the scaling factor before shifting. The proof: round-half-up(x/y) = floor(x/y + 0.5) = floor((x + y/2)/y). If y = 2^n, this is equivalent to (x + 2^(n−1)) >> n (where >> represents a right shift).
- Round-half-down is, by analogy, floor((x + y/2 − 1)/y), i.e. (x + 2^(n−1) − 1) >> n, so that exact halves are rounded downward.
- Round-half-to-even basically entails doing an extra decision on top of round-half-up. It is slightly more complicated but still requires no branching on a CPU.

These rounding methods are usable in any scaling through integer division; for example, they are also applicable to the rescaling discussed below (a short code sketch follows the scaling-conversion list).

Division

The division of fixed point numbers can be understood as the division of two fractions of potentially different denominators (scaling factors). With p/q and r/s (where p, q, r, s are all integers), the naive approach is to divide the underlying integers and fold the remaining factors into a new scaling factor s/q:

(p/q) / (r/s) = (p/r) × (s/q), computed as the integer quotient p ÷ r taken with the new scaling factor s/q.

For example, division of 3456 scaled by 1/100 (34.56) by 1234 scaled by 1/1000 (1.234) yields the integer 3456÷1234 = 3 (rounded) with scale factor (1/100)/(1/1000) = 10, that is, 30. As another example, the division of the first number by 155 implicitly scaled by 1/32 (155/32 = 4.84375) yields the integer 3456÷155 = 22 (rounded) with implicit scaling factor (1/100)/(1/32) = 32/100 = 8/25, that is 22×32/100 = 7.04.

With very similar s and q, the above algorithm results in an overly coarse scaling factor. This can be improved by first converting the dividend to a smaller scaling factor. Say we reduce the dividend's scaling factor by a factor of n; then we instead calculate:

(p/q) / (r/s) = (np/(nq)) / (r/s) = (np/r) × (s/(nq)), computed as the integer quotient np ÷ r taken with the scaling factor s/(nq).

For example, if a = 1.23 is represented as 123 with scaling 1/100, and b = 6.25 is represented as 6250 with scaling 1/1000, then simple division of the integers yields 123÷6250 = 0 (rounded) with scaling factor (1/100)/(1/1000) = 10. If a is first converted to 1,230,000 with scaling factor 1/1000000, the result will be 1,230,000÷6250 = 197 (rounded) with scale factor 1/1000 (0.197). The exact value 1.23/6.25 is 0.1968.

A different way to think about the scaling is to consider division the inverse operation of multiplication. If multiplication leads to a finer scaling factor, it is reasonable that the dividend needs to have a finer scaling factor as well to recover the original value given.

Scaling conversion

In fixed-point computing it is often necessary to convert a value to a different scaling factor. This operation is necessary, for example:

- To store a value into a program variable that has a different implicit scaling factor;
- To convert two values to the same scaling factor, so that they can be added or subtracted;
- To restore the original scaling factor of a value after multiplying or dividing it by another;
- To improve the accuracy of the result of a division;
- To ensure that the scaling factor of a product or quotient is a simple power like 10^n or 2^n;
- To ensure that the result of an operation can be stored into a program variable without overflow;
- To reduce the cost of hardware that processes fixed-point data.
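The following Go sketch shows the shift-based roundings described above for a power-of-two rescaling. The function names are mine, and it assumes at least one bit is being dropped (n ≥ 1); it is a sketch of the technique, not a library routine.

```
package main

import "fmt"

// rescaleFloor drops n fraction bits by arithmetic right shift,
// which rounds toward negative infinity.
func rescaleFloor(x int64, n uint) int64 { return x >> n }

// rescaleHalfUp adds half of the divisor, 2^(n-1), before shifting,
// implementing round-half-up = floor((x + y/2)/y) with y = 2^n.
// Requires n >= 1.
func rescaleHalfUp(x int64, n uint) int64 { return (x + 1<<(n-1)) >> n }

func main() {
	// 120 with 8 fraction bits is 120/256 = 0.46875; with only 4 fraction
	// bits the exact value would be 7.5 units of 1/16.
	fmt.Println(rescaleFloor(120, 4))  // 7 -> 7/16 = 0.4375
	fmt.Println(rescaleHalfUp(120, 4)) // 8 -> 8/16 = 0.5
}
```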
To convert a number from a fixed point type with scaling factor R to another type with scaling factor S, the underlying integer must be multiplied by the ratio R/S. Thus, for example, to convert the value 1.23 = 123/100 from scaling factor R=1/100 to one with scaling factor S=1/1000, the integer 123 must be multiplied by (1/100)/(1/1000) = 10, yielding the representation 1230/1000. If the scaling factor is a power of the base used internally to represent the integer, changing the scaling factor requires only dropping low-order digits of the integer, or appending zero digits. However, this operation must preserve the sign of the number. In two's complement representation, that means extending the sign bit as in arithmetic shift operations. If S does not divide R (in particular, if the new scaling factor S is greater than the original R), the new integer may have to be rounded. In particular, if r and s are fixed-point variables with implicit scaling factors R and S, the operation r ← r×s requires multiplying the respective integers and explicitly dividing the result by S. The result may have to be rounded, and overflow may occur. For example, if the common scaling factor is 1/100, multiplying 1.23 by 0.25 entails multiplying 123 by 25 to yield 3075 with an intermediate scaling factor of 1/10000. In order to return to the original scaling factor 1/100, the integer 3075 then must be multiplied by 1/100, that is, divided by 100, to yield either 31 (0.31) or 30 (0.30), depending on the rounding policy used. Similarly, the operation r ← r/s will require dividing the integers and explicitly multiplying the quotient by S. Rounding and/or overflow may occur here too. Conversion to and from floating-point [edit] To convert a number from floating point to fixed point, one may multiply it by the scaling factor S, then round the result to the nearest integer. Care must be taken to ensure that the result fits in the destination variable or register. Depending on the scaling factor and storage size, and on the range input numbers, the conversion may not entail any rounding. To convert a fixed-point number to floating-point, one may convert the integer to floating-point and then divide it by the scaling factor S. This conversion may entail rounding if the integer's absolute value is greater than 2 24 (for binary single-precision IEEE floating point) or of 2 53 (for double-precision). Overflow or underflow may occur if |S| is very large or very small, respectively. Hardware support [edit] This section does not cite any sources. Please help improve this section by adding citations to reliable sources. Unsourced material may be challenged and removed.(May 2023) (Learn how and when to remove this message) Scaling and renormalization [edit] Typical processors do not have specific support for fixed-point arithmetic. However, most computers with binary arithmetic have fast bit shift instructions that can multiply or divide an integer by any power of 2; in particular, an arithmetic shift instruction. These instructions can be used to quickly change scaling factors that are powers of 2, while preserving the sign of the number. Early computers like the IBM 1620 and the Burroughs B3500 used a binary-coded decimal (BCD) representation for integers, namely base 10 where each decimal digit was independently encoded with 4 bits. Some processors, such as microcontrollers, may still use it. In such machines, conversion of decimal scaling factors can be performed by bit shifts and/or by memory address manipulation. 
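Returning to the floating-point conversions described above, here is a minimal Go sketch of the two directions for a power-of-two scaling factor. The names toFixed and toFloat and the Q16.16 width are assumptions for illustration; a production version would also check that the rounded result fits in the destination width.

```
package main

import (
	"fmt"
	"math"
)

const fracBits = 16
const scale = 1 << fracBits // values are integer multiples of 1/65536

// toFixed multiplies by the scaling factor and rounds to the nearest integer.
func toFixed(x float64) int32 {
	return int32(math.Round(x * scale))
}

// toFloat converts the stored integer back by dividing by the scaling factor.
func toFloat(v int32) float64 {
	return float64(v) / scale
}

func main() {
	v := toFixed(1.2)
	fmt.Println(v)          // 78643 (1.2 × 65536, rounded)
	fmt.Println(toFloat(v)) // 1.1999969482421875, the nearest multiple of 1/65536
}
```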
Some DSP architectures offer native support for specific fixed-point formats, for example, signed n-bit numbers with n−1 fraction bits (whose values may range between −1 and almost +1). The support may include a multiply instruction that includes renormalization—the scaling conversion of the product from 2 n−2 to n−1 fraction bits.[citation needed] If the CPU does not provide that feature, the programmer must save the product in a large enough register or temporary variable, and code the renormalization explicitly. Overflow [edit] Overflow happens when the result of an arithmetic operation is too large to be stored in the designated destination area. In addition and subtraction, the result may require one bit more than the operands. In multiplication of two unsigned integers with m and n bits, the result may have m+n bits. In case of overflow, the high-order bits are usually lost, as the un-scaled integer gets reduced modulo 2 n where n is the size of the storage area. The sign bit, in particular, is lost, which may radically change the sign and the magnitude of the value. Some processors can set a hardware overflow flag and/or generate an exception on the occurrence of an overflow. Some processors may instead provide saturation arithmetic: if the result of an addition or subtraction were to overflow, they store instead the value with the largest magnitude that can fit in the receiving area and has the correct sign.[citation needed] However, these features are not very useful in practice; it is generally easier and safer to select scaling factors and word sizes so as to exclude the possibility of overflow, or to check the operands for excessive values before executing the operation. Computer language support [edit] Explicit support for fixed-point numbers is provided by a few programming languages, notably PL/I, COBOL, Ada, JOVIAL, and Coral 66. They provide fixed-point data types, with a binary or decimal scaling factor. The compiler automatically generates code to do the appropriate scaling conversions when doing operations on these data types, when reading or writing variables, or when converting the values to other data types such as floating-point. Most of those languages were designed between 1955 and 1990. More modern languages usually do not offer any fixed-point data types or support for scaling factor conversion. That is also the case for several older languages that are still very popular, like FORTRAN, C and C++. The wide availability of fast floating-point processors, with strictly standardized behavior, has greatly reduced the demand for binary fixed-point support.[citation needed] Similarly, the support for decimal floating point in some programming languages, like C# and Python, has removed most of the need for decimal fixed-point support. In the few situations that call for fixed-point operations, they can be implemented by the programmer, with explicit scaling conversion, in any programming language. On the other hand, all relational databases and the SQL notation support fixed-point decimal arithmetic and storage of numbers. PostgreSQL has a special numeric type for exact storage of numbers with up to 1000 digits. Moreover, in 2008 the International Organization for Standardization (ISO) published a draft technical report to extend the C programming language with fixed-point data types, for the benefit of programs running on embedded DSP processors. 
Two main kinds of data types are proposed, _Fract (fractional part with a minimum 7-bit precision) and _Accum (_Fract with at least 4 bits of integer part). The GNU Compiler Collection (GCC) supports this draft.

Detailed examples

Decimal fixed point multiplication

Suppose there is the following multiplication of two fixed-point, 3-decimal-place numbers:

10.500 × 1.050 = 1 × 10.500 + 0.050 × 10.500 = 10.500 + 0.525000 = 11.025000

Note how, since there are 3 decimal places, we show the trailing zeros. To re-characterize this as an integer multiplication we must first multiply by 1000 (= 10^3), moving all the decimal places into integer places, and then multiply by 1/1000 (= 10^−3) to put them back. The equation now looks like

10.500 × 10^3 × 1.050 × 10^3 × 10^−3 × 10^−3 = 10500 × 1050 × 10^−6 = 11 025 000 × 10^−6 = 11.025000

This works equivalently if we choose a different base, notably base 2 for computing, since a bit shift is the same as a multiplication or division by a power of 2. Three decimal digits is equivalent to about 10 binary digits, so we should round 0.05 to 10 bits after the binary point. The closest approximation is then 0.0000110011:

10 = 8 + 2 = 2^3 + 2^1
1 = 2^0
0.5 = 2^−1
0.05 ≈ 0.0000110011₂

Thus our multiplication becomes

1010.100 × 2^3 × 1.0000110011 × 2^10 × 2^−13 = 1010100 × 10000110011 × 2^−13 = 10110000010111100 × 2^−13 = 1011.0000010111100

This rounds to 11.023 with three digits after the binary point.

Binary fixed-point multiplication

Consider the task of computing the product of 1.2 and 5.6 with binary fixed point using 16 fraction bits. To represent the two numbers, one multiplies them by 2^16, obtaining 78 643.2 and 367 001.6, and rounds these values to the nearest integers, obtaining 78 643 and 367 002. These numbers will fit comfortably into a 32-bit word with two's complement signed format. Multiplying these integers together gives the 35-bit integer 28 862 138 286 with 32 fraction bits, without any rounding. Note that storing this value directly into a 32-bit integer variable would result in overflow and loss of the most significant bits. In practice, it would probably be stored in a signed 64-bit integer variable or register. If the result is to be stored in the same format as the data, with 16 fraction bits, that integer should be divided by 2^16, which gives approximately 440 401.28, and then rounded to the nearest integer. This effect can be achieved by adding 2^15 and then shifting the result by 16 bits. The result is 440 401, which represents the value 6.719 985 961 914 062 5. Taking into account the precision of the format, that value is better expressed as 6.719 986 ± 0.000 008 (not counting the error that comes from the operand approximations). The correct result would be 1.2 × 5.6 = 6.72. For a more complicated example, suppose that the two numbers 1.2 and 5.6 are represented in 32-bit fixed point format with 30 and 20 fraction bits, respectively.
Scaling by 2 30 and 2 20 gives 1 288 490 188.8 and 5 872 025.6, that round to 1 288 490 189 and 5 872 026, respectively. Both numbers still fit in a 32-bit signed integer variable, and represent the fractions 1.200 000 000 186 264 514 923 095 703 125 and 5.600 000 381 469 726 562 50 Their product is (exactly) the 53-bit integer 7 566 047 890 552 914, which has 30+20 = 50 implied fraction bits and therefore represents the fraction 6.720 000 458 806 753 229 623 609 513 510 If we choose to represent this value in signed 16-bit fixed format with 8 fraction bits, we must divide the integer product by 2 50−8 = 2 42 and round the result; which can be achieved by adding 2 41 and shifting by 42 bits. The result is 1720, representing the value 1720/2 8 = 6.718 75, or rather the interval between 3439/2 9 and 3441/2 9 (approximately 6.719 ± 0.002). Notations [edit] Various notations have been used to concisely specify the parameters of a fixed-point format. In the following list, f represents the number of fractional bits, m the number of magnitude or integer bits, s the number of sign bits (0/1 or some other alternative representation), and b the total number of bits. The Q notation was defined by Texas Instruments. One writes Qf to specify a signed binary fixed-point value with f fraction bits; for example, Q15 specifies a signed integer in two's complement notation with a scaling factor 1/2 15. The code Qm.f specifies additionally that the number has m bits in the integer part of the value, not counting the sign bit. Thus Q1.30 would describe a binary fixed-point format with 1 integer bit and 30 fractional bits, which could be stored as a 32-bit 2's complement integer with scaling factor 1/2 30. A similar notation has been used by ARM, except that they count the sign bit in the value of m; so the same format above would be specified as Q2.30. The Embedded C proposal uses .f for unsigned fraction. s .f for signed fraction, m.f for unsigned accumulator, and s m.f for signed accumulator. This would translate the above to s1.30, though this is not a valid type for either fraction or accumulator: in valid versions, m is at least 4 and depending on the underlying type f is at least 7, 15, or 23. Note the non-italicized s: it is simply prepended as a letter. The COBOL programming language originally supported decimal fixed-precision with arbitrary size and decimal scaling, whose format was specified "graphically" with the PIC directive. For example, PIC S9999V99 specified a sign-magnitude 6-digit decimal integer with two decimal fraction digits. The construct REAL FIXED BINARY (p,f) is used in the PL/I programming language, to specify a fixed-point signed binary data type with p total bits (not including sign) with f bits in the fraction part; that is a p+1 bit signed integer with a scaling factor of 1/2 f. The latter could be positive or negative. One could specify COMPLEX instead of REAL, and DECIMAL instead of BINARY for base 10. In the Ada programming language, a numeric data type can be specified by, for example,type F is delta 0.005 range -50.0 .. 50.0. The decimal bounds are translated to the next power of two, hence it means a fixed-point representation consisting of a signed binary integer in two's complement format with at least 8 fraction bits (providing a scaling factor 1/256) and 7 sign-and-magnitude bits (ensuring an actual range from −64.00 to almost +64.00): a minimum total of 15 bits. On a 16-bit computer, the spare bit is assigned to the fractional part. 
Asymmetrical range constraints are also allowed, though the underlying implementation remains symmetric about 0. Newer versions of Ada allow specifying an exact (including non-power-of-two) scaling factor using 'Small => 0.005 (aspect specification), or, if the factor is a power of 10, through a decimal fixed point. The notation Bm has been used to mean a fixed binary format with m bits in the integer part; the rest of the word (typically 32 bits) being fraction bits. For example, the maximum and minimum values that can be stored in a signed B16 number are ≈32767.9999847 and −32768.0, respectively. The VisSim company used fxm.b to denote a binary fixed-point value with b total bits and m bits in the integer part; that is, a b-bit integer with scaling factor 1/2 b−m. Thus fx1.16 would mean a 16-bit number with 1 bit in the integer part and 15 in the fraction. The PS2 GS ("Graphics Synthesizer") User's Guide uses the notation s:m:f, where s specifies the presence (0 or 1) of sign bit. For example, 0:5:3 represents an unsigned 8-bit integer with a scaling factor of 1/2 3. The LabVIEW programming language uses the notation <s,b,m> to specify the parameters of an 'FXP' fixed point numbers. The s component can be either '+' or '±', signifying either an unsigned or 2's complement signed number, respectively. The b component is the total number of bits, and m is the number of bits in the integer part. Software application examples [edit] The popular TrueType font format uses 32-bit signed binary fixed-point with 26 bits to the left of the decimal for some numeric values in its instructions. This format was chosen to provide the minimal amount of precision required for hinting and for performance reasons. With the exception of the Nintendo 64, all 3D games for the fifth generation of video game consoles, including the 3DO, PlayStation, Sega Saturn, and Atari Jaguar use fixed-point arithmetic, as the systems lack hardware floating-point units. The PlayStation transformation coprocessor supports 16-bit fixed point with 12 fraction bits - whereas the Sega Saturn VDP coprocessors used a 32-bit fixed point format reserving the lower 16 bits for the fractional part. The TeX typesetting software, widely used by scientists and mathematicians, uses 32-bit signed binary fixed point with 16 fraction bits for all position calculations. The values are interpreted as fractions of a typographer's point. TeX font metric files use 32-bit signed fixed-point numbers, with 12 fraction bits. Tremor, Toast and MAD are software libraries which decode the Ogg Vorbis, GSM Full Rate and MP3 audio formats respectively. These codecs use fixed-point arithmetic because many audio decoding hardware devices do not have an FPU. The WavPack lossless audio compressor uses fixed point arithmetic. The choice was justified by, among other things, the worry that different floating-point rounding rules in different hardware could corrupt the lossless nature of the compression. The Nest Labs Utilities library, provides a limited set of macros and functions for fixed point numbers, particularly when dealing with those numbers in the context of sensor sampling and sensor outputs. The OpenGL ES 1.x specification includes a fixed point profile, as it is an API aimed for embedded systems, which do not always have an FPU. The dc and bc programs are arbitrary precision calculators, but only keep track of a (user-specified) fixed number of fractional digits. 
Fractint represents numbers as Q2.29 fixed-point numbers, to speed up drawing on old PCs with 386 or 486SX processors, which lacked an FPU. Doom was the last first-person shooter game by id Software to use a 16.16 fixed point representation for all of its non-integer computations, including map system, geometry, rendering, and player movement. This representation is still used in modern Doom source ports. The Q# programming language for the Azure quantum computers, that implement quantum logic gates, contains a standard numeric library for performing fixed-point arithmetic on registers of qubits. See also [edit] Q (number format) Libfixmath - a library written in C for fixed-point math Logarithmic number system Minifloat Block floating-point scaling Modulo operation μ-law algorithm A-law algorithm References [edit] ^"What's the Difference Between Fixed-Point, Floating-Point, and Numerical Formats?". ElectronicDesign. 2017-08-31. ^Julia programming language documentation FixedPointNumbers package. ^ Daniel Lemire, "Rounding integers to even, efficiently," in Daniel Lemire's blog, April 16, 2020, ^PostgreSQL manual, section 8.1.2. Arbitrary Precision Numbers ^JTC1/SC22/WG14 (2008), status of TR 18037: Embedded C ^GCC wiki, Fixed-Point Arithmetic Support ^Using GCC, section 5.13 Fixed-Point Types ^ ab"Appendix A.2". TMS320C64x DSP Library Programmer's Reference(PDF). Dallas, Texas, USA: Texas Instruments Incorporated. October 2003. SPRU565. Archived(PDF) from the original on 2022-12-22. Retrieved 2022-12-22. ^"MathWorks Fixed-Point Toolbox Documentation Glossary". mathworks.com. Archived from the original on 2011-03-16. Retrieved 2011-01-28. ^"ARM Developer Suite AXD and armsd Debuggers Guide". 1.2. ARM Limited. 2001 . Chapter 4.7.9. AXD > AXD Facilities > Data formatting > Q-format. ARM DUI 0066D. Archived from the original on 2017-11-04. ^"Chapter 4.7.9. AXD > AXD Facilities > Data formatting > Q-format". RealView Development Suite AXD and armsd Debuggers Guide(PDF). 3.0. ARM Limited. 2006 . pp.4–24. ARM DUI 0066G. Archived(PDF) from the original on 2017-11-04. ^IBM Corporation, "Numeric items". Online documentation site, accessed on 2021-07-05. ^Ada 83 documentation: "Rationale, 5.3.2: Fixed Point Types". Accessed on 2021-07-05. ^ ^"VisSim is now solidThinking Embed". www.vissim.com. solidThinking Inc. ^PS2 GS User's Guide, Chapter 7.1 "Explanatory Notes" ^"The TrueType Instruction Set: Data types". 2020-09-22. ^"[Freetype] Why 26.6?". ^"Dolphin Emulator". Dolphin Emulator. 2014-03-15. ^"WavPack Technical Description". www.wavpack.com. Retrieved 2015-07-13. ^Nest Labs Utilities library ^"Fractint, A Little Code". Archived from the original on 2010-10-27. Retrieved 2005-10-24. ^"Introduction to the Quantum Numerics Library". Retrieved 2019-11-13. Further reading [edit] Warren, Jr., Henry S. (2013). Hacker's Delight (2 ed.). Addison Wesley / Pearson Education, Inc.ISBN978-0-321-84268-8. 
External links

- The Wikibook Floating Point has a page on the topic of: Fixed-Point Numbers
- The Wikibook Embedded Systems has a page on the topic of: Fixed-Point Arithmetic
- Simple Fixed-Point Math
- Fixed-Point Arithmetic - An Introduction
- Fixed Point Representation and Fractional Math
- A Calculated Look at Fixed-Point Arithmetic (PDF)
Stack Allocations and Escape Analysis - Go Optimization Guide
===============

Stack Allocations and Escape Analysis

When writing performance-critical Go applications, one of the subtle but significant optimizations you can make is encouraging values to be allocated on the stack rather than the heap. Stack allocations are cheaper, faster, and garbage-free—but Go doesn't always put your variables there automatically. That decision is made by the Go compiler during escape analysis. In this article, we'll explore what escape analysis is, how to read the compiler's escape diagnostics, what causes values to escape, and how to structure your code to minimize unnecessary heap allocations. We'll also benchmark different scenarios to show the real-world impact.

What Is Escape Analysis?

Escape analysis is a static analysis performed by the Go compiler to determine whether a variable can be safely allocated on the stack or if it must be moved ("escape") to the heap.

Why does it matter?

- Stack allocations are cheap: the memory is automatically freed when the function returns.
- Heap allocations are more expensive: they involve garbage collection overhead.

The compiler decides where to place each variable based on how it's used.
If a variable can be guaranteed to not outlive its declaring function, it can stay on the stack. If not, it escapes to the heap.

Example: Stack vs Heap

```
func allocate() *int {
    x := 42
    return &x // x escapes to the heap
}

func noEscape() int {
    x := 42
    return x // x stays on the stack
}
```

In allocate, x is returned as a pointer. Since the pointer escapes the function, the Go compiler places x on the heap. In noEscape, x is a plain value and doesn't escape.

How to View Escape Analysis Output

You can inspect escape analysis with the -gcflags compiler option:

```
go build -gcflags="-m" ./path/to/pkg
```

Or for a specific file:

```
go run -gcflags="-m" main.go
```

This will print lines like:

```
main.go:10:6: moved to heap: x
main.go:14:6: can inline noEscape
```

Look for messages like moved to heap to identify escape points.

What Causes Variables to Escape?

Here are common scenarios that force heap allocation:

Returning Pointers to Local Variables

```
func escape() *int {
    x := 10
    return &x // escapes
}
```

Capturing Variables in Closures

```
func closureEscape() func() int {
    x := 5
    return func() int { return x } // x escapes
}
```

Interface Conversions

When a value is stored in an interface, it may escape:

```
func toInterface(i int) interface{} {
    return i // escapes if type info needed at runtime
}
```

Assignments to Global Variables or Struct Fields

```
var global *int

func assignGlobal() {
    x := 7
    global = &x // escapes
}
```

Large Composite Literals

Go may allocate large structs or slices on the heap even if they don't strictly escape.

```
func makeLargeSlice() []int {
    s := make([]int, 10000) // may escape due to size
    return s
}
```

Benchmarking Stack vs Heap Allocations

Let's run a benchmark to explore when heap allocations actually occur—and when they don't, even if we return a pointer.

```
func StackAlloc() Data {
    return Data{1, 2, 3} // stays on stack
}

func HeapAlloc() *Data {
    return &Data{1, 2, 3} // escapes to heap
}

func BenchmarkStackAlloc(b *testing.B) {
    for b.Loop() {
        _ = StackAlloc()
    }
}

func BenchmarkHeapAlloc(b *testing.B) {
    for b.Loop() {
        _ = HeapAlloc()
    }
}
```

Benchmark Results

| Benchmark | Iterations | Time per op (ns) | Bytes per op | Allocs per op |
| --- | --- | --- | --- | --- |
| BenchmarkStackAlloc-14 | 1,000,000,000 | 0.2604 ns | 0 B | 0 |
| BenchmarkHeapAlloc-14 | 1,000,000,000 | 0.2692 ns | 0 B | 0 |

You might expect HeapAlloc to always allocate memory on the heap—but it doesn't here. That's because the compiler is smart: in this isolated benchmark, the pointer returned by HeapAlloc doesn't escape the function in any meaningful way. The compiler can see it's only used within the benchmark and short-lived, so it safely places it on the stack too.

Forcing a Heap Allocation

```
var sink *Data

func HeapAllocEscape() {
    d := &Data{1, 2, 3}
    sink = d // d escapes to heap
}

func BenchmarkHeapAllocEscape(b *testing.B) {
    for b.Loop() {
        HeapAllocEscape()
    }
}
```

| Benchmark | Iterations | Time per op (ns) | Bytes per op | Allocs per op |
| --- | --- | --- | --- | --- |
| BenchmarkHeapAllocEscape-14 | 331,469,049 | 10.55 ns | 24 B | 1 |

As shown in BenchmarkHeapAllocEscape, assigning the pointer to a global variable causes a real heap escape. This introduces real overhead: a 40x slower call, a 24-byte allocation, and one garbage-collected object per call.
The complete benchmark file:

```
package main

import "testing"

type Data struct {
    A, B, C int
}

// heap-alloc-start
func StackAlloc() Data {
    return Data{1, 2, 3} // stays on stack
}

func HeapAlloc() *Data {
    return &Data{1, 2, 3} // escapes to heap
}

func BenchmarkStackAlloc(b *testing.B) {
    for b.Loop() {
        _ = StackAlloc()
    }
}

func BenchmarkHeapAlloc(b *testing.B) {
    for b.Loop() {
        _ = HeapAlloc()
    }
}
// heap-alloc-end

// escape-start
var sink *Data

func HeapAllocEscape() {
    d := &Data{1, 2, 3}
    sink = d // d escapes to heap
}

func BenchmarkHeapAllocEscape(b *testing.B) {
    for b.Loop() {
        HeapAllocEscape()
    }
}
// escape-end
```

When to Optimize for Stack Allocation

Not all escapes are worth preventing. Here's when it makes sense to focus on stack allocation—and when it's better to let values escape.

When to Avoid Escape

- In performance-critical paths. Reducing heap usage in tight loops or latency-sensitive code lowers GC pressure and speeds up execution.
- For short-lived, small objects. These can be efficiently stack-allocated without involving the garbage collector, reducing memory churn.
- When you control the full call chain. If the object stays within your code and you can restructure it to avoid escape, it's often worth the small refactor (see the sketch after these lists).
- If profiling reveals GC bottlenecks. Escape analysis helps you target and shrink memory-heavy allocations identified in real-world traces.

When It's Fine to Let Values Escape

- When returning values from constructors or factories. Returning a pointer from NewThing() is idiomatic Go—even if it causes an escape, it improves clarity and usability.
- When objects must outlive the function. If you're storing data in a global, sending to a goroutine, or saving it in a struct, escaping is necessary and correct.
- When allocation size is small and infrequent. If the heap allocation isn't in a hot path, the benefit of avoiding it is often negligible.
- When preventing escape hurts readability. Writing awkward code to keep everything on the stack can reduce maintainability for a micro-optimization that won't matter.
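As a rough illustration of that trade-off, here is a small sketch (the Point type, newPoint, and fillPoint are my own examples, not from the guide): the constructor style is clearer but tends to escape, while writing into a caller-owned value gives escape analysis the chance to keep it on the stack. Actual placement depends on inlining and how the result is used, so confirm with -gcflags="-m" as shown earlier.

```
package main

import "fmt"

type Point struct{ X, Y int }

// newPoint is the idiomatic constructor style: returning a pointer is clear
// and convenient, even though the pointed-to value commonly escapes to the heap.
func newPoint(x, y int) *Point {
	return &Point{X: x, Y: y}
}

// fillPoint writes into a struct owned by the caller. Because the pointer is
// only written through and not retained, escape analysis can keep the caller's
// value on the stack in hot paths.
func fillPoint(p *Point, x, y int) {
	p.X, p.Y = x, y
}

func main() {
	a := newPoint(1, 2) // typically heap-allocated once the pointer is shared
	var b Point
	fillPoint(&b, 3, 4) // b can stay on main's stack
	fmt.Println(a, b)
}
```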
306
complexity theory - Subset Sum: reduce special to general case - Computer Science Stack Exchange
===============

Asked Apr 2, 2013 by ipsec · Viewed 6k times

Wikipedia states the subset sum problem as finding a subset of a given multiset of integers whose sum is zero. Further, it states that it is equivalent to finding a subset with sum $s$ for any given $s$. So I believe that, as they are equivalent, there must be a reduction in either direction. The one from $s$ to zero is trivial, by setting $s = 0$. But I had no luck finding a reduction from zero to $s$, i.e. given a set of integers $A$, construct a set of integers $B$ containing a subset with sum $s$ (for any $s$) if and only if there is a subset of $A$ with sum zero. Can you give me some pointers?

Tags: complexity-theory, reductions, np-hard

2 Answers

You actually already have a reduction from special to general. By setting $s = 0$, you are basically using the general algorithm to solve the special problem.

For the other way round (i.e. a reduction from general to special): suppose you are given a set $S = \{x_1, \ldots, x_n\}$ and a number $K$, and you have to determine if there is some subset of $S$ which sums to $K$. Now you want to solve this problem, given an algorithm for the case where you can determine if some subset sums to $0$.
Now if $x_i > 0$ for all $i$, we have an easy reduction: $S' = \{x_1, x_2, \ldots, x_n, -K\}$. $S'$ has a subset of sum $0$ iff $S$ has a subset of sum $K$.

The problem occurs when we can have $x_i \le 0$ for some of the $i$. We can assume that $K > 0$ (why?). Suppose the sum of the positive $x_i$ is $P$ and the sum of the negative $x_i$ is $N$. Now construct a new set $S' = \{y_1, y_2, \ldots, y_n\}$ such that $y_i = x_i + M$, where $M = P + |N| + K$. Each $y_i > 0$. Now run the zero-subset-sum algorithm on the sets

$S' \cup \{-(K+M)\}$
$S' \cup \{-(K+2M)\}$
$S' \cup \{-(K+3M)\}$
$\ldots$
$S' \cup \{-(K+nM)\}$

It is easy to show that if $S$ has a subset of sum $K$, then at least one of the above sets has a subset of sum zero. I will leave the proof of the other direction to you.

answered Apr 3, 2013 by Aryabhata

Comments:
- Thank you very much. I wonder, is there a reduction which transforms an instance of 0-subset-sum to one (instead of $n$) instance of K-subset-sum? – ipsec (Apr 3, 2013)
- @ipsec: You mean transform an instance of K-subset-sum to 0-subset-sum? Perhaps taking the union of the $n$ sets above will work. – Aryabhata (Apr 3, 2013)
- Well, I was actually thinking twice whether I got the right direction now. When I want to show that K-subset-sum is NP-hard for every K, given the fact that 0-subset-sum is NP-hard, I can use a reduction from 0-subset-sum to K-subset-sum, for which I would need a poly-time transformation from any 0-instance to a K-instance. But I am not certain now that this is actually what I asked in my question. – ipsec (Apr 3, 2013)
- @ipsec: When you say set $s = 0$, you have shown the NP-hardness of $K$-subset-sum given the NP-hardness of zero-subset-sum: the general problem is at least as hard as the special problem. Note that in reduction terms, you say you have reduced zero-subset-sum to $K$-subset-sum. Also, note that $K$ is an input. When you talk about "every given $K$", what exactly do you mean? The above answer shows that the special case (zero-subset-sum) is as hard (in the NP-hardness sense) as the general case ($K$-subset-sum, where $K$ is an input). – Aryabhata (Apr 4, 2013)
- Never mind. What I originally was wondering about is: if we know that 0-subset-sum is NP-hard, can we derive that e.g. 1-subset-sum is as well? Wikipedia says so, but I was looking for a proper reduction. However, I see now that my wording was totally messed up and I was in fact asking the opposite. Anyway, you gave me enough input to reduce from any K-subset-sum instance to an L-subset-sum instance for any given integers K and L, so my problem is still solved. – ipsec (Apr 4, 2013)

Aryabhata's answer can be fixed up by making use of the fact that we can multiply all the numbers by some large $c$, then add something small to each one to act like a "presence tag", and then supply some extra numbers that will allow us to get to zero if we could get to $cK$ without them. Specifically, we will use $c = 2(n+1)$ and $1$ as the presence tag.
Given an instance $(S = \{x_1, \ldots, x_n\}, K)$ of the general problem with target value $K$, we will create an instance of the specific problem (with target value $0$) that contains:

- $Y = \{y_1, \ldots, y_n\}$, where $y_i = 2(n+1)x_i + 1$.
- The number $z = -2K(n+1) - n$.
- $n - 1$ copies of the number $1$, to be referred to as "pull-up" numbers.

I'll assume, as Aryabhata does, that $K$ is positive. (Since it's been 6 years, I'll answer his exercise for the reader: the reason we can do this is that if we swap the signs of all numbers in an instance of the general problem, including $K$, then we wind up with a new, equivalent problem instance. That means that an algorithm to solve positive-$K$ instances suffices to solve any problem -- to solve an instance with negative $K$, we could perform this sign-swap, run that algorithm, and forward its answer on as the answer to the original question. And of course if $K = 0$ then we don't need to perform any transformation of the general case into the special case at all!)

First let's show that a YES answer to the given instance of the general problem implies a YES answer to the constructed instance of the special problem. Here we can assume that some solution $\{x_{j_1}, \ldots, x_{j_m}\}$ to the general problem exists: that is, this nonempty collection of $m$ numbers sums to $K$. So if we take the corresponding $y$-values $\{y_{j_1}, \ldots, y_{j_m}\}$ into our solution to the constructed instance, they will sum to $2K(n+1) + m$. We can then choose to include $-2K(n+1) - n$ in the solution, leaving us with a sum of $m - n$. Since $1 \le m \le n$, this is in the range $[-n+1, 0]$, which we can successfully pull up to $0$ by including some subset of the pull-up numbers.

Now let's show that a YES answer to the constructed instance implies a YES answer to the original given instance. This is where the multiplication by $2(n+1)$ becomes important -- it is what allows us to be certain that the extra numbers we included can't "do too much".

Here we may assume that some solution $\{y_{j'_1}, \ldots, y_{j'_{m'}}\}$ to the constructed instance exists: that is, this nonempty collection of $m'$ numbers sums to $0$. By the problem requirements, this solution contains at least one element. Further, it must contain at least one element from $Y$, since without this it is impossible to reach a total of $0$: if only pull-up numbers are present, then the sum is necessarily in the range $[1, n-1]$ (note that in this case at least one pull-up number must be present, and all of them are strictly positive, so the sum cannot be $0$); while if the solution consists of just $z$ and some pull-up numbers, then the total is necessarily negative, because $z = -2K(n+1) - n \le -n$ and the most that the pull-up numbers can increase the sum by is $n - 1$.

Now suppose towards contradiction that the solution does not contain $z$. Every element in $Y$ consists of two terms: a multiple of $2(n+1)$, and a $+1$ "presence tag". Notice that the $+1$ term on each of the $n$ elements of $Y$ increases the sum by $1$ if that element is chosen, as does each of the up to $n-1$ pull-up numbers that are chosen, so the total contributed by these two sources to any solution is at least $1$ (because we established in the previous paragraph that at least one element of $Y$ must be chosen) and at most $n + n - 1 = 2n - 1$. In particular, this implies that the sum of these two sets of terms, when taken modulo $2(n+1)$, is nonzero.
Under the assumption that the solution does not contain $z$, the only other components in this sum are the multiples of $2(n+1)$ contributed by the chosen members of $Y$, which do not affect the value of the sum when taken modulo $2(n+1)$. Thus the sum of all terms in the solution, when taken modulo $2(n+1)$, is nonzero, meaning it cannot be equal to the target sum of $0$, meaning it cannot be a valid solution at all: we have found a contradiction, meaning that it must be that $z = -2K(n+1) - n$ is present in every solution after all.

So every solution contains $z$. We know that
$$\big(-2K(n+1) - n\big) + \sum_{i'=1}^{m'} \big(2(n+1)\,x_{j'_{i'}} + 1\big) + \sum \text{pull-ups} = 0,$$
and we can rearrange the terms:
$$-2K(n+1) + \sum_{i'=1}^{m'} 2(n+1)\,x_{j'_{i'}} - \Big(n - \sum_{i'=1}^{m'} 1 - \sum \text{pull-ups}\Big) = 0,$$
$$-2K(n+1) + \sum_{i'=1}^{m'} 2(n+1)\,x_{j'_{i'}} - \Big(n - m' - \sum \text{pull-ups}\Big) = 0,$$
$$2(n+1)\Big(-K + \sum_{i'=1}^{m'} x_{j'_{i'}}\Big) - \Big(n - m' - \sum \text{pull-ups}\Big) = 0.$$
Since the sum is $0$, it must remain $0$ when taken modulo $2(n+1)$, which implies that we can discard all terms containing a multiple of $2(n+1)$ to obtain the new equation $n - m' - \sum \text{pull-ups} = 0$ (a priori this only holds modulo $2(n+1)$, but the quantity has absolute value at most $n - 1 < 2(n+1)$, so it must equal $0$ exactly). This can be directly substituted back into the previous equation to get $2(n+1)\big(-K + \sum_{i'=1}^{m'} x_{j'_{i'}}\big) = 0$. Finally, dividing both sides by $2(n+1)$ leaves $-K + \sum_{i'=1}^{m'} x_{j'_{i'}} = 0$, which yields a solution to the original general problem instance.

answered Oct 13, 2019 by j_random_hacker
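To make the second answer's construction concrete, here is a rough Go sketch; the function names (reduceToZero, hasZeroSubsetSum) are mine, and the brute-force decider is only there so the example runs end to end. Some subset of xs sums to K (with K assumed positive) if and only if some nonempty subset of reduceToZero(xs, K) sums to 0, so the general problem is answered with a single call to the special-case decider.

```
package main

import "fmt"

// Brute-force decider for the special case: does some nonempty subset sum to 0?
// (Exponential; included only so the sketch is self-contained and testable.)
func hasZeroSubsetSum(xs []int) bool {
	n := len(xs)
	for mask := 1; mask < 1<<n; mask++ {
		sum := 0
		for i := 0; i < n; i++ {
			if mask&(1<<i) != 0 {
				sum += xs[i]
			}
		}
		if sum == 0 {
			return true
		}
	}
	return false
}

// reduceToZero builds the instance described in the second answer.
func reduceToZero(xs []int, K int) []int {
	n := len(xs)
	c := 2 * (n + 1)
	out := make([]int, 0, 2*n)
	for _, x := range xs {
		out = append(out, c*x+1) // y_i = 2(n+1)x_i + 1; the +1 is the "presence tag"
	}
	out = append(out, -c*K-n) // z = -2K(n+1) - n
	for i := 0; i < n-1; i++ {
		out = append(out, 1) // n-1 "pull-up" numbers
	}
	return out
}

func main() {
	xs := []int{-3, 5, 7, -2}
	K := 4
	fmt.Println(hasZeroSubsetSum(reduceToZero(xs, K))) // true: 7 + (-3) = 4
}
```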
307
cv.complex variables - Behaviour at natural boundary - MathOverflow
===============

Asked Jun 29, 2017 by Igor Rivin · Viewed 1k times

Suppose I have a holomorphic function $f$ in a domain $\Omega$ with natural boundary $\partial\Omega$. Let $p \in \partial\Omega$. Is it true that there is some analogue of Picard's little theorem - that is, by choosing an appropriate sequence $x_1, \ldots, x_n, \ldots \in \Omega$ converging to $p$, we can have the limit of $f(x_i)$ be (almost) any complex number?

EDIT As divined by Noam Elkies, this was inspired by the recent discussion of $\sum_{i=0}^{n} z^{i^2}$, and the fact that its zeros seem to cluster near its natural boundary. Indeed, consider the function (a slight variant of that occurring in both answers) $\sum_{i=0}^{n} \frac{z^{2^i - 1}}{(i+1)^2}$. Here is the graph of its zeros ($n = 10$): [plot omitted]. This would tend to imply that you don't have to work very hard to tend to zero when approaching the boundary, and I assume that zero is not a particularly exceptional value. Indeed, if you plot the $1$s of the function (preimages of the value $1$), you get an identical plot: [plot omitted]. Which would tend to indicate that my conjecture has at least a grain of truth in it.

Tags: cv.complex-variables

Comments:
- Suggested by your recent computations for $\sum_{n=1}^{\infty} z^{n^2}$? mathoverflow.net/questions/272990 – Noam D. Elkies (Jun 30, 2017)
- @NoamD.Elkies Yes, see the edit... – Igor Rivin (Jun 30, 2017)
- An analytic function can be continuous and even smooth in the closed disk and have the circle as a natural boundary. – Alexandre Eremenko (Jun 30, 2017)

3 Answers

No. Consider $f(z) = \sum_{n=1}^{\infty} \frac{z^{2^n}}{n^2}.$ By the Ostrowski-Hadamard gap theorem, the natural boundary is the unit circle.
But the series converges absolutely on the unit circle, and the limit is always $f(p)$, which is bounded in absolute value by $\sum_n 1/n^2$.

answered Jun 29, 2017 by Robert Israel

No, this analogue is false. For example, the function $f(z) = \sum_{n=1}^{\infty} \frac{z^{2^n}}{n^2}$ cannot be analytically continued beyond $D$, but $|f|$ is bounded by $10$ in $D$, so the modulus of any limit of $f(x_i)$ likewise cannot be greater than $10$.

answered Jun 29, 2017 by Aleksei Kulikov

There is, indeed, a grain of truth in your conjecture. One possible formalization of it is as follows. Suppose that there is a sequence of functions $f_n$, analytic in the unit disk $D$, such that the $f_n$ converge pointwise on some set $E \subset D$ having an accumulation point inside $D$. If the $f_n$ omit two fixed distinct values in $D$, then the $f_n$ converge uniformly inside $D$. This is just the usual mumbo-jumbo about normal families (Montel's theorem, to be exact) plus the uniqueness theorem (any limit of a subsequence has prescribed values on $E$). If you replace $D$ by a disk around $p$ and take the $f_n$ to be Taylor polynomials of $f$, say, you'll see that either $f$ can be analytically extended to a neighborhood of $p$, or you have the effect you observed for $a$-points of $f_n$ for almost every $a \in \mathbb{C}$. Alas, more often than not, the zeroes on your picture will lie outside $\Omega$, so you cannot extract too much information about $f$ itself from that picture.

answered Jun 30, 2017 by fedja
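For concreteness (this small computation is mine, not part of either answer): for the lacunary series above and any $|z| \le 1$,
$$|f(z)| \;\le\; \sum_{n=1}^{\infty} \frac{|z|^{2^n}}{n^2} \;\le\; \sum_{n=1}^{\infty} \frac{1}{n^2} \;=\; \frac{\pi^2}{6} \;\approx\; 1.645,$$
so every limit of $f(x_i)$ along a sequence approaching the boundary has modulus at most $\pi^2/6$, well below the crude bound of $10$ quoted in the second answer.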
308
THE GRAPHON AS A LIMIT FOR DENSE GRAPHS VALERIE HAN Abstract. This paper will introduce the graphon, which is the completion of the space of dense graphs. We will discuss homomorphism densities, an important property of graphs, and cut distance and sampling distance, two metrics used to compare graphs, in order to make sense of the graphon serving as a limit object for dense graphs. Contents 1. Introduction 1 2. Preliminaries 2 3. Graph Homomorphisms 3 4. Graphons 8 5. Cut Distance and Sampling Distance 11 6. Convergence of Dense Graphs 13 7. Examples of Convergence to Graphons 15 Acknowledgments 17 References 17 1. Introduction When working with large graphs, it is difficult to and, in fact, not particularly illuminating to know exactly which nodes are connected to one other. One issue we face, then, is how to determine the structure of such a large graph in an informative and not (unnecessarily) overwhelming way. One approach is using graph homomorphisms, which can tell us a lot about the structure of a graph. However, finding homomorphisms becomes unwieldy as the size of the graph grows. To make this less unwieldy, in Section 3, we explore different kinds of homomorphisms, which are more useful or convenient in certain cases, as well as defining and relating homomorphism densities. Another way to study large graphs is through sampling, examining small random induced subgraphs of the large graph. Using this approach, we can compare graphs using sampling distance, defined in Section 5. In that section, we will also define cut distance, a metric that is more directly informative structurally. Another approach, the main focus of our paper, is to pass from sequences of larger and larger graphs to ideal limiting objects, which share important properties with their sequences. An advantage of this approach is having a more explicit representation of the limit object, which allows for greater analytic flexibility. In fact, we shall see that these approaches are not separate at all. In Section 6, we will define convergence using graph homomorphism densities, and we will see 1 2 VALERIE HAN that the limit of homomorphism densities of a convergent sequence of dense graphs will be identical to the homomorphism densities of its limit object, the graphon. Moreover, the cut distance between the graphs in a sequence approaches 0 if and only if the graphs themselves converge, and the sampling distance converges to 0 if and only if the cut distance converges to 0. This paper will assume familiarity with the very basics of graph theory and probability. We will be closely following Lov´ asz’s book on the subject. 2. Preliminaries First, we define terms and notation that will be used throughout the paper. Remark 2.1. In the paper, we will use the terms nodes and vertices interchangeably. Notation 2.2. Let G be a graph. Define v(G) to be the number of vertices of G and V(G) to be the set of vertices of G. Similarly, define e(G) to be the number of edges of G and E(G) to be the set of edges of G. The following definitions of dense and sparse, terms which are not well-defined in graph theory, are not universally used but are convenient for our purposes. Definition 2.3. Let G be a graph with n nodes. G is dense if the number of nodes that each node is adjacent to is linearly related to n. A graph is sparse if each node has a bounded number of neighbors. Note that these may not seem mutually exclusive. 
However, they are mutually exclusive when considering a sequence of graphs where the number of vertices is increasing, important for investigating graph limits. While the fraction of edges of a sparse graph (normalized with respect to the number of nodes) will tend towards 0, the fraction of edges of a dense graph will tend towards some positive number. This will become important in the context of graph homomorphisms in Section 3. Although sparse graphs are more important from a practical point of view, this paper will focus on dense graphs because they have a more natural limit object def-inition. We will explain why the graphon and the construction of homomorphism numbers in this paper unfortunately do not work for sparse graphs in Sections 6 and 3, respectively. Next, we define the two types of graphs we will work with. Definition 2.4. A simple graph is an undirected graph without weights, multiple edges, or loops. Definition 2.5. A weighted graph is an undirected graph that has a positive real number, known as a weight, assigned to each edge and/or each node. These two types of graphs will be sufficient for our purposes. We will mainly use simple graphs, but we will use weighted graphs to show how the same ideas illustrated with simple graphs can easily be extended. (For those readers familiar with other types of graphs, note that the weighted graph can easily represent other types of graphs such as multigraphs.) Finally, we define some terms useful for examples and explanations. THE GRAPHON AS A LIMIT FOR DENSE GRAPHS 3 Definition 2.6. A simple graph is a complete graph if every node is connected to every other node. We denote the complete graph with n vertices Kn. Definition 2.7. Consider a graph G on the vertices 1, . . . , n. G is a cycle if each node is connected only to the ones before and after it. We denote the cycle with n vertices Cn. Figure 1. K4 and C6 Definition 2.8. A graph F is a subgraph of graph G if there exists a map ϕ : V(F) →V(G) such that ϕ preserves adjacency. Definition 2.9. A graph F is an induced subgraph of graph G if there exists a map ϕ : V(F) →V(G) such that ϕ preserves adjacency and non-adjacency. Note that F being an induced subgraph of G means there exists an exact copy of F in G, while F being a subgraph of G means there exists a copy but one that may have extra edges. 3. Graph Homomorphisms It can be difficult to describe and characterize graphs, especially as the graphs become larger. One way we can characterize graphs is by how they’re related to other graphs. Thus, we consider graph homomorphisms. Definition 3.1. Let F, G be graphs. A map from V(F) to V(G) is a graph homo-morphism, denoted by F →G, if it preserves adjacency. Note, however, that a homomorphism does not necessarily preserve non-adjacency. Example 3.2. Consider the graphs in Figure 2. There exists the obvious homo-morphism f which takes F = C4 directly to G = C4; f(1) = A, f(2) = B, f(3) = C, and f(4) = D. However, there also exists the less obvious homomorphism g that “squishes” two opposite corners together since they are not connected by an edge; g(1) = g(3) = A, g(2) = B, and g(4) = D. Homomorphisms convey more information than may be immediately evident. Example 3.3. A homomorphism from the complete graph Kn to G exists if and only if G contains a clique with n nodes. (A clique is a subset of nodes such that every pair of distinct nodes is connected. The term originates from a clique of people—people who all know each other.) 
The reverse homomorphism is also interesting; if there exists a homomorphism G →Kn, the nodes in G can be split into n groups such that no node in each group is adjacent to another node in the same group. Thus, the existence of a homomorphism G →Kn means that G is n-colorable. 4 VALERIE HAN 4 1 2 3 D A B C F G Figure 2. Two labeled C4 graphs F and G In the context of graph limits, we look at homomorphisms into a given graph G rather than from a given graph G. This can be understood in terms of sampling from the given graph. Testing for homomorphism(s) from a smaller graph into a large graph G can give us information on what is “present” in G. This will be made even clearer when we discuss injective and induced homomorphisms. However, as these graphs become larger, it becomes difficult to determine or even record all the different homomorphisms from say K3 to G. Instead, then, we consider homomorphism numbers and densities, allowing us to condense and quantify the information. Definition 3.4. Define the homomorphism number hom(F, G) to be the number of homomorphisms from F to G. Notation 3.5. Throughout the rest of this section, we will use n = v(G) and k = v(F) to make equations simpler. In order to compare homomorphism numbers of two different graphs, we nor-malize them to get the probability that a random map from V (F) to V (G) is a homomorphism. The homomorphism number, which is the number of homomor-phisms from F to G, is divided by the number of maps from F to G. Since maps are not constrained by preserving adjacency, any node in F can be mapped to any node in G, giving us nk total maps from F to G. Definition 3.6. Define the homomorphism density (3.7) t(F, G) = hom(F, G) nk . Note that it is possible for this value to be 1. If F were an empty graph, which gives the most homomorphisms, each vertex of F could be mapped to any vertex of G to create a homomorphism, leaving us with nk total homomorphisms. Although this definition may now seem natural and indeed works well for dense graphs, note that homomorphism density is essentially meaningless for sparse graphs as the number of vertices increases because the homomorphism density will always approach 0. Consider, for example, the homomorphism density of K2 into a large sparse graph G with n vertices. Suppose the bound on the number of neighbors of each node is b. Then, the density t(K2, G) = hom(K2,G) n2 ≤bn n2 = b n, which will converge to 0 as n →∞. We now extend these concepts to injective and induced homomorphisms. THE GRAPHON AS A LIMIT FOR DENSE GRAPHS 5 Injective homomorphisms are exactly what they sound like, but the definition of induced homomorphisms may not be immediately apparent. Definition 3.8. An induced homomorphism is an injective homomorphism that also preserves non-adjacency. The induced homomorphism from F to G is so named because it is an embedding of F into G as an induced subgraph. Using different types of homomorphisms may be useful for three reasons. First, one is sometimes more convenient to use in a calculation than another, producing a cleaner formula when working probabilistically. Second, they are useful in different contexts as we will see in Example 3.9 when compared to Example 3.3. Third, as we will see in Section 6, they can provide different intuition than the usual homomorphism. Example 3.9. As we saw in Example 3.3, a homomorphism from F to G does not imply that there exists a subgraph F of G. 
However, there exists an injective homomorphism from F to G if and only if there exists such a subgraph in G. We can thus use the injective homomorphism as a tool to detect subgraphs such as cycles. Having an induced homomorphism from one graph to another is an even stronger condition. An induced homomorphism from F to G is equivalent to a copy of F existing in G, with the same adjacencies and non-adjacencies. We can use the induced homomorphism to detect how many cycles are present as exact copies in G. Note that while K4 will have an injective homomorphism from C4, it will not have an induced one. Notation 3.10. Let inj(F, G) be the number of injective homomorphisms from F to G, and let ind(F, G) be the number of induced homomorphisms from F to G. Now we define injective and induced homomorphism densities, which are similar to the usual homomorphism density. Instead of describing the probability of a random map from F to G being a homomorphism, the injective homomorphism density corresponds to the probability of a random injection from V(F) to V(G) being a homomorphism, and the induced homomorphism density corresponds to the probability of a random injection from V(F) to V(G) preserving both adjacency and non-adjacency. Instead of dividing by the total number of maps, we divide by the number of injective maps. Because we cannot map two vertices of F to the same vertex of G, we obtain n(n −1) · · · (n − k + 1) instead of nk. Definition 3.11. Define the injective homomorphism density (3.12) tinj(F, G) = inj(F, G) n(n −1) · · · (n −k + 1). Similarly, define the induced homomorphism density (3.13) tind(F, G) = ind(F, G) n(n −1) · · · (n −k + 1). Through combinatorial manipulation, we can relate the different homomorphism numbers to each other. 6 VALERIE HAN Induced and injective homomorphism numbers are closely related to one another since induced homomorphisms are injective homomorphisms that preserve non-adjacency. In order to relate them, we sum all induced homomorphisms from graphs with the same number of nodes that contain F as a subgraph. Then, with F ′ ranging over all simple graphs obtained from F by adding edges, (3.14) inj(F, G) = X F ′⊇F ind(F ′, G). Using inclusion-exclusion, we also have (3.15) ind(F, G) = X F ′⊇F V(F ′)=V(F ) (−1)e(F ′)−e(F )inj(F ′, G). It is also possible to relate homomorphism numbers and injective homomorphism numbers. To do this, we use the partition and quotient. For those readers who are unfa-miliar, a partition P groups vertices together. The quotient graph F/P is formed by collapsing the vertices in each of the groups in the partition into a single vertex; two vertices in F/P are adjacent if and only if any of the original vertices in the two groups are connected. Any parts of the partition that have internal edges collapse to a single vertex with a loop. With P ranging over all partitions of V(F) and F/P as the quotient graph, (3.16) hom(F, G) = X P inj(F/P, G). Using (3.16), we can also express inj in terms of hom by considering the values inj(F, G) unknowns and solving the system. Deriving the explicit formula is a more involved computation and is not relevant for our discussion; thus, we direct the reader to (5.17)-(5.18) in if the reader is interested. Now, we turn to relating homomorphism densities rather than numbers. Converting the relationships in (3.14) and (3.15) to relationships between in-jective and induced homomorphism numbers is easy. 
We simply divide by n(n − 1) · · · (n −k + 1) to obtain (3.17) tinj(F, G) = X F ′⊇F tind(F ′, G), and (3.18) tind(F, G) = X F ′⊇F V(F ′)=V(F ) (−1)e(F ′)−e(F )tinj(F ′, G). However, because t and tinj have different normalizations, we can only obtain an inequality. Fortunately, this is sufficient for our purposes. Recall that tinj is the probability that an injective map is a homomorphism while t is the probability that a random map is a homomorphism. THE GRAPHON AS A LIMIT FOR DENSE GRAPHS 7 Let ϕ be a random map from V(F) to V(G). If t(F, G) ≥tinj(F, G), |t(F, G) −tinj(F, G)| = t(F, G) −tinj(F, G) = hom(F, G) nk − inj(F, G) n(n −1) · · · (n −k + 1) ≤hom(F, G) nk −inj(F, G) nk = P(ϕ is a homomorphism) - P(ϕ is injective) = P(ϕ is a non-injective homomorphism) ≤P(ϕ is non-injective) |t(F, G) −tinj(F, G)| ≤1 n k 2  , (3.19) where 1 n is the probability of two given vertices of F being sent to the same vertex on G while k 2  is the number of ways to choose two vertices of F. Meanwhile, if tinj(F, G) > t(F, G), |t(F, G) −tinj(F, G)| = tinj(F, G) −t(F, G) = inj(F, G) n(n −1) · · · (n −k + 1) −hom(F, G) nk ≤ hom(F, G) n(n −1) · · · (n −k + 1) −hom(F, G) nk = hom(F, G)  1 n(n −1) · · · (n −k + 1) −1 nk  ≤nk  1 n(n −1) · · · (n −k + 1) −1 nk  |t(F, G) −tinj(F, G)| ≤ nk n(n −1) · · · (n −k + 1) −1, (3.20) Note that these inequalities are sufficient for our purposes because we are con-cerned with graphs G with a large number of vertices, and the difference in the homomorphism densities approaches 0 as v(G) →∞. Finally, we extend homomorphism numbers to weighted graphs. Counting weighted homomorphisms is more difficult because we must “weight” the homomorphisms. Definition 3.21. Let F be a simple graph, and let G be a weighted graph. Call the nodeweights αv(G) and edgeweights βuv(G). For each map, ϕ : V(F) →V(G), define their weights (3.22) αϕ = Y u∈V(F ) αϕ(u)(G), and (3.23) homϕ(F, G) = Y uv∈E(F ) βϕ(u)ϕ(v)(G). 8 VALERIE HAN Define the weighted homomorphism number (3.24) hom(F, G) = X ϕ:V(F )→V(G) αϕhomϕ(F, G), and (3.25) inj(F, G) = X ϕ:V(F )→V(G) ϕ injective αϕhomϕ(F, G), Note that αϕ in (3.22) weights the nodes so that maps ϕ sending vertices of F to vertices of G with higher nodeweights are weighted more. If there does not exist a corresponding edge of E(F) in E(G), where we treat non-edges as edges with weight 0, homϕ(F, G) in (3.23) is 0, so this map will not contribute to the sum, ensuring that only adjacency-preserving maps are counted. Note also that this definition is truly an extension of the simple graph version. If we let the nodeweights and edgeweights be 1, (3.24) and (3.25) reduce to the simple graph definitions. 4. Graphons The relationship between the graphon and the graph may not be immediately evident, but the reader may be reassured that this will be made clearer after the definition. Definition 4.1. A graphon is a bounded symmetric measurable function from [0, 1] × [0, 1] to R. To show its relation to a graph, we explain how we can construct a graphon from a graph. The construction of the graphon from a graph is similar to the construction of a graph’s adjacency matrix. Let G be a graph with n vertices. Label the vertices 1, . . . , n. In the case of the simple graph, partition the interval [0, 1] into n intervals of length 1/n. For each pair of vertices j and k , consider x ∈ j−1 n , j n and y ∈ k−1 n , k n . We denote the graphon WG as the graphon formed from the graph G, defined by W(x, y) = 1 if vertices j and k are adjacent and 0 if not. 
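The step-function construction of $W_G$ just described is easy to make computational. The following Go sketch is the editor's illustration, not part of the paper; the function name graphonFromGraph is mine, and the adjacency matrix is the one shown in Figure 3 below.

```
package main

import "fmt"

// graphonFromGraph returns the step-function graphon W_G: vertex j corresponds
// to the interval [(j-1)/n, j/n), and W(x, y) is the adjacency value (or edge
// weight) of the vertices whose intervals contain x and y.
func graphonFromGraph(adj [][]float64) func(x, y float64) float64 {
	n := len(adj)
	return func(x, y float64) float64 {
		j, k := int(x*float64(n)), int(y*float64(n))
		if j >= n { // x == 1 falls into the last interval
			j = n - 1
		}
		if k >= n {
			k = n - 1
		}
		return adj[j][k]
	}
}

func main() {
	// Adjacency matrix of the 4-vertex graph in Figure 3.
	adj := [][]float64{
		{0, 1, 0, 1},
		{1, 0, 1, 1},
		{0, 1, 0, 1},
		{1, 1, 1, 0},
	}
	W := graphonFromGraph(adj)
	fmt.Println(W(0.1, 0.4)) // vertices 1 and 2 are adjacent, so this prints 1
}
```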
The process can easily be extended to weighted graphs. Replace 1 by the edge weight, and partition the interval [0, 1] proportionally to the node weights.

Figure 3. A graph, its adjacency matrix
$$\begin{pmatrix} 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{pmatrix},$$
and its corresponding graphon. The white areas of the grid represent 0 on the graphon while the shaded ones represent 1. Note that the origin is in the upper lefthand corner.

Now that the relation between the graph and the graphon is clearer, the reader may wonder if this is all there is to it. Can we simply use the usual real-analysis definition of convergence using a norm like the $L^1$ norm? Unfortunately, the answer is no. The reason for this is that graphons can be "close" without being close in the $L^1$ distance. For example, recall our construction of a graphon from a graph. The labeling of vertices is arbitrary. The same graph can be represented by two graphons that look very different. We would like to be able to view these two graphons as the "same" in some sense. This is especially important when we consider the important property we defined earlier and that is the backbone of our notion of convergence—homomorphism densities—which do not change under relabeling of vertices.

Figure 4. A graph (labeled two different ways) and two different graphon representations due to relabeling.

Definition 4.2. Graphons $U, W$ are weakly isomorphic if there exist measure-preserving functions $\varphi, \psi : [0,1] \to [0,1]$ such that $U(\varphi(x), \varphi(y)) = W(\psi(x), \psi(y))$ almost everywhere. Denote $U(\varphi(x), \varphi(y))$ by $U^{\varphi}$.

Example 4.3. Consider a graph $G$. Label its vertices two different ways, and call the resulting labeled graphs $G_1$ and $G_2$. The graphons created from $G_1$ and $G_2$ are weakly isomorphic because relabeling only moves around the subintervals and does not resize them. For example, the two graphons in Figure 4 are weakly isomorphic.

While defining weak isomorphism is motivated by the issue of relabeling, it is more versatile than that one use.

Example 4.4. Define $\varphi_k : x \mapsto kx \pmod 1$. The map $\varphi_2$ is a measure-preserving map that makes four smaller "copies" of a given graphon. Given a graphon $W$, the graphons $W$ and $W^{\varphi_2}$ are weakly isomorphic.

There exists an equivalent definition of weak isomorphism, defining two graphons $U, W$ to be weakly isomorphic if $t(F, U) = t(F, W)$ for every simple graph $F$.

Proposition 4.5. Graphons $U$ and $W$ are weakly isomorphic if and only if $t(F, U) = t(F, W)$ for every simple graph $F$.

Figure 5. A graphon $W$ and $W^{\varphi_2}$.

For the proof, see Corollary 10.35(a) in .

We now connect graphons to our definition of convergence by defining homomorphism densities for graphons. Call the vertices of $F$ $1, \ldots, v(F)$. Choose $x_i \in [0,1]$ for each vertex $i$. For each edge $ij$ of $F$, take the product of $W(x_i, x_j)$ to represent the weight of the homomorphism. By integrating over all choices of $x_i$, we obtain the homomorphism density. This process parallels the process of sampling $v(F)$ vertices of a graph and summing the resulting homomorphism weights to obtain the weighted homomorphism density for the graph. In fact, $t(F, G) = t(F, W_G)$ for all weighted graphs $G$.

Definition 4.6. Let $F$ be a graph, and let $W$ be a graphon. Define the homomorphism density
$$(4.7)\qquad t(F, W) = \int_{[0,1]^{v(F)}} \prod_{ij \in E(F)} W(x_i, x_j) \prod_{i \in V(F)} dx_i.$$
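Since the integral in (4.7) is an expectation over independent uniform samples $x_1, \ldots, x_{v(F)}$, it can be estimated numerically. The following Go sketch is the editor's illustration, not part of the paper; for the constant graphon $W \equiv p$ it recovers the value $p^{e(F)}$ computed in Example 7.1 below.

```
package main

import (
	"fmt"
	"math/rand"
)

// homDensity is a Monte Carlo estimate of t(F, W) from Definition 4.6: sample
// x_1,...,x_k uniformly from [0,1] and average the product of W(x_i, x_j) over
// the edges ij of F.
func homDensity(edges [][2]int, k int, W func(x, y float64) float64, samples int) float64 {
	total := 0.0
	x := make([]float64, k)
	for s := 0; s < samples; s++ {
		for i := range x {
			x[i] = rand.Float64()
		}
		prod := 1.0
		for _, e := range edges {
			prod *= W(x[e[0]], x[e[1]])
		}
		total += prod
	}
	return total / float64(samples)
}

func main() {
	// F = K_3 (a triangle); W is the constant graphon W(x, y) = 0.5.
	triangle := [][2]int{{0, 1}, {1, 2}, {0, 2}}
	W := func(x, y float64) float64 { return 0.5 }
	// Should be close to 0.5^3 = 0.125, i.e. p^{e(F)} for W identically p.
	fmt.Println(homDensity(triangle, 3, W, 1_000_000))
}
```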
Note that in this context, the injective homomorphism density is insignificant because when randomly assigning vertices i and j to xi and xj in the interval [0, 1], xi ̸= xj with probability 1. Thus, the injective homomorphism density and homomorphism density are essentially equivalent in this context. However, it is useful to define the induced subgraph density. Definition 4.8. Let F be a graph, and let W be a graphon. Denote V 2  \E(F) to be the set of all pairs of vertices with no edge between them. Define the induced homomorphism density (4.9) tind(F, W) = Z [0,1]v(F ) Y ij∈E(F ) W(xi, xj) Y ij∈( V 2)\E(F ) (1−W(xi, xj)) Y i∈V(F ) dxi. These graphon versions of homomorphism numbers are very similar to their graph versions in that they can be related in similar ways. Similarly to (3.18), we have (4.10) tind(F, W) = X F ′⊇F V(F ′)=V(F ) (−1)e(F ′)−e(F )t(F ′, W) by expanding the product in parentheses. In addition, similarly to (3.17), we have (4.11) tinj(F, W) = X F ′⊇F tind(F ′, W). THE GRAPHON AS A LIMIT FOR DENSE GRAPHS 11 5. Cut Distance and Sampling Distance Weakly isomorphic graphons are equivalent in every way that matters for us. Thus, we want to define a metric that will make the distance between two weakly isomorphic graphons 0. We thus turn to two different distances that both work for convergence, as we will see in Section 6. First, though, to provide motivation for the other two more involved distances, we define an intuitive distance and explain why it cannot be used. Especially when considering the distance between graphs, one distance that may seem intuitive is the edit distance, the number of edges that need to be toggled to move from one graph to another. We may wish to normalize this because our graphs are very large. Thus, we define edit distance for our purposes as follows. Definition 5.1. Let G, G′ be dense graphs with the same set of nodes. The edit distance d1 is defined by (5.2) d1(G, G′) = |E(G)△E(G′)| n2 , where △denotes the symmetric difference. However, there are two main problems with using this distance. First, as discussed in the Introduction, structural similarity is important when comparing large graphs, but the edit distance does not measure structural similarity well. For example, consider two random graphs on {1, . . . , n} with edge density 1/2, which are structurally very similar, including approaching the same homomorphism densities as n →∞with probability 1. (For more structural similarities, see QR1-QR5 in .) Yet, the graphs’ edit distance will be large; it will be close to 1/2 with high probability. The second problem is that edit distance is only defined for graphs with the same number of nodes. When analyzing large graphs, we generally do not care overmuch about exactly how many nodes the two graphs have. In fact, we may not even know exactly how many nodes a graph has. In a sense, we wish to determine whether the object is cheese and not be bogged down by how large a piece we are given. We thus arrive at the idea of sampling a graph. Instead of looking at the graph as a whole, we can consider whether small samples of the two graphs are similar. First, we define a preliminary distance to compare two probability distributions. Definition 5.3. Let α, β be two probability distributions on the same set X. Define the variation distance dvar between α and β by (5.4) dvar(α, β) = sup X′ |α(X′) −β(X′)|, where X′ varies over all measurable subsets of X. 
Now, we define probability distributions obtained by sampling to use variation distance to compare graphs. Definition 5.5. Define σG,k to be the probability distribution on graphs on {1, . . . , k} obtained by selecting k random nodes of G, which will generally have more than k nodes, and taking the corresponding induced subgraph. If k > v(G), we return the empty k-node graph. 12 VALERIE HAN In order to measure the distance between two graphs with a single number, rather than a different variation distance for every k, we combine the variation distances using a sum. To ensure convergence, we divide by 2k. Definition 5.6. Let G, G′ be dense graphs. Define the sampling distance δsamp between G and G′ by (5.7) δsamp(G, G′) = ∞ X k=1 1 2k dvar(σG,k, σG′,k). The 1/2k is useful for another reason as well. It ensures that the sum is most influenced by “small” induced subgraphs even though sampling distance includes larger ones in the definition. However, the flip side of this quality is that sampling distance does not directly reflect global structural similarity, although we will see in Section 6 that it reflects structural similarity well enough, albeit indirectly. This definition can be extended to graphons simply by defining a distribution to use in the graphon version. Definition 5.8. Let W be a graphon such that 0 ≤W ≤1. Given S = {x1, . . . , xn} with xi ∈[0, 1], define GS,W to be the probability distribution on graphs obtained from W in the following way. Form weighted graph H(S, W) by assigning weight W(xi, xj) to edge ij, giving weight 0 to loops. Given such a weighted graph H, form the random simple graph G(H) on V(H) by connecting nodes i and j with probability βij(H), deciding independently for distinct pairs of nodes. Notation 5.9. For easier notation in defining sampling distance for graphons, given S = {x1, . . . , xn}, we denote GS,W by Gn,W . The graphon version of sampling distance then immediately follows. Definition 5.10. Let U, W be graphons. Define the sampling distance δsamp be-tween U and W by (5.11) δsamp(U, W) = ∞ X k=1 1 2k dvar(Gk,U, Gk,W ). The sampling distance can also be written in terms of induced homomorphism numbers. This will be useful in Section 6 when we relate sampling distance and convergence of graphs. Note that Gk,U(F) = tind(F, U) for any graphon U and simple k-node graph F. dvar(Gk,U, Gk,W ) = sup X |Gk,U(X) −Gk,W (X)| = sup X X F ∈X (Gk,U(F) −Gk,W (F)) = sup X X F ∈X (tind(F, U) −tind(F, W)) . (5.12) The quantity inside the supremum is maximized when Gk,U(F) > Gk,W (F) for all graphs F ∈X or when Gk,U(F) < Gk,W (F) for all graphs F ∈X. The difference between Gk,U(F) and Gk,W (F) contributes positively towards the sum THE GRAPHON AS A LIMIT FOR DENSE GRAPHS 13 P F |Gk,U(F)−Gk,W (F)| both when Gk,U(F) is larger and when Gk,W (F) is larger. The total differences in both cases must be equal for the overall probabilities to sum to 1. Thus, we divide by 2 to obtain the quantity in (5.12), giving us (5.13) dvar(Gk,U, Gk,W ) = 1 2 X F |tind(F, U) −tind(F, W)|, where F varies over all simple graphs with k vertices. Then, we have (5.14) δsamp(U, W) = X F 2−v(F )−1|tind(F, U) −tind(F, W)|, where F varies over all finite graphs. Now, we move to our discussion of cut distance. Unlike sampling distance, cut distance does directly reflect structural similarity, which makes it more useful to use in some cases. 
Although cut distance can be defined on graphs, and, indeed, gives some intuition that is not as obvious in the graphon version, we give only the graphon definition because the graph version is quite involved. Moreover, we can use the graphon version for graphs as well, by transforming any graph into a graphon. The interested reader is encouraged to read Sections 8.1 and 8.2 in for more information on the graph version. Definition 5.15. Let U, W be graphons. Define cut distance δ□between U and W by (5.16) δ□(U, W) = inf ϕ,ψ sup S,T Z S×T (U(ϕ(x), ϕ(y)) −W(ψ(x), ψ(y)))dxdy , where ϕ, ψ range over all measure preserving maps on [0, 1] and S, T range over all measurable subsets of [0, 1]. The cut distance takes measurable subsets S, T of [0, 1], forming “boxes” S × T (hence the □), which are similar to sets of vertices, then maximizes the difference between the two graphons integrating over the box S × T, in a sense, counting the difference in the number of edges between the sets of vertices. Then, we take the infimum over all measure preserving maps to ensure that weakly isomorphic graphons have cut distance 0. In Section 4, we have seen that a graph can easily be converted into a graphon. Thus, we define the cut distance for graphs by simply converting the graph into a graphon and using the graphon cut distance. Definition 5.17. Let F, G be graphs. Define cut distance δ□between F and G by (5.18) δ□(F, G) = δ□(WF , WG). 6. Convergence of Dense Graphs We have seen throughout the paper that graphs do not have to be identical or even a few edits away to be close structurally. Instead, we have looked at sampling and homomorphism densities. We thus define the convergence of a sequence of dense graphs (Gn) with v(Gn) →∞by combining these approaches. We use induced homomorphism densities, in a sense “sampling” induced subgraphs. 14 VALERIE HAN Definition 6.1. A sequence (Gn) with v(Gn) →∞converges if its induced sub-graph densities tind(F, Gn) converge for all finite graphs F. It may sometimes be more convenient to use t(F, Gn) or tinj(F, Gn) instead. Be-cause of the formulas relating the different densities, we have the following theorem. Theorem 6.2. Let (Gn) be a sequence of graphs with v(Gn) →∞. The following are equivalent: (i) tind(F, Gn) converges for all finite graphs F. (ii) tinj(F, Gn) converges for all finite graphs F. (iii) t(F, Gn) converges for all finite graphs F. Proof. We first show that (ii) is equivalent to (i). By (3.17) and (3.18), we can express the induced subgraph densities as a linear combination of the subgraph densities and vice versa. Thus, tind(F, Gn) converges for all finite graphs F if and only if tinj(F, Gn) does. Now we show that (iii) is equivalent to (ii). By the inequalities relating tinj(F, Gn) and t(F, Gn) in (3.19) and (3.20), we have that as n →∞, t(F, Gn) converges if and only if tinj(F, Gn) converges. □ The following theorems follow a long chain of results and are difficult to prove. For these reasons, we omit the proofs and direct the interested reader to the rele-vant sections in the textbook. We have defined convergence of a graph sequence by the convergence of the sequence’s homomorphism densities to some number. In the following theorem, we give the significance of such numbers in terms of our limit object, the graphon. Theorem 6.3. Let (Gn) be a sequence of simple graphs. If (Gn) converges, there exists a graphon W such that t(F, Gn) →t(F, W) for all simple graphs F. 
For the proof, see either Theorem 11.21 in or the original proof by Lovász and Szegedy in .

Given that the limits of each of a graph sequence's homomorphism densities, which form the basis of the definition of convergence, coincide with a graphon's homomorphism densities, we now define the graphon as the limit object.

Definition 6.4. A graphon $W$ is the limit of a convergent sequence $(G_n)$ of simple graphs if $t(F, G_n) \to t(F, W)$ for all simple graphs $F$.

However, we note here that the limit is not unique. Indeed, any weakly isomorphic graphons will result in the same homomorphism densities. Fortunately, this is the only weakness in the uniqueness; the limit is unique up to weak isomorphism, following from Proposition 4.5. This is sufficient for our purposes because graphons which are weakly isomorphic have the same probability distributions on graphs obtained from the graphons, unsurprising given that the graphon homomorphism densities are identical.

Although we have four different ways to prove convergence, the first three are not truly different ways of proving convergence so much as they are applying the same method to slightly different quantities, and the fourth requires an explicit limit, which is often hard to determine. The theorem below allows us to instead use the cut distance to prove convergence.

Theorem 6.5. Let $(G_n)$ be a sequence of simple graphs with $v(G_n) \to \infty$. Then, $(G_n)$ converges if and only if it is a Cauchy sequence in the metric $\delta_\square$.

For a proof of this, see either Theorem 11.3 in or the original proof by Borgs, Chayes, Lovász, Sós and Vesztergombi in and .

We have a similar theorem for a sequence of graphons. The graphon thus is truly the completion of the space of graphs; a convergent sequence of graphons has a graphon as its limit and does not require the construction of a different limit object.

Theorem 6.6. Let $(W_n)$ be a sequence of graphons such that every $W_n$ takes values only in $[0, 1]$, and likewise, let $W$ be a graphon that takes values only in $[0, 1]$. Then $t(F, W_n)$ converges for all finite simple graphs $F$ if and only if $(W_n)$ is a Cauchy sequence in the cut distance. Furthermore, $t(F, W_n) \to t(F, W)$ for all finite simple graphs $F$ if and only if $\delta_\square(W_n, W) \to 0$.

See Theorem 11.5 in for the proof.

The following theorem provides yet another way to prove convergence: showing that the cut distance between the limit graphon and the graphs in the sequence approaches 0 as $n \to \infty$.

Theorem 6.7. A graphon $W$ is the limit of a graph sequence $(G_n)$ if and only if $\delta_\square(W_{G_n}, W) \to 0$.

See Theorem 11.22 in for the proof.

In fact, there is a similar result for sampling distance. Note that one direction is easy to prove. If a sequence of graphs converges, the induced homomorphism densities converge, forming a Cauchy sequence. Using (5.14), then, the sampling distance approaches 0 as $n \to \infty$. The other direction also holds but is less straightforward to prove, because it does not follow from (5.14) that the sampling distance tending to 0 implies that the difference in induced homomorphism densities converges to 0.

As mentioned in the Introduction, the graphon only serves as a limit object for dense graphs. The graphon is not a meaningful limit object for sparse graphs. Aside from the definition of convergence relying on homomorphism densities, which we have already explained are not significant for sparse graphs, the graphon itself is not helpful for gleaning information about a sequence of sparse graphs. Indeed, consider a sequence of sparse graphs $(G_n)$ as $v(G_n) \to \infty$.
Then, the number of neighbors of each node is bounded above by some fixed $b$. As $v(G_n) \to \infty$, the graphon constructed from such graphs will become the graphon $W(x, y) = 0$ for any sparse graph sequence, making it useless as a limit object for such a sequence.

7. Examples of Convergence to Graphons

Finding the graphon limit to a sequence of graphs can be difficult. However, here are some intuitive examples of sequences of graphs that converge to a graphon.

Example 7.1 (Random p graphs). Let $(G(n, p))$ be a sequence of graphs formed by forming an edge between any two nodes with probability $p$. Because edges occur with probability $p$ in $(G(n, p))$, we may guess the limit graphon to be $W(x, y) = p$. Note that this is one of the few examples where it is easy to calculate the homomorphism density. Consider any graph $F$. Then, the graphon homomorphism density
$$t(F, W) = \int_{[0,1]^{v(F)}} \prod_{ij \in E(F)} p \prod_{i \in V(F)} dx_i \qquad \text{(by Definition 4.6)}$$
$$= \int_{[0,1]^{v(F)}} p^{e(F)} \prod_{i \in V(F)} dx_i = p^{e(F)}.$$
With probability 1, the limit of the graph homomorphism densities $t(F, G(n, p))$ will also be $p^{e(F)}$ because the probability of a map being a homomorphism as $n \to \infty$ is determined by the probability that an edge in $F$ is also an edge in $G_n$. Since edges occur with probability $p$, we have that the density is $p^{e(F)}$.

Example 7.2 (Complete bipartite graphs). A complete bipartite graph $K_{n,n}$ is a bipartite graph (a graph where $V(K_{n,n})$ is decomposed into two disjoint sets of size $n$ such that no pair of vertices within the same set are adjacent) such that every pair of vertices in different sets are adjacent. Let $(K_{n,n})$ be a sequence of complete bipartite graphs. Consider the two disjoint node sets of size $n$. Label the nodes in one set $1, \dots, n$ and the nodes in the other set $n+1, \dots, 2n$. Then, every node in the first set is adjacent to every node in the second, and no two nodes in the same set are adjacent. Then, every graphon representation $W_{K_{n,n}}$ is
$$\mathbb{1}(0 \le x \le \tfrac{1}{2} \le y \le 1) + \mathbb{1}(0 \le y \le \tfrac{1}{2} \le x \le 1),$$
where $\mathbb{1}$ denotes the indicator function. Then, $(K_{n,n})$ converges to the graphon $W(x, y) = \mathbb{1}(0 \le x \le \tfrac{1}{2} \le y \le 1) + \mathbb{1}(0 \le y \le \tfrac{1}{2} \le x \le 1)$.

Figure 6. Graph $K_{6,6}$, its graphon $W_{K_{6,6}}$, and the limiting graphon, which is the same graphon as $W_{K_{6,6}}$.

Example 7.3 (Simple threshold graphs). A simple threshold graph is defined on the set $\{1, \dots, n\}$ by connecting $i$ and $j$ if and only if $i + j \le n$. Consider a sequence of simple threshold graphs $(G_n)$. Note that when we convert $G_n$ to a graphon, we obtain $W_{G_n} = \mathbb{1}(x + y \le 1 + \tfrac{1}{n})$. Thus, as $n \to \infty$, $(G_n)$ converges to the graphon $W(x, y) = \mathbb{1}(x + y \le 1)$.

Acknowledgments

I'd like to thank my mentor Dylan Quintana for his patience and insight in answering my questions and reviewing my paper, as well as for his help in finding me a topic that interested me. I would also like to thank Professor Peter May for organizing the program.

References

László Lovász. Large networks and graph limits. American Mathematical Society. 2012.
Daniel Glasscock. WHAT IS ... a Graphon?
Christian Borgs, Jennifer Chayes, László Lovász, Vera Sós, Balázs Szegedy and Katalin Vesztergombi. Graph Limits and Parameter Testing. STOC 38. 2006. 261-270.
Christian Borgs, Jennifer Chayes, László Lovász, Vera Sós, Balázs Szegedy and Katalin Vesztergombi. Convergent Graph Sequences I: Subgraph frequencies, metric properties, and testing. Advances in Math. 2008. 1801-1851.
László Lovász and Balázs Szegedy. Limits of dense graph sequences. J. Combin. Theory Ser. B 96. 2006. 933-957.
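To make Example 7.1 above concrete, the following short sketch (an illustration written for this exposition, not code from the references; the parameters n, p and the trial count are arbitrary choices) samples a $G(n, p)$ graph and estimates its triangle homomorphism density, which should approach the predicted limit $p^{e(K_3)} = p^3$ as $n$ grows.

```python
# Illustration of Example 7.1 (not from the references): estimate t(K3, G(n, p))
# by sampling random triples of nodes and compare with the limit p**3.
import random

def sample_gnp(n, p):
    """Random graph on {0,...,n-1} with independent edge probability p."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].add(j); adj[j].add(i)
    return adj

def triangle_density(adj, trials=200000):
    """Fraction of maps V(K3) -> V(G) (repeats allowed) that send edges to edges."""
    n, hits = len(adj), 0
    for _ in range(trials):
        i, j, k = random.randrange(n), random.randrange(n), random.randrange(n)
        if j in adj[i] and k in adj[j] and i in adj[k]:
            hits += 1
    return hits / trials

random.seed(0)
p = 0.3
print(triangle_density(sample_gnp(300, p)), p**3)  # both values land near 0.027
```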
Fabrication, Structure and Mechanical and Ultrasonic Properties of Medical Ti6Al4V Alloys Part I: Microstructure and Mechanical Properties of Ti6Al4V Alloys Suitable for Ultrasonic Scalpel

Materials (Basel). 2020 Jan 19;13(2):478. doi: 10.3390/ma13020478

Zheyu He 1,2, Hao He 2, Jia Lou 3, Yimin Li 1,2, Dongyang Li 1, Yongzhi Chen 1, Shaojun Liu 1

1 State Key Laboratory for Powder Metallurgy, Central South University, Changsha 410083, China
2 Research Center for Materials Science and Engineering, Guangxi University of Science and Technology, Liuzhou 545006, China
3 School of Materials Science and Engineering, Xiangtan University, Xiangtan 411105, China
Correspondence: [email protected] (H.H.); [email protected] (S.L.)

Received 2019 Dec 12; Accepted 2020 Jan 16; Collection date 2020 Jan. © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. PMCID: PMC7013957. PMID: 31963880.

Abstract

Ti6Al4V alloy has been considered as a key component used in ultrasonic scalpels. In this series of papers, the fabrication, structure, and mechanical and ultrasonic properties of medical Ti6Al4V alloys suitable for ultrasonic scalpels are studied systematically. These alloys have a low elastic modulus and present a typical bimodal microstructure with relatively high β phase content (~40%) and lamellar α thickness of ≤ 0.9 µm. In the first paper, the relationship between the microstructure and mechanical properties of hot-rolled Ti6Al4V alloys treated by heat treatment is discussed. In the second paper, the dependence of the ultrasonic properties on the microstructure of the heat-treated Ti6Al4V alloys is reported. With increasing solid solution temperature, the content and size of the primary α phase decrease. In contrast, the content and size of the lamellar α phase increase. Additionally, the β phase content first increases and then decreases. The microstructure of Ti6Al4V alloys could be slightly changed by aging treatment. When the solid solution treatment temperature increases to 980 °C from 960 °C, the average size of the lamellar α phase in the alloys increases by 1.1 µm. This results in a decrease in the average yield strength (93 MPa). The elastic modulus of the alloys is mainly controlled by the β phase content. The alloys solution-treated at 960 °C show the highest β phase content and the lowest average elastic modulus of 99.69 GPa, resulting in the minimum resonant frequency (55.06 kHz) and the highest average amplitude (21.48 µm) of the alloys at the length of 41.25 mm.

Keywords: medical Ti6Al4V alloys, ultrasonic scalpel, heat treatment, ultrasonic properties, tensile strength, elastic modulus

1. Introduction

Since the 1990s, ultrasonic surgery has become widely used in the field of biomedicine. The development of an efficient, precise, and flexible ultrasonic scalpel (UAS) has become a hot topic ever since. The ultrasonic scalpel system consists of three parts: the ultrasonic generator, transducer, and scalpel. Ultrasonic vibrations generated by the transducer excite a resonant horn and provide the required oscillatory displacement at the tuned ultrasonic frequency in the scalpels. The scalpel is a direct-acting workpiece, which requires that it present not only good biocompatibility and mechanical properties, but also high ultrasonic energy utilization and large amplitude displacement. Annealed Ti6Al4V is commonly used for ultrasonic scalpels due to its high elastic strain limit, low acoustic attenuation, and biocompatibility.
Recent studies [4,5,6,7] show that there is a close relationship between the ultrasonic parameters and the microstructure of titanium alloys. Wilkie et al. fabricated five distinct kinds of the microstructure of Ti6Al4V alloys ultrasonic scalpel through heat treatment. They further concluded that the samples with the equiaxial structure show better acoustic properties and higher acoustic attenuation than the alloys with fully lamellar samples. Lobkis et al. reported that the ultrasonic properties of the Ti-based alloys show a strong frequency-dependence of backscattering on the microstructure. Hector et al. showed that the ultrasonic attenuation was different between Widmanstätten and equiaxed microstructures of Ti6Al4V alloy, which was mainly attributed to the scattering loss due to the precipitation of Ti 3 Al particles homogeneously distributed in the α phase. Bhattacharjee et al. studied the ultrasonic attenuation of near-α titanium alloy and found that medical Ti alloys with larger micro-textured microstructure present higher attenuation than those with smaller micro-textured microstructure. It is well known that the microstructure of titanium alloys can be influenced significantly by the composition, processing, and heating treatment [8,9,10]. For Ti6Al4V alloys, various microstructures can be obtained by different heat treatments based on the β-transus temperature [11,12,13]. Matsumoto et al. [14,15,16,17] fabricated ultrafine-grained structure (UFG) of Ti6Al4V alloys through different heat treatment at 800 °C. Although the UFG alloys exhibit low-temperature superplasticity, the plasticity at room temperature is still very poor (~1% elongation). Ren et al. obtained a bimodal microstructure of Ti6Al4V alloys by annealing below the β-transition point. Further investigation showed that the plasticity and strength of alloys are closely related to the content of the primary α phase. In contrast, Peng et al. reported on the fabrication of the Widmannstatten structure of Ti6Al4V-DT alloys by annealing in the β phase region. It was found that the strength and plasticity of alloys decrease with increasing test temperature. Obviously, the heat treatment is an effective way to tailor the microstructure of the alloys. Additionally, as for two-phase titanium alloys, factors that influence the mechanical properties include the volume fraction of transformed β phase, size of prior β grain, primary α phase, secondary α phase, etc. [20,21]. However, conclusions made by different researchers often are inconsistent. Peng et al. found that the strength of Ti6Al4V alloys is related to the primary α phase content in the alloys. In contrast, Guo et al. concluded that the tensile strength of alloys has very low dependence on the primary α phase content. However, the dependence of the tensile ductility on the primary α content is strong. In contrast, Vrancken et al. showed that the key factor affecting the strength of Ti6Al4V alloys is the size of α and β phases. It is found that the σ y and UTS of Ti6Al4V alloys decrease while the fine needle-like α phase changes into a coarser mixed structure of α + β phases. Niinomi et al. found that the strength of Ti6Al4V alloys increases as the size of β grain increases. The ultrasonic properties of medical Ti alloys can be significantly improved by adjusting the microstructure of Ti alloys. However, very little attention has been paid to the influence of the fabrication processing and structure on the ultrasound properties of the Ti6Al4V alloys. 
There is still a lack of systematic studies on the relationship between the phase content, size, and mechanical and ultrasonic properties of Ti6Al4V alloy. These two papers aim to prepare Ti6Al4V alloys with significantly different microstructure combining the solid solution with aging treatment. Special attention is paid to the clarification of the mechanisms that are helpful for developing low-cost and highly efficient flexible ultrasonic scalpel (FUS) tools. In the first paper, special attention is paid to the relationship between the microstructure and mechanical properties of the Ti6Al4V alloys. In the second paper, further study will focus on the influence of microstructure on ultrasonic properties. 2. Experiments Table 1 lists the chemical composition of the Ti6Al4V alloys in the present investigation. As shown in Figure 1, the β and α phase transus temperature of the Ti6Al4V alloys are 970.2 °C and 598.1 °C, respectively. Cylinder samples (Ø8.5 × 60 mm) cut from the as-received round bar and their microstructure are shown in Figure 2. In general, the microstructure of Ti6Al4V alloys contains a large number of the α phase and a small number of the β phase evenly distributed between the α phases. However, the microstructure along the axial and radial direction is quite different. As shown in Figure 2a,b, when it is viewed from the radial direction, a large number of the equiaxial α grain with particle size ~5–10 µm can be observed, and the α phase is interspersed with the transformed β phase. In contrast, when it is viewed from the axial direction, the microstructure has an elongated α phase and intergranular transformed β phase, as shown in Figure 2c,d, respectively. The difference between the radial and axial directions of the microstructure of as-received specimens is due to the hot-rolling process. However, since the ultrasonic scalpel longitudinal vibration along the axial in the clinics, this study mainly focuses on the microstructure and properties of the Ti6Al4V alloys along the axial direction. Table 1. Chemical composition of as-received Ti6Al4V alloy. | Ti | Al/% | V/% | Fe/% | C/% | N/% | H/% | O/% | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | Reminder | 6.05 | 4.61 | 0.25 | 0.08 | 0.05 | 0.012 | 0.18 | Open in a new tab Figure 1. Open in a new tab DSC-TG graph of as-received Ti6Al4V alloy. Figure 2. Open in a new tab Microstructure of as-received Ti6Al4V alloy (a,b) viewed from the radial direction and (c,d) viewed from the axial direction. Heat treatment typically includes solution treatment and aging treatment, as shown in Table 2. The solution treatment is in a range from 920 °C to 980 °C for 1 h, followed by air cooling (AC, the cooling rate at about 0.5 °C/s) to achieve different microstructures. The solid solution specimen treated at 960 °C was further subjected to additional aging treatment for 2 h in a range from 600 °C to 750 °C, and then air cooling. Fifteen samples were used for each heat treatment condition for the subsequent tests. The as-received and heat-treated specimens were ground with SiC papers and then polished with a SiO 2 pyrolysis suspension and etched with reagent 10% HF + 40% HNO 3 + 50% H 2 O solution for 5–10 s at room temperature. The microstructure of Ti6Al4V alloys was observed by Leica DM2700M optical microscope (Leica Microystems, Weztlar, Germany) and further analyzed by VEGA3 LMH/LMU scanning electron microscope (TESCAN ORSAY HOLDING, a.s., Brno, Czech Republic). Table 2. 
Heat treatment scheme of various specimens. | Specimen No. | Heat Treatment | | :---: | :---: | | A | 920 °C, 1 h, AC | | B | 940 °C, 1 h, AC | | C | 960 °C, 1 h, AC | | D | 980 °C, 1 h, AC | | E | (960 °C, 1 h, AC) + (600 °C, 2 h, AC) | | F | (960 °C, 1 h, AC) + (650 °C, 2 h, AC) | | G | (960 °C, 1 h, AC) + (700 °C, 2 h, AC) | | H | (960 °C, 1 h, AC) + (750 °C, 2 h, AC) | Open in a new tab The macro and micro texture of the α phase was measured by the D8 Bruker XRD system (Bruker AXS, Oestliche, Germany) and NordlysNano SEM (Oxford Instruments plc, Abingdon, UK) with an Oxford EBSD (Electron Backscattered Scattering Detection) detector, respectively. The specimens for EBSD were prepared by mechanically grounding and polishing, and then electro-polished in a solution of 6% perchloric acid, 34% butarol, and 60% carbinol at 45 V and −40 °C for 15 s. The EBSD testing angle is 70°, the accelerating voltage and acquisition speed of the test are 20 kV and 11.27 Hz, respectively. The EBSD orientation maps, pole figure and inverse pole figure maps were analyzed by HKL Channel 5 software (Channel 5 11 b-win7 32). The α and β content and their size of each in Ti6Al4V alloys were further analyzed by Image J (v 1.8.0) software according to the GB/T 6394-2017 specification . Samples were cut from the as-received and heat-treated specimens along the rolling direction and machined into the tensile specimens. Then subjected to tensile tests using the WDW-100G computerized Instron testing system with the maximum testing force of 100 kN at a strain rate of 0.01 s−1 according to the GB/T 228.1-2010 specification . The results of tensile tests were taken from 3 tensile specimens in order to guarantee the reliability of experimental results. The elastic modulus was determined by DTM-II Dynamic Elastic Modulus Tester (Xiangtan Instrument Co., Ltd., Xiangtan, China) according to the GB/T 22315-2008 specification . The tensile strength, elastic modulus, and elongation of the as-received Ti6Al4V alloys are 986.42 MPa, 110.18 GPa, and 15.5%, respectively. 3. Results and Discussion 3.1. Microstructure and Mechanical Properties after Solution Treatment Figure 3 shows the microstructure of the Ti6Al4V alloys along the axial direction after different solution treatments. After solid-solution treatment, the microstructure contains primary α, lamellar α, and transformed β phase. It is observed that the primary α phase in the Ti6Al4V alloys treated by solid solution treatment at 920 °C loses its orientation and evolves into an equiaxial shape. When the solid solution treatment temperature increases, the microstructure of as-treated Ti6Al4V alloys gradually transforms from equiaxial to bimodal. The microstructure of the as-treated specimen at 980 °C that is above the β transus temperature presents the Widmannstatten structure characteristics, as shown in Figure 3d. This is consistent with the reported solid solution treatment parameters obtaining two-phase Ti6Al4V alloys with different microstructures . Figure 3. Open in a new tab Metallographic image of microstructure obtained by different solution treatments (a) 920 °C × 1 h, AC; (b) 940 °C × 1 h, AC; (c) 960 °C × 1 h, AC; and (d) 980 °C × 1 h, AC. The phase composition and grain size of treated Ti6Al4V alloys are shown in Figure 4. As the solid solution treatment temperature increases, the content and grain size of the primary α phase in the alloys decrease. 
However, a reverse trend is observed for the lamellar α phase in the treated alloys where the prior β grain grows and the β phase content first rises and then falls. A maximum value of 40.2% is observed when the alloys are treated by solid solution treatment at 960 °C. It was reported that the higher the solid solution treatment temperature, the lower the equiaxial α content and the larger the lamellar α thickness. These observations are consistent with the present investigation. It is well known that the Ti6Al4V alloys exhibit an α phase (hcp) at low temperature, which can transform into a β phase (bcc) at elevated temperature . The phase transformation during heating (α~β) and cooling (β~α) is governed by the so-called Burgers orientation relationship {0 0 0 2}α || {1 1 0}β and <1 1 -2 0>α || <1 1 1>β . There are 12 possible α orientations that can transform from a single parent β grain during β~α phase transformation while cooling so that the microstructure of alloys exhibit different shapes of α phase (equiaxial, lamellar et al.) and residual β phase after solid solution treatment. During heating below the β transus (970.2 °C in this article), the higher the solution temperature is, the more α phase is transformed into β phase. This makes the residual β phase reach the maximum value of about 40% in 960 °C solid solution treated specimen by air cooling. However, when the solid solution temperature exceeds the β transus temperature, the α phase completely transforms into the β phase. There is sufficient time for metastable β phase transformed into lamellar α during air cooling, resulting in a decrease in the residual β phase content and the further increase in the thickness of lamellar α. Therefore, as the solution temperature rises from 960 °C to 980 °C, the β phase content decreases from 40.2% to 25.2%, while the content and thickness of lamellar α phase increase to 47.6% from 23.7% and to 1.63µm from 0.836 µm, respectively. Figure 4. Open in a new tab Microstructure statistics (a) volume fraction of α, β phase under different solution temperature and (b) grain size and thickness of α, β phase under different solution temperature. Figure 5 shows the dependence of the mechanical properties of Ti6Al4V alloys treated by solid solution treatment on the microstructures that have been shown in Figure 3. It was observed that the tensile strength of the alloys treated below the β transus temperature (920–960 °C) is significantly higher than those treated in temperature above 980 °C. The alloys by solid solution treatment at 960 °C have the lowest average elastic modulus (99.69 GPa). In contrast, the plasticity of the alloys does not show strong microstructure dependence. As expected, these results clearly show that the content and size of the α and β phases can significantly affect the mechanical properties of the alloys with different solid solution treatment temperature. Additionally, the curve in Figure 4 further implies that the lamellar α phase rather than the primary α phase and residual β phase is the main factor affecting the tensile strength of the alloys. The primary α phase seems to have little effect on the strength of the alloy. This agrees with the reported results . Niinomi et al. proposed that the yield strength of β type alloys increased with increasing prior β grain size and attributed the increase of the yield strength to the effect of precipitated α rather than β grain size. 
However, the results in the present study show that the strength of the alloys decreases with the mobility of the prior β phase grain boundaries, which is consistent with the reported results . It is known that under the air-cooling condition, the growth of β grains accompanies a full growth of the lamellar α phase in the Ti6Al4V alloys. As the thickness of the lamellar α phase increases, the α/β phase boundaries and the dislocation slip resistance decreases . This subsequently makes the dislocations difficult to pile up and decrease the strength of as-treated alloys. Similar mechanisms have been proposed by Donachie et al. that the strength of titanium alloys has a strong dependence on the number and fineness of the α/β phase boundaries. Figure 5. Open in a new tab Mechanical properties of microstructures obtained by different solution treatments. Figure 6a,b are the BSE (Back scattered Electron Imaging) images of the microstructures of the Ti6Al4V alloys treated by solid solution treatment at 940 °C and 980 °C, respectively. The bright and dark regions denote the β phase and α phase, respectively. As shown in Figure 6a,b, it is obvious that the thickness of the lamellar α phase in the Ti6Al4V alloys increases significantly after the solution treatment at 980 °C. The average thickness of the lamellar α phase increases by 1.09 µm for the alloys treated at 980 °C comparing with the alloys treated at 920 °C. It can be determined that the average tensile strength of the specimen solution-treated at 920 °C decreases significantly from 984.11 MPa to 878.15 MPa of the specimen solution-treated at 980 °C. It is ascribed to the large increase in the content and thickness of lamellar α, roughly 140% and 202% respectively. However, the elongation of the alloys just changes slightly and remains about 15%, a value that is consistent with the reported ones (13.25%~17.75%). In contrast, the elastic modulus of the alloys first decreases, and it subsequently increases with increasing solution temperature, a trend that is consistent with that of the residual β phases shown in Figure 4a. Several reports [30,31,32] have shown that the elastic modulus of the β phases in the titanium alloys is nearly 80 GPa, which is lower than that of the α phase (~120 GPa). Therefore, the increase of the β phase content in titanium alloys can reduce the elastic modulus of the titanium alloys. After solution treatment at 960 °C, the average elastic modulus of the alloys is 99.69 GPa, lower than other solid solution treated specimens, which results from the maximum β phase content (40.2%) in the Ti6Al4V alloys. Figure 6. Open in a new tab BSE image of microstructures obtained by different solid solution treatment: (a) 940 °C ×1 h, AC; and (b) 980 °C × 1 h, AC. It is noticed that the microstructure of the as-received Ti6Al4V alloys (Figure 2) and their counterparts treated by solid solution treatment at 920 °C (Figure 3a) presents obvious differences. For example, the shape of primary α phases in these two alloys are elongated and equiaxed; the β phase content in these alloys is 8.38% and 15.1% respectively, which is significantly different as well. However, the elastic modulus of the alloys with significant different microstructure is almost the same (~110 GPa), which does not decrease significantly with the increase of the β phase content in the alloys. It is possible that the orientation of the α phase could play an important role in affecting the elastic modulus of the Ti6Al4V alloys. 
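The β-content effect on elastic modulus described above can be illustrated with a simple linear rule-of-mixtures estimate. This is not a calculation from the paper: the linear (Voigt-type) mixing assumption is introduced here only for illustration, using the approximate phase moduli quoted in the text (β ≈ 80 GPa, α ≈ 120 GPa) and the measured β fractions, and it ignores the texture contribution discussed next.

```python
# Hedged illustration (not from the paper): linear rule-of-mixtures estimate of
# elastic modulus from the beta phase volume fraction, with the approximate
# single-phase moduli quoted in the text (E_beta ~ 80 GPa, E_alpha ~ 120 GPa).
E_ALPHA, E_BETA = 120.0, 80.0  # GPa, approximate values from the cited reports

def modulus_estimate(beta_fraction: float) -> float:
    """Voigt-type (upper bound) mixing estimate; texture effects are ignored."""
    return beta_fraction * E_BETA + (1.0 - beta_fraction) * E_ALPHA

# beta fractions reported for the 920, 980 and 960 degC solution treatments
for f in (0.151, 0.252, 0.402):
    print(f"beta fraction {f:.3f} -> E ~ {modulus_estimate(f):.1f} GPa")
# The 40.2% beta case gives ~103.9 GPa, in the neighbourhood of the measured
# 99.69 GPa for the 960 degC specimen; the remaining gap reflects alpha-phase
# texture and the crudeness of the linear mixing assumption.
```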
It was reported that the α phase has strong elastic anisotropy in two-phase titanium alloys. Therefore, it is believed that both the orientation of the α phase and β phase content could determine the elastic modulus of the Ti6Al4V alloys. To further analyze the orientation of the α phase, the texture of the α phase in the as-received and the solution-treated Ti6Al4V alloys are observed by EBSD and X-ray diffraction analysis, respectively, as shown in Figure 7 and Figure 8. Figure 7a–d show the morphology of the rolling plane, and the IPF sheet and pole figure of as-received alloys respectively. It is obvious that there are {0 0 0 1} textures of α phases in the as-received alloys as shown in Figure 7c,d. This observation is consistent with the calculation pole figure as shown in Figure 8a. However, the textures of α phases in the alloys gradually disappear, after solid solution treatment at a temperature above 920 °C, as shown in Figure 8b,c. These results are consistent with those shown in Figure 2 and Figure 3 in which the primary α phases of as-received alloys transformed into equiaxed shapes from elongated shapes after solid solution treatment. Figure 7. Open in a new tab The EBSD results showing the microstructure and texture distribution of the α phase of the as-received alloys: (a,b) inverse pole figure (IPF) map from RD and TD direction; (c) inverse pole figure (IPF) sheet and (d) pole figure (PF). Figure 8. Open in a new tab Pole diagram of different microstructure: (a) as-received Ti6Al4V alloy; (b) 920 °C × 1 h, AC solid solution treatment; and (c) 940 °C ×1 h, AC solid solution treatment. It has been pointed out that the c:a of the HCP titanium (α phase) is 1.59, which is less than the ideal value (1.63) because the a is longer, resulting into the atomic distance in the basal plane (0001) increase and the interatomic forces decrease, respectively. The elastic modulus of this plane therefore decreased. The {0 0 0 1}α textures appear in the as-received alloys, which means that the {0001}α (showing lower elastic modulus) parallel to the rolling plane (test direction) is beneficial to reduce the elastic modulus of the Ti6Al4V alloys. As shown in Figure 8a,b, the α phase textures of the as-received Ti6Al4V alloys disappear after solid solution treated at 920 °C, which might lead to an increase in the elastic modulus of the alloys. However, the residual β phase content gradually increases with the solid solution temperature increased to 920 °C and cause a decrease of elastic modulus of the alloys. These two factors together lead to the elastic modulus of as-received alloys and the alloys solid solution treated at 920 °C is basically the same. 3.2. Microstructure and its Mechanical Properties after Aging Treatment To further analyze the influence of the α and β phase content and their size on the mechanical properties of the Ti6Al4V alloys, the specimen subjected to solution treatment at 960 °C was further treated by aging treatment. The microstructure of the alloys after aging treatment consists of primary α phase, lamellar α phase, and residual β phase, as shown in Figure 9. The microstructure of the Ti6Al4V alloys still retains the bimodal characteristics after aging at different temperatures, with no significant difference in morphology. After aging treatment, the content of the primary α phase (~39%) increases slightly compared with the alloys (~36%) treated by the solid solution treatment at 960 °C. 
At the same time, the average size of the primary α grain of the alloys after aging-treated at 600 °C increases to 1.78 µm compared with the specimen solution-treated at 960 °C. Statistical data on the content and size of each phase in the aging microstructure is shown in Figure 10. It is stressed that when the aging temperature increases to 750 °C from 600 °C, the volume fraction and size of the primary α phase remains unchanged at about 39% and 11 µm. These results show that in a certain aging temperature range, the volume fraction and size of the primary α phase in the Ti6Al4V alloys have no significant dependence on the aging temperature, a conclusion that is consistent with the observation that the volume fraction of the primary α is mainly affected by the solution temperature . In contrast, when the aging temperature increases to 750 °C from 600 °C, the thickness of the lamellar α phase increases and the content of the β phase decreases slightly. The average thickness of the lamellar α phase increases to 1.17 µm from 0.836 µm. It is possible that the enhanced aging temperature is favorable to the phase transformation of the metastable β phase into the lamellar α phase. Figure 9. Open in a new tab Microstructure under the different solution treatments followed by aging. (a) 960 °C × 1 h, AC + 600 °C × 2 h, AC; (b) 960 °C × 1 h, AC + 650 °C × 2 h, AC; (c) 960 °C × 1 h, AC + 700 °C × 2 h, AC; and (d) 960 °C × 1 h, AC + 750 °C × 2 h, AC. Figure 10. Open in a new tab Microstructure statistics (a) volume fraction of α, β phase under different aging temperature and (b) grain size and thickness of α, β phase under different aging temperature. After aging treatment, the plasticity of the specimen does not change much compared with the alloys treated by the solid solution treatment at 960 °C, and the elongation remains at about 16%, while the tensile strength and the elastic modulus of the alloys increase. One possible reason is that the metastable β decomposed into the α phase during the aging process, which can enhance the tensile strength of the alloys. The decrease in residual β leads to an increase in the elastic modulus of the alloys. The mechanical properties of the alloys after aging treatment are shown in Figure 11. The results show that the tensile strength of the specimens slightly decreased from 1009.72 MPa to 956.3 MPa with the increase in aging temperature (from 600 °C to 750 °C), which may be attributed to the increase in the content and thickness of lamellar α, roughly 17% and 40%, respectively. The growth of lamellar α resulting in a slight decrease in the α/β phase boundary, hence, the strength of the specimen after aging treatment was not significantly reduced. As shown in Figure 12, the trend of growth of the lamellar α phase with increasing aging temperature further shows that the lamellar α is the main factor affecting the strength of the Ti6Al4V alloy. Figure 11 also shows that as the aging temperature rose from 600 °C to 750 °C, the elongation can be maintained at 16.5 ± 0.7%. At the same time, the elastic modulus increased by about 3 GPa, which is attributed to the further decrease (~5%) of the residual β phase. Figure 11. Open in a new tab Mechanical properties of microstructures obtained by different aging treatments. Figure 12. Open in a new tab BSE image of microstructure under the different solution treatments followed by aging (a) 960 °C × 1 h, AC + 600 °C × 2 h, AC; (b) 960 °C × 1 h, AC + 750 °C × 2 h, AC. 4. 
Conclusions Given the strong dependence of ultrasonic properties on the microstructure of Ti6Al4V alloys used as an ultrasonic scalpel, in this paper, the effects of solid solution and aging treatment on the microstructure and mechanical properties of Ti6Al4V alloys were studied; the following conclusions can be drawn: The content and size of the primary α phase of the Ti6Al4V alloy decrease with an increase in solid solution temperature. The lamellar α phase exhibited the opposite trend, i.e., the β phase content first increases and then decreases. The increase in the aging temperature has little effect on the content and size of the primary α phase but causes a slight increase in the thickness of the lamellar α phase. For the solution and aging-treated Ti6Al4V alloys, the main factor affecting the tensile strength is lamellar α. As the solid solution temperature increases from 960 °C to 980 °C, the average thickness of lamellar α increased by 1.09 µm, and the average yield strength decreased by 93 MPa. As aging temperature increases from 600 °C to 750 °C, the thickness of lamellar α slightly increases by 0.33 µm and causes the yield strength of the specimens remained at about 900 MPa. The elastic modulus of the Ti6Al4V alloy is mainly controlled by the texture of the α phase and content of the β phase. However, the β phase content is the main factor affecting the elastic modulus of the alloys treated by solution and aging treatment. Specifically, the specimen’s solid solution treated at 960 °C has the highest residual β content, hence the average elastic modulus is the lowest at 99.69 GPa. Additionally, the elastic modulus of the alloys after aging treatment remained at about 105 GPa, which is attributed to the stable residual β phase content. Acknowledgments The use of facilities in State Key Laboratory for Powder Metallurgy at Central South University is acknowledged. Author Contributions Conceptualization, S.L. and Y.L.; methodology, Z.H.; software, Z.H.; validation, Z.H., D.L. and J.L.; formal analysis, S.L.; investigation, Z.H.; resources, Y.L.; data curation, Y.C.; writing—original draft preparation, Z.H.; writing—review and editing, Z.H.; visualization, D.L.; supervision, Y.L.; project administration, H.H.; funding acquisition, H.H. All authors have read and agreed to the published version of the manuscript. Funding This research was funded by the National Natural Science Foundation of China (Grant No. 51804271), Guangxi Science and Technology Plan Project (Grant No. 2017GXNSFBA198187 and 2018GXNSFAA281237). Conflicts of Interest The authors declare no conflict of interest. References 1.Chen Y., Zhou Z., Zhang G. Effects of different tissue loads on high power ultrasonic surgery scalpel. Ultrasound. Med. Biol. 2006;32:415–420. doi: 10.1016/j.ultrasmedbio.2005.12.012. [DOI] [PubMed] [Google Scholar] 2.Gavin G.P., McGuinness G.B., Dolan F., Hashmi M.S.J. Performance characteristics of a therapeutic ultrasound wire waveguide apparatus. Int. J. Mech. Sci. 2007;49:298–305. doi: 10.1016/j.ijmecsci.2006.09.006. [DOI] [Google Scholar] 3.Chen Y., Luo X., Shi W., Zhou Z. The application and development of ultrasonic scalpel. J. Biomed. Eng. 2005;22:377–380. [PubMed] [Google Scholar] 4.Wilkie M., Lucas M. The effect of Ti-6Al-4V microstructure on the performance of ultrasonic soft tissue cutting tips. Proc. Meet. Acoust. 2017;32:020010. [Google Scholar] 5.Lobkis O.I., Rokhlin S. 
Characterization of polycrystals with elongated duplex microstructure by inversion of ultrasonic backscattering data. Appl. Phys. Lett. 2010;96:161905. doi: 10.1063/1.3416910. [DOI] [Google Scholar] 6.Hector C., Carreon M. Assessment of precipitates of aged Ti-6Al-4V alloy by ultrasonic attenuation. Philos. Mag. 2017;97:58–68. [Google Scholar] 7.Bhattacharjee A., Pilchak A.L., Lobkis O.I., Foltz J.W., Rokhlin S.I., Williams J.C. Correlating Ultrasonic Attenuation and Microtexture in a Near-Alpha Titanium Alloy. Metall. Mater. Trans. A. 2011;42:2358–2372. doi: 10.1007/s11661-011-0619-x. [DOI] [Google Scholar] 8.Cui G., Liu Y., Gao G., Liu H., Li S., Kou Z. Preparation, Mechanical Properties, and High-Temperature Wear Resistance of Ti–Al–B alloy. Materials. 2019;12:3751. doi: 10.3390/ma12223751. [DOI] [PMC free article] [PubMed] [Google Scholar] 9.Cui N., Wu Q., Yan Z., Zhou H., Wang X. The Microstructural Evolution, Tensile Properties, and Phase Hardness of a TiAl Alloy with a High Content of the β Phase. Materials. 2019;12:2757. doi: 10.3390/ma12172757. [DOI] [PMC free article] [PubMed] [Google Scholar] 10.Guo N., Cheng Q., Zhang X., Fu Y., Huang L. Microstructure and Mechanical Properties of Underwater Laser Welding of Titanium Alloy. Materials. 2019;12:2703. doi: 10.3390/ma12172703. [DOI] [PMC free article] [PubMed] [Google Scholar] 11.Lütjering G. Influence of processing on microstructure and mechanical properties of (α+β) titanium alloys. Mater. Sci. Eng. A. 1998;243:32–45. doi: 10.1016/S0921-5093(97)00778-8. [DOI] [Google Scholar] 12.Chong Y., Bhattacharjee T., Shibata A., Tsuji N. Investigation of the grain size effect on mechanical properties of Ti-6Al-4V alloy with equiaxed and bimodal microstructures. IOP Conf. Ser. Mater. Sci. Eng. 2017;219:012013. doi: 10.1088/1757-899X/219/1/012013. [DOI] [Google Scholar] 13.Zherebtsov S., Murzinova M., Salishchev G., Semiatin S.L. Spheroidization of the lamellar microstructure in Ti–6Al–4V alloy during warm deformation and annealing. Acta Mater. 2011;59:4138–4150. [Google Scholar] 14.Matsumoto H., Bin L., Lee S.-H., Li Y., Ono Y., Chiba A. Frequent Occurrence of Discontinuous Dynamic Recrystallization in Ti-6Al-4V Alloy with α Martensite Starting Microstructure. Metall. Mater. Trans. A. 2013;44:3245–3260. doi: 10.1007/s11661-013-1655-5. [DOI] [Google Scholar] 15.Matsumoto H., Yoshida K., Lee S.-H., Ono Y., Chiba A. Ti–6Al–4V alloy with an ultrafine-grained microstructure exhibiting low-temperature–high-strain-rate superplasticity. Mater. Lett. 2013;98:209–212. doi: 10.1016/j.matlet.2013.02.033. [DOI] [Google Scholar] 16.Matsumoto H., Nishihara T., Velay V., Vidal V. Superplastic Property of the Ti-6Al-4V Alloy with Ultrafine-Grained Heterogeneous Microstructure. Adv. Eng. Mater. 2017;20:1700317. doi: 10.1002/adem.201700317. [DOI] [Google Scholar] 17.Mironov S., Murzinova M., Zherebtsov S., Salishchev G.A., Semiatin S.L. Microstructure evolution during warm working of Ti–6Al–4V with a colony-α microstructure. Acta Mater. 2009;57:2470–2481. [Google Scholar] 18.Ren Y., Zhou S., Luo W., Xue Z., Zhang Y. Influence of primary α-phase volume fraction on the mechanical properties of Ti-6Al-4V alloy at different strain rates and temperatures. IOP Conf. Ser. Mater. Sci. Eng. 2018;322:022022. doi: 10.1088/1757-899X/322/2/022022. [DOI] [Google Scholar] 19.Peng X., Guo H., Wang T., Yao Z. Effects of β treatments on microstructures and mechanical properties of TC4-DT titanium alloy. Mater. Sci. Eng. A. 2012;533:55–63. doi: 10.1016/j.msea.2011.11.033. 
[DOI] [Google Scholar] 20.Donachie M.J., Jr. Titanium: A Technical Guide. 2nd ed. ASM International; Metals Park, OH, USA: 2000. pp. 95–121. [Google Scholar] 21.Ankem S., Margolin H., Greene C.A., Neuberger B.W., Oberson P.G. Mechanical properties of alloys consisting of two ductile phases. Prog. Mater. Sci. 2006;51:632–709. doi: 10.1016/j.pmatsci.2005.10.003. [DOI] [Google Scholar] 22.Peng X.N., Guo H.Z., Shi Z.F., Qin C., Zhao Z.L. Microstructure characterization and mechanical properties of TC4-DT titanium alloy after thermomechanical treatment. Trans. Nonferrous Met. Soc. China. 2014;24:682–689. doi: 10.1016/S1003-6326(14)63111-3. [DOI] [Google Scholar] 23.Guo P., Zhao Y., Zeng W., Hong Q. The effect of microstructure on the mechanical properties of TC4-DT titanium alloys. Mater. Sci. Eng. A. 2013;563:106–111. doi: 10.1016/j.msea.2012.11.033. [DOI] [Google Scholar] 24.Vrancken B., Thijs L., Kruth J.-P., van Humbeeck J. Heat treatment of Ti6Al4V produced by Selective Laser Melting: Microstructure and mechanical properties. J. Alloys Compd. 2012;541:177–185. doi: 10.1016/j.jallcom.2012.07.022. [DOI] [Google Scholar] 25.Niinomi M., Kobayashi T. Fracture characteristics analysis related to the microstructures in titanium alloys. Mater. Sci. Eng. A. 1996;213:16–24. doi: 10.1016/0921-5093(96)10239-2. [DOI] [Google Scholar] 26.Standardization Administration of the P.R.C. GB/T 6394-2017: Determination of Estimating the Average Grain Size of Metal. Standardization Administration of the P.R.C.; Beijing, China: 2017. [Google Scholar] 27.Standardization Administration of the P.R.C. GB/T 228.1-2010: Metallic Materials-Tensile Testing—Part 1: Method of Test at Room Temperature. Standardization Administration of the P.R.C.; Beijing, China: 2010. [Google Scholar] 28.Standardization Administration of the P.R.C. Metallic Materials—Determination of Modulus of Elasticity and Poisson’s Ratio. Standardization Administration of the P.R.C.; Beijing, China: 2008. [Google Scholar] 29.Zhang S.Z., Liu Z.Q., Wang G.D., Chen L.Q., Liu X.H., Yang R. Microstructural evolution during aging of Ti-5Al-5Mo-5V-1Cr-1Fe alloy. J. Cent. South Univ. Technol. 2009;16:354–359. doi: 10.1007/s11771-009-0060-0. [DOI] [Google Scholar] 30.Marker C., Shang S.-L., Zhao J.-C., Liu Z.-K. Elastic knowledge base of bcc Ti alloys from first-principles calculations and CALPHAD-based modeling. Comput. Mater. Sci. 2017;140:121–139. doi: 10.1016/j.commatsci.2017.08.037. [DOI] [Google Scholar] 31.You L., Song X. First principles study of low Young’s modulus Ti–Nb–Zr alloy system. Mater. Lett. 2012;80:165–167. doi: 10.1016/j.matlet.2012.01.145. [DOI] [Google Scholar] 32.You L., Song X. A study of low Young’s modulus Ti–Nb–Zr alloys using d electrons alloy theory. Scr. Mater. 2012;67:57–60. doi: 10.1016/j.scriptamat.2012.03.020. [DOI] [Google Scholar] Articles from Materials are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI) ACTIONS View on publisher site PDF (7.4 MB) Cite Collections Permalink PERMALINK Copy RESOURCES Similar articles Cited by other articles Links to NCBI Databases On this page Abstract 1. Introduction 2. Experiments 3. Results and Discussion 4. 
10 Recurrences

A recurrence describes a sequence of numbers. Early terms are specified explicitly and later terms are expressed as a function of their predecessors. As a trivial example, this recurrence describes the sequence 1, 2, 3, etc.:

T_1 = 1
T_n = T_{n−1} + 1   (for n ≥ 2).

Here, the first term is defined to be 1 and each subsequent term is one more than its predecessor. Recurrences turn out to be a powerful tool. In this chapter, we'll emphasize using recurrences to analyze the performance of recursive algorithms. However, recurrences have other applications in computer science as well, such as enumeration of structures and analysis of random processes. And, as we saw in Section 9.4, they also arise in the analysis of problems in the physical sciences.

A recurrence in isolation is not a very useful description of a sequence. One can not easily answer simple questions such as, "What is the hundredth term?" or "What is the asymptotic growth rate?" So one typically wants to solve a recurrence; that is, to find a closed-form expression for the nth term.

We'll first introduce two general solving techniques: guess-and-verify and plug-and-chug. These methods are applicable to every recurrence, but their success requires a flash of insight, sometimes an unrealistically brilliant flash. So we'll also introduce two big classes of recurrences, linear and divide-and-conquer, that often come up in computer science. Essentially all recurrences in these two classes are solvable using cookbook techniques; you follow the recipe and get the answer. A drawback is that calculation replaces insight. The "Aha!" moment that is essential in the guess-and-verify and plug-and-chug methods is replaced by a "Huh" at the end of a cookbook procedure.

At the end of the chapter, we'll develop rules of thumb to help you assess many recurrences without any calculation. These rules can help you distinguish promising approaches from bad ideas early in the process of designing an algorithm.

Recurrences are one aspect of a broad theme in computer science: reducing a big problem to progressively smaller problems until easy base cases are reached. This same idea underlies both induction proofs and recursive algorithms. As we'll see, all three ideas snap together nicely. For example, one might describe the running time of a recursive algorithm with a recurrence and use induction to verify the solution.

Figure 10.1 The initial configuration of the disks in the Towers of Hanoi problem.

10.1 The Towers of Hanoi

According to legend, there is a temple in Hanoi with three posts and 64 gold disks of different sizes. Each disk has a hole through the center so that it fits on a post. In the misty past, all the disks were on the first post, with the largest on the bottom and the smallest on top, as shown in Figure 10.1. Monks in the temple have labored through the years since to move all the disks to one of the other two posts according to the following rules:

• The only permitted action is removing the top disk from one post and dropping it onto another post.
• A larger disk can never lie above a smaller disk on any post.

So, for example, picking up the whole stack of disks at once and dropping them on another post is illegal. That's good, because the legend says that when the monks complete the puzzle, the world will end! To clarify the problem, suppose there were only 3 gold disks instead of 64.
Then the puzzle could be solved in 7 steps as shown in Figure 10.2. The questions we must answer are, "Given sufficient time, can the monks succeed?" If so, "How long until the world ends?" And, most importantly, "Will this happen before the final exam?"

10.1.1 A Recursive Solution

The Towers of Hanoi problem can be solved recursively. As we describe the procedure, we'll also analyze the running time. To that end, let T_n be the minimum number of steps required to solve the n-disk problem. For example, some experimentation shows that T_1 = 1 and T_2 = 3. The procedure illustrated above shows that T_3 is at most 7, though there might be a solution with fewer steps. The recursive solution has three stages, which are described below and illustrated in Figure 10.3. For clarity, the largest disk is shaded in the figures.

Figure 10.2 The 7-step solution to the Towers of Hanoi problem when there are n = 3 disks.

Figure 10.3 A recursive solution to the Towers of Hanoi problem.

Stage 1. Move the top n − 1 disks from the first post to the second using the solution for n − 1 disks. This can be done in T_{n−1} steps.

Stage 2. Move the largest disk from the first post to the third post. This takes just 1 step.

Stage 3. Move the n − 1 disks from the second post to the third post, again using the solution for n − 1 disks. This can also be done in T_{n−1} steps.

This algorithm shows that T_n, the minimum number of steps required to move n disks to a different post, is at most T_{n−1} + 1 + T_{n−1} = 2T_{n−1} + 1. We can use this fact to upper bound the number of operations required to move towers of various heights:

T_3 ≤ 2 · T_2 + 1 = 7
T_4 ≤ 2 · T_3 + 1 ≤ 15

Continuing in this way, we could eventually compute an upper bound on T_64, the number of steps required to move 64 disks. So this algorithm answers our first question: given sufficient time, the monks can finish their task and end the world. This is a shame. After all that effort, they'd probably want to smack a few high-fives and go out for burgers and ice cream, but nope, world's over.

10.1.2 Finding a Recurrence

We can not yet compute the exact number of steps that the monks need to move the 64 disks, only an upper bound. Perhaps, having pondered the problem since the beginning of time, the monks have devised a better algorithm. In fact, there is no better algorithm, and here is why. At some step, the monks must move the largest disk from the first post to a different post. For this to happen, the n − 1 smaller disks must all be stacked out of the way on the only remaining post. Arranging the n − 1 smaller disks this way requires at least T_{n−1} moves. After the largest disk is moved, at least another T_{n−1} moves are required to pile the n − 1 smaller disks on top. This argument shows that the number of steps required is at least 2T_{n−1} + 1. Since we gave an algorithm using exactly that number of steps, we can now write an expression for T_n, the number of moves required to complete the Towers of Hanoi problem with n disks:

T_1 = 1
T_n = 2T_{n−1} + 1   (for n ≥ 2).

This is a typical recurrence. These two lines define a sequence of values, T_1, T_2, T_3, .... The first line says that the first number in the sequence, T_1, is equal to 1. The second line defines every other number in the sequence in terms of its predecessor.
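The three-stage strategy and the recurrence it yields are easy to check by machine. Below is a small Python sketch (an illustration, not code from this text) that generates the moves made by the recursive strategy and confirms that the move counts follow T_n = 2T_{n−1} + 1, so that hanoi(3) produces the 7 moves of Figure 10.2.

```python
# A small sketch (not from the text): the three-stage recursive strategy for
# the Towers of Hanoi, returning the list of moves it makes.
def hanoi(n, source="A", spare="B", target="C"):
    """Move n disks from source to target; returns the sequence of moves."""
    if n == 0:
        return []
    moves = hanoi(n - 1, source, target, spare)   # Stage 1: park n-1 disks on the spare post
    moves.append((source, target))                # Stage 2: move the largest disk
    moves += hanoi(n - 1, spare, source, target)  # Stage 3: restack the n-1 disks on top
    return moves

# The move counts satisfy T_n = 2*T_{n-1} + 1:
for n in range(1, 7):
    print(n, len(hanoi(n)))   # prints 1, 3, 7, 15, 31, 63
```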
So we can use this recurrence to compute any number of terms in the sequence:

T_1 = 1
T_2 = 2 · T_1 + 1 = 3
T_3 = 2 · T_2 + 1 = 7
T_4 = 2 · T_3 + 1 = 15
T_5 = 2 · T_4 + 1 = 31
T_6 = 2 · T_5 + 1 = 63.

10.1.3 Solving the Recurrence

We could determine the number of steps to move a 64-disk tower by computing T_7, T_8, and so on up to T_64. But that would take a lot of work. It would be nice to have a closed-form expression for T_n, so that we could quickly find the number of steps required for any given number of disks. (For example, we might want to know how much sooner the world would end if the monks melted down one disk to purchase burgers and ice cream before the end of the world.)

There are several methods for solving recurrence equations. The simplest is to guess the solution and then verify that the guess is correct with an induction proof. As a basis for a good guess, let's look for a pattern in the values of T_n computed above: 1, 3, 7, 15, 31, 63. A natural guess is T_n = 2^n − 1. But whenever you guess a solution to a recurrence, you should always verify it with a proof, typically by induction. After all, your guess might be wrong. (But why bother to verify in this case? After all, if we're wrong, it's not the end of the... no, let's check.)

Claim 10.1.1. T_n = 2^n − 1 satisfies the recurrence:

T_1 = 1
T_n = 2T_{n−1} + 1   (for n ≥ 2).

Proof. The proof is by induction on n. The induction hypothesis is that T_n = 2^n − 1. This is true for n = 1 because T_1 = 1 = 2^1 − 1. Now assume that T_{n−1} = 2^{n−1} − 1 in order to prove that T_n = 2^n − 1, where n ≥ 2:

T_n = 2T_{n−1} + 1 = 2(2^{n−1} − 1) + 1 = 2^n − 1.

The first equality is the recurrence equation, the second follows from the induction assumption, and the last step is simplification. ∎

Such verification proofs are especially tidy because recurrence equations and induction proofs have analogous structures. In particular, the base case relies on the first line of the recurrence, which defines T_1. And the inductive step uses the second line of the recurrence, which defines T_n as a function of preceding terms.

Our guess is verified. So we can now resolve our remaining questions about the 64-disk puzzle. Since T_64 = 2^64 − 1, the monks must complete more than 18 billion billion steps before the world ends. Better study for the final.

10.1.4 The Upper Bound Trap

When the solution to a recurrence is complicated, one might try to prove that some simpler expression is an upper bound on the solution. For example, the exact solution to the Towers of Hanoi recurrence is T_n = 2^n − 1. Let's try to prove the "nicer" upper bound T_n ≤ 2^n, proceeding exactly as before.

Proof. (Failed attempt.) The proof is by induction on n. The induction hypothesis is that T_n ≤ 2^n. This is true for n = 1 because T_1 = 1 ≤ 2^1. Now assume that T_{n−1} ≤ 2^{n−1} in order to prove that T_n ≤ 2^n, where n ≥ 2:

T_n = 2T_{n−1} + 1 ≤ 2(2^{n−1}) + 1 ≰ 2^n   Uh-oh!

The first equality is the recurrence relation, the second follows from the induction hypothesis, and the third step is a flaming train wreck. ∎

The proof doesn't work! As is so often the case with induction proofs, the argument only goes through with a stronger hypothesis. This isn't to say that upper bounding the solution to a recurrence is hopeless, but this is a situation where induction and recurrences do not mix well.

10.1.5 Plug and Chug

Guess-and-verify is a simple and general way to solve recurrence equations. But there is one big drawback: you have to guess right. That was not hard for the Towers of Hanoi example.
10.1.4 The Upper Bound Trap

When the solution to a recurrence is complicated, one might try to prove that some simpler expression is an upper bound on the solution. For example, the exact solution to the Towers of Hanoi recurrence is $T_n = 2^n - 1$. Let's try to prove the "nicer" upper bound $T_n \le 2^n$, proceeding exactly as before.

Proof. (Failed attempt.) The proof is by induction on $n$. The induction hypothesis is that $T_n \le 2^n$. This is true for $n = 1$ because $T_1 = 1 \le 2^1$. Now assume that $T_{n-1} \le 2^{n-1}$ in order to prove that $T_n \le 2^n$, where $n \ge 2$:

$$T_n = 2T_{n-1} + 1 \le 2(2^{n-1}) + 1 \not\le 2^n$$

Uh-oh! The first equality is the recurrence relation, the second step follows from the induction hypothesis, and the third step is a flaming train wreck. $\blacksquare$

The proof doesn't work! As is so often the case with induction proofs, the argument only goes through with a stronger hypothesis. This isn't to say that upper bounding the solution to a recurrence is hopeless, but this is a situation where induction and recurrences do not mix well.

10.1.5 Plug and Chug

Guess-and-verify is a simple and general way to solve recurrence equations. But there is one big drawback: you have to guess right. That was not hard for the Towers of Hanoi example. But sometimes the solution to a recurrence has a strange form that is quite difficult to guess. Practice helps, of course, but so can some other methods.

Plug-and-chug is another way to solve recurrences. This is also sometimes called "expansion" or "iteration". As in guess-and-verify, the key step is identifying a pattern. But instead of looking at a sequence of numbers, you have to spot a pattern in a sequence of expressions, which is sometimes easier. The method consists of three steps, which are described below and illustrated with the Towers of Hanoi example.

Step 1: Plug and Chug Until a Pattern Appears

The first step is to expand the recurrence equation by alternately "plugging" (applying the recurrence) and "chugging" (simplifying the result) until a pattern appears. Be careful: too much simplification can make a pattern harder to spot. The rule to remember, a rule applicable to the whole of college life, is: chug in moderation.

$$\begin{aligned}
T_n &= 2T_{n-1} + 1 \\
&= 2(2T_{n-2} + 1) + 1 && \text{plug} \\
&= 4T_{n-2} + 2 + 1 && \text{chug} \\
&= 4(2T_{n-3} + 1) + 2 + 1 && \text{plug} \\
&= 8T_{n-3} + 4 + 2 + 1 && \text{chug} \\
&= 8(2T_{n-4} + 1) + 4 + 2 + 1 && \text{plug} \\
&= 16T_{n-4} + 8 + 4 + 2 + 1 && \text{chug}
\end{aligned}$$

Above, we started with the recurrence equation. Then we replaced $T_{n-1}$ with $2T_{n-2} + 1$, since the recurrence says the two are equivalent. In the third step, we simplified a little, but not too much! After several similar rounds of plugging and chugging, a pattern is apparent. The following formula seems to hold:

$$T_n = 2^k T_{n-k} + 2^{k-1} + 2^{k-2} + \cdots + 2^1 + 2^0 = 2^k T_{n-k} + 2^k - 1.$$

Once the pattern is clear, simplifying is safe and convenient. In particular, we've collapsed the geometric sum $2^{k-1} + \cdots + 2^0$ to the closed form $2^k - 1$ in the last equality.

Step 2: Verify the Pattern

The next step is to verify the general formula with one more round of plug-and-chug.

$$\begin{aligned}
T_n &= 2^k T_{n-k} + 2^k - 1 \\
&= 2^k (2T_{n-(k+1)} + 1) + 2^k - 1 && \text{plug} \\
&= 2^{k+1} T_{n-(k+1)} + 2^{k+1} - 1 && \text{chug}
\end{aligned}$$

The final expression on the right is the same as the expression on the first line, except that $k$ is replaced by $k + 1$. Surprisingly, this effectively proves that the formula is correct for all $k$. Here is why: we know the formula holds for $k = 1$, because that's the original recurrence equation. And we've just shown that if the formula holds for some $k \ge 1$, then it also holds for $k + 1$. So the formula holds for all $k \ge 1$ by induction.

Step 3: Write $T_n$ Using Early Terms with Known Values

The last step is to express $T_n$ as a function of early terms whose values are known. Here, choosing $k = n - 1$ expresses $T_n$ in terms of $T_1$, which is equal to 1. Simplifying gives a closed-form expression for $T_n$:

$$T_n = 2^{n-1} T_1 + 2^{n-1} - 1 = 2^{n-1} \cdot 1 + 2^{n-1} - 1 = 2^n - 1.$$

We're done! This is the same answer we got from guess-and-verify.

Let's compare guess-and-verify with plug-and-chug. In the guess-and-verify method, we computed several terms at the beginning of the sequence, $T_1$, $T_2$, $T_3$, etc., until a pattern appeared. We generalized to a formula for the $n$th term, $T_n$. In contrast, plug-and-chug works backward from the $n$th term. Specifically, we started with an expression for $T_n$ involving the preceding term, $T_{n-1}$, and rewrote this using progressively earlier terms, $T_{n-2}$, $T_{n-3}$, etc. Eventually, we noticed a pattern, which allowed us to express $T_n$ using the very first term, $T_1$, whose value we knew. Substituting this value gave a closed-form expression for $T_n$. So guess-and-verify and plug-and-chug tackle the problem from opposite directions.
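Computer algebra systems automate both approaches for linear recurrences like this one. As an aside (not part of the text), SymPy's `rsolve` can produce the closed form directly; the snippet below is a sketch assuming a standard SymPy installation.

```python
from sympy import Function, rsolve, symbols

n = symbols('n', integer=True)
T = Function('T')

# Solve T(n) = 2*T(n-1) + 1 with T(1) = 1; the equation is passed in
# "expression equals zero" form together with the boundary condition.
closed_form = rsolve(T(n) - 2*T(n - 1) - 1, T(n), {T(1): 1})
print(closed_form)  # expected: 2**n - 1
```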
10.2 Merge Sort

Algorithms textbooks traditionally claim that sorting is an important, fundamental problem in computer science. Then they smack you with sorting algorithms until life as a disk-stacking monk in Hanoi sounds delightful. Here, we'll cover just one well-known sorting algorithm, Merge Sort. The analysis introduces another kind of recurrence.

Here is how Merge Sort works. The input is a list of $n$ numbers, and the output is those same numbers in nondecreasing order. There are two cases:

- If the input is a single number, then the algorithm does nothing, because the list is already sorted.
- Otherwise, the list contains two or more numbers. The first half and the second half of the list are each sorted recursively. Then the two halves are merged to form a sorted list with all $n$ numbers.

Let's work through an example. Suppose we want to sort this list:

10, 7, 23, 5, 2, 8, 6, 9.

Since there is more than one number, the first half (10, 7, 23, 5) and the second half (2, 8, 6, 9) are sorted recursively. The results are 5, 7, 10, 23 and 2, 6, 8, 9. All that remains is to merge these two lists. This is done by repeatedly emitting the smaller of the two leading terms. When one list is empty, the whole other list is emitted. The example is worked out below. In each row, the smaller of the two leading terms is the next number emitted.

| First Half   | Second Half | Output                     |
|--------------|-------------|----------------------------|
| 5, 7, 10, 23 | 2, 6, 8, 9  |                            |
| 5, 7, 10, 23 | 6, 8, 9     | 2                          |
| 7, 10, 23    | 6, 8, 9     | 2, 5                       |
| 7, 10, 23    | 8, 9        | 2, 5, 6                    |
| 10, 23       | 8, 9        | 2, 5, 6, 7                 |
| 10, 23       | 9           | 2, 5, 6, 7, 8              |
| 10, 23       |             | 2, 5, 6, 7, 8, 9           |
|              |             | 2, 5, 6, 7, 8, 9, 10, 23   |

The leading terms are initially 5 and 2. So we output 2. Then the leading terms are 5 and 6, so we output 5. Eventually, the second list becomes empty. At that point, we output the whole first list, which consists of 10 and 23. The complete output consists of all the numbers in sorted order.

10.2.1 Finding a Recurrence

A traditional question about sorting algorithms is, "What is the maximum number of comparisons used in sorting $n$ items?" This is taken as an estimate of the running time. In the case of Merge Sort, we can express this quantity with a recurrence. Let $T_n$ be the maximum number of comparisons used while Merge Sorting a list of $n$ numbers. For now, assume that $n$ is a power of 2. This ensures that the input can be divided in half at every stage of the recursion.

- If there is only one number in the list, then no comparisons are required, so $T_1 = 0$.
- Otherwise, $T_n$ includes comparisons used in sorting the first half (at most $T_{n/2}$), in sorting the second half (also at most $T_{n/2}$), and in merging the two halves. The number of comparisons in the merging step is at most $n - 1$. This is because at least one number is emitted after each comparison and one more number is emitted at the end when one list becomes empty. Since $n$ items are emitted in all, there can be at most $n - 1$ comparisons.

Therefore, the maximum number of comparisons needed to Merge Sort $n$ items is given by this recurrence:

$$T_1 = 0$$
$$T_n = 2T_{n/2} + n - 1 \quad \text{(for } n \ge 2 \text{ and a power of 2)}.$$

This fully describes the number of comparisons, but not in a very useful way; a closed-form expression would be much more helpful. To get that, we have to solve the recurrence.
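Here is a small Python sketch of the algorithm just described (our own illustration, not code from the text). It counts comparisons so that actual runs can be checked against the bound $T_n = 2T_{n/2} + n - 1$.

```python
def merge_sort(xs):
    """Return (sorted list, number of comparisons used)."""
    if len(xs) <= 1:
        return xs, 0                             # a single number is already sorted
    mid = len(xs) // 2
    left, c1 = merge_sort(xs[:mid])              # sort the first half recursively
    right, c2 = merge_sort(xs[mid:])             # sort the second half recursively
    merged, i, j, comps = [], 0, 0, 0
    while i < len(left) and j < len(right):      # emit the smaller leading term
        comps += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]               # one list is empty: emit the rest
    return merged, c1 + c2 + comps

print(merge_sort([10, 7, 23, 5, 2, 8, 6, 9]))
# ([2, 5, 6, 7, 8, 9, 10, 23], 16) -- within the worst-case bound T_8 = 17
```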
10.2.2 Solving the Recurrence

Let's first try to solve the Merge Sort recurrence with the guess-and-verify technique. Here are the first few values:

$$T_1 = 0$$
$$T_2 = 2T_1 + 2 - 1 = 1$$
$$T_4 = 2T_2 + 4 - 1 = 5$$
$$T_8 = 2T_4 + 8 - 1 = 17$$
$$T_{16} = 2T_8 + 16 - 1 = 49.$$

We're in trouble! Guessing the solution to this recurrence is hard because there is no obvious pattern. So let's try the plug-and-chug method instead.

Step 1: Plug and Chug Until a Pattern Appears

First, we expand the recurrence equation by alternately plugging and chugging until a pattern appears.

$$\begin{aligned}
T_n &= 2T_{n/2} + n - 1 \\
&= 2(2T_{n/4} + n/2 - 1) + (n - 1) && \text{plug} \\
&= 4T_{n/4} + (n - 2) + (n - 1) && \text{chug} \\
&= 4(2T_{n/8} + n/4 - 1) + (n - 2) + (n - 1) && \text{plug} \\
&= 8T_{n/8} + (n - 4) + (n - 2) + (n - 1) && \text{chug} \\
&= 8(2T_{n/16} + n/8 - 1) + (n - 4) + (n - 2) + (n - 1) && \text{plug} \\
&= 16T_{n/16} + (n - 8) + (n - 4) + (n - 2) + (n - 1) && \text{chug}
\end{aligned}$$

A pattern is emerging. In particular, this formula seems to hold:

$$\begin{aligned}
T_n &= 2^k T_{n/2^k} + (n - 2^{k-1}) + (n - 2^{k-2}) + \cdots + (n - 2^0) \\
&= 2^k T_{n/2^k} + kn - 2^{k-1} - 2^{k-2} - \cdots - 2^0 \\
&= 2^k T_{n/2^k} + kn - 2^k + 1.
\end{aligned}$$

On the second line, we grouped the $n$ terms and powers of 2. On the third, we collapsed the geometric sum.

Step 2: Verify the Pattern

Next, we verify the pattern with one additional round of plug-and-chug. If we guessed the wrong pattern, then this is where we'll discover the mistake.

$$\begin{aligned}
T_n &= 2^k T_{n/2^k} + kn - 2^k + 1 \\
&= 2^k (2T_{n/2^{k+1}} + n/2^k - 1) + kn - 2^k + 1 && \text{plug} \\
&= 2^{k+1} T_{n/2^{k+1}} + (k + 1)n - 2^{k+1} + 1 && \text{chug}
\end{aligned}$$

The formula is unchanged except that $k$ is replaced by $k + 1$. This amounts to the induction step in a proof that the formula holds for all $k \ge 1$.

Step 3: Write $T_n$ Using Early Terms with Known Values

Finally, we express $T_n$ using early terms whose values are known. Specifically, if we let $k = \log n$ (logs are base 2 here), then $T_{n/2^k} = T_1$, which we know is 0:

$$\begin{aligned}
T_n &= 2^k T_{n/2^k} + kn - 2^k + 1 \\
&= 2^{\log n} T_{n/2^{\log n}} + n \log n - 2^{\log n} + 1 \\
&= n T_1 + n \log n - n + 1 \\
&= n \log n - n + 1.
\end{aligned}$$

We're done! We have a closed-form expression for the maximum number of comparisons used in Merge Sorting a list of $n$ numbers. In retrospect, it is easy to see why guess-and-verify failed: this formula is fairly complicated. As a check, we can confirm that this formula gives the same values that we computed earlier:

| $n$ | $T_n$ | $n \log n - n + 1$ |
|-----|-------|---------------------|
| 1   | 0     | $1 \log 1 - 1 + 1 = 0$ |
| 2   | 1     | $2 \log 2 - 2 + 1 = 1$ |
| 4   | 5     | $4 \log 4 - 4 + 1 = 5$ |
| 8   | 17    | $8 \log 8 - 8 + 1 = 17$ |
| 16  | 49    | $16 \log 16 - 16 + 1 = 49$ |

As a double-check, we could write out an explicit induction proof. This would be straightforward, because we already worked out the guts of the proof in step 2 of the plug-and-chug procedure.

10.3 Linear Recurrences

So far we've solved recurrences with two techniques: guess-and-verify and plug-and-chug. These methods require spotting a pattern in a sequence of numbers or expressions. In this section and the next, we'll give cookbook solutions for two large classes of recurrences. These methods require no flash of insight; you just follow the recipe and get the answer.

10.3.1 Climbing Stairs

How many different ways are there to climb $n$ stairs, if you can either step up one stair or hop up two? For example, there are five different ways to climb four stairs:

1. step, step, step, step
2. hop, hop
3. hop, step, step
4. step, hop, step
5. step, step, hop

Working through this problem will demonstrate the major features of our first cookbook method for solving recurrences. We'll fill in the details of the general solution afterward.

Finding a Recurrence

As special cases, there is 1 way to climb 0 stairs (do nothing) and 1 way to climb 1 stair (step up).
In general, an ascent of $n$ stairs consists of either a step followed by an ascent of the remaining $n - 1$ stairs or a hop followed by an ascent of $n - 2$ stairs. So the total number of ways to climb $n$ stairs is equal to the number of ways to climb $n - 1$ plus the number of ways to climb $n - 2$. These observations define a recurrence:

$$f(0) = 1$$
$$f(1) = 1$$
$$f(n) = f(n - 1) + f(n - 2) \quad \text{for } n \ge 2.$$

Here, $f(n)$ denotes the number of ways to climb $n$ stairs. Also, we've switched from subscript notation to functional notation, from $T_n$ to $f(n)$. Here the change is cosmetic, but the expressiveness of functions will be useful later.

This is the Fibonacci recurrence, the most famous of all recurrence equations. Fibonacci numbers arise in all sorts of applications and in nature. Fibonacci introduced the numbers in 1202 to study rabbit reproduction. Fibonacci numbers also appear, oddly enough, in the spiral patterns on the faces of sunflowers. And the input numbers that make Euclid's GCD algorithm require the greatest number of steps are consecutive Fibonacci numbers.

Solving the Recurrence

The Fibonacci recurrence belongs to the class of linear recurrences, which are essentially all solvable with a technique that you can learn in an hour. This is somewhat amazing, since the Fibonacci recurrence remained unsolved for almost six centuries!

In general, a homogeneous linear recurrence has the form

$$f(n) = a_1 f(n - 1) + a_2 f(n - 2) + \cdots + a_d f(n - d)$$

where $a_1, a_2, \ldots, a_d$ and $d$ are constants. The order of the recurrence is $d$. Commonly, the value of the function $f$ is also specified at a few points; these are called boundary conditions. For example, the Fibonacci recurrence has order $d = 2$ with coefficients $a_1 = a_2 = 1$ and $g(n) = 0$. The boundary conditions are $f(0) = 1$ and $f(1) = 1$. The word "homogeneous" sounds scary, but effectively means "the simpler kind". We'll consider linear recurrences with a more complicated form later.

Let's try to solve the Fibonacci recurrence with the benefit of centuries of hindsight. In general, linear recurrences tend to have exponential solutions. So let's guess that

$$f(n) = x^n$$

where $x$ is a parameter introduced to improve our odds of making a correct guess. We'll figure out the best value for $x$ later. To further improve our odds, let's neglect the boundary conditions, $f(0) = 1$ and $f(1) = 1$, for now. Plugging this guess into the recurrence $f(n) = f(n - 1) + f(n - 2)$ gives

$$x^n = x^{n-1} + x^{n-2}.$$

Dividing both sides by $x^{n-2}$ leaves a quadratic equation:

$$x^2 = x + 1.$$

Solving this equation gives two plausible values for the parameter $x$:

$$x = \frac{1 \pm \sqrt{5}}{2}.$$

This suggests that there are at least two different solutions to the recurrence, neglecting the boundary conditions.

$$f(n) = \left(\frac{1 + \sqrt{5}}{2}\right)^n \quad \text{or} \quad f(n) = \left(\frac{1 - \sqrt{5}}{2}\right)^n$$

A charming feature of homogeneous linear recurrences is that any linear combination of solutions is another solution.

Theorem 10.3.1. If $f(n)$ and $g(n)$ are both solutions to a homogeneous linear recurrence, then $h(n) = s f(n) + t g(n)$ is also a solution for all $s, t \in \mathbb{R}$.

Proof.

$$\begin{aligned}
h(n) &= s f(n) + t g(n) \\
&= s\,(a_1 f(n-1) + \cdots + a_d f(n-d)) + t\,(a_1 g(n-1) + \cdots + a_d g(n-d)) \\
&= a_1 (s f(n-1) + t g(n-1)) + \cdots + a_d (s f(n-d) + t g(n-d)) \\
&= a_1 h(n-1) + \cdots + a_d h(n-d)
\end{aligned}$$

The first step uses the definition of the function $h$, and the second uses the fact that $f$ and $g$ are solutions to the recurrence.
In the last two steps, we rearrange terms and use the definition of $h$ again. Since the first expression is equal to the last, $h$ is also a solution to the recurrence. $\blacksquare$

The phenomenon described in this theorem, that a linear combination of solutions is another solution, also holds for many differential equations and physical systems. In fact, linear recurrences are so similar to linear differential equations that you can safely snooze through that topic in some future math class.

Returning to the Fibonacci recurrence, this theorem implies that

$$f(n) = s \left(\frac{1 + \sqrt{5}}{2}\right)^n + t \left(\frac{1 - \sqrt{5}}{2}\right)^n$$

is a solution for all real numbers $s$ and $t$. The theorem expanded two solutions to a whole spectrum of possibilities! Now, given all these options to choose from, we can find one solution that satisfies the boundary conditions, $f(0) = 1$ and $f(1) = 1$. Each boundary condition puts some constraints on the parameters $s$ and $t$. In particular, the first boundary condition implies that

$$f(0) = s \left(\frac{1 + \sqrt{5}}{2}\right)^0 + t \left(\frac{1 - \sqrt{5}}{2}\right)^0 = s + t = 1.$$

Similarly, the second boundary condition implies that

$$f(1) = s \left(\frac{1 + \sqrt{5}}{2}\right)^1 + t \left(\frac{1 - \sqrt{5}}{2}\right)^1 = 1.$$

Now we have two linear equations in two unknowns. The system is not degenerate, so there is a unique solution:

$$s = \frac{1}{\sqrt{5}} \cdot \frac{1 + \sqrt{5}}{2}, \qquad t = -\frac{1}{\sqrt{5}} \cdot \frac{1 - \sqrt{5}}{2}.$$

These values of $s$ and $t$ identify a solution to the Fibonacci recurrence that also satisfies the boundary conditions:

$$\begin{aligned}
f(n) &= \frac{1}{\sqrt{5}} \cdot \frac{1 + \sqrt{5}}{2} \left(\frac{1 + \sqrt{5}}{2}\right)^n - \frac{1}{\sqrt{5}} \cdot \frac{1 - \sqrt{5}}{2} \left(\frac{1 - \sqrt{5}}{2}\right)^n \\
&= \frac{1}{\sqrt{5}} \left(\frac{1 + \sqrt{5}}{2}\right)^{n+1} - \frac{1}{\sqrt{5}} \left(\frac{1 - \sqrt{5}}{2}\right)^{n+1}.
\end{aligned}$$

It is easy to see why no one stumbled across this solution for almost six centuries! All Fibonacci numbers are integers, but this expression is full of square roots of five! Amazingly, the square roots always cancel out. This expression really does give the Fibonacci numbers if we plug in $n = 0, 1, 2$, etc.

This closed form for Fibonacci numbers has some interesting corollaries. The first term tends to infinity because the base of the exponential, $(1 + \sqrt{5})/2 = 1.618\ldots$, is greater than one. This value is often denoted $\phi$ and called the "golden ratio". The second term tends to zero, because $(1 - \sqrt{5})/2 = -0.618033988\ldots$ has absolute value less than 1. This implies that the $n$th Fibonacci number is

$$f(n) = \frac{\phi^{n+1}}{\sqrt{5}} + o(1).$$

Remarkably, this expression involving irrational numbers is actually very close to an integer for all large $n$, namely, a Fibonacci number! For example:

$$\frac{\phi^{20}}{\sqrt{5}} = 6765.000029\ldots \approx f(19).$$

This also implies that the ratio of consecutive Fibonacci numbers rapidly approaches the golden ratio. For example:

$$\frac{f(20)}{f(19)} = \frac{10946}{6765} = 1.618033998\ldots.$$
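A few lines of Python (ours, for illustration) confirm that the closed form matches the recurrence exactly, despite the square roots, once the floating-point result is rounded to the nearest integer.

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2
psi = (1 - sqrt(5)) / 2

def f_closed(n):
    # f(n) = (phi**(n+1) - psi**(n+1)) / sqrt(5), rounded to absorb floating-point error
    return round((phi**(n + 1) - psi**(n + 1)) / sqrt(5))

# Iterate the recurrence f(n) = f(n-1) + f(n-2) with f(0) = f(1) = 1 and compare.
a, b = 1, 1
for n in range(2, 40):
    a, b = b, a + b
    assert b == f_closed(n)

print(f_closed(19), f_closed(20), f_closed(20) / f_closed(19))  # 6765 10946 1.618...
```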
10.3.2 Solving Homogeneous Linear Recurrences

The method we used to solve the Fibonacci recurrence can be extended to solve any homogeneous linear recurrence; that is, a recurrence of the form

$$f(n) = a_1 f(n - 1) + a_2 f(n - 2) + \cdots + a_d f(n - d)$$

where $a_1, a_2, \ldots, a_d$ and $d$ are constants. Substituting the guess $f(n) = x^n$, as with the Fibonacci recurrence, gives

$$x^n = a_1 x^{n-1} + a_2 x^{n-2} + \cdots + a_d x^{n-d}.$$

Dividing by $x^{n-d}$ gives

$$x^d = a_1 x^{d-1} + a_2 x^{d-2} + \cdots + a_{d-1} x + a_d.$$

This is called the characteristic equation of the recurrence. The characteristic equation can be read off quickly since the coefficients of the equation are the same as the coefficients of the recurrence.

The solutions to a linear recurrence are defined by the roots of the characteristic equation. Neglecting boundary conditions for the moment:

- If $r$ is a nonrepeated root of the characteristic equation, then $r^n$ is a solution to the recurrence.
- If $r$ is a repeated root with multiplicity $k$, then $r^n$, $n r^n$, $n^2 r^n$, ..., $n^{k-1} r^n$ are all solutions to the recurrence.

Theorem 10.3.1 implies that every linear combination of these solutions is also a solution.

For example, suppose that the characteristic equation of a recurrence has roots $s$, $t$, and $u$ twice. These four roots imply four distinct solutions:

$$f(n) = s^n \qquad f(n) = t^n \qquad f(n) = u^n \qquad f(n) = n u^n.$$

Furthermore, every linear combination

$$f(n) = a \cdot s^n + b \cdot t^n + c \cdot u^n + d \cdot n u^n \tag{10.1}$$

is also a solution.

All that remains is to select a solution consistent with the boundary conditions by choosing the constants appropriately. Each boundary condition implies a linear equation involving these constants. So we can determine the constants by solving a system of linear equations. For example, suppose our boundary conditions were $f(0) = 0$, $f(1) = 1$, $f(2) = 4$, and $f(3) = 9$. Then we would obtain four equations in four unknowns:

$$\begin{aligned}
f(0) = 0 &\;\Rightarrow\; a \cdot s^0 + b \cdot t^0 + c \cdot u^0 + d \cdot 0 \cdot u^0 = 0 \\
f(1) = 1 &\;\Rightarrow\; a \cdot s^1 + b \cdot t^1 + c \cdot u^1 + d \cdot 1 \cdot u^1 = 1 \\
f(2) = 4 &\;\Rightarrow\; a \cdot s^2 + b \cdot t^2 + c \cdot u^2 + d \cdot 2 \cdot u^2 = 4 \\
f(3) = 9 &\;\Rightarrow\; a \cdot s^3 + b \cdot t^3 + c \cdot u^3 + d \cdot 3 \cdot u^3 = 9
\end{aligned}$$

This looks nasty, but remember that $s$, $t$, and $u$ are just constants. Solving this system gives values for $a$, $b$, $c$, and $d$ that define a solution to the recurrence consistent with the boundary conditions.
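The "solve a system of linear equations" step is purely mechanical. As an illustration (ours, with made-up roots $s = 2$, $t = 3$, $u = -1$), NumPy can recover the constants from the boundary conditions above:

```python
import numpy as np

s, t, u = 2.0, 3.0, -1.0   # hypothetical roots of a characteristic equation (u repeated)

# Row n encodes f(n) = a*s**n + b*t**n + c*u**n + d*n*u**n.
A = np.array([[s**n, t**n, u**n, n * u**n] for n in range(4)])
rhs = np.array([0.0, 1.0, 4.0, 9.0])        # boundary conditions f(0..3) = 0, 1, 4, 9

a, b, c, d = np.linalg.solve(A, rhs)
print(a, b, c, d)

# Sanity check: the resulting f(n) reproduces the boundary values.
f = lambda n: a * s**n + b * t**n + c * u**n + d * n * u**n
print([round(f(n), 6) for n in range(4)])   # [0.0, 1.0, 4.0, 9.0]
```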
10.3.3 Solving General Linear Recurrences

We can now solve all linear homogeneous recurrences, which have the form

$$f(n) = a_1 f(n - 1) + a_2 f(n - 2) + \cdots + a_d f(n - d).$$

Many recurrences that arise in practice do not quite fit this mold. For example, the Towers of Hanoi problem led to this recurrence:

$$f(1) = 1$$
$$f(n) = 2f(n - 1) + 1 \quad \text{(for } n \ge 2\text{)}.$$

The problem is the extra $+1$; that is not allowed in a homogeneous linear recurrence. In general, adding an extra function $g(n)$ to the right side of a linear recurrence gives an inhomogeneous linear recurrence:

$$f(n) = a_1 f(n - 1) + a_2 f(n - 2) + \cdots + a_d f(n - d) + g(n).$$

Solving inhomogeneous linear recurrences is neither very different nor very difficult. We can divide the whole job into five steps:

1. Replace $g(n)$ by 0, leaving a homogeneous recurrence. As before, find roots of the characteristic equation.
2. Write down the solution to the homogeneous recurrence, but do not yet use the boundary conditions to determine coefficients. This is called the homogeneous solution.
3. Now restore $g(n)$ and find a single solution to the recurrence, ignoring boundary conditions. This is called a particular solution. We'll explain how to find a particular solution shortly.
4. Add the homogeneous and particular solutions together to obtain the general solution.
5. Now use the boundary conditions to determine constants by the usual method of generating and solving a system of linear equations.

As an example, let's consider a variation of the Towers of Hanoi problem. Suppose that moving a disk takes time proportional to its size. Specifically, moving the smallest disk takes 1 second, the next-smallest takes 2 seconds, and moving the $n$th disk requires $n$ seconds instead of 1. So, in this variation, the time to complete the job is given by a recurrence with a $+n$ term instead of a $+1$:

$$f(1) = 1$$
$$f(n) = 2f(n - 1) + n \quad \text{for } n \ge 2.$$

Clearly, this will take longer, but how much longer? Let's solve the recurrence with the method described above.

In Steps 1 and 2, dropping the $+n$ leaves the homogeneous recurrence $f(n) = 2f(n - 1)$. The characteristic equation is $x = 2$. So the homogeneous solution is $f(n) = c 2^n$.

In Step 3, we must find a solution to the full recurrence $f(n) = 2f(n - 1) + n$, without regard to the boundary condition. Let's guess that there is a solution of the form $f(n) = an + b$ for some constants $a$ and $b$. Substituting this guess into the recurrence gives

$$an + b = 2(a(n - 1) + b) + n$$
$$0 = (a + 1)n + (b - 2a).$$

The second equation is a simplification of the first. The second equation holds for all $n$ if both $a + 1 = 0$ (which implies $a = -1$) and $b - 2a = 0$ (which implies that $b = -2$). So $f(n) = an + b = -n - 2$ is a particular solution.

In Step 4, we add the homogeneous and particular solutions to obtain the general solution

$$f(n) = c 2^n - n - 2.$$

Finally, in Step 5, we use the boundary condition, $f(1) = 1$, to determine the value of the constant $c$:

$$f(1) = 1 \;\Rightarrow\; c 2^1 - 1 - 2 = 1 \;\Rightarrow\; c = 2.$$

Therefore, the function $f(n) = 2 \cdot 2^n - n - 2$ solves this variant of the Towers of Hanoi recurrence. For comparison, the solution to the original Towers of Hanoi problem was $2^n - 1$. So if moving disks takes time proportional to their size, then the monks will need about twice as much time to solve the whole puzzle.

10.3.4 How to Guess a Particular Solution

Finding a particular solution can be the hardest part of solving inhomogeneous recurrences. This involves guessing, and you might guess wrong.¹ However, some rules of thumb make this job fairly easy most of the time.

- Generally, look for a particular solution with the same form as the inhomogeneous term $g(n)$.
- If $g(n)$ is a constant, then guess a particular solution $f(n) = c$. If this doesn't work, try polynomials of progressively higher degree: $f(n) = bn + c$, then $f(n) = an^2 + bn + c$, etc.
- More generally, if $g(n)$ is a polynomial, try a polynomial of the same degree, then a polynomial of degree one higher, then two higher, etc. For example, if $g(n) = 6n + 5$, then try $f(n) = bn + c$ and then $f(n) = an^2 + bn + c$.
- If $g(n)$ is an exponential, such as $3^n$, then first guess that $f(n) = c 3^n$. Failing that, try $f(n) = bn 3^n + c 3^n$ and then $an^2 3^n + bn 3^n + c 3^n$, etc.

¹In Chapter 12, we will show how to solve linear recurrences with generating functions; it's a little more complicated, but it does not require guessing.

The entire process is summarized in the short guide below.

Short Guide to Solving Linear Recurrences

A linear recurrence is an equation

$$f(n) = \underbrace{a_1 f(n - 1) + a_2 f(n - 2) + \cdots + a_d f(n - d)}_{\text{homogeneous part}} + \underbrace{g(n)}_{\text{inhomogeneous part}}$$

together with boundary conditions such as $f(0) = b_0$, $f(1) = b_1$, etc. Linear recurrences are solved as follows:

1. Find the roots of the characteristic equation

$$x^d = a_1 x^{d-1} + a_2 x^{d-2} + \cdots + a_{d-1} x + a_d.$$

2. Write down the homogeneous solution. Each root generates one term and the homogeneous solution is their sum. A nonrepeated root $r$ generates the term $c r^n$, where $c$ is a constant to be determined later. A root $r$ with multiplicity $k$ generates the terms

$$d_1 r^n, \quad d_2 n r^n, \quad d_3 n^2 r^n, \quad \ldots, \quad d_k n^{k-1} r^n$$

where $d_1, \ldots, d_k$ are constants to be determined later.

3. Find a particular solution. This is a solution to the full recurrence that need not be consistent with the boundary conditions. Use guess-and-verify. If $g(n)$ is a constant or a polynomial, try a polynomial of the same degree, then of one higher degree, then two higher. For example, if $g(n) = n$, then try $f(n) = bn + c$ and then $f(n) = an^2 + bn + c$. If $g(n)$ is an exponential, such as $3^n$, then first guess $f(n) = c 3^n$. Failing that, try $f(n) = (bn + c)3^n$ and then $(an^2 + bn + c)3^n$, etc.

4. Form the general solution, which is the sum of the homogeneous solution and the particular solution. Here is a typical general solution:

$$f(n) = \underbrace{c 2^n + d(-1)^n}_{\text{homogeneous solution}} + \underbrace{3n + 1}_{\text{particular solution}}.$$

5. Substitute the boundary conditions into the general solution. Each boundary condition gives a linear equation in the unknown constants. For example, substituting $f(1) = 2$ into the general solution above gives

$$2 = c \cdot 2^1 + d \cdot (-1)^1 + 3 \cdot 1 + 1 \;\Rightarrow\; -2 = 2c - d.$$

Determine the values of these constants by solving the resulting system of linear equations.

10.4 Divide-and-Conquer Recurrences

We now have a recipe for solving general linear recurrences. But the Merge Sort recurrence, which we encountered earlier, is not linear:

$$T(1) = 0$$
$$T(n) = 2T(n/2) + n - 1 \quad \text{(for } n \ge 2\text{)}.$$

In particular, $T(n)$ is not a linear combination of a fixed number of immediately preceding terms; rather, $T(n)$ is a function of $T(n/2)$, a term halfway back in the sequence.

Merge Sort is an example of a divide-and-conquer algorithm: it divides the input, "conquers" the pieces, and combines the results. Analysis of such algorithms commonly leads to divide-and-conquer recurrences, which have this form:

$$T(n) = \sum_{i=1}^{k} a_i T(b_i n) + g(n)$$

Here $a_1, \ldots, a_k$ are positive constants, $b_1, \ldots, b_k$ are constants between 0 and 1, and $g(n)$ is a nonnegative function. For example, setting $a_1 = 2$, $b_1 = 1/2$, and $g(n) = n - 1$ gives the Merge Sort recurrence.

10.4.1 The Akra-Bazzi Formula

The solution to virtually all divide-and-conquer recurrences is given by the amazing Akra-Bazzi formula.
Quite simply, the asymptotic solution to the general divide-and-conquer recurrence

$$T(n) = \sum_{i=1}^{k} a_i T(b_i n) + g(n)$$

is

$$T(n) = \Theta\!\left(n^p \left(1 + \int_1^n \frac{g(u)}{u^{p+1}} \, du\right)\right) \tag{10.2}$$

where $p$ satisfies

$$\sum_{i=1}^{k} a_i b_i^p = 1. \tag{10.3}$$

A rarely-troublesome requirement is that the function $g(n)$ must not grow or oscillate too quickly. Specifically, $|g'(n)|$ must be bounded by some polynomial. So, for example, the Akra-Bazzi formula is valid when $g(n) = n^2 \log n$, but not when $g(n) = 2^n$.

Let's solve the Merge Sort recurrence again, using the Akra-Bazzi formula instead of plug-and-chug. First, we find the value $p$ that satisfies

$$2 \cdot (1/2)^p = 1.$$

Looks like $p = 1$ does the job. Then we compute the integral:

$$\begin{aligned}
T(n) &= \Theta\!\left(n \left(1 + \int_1^n \frac{u - 1}{u^2} \, du\right)\right) \\
&= \Theta\!\left(n \left(1 + \left[\log u + \frac{1}{u}\right]_1^n\right)\right) \\
&= \Theta\!\left(n \left(\log n + \frac{1}{n}\right)\right) \\
&= \Theta(n \log n).
\end{aligned}$$

The first step is integration and the second is simplification. We can drop the $1/n$ term in the last step, because the $\log n$ term dominates. We're done!

Let's try a scary-looking recurrence:

$$T(n) = 2T(n/2) + \frac{8}{9} T(3n/4) + n^2.$$

Here, $a_1 = 2$, $b_1 = 1/2$, $a_2 = 8/9$, and $b_2 = 3/4$. So we find the value $p$ that satisfies

$$2 \cdot (1/2)^p + (8/9)(3/4)^p = 1.$$

Equations of this form don't always have closed-form solutions, so you may need to approximate $p$ numerically sometimes. But in this case the solution is simple: $p = 2$. Then we integrate:

$$\begin{aligned}
T(n) &= \Theta\!\left(n^2 \left(1 + \int_1^n \frac{u^2}{u^3} \, du\right)\right) \\
&= \Theta\!\left(n^2 (1 + \log n)\right) \\
&= \Theta(n^2 \log n).
\end{aligned}$$

That was easy!
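When equation (10.3) has no closed-form solution, a numerical root-finder suffices, because the left-hand side is strictly decreasing in $p$ (each $b_i$ is less than 1). Here is a small bisection sketch of ours that recovers $p = 1$ for Merge Sort and $p = 2$ for the scary recurrence:

```python
def akra_bazzi_p(terms, lo=-10.0, hi=10.0, tol=1e-12):
    """Find p with sum(a * b**p for a, b in terms) == 1 by bisection.

    Works because each 0 < b < 1 makes the sum strictly decreasing in p.
    """
    f = lambda p: sum(a * b**p for a, b in terms) - 1
    for _ in range(200):              # halve the bracket repeatedly
        mid = (lo + hi) / 2
        if f(mid) > 0:                # sum still too big: p must be larger
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

print(akra_bazzi_p([(2, 1/2)]))                  # ~1.0  (Merge Sort)
print(akra_bazzi_p([(2, 1/2), (8/9, 3/4)]))      # ~2.0  (the scary recurrence)
```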
10.4.2 Two Technical Issues

Until now, we've swept a couple of issues related to divide-and-conquer recurrences under the rug. Let's address those issues now.

First, the Akra-Bazzi formula makes no use of boundary conditions. To see why, let's go back to Merge Sort. During the plug-and-chug analysis, we found that

$$T_n = n T_1 + n \log n - n + 1.$$

This expresses the $n$th term as a function of the first term, whose value is specified in a boundary condition. But notice that $T_n = \Theta(n \log n)$ for every value of $T_1$. The boundary condition doesn't matter!

This is the typical situation: the asymptotic solution to a divide-and-conquer recurrence is independent of the boundary conditions. Intuitively, if the bottom-level operation in a recursive algorithm takes, say, twice as long, then the overall running time will at most double. This matters in practice, but the factor of 2 is concealed by asymptotic notation. There are corner-case exceptions. For example, the solution to $T(n) = 2T(n/2)$ is either $\Theta(n)$ or zero, depending on whether $T(1)$ is zero. These cases are of little practical interest, so we won't consider them further.

There is a second nagging issue with divide-and-conquer recurrences that does not arise with linear recurrences. Specifically, dividing a problem of size $n$ may create subproblems of non-integer size. For example, the Merge Sort recurrence contains the term $T(n/2)$. So what if $n$ is 15? How long does it take to sort seven-and-a-half items? Previously, we dodged this issue by analyzing Merge Sort only when the size of the input was a power of 2. But then we don't know what happens for an input of size, say, 100.

Of course, a practical implementation of Merge Sort would split the input approximately in half, sort the halves recursively, and merge the results. For example, a list of 15 numbers would be split into lists of 7 and 8. More generally, a list of $n$ numbers would be split into approximate halves of size $\lceil n/2 \rceil$ and $\lfloor n/2 \rfloor$. So the maximum number of comparisons is actually given by this recurrence:

$$T(1) = 0$$
$$T(n) = T(\lceil n/2 \rceil) + T(\lfloor n/2 \rfloor) + n - 1 \quad \text{(for } n \ge 2\text{)}.$$

This may be rigorously correct, but the ceiling and floor operations make the recurrence hard to solve exactly.

Fortunately, the asymptotic solution to a divide-and-conquer recurrence is unaffected by floors and ceilings. More precisely, the solution is not changed by replacing a term $T(b_i n)$ with either $T(\lceil b_i n \rceil)$ or $T(\lfloor b_i n \rfloor)$. So leaving floors and ceilings out of divide-and-conquer recurrences makes sense in many contexts; those are complications that make no difference.
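A quick experiment (our own, not from the text) backs up both claims: computing the exact floor-and-ceiling recurrence for arbitrary $n$ shows that it grows like $n \log_2 n$, and changing the boundary value only shifts the total by roughly $n$, which disappears into the $\Theta$.

```python
from functools import lru_cache
from math import ceil, floor, log2

def exact_comparisons(base_value):
    """Build the floor/ceiling Merge Sort recurrence with T(1) = base_value."""
    @lru_cache(maxsize=None)
    def T(n):
        if n <= 1:
            return base_value   # boundary condition (0 in the text)
        return T(ceil(n / 2)) + T(floor(n / 2)) + n - 1
    return T

for base in (0, 1):             # two different boundary conditions
    T = exact_comparisons(base)
    for n in (100, 10_000, 1_000_000):
        print(base, n, T(n), round(T(n) / (n * log2(n)), 3))  # ratio stays near 1
```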
10.4.3 The Akra-Bazzi Theorem

The Akra-Bazzi formula together with our assertions about boundary conditions and integrality all follow from the Akra-Bazzi Theorem, which is stated below.

Theorem 10.4.1 (Akra-Bazzi). Suppose that the function $T: \mathbb{R} \to \mathbb{R}$ is nonnegative and bounded for $0 \le x \le x_0$ and satisfies the recurrence

$$T(x) = \sum_{i=1}^{k} a_i T(b_i x + h_i(x)) + g(x) \quad \text{for } x > x_0,$$

where:

1. $a_1, \ldots, a_k$ are positive constants.
2. $b_1, \ldots, b_k$ are constants between 0 and 1.
3. $x_0$ is large enough so that $T$ is well-defined.
4. $g(x)$ is a nonnegative function such that $|g'(x)|$ is bounded by a polynomial.
5. $|h_i(x)| = O(x / \log^2 x)$.

Then

$$T(x) = \Theta\!\left(x^p \left(1 + \int_1^x \frac{g(u)}{u^{p+1}} \, du\right)\right)$$

where $p$ satisfies

$$\sum_{i=1}^{k} a_i b_i^p = 1.$$

The Akra-Bazzi theorem can be proved using a complicated induction argument, though we won't do that here. But let's at least go over the statement of the theorem.

All the recurrences we've considered were defined over the integers, and that is the common case. But the Akra-Bazzi theorem applies more generally to functions defined over the real numbers.

The Akra-Bazzi formula is lifted directly from the theorem statement, except that the recurrence in the theorem includes extra functions, $h_i$. These functions extend the theorem to address floors, ceilings, and other small adjustments to the sizes of subproblems. The trick is illustrated by this combination of parameters:

$$a_1 = 1, \quad b_1 = 1/2, \quad h_1(x) = \left\lceil \frac{x}{2} \right\rceil - \frac{x}{2}$$
$$a_2 = 1, \quad b_2 = 1/2, \quad h_2(x) = \left\lfloor \frac{x}{2} \right\rfloor - \frac{x}{2}$$
$$g(x) = x - 1$$

which corresponds to the recurrence

$$\begin{aligned}
T(x) &= 1 \cdot T\!\left(\frac{x}{2} + \left\lceil \frac{x}{2} \right\rceil - \frac{x}{2}\right) + 1 \cdot T\!\left(\frac{x}{2} + \left\lfloor \frac{x}{2} \right\rfloor - \frac{x}{2}\right) + x - 1 \\
&= T\!\left(\left\lceil \frac{x}{2} \right\rceil\right) + T\!\left(\left\lfloor \frac{x}{2} \right\rfloor\right) + x - 1.
\end{aligned}$$

This is the rigorously correct Merge Sort recurrence, valid for all input sizes, complete with floor and ceiling operators. In this case, the functions $h_1(x)$ and $h_2(x)$ are both at most 1 in absolute value, which is easily $O(x/\log^2 x)$ as required by the theorem statement. These functions $h_i$ do not affect, or even appear in, the asymptotic solution to the recurrence. This justifies our earlier claim that applying floor and ceiling operators to the size of a subproblem does not alter the asymptotic solution to a divide-and-conquer recurrence.

10.4.4 The Master Theorem

There is a special case of the Akra-Bazzi formula known as the Master Theorem that handles some of the recurrences that commonly arise in computer science. It is called the Master Theorem because it was proved long before Akra and Bazzi arrived on the scene and, for many years, it was the final word on solving divide-and-conquer recurrences. We include the Master Theorem here because it is still widely referenced in algorithms courses and you can use it without having to know anything about integration.

Theorem 10.4.2 (Master Theorem). Let $T$ be a recurrence of the form

$$T(n) = a T\!\left(\frac{n}{b}\right) + g(n).$$

Case 1: If $g(n) = O\!\left(n^{\log_b(a) - \epsilon}\right)$ for some constant $\epsilon > 0$, then

$$T(n) = \Theta\!\left(n^{\log_b(a)}\right).$$

Case 2: If $g(n) = \Theta\!\left(n^{\log_b(a)} \log^k(n)\right)$ for some constant $k \ge 0$, then

$$T(n) = \Theta\!\left(n^{\log_b(a)} \log^{k+1}(n)\right).$$

Case 3: If $g(n) = \Omega\!\left(n^{\log_b(a) + \epsilon}\right)$ for some constant $\epsilon > 0$ and $a\, g(n/b) < c\, g(n)$ for some constant $c < 1$ and sufficiently large $n$, then

$$T(n) = \Theta(g(n)).$$

The Master Theorem can be proved by induction on $n$ or, more easily, as a corollary of Theorem 10.4.1. We will not include the details here.
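For driving functions of the common shape $g(n) = n^c \log^k n$, the case analysis is mechanical: just compare $c$ with $\log_b a$. The helper below is our own sketch (for this polynomial-times-log shape with $c > \log_b a$, the Case 3 regularity condition holds automatically, since $a\,(n/b)^c \log^k(n/b) \le (a/b^c)\, n^c \log^k n$ and $a/b^c < 1$).

```python
from math import log, isclose

def master_theorem(a, b, c, k=0):
    """Classify T(n) = a*T(n/b) + Theta(n**c * log(n)**k), assuming a >= 1, b > 1, k >= 0."""
    crit = log(a, b)                    # the critical exponent log_b(a)
    if isclose(c, crit):                # Case 2
        return f"Theta(n^{crit:g} * log(n)^{k + 1})"
    if c < crit:                        # Case 1
        return f"Theta(n^{crit:g})"
    # Case 3 (regularity holds automatically for this shape of g)
    return f"Theta(n^{c:g}" + (f" * log(n)^{k})" if k else ")")

print(master_theorem(2, 2, 1))      # Merge-Sort-like: Theta(n^1 * log(n)^1)
print(master_theorem(4, 2, 1))      # Theta(n^2)
print(master_theorem(2, 2, 2))      # Theta(n^2)
```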
10.4.5 Pitfalls with Asymptotic Notation and Induction

We've seen that asymptotic notation is quite useful, particularly in connection with recurrences. And induction is our favorite proof technique. But mixing the two is risky business; there is great potential for subtle errors and false conclusions!

False Claim. If $T(1) = 1$ and $T(n) = 2T(n/2) + n$, then $T(n) = O(n)$.

The Akra-Bazzi theorem implies that the correct solution is $T(n) = \Theta(n \log n)$, and so this claim is false. But where does the following "proof" go astray?

Bogus proof. The proof is by strong induction. Let $P(n)$ be the proposition that $T(n) = O(n)$.

Base case: $P(1)$ is true because $T(1) = 1 = O(1)$.

Inductive step: For $n \ge 2$, assume $P(1)$, $P(2)$, ..., $P(n - 1)$ to prove $P(n)$. We have

$$T(n) = 2 \cdot T(n/2) + n = 2 \cdot O(n/2) + n = O(n).$$

The first equation is the recurrence, the second uses the assumption $P(n/2)$, and the third is a simplification. $\blacksquare$

Where's the bug? The proof is already far off the mark in the second sentence, which defines the induction hypothesis. The statement "$T(n) = O(n)$" is either true or false; its validity does not depend on a particular value of $n$. Thus the very idea of trying to prove that the statement holds for $n = 1$, 2, ..., is wrong-headed.

The safe way to reason inductively about asymptotic phenomena is to work directly with the definition of the asymptotic notation. Let's try to prove the claim above in this way. Remember that $f(n) = O(n)$ means that there exist constants $n_0$ and $c > 0$ such that $|f(n)| \le cn$ for all $n \ge n_0$. (Let's not worry about the absolute value for now.) If all goes well, the proof attempt should fail in some blatantly obvious way, instead of in a subtle, hard-to-detect way like the earlier argument. Since our perverse goal is to demonstrate that the proof won't work for any constants $n_0$ and $c$, we'll leave these as variables and assume only that they're chosen so that the base case holds; that is, $T(n_0) \le c n_0$.

Proof Attempt. We use strong induction. Let $P(n)$ be the proposition that $T(n) \le cn$.

Base case: $P(n_0)$ is true, because $T(n_0) \le c n_0$.

Inductive step: For $n > n_0$, assume that $P(n_0)$, ..., $P(n - 1)$ are true in order to prove $P(n)$. We reason as follows:

$$T(n) = 2T(n/2) + n \le 2c(n/2) + n = cn + n = (c + 1)n \not\le cn.$$

The first equation is the recurrence. Then we use induction and simplify until the argument collapses!

In general, it is a good idea to stay away from asymptotic notation altogether while you are doing the induction. Once the induction is over and done with, then you can safely use big-Oh to simplify your result.

10.5 A Feel for Recurrences

We've guessed and verified, plugged and chugged, found roots, computed integrals, and solved linear systems and exponential equations. Now let's step back and look for some rules of thumb. What kinds of recurrences have what sorts of solutions?

Here are some recurrences we solved earlier:

|                  | Recurrence                  | Solution                                   |
|------------------|-----------------------------|--------------------------------------------|
| Towers of Hanoi  | $T_n = 2T_{n-1} + 1$        | $T_n \approx 2^n$                          |
| Merge Sort       | $T_n = 2T_{n/2} + n - 1$    | $T_n \approx n \log n$                     |
| Hanoi variation  | $T_n = 2T_{n-1} + n$        | $T_n \approx 2 \cdot 2^n$                  |
| Fibonacci        | $T_n = T_{n-1} + T_{n-2}$   | $T_n \approx (1.618\ldots)^{n+1}/\sqrt{5}$ |

Notice that the recurrence equations for Towers of Hanoi and Merge Sort are somewhat similar, but the solutions are radically different. Merge Sorting $n = 64$ items takes a few hundred comparisons, while moving $n = 64$ disks takes more than $10^{19}$ steps!

Each recurrence has one strength and one weakness. In the Towers of Hanoi, we broke a problem of size $n$ into two subproblems of size $n - 1$ (which is large), but needed only 1 additional step (which is small). In Merge Sort, we divided the problem of size $n$ into two subproblems of size $n/2$ (which is small), but needed $n - 1$ additional steps (which is large). Yet, Merge Sort is faster by a mile!

This suggests that generating smaller subproblems is far more important to algorithmic speed than reducing the additional steps per recursive call. For example, shifting to the variation of Towers of Hanoi increased the last term from $+1$ to $+n$, but the solution only doubled. And one of the two subproblems in the Fibonacci recurrence is just slightly smaller than in Towers of Hanoi (size $n - 2$ instead of $n - 1$). Yet the solution is exponentially smaller!
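Plugging $n = 64$ into the four solutions makes the contrast vivid; the short script below (ours) prints the values behind the comparison above.

```python
from math import log2, sqrt

n = 64
hanoi       = 2**n - 1                       # Towers of Hanoi
merge_sort  = int(n * log2(n) - n + 1)       # Merge Sort comparisons
hanoi_sized = 2 * 2**n - n - 2               # variation with +n instead of +1

fib = [1, 1]                                 # Fibonacci with f(0) = f(1) = 1
for _ in range(2, n + 1):
    fib.append(fib[-1] + fib[-2])

phi = (1 + sqrt(5)) / 2
print(f"Hanoi:      {hanoi:.2e}")            # about 1.8e19 steps
print(f"Merge Sort: {merge_sort}")           # 321 comparisons
print(f"Hanoi (+n): {hanoi_sized:.2e}")      # roughly double the original Hanoi count
print(f"Fibonacci:  {fib[n]:.3e}  (estimate phi**(n+1)/sqrt(5) = {phi**(n + 1) / sqrt(5):.3e})")
```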
More generally, linear recurrences (which have big subproblems) typically have exponential solutions, while divide-and-conquer recurrences (which have small subproblems) usually have solutions bounded above by a polynomial.

All the examples listed above break a problem of size $n$ into two smaller problems. How does the number of subproblems affect the solution? For example, suppose we increased the number of subproblems in Towers of Hanoi from 2 to 3, giving this recurrence:

$$T_n = 3T_{n-1} + 1$$

This increases the root of the characteristic equation from 2 to 3, which raises the solution exponentially, from $\Theta(2^n)$ to $\Theta(3^n)$.

Divide-and-conquer recurrences are also sensitive to the number of subproblems. For example, for this generalization of the Merge Sort recurrence:

$$T_1 = 0$$
$$T_n = a T_{n/2} + n - 1,$$

the Akra-Bazzi formula gives:

$$T_n = \begin{cases} \Theta(n) & \text{for } a < 2 \\ \Theta(n \log n) & \text{for } a = 2 \\ \Theta(n^{\log_2 a}) & \text{for } a > 2. \end{cases}$$

So the solution takes on three completely different forms as $a$ goes from 1.99 to 2.01!

How do boundary conditions affect the solution to a recurrence? We've seen that they are almost irrelevant for divide-and-conquer recurrences. For linear recurrences, the solution is usually dominated by an exponential whose base is determined by the number and size of subproblems. Boundary conditions matter greatly only when they give the dominant term a zero coefficient, which changes the asymptotic solution.

So now we have a rule of thumb! The performance of a recursive procedure is usually dictated by the size and number of subproblems, rather than the amount of work per recursive call or time spent at the base of the recursion. In particular, if subproblems are smaller than the original by an additive factor, the solution is most often exponential. But if the subproblems are only a fraction the size of the original, then the solution is typically bounded by a polynomial.

MIT OpenCourseWare
6.042J / 18.062J Mathematics for Computer Science, Fall 2010
For information about citing these materials or our Terms of Use, visit:
Algebra & Number Theory msp Volume 2 2008 No. 4 Mass formulas for local Galois representations to wreath products and cross products Melanie Matchett Wood msp ALGEBRA AND NUMBER THEORY 2:4 (2008) dx.doi.org/10.2140/ant.2008.2.391 Mass formulas for local Galois representations to wreath products and cross products Melanie Matchett Wood Bhargava proved a formula for counting, with certain weights, degree n étale extensions of a local field, or equivalently, local Galois representations to Sn. This formula is motivation for his conjectures about the density of discriminants of Sn-number fields. We prove there are analogous “mass formulas” that count local Galois representations to any group that can be formed from symmetric groups by wreath products and cross products, corresponding to counting towers and direct sums of étale extensions. We obtain as a corollary that the above mentioned groups have rational character tables. Our result implies that D4 has a mass formula for certain weights, but we show that D4 does not have a mass formula when the local Galois representations to D4 are weighted in the same way as representations to S4 are weighted in Bhargava’s mass formula. 1. Introduction Bhargava proved the following mass formula for counting isomorphism classes of ´ etale extensions of degree n of a local field K: X [L:K]=n ´ etale 1 | Aut(K)| · 1 Norm( DiscK L) = n−1 X k=0 p(k, n −k)q−k, (1-1) where q is the cardinality of the residue field of K, and p(k, n −k) denotes the number of partitions of k into at most n −k parts. Equation (1-1) is proven using the beautiful mass formula of Serre which counts totally ramified degree n extensions of a local field. Equation (1-1) is at the heart of [Bhargava 2007, Conjecture 1] for the asymptotics of the number of Sn-number fields with discrim-inant ≤X, and also [Bhargava 2007, Conjectures 2–3] for the relative asymptotics of Sn-number fields with certain local behaviors specified. These conjectures are theorems for n ≤5 [Davenport and Heilbronn 1971; Bhargava 2005; ≥2008]. MSC2000: primary 11S15; secondary 11R45. Keywords: local field, mass formula, counting field extensions. 391 392 Melanie Matchett Wood Kedlaya [2007, Section 3] has translated Bhargava’s formula into the language of Galois representations so that the sum in (1-1) becomes a sum over Galois rep-resentations to Sn as follows: 1 n! X ρ:Gal(K sep/K)→Sn 1 qc(ρ) = n−1 X k=0 p(k, n −k)q−k, (1-2) where c(ρ) denotes the Artin conductor of ρ composed with the standard repre-sentation Sn →GLn(C). What is remarkable about the mass formulas in (1-1) and (1-2) is that the right hand side only depends on q and, in fact, is a polynomial (independent of q) evaluated at q−1. A priori, the left hand sides could depend on the actual local field K, and even if they only depended on q, it is not clear there should be a uniform way to write them as a polynomial function of q−1. This motivates the following definitions. Given a local field K and a finite group 0, let SK,0 denote the set of continuous homomorphisms Gal(K sep/K) →0 (for the discrete topol-ogy on 0) and let qK denote the size of the residue field of K. Given a function c: SK,0 →Z≥0, we define the total mass of (K, 0, c) to be M(K, 0, c) := X ρ∈SK,0 1 qc(ρ) K . (If the sum diverges, we could say the mass is ∞by convention. In most interesting cases, for example see [Kedlaya 2007, Remark 2.3], and all cases we consider in this paper, the sum will be convergent.) 
Kedlaya gave a similar definition, but one should note that our definition of mass differs from that in [Kedlaya 2007] by a factor of |0|. In [Kedlaya 2007], c(ρ) is always taken to be the Artin conductor of the composition of ρ and some 0 →GLn(C). We refer to such c as the counting function attached to the representation 0 →GLn(C). In this paper, we consider more general c. Given a group 0, a counting function for 0 is any function c: [ K SK,0 →Z≥0 where the union is over all isomorphism classes of local fields, such that c(ρ) = c(γργ −1) for every γ ∈0. (Since an isomorphism of local fields only determines an isomor-phism of their absolute Galois groups up to conjugation, we need this condition in order for the counting functions to be sensible.) Let c be a counting function for 0 and S be a class of local fields. We say that (0, c) has a mass formula for S if Mass formulas for local Galois representations to wreath and cross products 393 there exists a polynomial f (x) ∈Z[x] such that for all local fields K ∈S we have M(K, 0, c) = f  1 qK  . We also say that 0 has a mass formula for S if there is a c such that (0, c) has a mass formula for S. Kedlaya [2007, Theorem 8.5] proved that (W(Bn), cBn) has a mass formula for all local fields, where W(Bn) is the Weyl group of Bn and cBn is the count-ing function attached to the Weyl representation of Bn. This is in analogy with (1-2) which shows that (W(An), cAn) has a mass formula for all local fields, where W(An) ∼ = Sn is the Weyl group of An and cAn is the counting function attached to the Weyl representation of An. Kedlaya’s analogy is very attractive, but he found that it does not extend to the Weyl groups of D4 or G2 when the counting function is the one attached to the Weyl representation; he showed that mass formulas for all local fields do not exist for those groups and those particular counting functions. The main result of this paper is the following. Theorem 1.1. Any permutation group that can be constructed from the symmetric groups Sn using wreath products and cross products has a mass formula for all local fields. The mass formula of Kedlaya [2007, Theorem 8.5] for W(Bn) ∼ = S2 ≀Sn was the inspiration for this result, and it is now a special case of Theorem 1.1. Bhargava [2007, Section 8.2] asks whether his conjecture for Sn-extensions about the relative asymptotics of the number of global fields with specified local behaviors holds for other Galois groups. Ellenberg and Venkatesh [2005, Section 4.2] suggest that we can try to count extensions of global fields by quite general invariants of Galois representations. In [Wood 2008], it is shown that when count-ing by certain invariants of abelian global fields, such as conductor, Bhargava’s question can be answered affirmatively. It is also shown in [Wood 2008] that when counting abelian global fields by discriminant, the analogous conjectures fail in at least some cases. In light of the fact that Bhargava’s conjectures for the asymptotics of the number of Sn-number fields arise from his mass formula (1-1) for counting by discriminant, one naturally looks for mass formulas that use other ways of counting, such as Theorem 1.1, which might inspire conjectures for the asymptotics of counting global fields with other Galois groups. In Section 2, we prove that if groups A and B have certain refined mass formulas, then A≀B and A×B also have such refined mass formulas, which inductively proves Theorem 1.1. Bhargava’s mass formula for Sn, given in (1-2), is our base case. 
In Section 3, as a corollary of our main theorem, we see that any group formed from symmetric groups by taking wreath and cross products has a rational character table. This result, at least in such simple form, is not easily found in the literature. 394 Melanie Matchett Wood In order to suggest what our results say in the language of field extensions, in Section 4 we mention the relationship between Galois representations to wreath products and towers of field extensions. In Section 5, we discuss some situations in which groups have mass formulas for one way of counting but not another. In particular, we show that D4 ∼ = S2 ≀S2 does not have a mass formula for all local fields when c(ρ) is the counting function attached to the standard representation of S4 restricted to D4 ⊂S4. Consider quartic extensions M of K, whose Galois closure has group D4, with quadratic subfield L. The counting function that gives the mass formula for D4 of Theorem 1.1 corresponds to counting such extensions M weighted by Disc(L|K)NL|K(Disc(M|L)) −1, whereas the counting function attached to the standard representation of S4 re-stricted to D4 ⊂S4 corresponds to counting such extensions M weighted by | Disc(M|K)|−1 = Disc(L|K)2NL|K(Disc(M|L)) −1. So this change of exponent in the Disc(L|K) factor affects the existence of a mass formula for all local fields. Notation. Throughout this paper, K is a local field and GK := Gal(K sep/K) is the absolute Galois group of K. All maps in this paper from GK or subgroups of GK are continuous homomorphisms, with the discrete topology on all finite groups. We let IK denote the inertia subgroup of G K. Recall that SK,0 is the set of maps G K →0, and qK is the size of residue field of K. Also, 0 will always be a permutation group acting on a finite set. 2. Proof of Theorem 1.1 In order to prove Theorem 1.1, we prove finer mass formulas first. Instead of summing over all representations of G K, we stratify the representations by type and prove mass formulas for the sum of representations of each type. Let ρ : GK →0 be a representation such that the action of GK has r orbits m1, . . . , mr. If, under restriction to the representation ρ : IK →0, orbit mi breaks up into fi orbits of size ei, then we say that ρ is of type ( f e1 1 f e2 2 · · · f er r ) (where the terms f ei i are unordered formal symbols, as in [Bhargava 2007, Section 2]). Let Li be the fixed field of the stabilizer of an element in mi. So, [Li : K] = |mi|. Since ILi = GLi ∩IK is the stabilizer in IK of an element in mi, we conclude that ei = [IK : ILi], which is the ramification index of Li/K. Thus, fi is the inertial degree of Li/K. Given 0, a counting function c for 0, and a type σ = ( f e1 1 f e2 2 · · · f er r ), Mass formulas for local Galois representations to wreath and cross products 395 we define the total mass of (K, 0, c, σ) to be M(K, 0, c, σ) := X ρ∈SK,0 type σ 1 qc(ρ) K . We say that (0, c) has mass formulas for S by type if for every type σ there exists a polynomial f(0,c,σ)(x) ∈Z[x] such that for all local fields K ∈S we have M(K, 0, c, σ) = f(0,c,σ)  1 qK  . Bhargava [2007, Proposition 1] actually proved that Sn has mass formulas for all local fields by type. Of course, if (0, c) has mass formulas by type, then we can sum over all types to obtain a mass formula for (0, c). The key step in the proof of Theorem 1.1 is the following. Theorem 2.1. 
If A and B are finite permutation groups, S is some class of local fields, and (A, cA) and (B, cB) have mass formulas for S by type, then there exists a counting function c (given in (2-3)) such that (A ≀B, c) has mass formulas for S by type. Proof. Let K be a local field in S. Let A act on the left on the set A and B act on the left on the set B. We take the natural permutation action of A ≀B acting on a disjoint union of copies of A indexed by elements of B. Fix an ordering on B so that we have canonical orbit representatives in B. Given ρ : G K →A ≀B, there is a natural quotient ¯ ρ : G K →B. Throughout this proof, we use j as an indexing variable for the set B and i as an indexing variable for the r canonical orbit representatives in B of the ρ(G K) action. Let i j be the index of the orbit representative of j’s orbit. Let Sj ⊂GK be the stabilizer of j, and let Sj have fixed field L j. We define ρ j : GL j →A to be the given action of GL j on the j-th copy of A. We say that ρ has wreath type 6 = ( f e1 1 (σ1) · · · f er r (σr)) (2-1) if ¯ ρ has type σ = ( f e1 1 · · · f er r ) (where f ei i corresponds to the orbit of i) and ρi has type σi. Note that type is a function of wreath type; if ρ has wreath type 6 as above where σi = f ei,1 i,1 · · · f ei,ri i,ri  , then ρ has type (( fi fi,k)eiei,k)1≤i≤r, 1≤k≤ri. We consider the function c defined as follows: c(ρ) = cB( ¯ ρ) + X j∈B cA(ρ j) |{ ¯ ρ(IK) j}|. (2-2) 396 Melanie Matchett Wood Since cB( ¯ ρ) only depends on the B-conjugacy class of ¯ ρ and cA(ρ j) depends only on the A-conjugacy class of ρ j, we see that conjugation by elements of A ≀B does not affect the right hand side of (2-3) except by reordering the terms in the sum. Thus c is a counting function. Since ρ j and ρi j are representations of conjugate subfields of G K and since cA is invariant under A-conjugation, cA(ρ j) = cA(ρi j). There are fiei elements in the orbit of i under ¯ ρ(GK) and ei j elements in the orbit of j under ¯ ρ(IK), so c(ρ) = cB( ¯ ρ) + r X i=1 fiei ei cA(ρi) and thus c(ρ) = cB( ¯ ρ) + r X i=1 ficA(ρi). (2-3) Using this expression for c(ρ), we will prove that (A ≀B, c) has mass formulas by wreath type. Then, summing over wreath types that give the same type, we will prove that (A ≀B, c) has mass formulas by type. Remark 2.2. For a permutation group 0, let d0 be the counting function attached to the permutation representation of 0 (which is the discriminant exponent of the associated ´ etale extension). Then we can compute dA≀B = |A| dB( ¯ ρ) + r X i=1 fi dA(ρi), which is similar to the expression given in (2-3) but differs by the presence of |A| in the first term. In particular, when we have mass formulas for (A, dA) and (B, dB), the mass formula for A≀B that we find in this paper is not with the counting function dA≀B. We will see in Section 5, when A and B are both S2, that S2 ≀S2 ∼ = D4 does not have a mass formula with dA≀B. Lemma 2.3. The correspondence ρ 7→( ¯ ρ, ρ1, . . . , ρr) described above gives a function 9 from SK,A≀B to tuples (φ, φ1, . . . , φr) where φ : G K →B, the groups Si are the stabilizers of canonical orbit representatives of the action of φ on B, and φi : Si →A. The map 9 is (|A||B|−r)-to-one and surjective. Proof. Lemma 2.3 holds when GK is replaced by any group. It suffices to prove the lemma when ¯ ρ and φ are transitive because the general statement follows by multiplication. Let b ∈B be the canonical orbit representative. Given a φ : G K →B (or a ¯ ρ : G K →B) for all j ∈B, choose a σ j ∈G K such that φ(σ j) takes b to j. 
Given a ρ : G K →A≀B, let α j be the element of A such that ρ(σ j) acts on the b-th copy of A by α j and Mass formulas for local Galois representations to wreath and cross products 397 then moves the b-th copy of A to the j-th copy. Then for g ∈GK, the map ρ is given by ρ(g) = ¯ ρ(g)(a j) j∈B ∈B A|B| = A ≀B, (2-4) where a j = α ¯ ρ(g)( j)ρ1 σ −1 ¯ ρ(g)( j)gσ j  α−1 j , and a j ∈A acts on the j-th copy of A. For any transitive maps φ : GK →B and φb : Sb →A and for any choices of α j ∈A for all j ∈B such that αb = φb(σb), we can check that (2-4) for ¯ ρ = φ and ρ1 = φb gives a homomorphism ρ : G K →A≀B with ( ¯ ρ, ρ1) = (φ, φb), which proves the lemma. □ If 6 is as in (2-1), then X ρ:GK →A≀B wreath type 6 1 qc(ρ) K = |A||B|−r X φ:GK →B type σ X φ1:S1→A type σ1 X φ2:S2→A type σ2 · · · X φr:Sr→A type σr 1 q cB(φ)+Pr i=1 ficA(φi) K (2-5) where Si is the stabilizer under φ of a canonical orbit representative of the action of φ on B. The right hand side of (2-5) factors, and Si ⊂G K has fixed field Li with residue field of size q fi K . We conclude that X ρ:GK →A≀B wreath type 6 1 qc(ρ) K =|A||B|−r X φ:GK →B type σ 1 qcA(φ) K X φ1:GL1→A type σ1 1 q f1cB(φ1) K · · · X φr:GLr →A type σr 1 q frcB(φr) K =|A||B|−r f(B,cB,σ)  1 qK  r Y i=1 f(A,cA,σi)  1 q fi K  . So, (A ≀B, c) has mass formulas by wreath type, and thus by type. □ Kedlaya [2007, Lemma 2.6] noted that if (0, c) and (0′, c′) have mass formulas f and f ′, then (0×0′, c′′) has mass formula f f ′, where c′′(ρ×ρ′)=c(ρ)+c′(ρ′). We can strengthen this statement to mass formulas by type using a much easier version of our argument for wreath products. We define the product type of a representation ρ × ρ′ : G K →0 × 0′ to be (σ, σ ′), where σ and σ ′ are the types of ρ and ρ′ respectively. Then X ρ×ρ′:GK →0×0′ product type (σ,σ ′) 1 qc′′(ρ×ρ′) K = X φ:GK →0 type σ 1 qc(ρ) K X φ1:GL1→0′ type σ ′ 1 qc′(ρ′) K . If 0 and 0′ have mass formulas by type, then the above gives mass formulas of 0 × 0′ by product type. Since type is a function of product type, we can sum the mass formulas by product type to obtain mass formulas by type for 0 × 0′. 398 Melanie Matchett Wood This, combined with Theorem 2.1 and Bhargava’s mass formula for Sn by type [Bhargava 2007, Proposition 1], proves Theorem 1.1. 3. Groups with rational character tables Kedlaya [2007, Proposition 5.3, Corollary 5.4, Corollary 5.5] showed that if c(ρ) is the counting function attached to 0 →GLn(C), then the following statement holds: (0, c) has a mass formula for all local fields K with qK relatively prime to |0| if and only if the character table of 0 has all rational entries. The proofs of [Kedlaya 2007, Proposition 5.3, Corollary 5.4, Corollary 5.5] hold for any counting function c that is determined by ρ(IK). This suggests that we define a proper counting function to be a counting function c that satisfies the following: if we have ρ : G K →0 and ρ′ : GK ′ →0 with qK, qK ′ relatively prime to |0|, and if ρ(IK) = ρ′(IK ′), then c(ρ) = c(ρ′). For proper counting functions, we always have partial mass formulas proven as in [Kedlaya 2007, Corollary 5.4]. Proposition 3.1. Let a be an invertible residue class mod |0| and c be a proper counting function. Then (0, c) has a mass formula for all local fields K with qK ∈a. The following proposition says exactly when these partial mass formulas agree, again proven as in [Kedlaya 2007, Corollary 5.5]. Proposition 3.2. Let c be a proper counting function for 0. 
Then $(\Gamma, c)$ has a mass formula for all local fields $K$ with $q_K$ relatively prime to $|\Gamma|$ if and only if $\Gamma$ has a rational character table.

So, when looking for a group and a proper counting function with mass formulas for all local fields, we should look among groups with rational character tables (which are relatively rare, for example including only 14 of the 93 groups of order $< 32$ [Conway 2006]). All specific counting functions that have so far been considered in the literature are proper. It is not clear if there are any interesting nonproper counting functions. Our proof of Theorem 2.1 has the following corollary.

Corollary 3.3. Any permutation group that can be constructed from the symmetric groups using wreath products and cross products has a rational character table.

Proof. We first show that the counting function $c$ defined in (2-2) is proper if $c_A$ and $c_B$ are proper. We consider only fields $K$ with $q_K$ relatively prime to $|\Gamma|$. Since $c_B(\bar\rho)$ only depends on $\bar\rho(I_K)$, it is clear that the $c_B(\bar\rho)$ term only depends on $\rho(I_K)$. Since $I_{L_j} = I_K \cap S_j$, we have $\rho_j(I_{L_j}) = \rho(I_{L_j}) = \rho(I_K) \cap \operatorname{Stab}(j)$. Since $c_A(\rho_j)$ depends only on $\rho_j(I_{L_j})$, we see that it depends only on $\rho(I_K)$. The sum in (2-2) then depends only on $\rho(I_K)$. So the $c$ defined in (2-2) is proper. Clearly the $c''(\rho \times \rho')$ defined for cross products is proper if $c$ and $c'$ are proper. The counting function in Bhargava's mass formula for $S_n$ (see (1-2)) is an Artin conductor and thus is proper. So we can prove Theorem 1.1 with a proper counting function and conclude the corollary. One can show in a similar way that, even in wild characteristics, the counting function $c$ defined in (2-3) depends only on the images of the higher ramification groups $G_K^m$; that is, if $\rho : G_K \to A \wr B$ and $\rho' : G_{K'} \to A \wr B$ have $\rho(G_K^m) = \rho'(G_{K'}^m)$ for all $m \in [0, \infty)$, then $c(\rho) = c(\rho')$, as long as the same is true for $c_A$ and $c_B$. □

So, for example, $((S_7 \wr S_4) \times S_3) \wr S_8$ has a rational character table. Corollary 3.3 does not seem to be a well-reported fact in the literature; the corollary shows that all Sylow 2-subgroups of symmetric groups (which are cross products of wreath products of $S_2$'s) have rational character tables, which was posed as an open problem in [Mazurov and Khukhro 1999, Problem 15.25] and solved in [Revin 2004; Kolesnikov 2005]. However, since $A \wr (B \wr C) = (A \wr B) \wr C$ and $A \wr (B \times C) = (A \wr B) \times (A \wr C)$, any of the groups of Corollary 3.3 can be constructed using only the cross product and $\wr S_n$ operations. It is well known that the cross product of two groups with rational character tables has a rational character table. Furthermore, Pfeiffer [1994] explains how GAP computes the character table of $G \wr S_n$ from the character table of $G$, and one can check that if $G$ has a rational character table then all of the values constructed in the character table of $G \wr S_n$ are rational, which implies Corollary 3.3.

One might hope that all groups with rational character tables have mass formulas by type, but this is not necessarily the case. For example, considering $(C_3 \times C_3) \rtimes C_2$ (where $C_2$ acts nontrivially on each factor separately) in the tame case in type $(1^3\, 2^1\, 1^1)$, one can check that for $q \equiv 1 \pmod 3$ the mass is zero and for $q \equiv 2 \pmod 3$ the mass is nonzero.
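The rationality statements above can also be checked directly for small permutation groups. The Python sketch below (not from the paper) uses the standard criterion that a finite group has a rational character table if and only if every element $g$ is conjugate to $g^k$ for every $k$ prime to the order of $g$, and verifies it by brute force for $S_2 \wr S_2 \cong D_4$ and for $S_4$.

```python
from itertools import permutations
from math import gcd

# Brute-force test of the criterion: a finite group has a rational character
# table iff every g is conjugate to g^k for all k prime to ord(g).
# Permutations are stored as tuples giving the images of 0..n-1.

def compose(p1, p2):                     # (p1*p2)(x) = p1(p2(x))
    return tuple(p1[p2[x]] for x in range(len(p2)))

def inverse(p):
    inv = [0] * len(p)
    for x, y in enumerate(p):
        inv[y] = x
    return tuple(inv)

def power(p, k):
    out = tuple(range(len(p)))
    for _ in range(k):
        out = compose(out, p)
    return out

def order(p):
    k, x, e = 1, p, tuple(range(len(p)))
    while x != e:
        x, k = compose(x, p), k + 1
    return k

def generate(gens):                      # subgroup generated by gens
    group = {tuple(range(len(gens[0])))}
    frontier = set(gens)
    while frontier:
        group |= frontier
        frontier = {compose(a, b) for a in group for b in group} - group
    return group

def has_rational_character_table(group):
    for g in group:
        conj = {compose(compose(h, g), inverse(h)) for h in group}
        m = order(g)
        if any(gcd(k, m) == 1 and power(g, k) not in conj for k in range(1, m)):
            return False
    return True

D4 = generate([(1, 2, 3, 0), (2, 1, 0, 3)])         # <(1 2 3 4), (1 3)> on 4 points
S4 = set(permutations(range(4)))
print(has_rational_character_table(D4), has_rational_character_table(S4))   # True True
```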
4. Towers and direct sums of field extensions

Kedlaya explains the correspondence between Galois permutation representations and étale extensions in [Kedlaya 2007, Lemma 3.1]. We have seen this correspondence already in other terms. If we have a representation $\rho : G_K \to \Gamma$ with $r$ orbits, $S_i$ is the stabilizer of an element in the $i$-th orbit, and $L_i$ is the fixed field of $S_i$, then $\rho$ corresponds to $L = \bigoplus_{i=1}^r L_i$. For a local field $F$, let $\wp_F$ be the prime of $F$. In this correspondence, if $c$ is the counting function attached to the permutation representation of $\Gamma$, then $c$ is the discriminant exponent of the extension $L/K$ [Kedlaya 2007, Lemma 3.4]. In other words, $\wp_K^{c(\rho)} = \operatorname{Disc}(L|K)$.

We can interpret the representations $\rho : G_K \to A \wr B$ as towers of étale extensions $M/L/K$. If we take $\bar\rho : G_K \to B$, then $L = \bigoplus_{i=1}^r L_i$ is just the étale extension of $K$ corresponding to $\bar\rho$. Then if $M$ is the étale extension of $K$ corresponding to $\rho$, we see that $M = \bigoplus_{i=1}^r M_i$, where $M_i$ is the étale extension of $L_i$ corresponding to $\rho_i : G_{L_i} \to A$. So we see that $M$ is an étale extension of $L$, though $L$ might not be a field. Let $c$ be the counting function of our mass formula for wreath products, given by (2-3). From (2-3), we obtain
$$\wp_K^{c(\rho)} = \wp_K^{c_B(\bar\rho)} \prod_{i=1}^r N_{L_i|K}\bigl(\wp_{L_i}^{c_A(\rho_i)}\bigr).$$
For example, if $c_A$ and $c_B$ are both given by the discriminant exponent (or equivalently, attached to the permutation representation), then
$$\wp_K^{c(\rho)} = \operatorname{Disc}(L|K) \prod_{i=1}^r N_{L_i|K}(\operatorname{Disc}(M_i|L_i)). \tag{4-1}$$
For comparison,
$$\operatorname{Disc}(M|K) = \operatorname{Disc}(L|K)^{[M:L]} \prod_{i=1}^r N_{L_i|K}(\operatorname{Disc}(M_i|L_i)).$$
As we will see for $\Gamma = D_4$ in the next section, representations $\rho : G_K \to \Gamma$ can give not only field extensions of $K$ whose Galois closure has Galois group $\Gamma$, but also field extensions whose Galois closure has Galois group a proper subgroup of $\Gamma$, as well as direct sums of field extensions. One could say that representations $\rho : G_K \to A \wr B$ correspond to towers of "A-extensions" over "B-extensions" and further relate iterated wreath products to iterated towers. Similarly, one could say that a representation $\rho : G_K \to A \times B$ corresponds to a direct sum of an "A-extension" and a "B-extension." The quotes indicate that the extensions do not necessarily have Galois closure with group $A$ or $B$. In fact, it seems the most convenient way to define "A-extensions" or isomorphisms of "A-extensions" is simply to use the language of Galois representations, as we have in this paper.

5. Masses for D4

By Proposition 3.2 we know, at least for proper counting functions, that the existence of a mass formula for a group $\Gamma$ for fields with $q_K$ relatively prime to $|\Gamma|$ does not depend on the choice of the counting function. However, in wild characteristic this is not the case. For example, $D_4$, the dihedral group with 8 elements, is isomorphic to $S_2 \wr S_2$, so by Theorem 1.1 there is a $c$ (given in (2-3)) for which $D_4$ has a mass formula for all local fields. An expression for $c$ in terms of étale extensions can be read off from (4-1). In particular, for a surjective representation $\rho : G_K \to D_4$ corresponding to a quartic field extension $M$ of $K$ with a quadratic subextension $L$,
$$\wp_K^{c(\rho)} = \operatorname{Disc}(L|K)\, N_{L|K}(\operatorname{Disc}(M|L)). \tag{5-1}$$
For this $c$, for all local fields $K$, we have that
$$M(K, D_4, c) := \sum_{\rho \in S_{K, D_4}} \frac{1}{q_K^{c(\rho)}} = 8 + \frac{16}{q_K} + \frac{16}{q_K^2}.$$
From the definition of $c$ given in (2-2) and the description of the absolute tame Galois group of a local field, we can compute $M(K, D_4, c)$ for a field $K$ with $q_K$ odd. By Theorem 2.1 we know the formula holds for all $K$.
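For $q_K$ odd, the value $8 + 16/q_K + 16/q_K^2$ can be checked by brute force. The Python sketch below is mine, not the paper's computation: it assumes the usual presentation of the tame quotient of $G_K$ by generators $x, y$ with $y x y^{-1} = x^{q_K}$, so that tame $\rho$ correspond to pairs $(a, b) = (\rho(x), \rho(y))$ in $D_4$ with $b a b^{-1} = a^{q_K}$ and inertia image $\langle a \rangle$, and it evaluates $c$ through the blocks $\{1,3\}$ and $\{2,4\}$ of $D_4 = S_2 \wr S_2$ as in (2-2) and (2-3).

```python
from fractions import Fraction
from itertools import product

# D4 = <(1 2 3 4), (1 3)> inside S4, written 0-indexed; the blocks of
# imprimitivity for D4 = S2 wr S2 are {0, 2} and {1, 3}.
def compose(p1, p2):
    return tuple(p1[p2[x]] for x in range(4))

def inverse(p):
    inv = [0] * 4
    for x, y in enumerate(p):
        inv[y] = x
    return tuple(inv)

def power(p, k):
    out = (0, 1, 2, 3)
    for _ in range(k):
        out = compose(out, p)
    return out

gens = [(1, 2, 3, 0), (2, 1, 0, 3)]
D4 = {(0, 1, 2, 3)}
frontier = set(gens)
while frontier:
    D4 |= frontier
    frontier = {compose(a, b) for a in D4 for b in D4} - D4

def c_tame(a, b):
    # c(rho) = c_B(rho-bar) + sum over block-orbit representatives of f_i * c_A(rho_i),
    # with c_{S2} equal to 1 exactly when the relevant inertia acts nontrivially.
    swaps_a = a[0] % 2 == 1           # does the inertia generator swap the two blocks?
    swaps_b = b[0] % 2 == 1
    cB = 1 if swaps_a else 0
    if swaps_a or swaps_b:            # one orbit of blocks: e = 2 or 1, f = 2 / e
        e = 2 if swaps_a else 1
        f = 2 // e
        return cB + f * (1 if power(a, e)[0] == 2 else 0)
    # two orbits of blocks, each with e = f = 1 over K
    return (1 if a[0] == 2 else 0) + (1 if a[1] == 3 else 0)

def mass_c(q):
    # tame rho <-> pairs (a, b) in D4 x D4 with b a b^{-1} = a^q (q odd)
    total = Fraction(0)
    for a, b in product(D4, repeat=2):
        if compose(compose(b, a), inverse(b)) == power(a, q):
            total += Fraction(1, q ** c_tame(a, b))
    return total

for q in (3, 5, 7, 9, 11):
    assert mass_c(q) == 8 + Fraction(16, q) + Fraction(16, q**2)
print("M(K, D4, c) = 8 + 16/q + 16/q^2 checked for several odd q")
```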
However, the counting function for $D_4$ that has been considered when counting global extensions (for example in [Cohen et al. 2002]) is the one attached to the faithful permutation representation of $D_4$ on a four element set (equivalently, the discriminant exponent of the corresponding étale extension). We call this counting function $d$, and in comparison with (5-1) we have
$$\wp_K^{d(\rho)} = \operatorname{Disc}(M|K) = \operatorname{Disc}(L|K)^2\, N_{L|K}(\operatorname{Disc}(M|L)).$$
With $d$, we now show that $D_4$ does not have a mass formula for all local fields. Using the correspondence of Section 4, we can analyze the representations $\rho : G_K \to D_4 \subset S_4$ in Table 1, where $I = \operatorname{image}(\rho)$, $j = \#\{s \in S_4 : sIs^{-1} \subset D_4\}$ and $k = |\operatorname{Centralizer}_{S_4}(I)|$. We take the $D_4$ in $S_4$ generated by $(1\,2\,3\,4)$ and $(1\,3)$.

I                                            j    k    L
D4                                           8    2    degree 4 field whose Galois closure over K has group D4
C4                                           8    4    degree 4 field, Galois over K with group C4 ≅ Z/4
⟨(1 2)(3 4), (1 3)(2 4)⟩                     24   4    degree 4 field, Galois over K with group V4 ≅ Z/2 × Z/2
⟨(1 3), (2 4)⟩                               8    4    L1 ⊕ L2 with [Li : K] = 2 and Li distinct fields
⟨(1 3)(2 4)⟩, ⟨(1 2)(3 4)⟩ or ⟨(1 4)(2 3)⟩   24   8    L1 ⊕ L2 with [Li : K] = 2 and L1 ≅ L2 fields
⟨(2 4)⟩ or ⟨(1 3)⟩                           8    4    L1 ⊕ K ⊕ K with [L1 : K] = 2 and L1 a field
1                                            24   24   K ⊕ K ⊕ K ⊕ K

Table 1

Each isomorphism class of algebras appears $j/k$ times from a representation $\rho : G_K \to D_4$ (see [Kedlaya 2007, Lemma 3.1]). Let $S(K, G, m)$ be the set of isomorphism classes of degree $m$ field extensions of $K$ whose Galois closure over $K$ has group $G$. Then from the above table we see that
$$M(K, D_4, d) = \sum_{F \in S(K, D_4, 4)} \frac{4}{|\operatorname{Disc} F|} + \sum_{F \in S(K, C_4, 4)} \frac{2}{|\operatorname{Disc} F|} + \sum_{F \in S(K, V_4, 4)} \frac{6}{|\operatorname{Disc} F|} + \sum_{\substack{F_1, F_2 \in S(K, C_2, 2)\\ F_1 \not\cong F_2}} \frac{2}{|\operatorname{Disc} F_1|\,|\operatorname{Disc} F_2|} + \sum_{F \in S(K, C_2, 2)} \frac{3}{|\operatorname{Disc} F|^2} + \sum_{F \in S(K, C_2, 2)} \frac{2}{|\operatorname{Disc} F|} + 1,$$
where, if $\wp_F$ is the prime of $F$ and $\operatorname{Disc} F = \wp_F^m$, then $|\operatorname{Disc} F| = q_F^m$. Using the Database of Local Fields [Jones and Roberts 2006] we can compute that $M(\mathbb{Q}_2, D_4, d) = \frac{121}{8}$. For fields with $2 \nmid q_K$, the structure of the tame quotient of the absolute Galois group of a local field allows us to compute the mass to be
$$8 + \frac{8}{q_K} + \frac{16}{q_K^2} + \frac{8}{q_K^3}$$
(also see [Kedlaya 2007, Corollary 5.4]), which evaluates to 17 for $q_K = 2$. Thus $(D_4, d)$ does not have a mass formula for all local fields.
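The tame value $8 + 8/q_K + 16/q_K^2 + 8/q_K^3$ quoted above can be checked in the same way. The sketch below (again mine, not the paper's) uses the same parametrization of tame $\rho$ by pairs $(a, b)$ with $b a b^{-1} = a^{q_K}$; for the counting function $d$ attached to the permutation representation, $d(\rho)$ is the degree 4 minus the number of orbits of the inertia image $\langle a \rangle$.

```python
from fractions import Fraction
from itertools import product

# D4 = <(1 2 3 4), (1 3)> inside S4 (0-indexed permutation tuples).
def compose(p1, p2):
    return tuple(p1[p2[x]] for x in range(4))

def inverse(p):
    inv = [0] * 4
    for x, y in enumerate(p):
        inv[y] = x
    return tuple(inv)

def power(p, k):
    out = (0, 1, 2, 3)
    for _ in range(k):
        out = compose(out, p)
    return out

def orbit_count(p):                      # number of orbits of <p> on {0,1,2,3}
    seen, count = set(), 0
    for x in range(4):
        if x not in seen:
            count += 1
            while x not in seen:
                seen.add(x)
                x = p[x]
    return count

gens = [(1, 2, 3, 0), (2, 1, 0, 3)]
D4 = {(0, 1, 2, 3)}
frontier = set(gens)
while frontier:
    D4 |= frontier
    frontier = {compose(a, b) for a in D4 for b in D4} - D4

def mass_d(q):
    # tame rho <-> pairs (a, b) with b a b^{-1} = a^q; d(rho) = 4 - #orbits(<a>)
    total = Fraction(0)
    for a, b in product(D4, repeat=2):
        if compose(compose(b, a), inverse(b)) == power(a, q):
            total += Fraction(1, q ** (4 - orbit_count(a)))
    return total

for q in (3, 5, 7, 9, 11):
    assert mass_d(q) == 8 + Fraction(8, q) + Fraction(16, q**2) + Fraction(8, q**3)
# evaluating the tame polynomial at q = 2 gives 17, whereas the wild mass over Q_2
# computed from the database is 121/8, consistent with the failure noted above
print(8 + Fraction(8, 2) + Fraction(16, 4) + Fraction(8, 8))   # 17
```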
As another example, Kedlaya [2007, Proposition 9.3] found that $W(G_2)$ does not have a mass formula for all local fields of residual characteristic 2 when $c$ is the Artin conductor of the Weyl representation. However, $W(G_2) \cong S_2 \times S_3$, and thus it has a mass formula for all local fields with counting function the sum of the Artin conductors of the standard representations of $S_2$ and $S_3$.

It would be interesting to study what the presence or absence of mass formulas tells us about a counting function, in particular with respect to how global fields can be counted asymptotically with that counting function. As in Bhargava [2007, Section 8.2], we can form an Euler series
$$M_c(\Gamma, s) = C(\Gamma) \left( \sum_{\rho \in S_{\mathbb{R}, \Gamma}} \frac{1}{|\Gamma|} \right) \prod_p \left( \frac{1}{|\Gamma|} \sum_{\rho \in S_{\mathbb{Q}_p, \Gamma}} \frac{1}{p^{c(\rho)s}} \right) = \sum_{n \ge 1} m_n n^{-s},$$
where $C(\Gamma)$ is some simple, yet to be explained, rational constant. (We work over $\mathbb{Q}$ for simplicity, and the product is over rational primes.) For a representation $\rho : G_{\mathbb{Q}} \to \Gamma$, let $\rho_p$ be the restriction of $\rho$ to $G_{\mathbb{Q}_p}$. The idea is that $m_n$ should be a heuristic of the number of $\Gamma$-extensions of $\mathbb{Q}$ (that is, surjective $\rho : G_{\mathbb{Q}} \to \Gamma$) with $\prod_p p^{c(\rho_p)} = n$, though $m_n$ is not necessarily an integer. Bhargava [2007, Section 8.2] asks the following question.

Question 5.1. Does
$$\lim_{X \to \infty} \frac{\sum_{n=1}^{X} m_n}{\#\bigl\{\text{isom. classes of surjective } \rho : G_{\mathbb{Q}} \to \Gamma \text{ with } \prod_p p^{c(\rho_p)} \le X\bigr\}} = 1?$$

Bhargava in fact asks more refined questions in which some local behaviors are fixed. With the counting function $d$ for $D_4$ attached to the permutation representation (that is, the discriminant exponent), we can form $M_d(D_4, s)$ and compute numerically the above limit. We use the work of Cohen, Diaz y Diaz, and Olivier on counting $D_4$-extensions by discriminant (see [Cohen et al. 2006] for a recent value of the relevant constants) to calculate the limit of the denominator, and we use standard Tauberian theorems (see [Narkiewicz 1983, Corollary, p. 121]) and PARI/GP to calculate the limit of the numerator. Of course, $C(D_4)$ has not been decided, but it does not appear (by using the algdep function in PARI/GP) that any simple rational $C(D_4)$ will give an affirmative answer to the above question.

In light of our mass formula for a different counting function $c$ for $D_4$, we naturally wonder about Question 5.1 in the case of $D_4$ and that $c$. Answering this question would require counting $D_4$ extensions $M$ with quadratic subfield $L$ by $\operatorname{Disc}(L|\mathbb{Q})\, N_{L|\mathbb{Q}}(\operatorname{Disc}(M|L))$ instead of by discriminant, which is $\operatorname{Disc}(L|\mathbb{Q})^2 N_{L|\mathbb{Q}}(\operatorname{Disc}(M|L))$.

Acknowledgements

I would like to thank Manjul Bhargava for his guidance while I was doing this research and for thorough and helpful comments on earlier versions of this manuscript. I would also like to thank Kiran Kedlaya for providing me an early version of [Kedlaya 2007] and for answers to my questions about that paper. The referee gave suggestions for additions and improvements to the paper that were incorporated and much appreciated. This work was supported by a National Defense Science and Engineering Graduate Fellowship.
References

[Bhargava 2005] M. Bhargava, "The density of discriminants of quartic rings and fields", Ann. of Math. (2) 162:2 (2005), 1031-1063. MR 2006m:11163 Zbl 05042692
[Bhargava 2007] M. Bhargava, "Mass formulae for extensions of local fields, and conjectures on the density of number field discriminants", Int. Math. Res. Not. 2007:17 (2007), rnm052. MR 2354798 Zbl 05215305
[Bhargava ≥2008] M. Bhargava, "The density of discriminants of quintic rings and fields", Ann. of Math., to appear.
[Cohen et al. 2002] H. Cohen, F. Diaz y Diaz, and M. Olivier, "Enumerating quartic dihedral extensions of Q", Compositio Math. 133:1 (2002), 65-93. MR 2003f:11167 Zbl 1050.11104
[Cohen et al. 2006] H. Cohen, F. Diaz y Diaz, and M. Olivier, "Counting discriminants of number fields", J. Théor. Nombres Bordeaux 18:3 (2006), 573-593. MR 2008d:11127 Zbl 05186992
[Conway 2006] J. H. Conway, personal communication, 2006.
[Davenport and Heilbronn 1971] H. Davenport and H. Heilbronn, "On the density of discriminants of cubic fields. II", Proc. Roy. Soc. London Ser. A 322:1551 (1971), 405-420. MR 58 #10816 Zbl 0212.08101
[Ellenberg and Venkatesh 2005] J. S. Ellenberg and A. Venkatesh, "Counting extensions of function fields with bounded discriminant and specified Galois group", pp. 151-168 in Geometric methods in algebra and number theory (Miami, 2003), edited by F. Bogomolov and Y. Tschinkel, Progr. Math. 235, Birkhäuser, Boston, 2005. MR 2006f:11139 Zbl 1085.11057
[Jones and Roberts 2006] J. W. Jones and D. P. Roberts, "A database of local fields", J. Symbolic Comput. 41:1 (2006), 80-97. MR 2006k:11230 Zbl 05203425
[Kedlaya 2007] K. S. Kedlaya, "Mass formulas for local Galois representations", Int. Math. Res. Not. 2007:17 (2007), rnm021. MR 2354797 Zbl 05215145
[Kolesnikov 2005] S. G. Kolesnikov, "On the rationality and strong reality of Sylow 2-subgroups of Weyl and alternating groups", Algebra Logika 44:1 (2005), 44-53, 127. MR 2006d:20012 Zbl 05041622
[Mazurov and Khukhro 1999] V. D. Mazurov and E. I. Khukhro (editors), The Kourovka notebook: Unsolved problems in group theory, 14th augmented ed., Russian Academy of Sciences Siberian Division Institute of Mathematics, Novosibirsk, 1999. MR 2000h:20001a Zbl 0943.20004
[Narkiewicz 1983] W. Narkiewicz, Number theory, World Scientific, Singapore, 1983. Translated from the Polish by S. Kanemitsu. MR 85j:11002 Zbl 0528.10001
[PARI 2006] PARI/GP, version 2.3.2, 2006, available at http://pari.math.u-bordeaux.fr.
[Pfeiffer 1994] G. Pfeiffer, "Character tables of Weyl groups in GAP", Bayreuth. Math. Schr. 47 (1994), 165-222. MR 95d:20027 Zbl 0830.20023
[Revin 2004] D. O. Revin, "The characters of groups of type X ≀ Z_p", Sib. Èlektron. Mat. Izv. 1 (2004), 110-116. MR 2005m:20025 Zbl 1079.20011
[Serre 1978] J.-P. Serre, "Une 'formule de masse' pour les extensions totalement ramifiées de degré donné d'un corps local", C. R. Acad. Sci. Paris Sér. A-B 286:22 (1978), A1031-A1036. MR 80a:12018 Zbl 0388.12005
[Wood 2008] M. M. Wood, "On the probabilities of local behaviors in abelian field extensions", preprint, 2008.

Communicated by Hendrik W. Lenstra. Received 2007-11-28. Revised 2008-03-31. Accepted 2008-04-28.
Princeton University, Department of Mathematics, Fine Hall, Washington Road, Princeton, NJ 08544, United States
The House Of The Wolfings by William Morris VIII. THE FOLK-MOTE OF THE MARKMEN So the Dayling warrior lifted up his voice and said: "O kindreds of the Markmen, hearken the words I say; For no chancehap assembly is gathered here to-day. The fire hath gone around us in the hands of our very kin, And twice the horn hath sounded, and the Thing is hallowed in. Will ye hear or forbear to hearken the tale there is to tell? There are many mouths to tell it, and a many know it well. And the tale is this, that the foemen against our kindreds fare Who eat the meadows desert, and burn the desert bare." Then sat he down on the turf seat; but there arose a murmur in the assembly as of men eager to hearken; and without more ado came a man out of a company of the Upper-mark, and clomb up to the top of the Speech-Hill, and spoke in a loud voice: "I am Bork, a man of the Geirings of the Upper-mark: two days ago I and five others were in the wild-wood a-hunting, and we wended through the thicket, and came into the land of the hill-folk; and after we had gone a while we came to a long dale with a brook running through it, and yew-trees scattered about it and a hazel copse at one end; and by the copse was a band of men who had women and children with them, and a few neat, and fewer horses; but sheep were feeding up and down the dale; and they had made them booths of turf and boughs, and were making ready their cooking fires, for it was evening. So when they saw us, they ran to their arms, but we cried out to them in the tongue of the Goths and bade them peace. Then they came up the bent to us and spake to us in the Gothic tongue, albeit a little diversely from us; and when we had told them what and whence we were, they were glad of us, and bade us to them, and we went, and they entreated us kindly, and made us such cheer as they might, and gave us mutton to eat, and we gave them venison of the wild-wood which we had taken, and we abode with them there that night. "But they told us that they were a house of the folk of the herdsmen, and that there was war in the land, and that the people thereof were fleeing before the cruelty of a host of warriors, men of a mighty folk, such as the earth hath not heard of, who dwell in great cities far to the south; and how that this host had crossed the mountains, and the Great Water that runneth from them, and had fallen upon their kindred, and overcome their fighting-men, and burned their dwellings, slain their elders, and driven their neat and their sheep, yea, and their women and children in no better wise than their neat and sheep. "And they said that they had fled away thus far from their old habitations, which were a long way to the south, and were now at point to build them dwellings there in that Dale of the Hazels, and to trust to it that these Welshmen, whom they called Romans, would not follow so far, and that if they did, they might betake them to the wild-wood, and let the thicket cover them, they being so nigh to it. "Thus they told us; wherefore we sent back one of our fellowship, Birsti of the Geirings, to tell the tale; and one of the herdsmen folk went with him, but we ourselves went onward to hear more of these Romans; for the folk when we asked them, said that they had been in battle against them, but had fled away for fear of their rumour only. Therefore we went on, and a young man of this kindred, who named themselves the Hrutings of the Fell-folk, went along with us. But the others were sore afeard, for all they had weapons. 
"So as we went up the land we found they had told us the very sooth, and we met divers Houses, and bands, and broken men, who were fleeing from this trouble, and many of them poor and in misery, having lost their flocks and herds as well as their roofs; and this last be but little loss to them, as their dwellings are but poor, and for the most part they have no tillage. Now of these men, we met not a few who had been in battle with the Roman host, and much they told us of their might not to be dealt with, and their mishandling of those whom they took, both men and women; and at the last we heard true tidings how they had raised them a garth, and made a stronghold in the midst of the land, as men who meant abiding there, so that neither might the winter drive them aback, and that they might be succoured by their people on the other side of the Great River; to which end they have made other garths, though not so great, on the road to that water, and all these well and wisely warded by tried men. For as to the Folks on the other side of the Water, all these lie under their hand already, what by fraud what by force, and their warriors go with them to the battle and help them; of whom we met bands now and again, and fought with them, and took men of them, who told us all this and much more, over long to tell of here." He paused and turned about to look on the mighty assembly, and his ears drank in the long murmur that followed his speaking, and when it had died out he spake again, but in rhyme: "Lo thus much of my tidings! But this too it behoveth to tell, That these masterful men of the cities of the Markmen know full well: And they wot of the well-grassed meadows, and the acres of the Mark, And our life amidst of the wild-wood like a candle in the dark; And they know of our young men's valour and our women's loveliness, And our tree would they spoil with destruction if its fruit they may never possess. For their lust is without a limit, and nought may satiate Their ravening maw; and their hunger if ye check it turneth to hate, And the blood-fever burns in their bosoms, and torment and anguish and woe O'er the wide field ploughed by the sword-blade for the coming years they sow; And ruth is a thing forgotten and all hopes they trample down; And whatso thing is steadfast, whatso of good renown, Whatso is fair and lovely, whatso is ancient sooth In the bloody marl shall they mingle as they laugh for lack of ruth. Lo the curse of the world cometh hither; for the men that we took in the land Said thus, that their host is gathering with many an ordered band To fall on the wild-wood passes and flood the lovely Mark, As the river over the meadows upriseth in the dark. Look to it, O ye kindred! availeth now no word But the voice of the clashing of iron, and the sword-blade on the sword." Therewith he made an end, and deeper and longer was the murmur of the host of freemen, amidst which Bork gat him down from the Speech-Hill, his weapons clattering about him, and mingled with the men of his kindred. Then came forth a man of the kin of the Shieldings of the Upper-mark, and clomb the mound; and he spake in rhyme from beginning to end; for he was a minstrel of renown: "Lo I am a man of the Shieldings and Geirmund is my name; A half-moon back from the wild-wood out into the hills I came, And I went alone in my war-gear; for we have affinity With the Hundings of the Fell-folk, and with them I fain would be; For I loved a maid of their kindred. 
Now their dwelling was not far From the outermost bounds of the Fell-folk, and bold in the battle they are, And have met a many people, and held their own abode. Gay then was the heart within me, as over the hills I rode And thought of the mirth of to-morrow and the sweet-mouthed Hunding maid And their old men wise and merry and their young men unafraid, And the hall-glee of the Hundings and the healths o'er the guesting cup. But as I rode the valley, I saw a smoke go up O'er the crest of the last of the grass-hills 'twixt me and the Hunding roof, And that smoke was black and heavy: so a while I bided aloof, And drew my girths the tighter, and looked to the arms I bore And handled my spear for the casting; for my heart misgave me sore, For nought was that pillar of smoke like the guest-fain cooking-fire. I lingered in thought for a minute, then turned me to ride up higher, And as a man most wary up over the bent I rode, And nigh hid peered o'er the hill-crest adown on the Hunding abode; And forsooth 'twas the fire wavering all o'er the roof of old, And all in the garth and about it lay the bodies of the bold; And bound to a rope amidmost were the women fair and young, And youths and little children, like the fish on a withy strung As they lie on the grass for the angler before the beginning of night. Then the rush of the wrath within me for a while nigh blinded my sight; Yet about the cowering war-thralls, short dark-faced men I saw, Men clad in iron armour, this way and that way draw, As warriors after the battle are ever wont to do. Then I knew them for the foemen and their deeds to be I knew, And I gathered the reins together to ride down the hill amain, To die with a good stroke stricken and slay ere I was slain. When lo, on the bent before me rose the head of a brown-faced man, Well helmed and iron-shielded, who some Welsh speech began And a short sword brandished against me; then my sight cleared and I saw Five others armed in likewise up hill and toward me draw, And I shook the spear and sped it and clattering on his shield He fell and rolled o'er smitten toward the garth and the Fell-folk's field. "But my heart changed with his falling and the speeding of my stroke, And I turned my horse; for within me the love of life awoke, And I spurred, nor heeded the hill-side, but o'er rough and smooth I rode Till I heard no chase behind me; then I drew rein and abode. And down in a dell was I gotten with a thorn-brake in its throat, And heard but the plover's whistle and the blackbird's broken note 'Mid the thorns; when lo! from a thorn-twig away the blackbird swept, And out from the brake and towards me a naked man there crept, And straight I rode up towards him, and knew his face for one I had seen in the hall of the Hundings ere its happy days were done. I asked him his tale, but he bade me forthright to bear him away; So I took him up behind me, and we rode till late in the day, Toward the cover of the wild-wood, and as swiftly as we might. But when yet aloof was the thicket and it now was moonless night, We stayed perforce for a little, and he told me all the tale: How the aliens came against them, and they fought without avail Till the Roof o'er their heads was burning and they burst forth on the foe, And were hewn down there together; nor yet was the slaughter slow. But some they saved for thralldom, yea, e'en of the fighting men, Or to quell them with pains; so they stripped them; and this man espying just then Some chance, I mind not whatwise, from the garth fled out and away. 
"Now many a thing noteworthy of these aliens did he say, But this I bid you hearken, lest I wear the time for nought, That still upon the Markmen and the Mark they set their thought; For they questioned this man and others through a go-between in words Of us, and our lands and our chattels, and the number of our swords; Of the way and the wild-wood passes and the winter and his ways. Now look to see them shortly; for worn are fifteen days Since in the garth of the Hundings I saw them dight for war, And a hardy folk and ready and a swift-foot host they are." Therewith Geirmund went down clattering from the Hill and stood with his company. But a man came forth from the other side of the ring, and clomb the Hill: he was a red-haired man, rather big, clad in a skin coat, and bearing a bow in his hand and a quiver of arrows at his back, and a little axe hung by his side. He said: "I dwell in the House of the Hrossings of the Mid-mark, and I am now made a man of the kindred: howbeit I was not born into it; for I am the son of a fair and mighty woman of a folk of the Kymry, who was taken in war while she went big with me; I am called Fox the Red. "These Romans have I seen, and have not died: so hearken! for my tale shall be short for what there is in it. "I am, as many know, a hunter of Mirkwood, and I know all its ways and the passes through the thicket somewhat better than most. "A moon ago I fared afoot from Mid-mark through Upper-mark into the thicket of the south, and through it into the heath country; and I went over a neck and came in the early dawn into a little dale when somewhat of mist still hung over it. At the dale's end I saw a man lying asleep on the grass under a quicken tree, and his shield and sword hanging over his head to a bough thereof, and his horse feeding hoppled higher up the dale. "I crept up softly to him with a shaft nocked on the string, but when I drew near I saw him to be of the sons of the Goths. So I doubted nothing, but laid down my bow, and stood upright, and went to him and roused him, and he leapt up, and was wroth. "I said to him, 'Wilt thou be wroth with a brother of the kindred meeting him in unpeopled parts?' "But he reached out for his weapons; but ere he could handle them I ran in on him so that he gat not his sword, and had scant time to smite at me with a knife which he drew from his waist. "I gave way before him for he was a very big man, and he rushed past me, and I dealt him a blow on the side of the head with my little axe which is called the War-babe, and gave him a great wound: and he fell on the grass, and as it happened that was his bane. "I was sorry that I had slain him, since he was a man of the Goths: albeit otherwise he had slain me, for he was very wroth and dazed with slumber. "He died not for a while; and he bade me fetch him water; and there was a well hard by on the other side of the tree; so I fetched it him in a great shell that I carry, and he drank. I would have sung the blood-staunching song over him, for I know it well. But he said, 'It availeth nought: I have enough: what man art thou?' "I said, 'I am a fosterling of the Hrossings, and my mother was taken in war: my name is Fox.' "Said he; 'O Fox, I have my due at thy hands, for I am a Markman of the Elkings, but a guest of the Burgundians beyond the Great River; and the Romans are their masters and they do their bidding: even so did I who was but their guest: and I a Markman to fight against the Markmen, and all for fear and for gold! 
And thou an alien-born hast slain their traitor and their dastard! This is my due. Give me to drink again.' "So did I; and he said; 'Wilt thou do an errand for me to thine own house?' 'Yea,' said I. "Said he, 'I am a messenger to the garth of the Romans, that I may tell the road to the Mark, and lead them through the thicket; and other guides are coming after me: but not yet for three days or four. So till they come there will be no man in the Roman garth to know thee that thou art not even I myself. If thou art doughty, strip me when I am dead and do my raiment on thee, and take this ring from my neck, for that is my token, and when they ask thee for a word say, "NO LIMIT"; for that is the token-word. Go south-east over the dales keeping Broadshield-fell square with thy right hand, and let thy wisdom, O Fox, lead thee to the Garth of the Romans, and so back to thy kindred with all tidings thou hast gathered--for indeed they come--a many of them. Give me to drink.' "So he drank again, and said, 'The bearer of this token is called Hrosstyr of the River Goths. He hath that name among dastards. Thou shalt lay a turf upon my head. Let my death pay for my life.' "Therewith he fell back and died. So I did as he bade me and took his gear, worth six kine, and did it on me; I laid turf upon him in that dale, and hid my bow and my gear in a blackthorn brake hard by, and then took his horse and rode away. "Day and night I rode till I came to the garth of the Romans; there I gave myself up to their watchers, and they brought me to their Duke, a grim man and hard. He said in a terrible voice, 'Thy name?' I said, 'Hrosstyr of the River Goths.' He said, 'What limit?' I answered, 'NO LIMIT.' 'The token!' said he, and held out his hand. I gave him the ring. 'Thou art the man,' said he. "I thought in my heart, 'thou liest, lord,' and my heart danced for joy. "Then he fell to asking me questions a many, and I answered every one glibly enough, and told him what I would, but no word of truth save for his hurt, and my soul laughed within me at my lies; thought I, the others, the traitors, shall come, and they shall tell him the truth, and he will not trow it, or at the worst he will doubt them. But me he doubted nothing, else had he called in the tormentors to have the truth of me by pains; as I well saw afterwards, when they questioned with torments a man and a woman of the hill-folk whom they had brought in captive. "I went from him and went all about that garth espying everything, fearing nothing; albeit there were divers woful captives of the Goths, who cursed me for a dastard, when they saw by my attire that I was of their blood. "I abode there three days, and learned all that I might of the garth and the host of them, and the fourth day in the morning I went out as if to hunt, and none hindered me, for they doubted me not. "So I came my ways home to the Upper-mark, and was guested with the Geirings. Will ye that I tell you somewhat of the ways of these Romans of the garth? The time presses, and my tale runneth longer than I would. What will ye?" Then there arose a murmur, "Tell all, tell all." "Nay," said the Fox, "All I may not tell; so much did I behold there during the three days' stay; but this much it behoveth you to know: that these men have no other thought save to win the Mark and waste it, and slay the fighting men and the old carles, and enthrall such as they will, that is, all that be fair and young, and they long sorely for our women either to have or to sell. 
"As for their garth, it is strongly walled about with a dyke newly dug; on the top thereof are they building a wall made of clay, and burned like pots into ashlar stones hard and red, and these are laid in lime. "It is now the toil of the thralls of our blood whom they have taken, both men and women, to dig that clay and to work it, and bear it to kilns, and to have for reward scant meat and many stripes. For it is a grim folk, that laugheth to see others weep. "Their men-at-arms are well dight and for the most part in one way: they are helmed with iron, and have iron on their breasts and reins, and bear long shields that cover them to the knees. They are girt with a sax and have a heavy casting-spear. They are dark-skinned and ugly of aspect, surly and of few words: they drink little, and eat not much. "They have captains of tens and of hundreds over them, and that war- duke over all; he goeth to and fro with gold on his head and his breast, and commonly hath a cloak cast over him of the colour of the crane's-bill blossom. "They have an altar in the midst of their burg, and thereon they sacrifice to their God, who is none other than their banner of war, which is an image of the ravening eagle with outspread wings; but yet another God they have, and look you! it is a wolf, as if they were of the kin of our brethren; a she-wolf and two man-children at her dugs; wonderful is this. "I tell you that they are grim; and know it by this token: those captains of tens, and of hundreds, spare not to smite the warriors with staves even before all men, when all goeth not as they would; and yet, though they be free men, and mighty warriors, they endure it and smite not in turn. They are a most evil folk. "As to their numbers, they of the burg are hard on three thousand footmen of the best; and of horsemen five hundred, nowise good; and of bowmen and slingers six hundred or more: their bows weak; their slingers cunning beyond measure. And the talk is that when they come upon us they shall have with them some five hundred warriors of the Over River Goths, and others of their own folk." Then he said: "O men of the Mark, will ye meet them in the meadows and the field, Or will ye flee before them and have the wood for a shield? Or will ye wend to their war-burg with weapons cast away, With your women and your children, a peace of them to pray? So doing, not all shall perish; but most shall long to die Ere in the garths of the Southland two moons have loitered by." Then rose the rumour loud and angry mingled with the rattle of swords and the clash of spears on shields; but Fox said: "Needs must ye follow one of these three ways. Nay, what say I? there are but two ways and not three; for if ye flee they shall follow you to the confines of the earth. Either these Welsh shall take all, and our lives to boot, or we shall hold to all that is ours, and live merrily. The sword doometh; and in three days it may be the courts shall be hallowed: small is the space between us." Therewith he also got him down from the Hill, and joined his own house: and men said that he had spoken well and wisely. But there arose a noise of men talking together on these tidings; and amidst it an old warrior of the Nether-mark strode forth and up to the Hill- top. Gaunt and stark he was to look on; and all men knew him and he was well-beloved, so all held their peace as he said: "I am Otter of the Laxings: now needeth but few words till the War- duke is chosen, and we get ready to wend our ways in arms. 
Here have ye heard three good men and true tell of our foes, and this last, Fox the Red, hath seen them and hath more to tell when we are on the way; nor is the way hard to find. It were scarce well to fall upon these men in their garth and war-burg; for hard is a wall to slay. Better it were to meet them in the Wild-wood, which may well be a friend to us and a wall, but to them a net. O Agni of the Daylings, thou warder of the Thing-stead, bid men choose a War-duke if none gainsay it." And without more words he clattered down the Hill, and went and stood with the Laxing band. But the old Dayling arose and blew the horn, and there was at once a great silence, amidst which he said: "Children of Slains-father, doth the Folk go to the war?" There was no voice but shouted "yea," and the white swords sprang aloft, and the westering sun swept along a half of them as they tossed to and fro, and the others showed dead-white and fireless against the dark wood. Then again spake Agni: "Will ye choose the War-duke now and once, or shall it be in a while, after others have spoken?" And the voice of the Folk went up, "Choose! Choose!" Said Agni: "Sayeth any aught against it?" But no voice of a gainsayer was heard, and Agni said: "Children of Tyr, what man will ye have for a leader and a duke of war?" Then a great shout sprang up from amidst the swords: "We will have Thiodolf; Thiodolf the Wolfing!" Said Agni: "I hear no other name; are ye of one mind? hath any aught to say against it? If that be so, let him speak now, and not forbear to follow in the wheatfield of the spears. Speak, ye that will not follow Thiodolf!" No voice gainsaid him: then said the Dayling: "Come forth thou War- duke of the Markmen! take up the gold ring from the horns of the altar, set it on thine arm and come up hither!" Then came forth Thiodolf into the sun, and took up the gold ring from where it lay, and did it on his arm. And this was the ring of the leader of the folk whenso one should be chosen: it was ancient and daintily wrought, but not very heavy: so ancient it was that men said it had been wrought by the dwarfs. So Thiodolf went up on to the hill, and all men cried out on him for joy, for they knew his wisdom in war. Many wondered to see him unhelmed, but they had a deeming that he must have made oath to the Gods thereof and their hearts were glad of it. They took note of the dwarf-wrought hauberk, and even from a good way off they could see what a treasure of smith's work it was, and they deemed it like enough that spells had been sung over it to make it sure against point and edge: for they knew that Thiodolf was well beloved of the Gods. But when Thiodolf was on the Hill of Speech, he said: "Men of the kindreds, I am your War-duke to-day; but it is oftenest the custom when ye go to war to choose you two dukes, and I would it were so now. No child's play is the work that lies before us; and if one leader chance to fall let there be another to take his place without stop or stay. Thou Agni of the Daylings, bid the Folk choose them another duke if so they will." Said Agni: "Good is this which our War-duke hath spoken; say then, men of the Mark, who shall stand with Thiodolf to lead you against the aliens?" Then was there a noise and a crying of names, and more than two names seemed to be cried out; but by far the greater part named either Otter of the Laxings, or Heriulf of the Wolfings. 
True it is that Otter was a very wise warrior, and well known to all the men of the Mark; yet so dear was Heriulf to them, that none would have named Otter had it not been mostly their custom not to choose both War- dukes from one House. Now spake Agni: "Children of Tyr, I hear you name more than one name: now let each man cry out clearly the name he nameth. So the Folk cried the names once more, but this time it was clear that none was named save Otter and Heriulf; so the Dayling was at point to speak again, but or ever a word left his lips, Heriulf the mighty, the ancient of days, stood forth: and when men saw that he would take up the word there was a great silence. So he spake: "Hearken, children! I am old and war-wise; but my wisdom is the wisdom of the sword of the mighty warrior, that knoweth which way it should wend, and hath no thought of turning back till it lieth broken in the field. Such wisdom is good against Folks that we have met heretofore; as when we have fought with the Huns, who would sweep us away from the face of the earth, or with the Franks or the Burgundians, who would quell us into being something worser than they be. But here is a new foe, and new wisdom, and that right shifty, do we need to meet them. One wise duke have ye gotten, Thiodolf to wit; and he is young beside me and beside Otter of the Laxings. And now if ye must needs have an older man to stand beside him, (and that is not ill) take ye Otter; for old though his body be, the thought within him is keen and supple like the best of Welsh-wrought blades, and it liveth in the days that now are: whereas for me, meseemeth, my thoughts are in the days bygone. Yet look to it, that I shall not fail to lead as the sword of the valiant leadeth, or the shaft shot by the cunning archer. Choose ye Otter; I have spoken over long." Then spoke Agni the Dayling, and laughed withal: "One man of the Folk hath spoken for Otter and against Heriulf--now let others speak if they will!" So the cry came forth, "Otter let it be, we will have Otter!" "Speaketh any against Otter?" said Agni. But there was no voice raised against him. Then Agni said: "Come forth, Otter of the Laxings, and hold the ring with Thiodolf." Then Otter went up on to the hill and stood by Thiodolf, and they held the ring together; and then each thrust his hand and arm through the ring and clasped hands together, and stood thus awhile, and all the Folk shouted together. Then spake Agni: "Now shall we hew the horses and give the gifts to the Gods." Therewith he and the two War-dukes came down from the hill; and stood before the altar; and the nine warriors of the Daylings stood forth with axes to hew the horses and with copper bowls wherein to catch the blood of them, and each hewed down his horse to the Gods, but the two War-dukes slew the tenth and fairest: and the blood was caught in the bowls, and Agni took a sprinkler and went round about the ring of men, and cast the blood of the Gods'-gifts over the Folk, as was the custom of those days. Then they cut up the carcases and burned on the altar the share of the Gods, and Agni and the War-dukes tasted thereof, and the rest they bore off to the Daylings' abode for the feast to be holden that night. 
Then Otter and Thiodolf spake apart together for awhile, and presently went up again on to the Speech-Hill, and Thiodolf said: "O kindreds of the Markmen; to-morrow with the day We shall wend up Mirkwood-water to bar our foes the way; And there shall we make our wain-burg on the edges of the wood, Where in the days past over at last the aliens stood, The Slaughter Tofts ye call it. There tidings shall we get If the curse of the world is awakened, and the serpent crawleth yet Amidst the Mirkwood thicket; and when the sooth we know, Then bearing battle with us through the thicket shall we go, The ancient Wood-wolf's children, and the People of the Shield, And the Spear-kin and the Horse-kin, while the others keep the field About the warded wain-burg; for not many need we there Where amidst of the thickets' tangle and the woodland net they fare, And the hearts of the aliens falter and they curse the fight ne'er done, And wonder who is fighting and which way is the sun." Thus he spoke; then Agni took up the war-horn again, and blew a blast, and then he cried out: "Now sunder we the Folk-mote! and the feast is for to-night, And to-morrow the Wayfaring; But unnamed is the day of the fight; O warriors, look ye to it that not long we need abide 'Twixt the hour of the word we have spoken, and our fair-fame's blooming tide! For then 'midst the toil and the turmoil shall we sow the seeds of peace, And the Kindreds' long endurance, and the Goth-folk's great increase." Then arose the last great shout, and soberly and in due order, kindred by kindred, they turned and departed from the Thing-stead and went their way through the wood to the abode of the Daylings.
Published Time: 2022-06-10T23:37:46Z 3.5.4: Falling Raindrops - Mathematics LibreTexts ===============
3.5.4: Falling Raindrops
Russell Herman, University of North Carolina Wilmington (from "A First Course in Differential Equations for Scientists and Engineers"; last updated May 24, 2024)

A simple problem that appears in mechanics is that of a falling raindrop through a mist. The raindrop not only undergoes free fall, but the mass of the drop grows as it interacts with the mist. There have been several papers written on this problem and it is a nice example to explore using numerical methods. In this section we look at models of a falling raindrop with and without air drag.

First we consider the case in which there is no air drag. A simple model of free fall from Newton's Second Law of Motion is
$$\frac{d(mv)}{dt} = mg.$$
In this discussion we will take downward as positive. Since the mass is not constant, we have
$$m\frac{dv}{dt} = mg - v\frac{dm}{dt}.$$
In order to proceed, we need to specify the rate at which the mass is changing. There are several models one can adapt. We will borrow some of the ideas and in some cases the numerical values from Sokal (2010)¹ and Edwards, Wilder, and Scime (2001).² These papers also quote other interesting work on the topic.

¹ A. D. Sokal, The falling raindrop, revisited, Am. J. Phys. 78, 643-645 (2010).
² B. F. Edwards, J. W. Wilder, and E. E. Scime, Dynamics of Falling Raindrops, Eur. J. Phys. 22, 113-118 (2001).

While $v$ and $m$ are functions of time, one can look for a way to eliminate time by assuming the rate of change of mass is an explicit function of $m$ and $v$ alone. For example, Sokal (2010) assumes the form
$$\frac{dm}{dt} = \lambda m^{\sigma} v^{\beta}, \quad \lambda > 0.$$
This contains two commonly assumed models of accretion: $\sigma = 2/3$, $\beta = 0$. This corresponds to growth of the raindrop proportional to the surface area: since $m \propto r^3$ and $A \propto r^2$, then $\dot m \propto A$ implies that $\dot m \propto m^{2/3}$. $\sigma = 2/3$, $\beta = 1$. In this case the growth of the raindrop is proportional to the volume swept out along the path. Thus, $\Delta m \propto A\,(v\,\Delta t)$, where $A$ is the cross sectional area and $v\,\Delta t$ is the distance traveled in time $\Delta t$. In both cases, the limiting value of the acceleration is a constant. It is $g/4$ in the first case and $g/7$ in the second case.

Another approach might be to use the effective radius of the drop, assuming that the raindrop remains close to spherical during the fall. According to Edwards, Wilder, and Scime (2001), raindrops with Reynolds number greater than 1000 and with radii larger than 1 mm will flatten. Even larger raindrops will break up when the drag force exceeds the surface tension. Therefore, they take $0.1\ \mathrm{mm} < r < 1\ \mathrm{mm}$ and $10 < Re < 1000$. We will return to a discussion of the drag later.

It might seem more natural to make the radius, rather than the mass, the dynamic variable. In this case, we can assume the accretion rate takes the form
$$\frac{dr}{dt} = \gamma r^{\alpha} v^{\beta}, \quad \gamma > 0.$$
Since $m = \frac{4}{3}\pi \rho_d r^3$,
$$\frac{dm}{dt} \sim r^2 \frac{dr}{dt} \sim m^{2/3}\frac{dr}{dt}.$$
Therefore, the two special cases become: $\alpha = 0$, $\beta = 0$.
This corresponds to a growth of the raindrop proportional to the surface area. $\alpha = 0$, $\beta = 1$. In this case the growth of the raindrop is proportional to the volume swept out along the path. Here $\rho_d$ is the density of the raindrop. We will also need
$$\frac{v}{m}\frac{dm}{dt} = \frac{4\pi\rho_d r^2}{\frac{4}{3}\pi\rho_d r^3}\, v\, \frac{dr}{dt} = \frac{3v}{r}\frac{dr}{dt} = 3\gamma r^{\alpha-1} v^{\beta+1}.$$
Putting this all together, we have a system of two equations for $v(t)$ and $r(t)$:
$$\frac{dv}{dt} = g - 3\gamma r^{\alpha-1} v^{\beta+1}, \qquad \frac{dr}{dt} = \gamma r^{\alpha} v^{\beta}. \tag{3.5.4.2}$$

Example 3.5.4.1

Determine $v = v(r)$ for the case $\alpha = 0$, $\beta = 0$ and the initial conditions $r(0) = 0.1\ \mathrm{mm}$ and $v(0) = 0\ \mathrm{m/s}$.

In this case Equations 3.5.4.2 become
$$\frac{dv}{dt} = g - 3\gamma r^{-1} v, \qquad \frac{dr}{dt} = \gamma.$$
Noting that
$$\frac{dv}{dt} = \frac{dv}{dr}\frac{dr}{dt} = \gamma \frac{dv}{dr},$$
we can convert the problem to one of finding the solution $v(r)$ subject to the equation
$$\frac{dv}{dr} = \frac{g}{\gamma} - \frac{3v}{r}$$
with the initial condition $v(r_0) = 0\ \mathrm{m/s}$ for $r_0 = 0.0001\ \mathrm{m}$.

Rearranging the differential equation, we find that it is a linear first order differential equation,
$$\frac{dv}{dr} + \frac{3}{r}v = \frac{g}{\gamma}.$$
This equation can be solved using an integrating factor, $\mu = r^3$, obtaining
$$\frac{d}{dr}\bigl(r^3 v\bigr) = \frac{g}{\gamma} r^3.$$
Integrating, we obtain the solution
$$v(r) = \frac{g}{4\gamma} r \left(1 - \left(\frac{r_0}{r}\right)^4\right).$$
Note that for large $r$, $v \sim \frac{g}{4\gamma} r$. Therefore, $\frac{dv}{dt} \sim \frac{g}{4}$.

While this case was easily solved in terms of elementary operations, it is not always easy to generate solutions to Equations 3.5.4.2 analytically. Sokal (2010) derived a general solution in terms of incomplete Beta functions, though this does not help visualize the solution. Also, as we will see, adding air drag will lead to a nonintegrable system. So, we turn to numerical solutions.

In MATLAB, we can use the function in raindropf.m to capture the system (3.5.4.2). Here we put the velocity in y(1) and the radius in y(2).

function dy=raindropf(t,y)
global alpha beta gamma g
dy=[g-3*gamma*y(2)^(alpha-1)*y(1)^(beta+1); ...
    gamma*y(2)^alpha*y(1)^beta];

We then use the Runge-Kutta solver, ode45, to solve the system. An implementation is shown below which calls the function containing the system. The value $\gamma = 2.5 \times 10^{-7}$ is based on empirical results quoted by Edwards, Wilder, and Scime (2001).

clear
global alpha beta gamma g
alpha=0; beta=0;
gamma=2.5e-07;
g=9.81;
r0=0.0001; v0=0;
y0=[v0;r0];
tspan=[0 1000];
[t,y]=ode45(@raindropf,tspan,y0);
plot(1000*y(:,2),y(:,1),'k')

Figure 3.5.4.1: The plots of position and velocity as a function of time for $\alpha = \beta = 0$.
Figure 3.5.4.2: The plot of the velocity as a function of position for $\alpha = \beta = 0$.
Figure 3.5.4.3: The plot of the velocity as a function of position for $\alpha = 0$, $\beta = 1$.

The resulting plots are shown in Figures 3.5.4.1-3.5.4.2. The plot of velocity as a function of position agrees with the exact solution, which we derived in the last example. We note that these drops do not grow much, but they seem to obtain large speeds.
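As an independent cross-check of the MATLAB run and of the closed-form solution of Example 3.5.4.1, here is a short Python sketch (an alternative to the MATLAB shown, using SciPy's solve_ivp in place of ode45, with the same parameter values) that integrates the $\alpha = \beta = 0$ system and compares the numerical velocity with $v(r) = \frac{g}{4\gamma} r\,(1 - (r_0/r)^4)$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# alpha = beta = 0 raindrop model: dv/dt = g - 3*gamma*v/r, dr/dt = gamma
g, gamma = 9.81, 2.5e-7
r0, v0 = 1.0e-4, 0.0

def rhs(t, y):
    v, r = y
    return [g - 3.0 * gamma * v / r, gamma]

sol = solve_ivp(rhs, (0.0, 1000.0), [v0, r0], rtol=1e-9, atol=1e-12)

r = sol.y[1]
v_exact = g / (4.0 * gamma) * r * (1.0 - (r0 / r) ** 4)
# maximum deviation from the exact solution; should be tiny relative to v ~ 10^3 m/s
print(np.max(np.abs(sol.y[0] - v_exact)))
```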
For the second case, α = 0, β = 1, one can also obtain an exact solution. The result is

\[v(r) = \left[\dfrac{2g}{7\gamma} r \left(1 - \left(\dfrac{r_0}{r}\right)^7\right)\right]^{1/2}.\]

For large r one can show that dv/dt ∼ g/7. In Figures 3.5.4.3-3.5.4.4 we see again large velocities, though about a third as fast over the same time interval. However, we also see that the raindrop has grown significantly, well past the point at which it would break up.

In this simple model of a falling raindrop we have not considered air drag. Earlier in the chapter we discussed the free fall of a body with air resistance, and this led to a terminal velocity. Recall that the drag force is given by

\[f_D(v) = -\dfrac{1}{2} C_D A \rho_a v^2,\]

where C_D is the drag coefficient, A is the cross sectional area, and ρ_a is the air density. Also, we assume that the body is falling downward and downward is positive, so that f_D(v) < 0 so as to oppose the motion.

We would like to incorporate this force into our model 3.5.4.2. The first equation came from the force law, which now becomes

\[m\dfrac{dv}{dt} = mg - v\dfrac{dm}{dt} - \dfrac{1}{2} C_D A \rho_a v^2,\]

or

\[\dfrac{dv}{dt} = g - \dfrac{v}{m}\dfrac{dm}{dt} - \dfrac{1}{2m} C_D A \rho_a v^2.\]

The next step is to eliminate the dependence on the mass, m, in favor of the radius, r. The drag force term can be written as

\[\dfrac{f_D}{m} = \dfrac{1}{2m} C_D A \rho_a v^2 = \dfrac{1}{2} C_D \dfrac{\pi r^2}{\frac{4}{3}\pi\rho_d r^3}\,\rho_a v^2 = \dfrac{3}{8}\dfrac{\rho_a}{\rho_d} C_D \dfrac{v^2}{r}.\]

We had already done this for the second term; however, Edwards, Wilder, and Scime (2001) point to experimental data and propose that

\[\dfrac{dm}{dt} = \pi \rho_m r^2 v,\]

where ρ_m is the mist density. So, the second term leads to

\[\dfrac{v}{m}\dfrac{dm}{dt} = \dfrac{3}{4}\dfrac{\rho_m}{\rho_d}\dfrac{v^2}{r}.\]

Figure 3.5.4.4: The plots of position and velocity as a function of time for α = 0, β = 1.

But, since \(m = \frac{4}{3}\pi\rho_d r^3\),

\[\dfrac{dm}{dt} = 4\pi\rho_d r^2 \dfrac{dr}{dt}.\]

So,

\[\dfrac{dr}{dt} = \dfrac{\rho_m}{4\rho_d} v.\]

This suggests that their model corresponds to α = 0, β = 1, and γ = ρ_m/(4ρ_d). Now we can write down the modified system

\[\dfrac{dv}{dt} = g - 3\gamma r^{\alpha-1} v^{\beta+1} - \dfrac{3}{8}\dfrac{\rho_a}{\rho_d} C_D \dfrac{v^2}{r}, \qquad \dfrac{dr}{dt} = \gamma r^{\alpha} v^{\beta}. \tag{3.5.4.6}\]

Edwards, Wilder, and Scime (2001) assume that the densities are constant, with values ρ_a = 0.856 kg/m³, ρ_d = 1.000 kg/m³, and ρ_m = 1.00 × 10⁻³ kg/m³. However, the drag coefficient is not constant. As described later in Section 3.5.7, there are various models indicating the dependence of C_D on the Reynolds number,

\[Re = \dfrac{2rv}{\nu},\]

where ν is the kinematic viscosity, which Edwards, Wilder, and Scime (2001) set to ν = 2.06 × 10⁻⁵ m²/s. For raindrops in the range r = 0.1 mm to 1 mm, the Reynolds number is below 1000. Edwards, Wilder, and Scime (2001) modeled C_D = 12 Re^{-1/2}. In the plots in Section 3.5.7 we include this model and see that it is a good approximation for these raindrops. In Chapter 10 we discuss least squares curve fitting; using these methods, one can use the models of Putnam (1961) and Schiller-Naumann (1933) to obtain a power law fit similar to that used here.
So, introducing

\[C_D = 12 Re^{-1/2} = 12\left(\dfrac{2rv}{\nu}\right)^{-1/2}\]

and defining

\[\delta = \dfrac{9}{2^{3/2}}\dfrac{\rho_a}{\rho_d}\nu^{1/2},\]

we can write the system of equations 3.5.4.6 as

\[\dfrac{dv}{dt} = g - 3\gamma \dfrac{v^2}{r} - \delta\left(\dfrac{v}{r}\right)^{3/2}, \qquad \dfrac{dr}{dt} = \gamma v.\]

Now, we can modify the MATLAB code for the raindrop by adding the extra term to the first equation, setting α = 0, β = 1, and using δ = 0.0124 and γ = 2.5 × 10⁻⁷ from Edwards, Wilder, and Scime (2001).

Figure 3.5.4.5: The plots of position and velocity as a function of time with air drag included.

Figure 3.5.4.6: The plot of the velocity as a function of position with air drag included.

In Figures 3.5.4.5-3.5.4.6 we see different behaviors as compared to the previous models. It appears that the velocity quickly reaches a terminal velocity and the radius continues to grow linearly in time, though at a slow rate.

We might be able to understand this behavior. Terminal, or constant, v would occur when

\[g - 3\gamma \dfrac{v^2}{r} - \delta\left(\dfrac{v}{r}\right)^{3/2} = 0.\]

Looking at these terms, one finds that the second term is significantly smaller than the other terms, and thus

\[\delta\left(\dfrac{v}{r}\right)^{3/2} \approx g, \quad \text{or} \quad \dfrac{v}{r} \approx \left(\dfrac{g}{\delta}\right)^{2/3} \approx 85.54\ \mathrm{s}^{-1}.\]

This agrees with the numerical data, which give the slope of the v vs. r plot as 85.5236 s⁻¹.
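As a quick numerical cross-check of this terminal estimate, one can integrate the system with the drag term included; the snippet below is a Python sketch of ours (the text's actual computation uses the modified MATLAB code described above).

import numpy as np
from scipy.integrate import solve_ivp

# Drag model with alpha = 0, beta = 1 and the constants quoted above.
g, gamma, delta = 9.81, 2.5e-7, 0.0124
r0, v0 = 1.0e-4, 0.0

def raindrop_drag(t, y):
    v, r = y
    return [g - 3 * gamma * v**2 / r - delta * (v / r)**1.5,
            gamma * v]

# The drag term relaxes v toward its terminal value very quickly, so an
# implicit (stiff) method is used here.
sol = solve_ivp(raindrop_drag, (0, 1000), [v0, r0], method="LSODA",
                rtol=1e-8, atol=1e-12)
v, r = sol.y

print("estimate (g/delta)^(2/3):", (g / delta) ** (2 / 3))   # about 85.5 per second
print("late-time slope v/r     :", v[-1] / r[-1])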
HAL Id: hal-04493029. Preprint submitted on 6 Mar 2024. Distributed under a Creative Commons Attribution - NonCommercial - ShareAlike 4.0 International License.

Algorithms and complexity for path covers of temporal DAGs: when is Dilworth dynamic?*

Dibyayan Chakraborty¹, Antoine Dailly², Florent Foucaud², and Ralf Klasing³

¹School of Computing, University of Leeds, United Kingdom
²Université Clermont-Auvergne, CNRS, Mines de Saint-Étienne, Clermont-Auvergne-INP, LIMOS, 63000 Clermont-Ferrand, France
³Université de Bordeaux, Bordeaux INP, CNRS, LaBRI, UMR 5800, Talence, France

Abstract

A path cover of a digraph is a collection of paths collectively containing the vertex set of the digraph. A path cover with minimum cardinality for a directed acyclic graph can be found in polynomial time [Fulkerson, AMS '56; Cáceres et al., SODA '22]. Moreover, Dilworth's celebrated theorem on chain coverings of partially ordered sets equivalently states that the minimum size of a path cover of a DAG is equal to the maximum size of a set of mutually unreachable vertices. In this paper, we examine how far Dilworth's theorem can be extended to a "dynamic" analogue of directed acyclic graphs. A temporal digraph has an arc set that changes over discrete time-steps. Furthermore, if the underlying digraph (i.e., the union of all the arc sets that appear at some point) is acyclic, then we have a temporal directed acyclic graph (or simply a temporal DAG). A temporal path is a directed path in the underlying digraph, such that the time-steps of arcs are strictly increasing along the path. Two temporal paths are temporally disjoint if they do not occupy any vertex at the same time. A temporal path cover is a collection C of temporal paths that covers all vertices. Furthermore, C is a temporally disjoint path cover if all temporal paths are pairwise temporally disjoint. In this paper, we study the computational complexities of the problems of finding a temporal (disjoint) path cover with minimum cardinality (denoted as Temporal Path Cover and Temporally Disjoint Path Cover). We show that both Temporal Path Cover and Temporally Disjoint Path Cover are NP-hard even when the underlying DAG is planar, bipartite, subcubic, and there are only two arc-disjoint time-steps. Moreover, Temporally Disjoint Path Cover remains NP-hard even on temporal oriented trees. We also observe that natural temporal analogues of Dilworth's theorem on these classes of temporal DAGs do not hold.
In contrast, we show that Temporal Path Cover is polynomial-time solvable on temporal oriented trees by a reduction to Clique Cover for (static undirected) weakly chordal graphs (a subclass of perfect graphs for which Clique Cover admits an efficient algorithm). This highlights an interesting algorithmic difference between the two problems. Although it is NP-hard on temporal oriented trees, Temporally Disjoint Path Cover becomes polynomial-time solvable on temporal oriented lines and temporal rooted directed trees. For all these positive algorithmic results, we also show that temporal analogues of Dilworth’s theorem hold for the corresponding temporal graph classes. We also show that Temporal Path Cover and Temporally Disjoint Path Cover become effi-ciently solvable when the number of time-steps is bounded and the underlying graph is close to a tree. More precisely, we show that Temporal Path Cover admits an XP time algorithm with respect to pa-rameter tmax + tw , where tmax is the maximum time-step, and tw is the treewidth of the underlying static undirected graph. We also show that Temporally Disjoint Path Cover admits an FPT algorithm with respect to the same parameter. ∗ This work was supported by the International Research Center "Innovation Transportation and Production Systems" of the I-SITE CAP 20-25 and by the ANR project GRALMECO (ANR-21-CE48-0004). Ralf Klasing’s research was partially supported by the ANR project TEMPOGRAL (ANR-22-CE48-0001). 11 Introduction A classic theorem of Dilworth from 1950 states that in any partially ordered set (poset), the minimum number of chains required to cover all the elements is equal to the maximum size of an antichain. Dilworth’s theorem is fundamental from the mathematical point of view; furthermore, an algorithmic proof (that enables to construct a chain cover and an antichain in polynomial time) was published by Fulkerson in 1956 . This theorem and its algorithmic form have many applications, not only in combinatorics, but also in various fields such as bioinformatics , scheduling , databases , program testing , etc. A collection P of (resp. pairwise vertex-disjoint) directed paths of a digraph D is a path cover (resp. path partition ) of D if all vertices of D are contained in some path of P. Dilworth’s theorem can be restated in an equivalent form, equating the minimum cardinality of path covers on directed acyclic graphs (DAGs) and the maximum size of a set of pairwise “unreachable” vertices, or antichain vertices [4, 5, 15]. Theorem 1 (Dilworth ) . For any DAG D, the minimum number of paths that cover its vertex set, is equal to the maximum size of an antichain of D. Fulkerson showed that finding a minimum-size path cover of a DAG can be done in polynomial time. Moreover, it is known that finding a minimum-size path partition can also be done in polynomial time for arbitrary DAGs [10, Probl. 26-2]. Improving the best known algorithms for path cover and partitions of DAGs still form an active field of research, see for example [4, 5, 9, 32] for some recent results. The notions of directed paths and path covers naturally extends to temporal (di)graphs . Informally, the arc set of a temporal digraph changes over discrete time-steps and labels of an arc are the time-steps where the arc appears. Temporal (di)graphs have been extensively studied in the two last decades, with contributions from and applications to various fields, see [7, 21, 23, 35, 36, 39]. 
A temporal path of a digraph is a path that traverses edges appearing at strictly increasing time-steps. The asymmetric nature of temporal paths has motivated many recent algorithmic works on related reachability or path problems on temporal graphs, such as [1, 2, 3, 8, 24, 34]. Two temporal paths are temporally disjoint if they do not occupy a vertex at the same time-step. This definition was introduced by Klobas et al. and has since then garnered attention in the graph algorithmic community . Even though the above notion was introduced in the context of temporal undirected graphs, it naturally extends to temporal digraphs and motivates the corresponding covering problems. The objective of Temporal Path Cover (resp. Temporally Disjoint Path Cover ) is to cover an input temporal digraph by a minimum number of temporal paths (resp. temporally disjoint paths). Main objectives. In this paper, we initiate the algorithmic study of Temporal Path Cover and Tem-porally Disjoint Path Cover and focus on temporal directed acyclic graphs (or simply, temporal DAGs). A temporal digraph is a temporal DAG if the union of all arcs across all time-steps induces a (static) DAG. We say that a temporal digraph satisfies the Dilworth property (resp. temporally disjoint Dilworth property ,or TD-Dilworth property for short) if the largest size of a temporal antichain (understood as a set of pairwise unreachable vertices) is equal to the smallest size of a temporal path cover (resp. temporally disjoint path cover). The main goals of this paper are the following: (a) Determine classes of temporal DAGs satisfying the (TD-)Dilworth property. (b) Study the computational complexities of Temporal Path Cover and Temporally Disjoint Path Cover on temporal digraphs. Practical motivations. A first motivation is multi-agent-based decision-making (a well-studied problem from artificial intelligence [41, 44]) in a temporal setting, such as for coral reef protection or crime preven-tion in transportation networks . In this setting, the temporal DAG can model a decision-making process, where the vertices represent the states of an environment. Agents navigate the DAG, an arc representing an agent’s move from one state to another. As the situation is varying over time, a move may only be available at specific time-steps. A path in this DAG thus represents the overall activity of an agent. In this setting, Temporal Path Cover represents the situation where a set of k agents need to cover all the possible states. 2In Temporally Disjoint Path Cover , the agents must also avoid each other, and cannot cover the same state at the same time, a scenario described as vertex-conflicts in the literature . Another natural application is multi-robot path planning [12, 42]. Imagine the setting where k robots are assigned the task of exploring a hazardous facility. Since the facility changes over time, it is modeled as a temporal digraph. If the facility digraph does not contain directed cycles, it is modeled by a temporal DAG (for example, if the facility is inherently directed from a start area towards a target area). The exploration path of a robot can be modeled by a temporal path. Now, Temporal Path Cover corresponds to the situation where the robots need to explore the whole facility, while for Temporally Disjoint Path Cover ,the robots also cannot be simultaneously at the same location. Our results. We begin by formally defining the problems studied in this paper. Temporal Path Cover (TPC ) Input: A temporal digraph D, an integer k. 
Problem: Does there exist a set C of k temporal paths in D such that every vertex of D is covered by some path of C?

Temporally Disjoint Path Cover (TD-PC)
Input: A temporal digraph D, an integer k.
Problem: Does there exist a set C of k temporally disjoint temporal paths in D such that every vertex of D is covered by some path of C?

We observe that in general, temporal DAGs do not have the Dilworth property (see Figure 1a). Then, we prove the following negative result.

Theorem 2. Temporal Path Cover and Temporally Disjoint Path Cover are NP-hard on temporal DAGs, even if the input is planar, bipartite, subcubic, of girth 10, uses only one time label per arc, and every label is either 1 or 2.

A temporal directed acyclic graph D is a temporal oriented tree if the underlying directed graph of D is a tree. On the positive side, we prove the following.

Theorem 3. There is an O(ℓn² + n³)-time algorithm for Temporal Path Cover on temporal oriented trees with n vertices and at most ℓ many labels per arc. Furthermore, temporal oriented trees satisfy the Dilworth property.

We briefly describe the technique we use for proving Theorem 3. Two vertices of a temporal digraph are temporally connected if they are covered by the same temporal path. The connectivity graph of a temporal digraph D is an undirected (static) graph whose vertex set is the same as that of D, and whose edge set consists of all pairs of temporally connected vertices. To prove the above theorem, we show that the connectivity graph of a temporal oriented tree is a weakly chordal graph (a subclass of perfect graphs). We show that Temporal Path Cover can be reduced to Clique Cover on weakly chordal graphs. The above observation, combined with the Weak Perfect Graph Theorem (proved by Lovász), proves that temporal oriented trees satisfy the Dilworth property. Moreover, the existing O(nm)-time algorithm to compute a minimum clique cover of a weakly chordal graph (having n vertices and m edges) completes the proof of Theorem 3. Our proof gives interesting structural information on the interaction between temporal paths in temporal oriented trees.

Interestingly, another important class of perfect graphs plays an important role in connection with Dilworth's theorem and its translation to the setting of static DAGs: the class of comparability graphs, see [18, Chapter 5.7]. In our case, there does not appear to be any connection to comparability graphs.

On the other hand, temporal oriented trees do not satisfy the TD-Dilworth property (see Figure 1b for an example). Then, we prove the following negative result.

Theorem 4. Temporally Disjoint Path Cover is NP-hard on temporal oriented trees.

Figure 1: A minimum-size (temporally disjoint) temporal path cover is shown; vertices in a maximum-size temporal antichain are in black. (a) A temporal DAG not having the Dilworth property. (b) A temporal oriented tree not having the TD-Dilworth property.

To find classes that satisfy the TD-Dilworth property, we study temporal oriented lines (that is, where the underlying digraph is an oriented path) and temporal rooted directed trees. A tree is a rooted directed tree if it is an oriented tree with a single source vertex called the root. We prove the following result.

Theorem 5.
Temporal Path Cover and Temporally Disjoint Path Cover can be solved in time:
(a) O(ℓn) on temporal oriented lines;
(b) O(ℓn²) on temporal rooted directed trees;
where ℓ is the maximum number of labels per arc and n is the number of vertices. Furthermore, both classes satisfy the TD-Dilworth property.

Note that some related problems remain NP-hard for temporal lines, such as Temporally Disjoint Walks. Theorem 5(a) shows that this is not the case here. To prove Theorem 5(b), we begin by constructing a temporal path cover before transforming it into a temporally disjoint one of the same size. This is in contrast with general temporal oriented trees, for which, by Theorem 4, such an approach is not possible.

As Temporally Disjoint Path Cover is NP-hard even on temporal oriented trees and on temporal DAGs with two time-steps, a natural question is what happens when the number of time-steps is small and the underlying digraph is a tree. Motivated by this question, we study the case where both the number of time-steps and the treewidth of the underlying digraph are bounded (where we define the treewidth of a temporal digraph as the treewidth of the underlying static undirected graph). We show that both problems become tractable in this setting. More precisely, we give a fixed-parameter tractable (FPT) algorithm for Temporally Disjoint Path Cover with treewidth and number of time-steps as parameters. The same technique gives an XP algorithm for Temporal Path Cover.

Theorem 6. There is an algorithm for Temporally Disjoint Path Cover on general temporal digraphs that is FPT with respect to the treewidth of the underlying undirected graph and the maximum number of labels per arc. For Temporal Path Cover on general temporal digraphs, there is an XP algorithm for the same parameter.

See Table 1 for a summary of our algorithmic results.

Further related work. Algorithms for solving several types of path and distance problems in temporal graphs have been developed, see for example [3, 24, 43]. Recently, the problem Temporally Disjoint Paths was introduced, as a generalization of the notorious Disjoint Paths problem (also known as Linkage). In Temporally Disjoint Paths, one is given a temporal graph with k pairs of vertices called terminals, and the goal is to find a set of k pairwise temporally disjoint paths, each of them connecting one pair of terminals. Temporally Disjoint Paths is NP-hard, even for temporal lines and two paths or temporal stars, but becomes FPT for trees when parameterized by the number of paths. Algorithms that are FPT for certain structural parameters are also known.

Table 1: Summary of our algorithmic results.

temporal graph class | TPC | TD-PC
temporal DAGs (planar bipartite subcubic, girth 10, two arc-disjoint time-steps) | NP-c. | NP-c.
temporal oriented trees | poly | NP-c.
temporal rooted directed trees | poly | poly
temporal oriented lines | poly | poly
general temporal digraphs with bounded treewidth and number of time-steps | poly (XP) | poly (FPT)

For all polynomial-time solvable classes of temporal DAGs, we also show that the Dilworth property (or TD-Dilworth property for TD-PC) holds.

Structure of the paper. We start with the hardness result for temporal DAGs (Theorem 2) in Section 3. We then prove our results for temporal oriented trees (Theorem 3 and Theorem 4) in Sections 4 and 5. We prove Theorem 5, the polynomial-time algorithms for special temporal oriented trees (temporal rooted directed trees and temporal oriented lines), in Section 6.
We then prove our results for temporal digraphs of bounded treewidth and number of time-steps (Theorem 6) in Section 7. We conclude in Section 8. 2 Preliminaries A temporal digraph D = ( V, A 1, . . . , A tmax ) is given by a sequence of arc-sets representing tmax discrete time-steps {1, . . . , t max }, where an arc in Ai is active at time-step i . Let us denote by D = ( V, A ), where A = ∪tmax i=1 Ai, the underlying digraph of temporal digraph D = ( V, A 1, . . . , A tmax ) (sometimes called footprint (di)graph ). Equivalently, one can view the time-steps as an arc-labelling function λ : A(D) → 2[tmax ],where λ(−→xy ) ⊆ [tmax ] is the set of time-steps where −→xy is active . In that case, we may denote the temporal digraph as D = ( D, λ ). We say that a temporal digraph has a given property P (planarity, given girth...) if the undirected graph obtained by forgetting the orientation of the arcs of its underlying digraph has property P. For a given temporal digraph, we denote by ℓ the maximum number of labels per arc and by n the number of vertices in the underlying digraph. For a (temporal) (di)graph D and subset S of its vertices (resp. edges), D \ S denotes the (temporal) (di)graph obtained by removing the vertices (resp. edges) in S from D.In a temporal digraph, a temporal (directed) path is a sequence (v1, v 2, t 1), (v2, v 3, t 2), . . . , (vk−1, v k, t k−1) such that for any i, j with 1 ≤ i < j ≤ k, vi̸ = vj and for any i with 1 ≤ i ≤ k − 1, ti < t i+1 and there is an arc −−−→vivi+1 at time-step ti. These paths are sometimes called strict in the literature. 1 For a temporal path P = ( v1, v 2, t 1), . . . , (vk−1, v k, t k−1), we denote by V (P ) the set ∪ki=1 {vi} and by A(P ) the set ∪k−1 i=1 {−−−→vivi+1 }.The length of a temporal path is the number of arcs it uses. We say that a temporal path P =(v1, v 2, t 1), . . . , (vk−1, v k, t k−1) occupies vertex vi during the time interval {ti−1, . . . , t i}. Two temporal paths P1, P 2 are temporally disjoint if for all arcs e1 ∈ A(P1), e 2 ∈ A(P2) incident with a common end-vertex, the time-step of e1 in P1 is distinct from the time-step of e2 in P2. In other words, two paths are temporally disjoint if they do not occupy the same vertex at the same time. A temporal path cover (resp. temporally disjoint path cover ) of a temporal digraph D is a collection of temporal paths (resp. temporally disjoint paths) that cover all vertices of D. Two vertices are temporally connected in D if there exists a temporal path between them. A temporal antichain is a set of vertices that are pairwise not temporally connected. Definition 7. A class C has the Dilworth property (resp. TD-Dilworth property ) if the cardinality of the minimum temporal path cover (resp. temporally disjoint path cover) is equal to the maximum cardinality of a temporal anti-chain. 1For non-strict paths, the condition ti< t i+1 is replaced with ti≤ti+1 , but as argued in , the strict definition is more natural for applications where an agent cannot traverse an arbitrary number of arcs at once, this is why we chose this convention. 5A hole of a static undirected graph is an induced cycle of length at least 5, and an anti-hole is the complement of a hole. A graph G is weakly chordal if it has no hole or anti-hole. A (minimum) clique cover of a graph G is a (minimum cardinality) set of complete subgraphs of G that covers all vertices. A (maximum) independent set of a graph G is a (maximum cardinality) set of pairwise non-adjacent vertices. 
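To make these definitions concrete, the following is a small illustrative sketch in Python (ours; the paper itself contains no code) of a temporal digraph given by its arc-labelling, a check that a sequence of arcs forms a strict temporal path, and the occupation-based test for temporal disjointness. All function and variable names are our own, and the handling of the occupation interval at the two endpoints of a path is a simplifying choice.

def is_temporal_path(steps, labelling):
    # steps: list of (tail, head, t); labelling maps an arc (tail, head) to its set of labels.
    vertices = [steps[0][0]] + [head for (_, head, _) in steps]
    if len(set(vertices)) != len(vertices):              # vertices of a path are distinct
        return False
    for i, (u, v, t) in enumerate(steps):
        if t not in labelling.get((u, v), set()):         # the arc must be active at time t
            return False
        if i > 0 and (u != steps[i - 1][1] or t <= steps[i - 1][2]):
            return False                                  # consecutive arcs, strictly increasing times
    return True

def occupation(steps):
    # A path occupies an interior vertex v_i during {t_{i-1}, ..., t_i}; for the two
    # endpoints we simply use the single adjacent time-step (a simplifying choice of ours).
    occ = {steps[0][0]: {steps[0][2]}, steps[-1][1]: {steps[-1][2]}}
    for (_, v, t_in), (_, _, t_out) in zip(steps, steps[1:]):
        occ[v] = set(range(t_in, t_out + 1))
    return occ

def temporally_disjoint(steps1, steps2):
    # Two temporal paths are temporally disjoint if they never occupy a common
    # vertex at the same time-step.
    occ1, occ2 = occupation(steps1), occupation(steps2)
    return all(occ1[v].isdisjoint(occ2[v]) for v in occ1.keys() & occ2.keys())

# Tiny example: p uses the arc (1,2) at time 1 and (2,3) at time 2; q uses (1,2) at time 3.
labels = {(1, 2): {1, 3}, (2, 3): {2}}
p = [(1, 2, 1), (2, 3, 2)]
q = [(1, 2, 3)]
print(is_temporal_path(p, labels), temporally_disjoint(p, q))   # True True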
We shall use the following results for weakly chordal graphs.

Theorem 8 ([20, 31, 40]). Let H be a weakly chordal graph with n vertices and m edges. Then, a minimum clique cover of H can be found in O(nm) time. Furthermore, the maximum size of an independent set of H equals the minimum size of a clique cover of H.

3 Temporal DAGs

We provide a reduction (inspired by earlier work) from a restricted variant of 3-Dimensional Matching to prove the following.

Theorem 2. Temporal Path Cover and Temporally Disjoint Path Cover are NP-hard on temporal DAGs, even if the input is planar, bipartite, subcubic, of girth 10, uses only one time label per arc, and every label is either 1 or 2.

Proof. We will reduce the Temporal (Disjoint) Path Cover problem on temporal DAGs from the 3-Dimensional Matching problem; the reduction is inspired by earlier work.

3-Dimensional Matching (3DM)
Instance: A set S ⊆ X × Y × Z, where X, Y, and Z are disjoint sets having the same number q of elements.
Question: Does S contain a perfect matching, i.e., a subset M ⊆ S such that |M| = q and no two elements of M agree in any coordinate?

It is well-known that 3-Dimensional Matching is NP-hard. Given an instance I = (S, X × Y × Z) of 3DM, where S = {s_1, ..., s_p}, X = {x_1, ..., x_q}, Y = {y_1, ..., y_q} and Z = {z_1, ..., z_q}, we build an instance D = (V, A_1, A_2) of Temporal (Disjoint) Path Cover, where D is a temporal DAG, as follows.

To each triple s_i = (x_{i,1}, y_{i,2}, z_{i,3}) ∈ S, we associate a gadget H(s_i) that consists of a collection {P^{i,1}, P^{i,2}, P^{i,3}} of 3 directed vertex-disjoint paths of 3 vertices, with P^{i,r} = {a^{i,r}_1 → a^{i,r}_2, a^{i,r}_2 → a^{i,r}_3} for r = 1, 2, 3; the time labels are 1 for the arcs a^{i,r}_1 → a^{i,r}_2 and 2 for the arcs a^{i,r}_2 → a^{i,r}_3. We add to H(s_i) the arcs a^{i,1}_3 → a^{i,2}_3 and a^{i,2}_3 → a^{i,3}_3, in order to form a 4th directed path of 3 vertices; the time labels are 1 for the arc a^{i,1}_3 → a^{i,2}_3 and 2 for the arc a^{i,2}_3 → a^{i,3}_3. Finally, we add to H(s_i) the arcs a^{i,1}_2 → x_{i,1}, a^{i,2}_2 → y_{i,2} and a^{i,3}_2 → z_{i,3}, with the time label 2 (see Figure 2 for an illustration).

Figure 2: The gadget H(s_i).

Figure 3: Vertex partition of the gadget H(s_i) into length-2 paths: (a) s_i ∈ M; (b) s_i ∉ M.

The above construction yields a temporal digraph D on 9p + 3q vertices. Note that the construction uses only 1 label per arc, and every label is either 1 or 2. We claim that there exists a perfect matching M ⊆ S in I if and only if there exists a temporal (disjoint) path cover (partition) of D of size 3p + q.

(⇒) Let M ⊆ S be a perfect matching in S, and consider the following collection of directed vertex-disjoint temporal length-2 paths in the gadget H(s_i): {a^{i,1}_3 → a^{i,2}_3, a^{i,2}_3 → a^{i,3}_3}, {a^{i,1}_1 → a^{i,1}_2, a^{i,1}_2 → x_{i,1}}, {a^{i,2}_1 → a^{i,2}_2, a^{i,2}_2 → y_{i,2}}, {a^{i,3}_1 → a^{i,3}_2, a^{i,3}_2 → z_{i,3}} if s_i ∈ M, and P^{i,1}, P^{i,2}, P^{i,3} if s_i ∉ M.
As M is a perfect matching in S, the collection of the temporal paths defined above constitute a vertex-disjoint (and thus temporally disjoint) temporal path cover of D of size 3p + q. Figures 3a and 3b illustrate the construction of the temporal path partition on V (H(si)) with respect to a given matching M for 3DM .(⇐) Assume that there exists a (temporally disjoint) path cover C of D of size 3p+q. As |V (D)| = 9 q +3 p, |C| = 3 p + q and every (temporal) path in D has length at most 2, all paths in C must have exactly length 2, and C is indeed a partition of V (D) into length-2 paths. All length-2 paths in D are depicted in Figures 3a and 3b. Hence, any path partition C of D must have, for each triple gadget, the path structure as depicted in either one of Figures 3a and 3b, and there must be q gadgets H1, . . . , H q that are each covered by four vertex-disjoint temporal length-2 paths from C, and p − q gadgets Hq+1 , . . . , H p where the vertices ai,r 1 ai,r 2 ai,r 3 (for r = 1 , 2, 3) are covered by three vertex-disjoint temporal length-2 paths from C and the vertices xi, 1, x i, 2, x i, 3 are not covered. Then, the triples (xi, 1, y i, 2, z i, 3) corresponding to H1, . . . , H q constitute a perfect matching in S.This completes the NP-hardness proof of Temporal (Disjoint) Path Cover in temporal DAGs. In order to show that Temporal (Disjoint) Path Cover remains NP-hard in planar, bipartite, subcu-bic temporal DAGs of girth 10, we apply the above proof, except that we start from a restriction of the three-dimensional matching problem, in which every element appears in either two or three triples, and the associated bipartite graph (formed by the elements and triples as its vertices, with edges connecting each element to the triples it belongs to) is planar and subcubic, denoted by Planar 3DM-3 . It is well-known that this restriction of 3DM is still NP-hard . Following a planar embedding of the bipartite graph as-sociated to the instance of Planar 3DM-3 , one can obtain a planar enmbedding of the constructed graph. Note that the underlying DAG in the above reduction is bipartite, as it can be 2-colored as follows: vertices in X and Z are colored 1, vertices in Y are colored 2, and then the coloring can be extended to the triple gadgets. Note as well that the shortest cycle in the underlying undirected graph has length 10. We also show the following. Proposition 9. There are temporal DAGs (whose underlying digraph is a transitive tournament) that satisfy neither the Dilworth nor the TD-Dilworth property. Moreover, the ratio between the minimum-size temporal path cover and temporally disjoint path cover and the maximum-size temporal antichain can be arbitrarily large. Proof. Consider the temporal digraph Tn = (Tn, λ ) where Tn is the transitive tournament on vertices u1, . . . , u n and λ(−−→uiuj ) = n − (j + 1) for all i < j . Tn is a temporal DAG, and since its underlying digraph 7is a transitive tournament, all the pairs of vertices are temporally connected, implying that the temporal antichain is of size 1. However, no temporal path can contain more than two vertices, and thus  n 2  paths are needed to cover it. Hence, the gap between the maximum size of a temporal antichain and the minimum size of a temporal path cover can be as large as we want for a temporal DAG. 
Furthermore, the minimum-size TPC is also vertex-disjoint (and thus temporally disjoint): if n is even, the paths never intersect; and if n is odd, then at most two paths in a minimum-size TPC can intersect, in at most one vertex, so we can reduce one of the two intersecting paths to cover only one vertex, giving a TPC of the same size with vertex-disjoint, and thus temporally disjoint, paths. This is depicted in Figure 4.

Figure 4: T_n, a temporal DAG with a maximum-size temporal antichain of size 1 and a minimum-size temporal path cover of size ⌈n/2⌉, for (a) n = 4 and (b) n = 5.

4 Temporal Path Cover on temporal oriented trees

In this section we prove the following theorem.

Theorem 3. There is an O(ℓn² + n³)-time algorithm for Temporal Path Cover on temporal oriented trees with n vertices and at most ℓ many labels per arc. Furthermore, temporal oriented trees satisfy the Dilworth property.

For the rest of this section, T = (T, λ) shall denote a temporal oriented tree with n vertices and at most ℓ-many labels per edge. We construct the connectivity graph of T, denoted by G, as follows: V(G) = V(T) and E(G) = {uv | u ≠ v and u and v are temporally connected}. In other words, the connectivity graph of a temporal oriented tree connects vertices that are temporally connected. Observe that G can be constructed in O(ℓn²) time. The next observation follows immediately from the definition.

Observation 10. A set S of vertices of T is a temporal antichain if and only if S induces an independent set in G.

We have the following relationship between temporal paths in T and cliques in G.

Lemma 11. Let S be a set of vertices of T. Then S is contained in a temporal path in T if and only if S is contained in a clique of G.

Proof. Let S be contained in a temporal path P in T. Let u_1, u_2, ..., u_k, where k = |S|, be the ordering of the vertices in S as they are encountered while traversing P from the source to the sink. Notice that, for each 1 ≤ i < j ≤ k, there is a temporal path from u_i to u_j. Therefore, u_i is adjacent to u_j in G. Hence, S is contained in a clique of G.

Conversely, let S be contained in a clique of G, and let S′ be a maximal complete subgraph of G such that S ⊆ V(S′). Now, we orient the edges of S′ as follows: an edge uv ∈ E(S′) is oriented from u to v if there is a temporal path from u to v in T. Since T is acyclic, the resulting orientation of S′ is a transitive tournament. Hence, there is an ordering u_1, u_2, ..., u_k of the vertices of S′, where k = |V(S′)|, such that for 1 ≤ i < j ≤ k there is a temporal path from u_i to u_j in T. Now, consider any temporal path P from u_1 to u_k in T (P exists, since u_1 → u_k is an arc of the tournament). Since T is a temporal oriented tree, P will contain all vertices of S′ and therefore of S.

The following is an immediate corollary of the above.

Corollary 12. The minimum cardinality of a temporal path cover of T is equal to the minimum cardinality of a clique cover of G.

We will often use the following lemma about vertex-intersecting temporal paths between pairs of vertices.

Lemma 13. Let {u, v, w, x} ⊆ V(T) be four vertices such that any temporal path from u to v has a vertex in common with any temporal path from w to x. Then, there is a temporal path from u to x, or a temporal path from w to v.

Proof. Assume that there is no temporal path from u to x. Let y be the vertex of a temporal path from w to x that is closest to u in T.
Let t be the smallest integer such that there is a temporal path from u to v that reaches y at time-step t. Observe that no temporal path from y to x can start at time-step t′ > t since, otherwise, there would be a temporal path from u to x. This implies that all temporal paths between w and x reach y at time-step t′′ ≤ t. Let P1 be a temporal path from w to y which is also a subpath of a temporal path from w to x. Let P2 be a temporal path from y to v which is also a subpath of a temporal path from u to v. The above arguments imply that the arc incident with y in P1 has time-step at most t. Similarly, the arc incident with y in P2 has time-step strictly greater than t. Hence, the concatenation of P1 and P2 is a temporal path in T from w to v. 4.1 The case of holes In this subsection, we will show that the connectivity graph G does not contain any holes. We use the following lemma. Lemma 14. Let H be an induced cycle of length at least 4 in G. Then, for every vertex v ∈ V (H) and every arc −→a of T incident with v, the vertices of H \ { v} lie in the same connected component of T \ { −→a }.Proof. For the sake of contradiction, let there exist vertices {u, v, w } ⊆ V (H) and an arc −→a of T incident with v such that u and w lie in two different connected components of T ′ = T \ { −→a }. Let Cu and Cw be the sets of vertices of H \ { v} contained in the same connected component as u and w, respectively. Since H \ { v} is connected, there exist u′ ∈ Cu and w′ ∈ Cw such that u′w′ ∈ E(H) i.e. u′w′ ∈ E(G). Hence, there is a temporal path P from u′ to w′ or w′ to u′ in T . Since T is a tree, P must contain v. Lemma 11 implies that {u′, v, w ′} forms a subset of a clique in G, and therefore {u′, v, w ′} forms a triangle. But this contradicts that H is a hole. Going forward, we need the following notations. For an edge e = uv ∈ E(G), let Qe denote a temporal path from u to v or v to u in T . For an induced cycle H of length at least 4 in G, let TH denote the smallest connected subtree of T containing all vertices of H. Lemma 14 implies that every vertex of H must be a leaf in TH . For a vertex v ∈ V (H), let −→a (v) be the arc incident with v in TH . Let H be an induced cycle of length at least 4 in G. We can partition the vertex set of H into two sets IN (H) and OU T (H) as follows: a vertex v ∈ V (H) is in IN (H) if −→a (v) is directed towards v, and otherwise v is in OU T (H).For a vertex v ∈ IN (H), notice that both neighbors of v in H must lie in OU T (H), and vice versa, since they must be connected by a directed path in T . Hence, H is bipartite, and therefore G does not contain any odd hole ( i.e. , a hole with an odd number of vertices): Lemma 15. The connectivity graph G does not contain any odd hole. Without loss of generality, we assume in the following that OU T (H) (resp. IN (H)) contains every odd-indexed (resp. even-indexed) vertex of H. For an even hole H whose vertices are cyclically ordered as u1, u 2, . . . , u k, we use a cyclic definition of addition, so k + 1 = 1 . We first prove the following lemmas. Lemma 16. Let H be an even hole in the connectivity graph G. Then, for every i, Quiui+1 and Qui+2 ui+3 share a common vertex. 9Proof. Assume by contradiction that Quiui+1 and Qui+2 ui+3 are vertex-disjoint. Assume without loss of generality that Quiui+1 goes from ui to ui+1 . Note that, since each vertex of the hole is a leaf of TH as a consequence of Lemma 14, the two paths Quiui+1 and Qui+1 ui+2 have to share a common vertex other than ui+1 (its neighbour in TH ). 
By the same reasoning, Qui+1 ui+2 and Qui+2 ui+3 share a common vertex other than ui+2 . Hence, since the three paths Quiui+1 , Qui+1 ui+2 and Qui+2 ui+3 are in TH , and Quiui+1 and Qui+2 ui+3 are vertex-disjoint, there is an arc −→a contained in Qui+1 ui+2 that separates Quiui+1 and Qui+2 ui+3 .Removing −→a from T partitions the vertices of H into two sets H1 and H2: H1 (resp. H2) contains the vertices of H that are in the same part of T \ −→a as ui+1 (resp. ui+2 ). Now, since H is a cycle, there is an edge uj uj+1 such that (without loss of generality) uj ∈ H1, uj+1 ∈ H2 and (j, j + 1) ̸ = ( i + 1 , i + 2) . This implies that the path Quj uj+1 has to use −→a in T , and thus Qui+1 ui+2 and Quj uj+1 share a common vertex. Hence, Lemma 13 implies that there is a temporal path from uj+1 to ui+1 or from ui+2 to uj . However, since j̸ = i + 3 (uj ∈ H1 and ui+3 ∈ H2) and j + 1 ̸ = i (uj+1 ∈ H2 and ui ∈ H1), both temporal paths would induce a chord in H, a contradiction. Lemma 17. The connectivity graph G does not contain any hole of size 6. Proof. Assume by contradiction that there is a hole on six vertices u1, . . . , u 6. We know that Qu1u2 and Qu4u5 are vertex-disjoint (since otherwise, by Lemma 13, at least one of the chords u1u4 or u2u5 would exist). The ui’s are leaves of TH , so Qu1u2 and Qu1u6 , being paths with a common leaf in the same subtree, share at least one common vertex other than u1 (its neighbour in TH ), let v be the last (with respect to the orientation of T ) vertex in their common subpath. Now, Qu5u6 has a common vertex with both Qu1u2 (by Lemma 16) and Qu1u6 (the neighbour of u6 in TH ), so it has to contain v by the Helly property of subtrees of a tree. By the same reasoning, Qu4u5 and Qu5u6 share at least one common vertex other than u5 (its neighbour in TH ), let w be the last vertex in their common subpath. The Helly property of subtrees of a tree again implies that both Qu2u3 and Qu3u4 have to contain w, since they pairwise intersect with Qu4u5 . But this means that Qu2u3 and Qu5u6 share both v and w as common vertices, and so by Lemma 13 there is at least one of the two chords u2u5 or u3u6, a contradiction. We can now prove that there is no even hole in G: Lemma 18. The connectivity graph G does not contain any even hole. Proof. Assume by contradiction that G contains an even hole H on k ≥ 8 vertices ( k = 6 is impossible by Lemma 17). We know by Lemma 16 that both Qu3u4 and Quk−1uk intersect Qu1u2 , but do not intersect each other (otherwise, by Lemma 13, at least one of the edges u3uk or u4uk−1 would exist, and both would be chords since k ≥ 8), so there is an arc −→a in T that separates them. Removing −→a from T partitions the vertices of H into two sets H1 and H2: H1 (resp. H2) contains the vertices of H that are in the same part of T \ −→a as u3 (resp. uk). Now, since H is a cycle, there is an edge uj uj+1 such that (without loss of generality) uj ∈ H1 and uj+1 ∈ H2. This implies that the path Quj uj+1 has to use −→a in T , and thus Qu1u2 and Quj uj+1 ,both containing −→a , share a common vertex. Hence, Lemma 13 implies that there is a temporal path from uj+1 to u2 or from u1 to uj . However, since j̸ = k (uj ∈ H1 and uk ∈ H2) and j + 1 ̸ = 3 (uj+1 ∈ H2 and u3 ∈ H1), by Lemma 13 both temporal paths would induce a chord in H, a contradiction. 4.2 The case of anti-holes In this subsection, we will show that the connectivity graph G does not contain any anti-hole. For an anti-hole H, let its vertices be circularly ordered as u1, u 2, . . . 
, u k as they are encountered while traversing the complement of H (which is a hole). Let ODD (H) (resp. EV EN (H)) denote the set of vertices with odd (resp. even) indices. Lemma 19. The connectivity graph G does not contain any anti-hole. Proof. Assume by contradiction that G contains an anti-hole H with k vertices. If k = 5 , then H is a hole, which contradicts Lemma 15; hence, assume k ≥ 6.10 When k is odd, let F1 = ODD (H) \ { uk}, F 2 = EV EN (H). When k is even, let F1 = ODD (H) , F 2 = EV EN (H). Observe that |F1| = |F2| ≥ 3 and both sets induce (vertex-disjoint) cliques in G. By Lemma 11, there are temporal paths P1 and P2 in T containing F1 and F2, respectively, which we can assume are minimal vertex-inclusion-wise (so that, for each i ∈ { 1, 2}, both end-vertices of Pi lie in Fi). For i ∈ { 1, 2}, let vi and wi denote the source and sink of Pi, respectively. We have two cases. Case 1: V (P1) ∩ V (P2) = ∅. Let Q be the shortest temporal path that contains vertices from both P1 and P2. Let p1, p 2 be the end-vertices of Q that lie on P1 and P2, respectively. Since for each i ∈ { 1, 2} and Z ∈ { F1, F 2}, NG(wi) ∩ Z̸ = ∅, Q is oriented from p1 to p2, or vice versa. Without loss of generality, assume that Q is oriented from p2 to p1. Then, necessarily p2 = w2, since otherwise w2 is not temporally connected with any vertex of F1, a contradiction. By a similar argument, we have p1 = v1. Now, consider the clique induced by N (v2) ∩ F1. Due to Lemma 11, all vertices of N (v2) ∩ F1 and v2 itself are contained in a temporal path, which also necessarily contains w2. Hence all of F2 (P2, even) is in a temporal path containing v1,since the path has to go through v1 to reach other vertices of F1, and so F2 ∪ { v1} forms a clique. This is a contradiction as v1 necessarily has at least one non-neighbor in F2. Case 2: V (P1) ∩ V (P2)̸ = ∅. Let Q denote the maximal vertex-inclusion-wise path that is common to both P1 and P2, i.e. , the path induced by the set V (P1) ∩ V (P2). Note that Q does not contain any vertex from H, since a vertex of H in Q would be temporally connected to every other vertex of F1 ∪ F2, a contradiction. Let p denote source of Q and for each i ∈ { 1, 2} let Qi (resp. Q′ i ) be the subpath of Pi between p and wi (resp. p and vi). Note that no vertex of Q′ 1 \ { p} can be in a directed path with any vertex of Q′ 2 \ { p}. Similarly, no vertex of Q1 \ { p} can be in a directed path with any vertex of Q2 \ { p}. Thus, the two subgraphs of the connectivity graph G induced by the vertices of (V (Q1) ∪ V (Q2)) \ V (Q) and (V (Q′ 1 ) ∪ V (Q′ 2 )) \ { p} each induce the complement of a complete bipartite graph. As H does not contain any complement of a 4-cycle as an induced subgraph, this implies that there are exactly three vertices of H in each of these two subsets of vertices (since Q does not contain any vertex of H). In particular, H has size either 6 or 7. Without loss of generality, we assume that Q′ 1 contains only one vertex of H, which must be v1. Thus, there are two vertices of H in Q′ 2 : v2 and another vertex, say, v′ 2 . Since F1 and F2 both have size 3, the vertices of H in Q1 are w1 and (say) w′ 1 , and the only vertex of H in Q2 is w2. Now, observe that if v2 is contained in a temporal path with w1, then v2, v′ 2 , w′ 1 and w1 are in a common temporal path. This is not possible, since in H, there is either one or two non-edges among these four vertices (depending on whether H has size 7 or 6). Thus, w1 and v2 are in no common temporal path. 
Since v2 has no non-neighbour in H other than v1 and w1, v2 and w′ 1 are in a common temporal path, that also contains v′ 2 . Thus, {v2, v ′ 2 , w ′ 1 } form a clique in H. Similarly, {v′ 2 , w ′ 1 , w 1} also form a clique in H. If H had size 6, v′ 2 and w′ 1 would need to be non-neighbours in H (since w1 already has two non-neighbours in H), a contradiction. Thus, H has size 7, and the two non-neighbours in H of u7 (the vertex of H not in F1 ∪ F2) are v′ 2 and w′ 1 (since they are the only ones without two non-neighbours in H). But u7 has to be temporally connected to all of v1, v2, w1 and w2, so u7 has to be in Q. But any temporal path from v2 to a vertex of Q has to contain v′ 2 , and so u7 and v′ 2 are temporally connected, a contradiction. This completes the proof. 4.3 Completion of the proof of Theorem 3 Lemmas 15, 18 and 19 imply that the connectivity graph of a temporal oriented tree is weakly chordal. Note that this cannot be strengthened to chordal, as there are temporal oriented trees whose connectivity graphs contain induced 4-cycles: let λ(−→s1c) = λ(−→s2c) = 1 and λ(−→ ct 1) = λ(−→ ct 2) = 2 , the vertices s1, t1, s2 and t2 induce a C4 in the connectivity graph. Corollary 12 implies the correspondence between a minimum temporal path cover of a temporal oriented tree and a minimum clique cover of the corresponding connectivity graph. We then conclude using Theorem 8 for the algorithm. Observation 10, Corollary 12 and Theorem 8 together give the Dilworth property. 11 5 Temporally Disjoint Path Cover on temporal oriented trees Theorem 4. Temporally Disjoint Path Cover is NP-hard on temporal oriented trees. Proof. The reduction is inspired from Theorem 1 in [29, 30]. However, in [29, 30], the terminal vertices of the paths are fixed, which is not the case in our problem. Thus, nontrivial additions are needed. We reduce from Unary Bin Packing , which is NP-complete . Unary Bin Packing Instance: A list of item sizes (x1, . . . , x n), a number of bins b, each of size B. x1, . . . , x n, b, B are integers encoded in unary, and verify Pni=1 xi = bB .Question: Is it possible to assign every item to a bin, filling all the bins? The idea of the reduction will be to have pairs of vertices serving as bins, each with B leaves, and to have vertices representing, for each item, the bins that are unused by this item. We construct the following temporal oriented tree T = ( T, λ ): • V (T ) = {c} ∪ b [ i=1 {si, t i} ∪ B [ j=1 {rji , u ji }  ∪ n [ i=1 (xi−1)( b−1) [ j=1 {vji , w ji } • A(T ) = b [ i=1 {−→sic, −→ ct i} ∪ B [ j=1 {−−→ rji si, −−→ tiuji }  ∪ n [ i=1 (xi−1)( b−1) [ j=1 {−→ vji c, −−→ cw ji } For the sake of simplicity, we will use layers to represent the time labels: for each item i, the layer λi will assign to every arc a subset of {1, . . . , 2bx i +4 }. Thus, for an arc a, we have λ(a) = n [ i=1 {ℓ+Pi−1 j=1 (2 bx i +4) | ℓ ∈ λi(a)}. This allows us to describe the layers starting with label 1. We call two time labels 2-successive if they differ by 2. The time labels in layer i are as follows. • For every j ∈ { 1, . . . , b } and k ∈ { 1, . . . , B }: λi(−−→ rkj sj ) = {every 2-successive label between 2( j − 1) xi + 1 and 2jx i − 1}; λi(−→sj c) = {every 2-successive label between 2( j − 1) xi + 2 and 2jx i}; λi(−→ ct j ) = {every 2-successive label between 2( j − 1) xi + 3 and 2jx i + 1 }; λi(−−→ tj ukj ) = {every 2-successive label between 2( j − 1) xi + 4 and 2jx i + 2 }. • For every j ∈ { 1, . . . 
, (xi − 1)( b − 1) }: λi(−→ vji c) = Sbk=1 {the xi − 1 first labels of −→ ct k}; λi(−−→ cw ji ) = Sbk=1 {the xi − 1 highest labels of −→skc}. For a given k, those are called the bin-k-labels .A layer of this construction is depicted in Figure 5. The number of vertices is 1 + 2 b + 2 bB + Pni=1 (xi − 1)( b − 1) = 2 b(bB − n + 1) + 2 n + 1 , among which 2b(bB − n) + 2 n are leaves (half of them are sources, the other half sinks). The number of different time labels is 2bx i + 4 for layer i, and thus a total of 2b(bB + 4 n).Hence, the reduction is polynomial. We now prove that there is a valid assignment of every item to a bin if and only if there is a temporally disjoint path cover of T of size b(bB − n) + n.(⇒) Suppose that f : {1, . . . , n } → { 1, . . . , b } is a valid assignment (so P i∈f−1(j) xi = B for every j). In every layer i, we take xi (r, u )-paths (−−−−−→ rjf (i)sf (i), 2( f (i) − 1) xi + 2 a), (−−−→sf (i)c, 2( f (i) − 1) xi + 2 a + 1) , (−−−→ ct f (i), 2( f (i) − 1) xi + 2 a + 2) , (−−−−−−→ tf (i)ukf (i), 2( f (i) − 1) xi + 2 a + 3) using uncovered rjf (i)’s and ukf (i)’s and for a ∈ { 1, . . . , x i}; as well as all the paths from vji to wki such that the arc −→ vji c is used with a bin-k-label for k̸ = f (i). The first paths will always be possible, and will cover every rji and uji once all the layers are done, since we will use exactly B paths for every i. The other paths can also clearly be constructed, 12 c s1s2s3 t1t2t3 2,4,6 8, 10 , 12 14 , 16 , 18 3,5,7 9, 11 , 13 15 , 17 , 19 rj 1’s uj 1’s rj 2’s uj 2’s rj 3’s uj 3’s 1,3,5 4,6,8 7,9,11 10,12,14 13,15,17 16,18,20 3,5,9,11,15,17 4,6,10,12,16,18 vk 1’s wk 1’s . . . . . . vkn’s wkn’s Figure 5: Layer 1 of the reduction of the proof of Theorem 4, with x1 = 3 , b = 3 , B = 4 . The only vkj ’s and wkj ’s linked with c in this layer are those with j = 1 .since whenever k̸ = f (i) the bin-k-labels are not used by the first paths, and so we will cover all the vji ’s and wji ’s. Hence, we obtain a temporally disjoint path cover of T (c and the si’s and ti’s are clearly covered) of size Pni=1 (xi + ( xi − 1)( b − 1)) = b(bB − n) + n.(⇐) Suppose that there is a temporal path cover of T of size b(bB − n) + n. The cover is of size twice the number of leaves, so each path in the cover will contain two leaves. Since every path between two leaves has to go through c and the paths are temporally disjoint, there can be at most bx i paths in layer i, with equality if and only if all in-arcs with successive labels from 2 to 2bx i are used. The vertices vji ’s are linked with c only in layer i, so they are covered in layer i by an arc −→ vji c at time d (note that d is odd); sort them in a sequence S ordered by the d’s. We call a subsequence of S 2-successive if the d’s are 2-successive. Claim 1. Each 2-successive subsequence of length k prevents k + 1 ( r, u )-paths from going through c.Proof. Each 2-successive subsequence S′ = ( d, d + 2 , . . . 
, d + 2 k − 2) (recall that d is odd) induces paths that occupy c at all times between d and d + 2 k − 1 (since the last path needs to leave c in order to allow another path to occupy c in turn); but due to the difference of parity between the time labels of the arcs −→ uji c and −→skc, S′ will prevent at least |S′| + 1 = k + 1 ( r, u )-paths from going through c, since no such path will be able to go through c in the interval between d − 1 and d + 2 k − 1.Note that having a path start in one layer and end in another layer will lower the maximum number of paths restricted to their layers by 1 for each of both, while gaining at most one path with arcs in two different layers. Furthermore, while the theoretical maximum number of paths in layer i is bx i, the vji ’s are covered in layer i, and thus by Claim 1 there can be at most bx i − (b − 1) paths in layer i. Since the number of paths in the cover is b(bB − n) + n = Pni=1 (bx i − (b − 1)) , there is no path having arcs in two different layers. Similarly, the only paths in the cover are (r, u )-paths and (v, w )-paths (since, otherwise, there would be even fewer paths in the layer). Claim 2. There are exactly xi (r, u )-paths in layer i. 13 Proof. Due to the definition of λi, there are at least b − 1 2-successive subsequences, each of length at most xi − 1. By Claim 1, they prevent at least (b − 1)( xi − 1) + ( b − 1) = ( b − 1) xi (r, u )-paths from going through c in layer i (since the number of (v, w )-paths is fixed, increasing the number of 2-successive subsequences will decrease their sizes, but will end up increasing the number of time labels during which c is occupied). Since there are at most bx i paths per layer, there are at most xi (r, u )-paths in layer i, with equality if and only if there are exactly b − 1 2-successive subsequences. As this holds for every layer, the path cover is of size b(bB − n) + n, and (b − 1)( bB − n) paths are necessary to cover the vkj ’s and wkj ’s, there has to be bB (r, u )-paths in all the layers; thus there are exactly xi such paths in layer i. Claim 3. All the (r, u )-paths of a layer i have to all go through the same sj .Proof. Assume by contradiction that there are (r, u )-paths in the same layer using sj and sk for j̸ = k. Then, in order to cover the vi’s and wi’s, we need to have at least b 2-successive subsequences (at least 1 for each of j and k, and at least b − 2 for the other bins). By Claim 1 and as in the proof of Claim 2, this implies that there will be less than xi (r, u )-paths in the layer, which contradicts Claim 2, so all the (r, u )-paths go through the same sj .The same argument can be applied to show that all (r, u )-paths go through the same tj′ , and that j = j′.Hence, we can construct the item assignment function f as follows: for every item i, let j be the integer such that there are xi (r, u )-paths in layer i going through sj and tj ; we define f (i) = j. Claim 3 implies that f will assign each item to exactly one bin. Moreover, by our construction, there are exactly B (r, u )-paths going through each sj , xi of which at layer i for i ∈ f −1(j) by Claim 2, and thus it is a correct assignment: P i∈f−1(j) xi = B for each bin j.We also show the following. Proposition 20. There are temporal oriented trees (whose underlying digraph is a star) that do not satisfy the TD-Dilworth property. Proof. Consider the temporal oriented tree Sk = ( Sk, λ ), with V (Sk) = {s1, . . . , s k} ∪ { t1, . . . 
6 Subclasses of temporal oriented trees

Theorem 5. Temporal Path Cover and Temporally Disjoint Path Cover can be solved in time: (a) $O(\ell n)$ on temporal oriented lines; (b) $O(\ell n^2)$ on temporal rooted directed trees; where $\ell$ is the maximum number of labels per arc and $n$ is the number of vertices. Furthermore, both classes satisfy the TD-Dilworth property.

Proof. (a) Let $P = (P, \lambda)$ be a temporal oriented line, and let $v$ be a leaf of $P$. We construct $C$ as follows. Assume that $v$ is incident with an in-arc $\overrightarrow{uv}$. We construct a maximum-length temporal path that covers $v$. Set $(b, c) = (u, v)$ and $\ell = \max \lambda(\overrightarrow{uv})$, and apply the following routine: while $b$ is incident with an in-arc $\overrightarrow{ab}$, if there is a time label smaller than $\ell$ in $\lambda(\overrightarrow{ab})$, add $\overrightarrow{ab}$ to the path and update $(b, c) = (a, b)$ and $\ell = \max\{k \in \lambda(\overrightarrow{ab}) \mid k < \ell\}$. When the routine stops, add the path to $C$, remove its vertices from $P$, and start again on a new leaf (or return $C$ if $P$ is empty). If $v$ was incident with an out-arc, we would do the same but with out-arcs, start with the smallest possible time label, and update $\ell = \min\{k \in \lambda(\overrightarrow{ab}) \mid k > \ell\}$. This algorithm computes its output in time $O(\ell n)$: every arc is visited at most once, but we need to parse the time labels in order to see whether we can keep extending the path or not. Furthermore, the set of leaves $v$ at which we start the routine forms a temporal antichain: assume on the contrary that $v_1$ and $v_2$ are two such vertices that are temporally connected, and assume without loss of generality that there is a path from $v_1$ to $v_2$ in the underlying oriented path; in this case, our algorithm would have added $v_1$ to the path that started being computed at $v_2$, a contradiction. Hence, $C$ is a temporally disjoint path cover with the same size as a temporal antichain, proving that it is minimum-size and that temporal oriented lines satisfy the TD-Dilworth property.

(b) We give an algorithm that solves Temporal Path Cover on a temporal rooted directed tree $T = (T, \lambda)$. First, we sort the vertices of $T$ with respect to their topological distance from the root in $T$ (with the highest distances first). Then, we construct a maximum-length temporal path covering the first uncovered vertex (which will be a sink of that path), until $T$ is fully covered. Note that this algorithm outputs $C$, which is clearly a temporal path cover: every vertex is covered by some path of $C$. Furthermore, it is an adaptation of the algorithm for temporal oriented lines: instead of successive leaves, we construct the paths from successive leaves with the highest topological distance from the root. We will show that $C$ is minimum-size, and later we will explain how to modify the algorithm in order to obtain a minimum-size TD-PC. Let $S$ be the set of sinks of paths of $C$. First, let $v_i$ and $v_j$ be two vertices of $S$ (without loss of generality, assume that $v_i$ was covered by the algorithm after $v_j$). They cannot be temporally connected since, otherwise, the graph being a temporal rooted directed tree, one of them is necessarily the predecessor of the other in a path from the root, and thus the maximum-length temporal path ending in $v_j$ would necessarily contain $v_i$, since there is a temporal path from $v_i$ to $v_j$; thus $v_i$ would have been covered at this step and cannot be in $S$. Hence, $S$ is a temporal antichain. We now prove that $S$ is maximum-size. Assume by contradiction that there is a temporal antichain $S'$ with $|S'| > |S|$. However, by definition, no two vertices in $S'$ can be covered by the same temporal path of $C$, and thus $|S'| \le |C|$. But $|C| = |S|$, a contradiction. Thus, $S$ is a maximum-size temporal antichain of the temporal rooted directed tree. Since the temporal antichain number is a lower bound for the temporal path cover number, this implies that the temporal path cover $C$ that the algorithm constructed is minimum-size, and thus that temporal rooted directed trees satisfy the Dilworth property.

We now modify the algorithm to obtain a minimum-size temporally disjoint path cover. Indeed, we can see that the maximum-length temporal path construction, which is executed for every vertex of $S$, can re-cover some vertices that had already been covered at a previous step. Let $v_i$ and $v_j$ be two vertices of $S$ such that their maximum-length temporal paths $P_i$ and $P_j$ constructed by the algorithm intersect. Since the graph is a temporal rooted directed tree, we can divide $P_i$ and $P_j$ into the following parts, without loss of generality: $P_i = P_i^{top} \cup (P_i \cap P_j) \cup P_i^{bot}$ and $P_j = (P_i \cap P_j) \cup P_j^{bot}$, where $P_i^{top} \cap P_j = P_i^{bot} \cap P_j = P_j^{bot} \cap P_i = \emptyset$ (note that we can have $P_i^{top} = \emptyset$). Hence, we can modify the algorithm by adding a loop that, for each such pair $(P_i, P_j)$, defines those subpaths and then removes $P_i \cap P_j$ from $P_j$. Now, $C$ will still be a temporal path cover, but the paths will be vertex-disjoint and thus temporally disjoint, and its size will not change. This implies that temporal rooted directed trees satisfy the TD-Dilworth property (contrasting with general temporal oriented trees), and thus the modified algorithm outputs the optimal solution for those two problems. The result of the algorithm and its modification is depicted in Figure 7.

Finally, one can check that the algorithm and its modification compute $C$ in time $O(\ell n^2)$. For each vertex in the antichain $S$, we have to construct the maximum-length temporal path. This can be done in time $O(\ell n)$ by taking at every arc the largest label that allows us to extend the path; thus we have to parse all the labels of every arc along the path, which can be of linear size in the worst case. Since we can have a linear number of antichain vertices, we have a complexity of $O(\ell n^2)$ to get the temporal path cover. The modification to make it temporally disjoint can be done in $O(n^2)$ time afterwards.

[Figure 7: Minimum-size temporal path covers and temporally disjoint path covers of the same temporal rooted directed tree (on the left, with one label per arc; on the right, with any labels per arc), as computed by our algorithm and its modification in the proof of Theorem 5.]
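As a concrete illustration of the greedy routine from the proof of Theorem 5(a), the following sketch computes a temporally disjoint path cover of a temporal oriented line. The list-of-(direction, labels) encoding of the line and the function name are assumptions made for this example, not notation from the paper.

```python
def td_path_cover_on_line(arcs):
    """Temporally disjoint path cover of a temporal oriented line.

    `arcs[i]` describes the single arc between vertices i and i+1 as a pair
    (direction, labels): "right" means the arc is i -> i+1, "left" means
    i+1 -> i, and `labels` is the sorted list of its time labels.  This
    encoding is an assumption made for the sketch; the proof only fixes the
    abstract routine.  Returns a list of paths, each given as the list of
    vertices it covers, following the greedy routine of Theorem 5(a).
    """
    n = len(arcs) + 1            # number of vertices of the line
    cover, pos = [], 0
    while pos < n:
        path = [pos]
        if pos == n - 1:         # last vertex left over: a single-vertex path
            cover.append(path)
            break
        direction, labels = arcs[pos]
        if direction == "left":
            # arc pos+1 -> pos: build a path *ending* at pos, walking right
            # while labels strictly decrease (start from the maximum label).
            ell = max(labels)
            path.append(pos + 1)
            i = pos + 1
            while i < n - 1 and arcs[i][0] == "left" and any(k < ell for k in arcs[i][1]):
                ell = max(k for k in arcs[i][1] if k < ell)
                i += 1
                path.append(i)
        else:
            # arc pos -> pos+1: build a path *starting* at pos, walking right
            # while labels strictly increase (start from the minimum label).
            ell = min(labels)
            path.append(pos + 1)
            i = pos + 1
            while i < n - 1 and arcs[i][0] == "right" and any(k > ell for k in arcs[i][1]):
                ell = min(k for k in arcs[i][1] if k > ell)
                i += 1
                path.append(i)
        cover.append(path)
        pos = path[-1] + 1       # restart at the next leaf of what remains
    return cover

# Example: 0 <- 1 <- 2 -> 3 with labels {1,3}, {2}, {5}: the first path is
# 2 -> 1 -> 0 (labels 2, then 3 on the last arc), and vertex 3 needs its own path.
print(td_path_cover_on_line([("left", [1, 3]), ("left", [2]), ("right", [5])]))
# [[0, 1, 2], [3]]
```

In the example, the two returned paths match the temporal antichain $\{0, 3\}$ of the line, which is the TD-Dilworth behaviour established in the proof. Each arc is visited once and its label list scanned, giving the stated $O(\ell n)$ running time under this encoding.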
7 Algorithms for temporal digraphs of bounded treewidth

Recall that an algorithm is FPT with respect to some parameter $k$ of the input if it runs in time $f(k) \cdot n^{O(1)}$ for inputs of size $n$, where $f$ is any computable function; the algorithm is XP for this parameter if the running time is in $n^{f(k)}$. We prove the following theorem.

Theorem 6. There is an algorithm for Temporally Disjoint Path Cover on general temporal digraphs that is FPT with respect to the treewidth of the underlying undirected graph and the maximum number of labels per arc. For Temporal Path Cover on general temporal digraphs, there is an XP algorithm for the same parameter.

To prove the theorem, we use the well-known concept of nice tree decompositions, which gives a very structured decomposition of a graph.

Definition 21. A nice tree decomposition of an undirected graph $G = (V, E)$ is a rooted tree $T$ where each node $v$ is associated with a subset $X_v$ of $V$ called its bag, and each internal node has one or two children, with the following properties.
1. The set of nodes of $T$ whose bags contain a given vertex of $G$ forms a nonempty connected subtree of $T$.
2. Any two adjacent vertices of $G$ appear in a common bag of $T$.
3. Each node of $T$ belongs to one of the following types: introduce, forget, join or leaf.
4. A join node $v$ has two children $v_1$ and $v_2$ such that $X_v = X_{v_1} = X_{v_2}$.
5. An introduce node $v$ has one child $v_1$ such that $X_v \setminus \{x\} = X_{v_1}$, where $x \in X_v$.
6. A forget node $v$ has one child $v_1$ such that $X_v = X_{v_1} \setminus \{x\}$, where $x \in X_{v_1}$.
7. A leaf node $v$ is a leaf of $T$ with $X_v = \emptyset$.
8. The tree $T$ is rooted at a leaf node $r$ with $X_r = \emptyset$.

It is known that for any undirected graph of treewidth $tw$ with $n$ vertices, a tree decomposition of width at most $2\,tw$ can be computed in time $2^{O(tw)} n$, and the obtained tree decomposition can be transformed into a nice tree decomposition of the same width with $O(tw \cdot n)$ bags in time $O(tw^2 \cdot n)$.

For the remainder of this section, we shall work with a temporal digraph $D = (V, A_1, \ldots, A_{t_{\max}})$ and a nice tree decomposition $T$ of the underlying undirected graph of $D$. For a node $v$ of $T$, let $T_v$ denote the subtree of $T$ rooted at $v$, and let $D_v$ denote the temporal digraph induced by the union of the bags of the nodes of $T_v$. The main idea behind the algorithm is to perform a bottom-up dynamic programming algorithm over $T$. We can bound the number of partial solutions that can intersect a given bag, partly because of the following.

Observation 22. Let $C$ be a temporally disjoint path cover of $D$. Then any arc of $D$ appears in at most $t_{\max}$ many paths of $C$.

Consider an arbitrary temporally disjoint path cover $C$ of $D_v$. Observation 22 implies that the number of temporally disjoint paths of $C$ that contain at least one arc from the digraph induced by $X_v$ is at most the number of arcs in this digraph, times the number of time-steps at which each of the arcs is active in $D$. This is at most $p = \binom{tw}{2} \cdot t_{\max}$.

Based on these, we create the following temporal multi-digraph. Let $D'$ be a copy of $D$. Now, for each arc $a$ with time labels $\lambda(a) = L \subseteq [t_{\max}]$, introduce $|L|$ many new arcs $a_1, \ldots, a_{|L|}$, each with a distinct time label of $L$. Note that any temporally disjoint path cover of $D$ can be transformed into a temporally disjoint path cover of $D'$ whose temporal paths are pairwise edge-disjoint. Therefore, from now on, we will consider $D'$ instead of $D$.
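The arc-splitting step that produces $D'$ is easy to make explicit. The sketch below assumes a dictionary representation of the temporal digraph (arcs mapped to their label sets), which is an illustrative choice rather than the paper's encoding.

```python
def split_arcs(temporal_digraph):
    """Build the multi-digraph D' from D by replacing every arc that carries
    a set of time labels with one parallel arc per label.

    `temporal_digraph` is assumed to be a dict mapping an arc (u, v) to the
    set of time labels at which it is active; this encoding is chosen for
    illustration only.  Returns the arcs of D' as (u, v, label) triples,
    each active at a single time step.
    """
    multi_arcs = []
    for (u, v), labels in temporal_digraph.items():
        for t in sorted(labels):
            multi_arcs.append((u, v, t))  # one parallel copy per time label
    return multi_arcs

# A single arc active at times 2, 5 and 7 becomes three parallel arcs.
D = {("a", "b"): {2, 5, 7}, ("b", "c"): {6}}
print(split_arcs(D))
# [('a', 'b', 2), ('a', 'b', 5), ('a', 'b', 7), ('b', 'c', 6)]
```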
We now describe the states of our dynamic programming algorithm. To do so, for a temporally disjoint path cover $C$ of $D$, its type $\tau$ with respect to a node $v$ of $T$ is determined by the following elements:
• a partition $Q = Q_0, Q_1, \ldots, Q_t$ of the arcs of $D'$ inside $X_v$, where each part $Q_i$ corresponds to a temporal path $P(Q_i)$ of $C$ (note that this path may form a set of disconnected sub-paths inside $X_v$), and where the part $Q_0$ is reserved for those arcs that do not belong to any path of $C$;
• for each part $Q_i$ of $Q$, the subset $V_i$ of vertices of $X_v$ that belong to $P(Q_i)$ (those are the endpoints of arcs in $Q_i$, together with those vertices of $P(Q_i)$ that are not incident with any arc in $Q_i$);
• for each part $Q_i$ of $Q$, the order of the vertices of $V_i$ inside $P(Q_i)$, where $P(Q_i)$ is ordered from lowest to largest time label;
• for each part $Q_i$ of $Q$, the set of vertices $x$ in $V_i$ with one or two arcs in $P(Q_i)$ from $x$ to a vertex $y$ not in $X_v$, together with the time labels of these arcs in $P(Q_i)$, and whether the arc connects $x$ to a vertex in $D_v$ or in $D \setminus D_v$.

The total number of different types of solutions with respect to any node $v$ is at most $p^p \times 2^{tw+1} \times (tw+1)! \times 2^{tw+2} \times t_{\max}^2$, which is $2^{O(p \log p)}$. For a type $\tau$ with respect to node $v$, its size is the bit-length of its encoding. A type $\tau$ with respect to $v$ is said to be consistent if, for each part $Q_i$ of $Q$, there is a subset of vertices of $V_i$ whose ordering, together with the arcs of $Q_i$, forms a valid temporal path (in particular, all labels of $Q_i$ must be distinct). Moreover, the labels of required arcs from a vertex $x$ of $X_v$ to a vertex $y$ outside of $X_v$ must correspond to an actual arc label for some arc in $D$ connecting $x$ to some vertex outside of $X_v$. Finally, every vertex of $X_v$ must belong to some set $V_i$. Whether a type $\tau$ with respect to $v$ is consistent can be checked in time proportional to $p$ and the size of $\tau$.
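As a small illustration of the consistency condition on a single part $Q_i$, the following sketch checks that a sequence of single-label arcs forms a valid temporal path with strictly increasing (hence pairwise distinct) labels, for the simple case where the arcs of $Q_i$ form one connected segment. The triple encoding of arcs is an assumption made for this example.

```python
def is_valid_temporal_path(arcs):
    """Check that `arcs`, given as (tail, head, label) triples already sorted
    in the intended traversal order, form a temporal path: the head of each
    arc is the tail of the next one and the time labels strictly increase
    (which also makes them pairwise distinct).  Encoding chosen for
    illustration only; disconnected sub-paths inside a bag are not handled.
    """
    for (u1, v1, t1), (u2, v2, t2) in zip(arcs, arcs[1:]):
        if v1 != u2:     # consecutive arcs must share an endpoint
            return False
        if t2 <= t1:     # labels must strictly increase along the path
            return False
    return True

# a -(1)-> b -(4)-> c is a temporal path; reversing the labels is not.
print(is_valid_temporal_path([("a", "b", 1), ("b", "c", 4)]))  # True
print(is_valid_temporal_path([("a", "b", 4), ("b", "c", 1)]))  # False
```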
For a node $v$ of $T$ and a solution type $\tau$ with respect to $v$, let $opt(v, \tau)$ denote the minimum size of a solution for $D_v$ that is of type $\tau$ with respect to $v$. The dynamic programming algorithm traverses the nice tree decomposition $T$ bottom-up and computes, for each node $v$, all the values $opt(v, \tau)$. The computation depends on whether the current node of $T$ is a leaf, forget, introduce or join node.

Leaf node: There is nothing to do, since for a leaf node $v$ we have $X_v = \emptyset$, so there is no partial solution with respect to $v$.

Forget node: Let $v$ be a forget node that has a child node $v'$ such that $X_v = X_{v'} \setminus \{x\}$. For each possible consistent solution type $\tau$ with respect to $v$, we check which (consistent) solution types $\tau'$ with respect to $v'$ are compatible with $\tau$. Whether $\tau$ and $\tau'$ are compatible (meaning that, roughly speaking, $\tau$ corresponds to $\tau'$ after removing $x$) can be computed in time proportional to $p$, the size of $\tau$ and that of $\tau'$. Among those, we discard the types where $x$ is required to have an arc to a vertex of $D \setminus D_v$ in its solution path (since $x$ will never appear again in the tree decomposition). We let $opt(v, \tau)$ be the minimum value among all values $opt(v', \tau')$ with $\tau'$ one of the non-discarded types compatible with $\tau$.

Introduce node: Let $v$ be an introduce node that has a child node $v'$ such that $X_v = X_{v'} \cup \{x\}$. For each possible consistent solution type $\tau$ with respect to $v$, we check which solution types with respect to $v'$ are compatible with it. Here, this means that $\tau$ can be obtained from $\tau'$ either as a new solution path with the single vertex $x$, or by adding $x$ to one of the solution paths described by $\tau'$ (through an arc of the correct label as described in $\tau'$). If $x$ forms a single-vertex path in $\tau$, we let $opt(v, \tau)$ be the minimum over $opt(v', \tau') + 1$, where $\tau'$ is compatible with $\tau$; otherwise, we take the minimum over all $opt(v', \tau')$ for compatible $\tau'$.

Join node: Let $v$ be a join node with children $v_1$ and $v_2$ and $X_v = X_{v_1} = X_{v_2}$. For any three possible solution types $\tau$, $\tau_1$, $\tau_2$ that are consistent with respect to $v$, $v_1$ and $v_2$, respectively, we check whether they are compatible. For this, the partitions of the arcs of $X_v$ have to agree, as well as the order of the vertices inside each solution path. The other elements should also be compatible. For example, if in $\tau_1$ there is a vertex $x \in X_v$ that is required to have a single neighbour in $D_{v_1} \setminus X_v$ in its solution path, and the same holds for $\tau_2$ and $D_{v_2} \setminus X_v$, then the types are not compatible, since combining them would give two such neighbours to $x$ in $D_v \setminus X_v$. Similarly, if in $\tau_1$, $x$ is required to have a neighbour in $D_{v_1} \setminus X_v$ in its solution path and another neighbour in $D \setminus D_{v_1}$, then $\tau_1$ is compatible with $\tau$ and $\tau_2$ if in $\tau_2$, $x$ is required to have a neighbour in $D_{v_2} \setminus X_v$ in its solution path and another neighbour in $D \setminus D_{v_2}$, while in $\tau$, $x$ is required to have two neighbours in $D_v \setminus X_v$ in its solution path. Other similar cases arise as well. For three compatible types $\tau$, $\tau_1$, $\tau_2$, $opt(v, \tau)$ is obtained from $opt(v_1, \tau_1) + opt(v_2, \tau_2)$ by subtracting the number of solution paths that intersect the bag (and that would otherwise be counted twice).

This dynamic programming algorithm for Temporally Disjoint Path Cover takes $2^{O(p \log p)} n$ time, which is an FPT running time of $2^{O(tw^2 t_{\max} \log(tw \cdot t_{\max}))} n$. For Temporal Path Cover, the algorithm is similar; however, as the paths are not necessarily disjoint, the type of a solution with respect to $v$ must also contain the information of how many times a given part of $Q$, representing a solution path with a certain intersection with the subgraph induced by $X_v$, appears in the solution $C$. Thus, the number of possible solution types becomes $k^{O(p \log p)}$, where $k$ is the solution size. We obtain a running time of $k^{O(p \log p)}$, which is an XP running time of $n^{O(tw^2 t_{\max} \log(tw \cdot t_{\max}))}$ since we can assume $k \le n$.

8 Conclusion

We have initiated the study of two natural path covering problems in temporal DAGs, which, in the static case, are related to Dilworth's theorem and are polynomial-time solvable. Both problems become NP-hard for temporal DAGs, even in a very restricted setting. Interestingly, and somewhat unexpectedly, they behave differently on temporal oriented trees: we showed that Temporal Path Cover is polynomial-time solvable on temporal oriented trees (and a temporal version of Dilworth's theorem holds in this setting), while Temporally Disjoint Path Cover remains NP-hard for this class. To prove our polynomial-time algorithm for Temporal Path Cover on temporal oriented trees, we have reduced the problem to Clique Cover in a static undirected graph, which turns out to be weakly chordal. This is a powerful technique, and the correspondence between the two problems is quite enlightening for the structure of temporal paths in an oriented tree. Nevertheless, it seems unlikely that this particular technique can be used on temporal digraph classes that are far from trees, as it was essential for the proof that any two vertices are joined by only one path in the underlying tree. However, this general technique could likely be applied in other temporal settings. We do not know if our algorithm for treewidth and number of time-steps is optimal.
In particular, can we obtain an FPT algorithm for Temporal Path Cover for this parameter? One could also explore other (structural) parameterizations of the problems. We note that many of our results for Temporally Disjoint Path Cover also hold for its stricter vertex-disjoint version (note that a vertex-disjoint version of Temporally Disjoint Paths is studied in ), in particular, the NP-hardness result for restricted DAGs and the polynomial-time algorithms for rooted directed trees and oriented lines. References Eleni C. Akrida, George B. Mertzios, Sotiris E. Nikoletseas, Christoforos L. Raptopoulos, Paul G. Spirakis, and Viktor Zamaraev. How fast can we reach a target vertex in stochastic temporal graphs? J. Comput. Syst. Sci. , 114:65–83, 2020. Eleni C. Akrida, George B. Mertzios, and Paul G. Spirakis. The temporal explorer who returns to the base. In Pinar Heggernes, editor, Algorithms and Complexity - 11th International Conference, CIAC 2019, Rome, Italy, May 27-29, 2019, Proceedings , volume 11485 of Lecture Notes in Computer Science ,pages 13–24. Springer, 2019. Binh-Minh Bui-Xuan, Afonso Ferreira, and Aubin Jarry. Computing shortest, fastest, and foremost journeys in dynamic networks. Int. J. Found. Comput. Sci. , 14(2):267–285, 2003. Manuel Cáceres. Minimum chain cover in almost linear time. In Kousha Etessami, Uriel Feige, and Gabriele Puppis, editors, 50th International Colloquium on Automata, Languages, and Programming, ICALP 2023, July 10-14, 2023, Paderborn, Germany , volume 261 of LIPIcs , pages 31:1–31:12. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2023. Manuel Cáceres, Massimo Cairo, Brendan Mumey, Romeo Rizzi, and Alexandru I. Tomescu. Sparsifying, shrinking and splicing for minimum path cover in parameterized linear time. In ACM-SIAM Symposium on Discrete Algorithms, SODA , pages 359–376. SIAM, 2022. Manuel Cáceres, Brendan Mumey, Edin Husić, Romeo Rizzi, Massimo Cairo, Kristoffer Sahlin, and Alexandru I. Tomescu. Safety in multi-assembly via paths appearing in all path covers of a DAG. IEEE/ACM Transactions on Computational Biology and Bioinformatics , 19(6):3673–3684, 2022. Arnaud Casteigts, Paola Flocchini, Walter Quattrociocchi, and Nicola Santoro. Time-varying graphs and dynamic networks. Int. J. Parallel Emergent Distributed Syst. , 27(5):387–408, 2012. Arnaud Casteigts, Anne-Sophie Himmel, Hendrik Molter, and Philipp Zschoche. Finding temporal paths under waiting time constraints. Algorithmica , 83(9):2754–2802, 2021. Yangjun Chen and Yibin Chen. On the graph decomposition. In 2014 IEEE Fourth International Conference on Big Data and Cloud Computing , pages 777–784, 2014. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algo-rithms, Third Edition . The MIT Press, 3rd edition, 2009. Marek Cygan, Fedor V Fomin, Łukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, and Saket Saurabh. Parameterized algorithms , volume 4. Springer, 2015. Pradipta Kumar Das, Himansu Sekhar Behera, Prabir Kumar Jena, and Bijaya K. Panigrahi. An intelligent multi-robot path planning in a dynamic environment using improved gravitational search algorithm. Int. J. Autom. Comput. , 18(6):1032–1044, 2021. Robert P. Dilworth. A decomposition theorem for partially ordered sets. Annals of Mathematics , 51:161– 166, 1950. 19 Martin E. Dyer and Alan M. Frieze. Planar 3DM is NP-complete. Journal of Algorithms , 7(2):174–184, 1986. D. R. Ford and D. R. Fulkerson. Flows in Networks . Princeton University Press, 1962. 
Delbert R Fulkerson. Note on Dilworth’s decomposition theorem for partially ordered sets. In Proceedings of the American Mathematical Society , volume 7, pages 701–702, 1956. M. R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness . W. H. Freeman, 1979. Martin Charles Golumbic. Algorithmic Graph Theory and Perfect Graphs (Annals of Discrete Mathe-matics, Vol 57) . North-Holland Publishing Co., NLD, 2004. Ryan B. Hayward. Weakly triangulated graphs. J. Comb. Theory, Ser. B , 39(3):200–208, 1985. doi: 10.1016/0095-8956(85)90050-4 . Ryan B. Hayward, Jeremy P. Spinrad, and R. Sritharan. Improved algorithms for weakly chordal graphs. ACM Trans. Algorithms , 3(2):14, 2007. doi:10.1145/1240233.1240237 . Petter Holme. Modern temporal network theory: a colloquium. European Physical Journal B , 88(9), 2015. H. V. Jagadish. A compression technique to materialize transitive closure. ACM Transactions on Database Systems , 15(4):558–598, 1990. Naoyuki Kamiyama and Yasushi Kawase. On packing arborescences in temporal networks. Inf. Process. Lett. , 115(2):321–325, 2015. David Kempe, Jon M. Kleinberg, and Amit Kumar. Connectivity and inference problems for temporal networks. J. Comput. Syst. Sci. , 64(4):820–842, 2002. Nina Klobas, George B. Mertzios, Hendrik Molter, Rolf Niedermeier, and Philipp Zschoche. Interference-free walks in time: temporally disjoint paths. Autonomous Agents and Multi Agent Systems , 37(1):1, 2023. Nina Klobas, George B Mertzios, Hendrik Molter, Rolf Niedermeier, and Philipp Zschoche. Interference-free walks in time: Temporally disjoint paths. Autonomous Agents and Multi-Agent Systems , 37(1):1, 2023. T. Kloks. Treewidth, Computations and Approximations . Springer, 1994. T. Korhonen. A single-exponential time 2-approximation algorithm for treewidth. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS 2021) , pages 184–192, 2022. Pascal Kunz, Hendrik Molter, and Meirav Zehavi. In which graph structures can we efficiently find temporally disjoint paths and walks? In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI 2023, 19th-25th August 2023, Macao, SAR, China , pages 180–188. ijcai.org, 2023. Pascal Kunz, Hendrik Molter, and Meirav Zehavi. In which graph structures can we efficiently find temporally disjoint paths and walks? arXiv preprint arXiv:2301.10503 , 2023. L. Lovász. Normal hypergraphs and the perfect graph conjecture. Discrete Mathematics , 2(3):253–267, 1972. Veli Mäkinen, Alexandru I. Tomescu, Anna Kuosmanen, Topi Paavilainen, Travis Gagie, and Rayan Chikhi. Sparse dynamic programming on dags with small width. ACM Trans. Algorithms , 15(2), feb 2019. 20 Loris Marchal, Hanna Nagy, Bertrand Simon, and Frédéric Vivien. Parallel scheduling of DAGs under memory constraints. In 2018 IEEE International Parallel and Distributed Processing Symposium, IPDPS 2018, Vancouver, BC, Canada, May 21-25, 2018 , pages 204–213. IEEE Computer Society, 2018. Andrea Marino and Ana Silva. Eulerian walks in temporal graphs. Algorithmica , 85(3):805–830, 2023. George B. Mertzios, Hendrik Molter, Malte Renken, Paul G. Spirakis, and Philipp Zschoche. The complexity of transitively orienting temporal graphs. In Filippo Bonchi and Simon J. Puglisi, editors, 46th International Symposium on Mathematical Foundations of Computer Science, MFCS 2021, August 23-27, 2021, Tallinn, Estonia , volume 202 of LIPIcs , pages 75:1–75:18. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021. 
Othon Michail. An introduction to temporal graphs: An algorithmic perspective. In Christos D. Zaro-liagis, Grammati E. Pantziou, and Spyros C. Kontogiannis, editors, Algorithms, Probability, Networks, and Games - Scientific Papers and Essays Dedicated to Paul G. Spirakis on the Occasion of His 60th Birthday , volume 9295 of Lecture Notes in Computer Science , pages 308–343. Springer, 2015. Jérôme Monnot and Sophie Toulouse. The path partition problem and related problems in bipartite graphs. Oper. Res. Lett. , 35(5):677–684, 2007. Simeon C. Ntafos and S. Louis Hakimi. On path cover problems in digraphs and applications to program testing. IEEE Transactions on Software Engineering , SE-5(5):520–529, 1979. Jari Saramäki and Petter Holme, editors. Temporal Network Theory . Computational Social Sciences. Springer, Germany, October 2019. Jeremy P. Spinrad and R. Sritharan. Algorithms for weakly triangulated graphs. Discret. Appl. Math. ,59(2):181–191, 1995. doi:10.1016/0166-218X(93)E0161-Q . Roni Stern, Nathan R. Sturtevant, Ariel Felner, Sven Koenig, Hang Ma, Thayne T. Walker, Jiaoyang Li, Dor Atzmon, Liron Cohen, T. K. Satish Kumar, Roman Barták, and Eli Boyarski. Multi-agent pathfind-ing: Definitions, variants, and benchmarks. In Pavel Surynek and William Yeoh, editors, Proceedings of the Twelfth International Symposium on Combinatorial Search, SOCS 2019, Napa, California, 16-17 July 2019 , pages 151–159. AAAI Press, 2019. Dawei Sun, Jingkai Chen, Sayan Mitra, and Chuchu Fan. Multi-agent motion planning from signal temporal logic specifications. IEEE Robotics Autom. Lett. , 7(2):3451–3458, 2022. Huanhuan Wu, James Cheng, Yiping Ke, Silu Huang, Yuzhen Huang, and Hejun Wu. Efficient algorithms for temporal path computation. IEEE Transactions on Knowledge and Data Engineering , 28(11):2927– 2942, 2016. Haifeng Xu, Fei Fang, Albert Xin Jiang, Vincent Conitzer, Shaddin Dughmi, and Milind Tambe. Solving zero-sum security games in discretized spatio-temporal domains. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence , AAAI’14, page 1500–1506. AAAI Press, 2014. Yue Yin and Bo An. Efficient resource allocation for protecting coral reef ecosystems. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence , IJCAI’16, page 531–537. AAAI Press, 2016. Youzhi Zhang, Bo An, Long Tran-Thanh, Zhen Wang, Jiarui Gan, and Nicholas R. Jennings. Optimal escape interdiction on transportation networks. In Carles Sierra, editor, Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017 , pages 3936–3944. ijcai.org, 2017. 21
Published Time: 2023-08-21T15:19:11+00:00

The Lovers of the Poor by Gwendolyn Brooks - Poem Analysis
===============

The Lovers of the Poor
By Gwendolyn Brooks

Gwendolyn Brooks' scathing critique exposes Ladies' insincere charity, highlighting social inequality and privilege.

Gwendolyn Brooks was the first Black author to win the Pulitzer Prize. She is one of the most widely-read poets of the 20th century.

Key Poem Information
Central Message: Expose the Ladies' hollow charity, addressing social inequality and the need for genuine empathy
Speaker: Unknown
Poetic Form: Free Verse
Themes: Desire, Disappointment, Dreams, Failure, Identity
Emotions Evoked: Disgust, Empathy, Frustration, Regret, Remorse
Time Period: 20th Century

Poem Guide by Hilary Benard, M.A. in Comparative Literature & Critical Theories and B.A. Honors in Comparative History

'The Lovers of the Poor' by Gwendolyn Brooks is a scathing critique of the Ladies from the Ladies' Betterment League, who engage in superficial charity towards the poverty-stricken community. Through vivid imagery and powerful language, the poem exposes the stark contrast between the Ladies' privileged lives and the harsh realities the poor face. Brooks highlights the insincerity and detachment of the Ladies while also criticizing societal inequality and the need for genuine empathy and understanding when addressing poverty and social disparity.

The Lovers of the Poor
Gwendolyn Brooks

arrive. The Ladies from the Ladies' Betterment League
Arrive in the afternoon, the late light slanting
In diluted gold bars across the boulevard brag
Of proud, seamed faces with mercy and murder hinting
Here, there, interrupting, all deep and debonair,
The pink paint on the innocence of fear;
Walk in a gingerly manner up the hall.
Cutting with knives served by their softest care,
Served by their love, so barbarously fair.
Whose mothers taught: You'd better not be cruel!
You had better not throw stones upon the wrens!
Herein they kiss and coddle and assault
Anew and dearly in the innocence
With which they baffle nature.
Who are full,
Sleek, tender-clad, fit, fiftyish, a-glow, all
Sweetly abortive, hinting at fat fruit,
Judge it high time that fiftyish fingers felt
Beneath the lovelier planes of enterprise.
To resurrect. To moisten with milky chill.
To be a random hitching-post or plush.
To be, for wet eyes, random and handy hem.
Their guild is giving money to the poor.
The worthy poor. The very very worthy
And beautiful poor. Perhaps just not too swarthy?
perhaps just not too dirty nor too dim
Nor—passionate. In truth, what they could wish
Is—something less than derelict or dull.
Not staunch enough to stab, though, gaze for gaze!
God shield them sharply from the beggar-bold!
The noxious needy ones whose battle's bald
Nonetheless for being voiceless, hits one down.
But it's all so bad! and entirely too much for them.
The stench; the urine, cabbage, and dead beans,
Dead porridges of assorted dusty grains,
The old smoke, heavy diapers, and, they're told,
Something called chitterlings. The darkness. Drawn
Darkness, or dirty light. The soil that stirs.
The soil that looks the soil of centuries.
And for that matter the general oldness. Old
Wood. Old marble. Old tile. Old old old.
(...)

Read the full text to 'The Lovers of the Poor'.

Summary

'The Lovers of the Poor' by Gwendolyn Brooks portrays a scene where the affluent Ladies from the Ladies' Betterment League visit a poverty-stricken neighborhood to offer charitable assistance. The Ladies arrive in the late afternoon, their privileged appearances contrasting starkly with the impoverished surroundings. The poem emphasizes their attempts to help the 'worthy' poor while subtly expressing prejudice against those they deem too swarthy, dirty, or passionate. The Ladies, trying to embody charity, are unnerved by the destitution they encounter—the stench, filth, and struggles of the poor, which are entirely alien to their sheltered lives. The poet highlights the Ladies' disconnection from the harsh realities of poverty, contrasting it with their posh lifestyles in affluent areas like Glencoe and Lake Forest. Their substantial citizen hostess appears overwhelmed by the state of her home, cluttered with makeshift newspaper rugs and struggling children. The Ladies feel horrified and out of place amidst this 'make-do-ness' and squalor, their money seeming insufficient to address the deep-rooted problems faced by the poor. Despite their generous gestures, the Ladies can't fully comprehend the plight of the needy. The poem touches on themes of condescension and the ineffectiveness of superficial charity, raising questions about the authenticity of their intentions.
Brooks conveys the Ladies’ desire to escape the discomfort and return to their privileged lives. As they leave the poverty-stricken area, the Ladies maintain their composed appearance, avoiding touching the walls and keeping their distance from the impoverished surroundings. The poem ends with the Ladies attempting to maintain their composure while escaping the loaded air of the slum. This poem satirizes the patronizing attitude of privileged individuals who try to aid the poor without truly understanding their struggles. It critiques the superficiality of charity without genuine empathy and explores the complex dynamics between different social classes. The poem urges readers to reflect on the need for deeper understanding and meaningful engagement when addressing issues of poverty and inequality. Structure and Form ‘The Lovers of the Poor‘ by Gwendolyn Brooks adopts a free verse form, evident in its single stanza consisting of ninety-nine lines. The absence of a traditional rhyme scheme or consistent meter allows Brooks to convey her critique with poetic flexibility. This form mirrors the complexity of the social issues she addresses, granting her the space to explore various facets of the Ladies’ Betterment League’s actions and the impact on the impoverished community. The structure of a single extended stanza enhances the poem’s flow, giving it an almost conversational tone. This encourages readers to engage deeply with the unfolding critique, as Brooks dissects the Ladies’ superficial charity and contrasts it with the stark realities of poverty. The unbroken stanza mirrors the unbroken cycle of privilege and disparity that the poem seeks to unravel. The poem’s length serves as a canvas for Brooks’ intricate examination of class, compassion, and privilege. The absence of traditional stanza breaks mirrors the interconnectedness of the issues being discussed, emphasizing the ongoing nature of societal inequality and the complexities of addressing it. By utilizing free verse, Brooks defies conventional poetic constraints, allowing her to focus on the impact of her words rather than adhering to a strict structure. This form accentuates the power of her language, effectively conveying her message of the need for genuine empathy and understanding when addressing issues of poverty and social disparity. Themes In ‘The Lovers of the Poor,’ Gwendolyn Brooks addresses several themes, shedding light on social issues and human behavior. One of the central themes is the disparity between social classes and the patronizing attitudes of the affluent. Brooks critiques the Ladies’ Betterment League, a group of privileged women who visit the poor neighborhood to offer charity. The Ladies’ condescending behavior and discomfort with the impoverished surroundings exemplify the disconnect between the privileged and the marginalized. Another theme explored is the ineffectiveness of superficial charity. Despite their generous gestures, the Ladies fail to grasp the depth of poverty and its systemic roots. Brooks highlights this when the Ladies try to address the poor’s struggles with their clean money, reflecting the superficiality of their aid. Prejudice and superficial judgments are evident in the poem. The Ladies express subtle biases against the “very very worthy and beautiful poor,” demonstrating their inclination to help only those who fit their idealized image of deserving recipients. 
Their hesitations towards those who appear “too swarthy, dirty nor too dim” illustrate the biased lens through which they view poverty. The poem delves into the discomfort of the Ladies as they encounter the impoverished reality. Brooks emphasizes the stark contrast between the Ladies’ opulent lifestyles in places like Glencoe and Lake Forest and the squalor of the slum. The Ladies’ attempts to maintain their composed appearances and distance from the surroundings highlight their unease. Additionally, Brooks explores the notion of performative charity, where the Ladies engage in charitable acts not out of genuine empathy but to fulfill a social obligation. Their detachment from the poor’s struggles and their eagerness to escape the discomfort reinforce the superficiality of their efforts. Finally, the poem also touches on the resilience and survival of the poor despite their challenging circumstances. The imagery of “children children children” and their ability to endure reflects the strength and determination of those facing adversity. Gwendolyn Brooks skillfully weaves various themes into ‘The Lovers of the Poor,’ shedding light on class disparities, prejudice, performative charity, discomfort, and the resilience of the marginalized. Through her powerful imagery and nuanced portrayal of characters, Brooks encourages readers to reflect on the complexities of social issues and the need for genuine understanding and empathy. Poetic Techniques and Figurative Language In ‘The Lovers of the Poor,’ Gwendolyn Brooks employs various poetic techniques and figurative language to convey her powerful message. Imagery: One technique she uses is evocative imagery, enabling readers to visualize the stark contrast between the affluent Ladies and the impoverished neighborhood. For instance, the “late light slanting in diluted gold bars” paints a picture of the Ladies’ luxurious world against the backdrop of poverty. Irony: Brooks also employs irony throughout the poem. She describes the Ladies as “proud” but hints at their “mercy and murder.” This irony underscores their patronizing behavior and superficial empathy toward the poor. Metaphor: Figurative language, such as metaphor, adds depth to Brooks’ portrayal. The phrase “pink paint on the innocence of fear” symbolizes the Ladies’ attempt to mask their discomfort with a false sense of compassion. Enjambment: This is a technique where a sentence or phrase runs from one line to the next without a pause. The technique creates a flowing and continuous narrative, much like the Ladies’ ongoing inner dialogue and attempts to rationalize their actions. Alliteration and Consonance: Through alliteration and consonance, Brooks enhances the musicality of her lines. For example, “softest care, served by their love, so barbarously fair” emphasizes the contradictory nature of the Ladies’ charity. Symbolism: Brooks uses symbolism to emphasize the Ladies’ privileged status. References to “Spode, Lowestoft, candelabra” and other affluent possessions showcase their wealth, highlighting the vast economic disparity between them and the poor. Repetition: Brooks also employs repetition for emphasis. The repetition of “children” in the middle lines of stanza six reinforces the poem’s focus on the resilience and enduring spirit of the poor. Gwendolyn Brooks masterfully employs various poetic techniques and figurative language as she effectively conveys the various themes. 
These literary devices enrich the poem’s message and encourage readers to reflect on the profound societal issues presented. Detailed Analysis Lines 1-21 arrive. The Ladies from the Ladies’ Betterment League (…) To be, for wet eyes, random and handy hem. In lines 1-21 of ‘The Lovers of the Poor,’ Gwendolyn Brooks introduces the Ladies from the Ladies’ Betterment League, a group of affluent women who arrive in the poverty-stricken neighborhood. Through vivid imagery and clever word choices, Brooks conveys a powerful message about the Ladies’ condescending attitude and the disparity between their privileged world and the harsh reality of the poor. The poem begins with the word “arrive,” immediately drawing attention to the Ladies’ presence and their intrusion into a different environment. The use of the word “Betterment” in the name of their league implies a sense of superiority, suggesting that they believe they are there to improve the lives of the poor. The “late light slanting in diluted gold bars” depicts their luxurious lives as they walk along the boulevard. The imagery of “proud, seamed faces with mercy and murder hinting” hints at the conflicting emotions within the Ladies, portraying their patronizing behavior towards the poor but also their underlying potential for cruelty. The phrase “pink paint on the innocence of fear” encapsulates the Ladies’ attempt to mask their discomfort and guilt with an appearance of compassion. Brooks uses the color pink to symbolize a superficial and false sense of care, contrasting it with the “innocence of fear” experienced by the poor. The Ladies are described as “deep and debonair,” which hints at their sophistication and elegance. However, this description is juxtaposed with the harsh reality of their actions, portrayed through the metaphor of “Cutting with knives served by their softest care.” Brooks critiques the Ladies’ performative charity, suggesting their efforts may cause more harm than good. The lines “Served by their love, so barbarously fair” further emphasize the contradictory nature of the Ladies’ actions. Their supposed love and compassion are paradoxically juxtaposed with their harmful impact on the lives of the poor. Brooks refers to the Ladies’ upbringing, where their mothers taught them not to be cruel and not to harm innocent creatures like wrens. This highlights the Ladies’ privileged background and upbringing, suggesting that their intentions might be well-meaning, but their understanding of poverty is limited and superficial. This first twenty one lines sets the tone for the entire poem, highlighting the theme of social disparity and the patronizing attitudes of the affluent toward the poor. Through vivid imagery, metaphors, and clever word choices, Gwendolyn Brooks conveys a powerful message about the complexities of charity and the need for genuine empathy and understanding when engaging with social issues. Lines 22-31 Their guild is giving money to the poor. (…) Nonetheless for being voiceless, hits one down. In these lines, Gwendolyn Brooks continues to explore the theme of class disparity and the superficiality of the Ladies’ charity efforts. Through her pointed language and use of adjectives, Brooks exposes the Ladies’ biased and condescending attitudes toward the poor. The lines open with a simple yet significant statement: “Their guild is giving money to the poor.” Brooks refers to the Ladies’ organization as a “guild,” highlighting its exclusive and privileged nature. 
Giving money to the poor suggests an attempt at charity, but as the lines unfolds, it becomes clear that the Ladies’ intentions are not entirely altruistic. The repetition of the phrase “The worthy poor. The very very worthy” underscores the Ladies’ selective approach to charity. They seem to determine the deservingness of the poor based on subjective criteria, such as beauty and appearance. The Ladies’ prejudice is further evident in their preference for those “just not too swarthy” or “dirty nor too dim.” This reveals their bias towards individuals who align with their idealized image of poverty and respectability. Brooks uses a series of negative descriptions, such as “derelict,” “dull,” and “not staunch enough to stab,” to emphasize the Ladies’ aversion to engaging with poverty in its raw and unfiltered form. They seek individuals who won’t challenge or confront them, reflecting their desire to maintain a comfortable distance from the true struggles of the poor. The phrase “God shield them sharply from the beggar-bold!” reveals the Ladies’ fear of encountering individuals who might demand more from them, emotionally or financially. Their concern for the “noxious needy ones whose battle’s bald” suggests a reluctance to confront the harsh realities of poverty and the potential discomfort it may bring. In the last lines, Brooks emphasizes the plight of those who are voiceless and powerless, highlighting the inequality faced by the poor in society. The “voiceless” poor struggle to be heard, and their battles often go unnoticed or ignored. Essentially, these lines expose the Ladies’ superficial and prejudiced approach to charity. Brooks’ use of descriptive language and repetition reinforces the message that their efforts are driven by their own comfort and a desire to avoid engaging with the harsh realities of poverty. The lines serve as powerful critiques of performative charity and highlights the need for genuine empathy and understanding when addressing issues of social inequality. Lines 32-49 But it’s all so bad! and entirely too much for them. (…) Patience of the poor and put-upon. In these lines, the poet continues her scathing critique of the Ladies from the Ladies’ Betterment League, delving deeper into their discomfort and aversion towards the poverty-stricken neighborhood. Through evocative imagery and contrasting language, Brooks conveys the stark contrast between the Ladies’ privileged world and the harsh reality of the slum. The lines open with an exclamation of their overwhelmed state: “But it’s all so bad! and entirely too much for them.” Here, Brooks highlights the Ladies’ inability to cope with poverty’s overwhelming and distressing sights and smells. Their discomfort reinforces their detachment from the experiences of the poor. The poet employs a list of sensory details to create a vivid picture of the impoverished environment. The “stench,” “urine,” “cabbage,” and “dead beans” reflect the unsanitary conditions of the slum. The mention of “dead porridges of assorted dusty grains” further accentuates the hopelessness and deprivation the poor face. The Ladies’ unease with unfamiliar food is evident when they encounter “something called chitterlings,” a type of soul food. Brooks uses this mention to underscore their unfamiliarity with the cuisine of the poor, emphasizing their lack of understanding and connection. 
The repeated use of the word “old” throughout these lines contrasts the decaying state of the slum with the polished and pristine surroundings of places like Lake Forest and Glencoe. The Ladies’ preference for the “homekind Oldness” of affluent areas further accentuates their disdain for the slum’s general oldness. The final lines condemn the Ladies’ detachment and privilege. Brooks juxtaposes the absence of “sturdy,” “majestic,” “quiet drama,” and “rubbed glaze” in the slum with the tasteful elegance they enjoyed in affluent areas. The use of “unkillable infirmity” to describe the social issues faced by the poor further emphasizes their struggle and vulnerability. The Ladies’ ultimate goal, as suggested in these lines, is to return to their privileged world once they are done with their charity efforts. Brooks characterizes their interactions with the poor as dealing with “dullards and distortions,” implying that their attempts at charity are mere token gestures rather than genuine engagement. The message passed in the lines serves as a scathing critique of the Ladies’ discomfort, detachment, and superficiality in their efforts to engage with the poverty-stricken neighborhood. Through vivid imagery and contrasting language, Gwendolyn Brooks highlights the stark contrast between the Ladies’ privileged lives and the harsh realities the poor face. The lines reinforce the poem’s overarching message about the need for genuine empathy and understanding when addressing issues of social inequality. Lines 50-61 They’ve never seen such a make-do-ness as (…) Eyed kitten, hunched-up, haggard, to-be-hurt. In these lines, Gwendolyn Brooks vividly depicts the Ladies’ encounter with the poverty-stricken household. Through powerful imagery and contrasting emotions, Brooks highlights the stark realities faced by the poor and the Ladies’ discomfort and horror at the scene before them. The lines begin with the Ladies’ amazement at the “make-do-ness” of the newspaper rugs in the flat. The use of this term conveys the resourcefulness and resilience of the poor in making the best of their limited means. The Ladies’ surprise indicates their lack of exposure to such hardships. The phrase “Their hostess is gathering up the oozed, the rich” presents a stark contrast between the hostess’s impoverished circumstances and the Ladies’ privileged background. The use of “oozed” and “bespattered” to describe the morning rugs reveals the squalor and lack of material wealth. Brooks uses contrasting actions to show the hostess’s efforts to maintain cleanliness and dignity despite her difficult living conditions. She “readies to spread clean rugs for afternoon,” signifying her determination to maintain a sense of order and cleanliness in her home. The poet then invites readers to witness the Ladies’ reaction to the scene. The phrase “Here is a scene for you” implies a sense of spectacle, suggesting that the Ladies view the household and its inhabitants as a curiosity rather than individuals needing genuine help. The Ladies’ horror is emphasized as they look at the substantial citizeness, a woman representing the impoverished class. Her “trains clank out across her swollen heart” conveys a sense of burden and struggle, highlighting the weight of poverty on her shoulders. The description of “all tumbling children, quilts dragged to the floor” further illustrates the poor’s chaotic and overcrowded living conditions. 
The presence of “potato peelings” and a “soft-eyed kitten” adds to the household’s sense of deprivation and struggle. The use of “hunched-up, haggard, to-be-hurt” evokes empathy for the poor, showing that they endure both physical and emotional suffering. The juxtaposition of this suffering with the Ladies’ privileged lives highlights the stark contrast in their experiences. These lines are a powerful portrayal of the stark realities faced by the poverty-stricken household and the Ladies’ discomfort and horror at the scene. It reinforces the poem’s overarching message about the superficiality of charity efforts and the importance of genuine engagement with impoverished communities. Lines 62-65 Their League is allotting largesse to the Lost. (…) Tipped with their hundred flawless rose-nails seems . . . In lines 62-65, Gwendolyn Brooks delves into the theme of charity and the superficiality of the Ladies’ efforts to assist the poor. Through powerful imagery and metaphorical language, Brooks criticizes the Ladies’ condescending attitude and highlights the insincerity of their charity. The lines open with a description of the Ladies’ League, which is “allotting largesse to the Lost.” The term “largesse” suggests the act of giving generously or as an act of charity. However, the use of “Lost” to describe the recipients of their charity carries a tone of condescension, as if the Ladies view the poor as aimless and directionless individuals. Brooks employs a series of caesuras with the ellipsis (“…”) at the end of the stanza, creating a pause that adds emphasis and invites readers to reflect on the Ladies’ motivations. The poet uses this technique to heighten the impact of her message and draw attention to the contradiction within the Ladies’ actions. The poet uses metaphorical language to critique the Ladies’ approach to charity. Their money is described as “clean” and “pretty,” with the imagery of “delicate rose-fingers” and “hundred flawless rose-nails.” These phrases emphasize the Ladies’ privileged and pampered existence, in contrast to the hardships faced by the poor. The use of “delicate rose-fingers” and “flawless rose-nails” also hints at the fragility and artificiality of their charity efforts. It suggests that their aid is cosmetic and does not address the underlying systemic issues that perpetuate poverty. Brooks leaves the sentence unfinished with “seems . . .,” deliberately leaving the reader hanging and open to interpretation. This technique leaves a sense of ambiguity, allowing readers to question the authenticity of the Ladies’ charity and ponder the deeper implications of their actions. These lines serve as a powerful critique of the Ladies’ charity efforts and their condescending attitude toward the poor. It reinforces the poem’s overarching message about the complexities of charity and the need for genuine empathy and understanding when engaging with social issues. Lines 66-93 They own Spode, Lowestoft, candelabra, (…) Where loathe-love likelier may be invested. In lines 66-93 of ‘The Lovers of the Poor,’ Gwendolyn Brooks continues her scathing critique of the Ladies from the Ladies’ Betterment League and their shallow engagement with poverty. Through contrasting imagery and powerful language, Brooks exposes the stark contrast between the Ladies’ luxurious lifestyle and the deplorable conditions of the slum. The lines open with a list of the Ladies’ possessions and activities, which are emblematic of their wealth and privilege. 
The use of names such as “Spode, Lowestoft, candelabra,” and “Chippendale” reflects their extravagant taste in expensive items and high-end furnishings. The description of their wintering in Palm Beach and crossing the water in June suggests their lavish lifestyle and leisurely activities. The mention of attending the “nice Art Institute” and buying books in the “best bindings” further emphasizes the Ladies’ cultural and intellectual pursuits, indicative of their education and refinement. Brooks presents a stark contrast between their privileged lives and the poverty-stricken surroundings they encounter. The phrase “Oh Squalor!” stands out in sharp contrast to the opulence listed earlier, creating a jarring effect. Brooks shifts the focus to the “sick four-story hulk,” emphasizing the dilapidated and deteriorating state of the slum. The use of “fibre with fissures everywhere” symbolizes the crumbling and broken state of the impoverished neighborhood. Brooks then contrasts the Ladies’ “loathe-love largesse” with the harsh realities faced by the poor. The Ladies’ superficial and patronizing aid, represented by “tin can, blocked fire escape, and chitterling,” pales in comparison to the genuine struggles of the impoverished, which include “the middle passage” and “urine and stale shames.” The repetition of “children children children” further emphasizes the vulnerability and suffering of the young ones living in poverty. The mention of a rat in the shadows creates an unsettling and uncomfortable atmosphere, highlighting the unsanitary conditions endured by the poor. In the closing lines, Brooks describes the Ladies’ realization that it would be better to escape the discomfort and return to their privileged world. The phrase “achieve the outer air that rights and steadies” suggests a desire to escape the suffocating environment of poverty and return to their familiar and comfortable lives. The Ladies’ contemplation of posting the money and choosing another slum exposes their lack of genuine commitment to making a meaningful impact. Brooks implies that their charity efforts are merely token gestures, lacking the depth and sincerity needed to address the systemic issues of poverty. The lines serve to condemn the Ladies’ shallow engagement with poverty and their preference for comfort and convenience. Through contrasting imagery and powerful language, Gwendolyn Brooks highlights the stark contrast between the Ladies’ privileged lives and the harsh realities faced by the poor. The lines reinforces the poem’s overarching message about the need for genuine empathy and understanding when addressing issues of social inequality. Lines 94-99 Keeping their scented bodies in the center (…) Try to avoid inhaling the laden air. In these final lines, Gwendolyn Brooks brings the poem to a powerful conclusion, reinforcing the poem’s overarching message about the insincerity and detachment of the Ladies from the Ladies’ Betterment League in their engagement with poverty. Through vivid imagery and metaphorical language, Brooks exposes the Ladies’ efforts to maintain their privileged status and distance themselves from the harsh realities of the impoverished neighborhood. The lines begin with the Ladies keeping “their scented bodies in the center of the hall.” This description emphasizes their self-importance and desire to remain at the forefront, while also distancing themselves from their poverty-stricken surroundings. 
The repetition of the word “hall” further accentuates their detachment from the environment and the stark contrast between their refined appearance and the squalor around them. The line “They allow their lovely skirts to graze no wall” suggests the Ladies’ reluctance to touch or come into contact with anything in the impoverished setting. This conveys their discomfort and desire to maintain a sense of cleanliness and elegance, highlighting their unwillingness to truly engage with the harsh reality of poverty. The phrase “off at what they manage of a canter” portrays the Ladies’ attempt to quickly distance themselves from the environment, as if they are eager to escape from the discomfort they feel. The use of the word “manage” implies that their efforts to canter away are half-hearted and superficial. Brooks employs metaphorical language to describe the Ladies “resuming all the clues of what they were.” This implies that they are reverting to their usual behaviors and mannerisms, discarding any pretense of empathy or engagement with the poor. The final line, “Try to avoid inhaling the laden air,” captures the essence of the Ladies’ detachment and reluctance to confront the harsh realities of the impoverished neighborhood. The word “laden” suggests a heavy burden, symbolizing the overwhelming and uncomfortable nature of the poverty-stricken surroundings. The Ladies’ attempt to avoid inhaling this air is a metaphorical representation of their desire to avoid confronting the true challenges faced by the poor. These final lines serve as a powerful denouement to the poem’s critique of the Ladies from the Ladies’ Betterment League. Through vivid imagery and metaphorical language, Gwendolyn Brooks exposes the Ladies’ insincerity and detachment in their charity efforts. The lines reinforce the poem’s overarching message about the complexities of charity, the importance of genuine empathy, and the need to confront the harsh realities of poverty to bring about meaningful change. FAQs What is the mood of ‘The Lovers of the Poor?‘ The mood is a mixture of irony, social commentary, and unease. Gwendolyn Brooks exposes the stark contrast between the Ladies’ privileged lives and the harsh realities faced by the poor, leaving readers with a critical and reflective impression. What feelings are triggered by the poem? The poem triggers feelings of indignation, empathy for the poor, and a sense of discomfort at the Ladies’ insincerity and detachment from the harsh realities of poverty. Why is the poem titled ‘The Lovers of the Poor?‘ The poem is titled ‘The Lovers of the Poor’ because it sarcastically refers to the Ladies from the Ladies’ Betterment League as “lovers” of the poor, highlighting their patronizing and condescending attitude towards the impoverished community. What is the tone in ‘The Lovers of the Poor?‘ The tone is critical and satirical, as Gwendolyn Brooks sharply critiques the Ladies from the Ladies’ Betterment League and their superficial charity efforts towards the poor. Similar Poetry Those who enjoyed this poem by Gwendolyn Brooks may also wish to explore these others: ‘At a Potato Digging’ written by Seamus Heaney – consists of four sections that depict men’s relationship with the land. ‘America’ by Allen Ginsberg deals with the turbulent times in America. It was written during and focused on the period after the Second World War. ‘Baudelaire’ by Delmore Schwartz is an emotional depiction of a poet’s desperation caused by poverty and the vicious cycle of hopelessness. 
Poetry+ Review Corner: The Lovers of the Poor
Poet: Gwendolyn Brooks. This poem is a good representative of Gwendolyn Brooks' poetry. It exhibits several characteristics commonly found in her work, such as a keen focus on social issues, a critical examination of societal inequality, and the use of powerful imagery and language to convey her message. Brooks was known for her ability to critique societal norms and expose the struggles of marginalized communities, and this poem exemplifies those themes in a skillful and impactful manner.
Period: 20th Century. This poem is a good representation of 20th-century poetry. It exemplifies the era's focus on social commentary, using vivid imagery to critique societal inequality and superficial charity. The poem's themes and style align with other works of the time that sought to challenge traditional norms, highlight marginalized voices, and address pressing social issues.
Nationality: American. This poem by Gwendolyn Brooks stands as a powerful and significant poem in American literature. Brooks' portrayal of the Ladies' Betterment League's superficial charity is a poignant critique of social inequality and privilege. The poem's sharp imagery, satirical tone, and critical analysis of performative philanthropy set it apart from other American poems. Brooks' ability to expose the harsh realities faced by the poor and challenge conventional attitudes toward poverty elevates the poem's impact, solidifying her as a prominent voice in 20th-century poetry.
Theme: Desire. This poem addresses the theme of desire through the contrasting desires of the Ladies from the Ladies' Betterment League and the impoverished community. The Ladies desire to perform superficial charity, distancing themselves from the harsh realities of poverty. In contrast, the impoverished desire genuine empathy and understanding, highlighting the disconnect between privilege and the true needs of the disadvantaged.
Theme: Disappointment. This poem addresses the theme of disappointment through the disparity between the Ladies' Betterment League's performative charity and the genuine needs of the impoverished community. The Ladies' shallow efforts and lack of understanding lead to disappointment and disillusionment for the poor, highlighting the unmet expectations and the disconnect between privilege and the reality of poverty.
Theme: Dreams. The poem explores the theme of dreams through the contrasting dreams of the Ladies from the Ladies' Betterment League and the impoverished community. The Ladies' dreams revolve around maintaining their privilege and superficial charity, while the impoverished dream of genuine empathy and understanding. The poem highlights the stark contrast between these two sets of dreams and the impact of societal inequality on the aspirations of different social classes.
Theme: Failure. The poem looks into the theme of failure through the Ladies' Betterment League's inadequate and superficial attempts at charity. Their failure lies in their lack of genuine empathy and understanding, leading to a disconnect between their privilege and the true needs of the impoverished community. The poem critiques the failure of performative philanthropy to address the root causes of poverty and societal inequality.
Theme: Identity. This poem addresses the theme of identity through the contrasting identities of the Ladies from the Ladies' Betterment League and the impoverished community. The poem explores how social status and privilege shape one's identity, leading to detachment and a lack of genuine understanding. It highlights the importance of recognizing and understanding the complexities of identity within the context of social inequality.
Emotion: Disgust. This poem triggers the emotion of disgust through its vivid portrayal of the Ladies' Betterment League's patronizing and superficial charity efforts. The poem exposes their detachment and condescension, evoking a sense of repulsion for their lack of genuine understanding and empathy towards the impoverished. The stark contrast between privilege and poverty evokes a feeling of disdain for their performative philanthropy.
Emotion: Empathy. This poem brings forth the emotion of empathy through its poignant portrayal of the impoverished community's struggles. The vivid imagery and stark contrast between the Ladies' privilege and the harsh realities faced by the poor evoke a strong sense of compassion. The poem prompts readers to empathize with the disadvantaged and reflects on the need for genuine understanding and support.
Emotion: Frustration. The poem triggers the emotion of frustration through its scathing critique of the Ladies' Betterment League and their superficial charity efforts. The poem exposes their privileged detachment and lack of genuine engagement, provoking a sense of exasperation at their insensitivity and failure to address the root causes of poverty. It highlights the frustration of witnessing performative philanthropy that falls short of making a meaningful impact.
Emotion: Regret. The poem evokes the emotion of regret through its portrayal of the Ladies' Betterment League's insincere charity. The poem exposes their lack of genuine engagement, prompting regret for their missed chance to make a meaningful impact on the impoverished community. It highlights the consequences of superficiality and prompts introspection on the importance of authentic empathy and action.
Emotion: Remorse. This poem elicits the emotion of remorse through its critique of the Ladies' Betterment League's superficial charity. The poem exposes the consequences of their detachment and lack of empathy, leading readers to feel remorseful for the missed opportunities to genuinely address the plight of the impoverished. It prompts reflection on the impact of privilege and the need for greater compassion and understanding.
Topic: Adversity. The poem delves into the topic of adversity through its depiction of the harsh realities faced by the impoverished community. The poem exposes the adversity of poverty, presenting images of dilapidation and struggle. It contrasts this adversity with the superficial charity of the Ladies' Betterment League, highlighting the disconnect between privilege and the genuine challenges of those in need.
Topic: Appreciation. This poem addresses the topic of appreciation by contrasting the Ladies' superficial charity with the genuine needs of the impoverished community. The poem underscores the importance of appreciating the true struggles of others, advocating for genuine empathy and understanding. It prompts reflection on the significance of appreciating the complexities of poverty and the impact of performative philanthropy.
Topic: Betrayal. This poem addresses the topic of betrayal through its critique of the Ladies from the Ladies' Betterment League. The poem exposes their betrayal of genuine charity, as their efforts are superficial and detached from the harsh realities of poverty. It highlights the consequences of their insincere actions, revealing a sense of betrayal towards the impoverished community's true needs.
Topic: Care. The poem looks into the topic of care through its critique of the Ladies from the Ladies' Betterment League. The poem exposes their lack of genuine care towards the impoverished community, as their efforts are superficial and detached. It emphasizes the importance of authentic care and empathy in addressing the needs of the disadvantaged, highlighting the consequences of performative philanthropy.
Topic: Longing. This poem explores the topic of longing through the contrasting desires of the Ladies from the Ladies' Betterment League and the impoverished community. The Ladies long for appearances and detachment from poverty's harsh realities, while the impoverished long for genuine empathy and understanding. The poem highlights the longing for authentic engagement and compassion in the face of social inequality.
Form: Free Verse. This poem exemplifies a free-verse form characterized by its lack of strict rhyme and meter. Gwendolyn Brooks' use of unstructured stanzas and varying line lengths allows her to convey her critical message with poetic freedom. The absence of traditional poetic constraints gives her the flexibility to use vivid imagery and powerful language to critique societal inequality and superficial charity.
About Hilary Benard
Hilary has an M.A. in Comparative Literature & Critical Theories and B.A. Honors in Comparative History.
Courtesy of his expertise in literature and poetry, he has a depth of experience in a wide range of literary texts and movements: this includes the historical, cultural, and social contexts that produced them.
317
Published Time: 2007-03-01
tRNA's Wobble Decoding of the Genome: 40 Years of Modification
===============
Article · Literature Review · March 2007 · Journal of Molecular Biology 366(1):1-13 · DOI:10.1016/j.jmb.2006.11.046 · Source: PubMed
Authors: Paul Agris (Duke University School of Medicine), Franck A. P. Vendeix (Sirga Advanced Biopharma, Inc.), William D Graham
Abstract
The genetic code is degenerate, in that 20 amino acids are encoded by 61 triplet codes. In 1966, Francis Crick hypothesized that the cell's limited number of tRNAs decoded the genome by recognizing more than one codon. The ambiguity of that recognition resided in the third base-pair, giving rise to the Wobble Hypothesis. Post-transcriptional modifications at tRNA's wobble position 34, especially modifications of uridine 34, enable wobble to occur. The Modified Wobble Hypothesis proposed in 1991 that specific modifications of a tRNA wobble nucleoside shape the anticodon architecture in such a manner that interactions were restricted to the complementary base plus a single wobble pairing for amino acids with twofold degenerate codons. However, chemically different modifications at position 34 would expand the ability of a tRNA to read three or even four of the fourfold degenerate codons. One foundation of Crick's Wobble Hypothesis was that a near-constant geometry of canonical base-pairing be maintained in forming all three base-pairs between the tRNA anticodon and mRNA codon on the ribosome. In accepting an aminoacyl-tRNA, the ribosome requires maintenance of a specific geometry for the anticodon-codon base-pairing. However, it is the post-transcriptional modifications at tRNA wobble position 34 and purine 37, 3'-adjacent to the anticodon, that pre-structure the anticodon domain to ensure the correct codon binding. The modifications create both the architecture and the stability needed for decoding through restraints on anticodon stereochemistry and conformational space, and through selective hydrogen bonding. A physicochemical understanding of modified nucleoside contributions to the tRNA anticodon domain architecture and its decoding of the genome has advanced RNA world evolutionary theory, the principles of RNA chemistry, and the application of this knowledge to the introduction of new amino acids to proteins.
Citations (522) References (130)
... The Elongator complex, along with Trm112/Trm9 and the Ctu1/Ctu2 complexes, plays a critical role in the formation of the 5-methoxycarbonylmethyl (mcm5) and 5-methoxycarbonylmethyl-2-thiouridine (mcm5s2) side chains on uridine 34 (U34) at the tRNA wobble position during vegetative growth and under stress conditions 8,12,13 .
We conducted an analysis of Trm112 protein levels during nitrogen starvation in both wild-type and igo1-deleted cells. ... ... The igo1Δ mutant is defective in U 34 and A 37 ... ... In all organisms, modifications of uridine 34 at the wobble position (U 34 ) of certain tRNAs are necessary to enhance codon-anticodon recognition 37 . These modifications are mediated by the Elongator complex, which introduces an acetyl group at position 5 of U 34 (cm 5 U 34 ), the Trm112/Trm9 methyltransferase complex involved in the formation of mcm 5 U 34 , and the Ctu1-Ctu2 complex, which catalyses the thiolation at carbon 2 of U 34 (mcm 5 s 2 U 34 ) ( Supplementary Fig. 6a). ... The Greatwall-Endosulfine-PP2A/B55 pathway regulates entry into quiescence by enhancing translation of Elongator-tunable transcripts Article Full-text available Dec 2024 Javier Encinar Del Dedo Belen Suarez Rafael López-San Segundo Sergio Moreno Quiescent cells require a continuous supply of proteins to maintain protein homeostasis. In fission yeast, entry into quiescence is triggered by nitrogen stress, leading to the inactivation of TORC1 and the activation of TORC2. In this study, we demonstrate that the Greatwall-Endosulfine-PPA/B55 pathway connects the downregulation of TORC1 with the upregulation of TORC2, resulting in the activation of Elongator-dependent tRNA modifications crucial for sustaining the translation programme during entry into quiescence. This mechanism promotes U34 and A37 tRNA modifications at the anticodon stem loop, enhancing translation efficiency and fidelity of mRNAs enriched for AAA versus AAG lysine codons. Notably, several of these mRNAs encode TORC1 inhibitors, TORC2 activators, tRNA modifiers, and proteins necessary for telomeric and subtelomeric functions. Therefore, we propose a mechanism by which cells respond to nitrogen stress at the level of translation, involving a coordinated interplay between tRNA epitranscriptome and biased codon usage. View Show abstract ... The levels of specific tRNAs and their biochemical properties directly impact the decoding rate of mRNA. Chemical modifications to tRNA are abundant and necessary for tRNA structure and function (Agris et al., 2007). Modifications located on the anticodon affect the decoding of codons as well as the efficiency and speed of protein synthesis (Agris et al., 2007). ... ... Chemical modifications to tRNA are abundant and necessary for tRNA structure and function (Agris et al., 2007). Modifications located on the anticodon affect the decoding of codons as well as the efficiency and speed of protein synthesis (Agris et al., 2007). ... ... The expression levels of an enzyme involved in t 6 A modification are lowered PBF, but it is possible the pre-blood-feeding levels of the enzyme are sufficient for modification. Wobble modifications affect the decoding of mRNA codons and, thus, the accuracy and speed of translation (Agris et al., 2007). The wobble position modifications impacted by bloodfeeding in mosquitoes were mcm 5 U, oQ, Q and manQ. ... Tyrosine transfer RNA levels and modifications during blood-feeding and vitellogenesis in the mosquito, Aedes aegypti Article Full-text available Aug 2024 INSECT MOL BIOL Melissa Kelley Christopher J. Holmes Cassandra Herbert Joshua B Benoit Mosquitoes such as Aedes aegypti must consume a blood meal for the nutrients necessary for egg production. Several transcriptome and proteome changes occur post‐blood meal that likely corresponds with codon usage alterations. 
Transfer RNA (tRNA) is the adapter molecule that reads messenger RNA codons to add the appropriate amino acid during protein synthesis. Chemical modifications to tRNA enhance codon decoding, improving the accuracy and efficiency of protein synthesis. Here, we examined tRNA modifications and transcripts associated with the blood meal and subsequent periods of vitellogenesis in A. aegypti . More specifically, we assessed tRNA transcript abundance and modification levels in the fat body at critical times post blood‐feeding. Based on a combination of alternative codon usage and identification of particular modifications, we discovered that increased transcription of tyrosine tRNAs is likely critical during the synthesis of egg yolk proteins in the fat body following a blood meal. Altogether, changes in both the abundance and modification of tRNA are essential factors in the process of vitellogenin production after blood‐feeding in mosquitoes. View Show abstract ... Chemical modifications to tRNA are abundant and necessary to tRNA structure and function (Agris et al., 2007). Modifications located on the anticodon affect the decoding of codons as well as the efficiency and speed of protein synthesis (Agris et al., 2007). ... ... Chemical modifications to tRNA are abundant and necessary to tRNA structure and function (Agris et al., 2007). Modifications located on the anticodon affect the decoding of codons as well as the efficiency and speed of protein synthesis (Agris et al., 2007). Anticodon modifications are often reflected in codon usage bias in abundant transcripts and stress-specific transcripts (Chan et al., 2018;Endres et al., 2015). ... ... The expression levels of an enzyme involved in t 6 A modification are lowered PBF, but it is possible the pre-blood-feeding levels of the enzyme are sufficient for modification. Wobble modifications affect the decoding of mRNA codons and, thus, the accuracy and speed of translation (Agris et al., 2007). The mosquito's wobble position modifications impacted by blood-feeding were mcm 5 U, oQ, Q, and manQ. ... Tyrosine transfer RNA levels and modifications during blood-feeding and vitellogenesis in the mosquito, Aedes aegypti Preprint Full-text available Nov 2023 Melissa Kelley Christopher J. Holmes Cassandra Herbert Joshua B Benoit Mosquitoes such as Aedes aegypti must consume a blood meal for the nutrients necessary for egg production. Several transcriptome and proteome changes occur post blood meal that likely corresponds with codon usage alterations. Transfer RNA (tRNA) is the adapter molecule that reads messenger RNA (mRNA) codons to add the appropriate amino acid during protein synthesis. Chemical modifications to tRNA enhance codons' decoding, improving the accuracy and efficiency of protein synthesis. Here, we examined tRNA modifications and transcripts associated with the blood meal and subsequent periods of vitellogenesis in A. aegypti. More specifically, we assessed tRNA transcript abundance and modification levels in the fat body at critical times post blood-feeding. Based on a combination of alternative codon usage and identification of particular modifications, we identified that increased transcription of tyrosine tRNAs is likely critical during the synthesis of egg yolk proteins in the fat body following a blood meal. Altogether, changes in both the abundance and modification of tRNA are essential factors in the process of vitellogenin production after blood-feeding in mosquitoes. View Show abstract ... 
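The abstract and the excerpts above all hinge on Crick's wobble rules: the first two codon positions pair strictly with the anticodon, while position 34 of the tRNA tolerates defined non-Watson-Crick partners. As a purely illustrative aid, the Python sketch below enumerates the codons an anticodon can read under the classic 1966 rules, with inosine included; the function name and rule table are my own simplification, not code from the review or any citing study, and modified wobble nucleosides such as mcm5s2U34 restrict or expand these pairings as discussed throughout this page.

```python
# Minimal sketch of Crick's 1966 wobble rules (a simplification for
# illustration only; modified wobble nucleosides alter these pairings).
WATSON_CRICK = {"A": "U", "U": "A", "G": "C", "C": "G"}

# Codon third-position bases readable by a given anticodon base at position 34.
WOBBLE_PARTNERS = {
    "C": ["G"],            # C34 pairs only with G
    "A": ["U"],            # unmodified A34 (rare) pairs with U
    "U": ["A", "G"],       # U34 pairs with A and G
    "G": ["C", "U"],       # G34 pairs with C and U
    "I": ["U", "C", "A"],  # inosine at 34 pairs with U, C and A
}

def codons_read_by(anticodon: str) -> list:
    """Return the mRNA codons (5'->3') readable by an anticodon (5'->3').

    The anticodon's first base is position 34; it pairs with the codon's
    third base, where wobble is permitted.
    """
    wobble, middle, first = anticodon                   # positions 34, 35, 36
    stem = WATSON_CRICK[first] + WATSON_CRICK[middle]   # codon positions 1 and 2
    return [stem + third for third in WOBBLE_PARTNERS[wobble]]

if __name__ == "__main__":
    print(codons_read_by("GAA"))  # tRNA-Phe type anticodon -> ['UUC', 'UUU']
    print(codons_read_by("IGC"))  # inosine-34 tRNA-Ala type -> ['GCU', 'GCC', 'GCA']
```

Run on the two examples, the sketch reproduces the textbook behaviour: a G34 anticodon reads both pyrimidine-ending codons of a twofold degenerate box, while inosine at position 34 covers three of the four codons of a fourfold degenerate box.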
To achieve accurate translation, tRNA molecules feature post-transcriptional chemical modifications, which have been conserved through evolution. These modifications stabilize the tRNA tertiary structure, introduce recognition determinants and anti-determinants towards RNA-interacting macromolecules and fine-tune the decoding process both in terms of efficiency and fidelity 1,2 . Interestingly, recent studies have revealed unsuspected roles of these modifications in the regulation of translation and protein homeostasis during cellular stress . ... ... Indeed, position 34 is the first base in the tRNA anticodon, which is bound less tightly to the ribosome than the two other positions of the anticodon, enabling it to participate to non-canonical base pairs. Since this loose binding enhances the probability of amino acid mis-incorporation into the growing peptide chain, complex chemical base modifications at the wobble base were selected during evolution for accurate tRNA-codon recognition 1,11,12 . For instance, U34 of tRNA Gln UUG , tRNA Lys UUU and tRNA Glu UUC is universally thiolated at C2 position and hypermodified by different chemical groups at C5 position depending on the organism: methylaminomethyl (mnm), carboxymethylaminomethyl (cmnm), aminomethyl (nm) or isopentenylaminomethyl (inm) in bacteria 13 , mnm and carbamoylmethyluridine (ncm) in archaea 14,15 , methoxycarbonylmethyl (mcm) in the eukaryotic cytosol, cmnm in yeast mitochondria, or taurinomethyl (τm) in mammalian mitochondria 16 . ... ... In vitro enzyme assay. 1 tRNA digestion and analysis of modified nucleosides. 20 µM tRNA was digested overnight in 100 µL of 25 mM HEPES pH 7.5, 200 mM NaCl, 0.1 mM ZnSO 4 at 37 °C by nuclease P1 (2 units, Sigma) followed by the addition of alkaline phosphatase for 2 h at 37 °C (2 units, Sigma). ... The thiolation of uridine 34 in tRNA, which controls protein translation, depends on a [4Fe-4S] cluster in the archaeum Methanococcus maripaludis Article Full-text available Apr 2023 Bimaï Ornella Pierre Legrand Jean-Luc Ravanat Beatrice Golinelli-Pimpaneau Thiolation of uridine 34 in the anticodon loop of several tRNAs is conserved in the three domains of life and guarantees fidelity of protein translation. U34-tRNA thiolation is catalyzed by a complex of two proteins in the eukaryotic cytosol (named Ctu1/Ctu2 in humans), but by a single NcsA enzyme in archaea. We report here spectroscopic and biochemical experiments showing that NcsA from Methanococcus maripaludis (MmNcsA) is a dimer that binds a [4Fe-4S] cluster, which is required for catalysis. Moreover, the crystal structure of MmNcsA at 2.8 Å resolution shows that the [4Fe-4S] cluster is coordinated by three conserved cysteines only, in each monomer. Extra electron density on the fourth nonprotein-bonded iron most likely locates the binding site for a hydrogenosulfide ligand, in agreement with the [4Fe-4S] cluster being used to bind and activate the sulfur atom of the sulfur donor. Comparison of the crystal structure of MmNcsA with the AlphaFold model of the human Ctu1/Ctu2 complex shows a very close superposition of the catalytic site residues, including the cysteines that coordinate the [4Fe-4S] cluster in MmNcsA. We thus propose that the same mechanism for U34-tRNA thiolation, mediated by a [4Fe-4S]-dependent enzyme, operates in archaea and eukaryotes. View Show abstract ... 
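A recurring thread in these citations is codon-usage bias read against the availability of U34-modified tRNAs, for example mRNAs enriched in AAA over AAG lysine codons, whose efficient decoding depends on mcm5s2-modified tRNA-Lys(UUU). Purely as an illustration, the sketch below shows the kind of codon tally such analyses start from; the function names and the toy sequence are mine, not the pipeline of any study cited here.

```python
# Illustrative codon-usage tally (toy example; not the analysis pipeline of
# any study cited on this page).
from collections import Counter

def codon_counts(cds: str) -> Counter:
    """Count codons in an in-frame coding sequence, read 5'->3' in RNA alphabet."""
    cds = cds.upper().replace("T", "U")
    usable = len(cds) - len(cds) % 3          # ignore a trailing partial codon
    return Counter(cds[i:i + 3] for i in range(0, usable, 3))

def synonymous_ratio(counts: Counter, codon_a: str, codon_b: str) -> float:
    """Ratio of two synonymous codons, e.g. AAA versus AAG for lysine."""
    return counts[codon_a] / counts[codon_b] if counts[codon_b] else float("inf")

if __name__ == "__main__":
    toy_cds = "AUGAAAAAGGAACAAAGAUUUUAA"  # Met-Lys-Lys-Glu-Gln-Arg-Phe-stop (toy)
    counts = codon_counts(toy_cds)
    print(counts.most_common())
    print("AAA:AAG =", synonymous_ratio(counts, "AAA", "AAG"))
```

Applied to a real ORF set, ratios such as AAA:AAG (or GAA:GAG and CAA:CAG) are what the excerpts above relate to the cellular supply of U34-modified tRNAs.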
These modifications at different positions are needed for maintaining the chemical stability, stabilizing the tertiary structure, decoding ability and the decay of tRNAs . The wobble position 34 and position 37 (3 adjacent to anticodon) are two hotspots that install the largest diversity of chemical modifications [3,4,. The N 6 -threonylcarbamoyladenosine (t 6 A), which denotes the incorporation of an L-threonine via a ureido linkage at the N6 nitrogen of adenosine ( Figure 1A,B) [15,16], is universally found at position 37 of ANN-decoding (N being A, U, C and G) tRNAs from the three domains of life (Table 1) [11,12,14,17,18]. ... ... The wobble position 34 and position 37 (3 adjacent to anticodon) are two hotspots that install the largest diversity of chemical modifications [3,4,. The N 6 -threonylcarbamoyladenosine (t 6 A), which denotes the incorporation of an L-threonine via a ureido linkage at the N6 nitrogen of adenosine ( Figure 1A,B) [15,16], is universally found at position 37 of ANN-decoding (N being A, U, C and G) tRNAs from the three domains of life (Table 1) [11,12,14,17,18]. Structurally, t 6 A extends its planar ring via intramolecular hydrogen bonds and π-π stacking interaction with its 5 -adjacent base U36 and prevents the intra-loop Watson-Crick pairing between U33 and A37 ( Figure 1A) [14,. ... ... The N 6 -threonylcarbamoyladenosine (t 6 A), which denotes the incorporation of an L-threonine via a ureido linkage at the N6 nitrogen of adenosine ( Figure 1A,B) [15,16], is universally found at position 37 of ANN-decoding (N being A, U, C and G) tRNAs from the three domains of life (Table 1) [11,12,14,17,18]. Structurally, t 6 A extends its planar ring via intramolecular hydrogen bonds and π-π stacking interaction with its 5 -adjacent base U36 and prevents the intra-loop Watson-Crick pairing between U33 and A37 ( Figure 1A) [14,. The t 6 A-stabilized conformation of anticodon stem loop (ASL) facilitates the entry of amino-acylated tRNAs into the ribosomal A-site, wherein t 6 A principally promotes the formation of a codon-anticodon duplex via forming extra hydrogen bonds and overcoming . ... Conservation and Diversification of tRNA tA-Modifying Enzymes across the Three Domains of Life Article Full-text available Nov 2022 INT J MOL SCI Chenchen Su Mengqi Jin Wenhua Zhang The universal N⁶-threonylcarbamoyladenosine (t⁶A) modification occurs at position 37 of tRNAs that decipher codons starting with adenosine. Mechanistically, t⁶A stabilizes structural configurations of the anticodon stem loop, promotes anticodon–codon pairing and safeguards the translational fidelity. The biosynthesis of tRNA t⁶A is co-catalyzed by two universally conserved protein families of TsaC/Sua5 (COG0009) and TsaD/Kae1/Qri7 (COG0533). Enzymatically, TsaC/Sua5 protein utilizes the substrates of L-threonine, HCO3⁻/CO2 and ATP to synthesize an intermediate L-threonylcarbamoyladenylate, of which the threonylcarbamoyl-moiety is subsequently transferred onto the A37 of substrate tRNAs by the TsaD–TsaB –TsaE complex in bacteria or by the KEOPS complex in archaea and eukaryotic cytoplasm, whereas Qri7/OSGEPL1 protein functions on its own in mitochondria. Depletion of tRNA t⁶A interferes with protein homeostasis and gravely affects the life of unicellular organisms and the fitness of higher eukaryotes. Pathogenic mutations of YRDC, OSGEPL1 and KEOPS are implicated in a number of human mitochondrial and neurological diseases, including autosomal recessive Galloway–Mowat syndrome. 
The molecular mechanisms underscoring both the biosynthesis and cellular roles of tRNA t⁶A are presently not well elucidated. This review summarizes current mechanistic understandings of the catalysis, regulation and disease implications of tRNA t⁶A-biosynthetic machineries of three kingdoms of life, with a special focus on delineating the structure–function relationship from perspectives of conservation and diversity. View Show abstract ... modifications occur in tRNA, 37,38 with the ASL containing the largest variety and density of modifications across all domains of life. 8,38 A significant proportion of modifications in the ASL are unique to tRNA, 39,40 with the presence of specific modifications varying across tRNA sequences. 41 Based on experimental data, such modifications have been proposed to affect tRNA structure, stability, and mRNA decoding. ... ... An interesting case study to probe the impact of chemical alterations to nucleotides on tRNA structure and function are modifications at position 34 of tRNA (denoted the wobble position), which forms the third base pair in the anticodon-codon minihelix (B34:B3) and contains the most chemically complex RNA modifications of the anticodon (Fig. 1B). 38,39,61 During ribosomal decoding, tRNA is subjected to a kinetic proofreading mechanism upon attempted binding to the A-site (Fig. 1C), which involves ribosomal monitoring of tRNA. 62,63 Although the first two base pairs (B36:B1 and B35:B2) are strictly monitored by 16S rRNA residues A1492, A1493, and G530 (E. coli numbering), B34 of tRNA is relatively solvent exposed in the ribosomal A-site. ... Chemical composition, sequence context, and base-pairing potential of posttranscriptional modifications at the wobble position of the tRNA anticodon loop Article Full-text available Mar 2025 Mark J. Lea Cynthia D.E. Fonderson Preethi Seelam Prabhakar Stacey Wetmore It is well accepted that RNA is frequently and diversely modified, with the anticodon stem loop of transfer RNA (tRNA) containing the largest number and types of chemical substitutions. Nevertheless, the roles many modifications play in cells remain unclear. The present study consolidates and expands our current knowledge of tRNA modifications at position 34 (wobble position) using a range of bioinformatics and computational techniques. Sequence analysis of 474 tRNAs clarifies the position 34 modifications identified to date at each parent nucleotide across all domains of life. Subsequent analysis of 1291 cryo-EM or X-ray crystal structures of ribosomal complexes led to the curation of a dataset of 468 high-resolution structures of position 34 base-pair interactions with messenger RNA (mRNA). Despite highlighting that structural information is scarce for several canonical base-pairing combinations and nucleotide modifications, the structural data hint that modifications can have differential impact on the base pairing at position 34. Due to limited experimental structural data for position 34 modifications, density functional theory calculations were used to characterize 120 pairs involving canonical and/or modified nucleobases, revealing that some chemical substituents do not impact base-pairing properties of parent nucleotides regardless of modification size, while others slightly alter inherent base pairing or afford completely new base-pairing properties to fine-tune tRNA–mRNA interactions. 
Overall, consolidation of previous and newly-generated data suggests that position 34 modifications likely regulate translation in several ways and underscores the importance of incorporating computational analyses in the future analysis pipeline as modifications are identified. View Show abstract ... Sulfur modifications on tRNA occur in many cellular processes (2)(3)(4). They stabilize the tertiary structure and introduce recognition determinants for the ribosome, resulting in a more accurate decoding process and minimizing frame-shifting (5,6). Sulfur modification of nucleosides is also common but has been observed mostly in tRNAs. ... ... In this reaction, MnmC catalyzes the cleavage of the carboxymethyl group of cmnm to generate 5-aminomethyl (nm 5 ) (27). Alternatively, MnmEG is also suggested to install nm 5 ... 2-Thiouridine formation in Escherichia coli: a critical review Article Full-text available Dec 2024 J BACTERIOL Silke Leimkühler Modifications of transfer RNA (tRNA) have been shown to play critical roles in the biogenesis, metabolism, structural stability, and function of RNA molecules, and the specific modifications of nucleobases with sulfur atoms in tRNA are present in prokaryotes and eukaryotes. The s² group of s²U34 stabilizes anticodon structure, confers ribosome-binding ability to tRNA, and improves reading frame maintenance. In particular, specific enzymes catalyze the biosynthesis of sulfur-containing nucleosides of s²U34, such as the L-cysteine desulfurase IscS and the tRNA thiouridylase MnmA in Escherichia coli. Until recently, the mechanism of sulfur transfer in E. coli was considered to involve persulfide chemistry; however, a newly proposed mechanism suggests the involvement of a [4Fe–4S] cluster bound to MnmA. This review provides a critical appraisal of recent evidence for [4Fe–4S]-dependent or [4Fe–4S]-independent tRNA thiolation in 2-thiouridine formation. View Show abstract ... This conundrum led to the wobble hypothesis, introduced by Francis Crick in 1966 , proposing that only the first two bases of the codon pair precisely with corresponding bases in the anticodon, while the third position allows for flexibility or "wobble". Accordingly, 30-40% of all codon recognition in a given organism is achieved through tRNA wobble recognition . The modified wobble hypothesis of 1991 expanded on the original hypothesis by including the role of certain base modifications occurring in or near the tRNA anticodon loop ( Figure 1B). ... ... (www.preprints.org) | NOT PEER-REVIEWED | Posted: 27 September 2024 doi:10.20944/preprints202409.2124.v1 18 ... SARS-CoV-2 Displays a Suboptimal Codon Usage Bias for Efficient Translation in Human Cells Diverted by Hijacking the tRNA Epitranscriptome Preprint Full-text available Sep 2024 Patrick Eldin Alexandre David Christophe Hirtz Laurence Briant Codon bias analysis of SARS-CoV-2 reveals suboptimal adaptation for translation in human cells it infects. The detailed examination of the codons preferentially used by SARS-CoV-2 shows a strong preference for LysAAA, GlnCAA, GluGAA, and Arg AGA infrequently used in human genes. In the absence of an adapted tRNA pool, efficient decoding of these codons requires a 5-methoxycarbonylmethyl-2-thiouridine (mcm5s2) modification at the U34 wobble position of the corresponding tRNAs (tLysUUU; tGlnUUG; tGluUUC; tArgUCU). 
The optimal translation of SARS-CoV-2 open reading frames (ORFs) may therefore require several adjustments to the host's translation machinery, enabling the highly biased viral genome to achieve a more favorable "Ready-to-Translate" state in human cells. Experimental approaches based on LC-MS/MS quantification of tRNA modifications and on alteration of enzymatic tRNA modification pathways provide strong evidence to support the hypothesis that SARS-CoV-2 induces U34 tRNA modifications and relies on these modifications for its lifecycle. The conclusions emphasize the need for future studies on the evolution of SARS-CoV-2 codon bias and its ability to alter the host tRNA pool through the manipulation of RNA modifications. View Show abstract ... Although we know that mature tRNAs carry numerous nucleotide modifications that are introduced post-transcriptionally 40,41 , it mostly remains unclear at which processing stage each respective modification is introduced 42 . Here we study this intricate sequence of events in vivo and in vitro, showing that Q and galQ, a hyper-modification derived from Q, are added to certain isodecoders of tRNA Tyr at the precursor stage, to molecules that still contain the intron, thus occurring before splicing. ... Queuosine is incorporated into precursor tRNA before splicing Article Full-text available Jul 2025 Wei Guo Igor Kaczmarczyk Kevin Kopietz Francesca Tuorto Each newly transcribed tRNA molecule must undergo processing and receive modifications to become functional. Queuosine (Q) is a tRNA modification present at position 34 of four tRNAs with “GUN” anticodons. Among these, the precursor of tRNATyr carries an intronic sequence within the anticodon loop that is removed by an essential non-canonical splicing event. The functional and temporal coupling between tRNA-splicing and Q-incorporation remains elusive. Here, we demonstrate in vitro and in vivo that intron-containing precursors of tRNATyr are modified with Q or with the Q-derivative galactosyl-queuosine (galQ) before being spliced. We show that this order of events is conserved in mouse, human, flies and worms. Using single particle cryo-EM, we confirm that pre-tRNATyr is a bona fide substrate of the QTRT1/2 complex, which catalyzes the incorporation of Q into the tRNA. Our results elucidate the hierarchical interplay that coordinates Q-incorporation and splicing in eukaryotic tRNAs, providing a relevant but unappreciated aspect of the cellular tRNA maturation process. View Show abstract ... Eight of these genes correspond to the special initiation tRNA (iMet CAT ) and mature into the same isodecoder. In general, these genes encode only Following Crick's wobble hypothesis [35,72,73], an inosine at position 34 may pair with A, U and C; moreover G:U and U:G pairs may be utilized in codon:anticodon interactions. In tRNAs, these these types non-canonical recognition are further supported by extensive chemical modifications , see also for empirical evidence in Dictyostelium. ... Regulation of tRNA expression during the social cycle of the amoeba Dictyostelium discoideum Preprint Full-text available Jun 2025 Dulce I. Valdivia Peter F. Stadler Transfer RNAs are the decoders of the protein coding genetic information, as they transfer amino acids into nascent proteins during messenger RNA translation. This pivotal role makes tRNAs a source of translation regulation that can affect protein synthesis. Still, we are beginning to understand the upstream mechanisms regulating tRNA pools themselves. 
In Dictyostelium discoideum , starvation of a sufficient number of individuals, triggers the development of a coordinated sporulation response denominated the social cycle. By using publicly available epigenomic and genomic data, we studied two factors contributing to the regulation of tRNA pools throughout this social cycle. First, the tRNA gene repertoire shows that the compact genome of D. discoideum escapes translational selection as even with a relatively high number of tRNA genes, anticodon and codon frequencies greatly mismatch. This disparity is explained by the overrepresentation of anticodons that can be modified in the wobble position. During the social cycle, the vast majority of tRNA genes lie on nucleosome free regions, indicating that most genes are always contributing to the tRNA pools. However, there is a marked variation in expression levels of the proteins involved in tRNA maturation. This modulation is ultimately mirrored by fine-tuned differential composition of tRNA pools at isodecoder, isoacceptor and isotype levels. Particularly, there is an overall down-regulation in the vegetative to streaming transition. Key elements bypass this down-regulation pattern and, taken together, this evidence suggests compensatory mechanisms in tRNA regulation that might rescue translation for the following developmental stages, thus allowing D. discoideum to evolve this remarkable strategy under the pressure of an amino acid scarce environment. View Show abstract ... Transfer RNA hairpins could have evolved into tRNAs by a duplication [22,25] (Fig. 1), in which the anticodon of the 3'-terminal end of the stem is transferred to an opposite anticodon loop, probably optimizing COOH-NH2 binding of the growing peptide and codon-anticodon recognition (including wobbling at the third codon position ) (Fig. 4c). The amino acid could have been covalently transferred from the 5'-end (codon-associated) to the 3'-end of the acceptor stem where it is localized in current tRNAs (Fig. 4c), opening the possibility of incorporating new amino acids (i.e., those associated with anticodon sequences) to complete the genetic code [71,74]. ... Earth, a planetary PCR machine to create life, or the brief history of a tRNA Article Full-text available May 2025 Discov Life Juan Jimenez About 4 billion years ago, the Earth probably fulfilled the environmental conditions necessary to favour the transition from primitive chemistry to life. Based on a theoretical hairpin duplication origin of tRNAs and its putative peptide-coding capability before ribosomes existed, I postulate here that, in this hypothetical environment, Earth's daily temperature cycles could have provided a unique planetary ‘PCR machine’ to create self-replicating RNA hairpins that simultaneously templated amino acid polymerization in a primordial ‘PCR well’ of prebiotic molecules. This early RNA hairpin-peptide interaction could have established a reciprocal nucleopeptide replicator that paved the way for catalytic translation and replication machineries towards the origin of LUCA. View Show abstract ... For example, some cognates exclusively form standard Watson:Crick base pairs whereas others form nonstandard "wobble" base-pairs. Wobble pairing tRNAs can initiate peptidyl transfer, but in at least some cases do so with lower probability than Watson:Crick pairing tRNAs, and wobblepairing cognate tRNAs can be rejected from the A-site 18 . 
Similarly, anticodons of some noncognate tRNAs do not base pair with the codon at all, whereas others base-pair in one or more of the three nucleotide positions. ... Near-cognate tRNAs dominate codon decoding times in simulated ribosomes Preprint Full-text available Feb 2025 Fabio Hedayioglu Emma J Mead Sathishkumar Kurusamy Tobias von der Haar The codon sequence of messenger RNAs affects ribosome dynamics, translational control, and transcript stability. Here we describe an advanced computational modelling tool and its application to studying the effect of different tRNA species on the codon decoding process. We show that simulated codon decoding times are sensitive to the abundance of near-cognate tRNA species as well as cognate species, an aspect of the decoding system that is not fully considered in other computational modelling studies. We demonstrate that codon decoding times predicted by models that accurately define near-cognate tRNAs and that are parameterised with high-quality tRNA abundance datasets are highly similar to ribosome dwell times determined using experimental ribosome footprinting data, thereby confirming both the importance of near-cognate tRNAs for the codon decoding process and the general accuracy of our modelling tools. View Show abstract ... Transfer RNA hairpins could have evolved into tRNAs by a duplication [22,25] (Fig. 1), in which the anticodon of the 3'-terminal end of the stem is transferred to an opposite anticodon loop, probably optimizing COOH-NH2 binding of the growing peptide and codon-anticodon recognition (including wobbling at the third codon position ) (Fig. 4c). The amino acid could have been covalently transferred from the 5' end (codonassociated) to the 3' end of the acceptor stem where localize in present tRNAs (Fig. 4c), opening the possibility of incorporating new amino acids (i.e., those associated with anticodon sequences) to complete the genetic code [74 , 71]. ... Earth, a planetary PCR machine to create life, or the brief history of a tRNA Preprint Full-text available Jan 2025 Juan Jimenez About 4 billion years ago, the Earth probably fulfilled the environmental conditions necessary to favour the transition from primitive chemistry to life. Based on a theoretical hairpin duplication origin of tRNAs and their putative peptide-coding capability before ribosomes existed, here I postulate that, at this hypothetical environment, Earth's daily temperature cycles could have provided a unique planetary thermocycler to create self-replicating RNA hairpins that simultaneously templated amino acids polymerization in a primordial PCR well of prebiotic molecules. This early RNA hairpin-peptide interaction could have established a reciprocal nucleopeptide replicator that paved the way for catalytic translation and replication machineries towards the origin of LUCA. View Show abstract ... The possible interaction between EhDUF2419 and EhDNMT2 (Dnmt2) (EHI_103830) in E. histolytica revealed by the present interactome analysis could have significant implications for translation regulation. DNMT2, known for its role in 5-cytosine methylation (m5C) of tRNA, particularly near the wobble base, is crucial for the precise control of the translation efficiency . This modification, which has been shown to enhance the translation efficiency in other organisms , might also affect the processing and function of tRNAs in E. histolytica. ... 
Exploring the Interactome of the Queuine Salvage Protein DUF2419 in Entamoeba histolytica Article Full-text available Nov 2024 Jun Ye Meirav Trebicz-Geffen Serge Ankri Entamoeba histolytica causes amebiasis, a significant global health issue, with millions affected annually, especially in developing countries. EhDUF2419, an important protein involved in E. histolytica’s queuine salvage pathway and its interaction network, remains unclear. To explore this, we transfected E. histolytica trophozoites with a plasmid encoding Myc-tagged EhDUF2419 and achieved successful overexpression. Through immunoprecipitation with the Myc antibody followed by mass spectrometry, we identified 335 proteins interacting with Myc-tagged EhDUF2419, including over 100 ribosomal proteins, along with translation initiation and elongation factors, and aminoacyl-tRNA synthetases. Ribosome purification revealed the presence of EhDUF2419 in ribosomal protein-enriched fractions. Treatment with queuosine (Q) significantly reduced the EhDUF2419 protein levels and decreased the Q-modified tRNA in Myc-tagged EhDUF2419 overexpressing trophozoites. This effect, which was Q-dependent, was not observed in strains carrying an empty vector control or overexpressing a truncated form of EhDUF2419 lacking catalytic activity. The reduction in the EhDUF2419 protein levels was regulated by proteasome-mediated degradation, as evidenced by the reduced degradation in the presence of MG132, a proteasome inhibitor. Our study uncovers the novel interaction of EhDUF2419 with ribosomal proteins and its regulation by the proteasome machinery, providing new insights into its role in E. histolytica and potential therapeutic strategies. View Show abstract ... The human genome contains genes encoding separate fractions of tRNA species that function either in the cytoplasm or mitochondria (mt-tRNAs) . The complex secondary structure of tRNAs consists of 4 major regions: the amino-acid acceptor stem arm, D arm, T arm, and anticodon [86,87]. The anticodon is a triplicate of nucleotide bases which dictate one of the 20 amino acids to be deposited on the 3' end of the acceptor stem based on the degenerate coding system. ... Function of noncoding RNA in regulating cancer cell plasticity Article Full-text available Nov 2024 Peter Hyunwuk Her Magnus Lam Sarah Zeng Housheng Hansen He Recent advances have brought non-coding RNAs (ncRNAs) into the spotlight, revealing their critical regulatory roles in cancer cell plasticity. ncRNAs, such as microRNAs (miRNAs), transfer RNAs (tRNAs), long non-coding RNAs (lncRNAs) and circular RNAs (circRNAs), are now recognized as key players in cellular processes such as chromatin remodeling, mRNA stability, and translation. This review delves into the diverse functions of ncRNAs in stem cells and cancer stem cells (CSCs) biology, emphasizing their impact on maintaining and modulating cellular states. We explore the mechanisms by which ncRNAs influence stem cell self-renewal and differentiation, including their roles in establishing pluripotency and directing differentiation. In the context of cancer, ncRNAs are pivotal in driving processes like epithelial-mesenchymal transition (EMT), which underlies metastasis and therapy resistance. By regulating gene expression and epigenetic landscapes, ncRNAs sustain the dynamic nature of CSCs, facilitating tumor growth and heterogeneity. The review also highlights the potential clinical applications of ncRNAs as biomarkers and therapeutic targets. 
Advances in ncRNA detection and manipulation have opened new avenues for developing diagnostic tools and innovative treatments. Liquid biopsies, which utilize ncRNAs from biological fluids, provide a minimally invasive approach to monitor tumor dynamics and progression. Uncovering the intricate networks regulated by ncRNAs makes it evident that these molecules play central roles in understanding cancer cell plasticity. Insights into their functions offer promising strategies for targeted cancer therapies, aiming to disrupt the adaptability of cancer cells and improve treatment outcomes. View Show abstract ... This conundrum led to the wobble hypothesis, introduced by Francis Crick in 1966 , proposing that only the first two bases of the codon pair precisely with corresponding bases in the anticodon, while the third position allows for flexibility or "wobble". Accordingly, 30-40% of all codon recognition in a given organism is achieved through tRNA wobble recognition . The modified wobble hypothesis of 1991 expanded on the original hypothesis by including the role of certain base modifications occurring in or near the tRNA anticodon loop ( Figure 1B). ... SARS-CoV-2 Displays a Suboptimal Codon Usage Bias for Efficient Translation in Human Cells Diverted by Hijacking the tRNA Epitranscriptome Article Full-text available Oct 2024 INT J MOL SCI Laurence Briant Patrick Eldin Alexandre David Christophe Hirtz Codon bias analysis of SARS-CoV-2 reveals suboptimal adaptation for translation in human cells it infects. The detailed examination of the codons preferentially used by SARS-CoV-2 shows a strong preference for LysAAA, GlnCAA, GluGAA, and ArgAGA, which are infrequently used in human genes. In the absence of an adapted tRNA pool, efficient decoding of these codons requires a 5-methoxycarbonylmethyl-2-thiouridine (mcm⁵s²) modification at the U34 wobble position of the corresponding tRNAs (tLysUUU; tGlnUUG; tGluUUC; tArgUCU). The optimal translation of SARS-CoV-2 open reading frames (ORFs) may therefore require several adjustments to the host’s translation machinery, enabling the highly biased viral genome to achieve a more favorable “Ready-to-Translate” state in human cells. Experimental approaches based on LC-MS/MS quantification of tRNA modifications and on alteration of enzymatic tRNA modification pathways provide strong evidence to support the hypothesis that SARS-CoV-2 induces U34 tRNA modifications and relies on these modifications for its lifecycle. The conclusions emphasize the need for future studies on the evolution of SARS-CoV-2 codon bias and its ability to alter the host tRNA pool through the manipulation of RNA modifications. View Show abstract ... The culture was diluted 1:100 into 10 mL of fresh LB medium with ampicillin, grown at 30˚C and 200 rpm to an optical density at 600 nm (OD 600 ) of 0.6, induced with 1mM L-arabinose (30 °C, 200 rpm), and then cells were harvested after 2 h by centrifugation (4,000 rcf, 4 °C, 10 min). Cells were washed three times with 10 mL of ice-cold dH 2 O, re-centrifuged in between (4,000 rcf, 4 °C, 10 min), and finally, the cell pellet was resuspended in ice-cold dH 2 O. An aliquot of 50 µl of cell suspension was used for electroporation. ... Systematic analysis of tRNA transcription unit deletions in E. 
coli reveals insights into tRNA gene essentiality and cellular adaptation Article Full-text available Oct 2024 Sanja Tiefenbacher Valérie Pezo Philippe Marlière Sven Panke Transfer ribonucleic acids (tRNAs) are essential for protein synthesis, decoding mRNA sequences into amino acids. In E. coli K-12 MG1655, 86 tRNA genes are organized in 43 transcription units (TUs) and the essentiality of individual tRNA TUs in bacterial physiology remains unclear. To address this, we systematically generated 43 E. coli tRNA deletion strains in which each tRNA TU was replaced by a kanamycin resistance gene. We found that 33 TUs are not essential for survival, while 10 are essential and require the corresponding TU to be provided on a plasmid. The analysis revealed E. coli’s tolerance to alterations in tRNA gene copy number and the loss of non-essential tRNAs, as most strains exhibited minimal to no growth differences under various conditions compared to the parental strain. However, deletions of metZWV, alaWX and valVW led to significant growth defects under specific conditions. RNA-seq analysis of ∆alaWX and ∆valVW revealed upregulation of genes involved in translation and pilus assembly. Our results provide valuable insights into tRNA dynamics and the cellular response to tRNA TU deletions, paving the way for deeper understanding of tRNA pool complexity.
... The possible interaction between EhDUF2419 and EhDNMT2 (Dnmt2) (EHI_103830) in E. histolytica revealed by the present interactome analysis could have significant implications for translation regulation. DNMT2, known for its role in 5-cytosine methylation (m5C) of tRNA, particularly near the wobble base, is crucial for the precise control of translation efficiency. This modification, which has been shown to enhance translation efficiency in other organisms, might also affect the processing and function of tRNAs in E. histolytica. ...
Exploring the Interactome of the Queuine Salvage Protein DUF2419 in Entamoeba histolytica Preprint Full-text available Sep 2024 Jun Ye Meirav Trebicz-Geffen Serge Ankri Entamoeba histolytica causes amebiasis, a significant global health issue with millions affected annually, especially in developing countries. EhDUF2419, an important protein involved in E. histolytica's queuine salvage pathway, shares homology with DNA glycosylase. However, its interaction network remains unclear. To explore this, we transfected E. histolytica trophozoites with a plasmid encoding Myc-tagged EhDUF2419 and achieved successful overexpression. Through immunoprecipitation with the Myc antibody followed by mass spectrometry, we identified 335 proteins interacting with Myc-tagged EhDUF2419, including over 100 ribosomal proteins, along with translation initiation and elongation factors, and aminoacyl-tRNA synthetases. Ribosome purification revealed the presence of EhDUF2419 in ribosomal protein-enriched fractions. Treatment with queuosine (Q) significantly reduced EhDUF2419 protein levels and decreased Q-modified tRNA in Myc-tagged EhDUF2419 overexpressing trophozoites. This effect, which is Q-dependent, was not observed in strains carrying an empty vector control or overexpressing a truncated form of EhDUF2419 lacking catalytic activity. The reduction in EhDUF2419 protein levels is regulated by proteasome-mediated degradation, as evidenced by reduced degradation in the presence of MG132, a proteasome inhibitor.
Our study uncovers the novel interaction of EhDUF2419 with ribosomal proteins and its regulation by the proteasome machinery, providing new insights into its role in E. histolytica and potential therapeutic strategies. View Show abstract ... Known as ancient macromolecules, tRNAs play multiple cellular roles, with translation being their most emblematic function . The degenerate canonical genetic code contains 61 codons encoding 20 essential AAs . Excitingly, previous studies have revealed the 21st and 22nd AAs, Pyrrolysine (Pyl) and selenocysteine (Sec), decoded by reassigned codons UAG and UGA, respectively . While tRNA Pyl is rare, found only in a few archaea and bacteria , tRNA Sec is present in both prokaryotes and eukaryotes . ... Novel Perspectives on Chloroplast tRNA Genomic and Structural Variations Imply the Evolution of Papilionoideae (Fabaceae) Article Full-text available Aug 2024 Shiyun Han Sijia Zhang Hui Peng Xian-zhao Kan Papilionoideae is the most species-rich subfamily of the third largest angiosperm family Fabaceae. One constituent large group, the inverted-repeat-lacking clade (IRLC), is well-known for the broad loss of one IR copy. Accumulating observations of massive plastomic disparities have made IRLC a well-suited model for exploring plastome evolution. However, there is still a large amount left to explore. The present study focused on the plastid tRNA (pttRNA) evolution within Papilionoideae, employing the currently densest sampling strategies for both the IRLC (156) and non-IRLC (109) lineages. Strikingly, our results revealed abundant inter-lineage variabilities in both tRNA sequences and structures, including a 3 nt difference in the average size of trnS-UGA, the consensus sequence disparities across 29 tRNAs, the distinct 3 nt indels in trnA-UGC, and an impressive 248 nt intron loss of IRLC trnI-GAU (potential markers). Additionally, there was unequal stability of the atypical secondary structures in trnS-GGA and trnS-UGA, as well as significantly diverse compositions of substitution events in all compared tRNAs (p < 0.05). Ultimately, these findings not only demonstrate the significant differences and unique markers of IRLC pttRNAs compared to other non-IRLC Papilionoideae, but also draw an important conclusion that the large losses of one IR potentially promote highly diverse evolutionary patterns of IRLC, which could partly compensate for the potential IR-lacking impacts. View Show abstract ... While this study did not directly demonstrate a mechanism of direct protein translation control through tRNA modification, the previously reported functions of tRNA modifications such as methylation, including the prevention of frameshifts, stabilization of tRNA, enhancement of codon-anticodon binding, and fluctuating base pair formation, are all crucial in tRNA-mediated protein translation. The mcm5U modification in tRNA discovered in this study may similarly enhance protein translation during dynamic changes in the developmental process through one or more of the aforementioned functions. ... RNA-modifying enzyme Alkbh8 is involved in mouse embryonic development Article Full-text available Aug 2024 Manami Nakai Hiroaki Hase Yutong Zhao Kazutake Tsujikawa RNAs undergo more than 300 modifications after transcription. Aberrations in RNA modifications can lead to diseases; their involvement in fetal development has been suggested. This study explored the RNA modifications related to fetal development in mice. 
We quantified changes in RNA modifications present in mouse embryos at each stage: Metaphase II (MII) oocyte; pronucleus; 2-cell; morula; blastocyst; embryonic days (E)10.5, 13.5, 16.5, and 19.5; and newborn (post-natal day [P]0) using ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS). Our results confirm that many RNAs undergo dynamic modifications. In particular, 5-methoxycarbonylmethyluridine (mcm5U) modification was distinctive and increased during the fetal period. In Alkbh8-knockout (KO) mice, the tRNA protein translation efficiency was reduced. Proteome analysis revealed that the factors downregulated in Alkbh8-KO mice were associated with red blood cell and protoporphyrin metabolism. Our results suggest that ALKBH8 facilitates changes in tRNA balance in conjunction with mcm5U, which are essential for normal red blood cell differentiation and embryogenesis in mice. View Show abstract ... Of particular interest is the significantly higher occurrence of s 4 U8 modifications in G. stearothermophilus compared to non-thermophilic bacteria. Previous studies have highlighted the role of s 4 U modifications in stabilizing tRNA structures and influencing translation efficiency [17,. In particular, the 4-thiolation at U8 is discussed to reinforce a reverse Hoogsteen interaction between this position and A14, increasing the tRNA melting temperature [55,57]. ... Temperature-Dependent tRNA Modifications in Bacillales Article Full-text available Aug 2024 INT J MOL SCI Anne Hoffmann Christian Lorenz Jörg Fallmann Peter F. Stadler Transfer RNA (tRNA) modifications are essential for the temperature adaptation of thermophilic and psychrophilic organisms as they control the rigidity and flexibility of transcripts. To further understand how specific tRNA modifications are adjusted to maintain functionality in response to temperature fluctuations, we investigated whether tRNA modifications represent an adaptation of bacteria to different growth temperatures (minimal, optimal, and maximal), focusing on closely related psychrophilic (P. halocryophilus and E. sibiricum), mesophilic (B. subtilis), and thermophilic (G. stearothermophilus) Bacillales. Utilizing an RNA sequencing approach combined with chemical pre-treatment of tRNA samples, we systematically profiled dihydrouridine (D), 4-thiouridine (s⁴U), 7-methyl-guanosine (m⁷G), and pseudouridine (Ψ) modifications at single-nucleotide resolution. Despite their close relationship, each bacterium exhibited a unique tRNA modification profile. Our findings revealed increased tRNA modifications in the thermophilic bacterium at its optimal growth temperature, particularly showing elevated levels of s⁴U8 and Ψ55 modifications compared to non-thermophilic bacteria, indicating a temperature-dependent regulation that may contribute to thermotolerance. Furthermore, we observed higher levels of D modifications in psychrophilic and mesophilic bacteria, indicating an adaptive strategy for cold environments by enhancing local flexibility in tRNAs. Our method demonstrated high effectiveness in identifying tRNA modifications compared to an established tool, highlighting its potential for precise tRNA profiling studies. View Show abstract ... Chemical modifications within tRNA are the most numerous and diverse . They stabilize its tertiary structure, introduce recognition determinants and anti-determinants towards RNAinteracting macromolecules and fine-tune the decoding process at the level of both efficiency and fidelity . ... 
[4Fe-4S]-dependent enzymes in non-redox tRNA thiolation Article Full-text available Aug 2024 BBA-MOL CELL RES Sylvain Gervason Sambuddha Sen Marc Fontecave Beatrice Golinelli-Pimpaneau Post-transcriptional modification of nucleosides in transfer RNAs (tRNAs) is an important process for accurate and efficient translation of the genetic information during protein synthesis in all domains of life. In particular, specific enzymes catalyze the biosynthesis of sulfur-containing nucleosides, such as the derivatives of 2-thiouridine (s2U), 4-thiouridine (s4U), 2-thiocytidine (s2C), and 2-methylthioadenosine (ms2A), within tRNAs. Whereas the mechanism that has prevailed for decades involved persulfide chemistry, more and more tRNA thiolation enzymes have now been shown to contain a [4Fe-4S] cluster. This review summarizes the information over the last ten years concerning the biochemical, spectroscopic and structural characterization of [4Fe-4S]-dependent non-redox tRNA thiolation enzymes. View Show abstract ... The reactions are classified into methylation, pseudouridylation, sulfuration, and other modifications . Depending on their positions and types, chemical modifications in tRNAs play various regulatory roles, influencing the interaction between cognate codons and anticodons, tertiary structure folding, and thermal stability 11, . tRNA chemical modifications and their abundance are dynamically regulated in response to the cellular metabolic state and environmental stress . ... Perturbation of METTL1-mediated tRNA N- methylguanosine modification induces senescence and aging Article Full-text available Jul 2024 Yudong Fu Fan Jiang Xiao Zhang Tao Wang Cellular senescence is characterized by a decrease in protein synthesis, although the underlying processes are mostly unclear. Chemical modifications to transfer RNAs (tRNAs) frequently influence tRNA activity, which is crucial for translation. We describe how tRNA N7-methylguanosine (m7G46) methylation, catalyzed by METTL1-WDR4, regulates translation and influences senescence phenotypes. Mettl1/Wdr4 and m7G gradually diminish with senescence and aging. A decrease in METTL1 causes a reduction in tRNAs, especially those with the m7G modification, via the rapid tRNA degradation (RTD) pathway. The decreases cause ribosomes to stall at certain codons, impeding the translation of mRNA that is essential in pathways such as Wnt signaling and ribosome biogenesis. Furthermore, chronic ribosome stalling stimulates the ribotoxic and integrative stress responses, which induce senescence-associated secretory phenotype. Moreover, restoring eEF1A protein mitigates senescence phenotypes caused by METTL1 deficiency by reducing RTD. Our findings demonstrate that tRNA m7G modification is essential for preventing premature senescence and aging by enabling efficient mRNA translation. View Show abstract ... The decades-long development of RNA innovations, from the discovery of the new functions of RNA (serving as a catalyst and regulator of many biochemical reactions) to its conventional role as a genetic-information carrier, linker in protein molecule synthesis, and structural scaffold of subcellular organelles, has contributed to the progress of nucleic acid research. RNA plays an important role in protein synthesis, and RNA-based drug discovery has attracted great interest, as it contains only four types of nucleotides compared with the 20 different amino acid residues present in proteins. 
To achieve diversity in RNA structure and function, nature uses a variety of chemical groups for its modification. ...
Sequencing, Physiological Regulation, and Representative Disease Research Progress of RNA m⁶A Modification Article Full-text available Mar 2024 Xiaoqian Chen Yuanyuan Li Youfang Gan Rui Wang To date, more than 150 chemical modifications have been disclosed in different RNA species, which are employed to diversify the structure and function of RNA in living organisms. The N⁶-methyladenosine (m⁶A) modification, which is found at the adenosine N⁶ site of RNA, has been demonstrated to be the most abundant modification in mRNA in cells. Moreover, the m⁶A modification in mRNAs of mammalian and other eukaryotic cells is highly conserved and mandatorily encoded. Increasing evidence indicates that the m⁶A modification plays a pivotal role in gene-expression regulation and cell-fate decisions. Here, we summarize the most recent m⁶A-sequencing technology, as well as the molecular mechanism underlying its occurrence, development, and potential use as a target for the treatment of human diseases. Furthermore, our review highlights other newly discovered chemical modifications of RNA that are associated with human disease, as well as their underlying molecular mechanisms. Thus, significant advancements have been made in qualitative/quantitative m⁶A detection and high-throughput sequencing, and research linking this RNA modification to disease. Efforts toward simplified and more accessible chemical/biological technologies that contribute to precision medicine are ongoing, to benefit society and patients alike.
... Furthermore, no enzyme for the conversion of cZ to trans-zeatin (tZ) has been isolated, and in most plant species forms of tZ and isopentenyladenine (iP) are the most abundant forms of CK (Gajdošová et al. 2011). The prenylation of tRNA at position A37 is thought to stabilize codon-anticodon interactions to ensure fidelity of translation (Urbonavičius et al. 2001; Miyawaki et al. 2006; Agris et al. 2007). In plants, further modification of the prenylated tRNA by methylthiolation leads to the production of the methylthiolated CK, 2-methylthio-N6-(cis-hydroxyisopentenyl)adenosine (ms²i⁶A), upon tRNA degradation (Dabravolski 2020; Gibb et al. 2020; Fig. ...
Cytokinin biosynthesis in Hexapoda and Insecta: a bioinformatic analysis Article Full-text available Dec 2023 ARTHROPOD-PLANT INTE Nate Mooi Scott William Roy Edward F Connor Cytokinins (CKs) are widespread in a variety of organisms from bacteria to humans, and are particularly abundant in insects and hexapods. However, how organisms other than bacteria and plants obtain CKs has not been thoroughly studied. We examined the transcriptomes of 670 species of Hexapoda (predominantly Insecta) to determine if transcripts that encode proteins homologous to any of the known enzymes involved in CK biosynthesis and metabolism are widespread in these groups (occur in > 80% of species). We found that transcripts encoding proteins homologous to the enzymes tRNA-dimethylallyltransferase (EC: 2.5.1.75) and tRNA-2-methylthio-N6-dimethylallyladenosine synthase (EC: 2.8.4.3) are widespread in insects and hexapods. These enzymes could allow insects and hexapods to synthesize iP-based CKs and methylthiolated iP-based CKs via a tRNA-degradation pathway whereby tRNA is first prenylated and possibly methylthiolated prior to releasing CKs or methylthiolated CKs upon degradation.
We also found widespread occurrence in insects and hexapods of transcripts encoding proteins that are homologous to five enzymes in the adenine salvage pathway: 5’- nucleotidase (EC: 3.1.3.5), adenosine kinase (EC:2.7.1.20), purine-nucleoside phosphorylase (EC: 2.4.2.1), purine nucleosidase (EC: 3.2.2.1), and adenine phosphoribosyltransferase (EC: 2.4.2.7). These enzymes could allow insects and hexapods to convert CK nucleotides to nucleosides and free base CKs. We found few transcripts encoding proteins homologous to enzymes that would convert CKs to storage forms such as their O-glucosides and no transcripts encoding proteins homologous to enzymes that would degrade CKs such as CK oxidases. We suggest that insects and hexapods have the enzymatic pathways necessary to synthesize and metabolize CKs, in contrast to the presumption that CKs are merely obtained via consumption and sequestration from plants or via microbial symbiosis. View Show abstract ... Among over 170 types of chemical markers, RNA 5methyluridine (m 5 U) is one of the most prevalent and plays a significant role in RNA stability, transcription, and translation. For instance, m 5 U contributes positively to the stability of RNA structures, enhancing their function by modifying base stacking and shaping secondary structures (Agris et al., 2007). Moreover, research studies have demonstrated that m 5 U modification may be associated with virus replication, antiviral immunity, and the development of certain diseases (Väre et al., 2017). ... m5U-GEPred: prediction of RNA 5-methyluridine sites based on sequence-derived and graph embedding features Article Full-text available Oct 2023 Zhongxing Xu Xuan Wang Jia Meng Bowen Song 5-Methyluridine (m⁵U) is one of the most common post-transcriptional RNA modifications, which is involved in a variety of important biological processes and disease development. The precise identification of the m⁵U sites allows for a better understanding of the biological processes of RNA and contributes to the discovery of new RNA functional and therapeutic targets. Here, we present m5U-GEPred, a prediction framework, to combine sequence characteristics and graph embedding-based information for m⁵U identification. The graph embedding approach was introduced to extract the global information of training data that complemented the local information represented by conventional sequence features, thereby enhancing the prediction performance of m⁵U identification. m5U-GEPred outperformed the state-of-the-art m⁵U predictors built on two independent species, with an average AUROC of 0.984 and 0.985 tested on human and yeast transcriptomes, respectively. To further validate the performance of our newly proposed framework, the experimentally validated m⁵U sites identified from Oxford Nanopore Technology (ONT) were collected as independent testing data, and in this project, m5U-GEPred achieved reasonable prediction performance with ACC of 91.84%. We hope that m5U-GEPred should make a useful computational alternative for m⁵U identification. View Show abstract ... UUG (but not UUA) decoding strictly requires the τm 5 U34 modification because it enables the tRNA to form a non-wobble Watson-Crick like U-G base pair . cmo 5 U34 also facilitates tRNA interactions with a wide array of codons, as highlighted by the ability of E. coli and S. 
typhimurium tRNA species possessing cmo⁵U34 to recognize all four codons in their four-fold degenerate codon boxes (tRNA Ala, tRNA Ser, tRNA Thr, tRNA Pro, tRNA Val, with tRNA Leu having six-codon degeneracy), while tRNAs from other species lacking the modification do not [35,36]. In vitro studies in E. coli demonstrate that the cmo⁵U34 modification of tRNA Ala 1B (CGU) permits the efficient recognition of both the cognate Ala codon (GCA) and the non-cognate Ala codon (GCG), with the U-G pairing treated as an almost-correct base-pair rather than as a mismatch [28,37,38]. ...
Anticodon stem-loop tRNA modifications influence codon decoding and frame maintenance during translation Article Jun 2023 SEMIN CELL DEV BIOL Tyler Smith Rachel N. Giles Kristin Koutmou RNAs are central to protein synthesis, with ribosomal RNA, transfer RNAs and messenger RNAs comprising the core components of the translation machinery. In addition to the four canonical bases (uracil, cytosine, adenine, and guanine), these RNAs contain an array of enzymatically incorporated chemical modifications. Transfer RNAs (tRNAs) are responsible for ferrying amino acids to the ribosome, and are among the most abundant and highly modified RNAs in the cell across all domains of life. On average, tRNA molecules contain 13 post-transcriptionally modified nucleosides that stabilize their structure and enhance function. There is an extensive chemical diversity of tRNA modifications, with over 90 distinct varieties of modifications reported within tRNA sequences. Some modifications are crucial for tRNAs to adopt their L-shaped tertiary structure, while others promote tRNA interactions with components of the protein synthesis machinery. In particular, modifications in the anticodon stem-loop (ASL), located near the site of tRNA:mRNA interaction, can play key roles in ensuring protein homeostasis and accurate translation. There is an abundance of evidence indicating the importance of ASL modifications for cellular health, and in vitro biochemical and biophysical studies suggest that individual ASL modifications can differentially influence discrete steps in the translation pathway. This review examines the molecular-level consequences of tRNA ASL modifications in mRNA codon recognition and reading frame maintenance to ensure the rapid and accurate translation of proteins.
... The modified wobble hypothesis was proposed in 1991, suggesting that specific base modifications select particular codons. Since then, increasing evidence has shown that nucleoside modifications at tRNA positions 34 and 37 are crucial for the accurate and effective translation of the genetic code, and that modification at position 34 can limit or extend the decoding ability of tRNA. A low modification state at position 34 impairs codon-anticodon pairing and affects the fidelity of translation. ...
tRNA Modifications and Modifying Enzymes in Disease, the Potential Therapeutic Targets Article Full-text available Feb 2023 INT J BIOL SCI Weifang Cui Deze Zhao Junjie Jiang Chaojun Duan tRNA is one of the most conserved and abundant RNA species, which plays a key role during protein translation. tRNA molecules are post-transcriptionally modified by tRNA modifying enzymes. Since high-throughput sequencing technology has developed rapidly, tRNA modification types have been discovered in many research fields. In tRNA, numerous types of tRNA modifications and modifying enzymes have been implicated in biological functions and human diseases.
In our review, we talk about the relevant biological functions of tRNA modifications, including tRNA stability, protein translation, cell cycle, oxidative stress, and immunity. We also explore how tRNA modifications contribute to the progression of human diseases. Based on previous studies, we discuss some emerging techniques for assessing tRNA modifications to aid in discovering different types of tRNA modifications. View Show abstract ... For example, S. cerevisiae contains 16 tRNAs genes that decode alanine codons, 11 with an AGC anticodon and 5 with a UGC anticodon. These two alanine isoacceptors decode the full set of four alanine codons (GCN) because although Watson-Crick base pairing occurs between the first and second positions of the codon and bases 35 and 36 of the anticodon (see Figure 1A), G:U wobble base pairing and other extended pairing achieved through modification of base 34 can occur between the third codon position and base 34 of the anticodon (35,36). In eukaryotic tRNAs, the latter involves modification of adenosine to inosine at position 34 to allow decoding of codons ending in A, U or C (37). ... Probing the genetic code and impacts of mistranslation using tRNA Ala anticodon variants Preprint Full-text available Nov 2022 Ecaterina Cozma Megha Rao Madison Dusick Matthew Berg Transfer RNAs (tRNAs) maintain translational fidelity through accurate charging by their cognate aminoacyl-tRNA synthetase and codon:anticodon base pairing with the mRNA at the ribosome. Mistranslation occurs when an amino acid not specified by the genetic code is incorporated into a protein. Since alanyl-tRNA synthetase uniquely recognizes a G3:U70 base pair in tRNA Ala and the anticodon plays no role in charging, tRNA Ala variants with anticodon mutations have the potential to mis-incorporate alanine. Our goal was to characterize the phenotypic consequences of expressing all 60 tRNA Ala anticodon variants in Saccharomyces cerevisiae . Overall, 36 tRNA Ala anticodon variants decreased growth in single- or multi-copy. Using mass spectrometry, we observed mistranslation for 45 of 55 variants when on single-copy plasmids. There was a weak but statistically significant correlation between mistranslation and reduced growth. Variants with G/C rich anticodons tend to have larger growth deficits and mistranslate at greater frequencies than A/U rich variants. In most instances, synonymous anticodon variants impact growth differently. We suggest that this is explained by decoding specificity, which results in different tRNA Ala variants mistranslating unique sets of peptides and proteins. Since potential mistranslating tRNAs exist in humans, our analysis identifies features of tRNA Ala variants that influence their potential contribution to disease. View Show abstract ... Modifications of tRNA in the anticodon region, primarily in position 34, are widespread because they are responsible for the mechanism of wobble base pairing, which allows one tRNA to recognize multiple mRNA codons, thus reducing the overall number of tRNAs required for translation . For example, 5-methylaminomethyl-2-thiouridine (mnm5s2U) at this position is required for the proper decoding of the lysine-encoding codons AAA and AAG by tRNA Lys3 , while another uridine modification, 5-methoxycarbonylmethyl-2thiouridine (mcm5s2U), is essential for the proper decoding of NNR codons . ... Epitranscriptome: Review of Top 25 Most-Studied RNA Modifications Article Full-text available Nov 2022 INT J MOL SCI Viktoriia A. 
Arzumanian Georgii Dolgalev Ilya Kurbatov Ekaterina Poverennaya The alphabet of building blocks for RNA molecules is much larger than the standard four nucleotides. The diversity is achieved by the post-transcriptional biochemical modification of these nucleotides into distinct chemical entities that are structurally and functionally different from their unmodified counterparts. Some of these modifications are constituent and critical for RNA functions, while others serve as dynamic markings to regulate the fate of specific RNA molecules. Together, these modifications form the epitranscriptome, an essential layer of cellular biochemistry. As of the time of writing this review, more than 300 distinct RNA modifications from all three life domains have been identified. However, only a few of the most well-established modifications are included in most reviews on this topic. To provide a complete overview of the current state of research on the epitranscriptome, we analyzed the extent of the available information for all known RNA modifications. We selected 25 modifications to describe in detail. Summarizing our findings, we describe the current status of research on most RNA modifications and identify further developments in this field. View Show abstract tRNA hydroxylation is an epitranscriptomic modulator of metabolic states affecting Pseudomonas aeruginosa pathogenicity Article Full-text available Jul 2025 NUCLEIC ACIDS RES Yannick Frommeyer Nicolas Oswaldo Gomez Matthias Preusse Susanne Christiane Häussler Post-transcriptional modification of transfer RNAs (tRNAs) represents an essential layer of translational regulation critical for bacterial adaptation to environmental changes. Increasing evidence links the tRNA epitranscriptome to pivotal roles in the regulation of gene expression and various cellular processes, including stress responses and establishment of virulence. In this study, we used mass spectrometry and nanopore sequencing to quantify and identify the sites of TrhPO-dependent tRNA hydroxylation in total and purified Pseudomonas aeruginosa tRNAs. Furthermore, transcriptome, ribosome profiling, and proteome data were integrated to demonstrate the post-transcriptional consequences of the absence of xo5U34 modifications at the wobble position of selected tRNAs. We suggest that the impaired ability to infect host cells and attenuated virulence in Galleria mellonella are driven by changes in metabolic fluxes. In the absence of TrhPO-mediated tRNA modification, chorismate, the precursor for the biosynthesis of xo5U modifications, is funneled into alternative pathways, including the production of aromatic amino acids and phenazines. Our findings that metabolic rerouting, rather than changes in proteome profiles, attenuates P. aeruginosa virulence highlight the multifunctional roles of tRNA-modifying enzymes and suggest an underexplored role for these enzymes in monitoring and modulating metabolic fitness. These insights open new avenues for combatting the pathogenicity of this challenging opportunistic pathogen. View Show abstract Dissecting the function of the DNMT2-homolog (DNMA) in Dictyostelium discoideum Article Jul 2025 Zaza Gelashvili Denis A. Larochelle Jacqueline M Dresch Robert A. Drewell Methylation of cytosine residues in nucleic acids plays a critical role in a range of biological activities in eukaryotes, including regulation of transcription, organization of chromatin structure, modulation of translation, cellular differentiation and development. 
While much of the scientific focus in this field was centered on DNA methylation over the past few decades, it has also become clear that methylation of RNA is a crucial modification. A group of homologous DNMT2 methyltransferase enzymes in different model organisms are now known to catalyze the transfer of a methyl group to the cytosine at position 38 in tRNAAspGUC molecules. The important biological role for tRNA methyltransferases is highlighted by the fact that the genomes of some model eukaryotes, including Dictyostelium discoideum, Drosophila melanogaster, Entamoeba histolytica and Schizosaccharomyces pombe, possess a DNMT2 homolog but do not encode any other enzymes of the DNMT family. In this study, we explore the function of the DNMT2 homolog (DNMA) in Dictyostelium discoideum by examining the phenotypic effects resulting from deletion of this enzyme. Pleiotropic impacts on cell growth, morphology and motility, nuclear organization, and disruption to the developmental program are detected. We also analyze global gene expression in the dnmA knock-out cells and develop a homology-based structural model of DNMA, allowing us to perform docking simulations of the molecular interaction with tRNAAspGUC. Our findings demonstrate that DNMA, as a tRNA methyltransferase, is critical to normal cellular activity and development in Dictyostelium. View Show abstract An Oxyprenylated Phenylpropanoid Pharmacologic Scaffold for SelU Inhibition Article Full-text available Jul 2025 CHEMBIOCHEM Stephen J. Dansereau Alexander Shekhtman Salvatore Genovese Jia Sheng MnmH, better known as tRNA 2‐selenouridine synthase (SelU), is a member of the Mnm family enzymes that work in concert to modify uridine at the wobble position. Instrumental in maintaining base pair fidelity and exclusive to bacteria, SelU is a promising drug target. Although no molecular structure has been experimentally calculated, insights into this enzyme's mechanism of catalysis have been empirically gleaned and proven useful for ligand‐based rational drug design. In this study, a small group of natural and semisynthetic oxyprenylated phenylpropanoids were selected based on their compositional resemblance to the purported SelU ligands. Specifically, these compounds contained one or more geranyl groups branching from aromatic frameworks, all of which are believed to heighten affinity to SelU. Meticulous screening of each compound against an N‐terminal SelU construct via fluorescence quenching of W83 further reveals details on the enzyme‐substrate binding mode. Conformational flexibility of residues around W83 is suggested by the slow bimolecular quenching constants calculated for each compound. This is consistent with the single binding site and the blend of interaction‐types calculated at the active site. Lastly, this general oxyprenylated framework, along with a cinnamic acid moiety, is established as a pharmacologic scaffold that can be further optimized into potential antibiotics. View Show abstract Targeting tRNA methyltransferases: from molecular mechanisms to drug discovery Article May 2025 Sci China Life Sci Yanrong Gao Xinyu Liu Jiazhi Li Transfer RNA methyltransferases (tRNA MTases) catalyze site-specific methylation on tRNAs, a critical process that ensures the stability and functionality of tRNA molecules, thereby maintaining cellular homeostasis of tRNA methylation. 
Recent studies have illuminated the structural diversity, specific substrate recognition, and conserved catalytic mechanisms of tRNA MTases, revealing how their dysregulation contributes to various diseases, including cancers and neurodevelopmental disorders. This review integrates these advances, exploring the challenges of achieving precise substrate recognition and modification in the context of complex and specific tRNA modification landscape, while emphasizing the crucial role of tRNA MTases in disease pathogenesis. The identification of small-molecule inhibitors targeting specific tRNA MTases marks a promising step toward the development of novel therapies. With continued research into the broader biological functions and regulatory mechanisms of tRNA MTases, these insights hold great potential to drive clinical advancements and therapeutic innovations. View Show abstract RNA in the Central Dogma: tRNAs Chapter May 2025 Phei Er Saw Erwei Song The elucidation of the genetic code and the identification of tRNAs as its translators were transformative events in the field of life sciences. Initially, the spotlight on tRNAs diminished as their basic role in protein synthesis was established, prompting researchers to explore other aspects of RNA biology. Recently, there has been a revival in tRNA research, demonstrating their involvement in various biological pathways beyond simple translation. This renewed interest underscores the adaptability of tRNAs to modify protein synthesis in accordance with environmental shifts. The emerging roles of tRNAs are closely connected to the existence of various tRNA forms, including isoacceptors and isodecoders, as well as numerous base modifications, a wide range of protein interactions, and tRNA fragmentation. These factors introduce a remarkable complexity to the tRNA milieu, providing cells with an elaborate selection of tRNA species. These species are crucial in preserving cellular equilibrium and modulating cellular activities in response to environmental variability, thereby ensuring the health and stability of the overall ecosystem. This enhanced understanding of tRNA functions not only deepens our insights into cellular biology but also unveils new possibilities for investigating how these primordial molecules influence intricate cellular mechanisms and adapt to environmental challenges. The continued exploration of tRNA biology is expected to further elucidate their significant impacts on health and disease. View Show abstract Expression, optimization and biological activity analysis of recombinant type XII collagen in Pichia pastoris Article Apr 2025 Qiao Gao Zhuo Zhang Rongzhan Fu Daidi Fan View Enzymatic Reactions of S-Adenosyl-L-Methionine: Synthesis and Applications Article Full-text available Mar 2025 BIOCHEMISTRY-MOSCOW+ A. Yu. Rudenko Sofia Mariasina Ratislav M. Ozhiganov Vladimir Polshakov S-adenosyl-L-methionine (SAM, AdoMet) is a ubiquitous biomolecule present in all living organisms, playing a central role in a wide array of biochemical reactions and intracellular regulatory pathways. It is the second most common participant in enzymatic reactions in living systems, following adenosine triphosphate (ATP). This review provides a comprehensive analysis of enzymatic reactions involving SAM, whether as a product, a reactant (cosubstrate), or as a non-consumable enzyme cofactor. The discussion encompasses various methods for SAM synthesis, including biotechnological, chemical, and enzymatic approaches. 
Particular emphasis is placed on the biochemical reactions where SAM functions as a cosubstrate, notably in transalkylation reactions, where it acts as a key methyl group donor. Beyond methylation, SAM also serves as a precursor for the synthesis of other molecular building blocks, which are explored in a dedicated section. The review also addresses the role of SAM as a non-consumable cofactor in enzymatic processes, highlighting its function as a prosthetic group for certain protein enzymes and its ability to form complexes with ribozymes. In addition, bioorthogonal systems involving SAM analogues are discussed. These systems employ engineered enzyme–cofactor pairs designed to enable highly selective interactions between target SAM analogues and specific enzymes, facilitating precise reactions even in the presence of other SAM-dependent enzymes. The concluding section explores practical applications of SAM analogues, including their use as selective inhibitors in clinical medicine and as components of reporter systems. View Show abstract ADAT3 variants disrupt the activity of the ADAT tRNA deaminase complex and impair neuronal migration Article Mar 2025 BRAIN Jordi Del-Pozo-Rodríguez Peggy Tilly Romain Lecat Juliette D Godin Abstract The ADAT2/ADAT3 (ADAT) complex catalyzes the adenosine to inosine modification at the wobble position of eukaryotic tRNAs. Mutations in ADAT3, the catalytically inactive subunit of the ADAT2/ADAT3 complex, have been identified in patients presenting with severe neurodevelopmental disorders. Yet, the physiological function of ADAT2/ADAT3 complex during brain development remains totally unknown. Here, we investigated the role of the ADAT2/ADAT3 complex in cortical development. First, we reported 21 neurodevelopmental disorders patients carrying biallelic variants in ADAT3. Second, we used structural, biochemical, and enzymatic assays to deeply characterize the impact of those variants on ADAT2/ADAT3 structure, biochemical properties, enzymatic activity and tRNAs editing and abundance. Finally, in vivo complementation assays were performed to correlate functional deficits with neuronal migration defects in the developing mouse cortex. Our results showed that maintaining a proper level of ADAT2/ADAT3 catalytic activity is essential for radial migration of projection neurons in the developing mouse cortex. We demonstrated that the identified ADAT3 variants significantly impaired the abundance and, for some, the activity of the complex, leading to a substantial decrease in I34 levels with direct consequence on their steady-state. We correlated the severity of the migration phenotype with the degree of the loss of function caused by the variants. Altogether, our results highlight the critical role of ADAT2/ADAT3 during cortical development and provide cellular and molecular insights into the pathogenic mechanisms underlying ADAT3-related neurodevelopmental disorders. Keywords: ADAT3, tRNA, deamination, neuronal migration, neurodevelopmental disorders View Show abstract Natural human tRNAAla anticodon variants mistranslate the genetic code Article Mar 2025 RNA Rasangi Tennakoon Teija M I Bily Farah Hasan Patrick O'Donoghue Transfer RNAs (tRNAs) play an essential role in protein synthesis by linking the nucleic acid sequences of gene products to the amino acid sequences of proteins. 
There are > 400 functional tRNA genes in humans, and adding to this diversity, there are many single nucleotide polymorphisms in tRNAs across our population, including anticodon variants that mistranslate the genetic code. In human genomes, we identified three rare alanine tRNA (tRNAAla) variants with non-synonymous anticodon mutations: tRNAAlaCGC G35T, tRNAAlaUGC G35A, and tRNAAlaAGC C36T. Since alanyl-tRNA synthetase (AlaRS) does not recognize the anticodon, we hypothesized that these tRNAAla variants will mis-incorporate Ala at glutamate (Glu), valine (Val), and threonine (Thr) codons, respectively. We found that expressing the naturally occurring tRNAAla variants in human cells led to defects in protein production without a substantial impact on cell growth. Using mass spectrometry, we confirmed and estimated Ala mis-incorporation levels at Glu (0.7%), Val (5%) and Thr (0.1%) codons. Although Ala mis-incorporation was higher at Val codons, cells mis-incorporating Ala at Glu codons had the most severe defect in protein production. The data demonstrate the ability of natural human tRNAAla variants to generate mistranslation leading to defects in protein production that depend on the nature of the amino acid replacement. View Show abstract Advancements in chemically inducible modified tRNA sequencing techniques: Elucidating novel insights into tRNA epitranscriptomics Article Feb 2025 BIOORGAN MED CHEM Xuan Li Linqian Mu Jiaying Liu Rui Wang View tRNA thiolation optimizes appressorium-mediated infection by enhancing codon-specific translation in Magnaporthe oryzae Article Jan 2025 NUCLEIC ACIDS RES Xinrong Zhang Rongrong He Yinan Li Xiao-Lin Chen Thiolation, a post-transcriptional modification catalyzed by Uba4-Urm1-Ncs2/Ncs6 pathway in three specific transfer RNAs (tRNAs), is conserved from yeast to humans and plays an important role in enhancing codon–anticodon interaction and translation efficiency. Yet, except for affecting effector secretion, its roles in plant pathogenic fungi are not fully understood. Here, we used Magnaporthe oryzae as a model system to illustrate the vital role of s2U34 modification on the appressorium-mediated virulence. The absence of tRNA thiolation leads to diminished translation elongation at AAA/CAA/GAA but not their synonymous codons, resulting in reduced levels of key proteins enriched in these codons, which are critical for appressorium development and function. Importantly, overexpressing these proteins can partially mitigate the defects resulting from NCS2 deletion. Our study sheds light on the s2U34 modification’s role in plant pathogenic fungi, enhancing our understanding of translational control beyond effector secretion. View Show abstract tRNA modifications: greasing the wheels of translation and beyond Article Full-text available Dec 2024 Minjie Zhang Zhipeng Lu Transfer RNA (tRNA) is one of the most abundant RNA types in cells, acting as an adaptor to bridge the genetic information in mRNAs with the amino acid sequence in proteins. Both tRNAs and small fragments processed from them play many nonconventional roles in addition to translation. tRNA molecules undergo various types of chemical modifications to ensure the accuracy and efficiency of translation and regulate their diverse functions beyond translation. In this review, we discuss the biogenesis and molecular mechanisms of tRNA modifications, including major tRNA modifications, writer enzymes, and their dynamic regulation. 
We also summarize the state-of-the-art technologies for measuring tRNA modification, with a particular focus on 2’-O-methylation (Nm), and discuss their limitations and remaining challenges. Finally, we highlight recent discoveries linking dysregulation of tRNA modifications with genetic diseases. View Show abstract AtTRM11 as a tRNA 2-methylguanosine methyltransferase modulates flowering and bacterial resistance via translational regulation Article Dec 2024 PLANT SCI Zhengyi Lv Lun Guan Ruixuan Yao Peng Chen View Nanopore sequencing of intact aminoacylated tRNAs Preprint Full-text available Nov 2024 Laura K White Aleksandar Radakovic Marcin Sajek Jay Hesselberth Transfer RNAs (tRNA) are decorated during biogenesis with a variety of modifications that modulate their stability, aminoacylation, and decoding potential during translation. The complex landscape of tRNA modification presents significant analysis challenges and to date no single approach enables the simultaneous measurement of important but disparate chemical properties of individual, mature tRNA molecules. We developed a new, integrated approach to analyze the sequence, modification, and aminoacylation state of tRNA molecules in a high throughput nanopore sequencing experiment, leveraging a chemical ligation that embeds the charged amino acid in an adapted tRNA molecule. During nanopore sequencing, the embedded amino acid generates unique distortions in ionic current and translocation speed, enabling application of machine learning approaches to classify charging status and amino acid identity. Specific applications of the method indicate it will be broadly useful for examining relationships and dependencies between tRNA sequence, modification, and aminoacylation. View Show abstract Strategies to overcome the challenges of low or no expression of heterologous proteins in Escherichia coli Article Jul 2024 BIOTECHNOL ADV Ruizhao Jiang Shuting Yuan Yilong Zhou Huimin Yu View tRNA Modifications and Dysregulation: Implications for Brain Diseases Article Full-text available Jun 2024 BSRCCS Xinxin Lv Ruorui Zhang Shanshan Li Xin Jin Transfer RNAs (tRNAs) are well-known for their essential function in protein synthesis. Recent research has revealed a diverse range of chemical modifications that tRNAs undergo, which are crucial for various cellular processes. These modifications are necessary for the precise and efficient translation of proteins and also play important roles in gene expression regulation and cellular stress response. This review examines the role of tRNA modifications and dysregulation in the pathophysiology of various brain diseases, including epilepsy, stroke, neurodevelopmental disorders, brain tumors, Alzheimer’s disease, and Parkinson’s disease. Through a comprehensive analysis of existing research, our study aims to elucidate the intricate relationship between tRNA dysregulation and brain diseases. This underscores the critical need for ongoing exploration in this field and provides valuable insights that could facilitate the development of innovative diagnostic tools and therapeutic approaches, ultimately improving outcomes for individuals grappling with complex neurological conditions. View Show abstract Naphthyl cyanoketene N,S-acetals in glycoside synthesis: a new preparative route to a new class of N-naphthylcyanoacrylamide thioglycosides and their conversions to naphthyl-pyrazole hybrids Article Jan 2024 NUCLEOS NUCLEOT NUCL Galal Elgemeie Nahed M. 
Fathy Sayed Shaarawi
The Greatwall-Endosulfine-PP2A/B55 pathway controls entry into quiescence by promoting translation of Elongator-tuneable transcripts Preprint Full-text available Nov 2023 Javier Encinar del Dedo Rafael López-San Segundo Alicia Vázquez-Bolado Sergio Moreno Quiescent cells require a continuous supply of proteins to maintain protein homeostasis. In fission yeast, entry into quiescence is triggered by nitrogen stress, leading to the inactivation of TORC1 and the activation of TORC2. Here, we report that the Greatwall-Endosulfine-PP2A/B55 pathway connects the downregulation of TORC1 with the upregulation of TORC2, resulting in the activation of Elongator-dependent tRNA modifications essential for sustaining the translation programme during entry into quiescence. This process promotes U34 and A37 tRNA modifications at the anticodon stem loop, enhancing translation efficiency and fidelity of mRNAs enriched for AAA versus AAG lysine codons. Notably, some of these mRNAs encode inhibitors of TORC1, activators of TORC2, tRNA modifiers, and proteins necessary for telomeric and subtelomeric functions. Therefore, we propose a novel mechanism by which cells respond to nitrogen stress at the level of translation, involving a coordinated interplay between the tRNA epitranscriptome and biased codon usage.
S-Adenosylmethionine: more than just a methyl donor Article Full-text available Mar 2023 NAT PROD REP Yu-Hsuan Lee Daan Ren Byungsun Jeon Hung-wen Liu Covering: from 2000 up to the very early part of 2023. S-Adenosyl-L-methionine (SAM) is a naturally occurring trialkyl sulfonium molecule that is typically associated with biological methyl transfer reactions. However, SAM is also known to donate methylene, aminocarboxypropyl, adenosyl and amino moieties during natural product biosynthetic reactions. The reaction scope is further expanded as SAM itself can be modified prior to the group transfer such that a SAM-derived carboxymethyl or aminopropyl moiety can also be transferred. Moreover, the sulfonium cation in SAM has itself been found to be critical for several other enzymatic transformations. Thus, while many SAM-dependent enzymes are characterized by a methyltransferase fold, not all of them are necessarily methyltransferases. Furthermore, other SAM-dependent enzymes do not possess such a structural feature, suggesting diversification along different evolutionary lineages. Despite the biological versatility of SAM, it nevertheless parallels the chemistry of sulfonium compounds used in organic synthesis. The question thus becomes how enzymes catalyze distinct transformations via subtle differences in their active sites. This review summarizes recent advances in the discovery of novel SAM-utilizing enzymes that rely on Lewis acid/base chemistry as opposed to radical mechanisms of catalysis. The examples are categorized based on the presence of a methyltransferase fold and the role played by SAM within the context of known sulfonium chemistry.
The Influence of the Nucleotide Composition of Genes and Gene Regulatory Elements on the Efficiency of Protein Expression in Escherichia coli Article Feb 2023 Artur I. Zabolotskii Stanislav V. Kozlovskiy Alexey G. Katrukha Recombinant proteins expressed in Escherichia coli are widely used in biochemical research and industrial processes.
At the same time, achieving higher protein expression levels and correct protein folding still remains the key problem, since optimization of nutrient media, growth conditions, and methods for induction of protein synthesis does not always lead to the desired result. Often, low protein expression is determined by the sequences of the expressed genes and their regulatory regions. The genetic code is degenerate; 18 out of 20 amino acids are encoded by more than one codon. Choosing between synonymous codons in the coding sequence can significantly affect the level of protein expression and protein folding due to the influence of the gene nucleotide composition on the probability of formation of secondary mRNA structures that affect ribosome binding at the translation initiation phase, as well as ribosome movement along the mRNA during elongation, which, in turn, influences mRNA degradation and the folding of the nascent protein. The nucleotide composition of the mRNA untranslated regions, in particular the promoter and Shine-Dalgarno sequences, also affects the efficiency of mRNA transcription, translation, and degradation. In this review, we describe the genetic principles that determine the efficiency of protein production in Escherichia coli.
A novel lysine-substituted nucleoside in the first position of the anticodon of minor isoleucine tRNA from Escherichia coli. Article Full-text available Jul 1988 Tomonari Muramatsu Shigeyuki Yokoyama Nobuyuki Horie Tatsuo Miyazawa A minor species of isoleucine tRNA (tRNA(minor Ile)) specific to the codon AUA has been isolated from Escherichia coli B and a modified nucleoside N+ has been found in the first position of the anticodon (Harada, F., and Nishimura, S. (1974) Biochemistry 13, 300-307). In the present study, tRNA(minor Ile) was purified from E. coli A19, and nucleoside N+ was prepared, by high-performance liquid chromatography, in an amount (0.6 A260 units) sufficient for the determination of chemical structures. By 400 MHz ¹H NMR analysis, nucleoside N+ was found to have a pyrimidine moiety and a lysine moiety, the ε-amino group of which was involved in the linkage between these two moieties. From the NMR analysis together with mass spectrometry, the structure of nucleoside N+ was determined as 4-amino-2-(N6-lysino)-1-(β-D-ribofuranosyl)pyrimidinium ("lysidine"), which was confirmed by chemical synthesis. Lysidine is a novel type of modified cytidine with a lysine moiety and has one positive charge. Probably because of such a unique structure, lysidine in the first position of the anticodon recognizes adenosine but not guanosine in the third position of the codon.
Singly and Bifurcated Hydrogen-bonded Base-pairs in tRNA Anticodon Hairpins and Ribozymes Article Full-text available Oct 1999 Auffinger Pascal Eric Westhof The tRNA anticodon loop always comprises seven nucleotides and is involved in many recognition processes with proteins and RNA fragments. We have investigated the nature and the possible interactions between the first (32) and last (38) residues of the loop on the basis of the available sequences and crystal structures. The data demonstrate the conservation of a bifurcated hydrogen bond interaction between residues 32 and 38, located at the stem/loop junction. This interaction leads to the formation of a non-canonical base-pair which is preserved in the known crystal structures of tRNA/synthetase complexes.
Among the tRNA and tDNA sequences, 93% of the 32·38 oppositions can be assigned to two families of isosteric base-pairs, one with a large (86%) and the other with a much smaller (7%) population. The remainder (7%) of the oppositions have been assigned to a third family due to the lack of evidence for assigning them into the first two sets. In all families, the Y32·R38 base-pairs are not isosteric upon reversal (like the sheared G·A or wobble G·U pairs), explaining the strong conservation of a pyrimidine at position 32. Thus, the 32·38 interaction extends the sequence signature of the anticodon loop beyond the conserved U-turn at position 33 and the usually modified purine at position 37. A comparison with other loops containing both a singly hydrogen-bonded base-pair and a U-turn suggests that the 32·38 pair could be involved in the formation of a base triple with a residue in a ribosomal RNA component. It is also observed that two crystal structures of ribozymes (hammerhead and leadzyme) present similar base-pairs at the cleavage site.
Codon usage tabulated from international DNA sequence databases: Status for the year 2000 Article Full-text available Feb 2000 Yasukazu Nakamura Takashi Gojobori Toshimichi Ikemura The codon usage of each of the 257,468 complete protein coding sequences (CDSs) has been compiled from the taxonomical divisions of the GenBank DNA sequence database. The sum of the codons used by 8792 organisms has also been calculated. The data files can be obtained from the anonymous ftp sites of DDBJ, Kazusa and EBI. A list of the codon usage of genes and the sum of the codons used by each organism can be obtained through the web site. The present study also reports recent developments on the WWW site. The new web interface provides data in the CodonFrequency-compatible format as well as in the traditional table format. The use of the database is facilitated by keyword-based search analysis and the availability of codon usage tables for selected genes from each species. These new tools will provide users with the ability to further analyze variations in codon usage among different genomes.
On the physical basis for ambiguity in genetic coding interactions Article Full-text available Mar 1978 S de Henau Henri Grosjean Donald M. Crothers We report the relative stabilities, in the form of complex lifetimes, of complexes between the tRNAs complementary, or nearly so, in their anticodons. The results show striking parallels with the genetic coding rules, including the wobble interaction and the role of the modified nucleotides s²U and V (a 5-oxyacetic acid derivative of U). One important difference between the genetic code and the pairing rules in the tRNA-tRNA interaction is the stability in the latter of the short wobble pairs, which the wobble hypothesis excludes. We stress the potential of U for translational errors, and suggest a simple stereochemical basis for ribosome-mediated discrimination against short wobble pairs. Surprisingly, the stability of anticodon-anticodon complexes does not vary systematically with base sequence. Because of the close similarity to the genetic coding rules, it is tempting to speculate that the interaction between two RNA loops may have been part of the physical basis for the evolutionary origin of the genetic code, and that this mechanism may still be utilized by folding the mRNA on the ribosome into a loop similar to the anticodon loop.
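The decoding rules invoked repeatedly in the entries above (Crick's wobble proposal, I34 reading U-, C- and A-ending codons, the narrow specificity of C34, and strict Watson-Crick pairing at codon positions 1 and 2) reduce to a small lookup table. The Python sketch below is purely illustrative and is not taken from any of the cited papers; the function and table names are hypothetical, and only the classical, unmodified-base rules are encoded. A second sketch at the end of this section layers on the modification-dependent behaviours (cmo⁵U34, xm⁵s²U-type modifications, unmodified A34) described in several of the abstracts.

```python
# Minimal sketch of Crick's classical wobble rules as summarized in the passages
# above. Names are hypothetical; modified-base extensions (cmo5U34, xm5s2U34,
# tau-m5U34, etc.) are deliberately ignored here.

WATSON_CRICK = {"A": "U", "U": "A", "G": "C", "C": "G"}

# Codon third-position bases that each wobble (position 34) base can read.
WOBBLE_34 = {
    "C": {"G"},            # C34 reads only G
    "A": {"U"},            # unmodified A34 is rare; usually deaminated to I34
    "G": {"C", "U"},       # G34 reads C and U
    "U": {"A", "G"},       # unmodified U34 (often further modified in vivo)
    "I": {"U", "C", "A"},  # inosine, produced by A34 deamination
}

def codons_read(anticodon: str) -> set[str]:
    """Return the codons (5'->3') decoded by an anticodon (5'->3', positions 34-35-36)."""
    a34, a35, a36 = anticodon.upper()
    first = WATSON_CRICK[a36]   # codon position 1 pairs with anticodon position 36
    second = WATSON_CRICK[a35]  # codon position 2 pairs with anticodon position 35
    return {first + second + third for third in WOBBLE_34[a34]}

if __name__ == "__main__":
    # tRNA-Ala with I34 (anticodon IGC) reads GCU, GCC and GCA, as in the
    # S. cerevisiae isoacceptor discussion quoted earlier in this section.
    print(sorted(codons_read("IGC")))  # ['GCA', 'GCC', 'GCU']
    # A G34 anticodon (GGC) reads only GCC and GCU.
    print(sorted(codons_read("GGC")))  # ['GCC', 'GCU']
```

The table is only a mnemonic for the pairing rules quoted above; real decoding capacity additionally depends on anticodon-loop modifications, which is the point most of the cited studies make.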
Modified bases in tRNA: the structures of 5-carbamoylmethyl- and 5-carboxymethyl uridine Article Full-text available Apr 1978 Helen Berman Deborah Marcu Poojappan Narayanan The crystal structures of two nucleosides, 5-carbamoylmethyluridine (1) and 5-carboxymethyluridine (2), were determined from three-dimensional X-ray diffraction data, and refined to R = 0.036 and R = 0.047, respectively. Compound 1 is in the C3′-endo conformation with χ = +5.2° (anti) and backbone torsions of +63.4° and +180.0° (tt); 2 is in the C2′-endo conformation with χ = +49.4° (anti) and backbone torsions of −60.5° and +60.0° (gg). For each derivative, the plane of the side chain substituent is skewed with respect to the plane of the nucleobase; for 1, the carboxamide group is on the same side of the uracil plane vis-à-vis the ribose ring; for 2, the carboxyl group is on the opposite side of this plane. No base pairing is observed for either structure. Incorporation of structure 1 into a 3′-stacked tRNA anticodon appears to place O8 within hydrogen bonding distance of the O2′ hydroxyl of ribose 33, which may limit the ability of such a molecule of tRNA to "wobble".

Modified Nucleosides in Translation Chapter Apr 2014 James F Curran

The Role of Modified Nucleosides in tRNA Interactions Chapter Jan 2018 Glenn R. Björk

Milk protein synthesis Article Jan 1983 J C Mercier P Gaye

Biosynthesis and function of modified nucleosides in tRNA Article Jan 1995 GR Björk

Structure of a Ribonucleic Acid Article Apr 1965 SCIENCE Robert W. Holley Jean Apgar George A. Everett Ada Zamir The complete nucleotide sequence of an alanine transfer RNA, isolated from yeast, has been determined. This is the first nucleic acid for which the structure is known.

Modified nucleosides of tRNA and mechanisms of codon recognition. Article Jan 1988 Biophysics Shigeyuki Yokoyama Tatsuo Miyazawa Two types of hypermodification of uridine in the first position of the anticodon of tRNAs make the conformation of this residue very rigid or very flexible, which contribute to the correct and efficient recognition of codons by tRNAs in protein biosynthesis according to the genetic code.

A cytosolic tRNA with an unmodified adenosine in the wobble position reads a codon ending with the non-complementary nucleoside cytidine Article Apr 2002 Peng Chen Qiang Qian Shaoping Zhang Glenn R. Björk Out of more than 500 sequenced cytosolic tRNAs, there is only one with an unmodified adenosine in the wobble position (position 34). The reason for this rare occurrence of A34 is that it is mostly deaminated to inosine-34 (I34). I34 is a common constituent in the wobble position of tRNAs and has a decoding capacity different from that of A34. We have isolated a mutant (proL207) of Salmonella typhimurium, in which the wobble nucleoside G34 has been replaced by an unmodified A in tRNAGGGPro, which is the only tRNA that normally reads the CCC codon. Thus, this mutant apparently has no tRNA that is considered cognate for the codon CCC. Despite this, the mutant grows normally. As expected, Pro-tRNA selection at the CCC codon in the A-site in a mutant deleted for the proL gene, which encodes the tRNAGGGPro, was severely reduced. However, in comparison this rate of selection was only slightly reduced in the proL207 mutant with its A34-containing tRNAAGGPro, suggesting that this tRNA reads CCC.
Moreover, measurements of the interference by a tRNA residing in the P-site on the apparent termination efficiency at the A-site indicated that indeed the A34-containing tRNA reads the CCC codon. We conclude that A34 in a cytosolic tRNA is not detrimental to the cell and that the mutant tRNAAGGPro is able to read the CCC codon like its wild-type counterpart tRNAGGGPro. We suggest that the decoding of the CCC codon by a 5′-AGG-3′ anticodon occurs by a wobble base-pair between a protonated A34 and a C in the mRNA.

Uridin-5-oxy acetic acid: A new minor constituent from valine transfer RNA I Article Feb 1970 K Murao M Saneyoshi Fumio Harada S Nishimura The primary sequence of tRNAIVal was recently established (1,2), and it was found that an unidentified minor component designated as V was located in the first position of the anticodon of this tRNA (1–3). The unique structure of V is of particular interest, since it must participate directly in codon-anticodon base pairing in the decoding process in protein synthesis. This report describes the characterization of this minor nucleoside as uridin-5-oxy acetic acid (Fig. 1).

Restriction or amplification of wobble recognition: The structure of 2-thio-5-methylaminomethyluridine and the interaction of odd uridines with the anticodon loop backbone Article Oct 1978 Wolfgang Hillen E. Egert H. J. Lindner Hans Günter Gassen

1H NMR studies on the conformational characteristics of 2-thiopyrimidine nucleotides found in transfer RNAs Article Jun 1979 Ziro Yamaizumi Shigeyuki Yokoyama Susumu Nishimura Tatsuo Miyazawa The molecular conformations of naturally occurring 2-thiopyrimidine nucleosides (5-methylaminomethyl-2-thiouridine, 5-methoxycarbonylmethyl-2-thiouridine and 2-thiocytidine) and 5′-mononucleotides (5-methylaminomethyl-2-thiouridine 5′-monophosphate and 2-thiocytidine 5′-monophosphate) in 2H2O solution were elucidated by analyses of the proton NMR spin-coupling constant, nuclear Overhauser effect, and lanthanide-induced shifts and relaxation enhancements. As monomers, these nucleotides are almost exclusively in the 3E-gg-anti form, even in the absence of ordinary stabilizing factors of this form, i.e., base-stacking and base-pairing interactions with other nucleotide units. This inherent conformational rigidity of the 2-thiopyrimidine units probably contributes to stability of the conformation of tRNA.

Nuclear Magnetic Resonance Spectroscopy and Molecular Modeling Reveal That Different Hydrogen Bonding Patterns Are Possible for G·U Pairs: One Hydrogen Bond for Each G·U Pair in r(GGCGUGCC)2 and Two for Each G·U Pair in r(GAGUGCUC)2 Article Aug 2000 Xiaoying Chen Jeffrey A. McDowell Ryszard Kierzek Douglas H. Turner G·U pairs occur frequently and have many important biological functions. The stability of symmetric tandem G·U motifs depends both on the adjacent Watson-Crick base pairs, e.g., 5'G > 5'C, and on the sequence of the G·U pairs, i.e., 5'-UG-3' > 5'-GU-3', where the UG/GU letters denote the nucleotides in the G·U pairs [Wu, M., McDowell, J. A., and Turner, D. H. (1995) Biochemistry 34, 3204-3211]. In particular, at 37 °C, the motif 5'-CGUG-3' is less stable by approximately 3 kcal/mol compared with other symmetric tandem G·U motifs with G-C as adjacent pairs: 5'-GGUC-3', 5'-GUGC-3', and 5'-CUGG-3'. The solution structures of the r(GAGUGCUC)2 and r(GGCGUGCC)2 duplexes have been determined by NMR and restrained simulated annealing. The global geometry of both duplexes is close to A-form, with some distortions localized in the tandem G·U pair region. The striking discovery is that in r(GGCGUGCC)2 each G·U pair apparently has only one hydrogen bond instead of the two expected for a canonical wobble pair. In the one-hydrogen-bond model, the distance between GO6 and UH3 is too far to form a hydrogen bond. In addition, the temperature dependence of the imino proton resonances is also consistent with the different number of hydrogen bonds in the G·U pair. To test the NMR models, U or G in various G·U pairs were individually replaced by N3-methyluridine or isoguanosine, respectively, thus eliminating the possibility of hydrogen bonding between GO6 and UH3. The results of thermal melting studies on duplexes with these substitutions support the NMR models.

Functional Anticodon Architecture of Human tRNALys3 Includes Disruption of Intraloop Hydrogen Bonding by the Naturally Occurring Amino Acid Modification, t6A Article Nov 2000 John W. Stuart Zofia Gdaniec Richard Guenther Paul Agris The structure of the human tRNALys3 anticodon stem and loop domain (ASLLys3) provides evidence of the physicochemical contributions of N6-threonylcarbamoyladenosine (t6A37) to tRNALys3 functions. The t6A37-modified anticodon stem and loop domain of tRNALys3UUU (ASLLys3UUU-t6A37) with a UUU anticodon is bound by the appropriately programmed ribosomes, but the unmodified ASLLys3UUU is not (Yarian, C., Marszalek, M., Sochacka, E., Malkiewicz, A., Guenther, R., Miskiewicz, A., and Agris, P. F., Biochemistry 39, 13390-13395). The structure, determined to an average rmsd of 1.57 ± 0.33 Å (relative to the mean structure) by NMR spectroscopy and restrained molecular dynamics, is the first reported of an RNA in which a naturally occurring hypermodified nucleoside was introduced by automated chemical synthesis. The ASLLys3UUU-t6A37 loop is significantly different from that of the unmodified ASLLys3UUU, although the five canonical base pairs of both ASLLys3UUU stems are in the standard A-form of helical RNA. t6A37, 3′-adjacent to the anticodon, adopts the form of a tricyclic nucleoside with an intraresidue H-bond and enhances base stacking on the 3′-side of the anticodon loop. Critically important to ribosome binding, incorporation of the modification negates formation of an intraloop U33·A37 base pair that is observed in the unmodified ASLLys3UUU. The anticodon wobble position U34 nucleobase in ASLLys3UUU-t6A37 is significantly displaced from its position in the unmodified ASL and directed away from the codon-binding face of the loop, resulting in only two anticodon bases for codon binding. This conformation is one explanation for the ASLLys3UUU tendency to prematurely terminate translation and to −1 frameshift. At the pH 5.6 conditions of our structure determination, A38 is protonated and positively charged in ASLLys3UUU-t6A37 and the unmodified ASLLys3UUU. The ionized carboxylic acid moiety of t6A37 possibly neutralizes the positive charge of A+38. The protonated A+38 can base pair with C32, but t6A37 may weaken the interaction through steric interference.
From these results, we conclude that ribosome binding cannot simply be an induced fit of the anticodon stem and loop, otherwise the unmodified ASLLys3UUU would bind as well as ASLLys3UUU-t6A37 .t 6A37 and other position 37 modifications produce the open, structured loop required for ribosomal binding. View Show abstract Chemistry and structure of modified uridines in the anticodon, wobble position of transfer RNA are determined by thiolation Article Nov 1987 Andrzej Malkiewicz Hanna Gracz Elzbieta Sochacka Paul Agris Uridines found in the first or wobble position of transfer RNA anticodons are most often modified at base ring carbon-5 and many times also thiolated at carbon-2. It is important to understand the chemistry and structure of the modified uridines because they influence codon recognition by tRNA. Uridine and five biologically important 5-position derivatives and the six analogous 2-thiouridines were all investigated by high-performance liquid chromatography (HPLC) and ultraviolet (UV), infrared (IR), 1H, and 13C NMR spectroscopy under physiological conditions. The modified nucleosides were chemically synthesized, and purity was assessed by HPLC. Thiolation produced a more hydrophobic nucleoside independent of the chemical nature of the 5-position substituent as determined by HPLC. Thiolation also produced characteristic differences in IR and UV spectra. The six 2-thiouridines strictly conformed to predominating fractional populations in the C(3′) endo, gauche plus [C(4′)-C(5′)], anti structure as determined by NMR techniques. However, the six uridines were found to be much less restricted to their predominating C(2′) endo, gauche plus structure that was either syn or anti. The syn conformer was found for those uridines with bulky 5-position modifications. Thus, we postulate thiolation of tRNA wobble position uridines may produce the hydrophobic, restricted C(3′) endo, gauche plus, anti conformation best suited for anticodon base stacking, and thereby they may effect a selective codon recognition. View Show abstract Chemistry and Structure of Modified Uridine Dinucleosides Are Determined by Thiolation Article Oct 1992 ChemInform Elzbieta Sochacka Wanda S. Smith Hanna Gracz Paul Agris View Thiolation of Uridine Carbon-2 Restricts the Motional Dynamics of the Transfer RNA Wobble Position Nucleoside Article Mar 1992 Wanda S. Smith Barbara Nawrot Paul Agris Hanna Gracz Thiolation of transfer RNA wobble position uridines produces a preferred conformation of the nucleoside in solution at ambient temperature that is of biological significance to codon recognition [Sierzputowska-Gracz, H.; Sochacka, E.; Malkiewicz, A.; Kuo, K.; Gehrke, C.; Agris, P. F. J. Am. Chem. Soc. 1987,109, 7171-7177]. We investigated and compared, by proton nuclear magnetic resonance (NMR) spectroscopy, the thermodynamic stability of the conformations of 2-thiouridine and five biologically important 5-position derivatives and the six analogous uridines. Under physiological conditions, there were 4.8 times larger values of enthalpy and an average change of 1 kcal/mol, DELTA-G, for the C(2') to C(3') endo transitions of the 2-thiouridines, found to favor the C(3') endo conformation, than for the respective non-thiolated uridines, found preferentially in the C(2') endo conformation. The effect of an adjacent nucleoside on the structures and dynamics of 2-thiouridine and uridine was studied by analyzing the dinucleoside s2UpU. 
Within the dinucleoside the individual nucleosides neither differed in structure nor dynamics from their respective mononucleosides. Therefore, the 2-position thiolation, and not the 5-position modification, produced a significantly more stable, motionally more restricted, C(3')-endo, gauche plus, anti conformer. This thermodynamically preferred structure may be best suited for anticodon base stacking and loop and stem stability. The result in tRNA is a modified-wobble selection of adenine as the only suitable third base of the codon.

Stabilities of consecutive A·C, C·C, G·G, U·C, and U·U mismatches in RNA internal loops: evidence for stable hydrogen-bonded U·U and C·C+ pairs Article Aug 1991 John SantaLucia Ryszard Kierzek Douglas H. Turner The stability and structure of RNA duplexes with consecutive A·C, C·A, C·C, G·G, U·C, C·U, and U·U mismatches were studied by UV melting, CD, and NMR. The results are compared to previous results for GA and AA internal loops [SantaLucia, J., Kierzek, R., & Turner, D. H. (1990) Biochemistry 29, 8813-8819; Peritz, A., Kierzek, R., & Turner, D. H. (1991) Biochemistry 30, 6428-6436]. The observed order for stability increments of internal loop formation at pH 7 is AG = GA ≈ UU > GG ≥ CA ≥ AA = CU = UC ≥ CC ≥ AC. The results suggest two classes for internal loops with consecutive mismatches: (1) loops that stabilize duplexes and have strong hydrogen bonding and (2) loops that destabilize duplexes and may not have strong hydrogen bonding. Surprisingly, rCGCUUGCG forms a very stable duplex at pH 7 in 1 M NaCl with a Tm of 44.8 °C at 1 × 10−4 M and a ΔG°37 of −7.2 kcal/mol. NOE studies of the imino protons indicate hydrogen bonding within the U·U mismatches in a wobble-type structure. Resonances corresponding to the hydrogen-bonded uridines are located at 11.3 and 10.4 ppm. At neutral pH, rCGCCCGCG is one of the least stable duplexes with a Tm of 33.2 °C and ΔG°37 of −5.1 kcal/mol. Upon lowering the pH to 5.5, however, the Tm increases by 12 °C, and ΔG°37 becomes more favorable by 2.5 kcal/mol. The pH dependence of rCGCCCGCG may be due to protonation of the internal loop C's, since no changes in thermodynamic parameters are observed for rCGCUUGCG between pH 7 and 5.5. Furthermore, two broad imino proton resonances are observed at 10.85 and 10.05 ppm for rCGCCCGCG at pH 5.3, but not at pH 6.5. This is also consistent with C·C+ base pairs forming at pH 5.5. rCGCCAGCG and rGGCACGCC have a small pH dependence, with Tm increases of 5 and 3 °C, respectively, upon lowering the pH from 7 to 5.5. rCGCCUGCG and rCGCUCGCG also show little pH dependence, with Tm increases of 0.8 and 1.4 °C, respectively, upon lowering the pH to 5.5. CD spectra of sequences with CC, CU, UC, and UU internal loops are typical of A-form conformation. CD spectra of AC, CA, and GG have a positive band at 280 nm, similar to that observed for GA and AA internal loops (SantaLucia et al., 1990). CD spectra of all sequences studied, except rCGCCCGCG, are independent of pH. For rCGCCCGCG, a weak negative band at 300 nm is observed at pH 7, but at pH 5.5 a weak positive band is observed.
Current algorithms for the prediction of RNA secondary structure assume that the stability of internal loops is not sequence-dependent or depends only on stacking. This study indicates these approximations are wrong. The measured internal loop free energy increments range from -0.6 to +2.3 kcal/mol, and do not correlate with known stacking parameters. View Show abstract Conformational rigidity of specific pyrimidine residues in tRNA arises from posttranscriptional modifications that enhance steric interaction between the base and the 2'-hydroxyl group Article Feb 1992 Gota Kawai Yuriko Yamamoto Takashi Kamimura Shigeyuki Yokoyama In order to elucidate roles of the 2'-O-methylation of pyrimidine nucleotide residues of tRNAs, conformations of 2'-O-methyluridylyl(3'----5')uridine (UmpU), 2'-O-methyluridine 3'-monophosphate (Ump), and 2'-O-methyluridine (Um) in 2H2O solution were analyzed by one- and two-dimensional proton NMR spectroscopy and compared with those of related nucleotides and nucleoside. As for UpU and UmpU, the 2'-O-methylation was found to stabilize the C3'-endo form of the 3'-nucleotidyl unit (Up-/Ump-moiety). This stabilization of the C3'-endo form is primarily due to an intraresidue effect, since the conformation of the 5'-nucleotidyl unit (-pU moiety) was only slightly affected by the 2'-O-methylation of the 3'-nucleotide unit. In fact even for Up and Ump, the 2'-O-methylation significantly stabilizes the C3'-endo form by 0.8 kcal/.mol-1. By contrast, for nucleosides (U and Um), the C3'-endo form is slightly stabilized by 0.1 kcal/.mol-1. Accordingly, the stabilization of the C3'-endo form by the 2'-O-methylation is primarily due to the steric repulsion among the 2-carbonyl group, the 2'-O-methyl group and the 3'-phosphate group in the C2'-endo form. For some tRNA species, 2-thiolation of pyrimidine residues is found in positions where the 2'-O-methylation is found for other tRNA species.(ABSTRACT TRUNCATED AT 250 WORDS) View Show abstract Modified Nucleosides in Transfer RNA Article Nov 1977 James A. McCloskey Susumu Nishimura View Specificity of yeast glutamic acid transfer RNA for codon recognition Article Jun 1969 Biochim Biophys Acta Nucleic Acids Protein Synth Takao Sekiya Keiichi Takeishi Tyunosin Ukita Two species of tRNA for glutamic acid (glutamic acid tRNA's I and III) were separated from yeast tRNA, and the stimulation of their binding to Escherichia coli ribosomes by the trinucleotides GpApA and GpApG, which are known to be codons for glutamic acid, was tested. The specificity of these two species of glutamic acid tRNA's for codon recognition was further studied by testing the transfer of glutamic acids to the specific amino acid position in rabbit hemoglobin by these tRNA species.The results of these studies showed that yeast glutamic acid tRNA's I and III specifically recognized the glutamic acid codons GpApG and GpApA, respectively.The codons for glutamic acid on the hemoglobin mRNA were tentatively assigned.It is the first clear demonstration that a codon containing adenosine residue in the third letter was specifically recognized by a tRNA. View Show abstract Structure of serine tRNA from Escherichia coli: I. 
Purification of serine tRNA's with different codon responses Article Jan 1971 Biochim Biophys Acta Nucleic Acids Protein Synth Hisayuki Ishikura Yuko Yamada Susumu Nishimura Three tRNASer's (tRNASer1, tRNASer3a and tRNASer3b) were highly purified from Escherichia coli by the combined use of several column chromatographic systems, DEAE-Sephadex A-50, BD-cellulose and a reverse phase partition system. All three tRNASer's were charged with serine using aminoacyl-tRNA synthetases from baker's yeast and rat liver. tRNASer1 which recognized codons of UCA and UCG, and less effectively UCU, contained uridin-5-oxyacetic acid as well as 2-methylthio-N6-(Δ2-isopentenyl)adenosine. On the other hand, both tRNASer3a and tRNASer3b recognized codons of AGU and AGC. tRNASer3a contained , while tRNASer3b contained a minor constituent similar to but different from this threonine-containing adenosine derivative. Chromatographic profiles of the complete digestion with ribonuclease T1 suggested structural similarity between tRNASer3a and tRNASer3b. View Show abstract The role of tRNA as a molecular spring in decoding, accommodation, and peptidyl transfer Article Jan 2005 J. Frank Jayati Sengupta H. Gao Mans Ehrenberg The role of tRNA as a molecular spring in decoding, accommodation, and peptidyl transfer View Show abstract Structural dynamics of transfer ribonucleic acid: Carbon-13 nuclear magnetic resonance of [13C]methyl-enriched pure species Article Mar 1983 Randall A. Kopper Paul G. Schmidt Paul Agris Carbon-13 nuclear magnetic resonance (NMR) of 13C-enriched methyl groups native to tRNA has been used to investigate the structure and dynamics of four purified species of Escherichia coli tRNA. All four tRNA species, Phe, Cys, Tyr, and Ser-I, exhibited resonances from the hypermodified nucleoside 2-(methylthio)-N6-(Δ2-isopentenyl)-adenosine (ms2i6A) and from ribothymidine (T). In addition, tRNAPhe yielded a peak for 7-methylguanosine; tRNATyr and tRNASer, resonances for 2′-O-methylguanosine; and tRNASer for 2′-O-methylcytidine (Cm). Carbon-13 enrichment was restricted to the methyl groups, which were approximately 70 atom % 13C, resulting in site-specific 13C NMR probes in the TψCG, dihydrouridine, and anticodon loops of tRNA. Multiple peaks for the methyl groups of T in tRNAPhe and tRNATyr and Gm in the latter indicated multiple structural forms for the regions of the molecule probed. These multiple forms coalesced to a single structure upon the addition of MgCl2. In contrast, tRNAs specific for Cys and Ser existed in only one major structural form in both the absence and presence of Mg2+. Transfer RNAs specific for Phe and Tyr exhibited a single peak for T in the absence of Mg2+ after thermal denaturation (at approximately 36°C) downfield of that for tRNA in the presence of Mg2+. In the presence of Mg2+, the chemical shift of the T methyl group in tRNAPhe and tRNACys was 11.20 ppm, whereas, in tRNASer and tRNATyr, it was 11.10, suggesting that different structural environments may exist in portions of different tRNAs in their native states. Values of spin-lattice relaxation times, nuclear Overhauser enhancements (NOE), and line widths for the methyl carbons of tRNA in the presence of Mg2+ at 25°C were utilized for determining rotational reorientation correlation times. The different tRNA species had significantly different apparent overall rotational correlation times (τR). Transfer RNAPhe reoriented most rapidly with τR of 12 ns, whereas tRNACys was approximately twice this reorientation time. 
Since the two molecules are about the same in molecular weight, τR may reflect true differences in the motional capabilities of large sections of the two tRNA structures. Internal correlation times for diffusion of the methyl group on its axis varied between 0.4 and 1.6 ps for the methyl of T and 0.8 and 2.0 ps for the methylthio of ms2i6A for the four tRNA species. The 2′-O-methyl resonance of tRNASer, having a chemical shift of 58.36 ppm and tentatively assigned to Cm in the anticodon loop, exhibited a remarkably large NOE, 2.3, and a large T1, 2.0 s. This suggests that the methyl carbon experiences motion of the C2-O bond in addition to simple methyl rotation. However, for the majority of the methyl groups fast diffusional reorientation about the methyl-base or methyl-S bond was the greatest contributor to relaxation from internal motion. Enough differences in tRNA spectra, relaxation rates, NOE, and calculated τR and τi values exist among the four tRNA species to conclude that there are differences in structure and dynamics, especially in the region of the TψCG loop interaction with the dihydrouridine loop. View Show abstract 'Two out of three': An alternative method for codon reading Article May 1978 Ulf Lagerkvist An alternative method for codon reading, whereby only the first two codon nucleotides are recognized by the anticodon, is discussed and the experimental evidence for this "two of three" reading method is reviewed. Misreading of codons by the "two out of three" method could pose a significant threat to the fidelity of protein synthesis unless the genetic code is organized in such a way as to prevent this method from being used when it might compromise translational fidelity. Inspection of the genetic code shows that it is arranged in such a way that the "two out of three" reading method can be used without translational errors. View Show abstract Studies of the complex between transfer RNAs with complementary anticodons Article Jun 1976 Henri Grosjean Dieter Soll Donald M. Crothers We used the temperature-jump method to study the complex between yeast t RNAPheand Escherichia coli tRNAGlu, which have the complementary anticodons GmAA and s2UUC, respectively. The binding constant (3.6 × 105m−1 at 25 °C) is about six orders of magnitude larger than expected for two complementary trinucleotides. The association rate constant (3 × 106m−1 at 25 °C) is similar to typical values observed for oligonucleotides, so the enhanced affinity in the tRNA · tRNA complex is due entirely to a much slower dissociation than expected for a three base-pair helix. We found an association enthalpy of −25 kcal/mol, nearly twice as large as expected for two stacking interactions in a three base-pair helix. The association entropy (−58 cal/deg per mol) is close to the expected value. The reaction occurs with a single relaxation, and therefore does not involve any slow reorganization of the tRNA molecule.We studied structural variations to investigate the origin of affinity enhancement. The following general factors are important. (1) The “loop constraint”, or closure of the two anticodon sequences into hairpin loops, accounts for about a factor 50 in the affinity. (2) “Dangling ends”, or non-complementary nucleotides at the end of the double helix contribute strongly to the affinity. (3) Modified nucleotides, like the Y base, in the dangling ends can contribute a special stabilization of up to a factor seven. 
These observations can be understood in terms of a model in which the short three base-pair helix is sandwiched between stacked bases and hence stabilized. The potential importance of loop-loop interactions and stacking effects for codon-anticodon bonding is emphasized. The results suggest a possible simple physical basis for the evolutionary choice of a triplet coding system. View Show abstract Base pairing and fidelity in codon-anticodon interaction Article Oct 1976 Michael Topal Jacques R Fresco Base pairing in codon-anticodon interaction has been investigated in order to understand the basis on which particular base pairs have been selected for or against participation at the wobble position and the basis for codon-anticodon infidelity. View Show abstract Wobble position modified nucleosides evolved to select transfer RNA codon recognition: A modified-wobble hypothesis Article Dec 1991 BIOCHIMIE Paul Agris While recognized that some wobble exists in the base pairing of the first base of the tRNA anticodon with the third of the codon, specific base modifications have evolved to select particular codons. This modified-wobble theory would be exemplified by a single codon recognition imposed on the anticodon by modification of the tRNA wobble position nucleoside. View Show abstract Crystal structure of an RNA double helix incorporating a track of non-Watson-Crick base pairs Article Nov 1991 Stephen R Holbrook Chaejoon Cheong Ignacio Tinoco Sang Hoon Kim The crystal structure of the RNA dodecamer duplex (r-GGACUUCGGUCC)2 has been determined. The dodecamers stack end-to-end in the crystal, simulating infinite A-form helices with only a break in the phosphodiester chain. These infinite helices are held together in the crystal by hydrogen bonding between ribose hydroxyl groups and a variety of donors and acceptors. The four noncomplementary nucleotides in the middle of the sequence did not form an internal loop, but rather a highly regular double-helix incorporating the non-Watson-Crick base pairs, G.U and U.C. This is the first direct observation of a U.C (or T.C) base pair in a crystal structure. The U.C pairs each form only a single base-base hydrogen bond, but are stabilized by a water molecule which bridges between the ring nitrogens and by four waters in the major groove which link the bases and phosphates. The lack of distortion introduced in the double helix by the U.C mismatch may explain its low efficiency of repair in DNA. The G.U wobble pair is also stabilized by a minor-groove water which bridges between the unpaired guanine amino and the ribose hydroxyl of the uracil. This structure emphasizes the importance of specific hydrogen bonding between not only the nucleotide bases, but also the ribose hydroxyls, phosphate oxygens and tightly bound waters in stabilization of the intramolecular and intermolecular structures of double helical RNA. View Show abstract Codon and amino-acid specificities of a transfer RNA are both converted by a single post-transcriptional modification Article Dec 1988 Kazuya Nishikawa Fumiko Nemoto Tomonari Muramatsu Shigeyuki Yokoyama An Escherichia coli isoleucine transfer RNA specific for the codon AUA (tRNA(2Ile) or tRNA(minorIle] has a novel modified nucleoside, lysidine in the first position of the anticodon (position 34), which is essential for the specific recognition of the codon AUA. We isolated the gene for tRNA(2Ile) (ileX) and found that the anticodon is CAT, which is characteristic of the methionine tRNA gene. 
Replacement of L(34) of tRNA(2Ile) molecule enzymatically with unmodified C(34) resulted in a marked reduction of the isoleucine-accepting activity and, surprisingly, in the appearance of methionine-accepting activity. Thus, both the codon and amino-acid specificity of this tRNA are converted by a single post-transcriptional modification of the first position of the anticodon during tRNA maturation.

Internal motions in yeast phenylalanine transfer RNA from 13C NMR relaxation rates of modified base methyl groups: A model-free approach Article Dec 1987 Paul G. Schmidt Hanna Gracz Paul Agris Internal motions at specific locations through yeast phenylalanine tRNA were measured by using nucleic acid biosynthetically enriched in 13C at modified base methyl groups. Carbon NMR spectra of isotopically enriched tRNA(Phe) reveal 12 individual peaks for 13 of the 14 methyl groups known to be present. The two methyls of N2,N2-dimethylguanosine (m22G-26) have indistinguishable resonances, whereas the fourteenth methyl bound to ring carbon-11 of the hypermodified nucleoside 3' adjacent to the anticodon, wyosine (Y-37), does not come from the [methyl-13C]methionine substrate. Assignments to individual nucleosides within the tRNA were made on the basis of chemical shifts of the mononucleosides [Agris, P. F., Kovacs, S. A. H., Smith, C., Kopper, R. A., & Schmidt, P. G. (1983) Biochemistry 22, 1402-1408; Smith, C., Schmidt, P. G., Petsch, J., & Agris, P. F. (1985) Biochemistry 24, 1434-1440] and correlation of 13C resonances with proton NMR chemical shifts via two-dimensional heteronuclear proton-carbon correlation spectroscopy [Agris, P. F., Sierzputowska-Gracz, H., & Smith, C. (1986) Biochemistry 25, 5126-5131]. Values of 13C longitudinal relaxation (T1) and the nuclear Overhauser enhancements (NOE) were determined at 22.5, 75.5, and 118 MHz for tRNA(Phe) in a physiological buffer solution with 10 mM MgCl2, at 22 degrees C.
These data were used to extract two physical parameters that define the system with regard to fast internal motion: the generalized order parameters (S2) and effective correlation times (tau e) for internal motion of the C-H internuclear vectors.(ABSTRACT TRUNCATED AT 250 WORDS) View Show abstract Molecular mechanism of codon recognition by tRNA species with modified uridine in the first position of the anticodon Article Sep 1985 Shigeyuki Yokoyama T. Watanabe Katsutoshi Murao Tatsuo Miyazawa Proton NMR analyses have been made to elucidate the conformational characteristics of modified nucleotides as found in the first position of the anticodon of tRNA [derivatives of 5-methyl-2-thiouridine 5'-monophosphate (pxm5s2U) and derivatives of 5-hydroxyuridine 5'-monophosphate (pxo5U)]. In pxm5s2U, the C3'-endo form is extraordinarily more stable than the C2'-endo form for the ribose ring, because of the combined effects of the 2-thiocarbonyl group and the 5-substituent. By contrast, in pxo5U, the C2'-endo form is much more stable than the C3'-endo form, because of the interaction between the 5-substituent and the 5'-phosphate group. The enthalpy differences between the C2'-endo form and the C3'-endo form have been obtained as 1.1, -0.7, and 0.1 kcal/mol (1 cal = 4.184 J) for pxm5s2U, pxo5U, and unmodified uridine 5'-monophosphate, respectively. These findings lead to the conclusion that xm5s2U in the first position of the anticodon exclusively takes the C3'-endo form to recognize adenosine (but not uridine) as the third letter of the codon, whereas xo5U takes the C2'-endo form as well as the C3'-endo form to recognize adenosine, guanosine, and uridine as the third letter of the codon on ribosome. Accordingly, the biological significance of such modifications of uridine to xm5s2U/xo5U is in the regulation of the conformational rigidity/flexibility in the first position of the anticodon so as to guarantee the correct and efficient translation of codons in protein biosynthesis. View Show abstract Coding Properties and Nucleotide Sequences of E. coli Glutamine tRNAs Article Jul 1972 William R. Folk Moshe Yaniv The recent isolation of the sun + glutamine-inserting amber suppressor in E. coli8·9 prompted us to determine if the suppressor was a tRNA, and to attempt to define the modification leading to the suppressor activity. Ribosome binding experiments indicate that si/7+/s7 heterozygote bacteria contain a glutaminyl tRNA which binds to ribosomes in response to the trinucleotide UAG (unpublished results). View Show abstract Minor components in transfer RNA: their characterization, location, and function Article Feb 1972 PROG MOL BIOL TRANSL S Nishimura View Function of Y in codon-anticodon interaction of tRNAPhe Article Feb 1973 Olaf Pongs Erwin Reinwald Molar association constants of binding oligonucleotides to the anticodon loops of (yeast) tRNAPhe, (yeast) tRNAHClPhe and (E. coli) tRNAFMet have been determined by equilibrium dialysis. From the temperature dependence of the molar association constants, ΔF, ΔH and ΔS of oligomer-anticodon loop interaction have been determined. The data indicate that the free energy change of codon-anticodon interaction is highly influenced by the presence of a modified purine (tRNAPhe), of an unmodified purine (tRNAFMet) or its absence (tRNAHClPhe). Excision of the modified purine Y in the anticodon loop of tRNAPhe results in a conformational change of the anticodon loop, which is discussed on the basis of the corresponding changes in ΔF, ΔH and ΔS. 
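Several of the entries above report conformational preferences as energy differences of roughly 0.1–1 kcal/mol (for example, the enthalpy differences quoted for pxm5s2U and pxo5U, or the ΔF, ΔH and ΔS values for codon-anticodon binding). As a rough aid to reading such numbers, the sketch below converts a two-state free-energy difference into fractional conformer populations with the Boltzmann relation; treating a quoted enthalpy difference as an approximate free energy (i.e., neglecting the entropy term) is an assumption made here purely for illustration.

```python
import math

R_KCAL = 1.987e-3  # gas constant in kcal/(mol*K)

def two_state_populations(delta_g_kcal: float, temp_k: float = 310.0):
    """Fractions (f_A, f_B) for A <-> B with delta_g_kcal = G_B - G_A."""
    k_eq = math.exp(-delta_g_kcal / (R_KCAL * temp_k))  # equilibrium ratio [B]/[A]
    f_b = k_eq / (1.0 + k_eq)
    return 1.0 - f_b, f_b

# Example: a 1 kcal/mol preference for the C3'-endo pucker at 37 degrees C
# (an illustrative number, not a fit to any of the cited data).
f_c2_endo, f_c3_endo = two_state_populations(-1.0)
print(f"C3'-endo ~ {f_c3_endo:.0%}, C2'-endo ~ {f_c2_endo:.0%}")  # about 84% vs 16%
```

A difference of about 1 kcal/mol therefore shifts a roughly 50:50 equilibrium to roughly 5:1, which is the scale on which these modifications bias the wobble nucleoside toward a single ribose conformation.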
View Show abstract Possibilities for the Evolution of the Genetic Code from a Preceding Form Article Dec 1973 Thomas H. Jukes Analysis of the interaction between mRNA codons and tRNA anticodons suggests a model for the evolution of the genetic code. Modification of the nucleic acid following the anticodon is at present essential in both eukaryotes and prokaryotes to ensure fidelity of translation of codons starting with A, and the amino acids which could be coded for before the evolution of the modifying enzymes can be deduced. View Show abstract Purification and characterization of AUA specific isoleucine transfer ribonucleic acid from Escherichia coli B Article Feb 1974 Fumio Harada Susumu Nishimura An isoleucine tRNA (tRNAminorIle) specific for the codon AUA was obtained from Escherichia coli B by successive column chromatographies on DEAE-Sephadex A-50 at pH 7.5 and pH 4.0, benzoylated DEAE-cellulose, and hydroxylapatite. Its binding to E. coli ribosomes was stimulated by A-U-A, but not A-U-U, A-U-C, or A-U-G. The isoleucine acceptor activity of this tRNA, unlike that of the major species of E. coli isoleucine tRNA (tRNAmajorIle), was inhibited almost completely by treatment with cyanogen bromide. Analysis of digests of tRNAminorIle with RNase T2 and RNase T1 showed that the structure of this tRNA differs completely from that of E. coli tRNAmijorIle. An oligonucleotide containing t6A and an unknown modified nucleoside N+ was isolated from a digest of tRNAminorIle with RNase T1. The sequence of this oligonucleotide was determined as A-C-U-N+-A-U-t6A-A-ψ-C-Gp, indicating that N+-A-U is the anticodon of this tRNA. The results showed that the unique, new minor component N+ functions in specific recognition of the AUA codon. View Show abstract Transfer of Valine into Rabbit Haemoglobin from Various Isoaccepting Species of Valyl-tRNA Differing in Codon Recognition Article Nov 1973 Toshiyuki Takemoto Susumu Nishimura Tyunosin Ukita Keiichi Takeishi The transfer of valine into haemoglobin from various isoaccepting species of Val-tRNA obtained from Escherichia coli and yeast, which differ in the specificity for codon recognition has been investigated using a rabbit reticulocyte cell-free system. It was demonstrated that the Val-tRNA species which bind to ribosomes in response to the codon GUG preferentially transferred valine into both α and β-globin chains and that the other species of Val-tRNA which cannot repond to GUG were scarcely utilized in the haemoglobin synthesis. Several possibilities to explain this phenomenon were examined and consequently it has been strongly suggested that valine codons in mRNA for rabbit globin, at least for α-globin chain, are very rich in GUG and possibly GUA. View Show abstract Fidelity in protein synthesis. The role of the ribosome. Article Nov 1968 S M Friedman R Berezney I B Weinstein The effect of ribosome species on ambiguity during the transfer of amino acids from Escherichia coli aminoacyl transfer RNAs into protein has been examined under a variety of environmental conditions. Streptomycin greatly enhanced polyuridylic acid-directed transfer of leucine relative to phenylalanine (ambiguity ratio) with E. coli ribosomes, but had little effect with equivalent concentrations of reticulocyte ribosomes. The addition of E. coli supernatant fraction did not increase the ambiguity ratio obtained with reticulocyte ribosomes. In the presence of streptomycin, ambiguity was maximal at low concentrations of transfer RNA with E. 
coli ribosomes; transfer RNA concentration had a lesser effect on the low ambiguity obtained with reticulocyte ribosomes. Raising the magnesium concentration from 6 to 13 x 10-3 m increased the ambiguity ratio from 9 to 63% with E. coli ribosomes and from 0 to 25% with reticulocyte ribosomes. Ethanol increased the ambiguity ratio with E. coli ribosomes but not with reticulocyte ribosomes. The high fidelity of reticulocyte ribosomes was not restricted to leucine-phenylalanine ambiguity; although streptomycin caused a copolymer of uridylic and guanylic acid to miscode for arginine with E. coli ribosomes, this effect was not observed with reticulocyte ribosomes. These results show that streptomycin, a low concentration of transfer RNA, a high concentration of magnesium, and the addition of ethanol exert their miscoding effects at the ribosomal level. The fact that reticulocyte ribosomes are more resistant to these miscoding effects than E. coli ribosomes suggests that the ribosome plays an active role in codon recognition. View Show abstract The origin of the genetic code Article Jan 1969 F. H. C. Crick The general features of the genetic code are described. It is considered that originally only a few amino acids were coded, but that most of the possible codons were fairly soon brought into use. In subsequent steps additional amino acids were substituted when they were able to confer a selective advantage, until eventually the code became frozen in its present form. View Show abstract Specificity of yeast glutamic acid transfer RNA for codon recognition Article Jul 1969 Biochim Biophys Acta Takao Sekiya Keiichi Takeishi T Ukita Two species of tRNA for glutamic acid (glutamic acid tRNA's I and III) were separated from yeast tRNA, and the stimulation of their binding to Escherichia coli ribosomes by the trinucleotides GpApA and GpApG, which are known to be codons for glutamic acid, was tested. The specificity of these two species of glutamic acid tRNA's for codon recognition was further studied by testing the transfer of glutamic acids to the specific amino acid position in rabbit hemoglobin by these tRNA species. The results of these studies showed that yeast glutamic acid tRNA's I and III specifically recognized the glutamic acid codons GpApG and GpApA, respectively. The codons for glutamic acid on the hemoglobin mRNA were tentatively assigned. It is the first clear demonstration that a codon containing adenosine residue in the third letter was specifically recognized by a tRNA. View Show abstract Uridin-5-oxy acetic acid: a new minor constituent from E. coli valine transfer RNA I Article Mar 1970 K Murao M Saneyoshi Fumio Harada S Nishimura View Structure of serine tRNA from Escherichia coli. I. Purification of serine tRNA's with different codon responses Article Feb 1971 Biochim Biophys Acta H Ishikura Y Yamada S Nishimura View RNA codons and protein synthesis. IX. Synonym codon recognition by multiple species of valine-, alanine-, and methionine-sRNA Article May 1966 D A Kellogg Bhupendra P Doctor J E Loebel Marshall W. 
Nirenberg

Codon-anticodon recognition: The missing triplet hypothesis Article Mar 1971 Jacques Ninio The purpose of this article is to determine whether the rules for codon-anticodon recognition can roughly be the same as for double-stranded RNA associations or if some special configuration which allows one and only one of the bases to wobble is necessary to account for the presence of inosine in a certain number of anticodons in yeast and the presumed absence of related anticodons. It is proposed that the recognition of triplets in a double helical configuration of the type which seems to occur in the ordered segments of transfer RNA is necessarily ambiguous because of the interactions between non-hydrogen-bonded bases on opposite strands (diagonal interactions of stacking). It is then deduced that the complex patterns of degeneracy in codon-anticodon recognition can occur in a situation which is less unsymmetrical with respect to the three positions of the codon-anticodon association than assumed by the wobble hypothesis, provided that some of the potential anticodons be absent in the cell. Our hypothesis makes predictions of this type: if in the anticodon for alanine in yeast (IGC), inosine is substituted by guanine, the resulting modified tRNA will be recognized, in addition to GCC, by GUC, which is a codon for valine.

Sulfur-Containing Nucleoside from Yeast Transfer Ribonucleic Acid: 2-Thio-5(or 6)-uridine Acetic Acid Methyl Ester Article Apr 1968 L Baczynskyj K. Biemann Ross H. Hall A nucleoside, isolated from yeast transfer RNA, has been assigned the structure 2-thio-5-uridine acetic acid methyl ester on the basis of high-resolution mass spectrometry, chemical properties, and ultraviolet spectra. The alternate 6-substituted isomeric structure cannot yet be completely ruled out.
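The Ninio abstract above reasons about which codons an anticodon such as IGC (or a hypothetical GGC variant) should read. For comparison, the sketch below enumerates codons under the classical Crick wobble rules only (G·U wobble, and inosine pairing with U, C or A at the third codon position); it does not implement Ninio's extended "missing triplet" proposal, and the rule table is the standard textbook one, stated here as an assumption rather than taken from the cited paper.

```python
# Classical Crick wobble rules: third-codon-position bases readable by a given
# anticodon wobble base (anticodon position 34, anticodon written 5'->3').
WOBBLE_34 = {"G": "CU", "C": "G", "A": "U", "U": "AG", "I": "UCA"}
WATSON_CRICK = {"A": "U", "U": "A", "G": "C", "C": "G"}

def codons_read(anticodon):
    """Codons (5'->3') readable by a 5'->3' anticodon, e.g. 'IGC' for yeast tRNA-Ala."""
    pos34, pos35, pos36 = anticodon          # wobble base is the 5' base of the anticodon
    first = WATSON_CRICK[pos36]              # codon position 1 pairs with anticodon 36
    second = WATSON_CRICK[pos35]             # codon position 2 pairs with anticodon 35
    return [first + second + third for third in WOBBLE_34[pos34]]

print(codons_read("IGC"))  # ['GCU', 'GCC', 'GCA'] -- the alanine codons
print(codons_read("GGC"))  # ['GCC', 'GCU'] -- Crick rules alone predict no GUC reading
```

Under the classical rules the I-to-G substitution simply narrows recognition to GCC and GCU; Ninio's "diagonal interaction" argument is precisely that the real situation may be more permissive than this table suggests.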
PAC-learning bound with epsilon-cover of hypothesis class - Theoretical Computer Science Stack Exchange

Asked 7 years, 4 months ago; modified 5 years, 6 months ago; viewed 1k times. Score: 3.

In this video at 43:00, a version of the PAC bound for generalization error $\epsilon$, which I hadn't seen before, is quoted:
$$\epsilon^2 < \frac{\log|H_\epsilon| + \log(1/\delta)}{2m},$$
where $m$ is the number of samples, $\delta$ is the confidence parameter, and $|H_\epsilon|$ is the cardinality of an "$\epsilon$-cover of the hypothesis class", where he defines an $\epsilon$-cover as a set of subsets of the hypothesis class such that the probability that two hypotheses in the same subset disagree is less than $\epsilon$. Apart from the fact that this isn't a formal statement, I couldn't prove this myself. Has anyone heard of this version of PAC, and if so, could they point me to resources explaining it, or give some explanation here?

Tags: machine-learning, vc-dimension, pac-learning. Asked Mar 27, 2018 at 19:10 by guillefix.

2 Answers

Answer (score 4): This follows from Massart's finite class lemma. Let $F$ be a binary function class restricted to some set $\{X_1, \ldots, X_n\}$, and let $P_n$ be the empirical (i.e., uniform) measure on this set.
Then, for any $\epsilon > 0$, the empirical Rademacher complexity of $F$ is bounded by
$$R_n(F; X) \le \epsilon + \sqrt{\frac{2 \log N_F(\epsilon)}{n}},$$
where $N_F(\epsilon)$ is the $\ell_2$ $\epsilon$-covering number of $F$ with respect to $P_n$. This is proved in display (1) here: -- check out the course notes:

answered Mar 27, 2018 at 19:34 by Aryeh

Comments:

I'd say it's less difficult than, say, the standard PAC bounds, which rely on the "deep" Sauer's Lemma -- while this is really just a clever use of Hoeffding + union bound. – Aryeh, Mar 27, 2018

On a side note, I like how in learning theory what is essentially "Chernoff + union bound" is named after a person :) – Sasho Nikolov, Mar 28, 2018

@SashoNikolov you bring up an interesting meta-point that may be worthy of its own post. Massart's finite class lemma is indeed a rather trivial result, likely folklore long before he published it. But he's a famous probabilist, well-known for highly non-trivial results (such as computing the optimal constants in the DKW inequality). I've observed that famous people often have simple results named after them, and this is not unique to learning theory. Just look at the Johnson-Lindenstrauss lemma -- certainly not the deepest or most difficult, though probably what they're best known for. – Aryeh, Mar 28, 2018

I agree :). I think "Chernoff bound" is an even better example. Chernoff considered it a minor lemma in a paper, whose proof he had learned from Rubin. The proof technique is at least thirty years older than Chernoff's paper, and dates back to the work of Sergei Bernstein. But the name took off. – Sasho Nikolov, Mar 28, 2018

@SashoNikolov Bring back Legendre's constant! – Clement C., Mar 28, 2018

Answer (score 0): Let the risk be a random variable. By the Chernoff-Hoeffding inequality, you can bound the deviation of the empirical risk from the true risk:
$$P\left(\left|\frac{1}{n}\sum_{i=1}^{n} \xi_i - E(\xi)\right| \ge \epsilon\right) \le 2\exp(-2n\epsilon^2), \qquad \text{i.e.} \qquad P\big(|\hat{R}_S(h) - R(h)| \ge \epsilon\big) \le 2\exp(-2n\epsilon^2).$$
Unfortunately, this bound only holds for a fixed function $h$ which does not depend on the training data, but our hypothesis certainly does depend on it. The reason for such a constraint is intuitive. If we let the hypothesis space contain all possible functions and do not restrict our hypothesis to be independent of the training data, we can always find a function that "memorises" the given sample and has no empirical error. Such a function will most certainly not generalise well and will invalidate the bound. Vapnik and Chervonenkis solved this conundrum by using the union bound.
Union Bound

If we enumerate all the functions in $H$, using the fact that it is finite (it is an $\epsilon$-cover of the hypothesis class), the bound still holds for each hypothesis, and the union bound gives
$$P\Big[\exists h \in H : |\hat{R}_S(h) - R(h)| > \epsilon\Big] \le \sum_{h \in H} P\big(|\hat{R}_S(h) - R(h)| > \epsilon\big) \le 2|H|\exp(-2n\epsilon^2),$$
and therefore
$$P\Big(\sup_{h \in H} |\hat{R}_S(h) - R(h)| > \epsilon\Big) \le 2|H|\exp(-2n\epsilon^2).$$
For a PAC guarantee we require $P[\exists h \in H : |\hat{R}_S(h) - R(h)| > \epsilon] < \delta$, so it suffices to take $\delta \ge 2|H|\exp(-2n\epsilon^2)$. Assuming $\delta = 2|H|\exp(-2n\epsilon^2)$, we have
$$\exp(-2n\epsilon^2) = \frac{\delta}{2|H|}, \qquad -2n\epsilon^2 = \ln\delta - \ln(2|H|), \qquad \epsilon^2 = \frac{\ln|H| + \ln 2 - \ln\delta}{2n},$$
$$\therefore \; \epsilon = +\sqrt{\frac{\ln|H| + \ln(2/\delta)}{2n}}.$$

answered Feb 6, 2020 at 3:22 by Fred Guth
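To make the finite-class bound in the two answers concrete, here is a small numerical sketch that evaluates the resulting generalization-error guarantee, $\epsilon = \sqrt{(\ln|H| + \ln(2/\delta))/(2m)}$, and the sample size needed to reach a target $\epsilon$. The particular values of $|H|$, $\delta$ and $\epsilon$ are arbitrary assumptions chosen only to show the scaling; with an $\epsilon$-cover one would plug in $|H_\epsilon|$ and remember the extra additive $\epsilon$ term from the covering approximation.

```python
import math

def epsilon_bound(h_size: int, delta: float, m: int) -> float:
    """Two-sided Hoeffding + union bound for a finite hypothesis class of size h_size."""
    return math.sqrt((math.log(h_size) + math.log(2.0 / delta)) / (2.0 * m))

def samples_needed(h_size: int, delta: float, eps: float) -> int:
    """Smallest m for which epsilon_bound(h_size, delta, m) <= eps."""
    return math.ceil((math.log(h_size) + math.log(2.0 / delta)) / (2.0 * eps ** 2))

# Illustrative numbers: a cover with 10,000 hypotheses, 95% confidence.
print(round(epsilon_bound(h_size=10_000, delta=0.05, m=5_000), 3))   # ~0.036
print(samples_needed(h_size=10_000, delta=0.05, eps=0.05))           # 2580
```

The logarithmic dependence on $|H|$ is what makes the $\epsilon$-cover argument useful: even a very large cover enters the bound only through $\ln|H_\epsilon|$.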
What does, "For you alone are holy." mean in Revelation 15:4? Why isn't gauge symmetry a symmetry while global symmetry is? Does cell phone only receive (one way communication) or receive and transmit microwaves (two way communication) during download? Elfquest story where two elves argue over one's hypnotizing of an animal In Matthew 17:4, what was Peter’s intention in proposing to make three tents for Jesus, Moses, and Elijah? Can Suspended Sentence be cast Twice? "Melbourne saw the most significant change both in actual coffee prices and in percentages." Harry Potter fanfic where Petunia dies of cancer and Vernon works at a horse racing track? Question feed Subscribe to RSS Question feed To subscribe to this RSS feed, copy and paste this URL into your RSS reader. Why are you flagging this comment? It contains harassment, bigotry or abuse. This comment attacks a person or group. Learn more in our Code of Conduct. It's unfriendly or unkind. This comment is rude or condescending. Learn more in our Code of Conduct. Not needed. This comment is not relevant to the post. Enter at least 6 characters Something else. A problem not listed above. Try to be as specific as possible. Enter at least 6 characters Flag comment Cancel You have 0 flags left today Theoretical Computer Science Tour Help Chat Contact Feedback Company Stack Overflow Teams Advertising Talent About Press Legal Privacy Policy Terms of Service Your Privacy Choices Cookie Policy Stack Exchange Network Technology Culture & recreation Life & arts Science Professional Business API Data Blog Facebook Twitter LinkedIn Instagram Site design / logo © 2025 Stack Exchange Inc; user contributions licensed under CC BY-SA. rev 2025.8.13.32804 By clicking “Accept all cookies”, you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy. Accept all cookies Necessary cookies only Customize settings Cookie Consent Preference Center When you visit any of our websites, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences, or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and manage your preferences. Please note, blocking some types of cookies may impact your experience of the site and the services we are able to offer. Cookie Policy Accept all cookies Manage Consent Preferences Strictly Necessary Cookies Always Active These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information. Cookies Details‎ Performance Cookies [x] Performance Cookies These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. All information these cookies collect is aggregated and therefore anonymous. 
If you do not allow these cookies we will not know when you have visited our site, and will not be able to monitor its performance. Cookies Details‎ Functional Cookies [x] Functional Cookies These cookies enable the website to provide enhanced functionality and personalisation. They may be set by us or by third party providers whose services we have added to our pages. If you do not allow these cookies then some or all of these services may not function properly. Cookies Details‎ Targeting Cookies [x] Targeting Cookies These cookies are used to make advertising messages more relevant to you and may be set through our site by us or by our advertising partners. They may be used to build a profile of your interests and show you relevant advertising on our site or on other sites. They do not store directly personal information, but are based on uniquely identifying your browser and internet device. Cookies Details‎ Cookie List Clear [x] checkbox label label Apply Cancel Consent Leg.Interest [x] checkbox label label [x] checkbox label label [x] checkbox label label Necessary cookies only Confirm my choices
319
Published Time: 2017-05-31 (PDF) Solidification modeling with user defined function in Ansys Fluent
===============
Conference Paper, May 2017. Conference: International Conference on Computational Fluid Dynamics in the Oil & Gas, Metallurgical and Process Industries, Trondheim. Volume: 12.
Authors: Moritz Eickhoff (RWTH Aachen University), Antje Rückert (Aluminium Duffel), Herbert Pfeifer (RWTH Aachen University)

12th International Conference on CFD in Oil & Gas, Metallurgical and Process Industries, SINTEF, Trondheim, Norway, May 30th – June 1st 2017, CFD 2017

SOLIDIFICATION MODELING WITH USER DEFINED FUNCTION IN ANSYS FLUENT

Moritz EICKHOFF, Antje RÜCKERT, Herbert PFEIFER
RWTH Aachen University, Department for Industrial Furnaces and Heat Engineering, Kopernikusstr. 10, 52074 Aachen, GERMANY
E-mail: [email protected]

ABSTRACT
The modelling of solidification processes in combination with fluid flow is one main application of ANSYS Fluent. The solidification is modelled with the enthalpy porosity technique. Therefore, the fluid flow is damped like a flow through a porous medium of dendrites. In case of materials with large solidification ranges, like the nickel based superalloy 718, the adjustment possibilities of ANSYS Fluent are often not adequate. The program postulates a linear dependency between liquid fraction and temperature. To improve the simulation, the solidification was implemented by a user defined function (UDF). The principal modelling of fluid flow is based on the theory of ANSYS Fluent, but it is now possible to adjust the liquid fraction in fine temperature steps.

Keywords: Rheology, Interphases, Casting and solidification, Process metallurgy, Alloy 718.

NOMENCLATURE
Greek Symbols
- Turbulent dissipation rate, [-].
- Thermal conductivity, [W/(m K)].
- Dynamic viscosity, [kg/(m s)].
- Divergence operator, [-].
- Density, [kg/m³].
- Shear stress tensor, [N/m²].
Latin Symbols
- Mushy zone constant, [kg/(m³ s)].
- Internal energy, [J].
- Fraction, [-].
- Force against fluid flow per volume, [N/m³].
- Gravity, [m/s²].
- Turbulent kinetic energy, [-].
- Permeability, [m²].
- Small number, [-].
- Pressure, [Pa].
- Volumetric energy source, [J/m³].
- Momentum sink for turbulence, [kg/(m³ s)].
- Velocity, [m/s].
- Time, [s].
- Temperature, [K].

Sub/superscripts
- eff: Effective (molecular + turbulent).
- ESR: Electro slag remelting.
- ε: Turbulent dissipation rate.
- k: Turbulent kinetic energy.
- liq: Liquidus / liquid.
- p: Pulling (movement of the solid).
- s: Solidus.
- UDF: User-defined function.
- UDM: User-defined memory.
- VAR: Vacuum arc remelting.
- x, y, z: X-, Y-, Z-direction.

INTRODUCTION
Metallurgical processes are often modeled to obtain details of the inner fluid flow or temperature distribution, due to the difficult observation possibilities with classical measurement methods. The modelling of solidification processes has been in the focus of research since the 1970s (Erickson, 1975). One of the common simulation programs, ANSYS Fluent, uses the enthalpy-porosity approach (ANSYS Inc., Release 14.5, 2012), which was introduced by Poirier (1987). ANSYS Fluent uses the assumption that the liquid fraction is proportional to the temperature in the solidification range. For many standard steels, this assumption will be an appropriate approach. In case of some nickel based superalloys, like alloy 718, the supposition is far from the real material behavior. Therefore, user-defined functions implement the solidification to reproduce the real material behavior.

SOLIDIFICATION PHENOMENA
Important for the simulation of solidification processes are the damping of the fluid flow in the mushy region and the solidification enthalpy. The damping is adjustable with the material specific mushy zone constant (Voller et al., 1990) and also considers the liquid fraction. Figure 1 shows the liquid fraction of an alloy 718 with respect to the temperature in the solidification range, calculated by JMatPro. Obviously, the linear approximation made by ANSYS Fluent is not appropriate for this material. After a cooling of 25 % of the temperature range, the liquid fraction is not 75 % but only 40 %. Therefore, the damping of the fluid flow is underestimated by ANSYS Fluent.

Figure 1: Liquid fraction of alloy 718 (Giesselmann et al., 2015) in comparison to ANSYS Fluent

The deviation of the liquid fraction of alloy 718 results in a nonlinear behavior of the enthalpy in the solidification range, because the solidification enthalpy is dependent on the liquid fraction. Figure 2 shows the comparison of solidification enthalpies with respect to the temperature in the solidification range. The grey line shows the linear implementation of ANSYS Fluent. Obviously, the change in enthalpy of the mild steel (Koric and Thomas, 2008) is close to the approximation from ANSYS Fluent, whereas the red line, representing alloy 718 (Overfelt et al., 1994), shows a considerably different behavior.

Figure 2: Comparison of solidification enthalpies (Overfelt et al., 1994; Koric and Thomas, 2008)

BUILT-IN SOLIDIFICATION IN ANSYS FLUENT
The solidification module from ANSYS Inc. (Release 14.5, 2012) uses the enthalpy-porosity approach to implement the damping of the fluid flow in the mushy region.
Poirier (1987) shows that the interdendritic flow follows Darcy's law (Darcy, 1856):
$$ \vec{v} = -\frac{K}{\mu_D}\,\nabla p \quad (1) $$
Voller and Prakash (1987) implemented this insight of Poirier (1987) in the fluid flow modeling. Later, a mushy zone constant was introduced to replace the dynamic viscosity µ_D and the unknown permeability K (Voller et al., 1990). The liquid fraction f_liq represents the change in permeability, whereas the mushy zone constant A_mush implements the different material behavior (2). The small number ε is equal to 0.001 to avoid a division by zero (ANSYS Inc., Release 14.5, 2012).
$$ \frac{\mu_D}{K} = A_{\text{mush}}\,\frac{(1-f_{\text{liq}})^2}{f_{\text{liq}}^3 + \varepsilon} \quad (2) $$
The ratio between viscosity and permeability (see formula (2)) is then inserted in equations (3) and (4) to formulate the force F against the fluid flow v as well as the momentum sink S against the turbulence quantities Φ:
$$ \vec{F} = -A_{\text{mush}}\,\frac{(1-f_{\text{liq}})^2}{f_{\text{liq}}^3 + \varepsilon}\,\vec{v} \quad (3) \qquad\qquad S_\Phi = -A_{\text{mush}}\,\frac{(1-f_{\text{liq}})^2}{f_{\text{liq}}^3 + \varepsilon}\,\Phi \quad (4) $$
The necessary turbulence quantities depend on the turbulence model used. Equation (4) has the same form for all quantities, such as the turbulent dissipation rate ε, the turbulent kinetic energy k, the specific dissipation ω and so on (ANSYS Inc., Release 14.5, 2012). To show the implementation of the formulas above, the momentum equation of the solver (5) is given below. The damping force F of the fluid flow (equation (3)) is inserted in the last term:
$$ \frac{\partial}{\partial t}(\rho \vec{v}) + \nabla\cdot(\rho \vec{v}\vec{v}) = -\nabla p + \nabla\cdot\bar{\bar{\tau}} + \rho\vec{g} + \vec{F} \quad (5) $$
As mentioned in the previous chapter, the solidification enthalpy is distributed linearly over the temperature range of solidification and implemented as a source term S_m in the energy equation (6):
$$ \frac{\partial}{\partial t}(\rho E) + \nabla\cdot\big(\vec{v}(\rho E + p)\big) = \nabla\cdot(\lambda_{\text{eff}}\nabla T) + S_m \quad (6) $$

USER-DEFINED SOLIDIFICATION MODEL
To reconstruct the real material behavior of alloy 718, an in-house developed solidification model based on UDFs is used for several process models, like electro slag remelting (ESR) and vacuum arc remelting (VAR).

Approach
The aim of the modified solidification model is to implement the nonlinear behavior of the liquid fraction with respect to the temperature. The curve progression can be obtained, for example, from a Scheil-Gulliver approach as in Figure 1 or from other calculation programs for thermophysical data. The idea was to reconstruct the solidification model of ANSYS Fluent by user-defined functions. Therefore, the main equations ((3) and (4)) for the damping are also used. The solidification enthalpy is included in the heat capacity of the material.

Implementation
The implementation of the modified solidification model is based on a DEFINE_ADJUST function for the liquid fraction and several DEFINE_SOURCE functions for the damping. A modified heat capacity includes the change in enthalpy. The liquid fraction should be adjusted in fine detail to represent the real fluid flow. Therefore, liquid fraction and solidification enthalpy from the thermophysical database are divided into 1 K steps.

Damping of the fluid flow
A DEFINE_ADJUST UDF loops over all the cells in the fluid regions to get the temperature of the cells. A look-up function searches the corresponding liquid fraction for these temperatures out of the tabulated liquid fractions. The liquid fraction is saved in a user-defined memory (UDM) for post processing.
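As a rough, self-contained illustration of such a look-up (not code from the paper; the solidus temperature and table values below are invented placeholders, and a real table would hold the thermophysical data in 1 K steps over the whole solidification range), a piecewise-linear table interpolation could look like this:

```c
#include <stdio.h>

/* Hypothetical liquid-fraction table in 1 K steps starting at a
   placeholder solidus temperature; a real table would be exported
   from thermophysical data (e.g. a Scheil-Gulliver calculation).   */
#define T_SOL 1450.0                             /* K, placeholder */
#define N_PTS 6
static const double f_tab[N_PTS] = {0.00, 0.05, 0.15, 0.40, 0.80, 1.00};

/* Look up f_liq for a given temperature by linear interpolation
   between the tabulated 1 K steps, clamped outside the table.      */
static double lookup_f_liq(double T)
{
    double idx = T - T_SOL;                      /* table spacing is 1 K */
    int i;
    if (idx <= 0.0)
        return f_tab[0];
    i = (int)idx;
    if (i >= N_PTS - 1)
        return f_tab[N_PTS - 1];
    return f_tab[i] + (f_tab[i + 1] - f_tab[i]) * (idx - i);
}

int main(void)
{
    double T;
    for (T = 1449.0; T <= 1456.0; T += 0.5)
        printf("T = %6.1f K   f_liq = %.3f\n", T, lookup_f_liq(T));
    return 0;
}
```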
Analogous to the calculation procedure in ANSYS Fluent, the ratio between viscosity and permeability is calculated with equation (2) and saved in another UDM. This ratio is the damping term for the velocities and turbulence quantities (see equations (3) and (4)).

The damping force and momentum values are calculated in several DEFINE_SOURCE UDFs: one UDF for each velocity direction and for the turbulence quantities, typically the turbulent dissipation rate ε and the turbulent kinetic energy k. The source value is the negative product of the damping term with the velocity or turbulence value (see equations (7) to (11)). If a pull velocity v_p moves the solid region, it has to be subtracted from the fluid velocity, shown here for the x direction:
$$ S_{v_x} = -A_{\text{mush}}\,\frac{(1-f_{\text{liq}})^2}{f_{\text{liq}}^3+\varepsilon}\,(v_x - v_{p,x}) \quad (7) $$
with the analogous expressions for the y and z directions (8), (9), and for the turbulence quantities
$$ S_k = -A_{\text{mush}}\,\frac{(1-f_{\text{liq}})^2}{f_{\text{liq}}^3+\varepsilon}\,k \quad (10), \qquad S_\varepsilon = -A_{\text{mush}}\,\frac{(1-f_{\text{liq}})^2}{f_{\text{liq}}^3+\varepsilon}\,\varepsilon \quad (11). $$
The five source terms have to be included for the corresponding values in the ANSYS Fluent interface. The program implements the source terms in the momentum equation (5) as well as in the turbulence model.

Solidification enthalpy
To implement the nonlinear behavior of the solidification enthalpy (see Figure 2), the enthalpy is included in the heat capacity of the material (see Figure 3). Therefore, it is not necessary to modify the energy equation (6) of the solver.

Figure 3: Heat capacity of alloy 718 including the solidification enthalpy (Giesselmann, 2014)

Obviously, most of the solidification enthalpy is needed or set free near the liquidus temperature. This corresponds to the steep slope of the liquid fraction in this area (compare Figure 1). Another possibility to implement the enthalpy of solidification would be a DEFINE_SOURCE UDF. The advantage of the presented solution is the reversible character of the heat capacity: because some parts of the simulated region may melt again, the solution with a source term would be more elaborate, whereas the heat capacity directly offers the possibility of a change of sign in the temperature derivative.
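Putting the pieces above together, the following is only a sketch of how such a UDF could be organised, not the authors' code: the macro names (DEFINE_ADJUST, DEFINE_SOURCE, C_T, C_U, C_UDMI and the cell-loop macros) are used as documented in the ANSYS Fluent UDF manual, the material values and the two-temperature ramp standing in for the tabulated 1 K look-up are placeholders, and the modified heat capacity is left out. It compiles only against Fluent's udf.h and still has to be hooked up in the case (adjust hook, two UDM locations, and sources attached to the corresponding equations in the cell zone):

```c
#include "udf.h"

/* Placeholder constants; the real values are material data from the
   thermophysical database, not taken from the paper.                 */
#define A_MUSH   1.0e5             /* mushy zone constant, kg/(m^3 s) */
#define SMALL_E  0.001             /* small number avoiding division by zero */

/* In the full model this would be the tabulated 1 K look-up sketched
   above; here only a linear ramp between two placeholder temperatures. */
static real lookup_f_liq(real T)
{
    const real T_sol = 1450.0, T_liq = 1630.0;   /* K, placeholders */
    if (T <= T_sol) return 0.0;
    if (T >= T_liq) return 1.0;
    return (T - T_sol) / (T_liq - T_sol);
}

/* DEFINE_ADJUST: store f_liq (UDM 0) and the damping term
   A_mush (1 - f)^2 / (f^3 + e) (UDM 1) for every fluid cell.          */
DEFINE_ADJUST(udf_solidification_adjust, d)
{
    Thread *t;
    cell_t c;
    thread_loop_c(t, d)
    {
        begin_c_loop(c, t)
        {
            real f = lookup_f_liq(C_T(c, t));
            C_UDMI(c, t, 0) = f;
            C_UDMI(c, t, 1) = A_MUSH * (1.0 - f) * (1.0 - f)
                            / (f * f * f + SMALL_E);
        }
        end_c_loop(c, t)
    }
}

/* DEFINE_SOURCE for the x momentum: S_x = -D * (u - u_pull).
   Analogous sources would be attached for v, w, k and epsilon.        */
DEFINE_SOURCE(x_momentum_sink, c, t, dS, eqn)
{
    real u_pull = 0.0;             /* pull velocity of the solid, placeholder */
    real D = C_UDMI(c, t, 1);
    dS[eqn] = -D;                  /* derivative of the source w.r.t. u */
    return -D * (C_U(c, t) - u_pull);
}
```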
COMPARISON OF THE MODELS
To compare the built-in solidification of ANSYS Fluent with the UDF based solidification model, a test case was set up. Figure 4 and Figure 5 show the flow of hot metal through a cooled pipe. The left face is a velocity inlet of hot liquid metal. The top wall is at a constant temperature, which is lower than the solidus temperature. At the right side, the boundary is an outflow. The contour plot visualizes the liquid fraction from one (white) to zero (black). The black line symbolizes the position of 1 % solid fraction. The vectors and their lengths show the velocity. In Figure 4 the solidification model of ANSYS Fluent was used. Therefore, the liquid fraction increases uniformly over the whole solidification range.

Figure 4: Test case: Built-in Fluent solidification model

Figure 5 shows the same test case simulation as Figure 4 with the UDF based solidification model. Obviously, the shape of the solidified area is slightly different, but more interesting is that there is a sharp edge in the middle of the gray scale. Therefore, the fluid flow is damped abruptly at this position.

Figure 5: Test case: UDF solidification model

The comparison of the two test cases shows the similarity of the models as well as the decisive differences. Whereas the flow in the first case is damped smoothly, the damping with the UDF based model is more abrupt.

CONCLUSION
A modified solidification model for ANSYS Fluent was introduced. It offers the possibility to reproduce the real material behavior in terms of the liquid fraction with respect to temperature, which is important for the damping of the fluid flow in the mushy region as well as for the distribution of the solidification enthalpy over temperature. The solidification model of ANSYS Fluent was modified and calculated in a user-defined function to adjust the liquid fraction properly according to the cell temperature. The damping of the motion values is then implemented by source terms for the velocities and turbulence quantities. The solidification enthalpy is included in the heat capacity of the material; therefore, the enthalpy can be fitted in fine detail. A test case shows the similarities and differences of the two models. The modified solidification implements a more abrupt damping of the fluid flow. The modified solidification model is able to replicate the material behavior in more detail than the built-in solidification module of ANSYS Fluent.

REFERENCES
ANSYS Inc., (Release 14.5, 2012), "ANSYS® Academic Research Help System".
Darcy, H.P.G., (1856), "Détermination des lois d'écoulement de l'eau à travers le sable".
Erickson, W.C., (1975), "Use of a general-purpose heat-transfer code for casting simulation", United States.
Giesselmann, N., (2014), "Numerische Untersuchungen des Elektroschlacke-Umschmelzprozesses für Alloy 718", Dissertation, RWTH Aachen University, Fakultät für Georessourcen und Materialtechnik, Aachen, 140.
Giesselmann, N., et al., (2015), "Coupling of Multiple Numerical Models to Simulate Electroslag Remelting Process for Alloy 718", ISIJ International, 55, 1408-1415.
Koric, S. and Thomas, B.G., (2008), "Thermo-mechanical models of steel solidification based on two elastic visco-plastic constitutive laws", Journal of Materials Processing Technology, 197, 408-418.
Overfelt, R.A., et al., (1994), "Porosity in cast equiaxed alloy 718", International Symposium on Superalloys 718, 625, 706 and Various Derivatives, Pittsburgh, 189-200.
Poirier, D., (1987), "Permeability for flow of interdendritic liquid in columnar-dendritic alloys", Metallurgical and Materials Transactions B, 18, 245-
Voller, V.R., et al., (1990), "Modelling the mushy region in a binary alloy", Applied Mathematical Modelling, 14, 320-326.
Voller, V.R. and Prakash, C., (1987), "A fixed grid numerical modelling methodology for convection-diffusion mushy region phase-change problems", International Journal of Heat and Mass Transfer, 30, 1709-1719.
320
OhioLINK ETD: Beaton, Mary Elizabeth
===============
Files: Mary Elizabeth Beaton Dissertation.pdf (4.83 MB)

Coda Liquid Production and Perception in Puerto Rican Spanish

Author: Beaton, Mary Elizabeth
Year and Degree: 2015, Doctor of Philosophy, Ohio State University, Spanish and Portuguese.

Abstract
Dialects of Spanish in the Caribbean and southern Spain are described as "switching" liquids in syllable-final position, resulting in the neutralization of the two sounds. This dissertation considers liquid variation in San Juan Spanish (SJS), which is frequently cited as neutralizing /r/ to /l/ such that arma ('weapon') and alma ('soul') are both pronounced [al.ma]. In light of recent work suggesting that neutralization is often incomplete, i.e. small but significant differences exist between two sounds previously considered to be merged, this study examines the formant structure (F1, F2, F3, F4) and duration of rhotics and laterals in SJS to determine the neutralization status of /r/ and /l/. This dissertation also features a perception experiment which tests how well SJS listeners are able to hear acoustic differences between the liquids. Using twenty-four sociolinguistic interviews with SJS speakers, I extracted 2,212 vowel+/r/ and 728 vowel+/l/ sequences. The conditioning effects of word position, stress, vowel, preceding and following consonants, gender, and age are considered in two separate data analyses. The first analysis considers the conditioning of the manner of articulation of the liquid. Then, approximant liquids, which are the site of potential neutralization, were analyzed for formant structure and duration. In order to develop an understanding of the dynamic formant trajectories, seven equidistant points were sampled for all four formants. These measurements were submitted to both linear regression analyses and Smoothing Spline ANOVAs. To test liquid perception, an online survey with vowel+liquid audio clips with varying formant structure was presented to both SJS and northern Spain Castilian Spanish (CS) listeners. The results for approximant liquid production indicate that rhotics are far more variable in SJS than laterals and that their realization depends on linguistic and social factors. Therefore, I propose viewing this dialect as possessing a liquid continuum, rather than as switching /r/ for /l/. I find that rhotics have similar formant structure to the vowel that precedes them, thus becoming more /l/-like after vowels with high or fronted articulation. While rhotics assimilate to vowels, they dissimilate from surrounding consonants. I assert that liquid variation in SJS is motivated by low coarticulatory resistance of rhotics to vowels combined with a sensitivity to sonority sequencing principles, such that the liquid segments contrast maximally in sonority with surrounding consonants. Gender plays a role in rhotic manner of articulation: women produce more tap and fricative rhotics, men produce more deletions, and gender plays no role in approximant productions. Speaker age, however, is significant for the production of F3 in approximant liquids. Younger speakers have more tongue bunching for rhotics than laterals, whereas older speakers neutralize the sounds.
The tendency for younger speakers to neutralize less may serve to avoid negative social evaluation. In the perception experiment, SJS listeners, unlike CS listeners, were able to hear differences in /r/ and /l/ in the context of less neutralizing vowel environments. This finding suggests that SJS listeners are able to hear liquid differences in their own dialect, whereas listeners with more distinct sounds are unable to utilize these small articulatory differences.

Committee: Rebeka Campos-Astorkiza (Advisor), Scott Schwenter (Advisor), Terrell Morgan (Committee Member)
Pages: 308 p.
Subject Headings: Foreign Language; Language; Linguistics
Keywords: linguistics; sociophonetics; sociolinguistics; phonetics; Hispanic linguistics; liquids; rhotics; laterals; Puerto Rican Spanish; San Juan Spanish
Recommended Citation (APA, 7th edition): Beaton, M. E. (2015). Coda Liquid Production and Perception in Puerto Rican Spanish [Doctoral dissertation, Ohio State University]. OhioLINK Electronic Theses and Dissertations Center.
Document number: osu1437135547
Download Count: 3,221
Copyright Info: © 2015, all rights reserved. This open access ETD is published by The Ohio State University and OhioLINK.
321
Published Time: Mon, 11 Aug 2025 23:56:02 GMT Asymptotic expansion - Wikipedia
===============
From Wikipedia, the free encyclopedia

Series of functions in mathematics

In mathematics, an asymptotic expansion, asymptotic series or Poincaré expansion (after Henri Poincaré) is a formal series of functions which has the property that truncating the series after a finite number of terms provides an approximation to a given function as the argument of the function tends towards a particular, often infinite, point. Investigations by Dingle (1973) revealed that the divergent part of an asymptotic expansion is latently meaningful, i.e. contains information about the exact value of the expanded function. The theory of asymptotic series was created by Poincaré (and independently by Stieltjes) in 1886.

The most common type of asymptotic expansion is a power series in either positive or negative powers. Methods of generating such expansions include the Euler–Maclaurin summation formula and integral transforms such as the Laplace and Mellin transforms. Repeated integration by parts will often lead to an asymptotic expansion.

Since a convergent Taylor series fits the definition of asymptotic expansion as well, the phrase "asymptotic series" usually implies a non-convergent series. Despite non-convergence, the asymptotic expansion is useful when truncated to a finite number of terms. The approximation may provide benefits by being more mathematically tractable than the function being expanded, or by an increase in the speed of computation of the expanded function. Typically, the best approximation is given when the series is truncated at the smallest term. This way of optimally truncating an asymptotic expansion is known as superasymptotics. The error is then typically of the form ~ exp(−c/ε) where ε is the expansion parameter. The error is thus beyond all orders in the expansion parameter. It is possible to improve on the superasymptotic error, e.g. by employing resummation methods such as Borel resummation to the divergent tail. Such methods are often referred to as hyperasymptotic approximations.

See asymptotic analysis and big O notation for the notation used in this article.
Formal definition

First we define an asymptotic scale, and then give the formal definition of an asymptotic expansion.

If $\varphi_n$ is a sequence of continuous functions on some domain, and if $L$ is a limit point of the domain, then the sequence constitutes an asymptotic scale if for every $n$,
$$ \varphi_{n+1}(x) = o(\varphi_n(x)) \quad (x \to L). $$
($L$ may be taken to be infinity.) In other words, a sequence of functions is an asymptotic scale if each function in the sequence grows strictly slower (in the limit $x \to L$) than the preceding function.

If $f$ is a continuous function on the domain of the asymptotic scale, then $f$ has an asymptotic expansion of order $N$ with respect to the scale as a formal series
$$ \sum_{n=0}^{N} a_n \varphi_n(x) $$
if
$$ f(x) - \sum_{n=0}^{N-1} a_n \varphi_n(x) = O(\varphi_N(x)) \quad (x \to L) $$
or the weaker condition
$$ f(x) - \sum_{n=0}^{N-1} a_n \varphi_n(x) = o(\varphi_{N-1}(x)) \quad (x \to L) $$
is satisfied. Here, $o$ is the little o notation. If one or the other holds for all $N$, then we write
$$ f(x) \sim \sum_{n=0}^{\infty} a_n \varphi_n(x) \quad (x \to L). $$

In contrast to a convergent series for $f$, wherein the series converges for any fixed $x$ in the limit $N \to \infty$, one can think of the asymptotic series as converging for fixed $N$ in the limit $x \to L$ (with $L$ possibly infinite).

Examples

Figure: Plots of the absolute value of the fractional error in the asymptotic expansion of the Gamma function. The horizontal axis is the number of terms in the asymptotic expansion. Blue points are for x = 2 and red points are for x = 3. It can be seen that the least error is encountered when there are 14 terms for x = 2, and 20 terms for x = 3, beyond which the error diverges.

- Gamma function (Stirling's approximation):
$$ \frac{e^x}{x^x \sqrt{2\pi x}}\,\Gamma(x+1) \sim 1 + \frac{1}{12x} + \frac{1}{288x^2} - \frac{139}{51840x^3} - \cdots \quad (x \to \infty) $$
- Exponential integral:
$$ x e^x E_1(x) \sim \sum_{n=0}^{\infty} \frac{(-1)^n n!}{x^n} \quad (x \to \infty) $$
- Logarithmic integral:
$$ \operatorname{li}(x) \sim \frac{x}{\ln x} \sum_{k=0}^{\infty} \frac{k!}{(\ln x)^k} $$
- Riemann zeta function:
$$ \zeta(s) \sim \sum_{n=1}^{N} n^{-s} + \frac{N^{1-s}}{s-1} - \frac{N^{-s}}{2} + N^{-s} \sum_{m=1}^{\infty} \frac{B_{2m}\, s^{\overline{2m-1}}}{(2m)!\, N^{2m-1}}, $$
  where $B_{2m}$ are Bernoulli numbers and $s^{\overline{2m-1}}$ is a rising factorial. This expansion is valid for all complex $s$ and is often used to compute the zeta function by using a large enough value of $N$, for instance $N > |s|$.
Error function:
$$\sqrt{\pi}\, x e^{x^2} \operatorname{erfc}(x) \sim 1 + \sum_{n=1}^{\infty} (-1)^n \frac{(2n-1)!!}{(2x^2)^n} \quad (x \to \infty),$$
where $(2n-1)!!$ is the double factorial.

Worked example

Asymptotic expansions often occur when an ordinary series is used in a formal expression that forces the taking of values outside of its domain of convergence. Thus, for example, one may start with the ordinary series
$$\frac{1}{1-w} = \sum_{n=0}^{\infty} w^n.$$
The expression on the left is valid on the entire complex plane $w \neq 1$, while the right hand side converges only for $|w| < 1$. Multiplying by $e^{-w/t}$ and integrating both sides yields
$$\int_0^{\infty} \frac{e^{-w/t}}{1-w}\, dw = \sum_{n=0}^{\infty} t^{n+1} \int_0^{\infty} e^{-u} u^n\, du,$$
after the substitution $u = w/t$ on the right hand side. The integral on the left hand side, understood as a Cauchy principal value, can be expressed in terms of the exponential integral. The integral on the right hand side may be recognized as the gamma function. Evaluating both, one obtains the asymptotic expansion
$$e^{-1/t} \operatorname{Ei}\!\left(\frac{1}{t}\right) = \sum_{n=0}^{\infty} n!\, t^{n+1}.$$
Here, the right hand side is clearly not convergent for any non-zero value of $t$. However, by truncating the series on the right to a finite number of terms, one may obtain a fairly good approximation to the value of $\operatorname{Ei}(1/t)$ for sufficiently small $t$. Substituting $x = -1/t$ and noting that $\operatorname{Ei}(x) = -E_1(-x)$ results in the asymptotic expansion given earlier in this article.

Integration by parts

Using integration by parts, we can obtain an explicit formula
$$\operatorname{Ei}(z) = \frac{e^z}{z} \left( \sum_{k=0}^{n} \frac{k!}{z^k} + e_n(z) \right), \qquad e_n(z) \equiv (n+1)!\, z e^{-z} \int_{-\infty}^{z} \frac{e^t}{t^{n+2}}\, dt.$$
For any fixed $z$, the absolute value of the error term $|e_n(z)|$ decreases, then increases. The minimum occurs at $n \sim |z|$, at which point
$$|e_n(z)| \leq \sqrt{\frac{2\pi}{|z|}}\, e^{-|z|}.$$
This bound is said to be "asymptotics beyond all orders".

Properties

Uniqueness for a given asymptotic scale

For a given asymptotic scale $\{\varphi_n(x)\}$, the asymptotic expansion of a function $f(x)$ is unique. That is, the coefficients $\{a_n\}$ are uniquely determined in the following way:
$$a_0 = \lim_{x \to L} \frac{f(x)}{\varphi_0(x)}, \qquad a_1 = \lim_{x \to L} \frac{f(x) - a_0 \varphi_0(x)}{\varphi_1(x)}, \qquad \ldots, \qquad a_N = \lim_{x \to L} \frac{f(x) - \sum_{n=0}^{N-1} a_n \varphi_n(x)}{\varphi_N(x)},$$
where $L$ is the limit point of this asymptotic expansion (it may be $\pm\infty$).
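As a quick illustration of these limit formulas, take $f(x) = x e^x E_1(x)$ with the scale $\varphi_n(x) = x^{-n}$ and $L = \infty$; the first two coefficients of the exponential-integral expansion from the Examples section come out directly:
$$a_0 = \lim_{x\to\infty} \frac{x e^x E_1(x)}{1} = 1, \qquad a_1 = \lim_{x\to\infty} \frac{x e^x E_1(x) - 1}{x^{-1}} = \lim_{x\to\infty} x\left(x e^x E_1(x) - 1\right) = -1,$$
in agreement with $x e^x E_1(x) \sim 1 - \dfrac{1}{x} + \dfrac{2!}{x^2} - \cdots$.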
Non-uniqueness for a given function

A given function $f(x)$ may have many asymptotic expansions (each with a different asymptotic scale).

Subdominance

An asymptotic expansion may be an asymptotic expansion to more than one function.

See also

Related fields: asymptotic analysis, singular perturbation.

Asymptotic methods: Watson's lemma, Mellin transform, Laplace's method, stationary phase approximation, method of dominant balance, method of steepest descent.

Notes

1. Jahnke, Hans Niels (2003). A History of Analysis. History of Mathematics. Providence (R.I.): American Mathematical Society. p. 190. ISBN 978-0-8218-2623-2.
2. Boyd, John P. (1999). "The Devil's Invention: Asymptotic, Superasymptotic and Hyperasymptotic Series". Acta Applicandae Mathematicae, 56 (1): 1–98. doi:10.1023/A:1006145903624, hdl:2027.42/41670.
3. O'Malley, Robert E. (2014). "Asymptotic Approximations". In Historical Developments in Singular Perturbations. Cham: Springer International Publishing, pp. 27–51. doi:10.1007/978-3-319-11924-3_2, ISBN 978-3-319-11924-3.
4. S. J. A. Malham, "An introduction to asymptotic analysis", Heriot-Watt University.

References

Ablowitz, M. J., & Fokas, A. S. (2003). Complex Variables: Introduction and Applications. Cambridge University Press.
Bender, C. M., & Orszag, S. A. (2013). Advanced Mathematical Methods for Scientists and Engineers I: Asymptotic Methods and Perturbation Theory. Springer Science & Business Media.
Bleistein, N., & Handelsman, R. (1975). Asymptotic Expansions of Integrals. Dover Publications.
Carrier, G. F., Krook, M., & Pearson, C. E. (2005). Functions of a Complex Variable: Theory and Technique. Society for Industrial and Applied Mathematics.
Copson, E. T. (1965). Asymptotic Expansions. Cambridge University Press.
Dingle, R. B. (1973). Asymptotic Expansions: Their Derivation and Interpretation. Academic Press.
Erdélyi, A. (1955). Asymptotic Expansions. Dover Publications.
Fruchard, A., & Schäfke, R. (2013). Composite Asymptotic Expansions. Springer.
Hardy, G. H. (1949). Divergent Series. Oxford University Press.
Olver, F. (1997). Asymptotics and Special Functions. AK Peters/CRC Press.
Paris, R. B., & Kaminski, D. (2001). Asymptotics and Mellin–Barnes Integrals. Cambridge University Press.
Remy, Pascal (2024). Asymptotic Expansions and Summability: Application to Partial Differential Equations. Springer, Lecture Notes in Mathematics 2351.
Whittaker, E. T., & Watson, G. N. (1963). A Course of Modern Analysis, fourth edition. Cambridge University Press.

External links

"Asymptotic expansion", Encyclopedia of Mathematics, EMS Press, 2001.
Wolfram MathWorld: Asymptotic Series.
APPROXIMATION THEORY: A volume dedicated to Borislav Bojanov
(D. K. Dimitrov, G. Nikolov, and R. Uluchev, Eds.)

Twelve Proofs of the Markov Inequality

Aleksei Shadrin

This is the story of the classical Markov inequality for the k-th derivative of an algebraic polynomial, and of the remarkably many attempts to provide it with alternative proofs that occurred all through the last century. In our survey we inspect each of the existing proofs and describe, sometimes briefly, sometimes not very briefly, the methods and ideas behind them. We discuss how these ideas were used (and can be used) in solving other problems of Markov type, such as inequalities with majorants, the Landau–Kolmogorov problem, the error of Lagrange interpolation, etc. We also provide a few less well-known historical details, and, finally, for teachers and writers in approximation theory, we show that the Markov inequality is not as scary as it is made out to be and offer two candidates for the "book-proof" role at the undergraduate level.

1 Introduction

1.1 The Markov inequality

This is the story of the classical Markov inequality for the k-th derivative of an algebraic polynomial and of the attempts to find a simpler and better proof that occurred all through the last century. Here is what it is all about:
$$\|p^{(k)}\| \le \|T_n^{(k)}\|\, \|p\|, \qquad \forall p \in P_n. \qquad (1.1)$$
Here (and elsewhere), $P_n$ is the set of all algebraic polynomials of degree $\le n$, $\|f\| := \max_{x\in[-1,1]} |f(x)|$, and $T_n(x) := \cos(n \arccos x)$ is the Chebyshev polynomial of degree $n$. Numerically, the constant is given by the formula
$$\|T_n^{(k)}\| = T_n^{(k)}(1) = \frac{n^2\,[n^2-1^2]\cdots[n^2-(k-1)^2]}{1\cdot 3\cdots(2k-1)},$$
so that, for example,
$$\|p\|\le 1 \;\Rightarrow\; \|p'\|\le n^2, \qquad \|p''\|\le \frac{n^2(n^2-1)}{3}, \qquad \|p^{(n)}\|\le 2^{n-1} n!\,.$$
The inequality is sharp, with equality only if $p = \gamma T_n$ where $|\gamma| = 1$. That's it, simple and elegant.

Proved originally by V. Markov in 1892 in a rather sophisticated way, this inequality plays an important role in approximation theory, and there have been remarkably many attempts to provide it with an alternative proof. I counted twelve proofs in total, which divide into four groups. Here they are, to satisfy any taste: long, short, elementary, complex, erroneous, incomplete.

1) the original variational proof of V. Markov (1892), which ran to 110 pages,
2) its condensed form given by Gusev (1961),
3) and its "second variation" by Dubovitsky–Milyutin (1965),
4) the "small-o" arguments of Bernstein (1938),
5) its variation by Tikhomirov (1975),
6) and another variation by Bojanov (2001),
7) a pointwise majorant of Schaeffer–Duffin (1938),
8) a refinement of Duffin–Schaeffer for the discrete restrictions (1941),
9) the trigonometric proof of Mohr (1963),
10) an erroneous proof for Chebyshev systems by Duffin–Karlovitz (1985),
11) a majorant of my own for the discrete restrictions (1992),
12) an incomplete proof of mine for the oscillating polynomials (1996) [which was an attempt to revive the proof of Duffin–Karlovitz].

In our survey we inspect each of the existing proofs and describe, sometimes briefly, sometimes not very briefly, the methods and ideas behind them. We have three goals.

1) The first one is pedagogical. It is a widely held opinion that, besides the case k = 1, there is no "book-proof" of the Markov inequality. Almost every monograph in approximation theory cites this result, but only two of them, Rivlin and Schönhage, provide a proof, namely that of Duffin–Schaeffer.
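As an aside, the constant in (1.1) is easy to sanity-check numerically. The sketch below is only an illustration, assuming NumPy is available; it uses the numpy.polynomial.chebyshev module to compare the sup-norm of $T_n^{(k)}$ on a fine grid of $[-1,1]$ with the closed formula $T_n^{(k)}(1)$ just quoted.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def markov_constant(n: int, k: int) -> float:
    """T_n^(k)(1) = n^2 (n^2 - 1^2) ... (n^2 - (k-1)^2) / (1 * 3 * ... * (2k-1))."""
    num, den = 1.0, 1.0
    for j in range(k):
        num *= n ** 2 - j ** 2
        den *= 2 * j + 1
    return num / den

xs = np.linspace(-1.0, 1.0, 20001)        # fine grid on [-1, 1]
for n, k in [(4, 1), (4, 2), (7, 3)]:
    Tnk = Chebyshev.basis(n).deriv(k)     # k-th derivative of the Chebyshev polynomial T_n
    grid_max = np.abs(Tnk(xs)).max()      # numerical sup-norm on [-1, 1]
    print(n, k, grid_max, markov_constant(n, k))
    # Both numbers coincide: the sup-norm of T_n^(k) is attained at x = 1.
```

For (n, k) = (4, 2), for instance, both numbers come out as 80 = 4^2(4^2 - 1)/3, matching the bound on $\|p''\|$ above.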
We offer two more candidates for the book-proof role (4 pages each). Also, we show that the original proof of V. Markov is not as scary as it is made out to be. 2) The second goal is methodological. There are many problems of the Markov type where we need to estimate the max-norm of the k-th derivative of a function f from a certain functional class F; they are, in short, the prob-lems of numerical differentiation. Examples are polynomial inequalities with majorant, Landau–Kolmogorov inequalities, error bounds of certain interpola-tion processes, etc. For all these mostly open problems, the classical Markov inequality is a model where a new method of the proof can be tested, or where an existing method can be taken from. 3) The final goal is historical. It was the homepage on the History of Approximation Theory (HAT), opened recently by Pinkus and de Boor , that formed my decision to write this survey, so that I am also eager to uncover Aleksei Shadrin 3 who proved the Bernstein inequality, why Chebyshev was the first to study Markov’s inequality, and how it could happen that Voronovskaya did not read Markov’s memoirs. 1.2 Prehistory Those who try to respect historical details (e.g., Duffin–Schaeffer) call Markov’s inequality the inequality of the brothers Markoff, because these details are as follows. 1889 A. Markov, k = 1, ∥p′∥≤n2 ∥p∥, 1892 V. Markov, k ≥1, ∥p(k)∥≤∥T (k) n ∥∥p∥. The first Markov, Andrei (1856-1922), was the famous Russian mathematician (Markov chains), while the second, Vladimir (1871-1897), was his kid brother who wrote only two papers and died from tuberculosis at age 26. Both results appeared in Russian in (as Boas put it) not very accessible papers, so that (to cite Boas once again) they must be ones of the most cited papers and ones of least read. A. Markov’s result for k = 1 was published in the “Notices of Imperial Academy of Sciences” under the title “On a question by D. I. Mendeleev” . In his nice survey, Boas describes the chemical problem that Mendeleev was interested in and how he arrived at the question about the values of the 1-st derivative of an algebraic polynomial. V. Markov’s opus “On functions deviating least from zero in a given in-terval” that contained (amongst others) the result for all k appeared as a small book, 110 pages of approximately A5-format, with the touching subhead-ing “A composition of V. A. Markov, the student of St. Petersburg University”, and with the stern notice “Authorized to print by the decision of the Physico-Mathematical Faculty of the Imperial St.-Petersburg University, 25 Oct 1891. Dean A. Sovetov”. Probably, it was S. Bernstein who discovered and popularized both Markov’s papers in 1912 when he started his studies in approximation theory. Actually, Bernstein reproved the case k = 1 by himself, but the result for general k was beyond his ability (for 26 years). So, quite certain about importance and difficulty of V. Markov’s achievement, he organized its translation into German which was published in “Mathematische Annalen” in 1916. Nowadays the text in German helps, perhaps, not much more than the Russian one, so that only a few lucky ones could appreciate the flavour of V. Markov’s work. However, for those not very lucky, there is an exposition in English by Gusev (with the flavour of Voronovskaya notations). Even though it puts the first half of V. Markov’s proof in a slightly different form, it reproduces its final part almost identically. As to the A. 
Markov’s paper for k=1, it was reprinted (in modern Russian orthography) in his Selected Works (1948), but its English translation had to wait another 50 years for the enthusiasm of de Boor and Holtz (2002). 4 Twelve Proofs of the Markov Inequality We close this section with the remark that, actually, the earliest reference to Markov’s inequality must be 1854 P. Chebyshev, k = n, ∥p(n)∥≤2n−1 n! ∥p∥, because his result on the minimum of the max-norm of the monic polynomial, ∥p∥:= ∥xn + cn−1xn−1 + · · · + c0∥≥ 1 2n−1 , is nothing but the inequality ∥p∥≥ 1 2n−1 1 n! ∥p(n)∥, and that is exactly the Markov inequality for k = n. 1.3 Pointwise problem for polynomials and other functional classes We will study the Markov inequality as the problem of finding the value Mk := sup ∥p∥≤1 ∥p(k)∥. There are many problems of this (Markov) type where we need to estimate the max-norm of the k-th derivative of a function f from a certain functional class F, i.e. to find Mk,F := sup f∈F ∥f (k)∥, and in this section we will list several of them which were (and still are) of some interest to the approximation theory community and to which our studies will be somehow related. But before we start, let us make some general remarks. There is no way of getting a uniform bound for ∥f (k)∥other than bounding |f (k)(z)| pointwise, for each particular z ∈[−1, 1]. Therefore, we have to split the original problem into two subsequent ones. Problem 1.1. For k integer, find Mk,F(z) := sup f∈F |f (k)(z)| , z ∈[−1, 1] , Mk,F := sup f∈F ∥f (k)∥ = sup z∈[−1,1] Mk,F(z) . (The pointwise estimate is also useful in applications and is therefore of independent interest.) The solution of both problems depends on what is being meant by a solution. Ideally, a solution is an effective value or a reasonable upper bound for both suprema. Aleksei Shadrin 5 Another type of solution is a characterization of the function fz that achieves the supremum in the pointwise problem for each particular z, i.e., a description of its particular properties that distinguish it from the other functions of the given class. In most cases, such a description is not constructive, and cannot help much in finding the actual quantitative value (or bound) for Mk,F(z). But sometimes it leads to conclusions about the qualitative behaviour of the function Mk,F(z), e.g., whether its maximum is attained at the endpoints ±1, thus helping to solve the global problem. Anyway, knowing a smaller set {fz} where to choose from is always an advantage. For the pointwise problem, there is always a one-parameter family of func-tions which contains extremal functions fz for any z ∈[−1, 1], this is the family {fz} itself. One needs however something more constructive, and it is not too much a surprise that, for the Markov-type problems, this something describes certain equioscillation properties of fz. It is not so surprising either that the mostly oscillating function f ∗ z is thought to be extremal for the global problem. Below we formulate the Markov-type problems appearing in this survey and give a short description of their current status. More details are given within the text. Problem 1.2 (Markov problem). For k integer, and p ∈Pn, find Mk(z) := sup ∥p∥≤1 |p(k)(z)| , z ∈[−1, 1] , Mk := sup ∥p∥≤1 ∥p(k)∥ = sup z∈[−1,1] Mk(z) . V. Markov (1892) proved that, for each z, the extremal polynomial is given by fz(x) = Zn(x, θz), where Zn(x, θ) is a one-parameter family of Zolotarev polynomials having at least n equioscillations on [−1, 1]. 
He made a very detailed investigation of the character of the value Mk(z) when z runs through certain subintervals, and proved, using some very fine methods, that the Chebyshev polynomial Tn achieves the global maximum Mk. Problem 1.3 (Markov problem with majorant or Turan problem). Given a majorant µ ≥0, denote by Pn(µ) the set of polynomials p of degree ≤n such that |p(x)| ≤µ(x) , x ∈[−1, 1] . For n, k integers, and p ∈Pn(µ), we want to find the values Mk,µ(z) := sup p∈Pn(µ) |p(k)(z)| , z ∈[−1, 1] , Mk,µ := sup p∈Pn(µ) ∥p(k)∥ = sup z∈[−1,1] Mk,µ(z) . 6 Twelve Proofs of the Markov Inequality As in the classical case µ ≡1, the extremal polynomial is given by fz(x) = Zn,µ(x, θz) where Zn,µ(x, θ) is a one-parameter family of weighted Zolotarev polynomials having at least n equioscillations between ±µ (this is a relatively simple con-clusion). One can expect that the µ-weighted Chebyshev polynomial should attain the global maximum Mk,µ, but that was proved only for a few classes of majorants. Problem 1.4 (Markov problem for perfect splines). A piecewise po-lynomial function s of degree n with r knots (breakpoints) is called a perfect spline if |s(n)| ≡const. Denote the set of perfect splines with ≤r knots by Pn,r. For n, k, r integers, we want to find Mk,r(z) := sup s∈Pn,r |s(k)(z)| , z ∈[−1, 1] , Mk,r := sup s∈Pn,r ∥s(k)∥ = sup z∈[−1,1] Mk,r(z) . Karlin was the first (and the last) to study this problem, in 1976, and he proved that an extremal perfect spline is given by fz(x) = Zn,r(x, θz) , where Zn,r(x, θ) is a one-parameter family of Zolotarev perfect splines in Pn,r having at least n + r equioscillations on [−1, 1] (thus having r knots or being Tn,r−1). Compared with polynomial cases this fact is rather nontrivial. Glob-ally, the Chebyshev perfect spline Tn,r with r knots and n+1+r equioscillations should be a solution. Problem 1.5 (Landau-Kolmogorov problem on a finite interval). Set W n+1 ∞ (σ) := {f : f (n) abs. cont., ∥f∥≤1, ∥f (n+1)∥≤σ} . For n, k integers, and σ > 0, find Mk,σ(z) := sup f∈W n+1 ∞ (σ) |f (k)(z)| , z ∈[−1, 1] , Mk,σ := sup f∈W n+1 ∞ (σ) ∥f (k)(z)∥ := sup z∈[−1,1] Mk,σ(z) . For σ = 0 we get the classical Markov problem. In 1978, Pinkus showed that an extremal function is given by fz(x) = Pn+1,σ(x, θz) , Aleksei Shadrin 7 where Pn+1,σ(x, θ) is a one-parameter family of the Pinkus perfect splines. (Of course, A. Pinkus did not bestow his own name to the perfect splines he introduced. He called them “perfect splines satisfying ∥P∥= 1, with exactly r + 1 knots, n + 1 + r points of equioscillation, and opposite orientation”, and even though he denoted their class by P(σ), one can argue that P stood for “perfect”. I take the credit for putting the more memorable “Pinkus” splines into use in .) As with Karlin’s proof, the arguments are rather elaborate. In the global problem, the solution must be given by an appropriate Zolotarev spline Zn+1,r (this is known as Karlin’s conjecture), but that was proved only in a few particular cases. Problem 1.6 (Error bounds for Lagrange interpolation). For a con-tinuous function f, and a knot-sequence δ = (ti)n i=0 ⊂[−1, 1], let ℓδ be the Lagrange polynomial of degree n that interpolates f on δ. For n, k integers, and any δ, find Mk,δ(z) := sup ∥f (n+1)∥∞≤1 |f (k)(z) −ℓ(k) δ (z)| , z ∈[−1, 1] , Mk,δ := sup ∥f (n+1)∥∞≤1 ∥f (k) −ℓ(k) δ ∥ = sup z∈[−1,1] Mk,δ(z) . This problem attracted a lot of attention, and a large number of various cases for small values of n and k were considered showing that ωδ(x) := 1 (n+1)! 
Qn i=0(x −ti) achieves the global maximum. For general n, Kallioniemi showed in 1976 that fz(x) = Sn+1(x, θz) , where Sn+1(x, θ) is a one-parameter family of perfect splines with just one knot θ (this is almost immediate), and established the behaviour of Mk,δ(z) when z runs through certain subintervals, which were surprisingly identical to those in classical Markov’s problem. In 1995, a complete solution was found , i.e., it was proved that Mk,δ = 1 (n+1)!∥ω(k) δ ∥for all n and k. This is the only complete result among all Markov-type problems. Problem 1.7 (Error bounds for general interpolation). We may ge-neralize the previous problem in two different ways. 1) We may consider instead of the Lagrange interpolation any other interpo-lation procedure, e.g., spline interpolation of degree n at the points δ = (ti)N i=1 (with another given sequence of spline breakpoints). 2) Alternatively, we may notice that Mk,δ(z) = sup ∥f n+1∥∞≤1 sup f|δ=0 |f (k)(z)| , and consider the problem of estimating the k-th derivative of a function f that satisfies ∥f (n+1)∥≤1 and vanishes on δ = (ti)N i=1 (which is related to the problem of optimal interpolation). 8 Twelve Proofs of the Markov Inequality Both problems are almost untouched. We can mention only the paper by Korneichuk who considered approximation of the 1-st derivative by interpolating periodic splines on the uniform knot-sequence. 1.4 Zolotarev polynomials General properties. Here we describe some properties of Zolotarev poly-nomials which are solutions to the pointwise Markov problem and which bear a certain similarity with one-parameter families from the other Markov-type problems. Definition 1.8. A polynomial Zn ∈Pn is called Zolotarev polynomial if it has at least n equioscillations on [−1, 1], i.e. if there exist n points −1 ≤τ1 < τ2 < · · · < τn−1 < τn ≤1 such that (−1)n−iZn(τi) = ∥Zn∥= 1. There are many Zolotarev polynomials, for example the Chebyshev poly-nomials Tn and Tn−1 of degree n and n−1, with n+1 and n equioscillation points, respectively. One needs one parameter more to get uniqueness. A convenient parametrization (due to Voronovskaya) is through the value of the leading coefficient: 1 n! Z(n) ≡θ ⇔ Zn(x) := Zn(x, θ) := θxn + n−1 X i=0 ai(θ)xi . By Chebyshev’s result, ∥p(n)∥≤∥T (n) n ∥∥p∥, so the range of the parameter is −2n−1 ≤θ ≤2n−1. As θ traverses the interval [−2n−1, 2n−1], Zolotarev polynomials go through the following transformations: −Tn(x) →−Tn(ax+b) →Zn(x, θ) →Tn−1(x) →Zn(x, θ) →Tn(cx+d) →Tn(x) . The next figure illustrates it for n = 4. Aleksei Shadrin 9 –1 –0.5 0.5 1 1.5 –2 –1 1 2 –1 –0.5 0.5 1 1.5 –2 –1 1 2 –1 –0.5 0.5 1 1.5 –2 –1 1 2 –2 –1 1 2 –2 –1 1 2 –2 –1 1 2 y –2 –1 1 2 x –2 –1 1 2 –2 –1 1 2 –1.5 –1 –0.5 0.5 1 –2 –1 1 2 –1.5 –1 –0.5 0.5 1 –2 –1 1 2 –1.5 –1 –0.5 0.5 1 –2 –1 1 2 There are many other parametrizations in use. The classical one is based on the definition of Zolotarev polynomial as the polynomial that deviates least from zero among all polynomials of degree n with two leading coefficients fixed: Zn(x, σ) := xn + σxn−1 + pn−2(x) := arg min q∈Pn−2 ∥xn + σxn−1 + q(x)∥. V. Markov used the parametrization with respect to z ∈[−1, 1], the point where Z(k) n (· , z) attains the value Mk(z) in the pointwise Markov problem. Zolotarev polynomials subdivide into 3 groups depending on the stucture of the set A := (τi) of their alternation points. A) A contains n + 1 points: then Zn is the Chebyshev polynomial Tn. 
B) A contains n points but only one of the endpoints: then Zn is a stretched Chebyshev polynomial Tn(ax + b), |a| < 1. C) A contains n points including both endpoints: then Zn is called a proper Zolotarev polynomial and it is either of degree n, or the Cheby-shev polynomial Tn−1 of degree n −1. 10 Twelve Proofs of the Markov Inequality For a proper Zolotarev polynomial of degree n there are three points β, γ, δ to either side of [−1, 1] such that Z′ n(β) = 0, Zn(γ) = −Zn(δ) = ±1 . As functions of θ ∈[−2n−1, 2n−1], the interior alternation points (τi)n−1 i=2 as well as β, γ, δ are monotonely increasing (the latter three go through the infinity as θ passes the zero), so that any of them may be chosen as a parameter, too. Theorem 1.9. For each z ∈[−1, 1], the value Mk(z) := sup∥p∥≤1 |p(k)(z)| is attained by a Zolotarev polynomial Zn. If Zn ̸= Tn, then Mk(z) = |Z(k) n (z)| ⇔ R(k)(z) = 0, where R(x) = Qn i=1(x −τi). This result is typical for all Markov-type problems for it says that if Z(k) n (z, θz) = sup θ Z(k) n (z, θ) , then either R(k)(z) := ∂θZ(k) n (z, θ) = 0 or θz is the endpoint of the θ-interval. The structure. The structure of the (proper) Zolotarev polynomials (let alone other Zolotarev-type functions) is rather unknown. Basically, {Zn} sat-isfy the differential equation 1 −y(x)2 = (1 −x2)(x −γ)(x −δ) n2(x −β)2 y′(x)2 , and Zolotarev himself provided implicit formulas for his polynomials in terms of elliptic functions, but explicit expressions for Zn are known only for n = 2, 3, 4. The case n = 2 is trivial, and it is quite easy to construct the family {Z3} (it has been done already by A. Markov in 1889, and repeated thereafter in many different forms). But already for n = 4 it seems that nobody really believed that an explicit form can be found. As a matter of fact it was, by V. Markov in 1892. Here it is: Z4(x, t) = 1 c0(t) 4 X i=0 bi(t) xi , |t| ≤ √ 2 −1 , where b0(t) = 2t (−3t6 −t4 −t2 + 1) , b1(t) = −t10 + t8 + 2t6 + 10t4 + 7t2 −3 , b2(t) = 2t (3t6 + t4 + t2 −5) , b3(t) = 4 (−3t4 −2t2 + 1) , b4(t) = 8t , c0(t) = P bi(t) = (1 −t2)(1 −t4)2 , (1.2) Aleksei Shadrin 11 with the alternation points τ1 = −1 < τ2 = t3 + t −(1 −t2) 2 < τ3 = t3 + t + (1 −t2) 2 < τ4 = 1 . A. Markov (1889) showed how construction of a Zolotarev polynomial in the form Zn(x, β) = p0(x −β)n + p′ 1(x −β)n−1 + · · · + p′ n−2(x −β)2 + p′ n can be reduced to two algebraic equations between the unknonws β, γ and δ, so that, theoretically, choosing β as a parameter it is possible to express γ and δ, and then all coeffients p0 and p′ i in terms of β. He also showed that Zn can be found as a solution to a system of linear differential equations of the 1-st order, and another (non-linear) system was suggested by Voronovskaya [56, p. 97]. But as far as we know, nobody (in-cluding A. Markov and Voronovskaya themselves) has ever tried to apply these methods for constructing Zn for any particular n. Recently, the interest in an explicit algebraic solution of the Zolotarev prob-lem was revived in the papers by Peherstorfer , Sodin-Yuditsky and Malyshev , but it is only Malyshev who demonstrates how his theory can be applied to some explicit constructions for particular n. From our side, we notice that there is a simple numerical procedure of constructing a polynomial pn, say, on [−1, 1], with any given values (yi) of its local maxima, i.e., such that with some −1 = x1 < x2 < · · · < xn+1 = 1 pn(xi) = (−1)iyi, i = 1..n + 1, p′ n(xi) = 0, i = 2..n . If we choose y = (1, 1, . . . 
, 1, yn, 1), then the resulting polynomial will be a proper Zolotarev polynomial parametrized by the value Zn(β) = yn and squeezed to the interval [−1, 1]. 2 Variational approach 2.1 General considerations Maximizing Mk over the one-parameter family. The following ap-proach is perhaps the only one that can be applied to any problem of the Markov type in the sense that, initially, it does not rely on any particular properties of polynomials or splines or whatsoever. (It is another question whether it will work or not, sometimes it does, sometimes is does not.) Let {Z(x, θ)} be the one-parameter family of functions that are extremal for the pointwise Markov-type problem, i.e., Mk,F(z) := sup f∈F |f (k)(z)| = |Z(k)(z, θz)| = sup θ |Z(k)(z, θ)| . 12 Twelve Proofs of the Markov Inequality Here we may assume (say, taking θz = z) that, under our parametrization, θz ∈[θz=−1, θz=1] =: [−¯ θ, ¯ θ] . Set K(x, θ) := Z(k)(x, θ), (x, θ) ∈[−1, 1] × [−¯ θ, ¯ θ ] =: Ω. The following statement is immediate. Proposition 2.1. We have Mk,F = sup z∈[−1,1] Mk,F(z) = sup x,θ K(x, θ) . Now, take T (·) := Z(·, ¯ θ) i.e., T is the function from F that attains the value Mk(z) at z = 1 (an analogue of the Chebyshev polynomial). This is our main candidate for the global solution, so we want to find whether Mk,F = sup x,θ ∈Ω K(x, θ) ? = ∥T (k)∥. (2.1) (Strictly speaking, we should have defined two functions T±(x) := Z(x, ±¯ θ), but they usually differ only in sign, or satisfy T−(x) = ±T+(−x) as in the Landau-Kolmogorov problem.) Notice that, directly from definition, 1) sup θ K(±1, θ) = |T (k)(±1)| , 2) sup x K(x, ±¯ θ) = ∥T (k)∥, i.e., on the boundary of the (x, θ)-domain Ωwe have sup x,θ ∈∂Ω K(x, θ) = ∥T (k)∥. Therefore, in order to verify (2.1), we have to deal with the following problem. Problem 2.2. Find whether sup x,θ ∈Ω K(x, θ) = sup x,θ ∈∂Ω K(x, θ) . (2.2) Checking local extrema. A straightforward approach for attacking this problem is to analyze the interior extremal points of K = K(x, θ): ∂xK(x∗, θ∗) = ∂θK(x∗, θ∗) = 0. If at every such point the strict inequality d := (∂xxK)(∂θθK) −(∂xθK)2 < 0 (2.3) Aleksei Shadrin 13 is valid, then (x∗, θ∗) is a saddle point, hence |K| has no local maxima in the interior of domain, and therefore (2.2), hence (2.1), are true. We mention that it makes sense to consider only those (x∗, θ∗), where the univariate functions |K(·, θ)| and |K(x, ·)| have local maxima in x and in θ respectively, i.e. such that sgn ∂xxK = sgn ∂θθK = −sgnK, therefore the above inequality (2.3) is not trivial. Since K(x, θ) := Z(k)(x, θ), the corresponding derivatives become ∂xK := Z(k+1), ∂θK := Z(k) θ , and ∂xxK := Z(k+2), ∂xθK := Z(k+1) θ , ∂θθK := Z(k) θθ , so that one needs to check whether, for a given one-parameter family of func-tions Z := Z(·, θ), the equality Z(k+1)(z) = Z(k) θ (z) = 0 implies d := Z(k+2)(z)Z(k) θθ (z) −[Z(k+1) θ (z)]2 < 0 . (2.4) The only problem is that, as has been mentioned, there are no explicit expres-sions for Zolotarev polynomials or Zolotarev-type functions. Comment 2.3. V. Markov’s original approach (repeated later in and ) had a slightly different form. Namely, he studied interior extrema of the univariate (positive or negative) function Mk(x) = Z(k)(x, θx). In this case, if the following implication is true M ′ k(z) = 0 ⇒ Mk(z)M ′′ k (z) > 0 , (2.5) then |Mk(·)| takes at x = z a locally minimal value, and hence the global maximum of |Mk(·)| is attained by a polynomial (or alike) other than the Zolotarev one. 
In fact, (2.5) is equivalent to (2.4) for one can show that the equality Mk(x) := sup θ Z(k)(x, θ) =: Z(k)(x, θx) implies b d := Mk(z)M ′′ k (z) = Z(k)(z) Z(k) θθ (z)  Z(k+2)(z) · Z(k) θθ (z) −[Z(k+1)(z)]2 . At the point z where Z(k) θ (z) = 0, the numerator and the denominator are of opposite sign, hence, b d > 0 is equivalent to d < 0. 14 Twelve Proofs of the Markov Inequality 2.2 V. Markov’s original proof Here we show how the variational approach just described works for the Markov problem, where the extremal set for the pointwise problem consists of Zolotarev polynomials. Let {Z(·, θ)} ⊂Pn be the family of proper Zolotarev polynomials of degree n with n equioscillation points −1 = τ1 < τ2 < · · · < τn−1 < τn = 1, τi = τi(θ) such that Z(1) = ∥Z∥= 1, Z′(β) = 0, |β| = |β(θ)| > 1, Z(x, θ) = n X i=0 ai(θ)xi , an(θ) = θ ̸= 0 . (2.6) The following theorem is the central achievement of V. Markov’s original work . Theorem 2.4 (V. Markov (1892)). If at some point (x, θ) = (z, θz) Z(k+1)(z) = Z(k) θ (z) = 0, (2.7) then d := Z(k+2)(z)Z(k) θθ (z) −[Z(k+1) θ (z)]2 < 0 . (2.8) To prove this theorem V. Markov established very fine relations between the functions involved in (2.8). Here they are. Lemma 2.5. For all (x, θ), we have Zθ(x) = n Y i=1 (x −τi) =: R(x). (2.9) Proof. First of all, it follows from (2.6) that Zθ(x) = xn + qn−1(x, θ), i.e. Zθ is a polynomial in x with the leading coefficient equal to 1. As to its roots, differentiating the identity Z(τi) ≡Z(τi(θ), θ) ≡±1 we obtain Z′(τi) · τ ′ i(θ) + Zθ(τi) = Zθ(τi) = 0, i = 1..n . □ The next formula provides a basic relation between Zθ = R and Zx = Z′, and is decisive in further considerations. Lemma 2.6. For all (x, θ), we have nan (x −β) R(x) = (x2 −1)Z′(x) . (2.10) Aleksei Shadrin 15 Proof. Both sides, as polynomials in x, have the same roots and the same leading coefficients. □ Finally, an expression for Zθθ. Lemma 2.7. For all (x, θ), we have nanZθθ(x) = −nR(x) + (x + β)R′(x) + (β2 −1) ψ(x) , (2.11) where (x −β) ψ(x) = R′(x) −R′(β) R(β) R(x) , ψ ∈Pn−1 . (2.12) Proof. Differentiating the identity (2.10) with respect to θ, and using (2.9) and the fact that a′ n(θ) = 1, we obtain n(x −β)R(x) −nanβ′(θ)R(x) + nan(x −β)Rθ(x) = (x2 −1)R′(x) = (x2 −β2)R′(x) + (β2 −1)R′(x) , and division by (x −β) and rearrangement of the terms gives nanRθ(x) = −nR(x) + (x + β)R′(x) + β2−1 x−β R′(x) + nanβ′(θ) x−β R(x) . Putting x = β in the first equality provides nanβ′(θ) = −(β2 −1) R′(β) R(β) , so nanRθ(x) = −nR(x) + (x + β)R′(x) + β2−1 x−β [R′(x) −R′(β) R(β) R(x)] . In the square brackets, we have a polynomial of degree n that vanishes at x = β, hence it is of the form (x −β)ψ(x), where ψ ∈Pn−1. □ Proof of Theorem 2.4. We assume that an < 0, hence β > 1, thus z −β < 0 if z ∈[−1, 1] . (2.13) We also assume that (at the point z where Z(k+1)(z) = 0) Z(k)(z) > 0, hence Z(k+2)(z) < 0 . (2.14) Under these assumptions (and assumptions (2.7) of the theorem) we will show that Z(k+1) θ (z) > z2−1 nan(z−β) Z(k+2)(z) > 0 , (2.15) Z(k+1) θ (z) > nan(z−β) z2−1 Z(k) θθ (z) , (2.16) and that clearly proves the theorem. 1) Our starting point is again the identity (2.10) nan(x −β)R(x) = (x2 −1)Z′(x) . 16 Twelve Proofs of the Markov Inequality Differentiating it (k + 1) times with respect to x and setting (x, θ) = (z, θz) we obtain (taking into account (2.7)) nan(z −β) R(k+1)(z) = (z2 −1) Z(k+2)(z) + k(k + 1) Z(k)(z) . (2.17) Both terms on the right-hand side are positive, and also nan(z −β) > 0, so R(k+1)(z) > z2−1 nan(z−β) Z(k+2)(z) > 0 , (2.18) which proves (2.15). 
2a) Now we turn to (2.16). From (2.11) and (2.7), we have nanZ(k) θθ (z) = (z + β) R(k+1)(z) + (β2 −1) ψ(k)(z) , and from (2.12) and (2.7) we find (z−β)ψ(k)(z) + kψ(k−1)(z) = R(k+1)(z), i.e. ψ(k)(z) = 1 z −β [R(k+1)(z) −kψ(k−1)(z)] , so, putting this expression into the previous one, we obtain nanZ(k) θθ (z) = (z + β) R(k+1)(z) + β2−1 z−β [R(k+1)(z) −kψ(k−1)(z)] = z2−1 z−β R(k+1)(z) −k(β2−1) z−β ψ(k−1)(z) . Hence R(k+1) −nan(z−β) z2−1 Z(k) θθ (z) = k(β2−1) z2−1 ψ(k−1)(z) , (2.19) and since k(β2−1) z2−1 < 0, it follows that (2.16) ⇔ ψ(k−1)(z) < 0 . 2b) Consider relation (2.12) for ψ: (x −β)ψ(x) = R′(x) −R′(β) R(β) R(x) . For x ∈[−1, 1], since β > 1, both factors (x −β) and −R′(β) R(β) are negative, hence at the zeros of R′ we have R′(ti) = 0 ⇒ sgn ψ(ti) = sgn R(ti) . This means that the zeros of the polynomials ψ and R′ interlace, thus, by what we know now as the Markov interlacing property, R(k)(z) = 0 ⇒ sgn ψ(k−1)(z) = sgn R(k−1)(z) . At the points where R(k)(z) = 0 we have sgn R(k−1)(z) = −sgn R(k+1)(z), hence sgn ψ(k−1)(z) = −sgn R(k+1)(z) < 0 , the last inequality by (2.18). □ Aleksei Shadrin 17 Comment 2.8. Compared with Markov’s proof, we split the inequality (2.8) into two parts (2.15)-(2.16), and made one more simplifying assumption (2.14). We also got rid of expressions for τ ′ i(θ) and θ′(z) that were involved in Markov’s arguments. From relations (2.17) and (2.19), we find R(k+1)(z) = (z2−1) nan(z−β) Z(k+2)(z) + k(k+1) nan(z−β) Z(k)(z) , R(k+1)(z) = nan(z−β) z2−1 Z(k) θθ (z) + k(β2−1) z2−1 ψ(k−1)(z) , and we can derive the exact expressions for d in (2.8) −d = [R(k+1)(z)]2 −Z(k+2)(z)Z(k) θθ (z) = k(β2−1) nan(z−β) Z(k+2)(z) ψ(k−1)(z) + k(k+1) nan(z−β) Z(k)(z) R(k+1)(z) (2.20) = kZ(k)(z) z−β k+1 + (β2−1) ψ(k−1)(z) R(k+1)(z) Z(k+2)(z) Z(k)(z) − nan R(k+1)(z) . (2.21) The last one is formula (118) of Markov’s work, and he finished his proof by analyzing its sign. Comment 2.9. An interesting fact is that, as V. Markov himself wrote in “Appendix to §34” (which was omitted in the German translation), he found the proof of Theorem 2.4 at the very last moment, when his article was already in print. Until then he had proofs of the inequality ∥p(k)∥≤∥T (k) n ∥∥p∥only in the cases k = 1, k = 2, k = n −2, k = n −1 , each time a different one. (He added an “Appendix” to demonstrate these proofs; they are quite interesting, by the way.) 2.3 A brief account of V. Markov’s results Markov’s Theorem 2.4 (with preliminaries) reads as follows: A) For each z ∈[−1, 1] the value Mk(z) := sup ∥p∥≤1 |p(k)(z)| is attained either by a proper Zolotarev polynomial Zn(·, θ), or by the Cheby-shev polynomial Tn, or by a transformed Chebyshev polynomial e Tn(x) = Tn(ax + b), or by the Chebyshev polynomial Tn−1. B) If a local extreme value of the (positive) function Mk(·) is attained by a proper Zolotarev polynomial of degree n, then it is a local minimum. C) Hence, Mk = sup z Mk(z) = max { ∥T (k) n ∥, ∥e T (k) n ∥, ∥T (k) n−1∥, Mk(±1) } , and it is not difficult to show that the last maximum is equal to T (k) n (1). 18 Twelve Proofs of the Markov Inequality Theorem 2.10 (V. Markov (1892)). For all n, k we have sup ∥p∥≤1 ∥p(k)∥= T (k) n (1) . Actually, in his opus, V. Markov made a very detailed investigation of the character of the value Mk(z) when z runs through certain subintervals. 0) Given k, define the points (ξi) and (ηi) by η0 := −1, (x −1) T ′ n(x) =: ck Qn−k i=1 (x −ηi), (2.22) (x + 1) T ′ n(x) =: ck Qn−k i=1 (x −ξi), ξn−k+1 =: 1. 
(2.23) Then ηi−1 < ξi < ηi, and we define (following Voronovskaya ) Chebyshev intervals eT i := [ηi−1, ξi] , Zolotarev intervals eZ i := (ξi, ηi) , so that the interval [−1, 1] is split in the following way Chebyshev Chebyshev Chebyshev interval interval interval ↓ ↓ ↓ (−1, ξ1] (ξ1, η1) [η1, ξ2] (ξ2, η2) · · · (ξn−k, ηn−k) [ηn−k, 1) ↑ ↑ ↑ Zolotarev Zolotarev Zolotarev interval interval interval 1) If z belongs to a Chebyshev interval, then Mk(z) = |T (k) n (z)|, z ∈eT i . Moreover, the Chebyshev intervals contain the roots of T (k+1) n (and, as a matter of interest, those of T (k−1) n ), i.e., the local maxima of Mk(·) and |T (k) n | coincide. 2) If z belongs to a Zolotarev interval eZ i , then the value of Mk(z) is achieved either by a proper Zolotarev polynomial Zn(·, θz), or by a transformed Cheby-shev polynomial Tn(azx+bz), or by the Chebyshev polynomial Tn−1, each time on a certain subintervals as illustrated below. Chebyshev Zolotarev Chebyshev interval interval interval z }| { z }| { z }| { z →[ηi−1, ξi] (ξi, λi] (λi, νi) νi (νi, µi) [µi, ηi) [ηi, ξi+1] ↓ ↓ ↓ ↓ ↓ ↓ ↓ Extr. pol. →−Tn(x) −Tn(azx+bz) Zn(x, θz) Tn−1(x) Zn(x, θz) Tn(czx+dz) Tn(x) ↓ ↓ ↓ ↓ ↓ ↓ ↓ Mk(z) →|T (k) n (z)| |T (k) n (ξi)|(1+ξi)k (1+z)k |Z(k) n (z, θz)| |T (k) n−1(νi)| |Z(k) n (z, θz)| |T (k) n (ηi)|(1−ηi)k (1−z)k |T (k) n (z)| Aleksei Shadrin 19 Notice the exact behaviour of Mk(·) as a hyperbolic function c (1±z)k on the intervals (ξi, λi) and (µi, ηi), where the extremal functions are transformed Chebyshev polynomials. 3) The next figure represents the graph of Mk(·) for the case of cubic poly-nomials (n = 3) and the first derivative (k = 1). Bold are the parts where the value is achieved by the Chebyshev polynomial T3(x) = 4x3 −3x. 0 2 4 6 8 –1 –0.8 –0.6 –0.4 –0.2 0.2 0.4 0.6 0.8 1 x ξ1 λ1 ν1 µ1 η1 ξ2 λ2 ν2 µ2 η2 This graph (which appeared already in Boas without reference) is based on the exact expressions for the functions involved computed by A. Markov in 1889. Here they are (for the interval [0, 1]): n = 3, M1(x) =                    3(1 −4x2), x ∈[0, ξ], ξ = √ 7−2 6 ; 7 √ 7+10 9(1+x) , x ∈[ξ, λ], λ = 2 √ 7−1 9 ; 16x3 (9x2−1)(1−x2), x ∈[λ, µ], µ = 2 √ 7+1 9 ; 7 √ 7−10 9(1−x) , x ∈[µ, η], η = √ 7+2 6 ; 3(4x2 −1), x ∈[η, 1] . A. Markov also provided the formula of M1(·) for n = 2, and later, while studing the case k > 1, V. Markov found for n = 3 an exact analytic form of M2(·) (M3(·) is a constant). Using his expression for Z4 (see (1.2)) it is possible to find all Mk(·) for n = 4. 4) Inside each Zolotarev interval, there is exactly one local minimum of Mk(·), say, at x = σi. A naive conjecture that σi = νi, i.e., that these local minima Mk(σi) are attained by the Chebyshev polynomial Tn−1 is not true (as seen from the graph). V. Markov proved that this could happen only in the middle of the interval: a) if νi = 0, then σi = 0 , 20 Twelve Proofs of the Markov Inequality otherwise b) if νi > 0, then σi ∈(λi, νi), c) if νi < 0, then σi ∈(νi, µi). 5) In 1961, Gusev provided two supplements to V. Markov’s results. Firstly, he showed that while the first derivative M ′ k(·) is continuous on [−1, 1] (which is rather clear and was used by V. Markov), the second derivative M ′′ k (·) has jumps at the points ξ, λ, µ, η (but not at ν) where Zolotarev polynomials change from one type to another. His second and quite interesting observation was about the measure of Chebyshev and Zolotarev intervals, namely mes (eT ) = 2 k n , mes (eZ) = 2 n −k n . The proof is quite elementary, so we give it here. 
By definition, mes (eZ) = P(ηi −ξi) = P ηi −P ξi , where p(x) := c n−k Q i=1 (x −ηi) := (x −1) T ′ n(x) , c n−k Q i=1 (x −ξi) := (x + 1) T ′ n(x) . Then 1 n−k P ηi is the only root of the polynomial p(n−k−1) which is the poly-nomial (x −1) T ′ n(x), which has the only root 1 n [1 + Pn−1 i=1 ζi], i.e., P ηi = n−k n [1 + Pn−1 i=1 ζi] (where T ′ n(ζi) = 0). Similarly, P ξi = n−k n [−1 + Pn−1 i=1 ζi], hence the result. 6) We mention that V. Markov’s results for general k were essentially of the same type as earlier results of A. Markov for the case k = 1. Precisely, for the pointwise problem for the 1-st derivative, A. Markov showed that Zolotarev polynomials form the extremal set, proved that the value M1(z) is attained by either type of these polynomials when z belongs to certain intervals, and described the behaviour of M1(·) on these intervals exactly in the same way as it is given in the cases 1)-3) of this section. He did not get the result about the minima of M1(·) as in case 4) (which was the main achievement of his kid brother), but he proved the global inequality ∥p′∥≤n2 ∥p∥using what we call now Bernstein’s majorant (see §4.1 for details of his proof). 2.4 Works of Voronovskaya and Gusev Works of Voronovskaya. Voronovskaya is perhaps best known by her saturation estimate for the Bernstein polynomials, Bn(f, x) −f(x) = x(1 −x) 2n2 f ′′(x) + o(n2) . Aleksei Shadrin 21 However, most of her studies were on extremal properties of polynomials, which she summarized in her book “The functional method and its application” . Boas was very enthusiastic about Voronovskaya works. He translated her book into English in 1970, and, in his two surveys -, made a very delightful report about her results “[which solved] a great variety of extremal problems that had previously seemed too difficult for anyone to do anything with”. In particular, Boas attributes to Voronovskaya the solution of the “point-by-point” Markov problem (for the 1-st derivative). The latter is not correct. It is true that her 1959 paper “The functional of the first derivative and im-provement of a theorem of A. A. Markov” does improve upon some results of A. Markov (1889). But the whole truth is that this improvement (it is about the minima of M1(·)) can be found in V. Markov (1892). It is only her argu-ments (for k = 1) that are a bit different (and simpler) than those of V. Markov (for general k), but the results are the same. In this respect, astonishing is her final remark: “But neither A. A. Markov nor V. A. Markov, in studying the question of a bound for the derivatives at interior points of the fundamental interval, took advantage of the use of the Zolotarev polynomials [A. Markov, p. 64] and [V. Markov, p. 55], and hence they could not carry the problem to completion.” Since it suffices to take a brief look through either of Markov’s papers in order to find that Zolotarev polynomials occupy the central place in both articles, it is all the more interesting to look at the pages pointed out by Voronovskaya. Here are the exact quotations (about the only thing they did not want to use): A. Markov [p. 64]: “Without relying on E. I. Zolotarev’s formulas, we show how it is possible to reduce our problem to three algebraic equations.” V. Markov [p. 
55]: “We notice that Zolotarev in his paper expressed the solution of the equation in terms of elliptic functions, but we will not focus on that.” The only explanation for this story that I can think of is that Voronovskaya – like most of us – never read either of Markov’s articles, and had no idea about their actual content. So, when her paper was about to be published, and somebody advised her to take a closer look at these works, she did not find the courage to admit that she simply rediscovered the results already 70 years old. Just another illustration of Boas’ words about A. Markov’s paper as “one of the most often cited, and one of the least read”. Gusev’s paper. V. A. Gusev begins his paper in a quite remarkable way. He is going “to study the problem considerably more completely than in Bernstein and in Duffin-Shaeffer, and in a considerably shorter way than in V. Markov”. The logic of this sentence leaves open the possibility that his way is not shorter than those of Bernstein and Duffin-Schaeffer, and that it gives not more complete results than those of Markov. And this is true! (well, almost: he proved two supplementary results, as we have seen). More than this, Gusev’s proof of Markov inequality is not new, it is essentially a reproduction 22 Twelve Proofs of the Markov Inequality of Markov’s original proof. There are some differences in the preliminaries, because V. Markov uses his own criterion for the norm of linear functional, while Gusev uses that of Voronovskaya (of course, both are equivalent). But the very essence of V. Markov’s treatise, the proof that Z(k+2)(z)Z(k) θθ (z) −[Z(k+1) θ (z)]2 < 0, hence a local extremum of Mk(·) if attained by a proper Zolotarev polynomial is a local minimum, hence the Markov inequality, is reproduced by Gusev almost without alterations. “A way considerably shorter than in V. Markov” is a slight exaggeration too, especially when you find that Gusev uses without proof some of Markov’s lemmas sending the reader for those to Markov’s paper. There is, however, a positive side of Gusev’s paper (as well as of Voronov-skaya), namely a clear and short exposition of Markov’s results (provided more-over with an English translation). V. Markov’s paper is rather mosaic and ar-chaic, and this makes it a difficult (albeit pleasurable) read. Gusev squeezed it to a small set of clear theorems which give a clear picture of behaviour of the exact upper bound Mk(·). To a certain extent, we followed his exposition in §2.3. 2.5 Similar results V. Markov’s variational approach, based on verifying the inequality d := Z(k+2)(z)Z(k) θθ (z) −[Z(k+1) θ (z)]2 < 0 for the one-parameter family Z(x, θ) of Zolotarev-type functions, was used in solution of two other problems of Markov type. Theorem 2.11 (Pierre-Rahman (1976)). For the Markov problem with the majorant µ(x) = (1 −x)m1/2(1 + x)m2/2, k ≥m1+m2 2 , we have Mk,µ := sup |p(x)|≤µ(x) ∥p(k)∥= max  ∥ω(k) n ∥, ∥ω(k) n−1∥  (2.24) where ωn ∈Pn is the polynomial oscillating most between ±µ. The proof is the exact reproduction of Markov’s arguments, but on a much more complicated technical level. In our notations, their final expression (which is the last equality on p. 728) has the form d = Z(k)(z) β−z  k(β2−1) ψ(k−1)(z) R(k+1)(z) Z(k+2)(z) Z(k)(z) + (k+1) “ k −m1+m2 2 ”ffR(k+1)(z) nan , Aleksei Shadrin 23 just to compare with formula (2.21) of V. Markov. For some reasons, Pierre & Rahman did not analyze when the maximum in (2.24) is attained by ω(k) n . 
It seems to be so if k > m1+m2 2 (when it looks that ∥ω(k) n ∥= ω(k) n (1)). Theorem 2.12 (Shadrin (1995)). For the Lagrange interpolation prob-lem on a knot-sequence δ = (ti)n i=0, we have Mk,δ := sup ∥f (n+1)∥≤1 ∥f (k) −ℓ(k) δ ∥= 1 (n + 1)!∥ω(k) δ ∥, where ωδ(x) := Qn i=0(x −ti). Here, the one-parameter family Z(x, θ) consists of perfect splines with at most one knot, and details of the proof are quite different from that of Markov. However, for the pointwise problem, there are complete analogues of the Chebyshev and Zolotarev intervals eT j = (ηj−1, ξj), eZ j = [ξj, ηj] . Here, the endpoints of the intervals are defined via ωi(x) := ω(x) x−ti as η0 := t1, ω(k) 0 (x) =: c Qn−k j=1 (x −ηj), ω(k) n (x) =: c Qn−k j=1 (x −ξj), ξn−k+1 := tn. But now, it is Zolotarev intervals where Mk,δ and ω(k) δ (and their local maxima) coincide: Mk,δ(z) := sup ∥f (n+1)∥≤1 |f (k)(z) −ℓ(k) δ (z)| = 1 (n + 1)!|ω(k) δ (z)|, z ∈eZ δ . This pointwise estimate is due to Kallioniemi who also generalized Gusev’s result: mes (eT δ ) = k n (tn −t0). 3 “Small-o” arguments 3.1 “Small-o” proofs of Bernstein and Tikhomirov In 1938, in the less-known and nowadays hardly accessible “Proceedings of the Leningrad Industrial Institute”, Bernstein published the article where he “found it not unnecessary to point out another and simpler proof” of V. Markov’s inequality. This article was reprinted in 1952 in his Collected Works, and since 1996 its English translation, thanks to Bojanov, is also avail-able. 24 Twelve Proofs of the Markov Inequality The proof we are going now to present is, in fact, not that of Bernstein but a mixture from different sources with the main part due to Tikhomirov, as it is given in his exposition [12, pp.111-113] for k = 1 (with our straightforward extension to any k). For preliminaries (where Tikhomirov used calculus of variations), we chose the more classical (and elementary) approach of Bernstein and Markov. This is a promised “book-proof” on 4 pages, so we start from the very very beginning pretending we forgot everything dicussed before. Book-proof. We are going to study the behaviour of the upper bounds of the k-th derivative of algebraic polynomials Mk(z) := sup ∥p∥≤1 p(k)(z), z ∈[−1, 1] , Mk := sup ∥p∥≤1 ∥p(k)∥= sup z∈[−1,1] Mk(z). We are going to prove that Mk = ∥T (k) n ∥ (3.1) by showing that, among all the polynomials p∗that are extremal for Mk(z) for different z, only Tn can hope to achieve the global maximum of Mk(z). This will be done in two steps. 1) For z = ±1, we will show that p∗= Tn. 2) For z ∈(−1, 1) we will show that if p∗̸= Tn and Mk(z) = p(k) ∗(z), M ′ k(z) = 0 = p(k+1) ∗ (z)  , then there exists a polynomial Pλ ∈Pn such that, for some zλ, ∥Pλ∥= ∥p∗∥−O(λ2), P (k) λ (zλ) = p(k) ∗(z) + o(λ2), so that, for λ small enough, Mk(zλ) ≥P (k) λ (zλ) ∥Pλ∥ > p(k) ∗(z) ∥p∗∥ = Mk(z) . The latter means that the local extrema of Mk(z) if attained by polynomials other than Tn are local minima, hence all local maxima of Mk(z) are attained by the Chebyshev polynomial, hence the conclusion (3.1). We start with some characterizations of the extremal polynomials. Lemma 3.1. Let Mk(z) := sup p∈Pn p(k)(z) ∥p∥ = p(k) ∗(z) ∥p∗∥, and let {τi}m i=1 be the set of all points for which |p∗(x)| = ∥p∗∥. Then there is no polynomial q ∈Pn such that q(k)(z) = 0 and q(τi)p∗(τi) < 0 . (3.2) Aleksei Shadrin 25 Proof. If there is such a q, then the polynomial r := p∗+ λq will satisfy r(k)(z) = p(k) ∗(z) and ∥r∥< ∥p∗∥, a contradiction to the extremality of p∗. □ Lemma 3.2. Let y∗, z ∈(−1, 1) and (yi)n−2 i=1 ∈R. 
Then there is a unique polynomial q ∈Pn such that q(yi) = 0, q(k)(z) = q(k+1)(z) = 0, q(y∗) = 1, and it changes its sign exactly at the points yi. Proof. It follows easily from Rolle’s theorem that the homogeneous inter-polation problem has only the trivial solution, hence existence of such a q. It also implies the sign pattern, since if there were a point x∗besides (yi) where q vanishes, then the homogeneous problem with y∗= x∗would have had a non-zero solution. Lemma 3.3. Let p∗be a polynomial extremal for Mk(z). Then it has at least n points (τi) of alternation between ±1. Proof. Let m be the number of alternations and let (τi)m i=1 be the points such that p∗(τi) = −p∗(τi+1) = ϵ ∥p∗∥, |ϵ| = 1. If m ≤n −1, then adding arbitrary (τj)n−1 j=m+1 with |τj| > 1 to the list, we can apply Lemma 3.2 to construct the polynomial q such that q  τi+τi+1 2  = 0, q(k)(z) = q(k+1)(z) = 0, q(τ1) = −sgnp∗(τ1), which satisfies the condition (3.2), a contradiction. □ The polynomials of degree n with n alternation points in [−1, 1] are called Zolotarev polynomials, they divide into 3 groups depending on the stucture of the set A := (τi) of their alternation points. A) A contains n + 1 points. Then p∗= Tn, B) A contains n points but only one of the endpoints. Then p∗can be continued to the larger interval, say [−1, 1+c], on which it has n+1 alternation points. Hence, it is a transformed Chebyshev polynomial, p∗(x) = Tn(ax + b), |a| < 1. We can exclude this case from consideration since clearly ∥p(k) ∗∥< ∥T (k) n ∥. C) A contains n points including both endpoints. Then p∗is called a proper Zolotarev polynomial, and we want to show that it does not attain any local maximum of Mk(z). For this, we need one more characterization property of Z. Lemma 3.4. Let Mk(z) = Z(k)(z), where Z has exactly n alternation points (τi). Then R(k)(z) = 0, R(x) := n Y i=1 (x −τi). 26 Twelve Proofs of the Markov Inequality Proof. By the Lagrange interpolation formula with the nodes (τi), any q ∈Pn can be written in the form q(x) = cR(x) + n X i=1 q(τi) R′(τi)Ri(x), Ri(x) := R(x) x −τi , so that q(k)(z) = cR(k)(z) + n X i=1 q(τi) R′(τi)R(k) i (z). If R(k)(z) ̸= 0, then we may set q(τi) = −Z(τi) and then use the freedom in choosing the constant c to annihilate the right-hand side, i.e., to obtain q(k)(z) = 0, a contradiction to Lemma 3.1. □ Remark 3.5. From the previous lemma, it follows that if Z ̸= Tn, then it can attain some value Mk(z) only for z strictly inside the interval [−1, 1], whence Mk(±1) = |T (k) n (±1)|. Theorem 3.6 (Tikhomirov (1976)). Let Z ∈Pn be a proper Zolotarev polynomial such that Z(k)(z) = Mk(z), (hence R(k)(z) = 0), Z(k+1)(z) = 0 . Then the polynomial Pλ := Z + λR + λ2 2 c0R′, c0 := R(k+1)(z) Z(k+2)(z) , satisfies for some zλ ∥Pλ∥= ∥Z∥−O(λ2), P (k) λ (zλ) = Z(k)(z) + o(λ2). Lemma 3.7. Let f, g, h ∈C2[a, b] with ∥f∥= |f(x0)|, and let f ′(x0) = 0, f(x0)f ′′(x0) < 0, g(x0) = 0, g′(x0) ̸= 0. Then there is an ϵ > 0 such that φ(λ) := f + λg + λ2 2 h C[x0−ϵ,x0+ϵ] = f(x0) + λ2 2  h(x0) −g′(x0)2 f ′′(x0)  + o(λ2). Proof. Set ψ(x, λ) := φ′ λ(x) := f ′(x) + λg′(x) + λ2 2 h′(x). Then ψ(x0, 0) = 0, ∂xψ(x0, 0) = f ′′(x0) ̸= 0, ∂λψ(x0, 0) = g′(x0). Aleksei Shadrin 27 By the implicit function theorem, there exists a function xλ = x(λ) such that ψ(x, λ) = 0 ⇔ x = xλ = x0 −g′(x0) f ′′(x0)λ + o(λ). This means that, for small λ, the function |f +λg+ λ2 2 h| has a unique maximum at the point x = x(λ), and ∥φλ∥ = f(xλ) + λg(xλ) + λ2 2 h(xλ) = f(x0) + λ2 g′(x0)2 f ′′(x0)2 f ′′(x0) 2 −λ2 g′(x0) f ′′(x0)g′(x0) + λ2 2 h(x0) + o(λ2) . 
Proof of Theorem 3.6 1) Firstly, let us apply the previous lemma to the functional φ(λ) := ∥P (k) λ ∥C[z−ϵ,z+ϵ] . In this case, f := Z(k), g := R(k), h := R(k+1), and the conditions of the lemma are satisfied. We obtain φ(λ) := Z(k)(z) + λ2 2  c0R(k+1)(z) −R(k+1)(z)2 Z(k+2)(z)  + o(λ2) = |Z(k)(z)| + o(λ2) (the expression in parentheses vanishes due to the definition of c0). 2a) Next, we apply the lemma to the functional φ(λ) := ∥Pλ∥C[τi−ϵ,τi+ϵ], τi ̸= ±1. Now f := Z, g = R, h = R′, and in a neighbourhood of each interior alternation point τi the norm of the polynomial Pλ is equal to the value |Z(τi) + λ2 2 γi| + o(λ2), where γi := h c0R′(τi) −R′(τi)2 Z′′(τi) i = R′(τi) Z′′(τi)[c0Z′′(τi) −R′(τi)] . To prove that ∥Pλ∥= ∥Z∥−O(λ2), it suffices to show that γiZ(τi) < 0, and because Z(τi)Z′′(τi) < 0 this is equivalent to the inequality δi := R′(τi)[c0Z′′(τi) −R′(τi)] > 0 , τi ̸= ±1 . (3.3) Consider the polynomial Q(x) := c0Z′(x) −R(x) . (3.4) It vanishes at (τi)n−1 i=2 , and Q(k)(z) = Q(k+1)(z) = 0. Hence, by Lemma 3.2, it changes its sign only at (τi), and Q′(τi) alternate in sign. So does R′(τi), thus all δi := R′(τi)Q′(τi) are of the same sign. Let us show that δn−1 > 0. We have sgn Q′(τn−1) = sgn Q(t) t→∞ (3.4) = −sgnR(t) t→∞= −1 = R′(τn−1) . 28 Twelve Proofs of the Markov Inequality The first equality is because τn−2 is the rightmost zero of Q, the next one is because Z′ in (3.4) is of degree n −1, and the last two follow because R(x) = Qn i=1(x −τi). 2b) It remains to consider the endpoints, say x = 1, where we have Pλ(1) = Z(1) + λ2 2 c0R′(1) . As we have seen, sgn Q(1) = sgn Q(t) t→∞= −1, on the other hand, by (3.4), sgn Q(1) = sgn c0Z′(1) = sgn c0Z(1), hence c0 and Z(1) are of opposite sign, and because R′(1) > 0 |Pλ(1)| = |Z(1)| −O(λ2). □ Comment 3.8. The difference between Tikhomirov’s and Bernstein’s proofs is that, while Tikhomirov simply presents the polynomial Pλ and then proves its required properties, Bernstein moves the other way round. He considers the polynomial P1(x) = Z(x + λ) −λφ(x + λ) −λ2ψ(x + λ), where φ and ψ are any polynomials satisfying φ(k)(z) = ψ(k)(z) = 0, so that P (k) 1 (z −λ) = Z(k)(z). Then he expands P1 with respect to λ, P1 = Z + λ[Z′ −φ] + λ2[ 1 2Z′′ −φ′ −ψ] + o(λ2), evaluates the value ∥P1∥, and tries to determine φ and ψ in order to get ∥P1∥= ∥Z∥−O(λ2) . With that he arrives at φ = Z′ −1 c0 R and ψ = −1 2φ′, so that the polynomial he uses is actually the same as in Tikhomirov: P1 = Z + (λ/c0)R + (λ/c0)2 2 c0R′ . Comment 3.9. Lemma 3.1 is actually a criterion for a polynomial to at-tain the norm of the linear functional µ(p) = p(k)(z) (and any other linear functional on Pn). It was a starting point of V. Markov’s studies [7, §2], and he derived from it two other criteria which were more convenient for applications. Notice the similarity between Lemma 3.1 and Kolmogorov’s criterion for the element of best approximation. Comment 3.10. The above given “book-proof” of V. Markov’s inequality is not entirely complete. To bring it to the final Markov form one still needs to prove that ∥T (k) n ∥= T (k) n (1) = n2 [n2 −12] · · · [n2 −(k−1)2] 1 · 3 · · · (2k −1) . Both equalities are usually referred to as “easy to show”, but it takes another half a page to really show them (we do it in §5.3) Aleksei Shadrin 29 3.2 “Small-o” proof of Bojanov Tikhomirov provided his proof with the following comment [54, p. 285]: “This proof is not quite consistent from the point of view of theory of extremal prob-lems. 
To act consistently, one should find a tangent direction (which is here unique, namely that of R(·)), write down a general variation of the second order Pλ(x) = Z(x) + λR(x) + λ2 2 Y (x) , and then apply again the necessary conditions of supremum. Such a plan is fulfilled in the paper by Dubovitsky–Milyutin . Here we took a shorter way borrowing some parts of our arguments from Bernstein ”. It is not clear whether here Tikhomirov had any particular polynomial Y in mind. The paper which we discuss in the next section does not make it clear either. A version of “small-o” proof with a different polynomial Y was presented in 2002 by Bojanov in his survey on Markov-type inequalities. Bojanov himself refers to his proof as “a simplification of Tikhomirov’s variational approach as outlined in a private communication”. We will fit Bojanov’s proof into the scheme of the previous section, and it makes our exposition quite different from his own. We discuss some of these differences in the comments below where we also show that, actually, he uses the polynomial Pϵ(x) = Z(x) + ϵZθ(x) + ϵ2 2 Zθθ(x) , (3.5) which is the Taylor expansion of the Zolotarev polynomial Z(x, θz + ϵ) in a neighbourhood of θz. Recall that R(x) := Zθ(x) = n Y i=1 (x −τi), τi = τi(θ) , (3.6) where τi are the equioscillation points of the Zolotarev polynomial Z, and set Y (x) := n−1 X i=2 ρiRi(x), ρi := R′(τi) Z′′(τi) , Ri(x) := R(x) x −τi . (3.7) Theorem 3.11 (Bojanov (2002)). Let Z ∈Pn be a proper Zolotarev polynomial such that Z(k)(z) = Mk(z) (hence R(k)(z) = 0), Z(k+1)(z) = 0. Then the polynomial Pϵ := Z + ϵR + ϵ2 2 Y (3.8) satisfies for some zϵ ∥Pϵ∥= 1 + o(ϵ2), |P (k) ϵ (zϵ)| = |Z(k)(z)| + O(ϵ2). (3.9) 30 Twelve Proofs of the Markov Inequality Proof. 1) From definition (3.7) of Y , we find that Y (τi) = ρiRi(τi) = [R′(τi)]2 Z′′(τi) , i = 2, . . . , n −1. Now, Tikhomirov’s Lemma 3.7 applied to Pϵ says that, in a neighbourhood of each interior τi, the local maximum of Pϵ has the value Pϵ(τ ϵ i ) = Z(τi) + ϵ2 2 h Y (τi) −[R′(τi)]2 Z′′(τi) i + o(ϵ2) = 1 + o(ϵ2) . Near the endpoints of [−1, 1], the norm ∥Pϵ∥will not exceed 1 for small ϵ because |Z(x)| ≤1 and Z′(±1) ̸= 0. 2) To prove the second equality in (3.9) we apply Lemma 3.7 to P (k) ϵ . So, in a neighbourhood of z, the local maximum of Pϵ has the value P (k) ϵ (zϵ) = Z(k)(z) + ϵ2 2 h Y (k)(z) −[R(k+1)(z)]2 Z(k+2)(z) i + o(ϵ2), and because Z(k)(z)Z(k+2)(z) < 0 we have to deal with the inequality d := Y (k)(z)Z(k+2)(z) −[R(k+1)(z)]2 ? < 0. (3.10) 3) Since Y = Pn−1 i=2 ρiRi, and (trivially) R′ = Pn i=1 Ri, we have Y (k)(x) = Pn−1 i=2 R′(τi) Z′′(τi)R(k) i (x), R(k+1)(x) = Pn i=1 R(k) i (x), (3.11) so we may write d = Z(k+2)(z) n−1 P i=2 R′(τi) Z′′(τi)R(k) i (z) −R(k+1)(z) n P i=1 R(k) i (z) = R(k+1)(z) n−1 P i=2 Z(k+2)(z) R(k+1)(z) R′(τi) Z′′(τi) −1 R(k) i (z) −R(k+1)(z) [R(k) 1 (z) + R(k) n (z)] . By Markov’s interlacing property (since zeros of R and Ri interlace) R(k)(z) = 0 ⇒ sgn R(k+1)(z) = sgn R(k) i (z) ∀i, so we are done once we prove that Z(k+2)(z) R(k+1)(z) R′(τi) Z′′(τi) −1 < 0, or, with the previ-ously used notation c0 := R(k+1)(z) Z(k+2)(z), that δi := 1 c0Z′′(τi)[c0Z′′(τi) −R′(τi)] > 0, τi ̸= ±1. 4) The latter is proved like in Tikhomirov’s proof, by considering the poly-nomial Q = c0Z′ −R. □ Aleksei Shadrin 31 Comment 3.12. Bojanov wrote his polynomial (3.8) in the form Pϵ(x) := Z(x) + ϵ n Q i=1 (x −τi + ϵ 2ρi) and dealing with (3.9) he repeated twice the arguments (of Tikhomirov’s lemma) based on the implicit function theorem. 
Also, he used not (3.11) but the formula Y (k)(z) = Pn−1 i=2 Ai [R′(τi)]2 Z′′(τi) = Pn i=1 AiY (τi)  , which stems from the representation of the linear functional µ(p) = p(k)(z) on Pn, p(k)(z) = n X i=1 Aip(τi), AiAi+1 < 0, (3.12) so that, finally, he verified not (3.10) but the inequality Z(k)(z) h −[R(k+1)(z)]2 Z(k+2)(z) + Pn−1 i=2 Ai [R′(τi)]2 Z′′(τi) i > 0. (3.13) Comment 3.13. Let us show that Pϵ has the form (3.5). We focus on the term Y in (3.7)-(3.8) and we claim that it is nothing but Rθ. Indeed, from definition (3.6) of R, since τ1(θ) ≡−1 and τn(θ) ≡1, we obtain Rθ(x) = Pn−1 i=2 (−τ ′ i(θ))Ri(x) , and, by differentiating the identity Z′(τi(θ), θ) ≡0, we find that −τ ′ i(θ) = Z′ θ(τi) Z′′(τi) = R′(τi) Z′′(τi) = ρi . Hence, Y = Rθ, and Bojanov’s polynomial (3.8) is Pϵ(x) = Z(x) + ϵR(x) + ϵ2 2 Rθ(x) , or, since R = Zθ, Pϵ(x) = Z(x, θz) + ϵZθ(x, θz) + ϵ2 2 Zθθ(x, θz) = Z(x, θz + ϵ) + o(ϵ2). So, Pϵ is nothing but the second order Taylor expansion of the Zolotarev poly-nomial Z(x, θz + ϵ) with the perturbed parameter θ in a neighbourhood of θz. In particular, the equality ∥Pϵ∥= 1 + o(ϵ2) is now straightforward, and moreover, the key inequality (3.10) to be verified turns out to be d := R(k) θ (z)Z(k+2)(z) −[R(k+1)(z)]2 ? < 0, (3.14) exactly the same as V. Markov considered. Basically, all three proofs – by V. Markov, Bernstein–Tikhomirov and Bojanov – deduce that Mk(z) = Z(k)(z, θz), Z(k+1)(z) = 0 ⇒ |Z(k)(z, θz)| < |Z(k)(zϵ, θz + ϵ)| . 32 Twelve Proofs of the Markov Inequality 3.3 Proofs of Dubovitsky–Milyutin and Mohr Dubovitsky–Milyutin’s proof. The main goal of , as postulated in section 5◦, is to show, “as a result of the analysis of Euler equations for the first and second variation, that the optimal polynomial [that attains the global maximum Mk] is uniquely determined and is the Chebyshev polynomial Tn”. The first two pages describe some general theory, the proof itself takes another two pages. In our notations, they start with the formula p(k)(z) Z(k)(z) = Z p(x) Z(x) dµ := n X i=1 (−1)iµip(τi) ! (6) (which is the analogue of (3.12)). After a while, the proof arrives at verification of the following inequality (which is the last but one formula on the very last page): [R(k+1)(z)]2 Z(k)(z) Z(k+2)(z) − Z [R′(x)]2 Z(x) Z′′(x) dµext ? < 0 . (10) With µext being the same measure µ from (6) but without the endpoints (so to say), this inequality is identical to inequality (3.13) considered by Bojanov, which as we showed is the same as the inequality (3.14) considered by Markov. At this point, nothing says that we are approaching the end, but then the magic happens. The next and final expression appears like a rabbit pulled from a hat. Quotation: “Since R(x)(x−β) = Z′(x)(x2−1), therefore by making use of R(k)(z) = Z(k+1)(z) = 0 and identity (6), we can reduce (10) to 1 R(k+1)(z) »R′(β)R(x) −R(β)R′(x) x −β –(k−1) x=z + k(k + 1)Z(k)(z)R(β) (z −β)Z(k+2)(z)(β2 −1) > 0 .” (11) The last two paragraphs swiftly show that both summands are positive (they are indeed), and that’s the end of the article. I don’t think that this “proof” can be taken seriously. First of all, both Markov and Bojanov spent more than a page on rather fine calculations before they brought their analogues of (10) to some clearer forms. It is hard to believe that Dubovitsky–Milyutin managed to do it in a few lines (which they did not even bother to present). Secondly, no matter how you transform (10), the final relation should be still equivalent to that of Markov. 
In (11), the expression in square brackets is equal to what we denoted in (2.12) by −R(β)ψ(x), so (11) is identical to −R(β)ψ(k−1)(z) R(k+1)(z) + k(k + 1)Z(k)(z)R(β) (z −β)Z(k+2)(z)(β2 −1) > 0 . (11′) This looks very close to Markov’s formula (2.20), but there is no match. Trigonometric proof of Mohr. Mohr starts his paper by making the change of variable, x = cos θ, thus switching from algebraic polynomials p(x) to Aleksei Shadrin 33 the cosine polynomials φ(θ). With such a switch, the Markov problem becomes the problem of finding Mk := sup ∥φ∥≤1 ∥φ[k]∥ Mk(ξ) := sup ∥φ∥≤1 |φk|  , where φ = − 1 sin θ ∂ ∂θ φ , ∥φ∥= max θ∈[0,π]|φ(θ)| . Mohr wants to show that “this supremum is attained exactly for φ(θ) = cos nθ”, so in §1.7 he assumes that Mk = Γk , (3.15) with some cosine polynomial Γ and some ξ ∈[0, π], and in §2 tries to prove that the case when Γ has less than n + 1 equioscillation points is impossible. 1) I did not understand the reasons to move to trigonometry as Mohr con-siders his cosine polynomials only on the interval [0, π], i.e., he does not make any use of periodicity (as one could expect). With such a move, nothing really changes except for complicating the matter of things. 2) At the begining, the proof develops as in the algebraic case. In particular, Mohr shows (§§2.1-2.5) that the extremal polynomial Γ has at least n points of equioscillation, and if it has exactly n points, then its resolvent satisfies Rk = 0, therefore ξ is strictly inside [0, π], hence Γk+1 = 0. (The latter means, by the way, that Mk(ξ) is not necessarily the global maximum, but only an extreme value of Mk(·).) 3) However, the final part starting from §2.13 is taking more and more strange forms, and in §2.15, assuming actually that Mk(ξ) = Γk, Rk = 0, Γk+1 = 0, (3.16) Mohr managed to construct a family of polynomials φ such that ∥φ∥≤1, φk > Γk . (3.17) This is of course a contradiction to the initial guess (3.15), so one might have concluded that the intermediate assumption that Γ has exactly n equioscilla-tions was false. But it is also a contradiction to (3.16), which as we know may well be true for some Γ of Zolotarev type. I think that Mohr somehow got it wrong (in his formula (30), I suspect). 4) Even more strange is that Mohr does not consider relations (3.17) as something extraordinary, and spends two pages more in deriving further state-ments before he finally arrives at a contradiction. 3.4 Limitations of variational and “small-o” methods All three authors – Bernstein, Tikhomirov and Bojanov – while using the small-o arguments, arrived actually, at exactly the same conclusion which was pro-vided by V. Markov. 34 Twelve Proofs of the Markov Inequality Theorem 3.14. The local extreme values of Mk(·) attained by a polyno-mial other than Tn are local minima, or, equivalently, all local maximal values of Mk(·) are attained by the Chebyshev polynomial ±Tn. The only difference is that V. Markov proved that Mk(·) indeed have local maxima and minima. What is important in such a conclusion is that it shows that we cannot apply the variational or a “small-o” method to the Markov-type problem, unless we are sure that the local behaviour of Mk,F(·) follows the pattern given by the theorem above. Example 3.15. Consider the Landau–Kolmogorov problem Mk,σ(z) = sup f∈W n+1 ∞ (σ) |f (k)(z)|, z ∈[−1, 1] , where W n+1 ∞ (σ) = {f : ∥f∥≤1, ∥f (n+1)∥≤σ}. For σ = 0 it reduces to the Markov problem for polynomials, hence for small σ, the pointwise bound Mk,σ(z) should be close to the Markov pointwise bounds Mk(z). 
The function Mk(z) has (n−k) local minima and (n−k−1) local maxima as illustrated on the graph below (for n = 3 and k = 1). Now, according to Pinkus’ results , the Chebyshev-like function T∗∈W n+1 ∞ (σ) that attains the value Mk,σ(z) at z = 1 takes other values of Mk,σ(z) only at a finite set of (n−k) points, and similarly for b T∗which is extremal for z = −1. As σ →0, these points will tend to the ends of Zolotarev intervals (ξi) and (ηi), respectively, and we see that, for small σ, there are local maxima of Mk,σ(·) that are achieved by functions of Zolotarev type (the maximum at z = 0 on the figure). 0 2 4 6 8 10 –1 –0.8 –0.6 –0.4 –0.2 0.2 0.4 0.6 0.8 1 x Hence, for small σ, we cannot prove that T∗is the global solution using variational or “small-o” methods. It does not mean that this is not true, most likely it is, but we certainly need other methods to prove it. In fact, the same picture is true for any σ > 0 when the extremal functions for z = 1 and for z = −1 are two proper Zolotarev splines (our Conjecture 6.1 in that, for σ > σ0, the function Mk,σ(·) is monotone on [0, 1] is not true, although it may still be true for σ = ∥T (n+1) n+1,r ∥). Aleksei Shadrin 35 4 Pointwise majorants 4.1 The case k = 1 Here we show how Andrei Markov proved the global inequality for the 1-st derivative using the fact that Zolotarev polynomials form the extremal set for the pointwise problem. Theorem 4.1 (A. Markov (1889)). We have sup ∥p∥≤1 ∥p′∥= T ′ n(1) = n2 . (4.1) Proof. For a fixed θ, the Zolotarev polynomial Zn(x, θ) satisfies the differ-ential equation 1 −y(x)2 = (1 −x2)(x −γ)(x −δ) n2(x −β)2 y′(x)2 , or y′2 = (x −β)2 (x −γ)(x −δ) · n2(1 −y2) 1 −x2 , where β, γ, δ are of the same sign, and |x| ≤1 < |β| < |γ| < |δ|. –1 –0.5 0.5 1 1.5 2 2.5 –1 –0.5 0.5 1 1.5 2 β γ δ The latter implies 0 < (x −β)2 (x −γ)(x −δ) < 1 , whence y′2 ≤n2(1 −y2) 1 −x2 ≤ n2 1 −x2 . 36 Twelve Proofs of the Markov Inequality The same inequality is valid for the Chebyshev polynomial Tn, for its transfor-mations Tn(ax + b) with |a| < 1, and for Tn−1. Hence M1(z) ≤ n √ 1 −z2 ⇔ |p′(x)| ≤ n √ 1 −x2 ∥p∥, (4.2) and we have arrived at the Bernstein inequality for algebraic polynomials (which A. Markov did not stop on). The last step is described in every monographs: a) if |x| ≤cos π 2n, then n √ 1−x2 ≤n2, b) if |x| > cos π 2n, then |p′(x)| ≤|T ′ n(x)|∥p∥≤n2 ∥p∥. Comment 4.2. Nowadays, the usual way to prove the Bernstein “alge-braic” inequality (4.2) (hence A. Markov’s inequality (4.1)) is through the Bern-stein inequality for trigonometric polynomials ∥t′ n∥≤n ∥tn∥, (4.3) since the latter has a very simple proof based on the so-called comparison lemma: ∥tn∥< 1, |tn(η)| = | cos nξ| ⇒ |t′ n(η)| < n| sin nξ| . However, Bernstein himself moved the other way round . Firstly, exactly in the same way as A. Markov (see the next comment), he derived (4.2). With the substitution x = cos θ, this gives the trigonometric version (4.3) only for even polynomials tn(θ) = P ak cos kθ, so he proved one more algebraic inequality ∂ ∂x(p(x) √ 1 −x2) ≤ n √ 1 −x2 max |p(x) p 1 −x2|, p ∈Pn−1 , which provides (4.3) for odd tn(θ) = P bk sin kθ. Finally, he got the general result by a tricky combination of those two. Comment 4.3. Bernstein derived the “Bernstein” inequality (4.2) exactly in the same way as A. Markov, which we have just described. He accompanied his result with the following footnote: “This is the statement of A. Markov’s theorem given in his aforementioned paper. 
Unfortunately, I became acquainted with that paper, as well as with the composition of V. A. Markov, only when preliminary algebraic theorems, which constitute the content of the present chapter, were found and derived independently by my-self. No doubt earlier acquaintance with the ideas of these scientists would have simplified my task and, probably, the presentation of this chapter. However, I considered it unnecessary to put changes into my fully accomplished proofs, because of the auxiliary character of the above-mentioned theorems ...” and there are further 2-3 lines of these beautiful poetry. Aleksei Shadrin 37 4.2 Bernstein’s results Bernstein was very enthusiastic about Markov’s inequality. Not only made he Markov’s results available to the western public, but he also put a lot of effort into deepening and improving them. It was not until 1938 that he managed to find a simpler proof, but meanwhile he produced several important refinements. 1) First of all, by iterating his inequality for the 1-st derivative, |p′(x)| ≤ n √ 1 −x2 ∥p∥, (4.4) Bernstein found a pointwise majorant for all k: |p(k)(x)| ≤ √ k √ 1 −x2 !k n(n −1) · · · (n −k + 1) ∥p∥. (4.5) The proof for the case k = 3 gives the general flavour: |p′′′(x)| ≤ n −2 p x2 1 −x2 ∥p′′∥C[−x1,x1] ≤ n −2 p x2 1 −x2 n −1 p x2 2 −x2 1 ∥p′∥C[−x2,x2] ≤ n −2 p x2 1 −x2 n −1 p x2 2 −x2 1 n p 1 −x2 2 ∥p∥C[−1,1], where x1, x2 are any numbers satisfying x2 < x2 1 < x2 2 < 1, and the choice x2 1 −x2 = x2 2 −x2 1 = 1 −x2 2 = 1−x2 k is clearly optimal and does the job. The estimate (4.5) shows in particular that, for a given k, the order of the k-th derivative of p ∈Pn inside the interval is O(nk) thus differing essentially from the order O(n2k) at the endpoints. 2a) He did not stop with that and, in 1913, established the exact asymp-totic bound: Mk(x) ∼  n √ 1 −x2 k . For this proof, Bernstein found an exact form of the polynomial q ∈Pn−2 that deviates least from the function φ(x) = cxn + σxn−1 + A x −a , |a| > 1, and, letting A →0, derived asymptotic formulas for Zolotarev polynomial. 2b) In the same paper , still bothered by complexity of V. Markov’s proof, he suggested simpler arguments that provide asymptotic form of Markov’s inequality |p(k)(x)| < Mk(1 + ϵn), ϵn = O(1/n2). (4.6) 38 Twelve Proofs of the Markov Inequality Here they are. Assuming that ∥p∥C[−1,1] = 1, it is quite easy to show that |p(k)(1)| ≤T (k) n (1), and applying this inequality to the interval [−1, x] we obtain the majorant |p(k)(x)| ≤T (k) n (1)  2 x + 1 k =: F(x). Comparing it with the previous majorant (4.5), |p(k)(x)| ≤ √ k √ 1 −x2 !k n! (n −k)! =: G(x), we notice that, on [0, 1], the functions F and G are decreasing and increas-ing respectively, hence the common bound for |p(k)(x)| is given by the value F(x∗) = G(x∗) which results in (4.6). 3) Finally, in 1930, Bernstein generalized his classical inequality (4.4) to the case when p is bounded by a polynomial majorant: if |pn+m(x)| ≤µ(x) = p P 2(x) + (1−x2)Q2(x) , where P and Q are two polynomials of degree m and and (m−1) respectively, which have interlacing zeros, then |p′ n+m(x)| ≤ sˆ nP(x) + xQ(x) + (x2−1)Q′(x) ˜2 + (1−x2) ˆ P ′(x) + nQ(x) ˜2 1 −x2 . (4.7) As a consequence, he concluded (without proofs) that if f(x) > 0 is any continuous function, then |pn(x)| ≤f(x) ⇒ |p′ n(x)| ≤ nf(x) √ 1 −x2 (1 + O(1/n)) , and, moreover, |pn(x)| ≤f(x) ⇒ ∥p(k) n ∥≤T (k) n (1)f(±1)(1 + O(k2/n)) . (With respect to the last two results, I have some doubts. 
I think that the value En(f) of the best approximation to f should be somehow involved.) 4.3 Schaeffer–Duffin’s majorant In 1938, the same year when Bernstein produced his proof of Markov’s in-equality using small-o arguments, two American mathematicians, R. Duffin and A. Schaeffer, came out with another proof , the main part of which was a generalization of the pointwise Bernstein inequality p′(x) ≤ n2 √ 1−x2 ∥p∥to higher derivatives. It is a very nice and short paper, so we only sketch briefly the main elements of the proofs. Let Tn be the Chebyshev polynomial and Sn(x) := 1 n √ 1 −x2 T ′ n(x). Aleksei Shadrin 39 Theorem 4.4 (Schaeffer-Duffin (1938)). Let p ∈Pn be such that |p(x)| ≤1 ≡|Tn(x) + iSn(x)| . Then |p(k)(x)| ≤Dk(x) := |T (k) n (x) + iS(k) n (x)| . Proof (Sketch). The formulation of the theorem is a bit misleading because what Schaeffer–Duffin really assume is that, by Bernstein’s inequality, |p′(x)| < D1(x) = |T ′ n(x) + iS′ n(x)| = n √ 1 −x2 , p ̸= ±Tn (and it is essential that S′ n is unbounded near the endpoints). From that it follows that, for every α ∈(0, π) and for every λ ∈[−1, 1], the function F ′(x) := cos α T ′ n(x) + sin α S′ n(x) −λp′(x) = n sin(nt−α) √ 1−x2 −λp′(x)  has at least n distinct zeros in (−1, 1). They also prove that F (n+1) = cS(n+1) does not change sign, hence, on (−1, 1), F (k) has exactly (n + 1 −k) zeros all of which are simple. Finally, they show that, if one supposes that, at some x0 ∈(−1, 1), |p(k)(x0)| ≥Dk(x0), then one can choose particular α and λ so that F (k) has a double zero at such x0, a contradiction that proves the theorem. □ Lemma 4.5. For all k, we have a) Dk(·) is a strictly increasing function on [0, 1), b) the (n−k) zeros of T (k) n interlace with (n−k+1) zeros of S(k) n . Proof (Sketch). This lemma is trivial for k = 1 because D1(x) = n √ 1−x2 and S′ 1(x) = −nTn(x) √ 1−x2 (hence a simple proof of A. Markov’s inequality), but for general k Schaeffer–Duffin had to come through the following arguments. Both functions T (k) n and S(k) n are independent solutions of the differential equation (1 −x2)y′′(x) −(2k + 1)xy′(x) + (n2 −k2)y(x) = 0, hence, by Sturm’s theorem, their zeros interlace. The latter equation may also be rewritten in the equivalent form d dx  (1 −x2)[fk+1(x)]2 + (n2−k2)[fk(x)]2 = 4kx [fk+1(x)]2 , (4.8) 40 Twelve Proofs of the Markov Inequality to which [T (k) n (x)]2 and [S(k) n (x)]2, hence also [Dk(x)]2, are particular solutions. Substituting the power series of [Dk(x)]2 into (4.8), they derive by induction on k that [Dk(x)]2 = ∞ X i=0 a2ix2i, a2i > 0. □ Proof of V. Markov’s inequality. From two previous results, Schaeffer– Duffin derived V. Markov’s inequality ∥p(k)∥≤∥T (k) n ∥∥p∥ exactly in the same way as A. Markov’s inequality for the 1-st derivative ∥p′∥≤ n2∥p∥can be derived from the Bernstein inequality |p(x)| ≤ n √ 1−x2 ∥p∥. Namely, for x∗being the rightmost zero of S(k) n , it follows that a) if |x| ≤x∗, then |p(k)(x)| ≤Dk(x) ≤Dk(x∗) = T (k) n (x∗), b) if |x| > x∗, then |p(k)(x)| ≤|T (k) n (x)| (by Rolle’s theorem). and the proof is completed. □ 0 2 4 6 8 10 y –1 –0.8 –0.6 –0.4 –0.2 0.2 0.4 0.6 0.8 1 x 1) At the (n −k + 1) zeros of S(k) n (x) Mk(x) = Dk(x) = |T (k) n (x)| . 2) Otherwise Mk(x) < Dk(x) . Dk(x) Mk(x) For k = 1, the Schaeffer–Duffin majorant coincides with that of Bernstein, D1(x)= n √ 1−x2 , but they did not try to find its exact form for any other k. We performed some computations ourselves. Lemma 4.6. 
For all k, we have 1 n2 [Dk+1(x)]2 = k X m=0 bm (1 −x2)k+1+m , where bm = k+m 2m  12 · 32 · · · (2m −1)2 · (n2−(m+1)2) · · · (n2−k2). Aleksei Shadrin 41 Proof. Assuming that 1 n2 [Dk(x)]2 = Pk−1 m=0 am (1−x2)k+m , from (4.8) we ob-tain 1 n2 −k2 b0 = a0, 1 n2 −k2 b1 = k + 1 k −1 a1, . . . , 1 n2 −k2 bk−1 = 2k −1 1 ak−1, and the last coefficient bk = 12 · 32 · · · (2k−1)2 is found from k X m=0 bk = [Dk(0)]2, Dk(0) =  n(n2 −12)(n2 −32) · · · (n2 −(k −2)2), odd k; n2(n2 −22)(n2 −42) · · · (n2 −(k −2)2), even k. In particular, we get 1 n2 [D2(x)]2 = (n2−1) (1 −x2)2 + 1 (1 −x2)3 , 1 n2 [D3(x)]2 = (n2−1)(n2−4) (1 −x2)3 + 3(n2−4) (1 −x2)4 + 9 (1 −x2)5 , 1 n2 [D4(x)]2 = (n2−1)(n2−4)(n2−9) (1 −x2)3 + 6(n2−4)(n2−9) (1 −x2)4 + 45(n2−9) (1 −x2)5 + 225 (1 −x2)6 . 4.4 Generalization: Vidensky majorant In 1951, Vidensky extended results of Schaeffer–Duffin to the case when restrictions on p are given by an arbitrary polynomial majorant: |p(x)| ≤µ(x) = p R2m(x), where R2m is any polynomial of degree ≤2m that is non-negative on [−1, 1]. By Lucas theorem, for any n ≥m, such a polynomial can be represented in the form R2m(x) = P 2 n(x) + (1 −x2)Q2 n−1(x), where Pn and Qn−1 satisfy the following conditions: a) Pn ∈Pn and Qn−1 ∈Pn−1; b) all zeros of Pn and Qn−1 lie in [−1, 1] and interlace; c) the leading coefficients of Pn and Qn−1 are positive. Moreover, Pm+n(x) + i √ 1−x2Qm+n−1(x) = Pm(x) + i √ 1−x2Qm−1(x) [Tn(x) + iSn(x)] (4.9) Theorem 4.7 (Vidensky (1951)). Let p ∈Pn be such that |p(x)| ≤µ(x) ≡|Pn(x) + i p 1 −x2Qn−1(x)| . Then |p(k)(x)| ≤Vk(x) := P (k) n (x) + i p 1 −x2Qn−1(x) (k) . 42 Twelve Proofs of the Markov Inequality In his proof, Vidensky follows the same route as Schaeffer–Duffin, taking as the starting point the generalization of the classical Bernstein inequality (that was established by Bernstein himself, see (4.7)) |p′(x)| < V1(x) = P ′ n(x) + i p 1 −x2Qn−1(x) ′ , p ̸= ±Pn . However, it was not a straightforward journey, because the Schaeffer–Duffin arguments heavily relied on the fact that both Tn(x) and √ 1 −x2 1 nT ′ n(x) sat-isfy one and the same differential equation, whereas Pn and √ 1 −x2Qn−1(x) have no such property in general. One of Vidensky innovations was a state-ment about functions with interlacing zeros that generalized the well-known V. Markov’s result about polynomials. Lemma 4.8. Let f1, f2 ∈C1[a, b] be two functions such that any linear combination c1f1 + c2f2 has ≤n zeros counting multiplicity. If both f1 and f2 have n zeros, all simple, and these zeros interlace, then zeros of f ′ 1 and f ′ 2 interlace too. Proof (Sketch). Let (ti)n i=1 be the zeros of f1, then by the interlacing conditions the function g = c1f1 + c2f2 alternates in sign on the sequence (ti)n i=1, hence all of its zeros are simple. The latter means that, for any x, the system c1f1(x) + c2f2(x) = 0, c1f ′ 1(x) + c2f ′ 2(x) = 0 has only the trivial solution, thus f1(x)f ′ 2(x) −f ′ 1(x)f2(x) ̸= 0, ∀x ∈[a, b] . From here we get that at the points (si)n−1 i=1 where f ′ 1(si) = 0, we have sgn f ′ 2(si) = sgn f1(si) = (−1)iγ, and the conclusion follows. □ The only result of Schaeffer–Duffin (based on the differential equation) for which Vidensky did not find an appropriate substitution was monotonicity of Dk, i.e., he did not find any general tools to verify the inequality Vk(x) ? ≤Vk(x∗). This is however the crucial point in the pass from the pointwise estimate to the global one, and as a result Vidensky could not obtain the Markov-type inequality for an arbitary majorant. 
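Vidensky's Lemma 4.8 (and V. Markov's Lemma 5.11 below, of which it is a continuous analogue) is easy to test numerically for polynomials. The following sketch, assuming numpy is available and using arbitrarily chosen interlacing zeros (our own illustration, not data from the paper), verifies that the zeros of p^(k) and q^(k) keep interlacing for every k.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Numerical illustration of the Markov interlacing property (cf. Lemma 4.8 and Lemma 5.11):
# if the zeros of p and q interlace, then so do the zeros of p^(k) and q^(k).
# The roots below are illustrative choices only.
s = [-1.0, -0.5, 0.1, 0.7]      # zeros of p
t = [-0.8, -0.2, 0.4, 0.9]      # zeros of q, with  s_i < t_i < s_{i+1}

p = P.polyfromroots(s)          # coefficients of p, lowest degree first
q = P.polyfromroots(t)

for k in range(1, len(s)):
    xi  = np.sort(P.polyroots(P.polyder(p, k)).real)   # zeros of p^(k)
    eta = np.sort(P.polyroots(P.polyder(q, k)).real)   # zeros of q^(k)
    ok  = all(xi[j] < eta[j] and (j + 1 == len(xi) or eta[j] < xi[j + 1])
              for j in range(len(xi)))
    print(f"k = {k}:  zeros of p^(k) = {np.round(xi, 4)},"
          f"  zeros of q^(k) = {np.round(eta, 4)},  interlace: {ok}")
```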
In a series of paper -, he covered a number of particular cases where he suceeded to prove monotonicity of Vk using monotonicity of Dk. Aleksei Shadrin 43 Theorem 4.9 (Vidensky (1958-71)). Let µ2(x) = Qm i=1(1 + a2 i x2), k = 1, or µ2(x) = (1 + a2x2)(1 + b2x2), k ≥1, or µ2(x) = 1 + (a2 −1)x2, k ≥1. Then [Vk(x)]2 = P∞ 0 c2ix2i with c2i ≥0, whence Mk,µ := sup |p(x)|≤µ(x) ∥p(k)∥= ∥ω(k) µ ∥, where ωµ = Pn is the polynomial oscillating most between ±µ. These works of Vidensky remained largely unknown, and some of his results were rediscovered later. For example, the Markov-type inequality with circular majorant µ(x) = √ 1 −x2 was reproved in 70s by Pierre and Rahman in , , . Bojanov and Naidenov - proved the interlacing property for perfect splines and alike using arguments quite similar to those of Vidensky. We close this section with our own observation about the explicit form of the function Vk(x) which Vidensky did not try to compute. Now, there is no differential equation that allows us to find the exact formula for Dk for all n, k as in the previous section, but we can derive some recurrence relations instead. Lemma 4.10. For each k [Vk(x)]2 = [pk(x)]2 + (1 −x2)[qk(x)]2 (1 −x2)2k−1 , where pk and qk are polynomials from Pm+k−1 and Pm+k−2, respectively. Proof. The formula follows from the representation Pm+n(x) + i √ 1−x2Qm+n−1(x) (k) = (−i) " pk(x) + i √ 1 −x2qk(x) (1 −x2)k−1/2 # [Tn(x) + iSn(x)] which is derived from (4.9) by induction using the relation T ′ n(x) + iS′ n(x) = (−i) n √ 1 −x2 [Tn(x) + iSn(x)] . Moreover, we have the recurrence formula p1(x) = nPm(x) + xQm−1(x) −(1 −x2)Q′ m−1(x), q1(x) = nQm−1(x) + P ′ m(x), pk+1(x) = (1−x2)[p′ k(x) + nqk(x)] + (2k−1)xpk(x); qk+1(x) = (1−x2)q′ k(x) + 2(k−1)xpk(x) −npk(x) However, it is not clear whether it is possible to extract from here information about momotonicity of Vk (assuming, say, that µ is monotone). □ 44 Twelve Proofs of the Markov Inequality 5 Markov–Duffin–Schaeffer inequalities 5.1 Duffin–Schaeffer refinement for the discrete restrictions In 1941, Duffin–Schaeffer (now in alphabetical order) presented another proof of the Markov inequality that moreover strengthened Markov’s result in two different directions. Namely, they showed that in order to reach the conclusion |p(k)(x)| ≤T (k) n (1) it is sufficient, instead of the uniform bound ∥p∥≤1, to assume that |p(x)| ≤1 at the n + 1 points x ∈{cos πi n }n i=0 only. At the same time they showed that, under this weaker assumption, the Markov inequality can be extended to the complex plane. They started out by taking a rather general point of view. Namely, given a polynomial q with n distinct real zeros, q(z) = c n Y ν=1 (z −xν), q′(z) = q(z) n X ν=1 1 z −xν , (5.1) they tried to figure out the class K of polynomials and the conditions on q for which the derivative q′ takes the values larger than the derivative of any other polynomial p ∈K. By the Lagrange interpolating formula with the nodes (xν), we have p′(z) = n X ν=1 p′(xν) q′(xν) q(z) z −xν , and it is suggestive to consider those p ∈Pn that satisfy | p′(xν) q′(xν)| ≤1, so that p′(z) = q(z) n X ν=1 ϵν z −xν , ϵν ∈[−1, 1]. (5.2) It is clear that we may restrict ourselves to the polynomials for which ϵν = ±1, in particular, for real x we obtain |p′(x)| ≤ n X ν=1 q(x) x −xν . (5.3) Now, one needs to find a way to compare the two sums in (5.1) and (5.2), and Duffin–Schaeffer’s choice was the following elementary lemma from complex analysis. Lemma 5.1. 
Let p(z) = anzn + · · · a0 be any polynomial, and let q(z) = bnzn + · · · b0 be a polynomial with all its zeros lying to one side of a line ℓin the complex plane. If |p(z)| ≤|q(z)| on ℓ, Aleksei Shadrin 45 then |p(k)(z)| ≤|q(k)(z)| on ℓ, k = 1..n. Theorem 5.2. Let q(z) = c Qn ν=1(z −xν) with distinct xν ∈R, and let p ∈Pn be a polynomial that satisfies |p′(x)| ≤|q′(x)| at the zeros of q. If all zeros of q lie to the left of some b ∈R, and for some ξ0 ∈R we have |q(ξ0 + iy)| ≤|q(b + iy)|, ∀y ∈R , (5.4) then |p(k)(ξ0 + iy)| ≤|q(k)(b + iy)| , ∀y ∈R . (5.5) Proof. There is no loss of generality in assuming that ξ0 = 0. 1) Set b q(z) := c n Y ν=1 (z −|xν|), so that b q′(z) = b q(z) n X ν=1 1 z −|xν|. Then, from (5.2), we derive p′(iy) q(iy) = n X ν=1 ϵν iy −xν = n X ν=1 ϵν(xν + iy) x2 ν + y2 = n X ν=1 ϵνxν x2 ν + y2 + i n X ν=1 ϵνy x2 ν + y2 ≤ n X ν=1 |xν| |xν|2 + y2 + n X ν=1 iy |xν|2 + y2 = n X ν=1 1 iy −|xν| = b q′(iy) b q(iy) . Since clearly |b q(iy)| = |q(iy)|, ∀y ∈R , (5.6) we conclude that |p′(iy)| ≤|b q′(iy)|, ∀y ∈R . (5.7) 2) Now we are ready to apply Lemma 5.1. From (5.7), since all zeros of b q′ lie to the right of the line ℓ= {iy}, we obtain |p(k)(iy)| ≤|b q(k)(iy)|, ∀y ∈R . (5.8) Now we use Lemma 5.1 to evaluate |b q(k)(iy)|. From (5.6) and (5.4) (with ξ0 = 0), it follows that |b q(iy)| ≤|q(b + iy)| for all y ∈R, and because all zeros of q lie to the left of ℓ= iR, we conclude |b q(k)(iy)| ≤|q(k)(b + iy)|, ∀y ∈R , (5.9) and that together with (5.8) proves (5.5). □ 46 Twelve Proofs of the Markov Inequality Theorem 5.3 (Duffin–Schaeffer (1941)). If p ∈Pn satisfies |p(x)| ≤1, x ∈{cos πi n }n i=0 , then |p(k)(x + iy)| ≤|T (k) n (1 + iy)|, ∀x ∈[−1, 1], ∀y ∈R . (5.10) Proof. Theorem 5.3 is reduced to Theorem 5.2 by means of the following statements. Lemma 5.4. If a polynomial p ∈Pn satisfies |p(x)| ≤|Tn(x)| wherever |Tn(x)| = 1 then |p′(x)| ≤|T ′ n(x)| at the zeros of Tn . Lemma 5.5. We have |Tn(x + iy)| ≤|Tn(1 + iy)|, ∀x ∈[−1, 1], ∀y ∈R . We omit the proofs, and make only short comments. The main remark is that, unlike the rather general Theorem 5.2, these proofs depend on specific properties of the Chebyshev polynomials. 1) The first lemma is derived by differentiating the Lagrange formula with the nodes (cos πi n ), p(x) = n X ν=0 p(tν) ω′(tν) ω(x) x −tν , ω(x) := (x2 −1)T ′ n(x), and using the differential equation (x2 −1)T ′′ n(x) + xTn(x) = n2Tn(x) . 2) The second lemma is not that straightforward, and Duffin–Schaeffer’s proof is a bit tricky and lengthy, where the specific form of the roots of Tn play an important role. □ Comment 5.6. Duffin–Schaeffer’s original proof of Theorem 5.2 develops a bit differently from our presentation. They use Lemma 5.1 only once, in deriving the inequality (5.9) for k = 1, and then combine the latter with (5.7), thus proving Theorem 5.2 firstly for k = 1. They proceed further by induction on k, and for that they prove that if |p(k)(x)| ≤|q(k)(x)| wherever q(k−1)(x) = 0, then |p(k+1)(x)| ≤|q(k+1)(x)| wherever q(k)(x) = 0 . We cut this step and used Lemma 5.1 to derive both estiamtes (5.8) and (5.9) for all k at once. Aleksei Shadrin 47 Comment 5.7. Lemma 5.1 appeared originally in 1926 in Bernstein’s mo-nograph as “Troisieme corollaire” on pp. 55-56, with a rather lengthy proof (if you move all the way through). Later, Bernstein also showed that it is valid for a circle c instead of a line ℓ(by mapping c onto ℓusing a M¨ obius transform). 
Duffin–Schaeffer were perhaps unaware of this result and in their work gave their own short proof based on Rouché's theorem (without making it an independent statement). In 1947, de Bruijn generalized the result to the boundary of an arbitrary convex domain and made the proof even shorter. His proof is, however, too concise, so here is the one from Rivlin's book [52, p. 142].

Proof. Since $q$ has no zeros in the half-plane $H$, and $\deg p \le \deg q$, the function $p/q$ is analytic in $H$, hence, by the maximum principle,
$$\max_{z\in H}\Big|\frac{p(z)}{q(z)}\Big| = \max_{z\in\ell}\Big|\frac{p(z)}{q(z)}\Big| \le 1 .$$
Thus, for any $|\lambda|>1$, the polynomial $p-\lambda q$ has no zeros in $H$, and by the Gauss–Lucas theorem the same is true for each of its derivatives $p^{(k)}-\lambda q^{(k)}$, hence $|p^{(k)}(z)| \le |q^{(k)}(z)|$ in $H\cup\ell$. □

Comment 5.8. The Duffin–Schaeffer inequality (5.10) makes little sense for points $z=x+iy$ outside the unit (or even a smaller) disc, because for such $z$ a better estimate can be obtained by simpler tools. Let $q(x)=\prod_{\nu=1}^n(x-x_\nu)$ and let $(\tau_\nu)$ satisfy $\tau_0<x_1<\tau_1<\cdots<x_n<\tau_n$. Then, for any $p\in\mathcal P_n$ such that $|p(\tau_\nu)|\le|q(\tau_\nu)|$, we have the inequality
$$|p^{(k)}(z)| \le |q^{(k)}(z)|, \qquad z\notin D,$$
where $D$ is the open disc with $(\xi,\eta)$ as its diameter. Here $\xi$ (resp. $\eta$) is the leftmost (rightmost) zero of the polynomial $\omega_n^{(k)}$ (of $\omega_0^{(k)}$), where $\omega_i(x)=\frac{\omega(x)}{x-\tau_i}$ and $\omega(x)=\prod(x-\tau_i)$. So, under the assumption of Theorem 5.3, we have
$$|p^{(k)}(x+iy)| \le |T_n^{(k)}(x+iy)|, \qquad x+iy\notin D_{r_k},$$
with $r_k$ being the rightmost zero of the polynomial $\big[(x-1)T_n'(x)\big]^{(k)}$.

5.2 Duffin–Schaeffer inequalities with majorant

A natural question is whether the Duffin–Schaeffer refinement can be extended to the Markov inequalities with a majorant. Namely, given a majorant $\mu(x)\ge 0$, let $\omega_\mu\in\mathcal P_n$ be the polynomial oscillating most between $\pm\mu$, which is very likely to attain the supremum
$$M_{k,\mu} := \sup_{|p(x)|\le\mu(x)} \|p^{(k)}\| .$$
If that is the case, then, for $\delta^*:=(\tau_i^*)$, the set of the oscillation points of $\omega_\mu$, we define a Duffin–Schaeffer-type constant
$$D^*_{k,\mu} := \sup_{|p(x)|_{\delta^*}\le|\mu(x)|_{\delta^*}} \|p^{(k)}\| ,$$
and ask whether the two values are the same (as they are for $\mu\equiv 1$). Moreover, since for any $\mu$ we have $\|\omega_\mu^{(k)}\|\le M_{k,\mu}\le D^*_{k,\mu}$, we may try to solve the Duffin–Schaeffer problem even if the solution to the Markov problem is not known.

With Duffin–Schaeffer's general Theorem 5.2, all we have to do is to establish analogues of Lemmas 5.4–5.5 for the corresponding polynomial $\omega_\mu$. However, this turns out to be a rather difficult task. First of all, the set $\delta^*=(\tau_i^*)$ where $\omega_\mu$ touches the majorant $\mu$ is no longer as simple as in the case $\mu\equiv 1$, or is not even known in explicit form. But even if one knows it (say, as with $\mu(x)=(1-x^2)^{m/2}$), one has to go through a rather delicate analysis to show that $|p^{(m_0+1)}(x)|\le|\omega^{(m_0+1)}(x)|$ at the zeros of $\omega^{(m_0)}$ (and this may even fail for the derivatives of order $\le m_0$). Secondly, as we mentioned, the inequality $|\omega^{(m_0)}(x+iy)|\le|\omega^{(m_0)}(1+iy)|$ was not that easy to establish even for $\omega=T_n$, and this is another quite serious obstacle to obtaining Duffin–Schaeffer-type results even for the simplest majorants (using Theorem 5.2). This explains why only two results have been obtained in this direction.

Theorem 5.9 (Rahman–Schmeisser (1988)).
$$\mu(x)=\sqrt{1-x^2} \;\Rightarrow\; \begin{cases} \|\omega_\mu^{(k)}\| = M_{k,\mu} < D_{k,\mu}, & k=1;\\ \|\omega_\mu^{(k)}\| = M_{k,\mu} = D_{k,\mu}, & k>1. \end{cases}$$

Theorem 5.10 (Rahman–Watt (1992)).
$$\mu(x)=1-x^2 \;\Rightarrow\; \|\omega_\mu^{(k)}\| = M_{k,\mu} = D_{k,\mu}, \qquad k>2.$$
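For the classical case $\mu\equiv 1$ the Duffin–Schaeffer refinement (Theorem 5.3, restricted to real $x$) is easy to probe numerically. The sketch below, assuming numpy and using randomly generated admissible data (our own illustration, not part of any of the proofs), interpolates values of modulus at most 1 prescribed at the $n+1$ Chebyshev extrema and checks that the resulting polynomials never exceed the Markov bound $T_n^{(k)}(1)$; the alternating data $(-1)^i$ recovers $T_n$ itself and attains the bound.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Brute-force illustration of the Duffin-Schaeffer refinement in the classical case mu == 1
# (Theorem 5.3 restricted to real x): if |p(cos(pi*i/n))| <= 1 at the n+1 Chebyshev extrema,
# then max_{[-1,1]} |p^(k)(x)| <= T_n^(k)(1).  A numerical check only, not a proof.
rng = np.random.default_rng(0)
n, k = 8, 3
nodes = np.cos(np.pi * np.arange(n + 1) / n)        # Chebyshev extrema cos(pi*i/n)
xs = np.linspace(-1.0, 1.0, 5001)

tn = np.zeros(n + 1); tn[-1] = 1.0                  # T_n in the Chebyshev basis
markov_bound = C.chebval(1.0, C.chebder(tn, k))     # T_n^(k)(1)

worst = 0.0
for _ in range(2000):
    vals = rng.uniform(-1.0, 1.0, n + 1)            # discrete restriction |p| <= 1 at the nodes
    p = C.chebfit(nodes, vals, n)                   # the interpolating polynomial p in P_n
    worst = max(worst, np.max(np.abs(C.chebval(xs, C.chebder(p, k)))))

# The alternating data (-1)^i reproduces T_n itself and attains the bound.
p_star = C.chebfit(nodes, (-1.0) ** np.arange(n + 1), n)
assert np.isclose(C.chebval(1.0, C.chebder(p_star, k)), markov_bound)

print(f"max over samples of ||p^({k})||: {worst:.1f}  <=  T_{n}^({k})(1) = {markov_bound:.1f}")
```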
Recently, Nikolov suggested a method that allowed to prove the in-equality |ω(x + iy)| ≤|ω(1 + iy)|, x ∈[−1, 1], y ∈R, for a sufficiently large class of polynomials, namely the ultraspherical polyno-mials ω = P (α,α) orthogonal with the weight (1 −x2)α−1/2 (where α > −1/2). This could lead to the Duffin–Schaeffer inequalities with the majorant µ(x) = (1 −x2)−α/2 which is however unbounded for α > 0. Aleksei Shadrin 49 5.3 Another proof of Duffin–Schaeffer inequality Duffin and Schaeffer derived their inequality starting with the Lagrange repre-sentation of the derivative of a polynomial p′ based on the roots of an a priori given polynomial q. In 1992, we took a more natural approach choosing as a starting point the Lagrange formula with exactly those points where the discrete restrictions on p are actually given. The main tool of this approach is the following lemma about polynomials with interlacing zeros. Lemma 5.11 (V. Markov (1892)). If the zeros of p(x) = Qn i=1(x −si) and q(x) = Qn i=1(x −ti) interlace, i.e., si ≤ti ≤si+1 all i, then the zeros of p(k)(x) =: Qn−k i=1 (x−ξi) and q(k)(x) =: Qn−k i=1 (x−ηi) interlace too (and, moreover, strictly): ξi < ηi < ξi+1 all i. There are many (short) proofs of this remarkable lemma, the simplest one is perhaps by Rivlin [52, p.125], but one may choose also from V. Markov [7, §34], Bojanov , or take that of Vidensky that we gave in Lemma 4.8. We will write p ⪯q if the polynomials p(x) = Qn i=1(x −ti) and q(x) = Qn i=1(x −si) have interlacing zeros, i.e., ti ≤si ≤ti+1. Then the Markov’s lemma can be written as: p ⪯q implies p(k) ≺q(k). Now we begin another version of the book-proof of Markov’s inequality. Book-proof. Given a polynomial q ∈Pn and a sequence δ of n + 1 points, we will study the value sup |p(x)|δ≤|q(x)|δ |p(k)(x)| , x ∈[−1, 1] , and we want to find when it can be majorized by ∥q(k)∥. We obtain the Markov– Duffin–Schaeffer inequality by setting δ = (cos πi n ) and q = Tn. Definition 5.12. Given δ = (τi)n i=0 on [−1, 1], set ω(x) := n Y i=0 (x −τi), ωi(x) := ω(x) x −τi , and let (ηj) and (ξj) be defined as η0 := −1, ω(k) 0 (x) =: c Qn−k j=1 (x −ηj), ω(k) n (x) =: c Qn−k j=1 (x −ξj), ξn−k+1 := +1. For k ∈N, we define the Chebyshev intervals: eT j = [ηj−1, ξj], eT δ = ∪n−k+1 j=1 eT j , the Zolotarev intervals: eZ j = (ξj, ηj), eZ δ = ∪n−k i=1 eZ j . 50 Twelve Proofs of the Markov Inequality Lemma 5.13. For any k, δ, j, the intervals eT j and eZ j are non-empty and, on any Chebyshev interval, we have sgn ω(k) 0 (x) = · · · = sgn ω(k) n (x) on eT j . Proof. We have ωn ⪯· · · ⪯ω0, hence ω(k) n ≺· · · ≺ω(k) 0 . Thus, zeros of ω(k) n and ω(k) 0 strictly interlace, i.e., ξj < ηj < ξj+1, thus eT j and eZ j are well-defined. Further, the j-th zero of any ω(k) i is located between those of ω(k) n and ω(k) 0 , which are ξj and ηj, respectively, i.e. on the Zolotarev interval, hence ω(k) i does not change its sign on the Chebyshev interval. It remains to notice that the leading coefficients of all ωi’s are equal 1, thus at their j-th zeros they change sign in the same way. □ Proposition 5.14. Let q(x) = Qn i=1(x −ti), and let δ = (τi)n i=0 be such that τi−1 < ti < τi, i.e., q alternates in sign on δ. If p ∈Pn satisfies |p(x)| ≤|q(x)| on δ, (5.11) then, for any k, |p(k)(x)| ≤|q(k)(x)| on eT δ . Proof. By the Lagrange interpolation formula with nodes (τi), p(x) = n X i=0 p(τi) ω′(τi) ω(x) x −τi = n X i=0 p(τi) ω′(τi)ωi(x) hence, |p(k)(x)| = n X i=0 p(τi) ω′(τi)ω(k) i (x) ≤ n X i=0 p(τi) ω′(τi) ω(k) i (x) (5.11) ≤ n X i=0 q(τi) ω′(τi) ω(k) i (x) . 
Now, both sequences $q(\tau_i)$ and $\omega'(\tau_i)$ alternate in sign, hence $\operatorname{sgn}\frac{q(\tau_i)}{\omega'(\tau_i)} = \mathrm{const}$ for all $i$, and, by Lemma 5.13, on any Chebyshev interval $\widetilde T_j$ we have $\operatorname{sgn}\omega_i^{(k)}(x) = \mathrm{const}$ for all $i$ as well. Thus,
$$\sum_{i=0}^n \Big|\frac{q(\tau_i)}{\omega'(\tau_i)}\Big|\,\big|\omega_i^{(k)}(x)\big| = \Big|\sum_{i=0}^n \frac{q(\tau_i)}{\omega'(\tau_i)}\,\omega_i^{(k)}(x)\Big| = |q^{(k)}(x)| ,$$
i.e., $|p^{(k)}(x)| \le |q^{(k)}(x)|$. □

Theorem 5.15 (Shadrin (1992)). Let $q$ have all its zeros in $[-1,1]$. If
$$|p(x)| \le |q(x)| \quad \text{at the zeros of } (x^2-1)\,q'(x),$$
then
$$|p^{(k)}(x)| \le \max\Big\{ |q^{(k)}(x)|,\ \big|\tfrac1k (x^2-1)q^{(k+1)}(x) + x\,q^{(k)}(x)\big| \Big\} .$$

Proof. We have $\omega(x) = c\prod_{i=0}^n(x-\tau_i) = (x^2-1)q'(x)$, hence
$$\omega_0(x) = (x-1)q'(x), \quad \omega_0^{(k)}(x) =: c\prod_{j=1}^{n-k}(x-\eta_j), \qquad \omega_n(x) = (x+1)q'(x), \quad \omega_n^{(k)}(x) =: c\prod_{j=1}^{n-k}(x-\xi_j),$$
and, by the previous proposition,
$$|p^{(k)}(x)| \le |q^{(k)}(x)| \quad \text{on } \widetilde T_j = [\eta_{j-1},\xi_j], \tag{5.12}$$
so that it is sufficient to prove that
$$|p^{(k)}(x)| < |r(x)| \quad \text{on } \widetilde Z_j = (\xi_j,\eta_j), \qquad \text{where } r(x) := r_k(x) := \tfrac1k (x^2-1)q^{(k+1)}(x) + x\,q^{(k)}(x).$$
1) From the equalities $\omega_{n/0}^{(k)}(x) = (x\pm1)q^{(k+1)}(x) + k\,q^{(k)}(x)$, it follows that
$$r(x) = \tfrac1k (x+1)\,\omega_0^{(k)}(x) - q^{(k)}(x) = \tfrac1k (x-1)\,\omega_n^{(k)}(x) + q^{(k)}(x).$$
From the definition of $(\xi_j)$ and $(\eta_j)$, we have $(x+1)\omega_0^{(k)}(x)\big|_{x\in\{\eta_j\}} = 0$ and $(x-1)\omega_n^{(k)}(x)\big|_{x\in\{\xi_j\}} = 0$, hence
$$r(\eta_j) = -q^{(k)}(\eta_j), \qquad r(\xi_j) = +q^{(k)}(\xi_j), \qquad \forall j.$$
2) Comparing these relations with (5.12), we obtain the inequalities
$$|p^{(k)}(x)| \le |r(x)| \quad \text{on } (\eta_j),\ (\xi_j),$$
and, because $q^{(k)}$ clearly does not change its sign on the Chebyshev interval $[\eta_{j-1},\xi_j]$, we also get the following sign pattern:
$$\operatorname{sgn} r(\eta_{j-1}) = -\operatorname{sgn} r(\xi_j).$$
3) So, for any $\gamma\in[0,1]$, at the endpoints of $\widetilde T_j = [\eta_{j-1},\xi_j]$ we have
$$|\gamma\, p^{(k)}(x)| \le |r(x)|, \qquad \operatorname{sgn} r(\eta_{j-1}) = -\operatorname{sgn} r(\xi_j),$$
hence each of the polynomials $r \pm \gamma p^{(k)} \in \mathcal P_{n-k+1}$ has a zero in each $\widetilde T_j$, i.e., the complete set of $n-k+1$ zeros on $\widetilde T_\delta$.
4) Thus, for any $\gamma\in[0,1]$, there are no zeros of $r \pm \gamma p^{(k)}$ on $\widetilde Z_j = (\xi_j,\eta_j)$, therefore $|p^{(k)}(x)| < |r(x)|$ there (otherwise a suitable choice of $\gamma$ and of the sign would produce such a zero), and we are done. □

Theorem 5.16. Let $q$ have all its zeros in $[-1,1]$, and let $|p(x)| \le |q(x)|$ at the zeros of $(x^2-1)\,q'(x)$. If
$$\big|\tfrac1k (x^2-1)q^{(k+1)}(x) + x\,q^{(k)}(x)\big| \le \|q^{(k)}\|,$$
then $\|p^{(k)}\| \le \|q^{(k)}\|$.

Lemma 5.17. For all $n$, and all $x\in[-1,1]$,
$$|T_n^{(k)}(x)| \le T_n^{(k)}(1), \tag{5.13}$$
$$\big|\tfrac1k (x^2-1)T_n^{(k+1)}(x) + x\,T_n^{(k)}(x)\big| \le T_n^{(k)}(1), \tag{5.14}$$
$$T_n^{(k)}(1) = \frac{n^2[n^2-1^2]\cdots[n^2-(k-1)^2]}{1\cdot3\cdots(2k-1)}. \tag{5.15}$$

Proof. We have $T_n(x) = \cos n\theta$, $T_n'(x) = n\,\frac{\sin n\theta}{\sin\theta}$, $x = \cos\theta$.
1) The equality $\sin n\theta = 2\sin\theta\,[\cos(n-1)\theta + \cos(n-3)\theta + \cdots]$ implies that, for $k=1$,
$$T_n^{(k)}(x) = \sum_{i=0}^{n-k} a_{ik} T_i(x), \qquad a_{ik} \ge 0. \tag{5.16}$$
Differentiating and expanding the terms on the right-hand side, we arrive at the same result for all $k$. Obviously, the maximum of the sum occurs at $x=1$, hence (5.13).
2) The equality $\sin^2 n\theta + \cos^2 n\theta = 1$ transforms into the identity $\frac{1-x^2}{n^2}[T_n'(x)]^2 + [T_n(x)]^2 = 1$, whose differentiation gives
$$(x^2-1)T_n''(x) + x\,T_n'(x) = n^2 T_n(x).$$
Differentiating further, we obtain the formula
$$(x^2-1)T_n^{(k+1)}(x) + (2k-1)\,x\,T_n^{(k)}(x) = [n^2-(k-1)^2]\,T_n^{(k-1)}(x).$$
For $x=1$ this reads $T_n^{(k)}(1) = \frac{n^2-(k-1)^2}{2k-1}\,T_n^{(k-1)}(1)$, and that proves (5.15).
3) If $k=1$, the left-hand side of (5.14) is evaluated as
$$|(x^2-1)T_n''(x) + x\,T_n'(x)| = |n^2 T_n(x)| \le n^2 = T_n'(1),$$
i.e., (5.14) is true for $k=1$. If $k>1$, then we also have
$$\big|\tfrac1k (x^2-1)T_n''(x) + x\,T_n'(x)\big| = \big|\tfrac1k\, n^2 T_n(x) + \tfrac{k-1}{k}\,x\,T_n'(x)\big| \le n^2 = T_n'(1),$$
and from
$$T_n^{(k)}(x) = \sum_{i=0}^{n-k+1} b_{ik}\,T_i'(x), \qquad T_n^{(k+1)}(x) = \sum_{i=0}^{n-k+1} b_{ik}\,T_i''(x), \qquad b_{ik}\ge 0,$$
it follows that
$$\big|\tfrac1k (x^2-1)T_n^{(k+1)}(x) + x\,T_n^{(k)}(x)\big| = \Big|\sum_i b_{ik}\big[\tfrac1k (x^2-1)T_i''(x) + x\,T_i'(x)\big]\Big| \le \sum_i b_{ik}\,\big|\tfrac1k (x^2-1)T_i''(x) + x\,T_i'(x)\big| \le \sum_i b_{ik}\,T_i'(1) = T_n^{(k)}(1). \qquad \Box$$
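The closed form (5.15) is also easy to confirm numerically. The following sketch (assuming numpy; the ranges of $n$ and $k$ are arbitrary) compares the value $T_n^{(k)}(1)$ computed from the Chebyshev coefficients with V. Markov's product formula.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sanity check of the closed form (5.15) for T_n^(k)(1); a numerical illustration only.
def tnk_at_1(n, k):
    c = np.zeros(n + 1); c[-1] = 1.0               # T_n in the Chebyshev basis
    return C.chebval(1.0, C.chebder(c, k))         # value of the k-th derivative at x = 1

def markov_formula(n, k):
    num = np.prod([n**2 - j**2 for j in range(k)]) # n^2 (n^2 - 1^2) ... (n^2 - (k-1)^2)
    den = np.prod([2*j + 1 for j in range(k)])     # 1 * 3 * ... * (2k - 1)
    return num / den

for n in range(2, 9):
    for k in range(1, n + 1):
        assert np.isclose(tnk_at_1(n, k), markov_formula(n, k), rtol=1e-9)
print("formula (5.15) confirmed for n = 2..8 and k = 1..n")
```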
Aleksei Shadrin 53 Theorem 5.18. If p ∈Pn satisfies |p(cos πi n )| ≤1, i = 0..n, then |p(k)(x)| ≤max n T (k) n (x) , | 1 k(x2 −1)T (k+1) n (x) + xT (k) n (x)| o ≤T (k) n (1) . Comment 5.19. Kalliomiemi was the first to notice that the discrete restrictions of Duffin–Schaeffer imply the same pointwise estimate |p(k)(x)| ≤|T (k) n (x)| on the Chebyshev set eT , as in the case of the stronger uniform restrictions ∥p∥≤1. He derived it using the Voronovskaya criterion for the norm of the linear functional µ(p) = p(k)(x), and such a result could have been easily extracted already from V. Markov’s work, but nobody before paid attention to this fact. Proposition 5.14 (which appeared in ) is from the same “very simple, but not noticed before” class. For example, in [52, pp. 125-127], Rivlin gives exactly the same statements as Lemmas 5.11 and 5.13 (they are going back to V. Markov), and even uses them to establish the pointwise estimate outside the interval, yet he does not notice that they also provide the pointwise estimate on the Chebyshev intervals. Comment 5.20. Let D∗ k(x) := sup |p(x)|δ∗≤1 |p(k)(x)| . (5.17) where δ∗= (τ ∗ i ) = (cos πi n ). This is the exact upper bound for the value of the k-th derivative of a polynomial p under the discrete Duffin–Schaeffer restrictions. By the interlacing property, at any given point x ∈[−1, 1], with some i = ix, sgn ω(k) 0 (x) = · · · = sgn ω(k) i (x) = −sgnω(k) i+1(x) = · · · = −sgn ω(k) n (x) and from the Lagrange formula it follows that the set of the extremal polyno-mials for the pointwise problem (5.17) consists of those ps ∈Pn that satisfy sgn ps(τi) = −sgnps(τi+1), i ̸= s, sgn ps(τs) = sgn ps(τs+1) . The set of these polynomials may be viewed as a discrete analogue of the one-parameter family of Zolotarev polynomials. The next figure illustrates how the graphs of D∗ k, Mk and our majorant rk relate to each other. 54 Twelve Proofs of the Markov Inequality 0 2 4 6 8 10 –1 –0.8 –0.6 –0.4 –0.2 0.2 0.4 0.6 0.8 1 x 1) On the Chebyshev intervals: Mk(x) = D∗ k(x) = |T (k) n (x)| . 2) On the Zolotarev intervals Mk(x) < D∗ k(x) < |rk(x)| . rk(x) D∗ k(x) Mk(x) 5.4 Duffin-Schaeffer inequalities for polynomials and the Landau-Kolmogorov problem Theorem 5.16 is much more convenient for applications than Theorem 5.2 of Duffin-Schaeffer because, of the two assumptions | 1 k(x2 −1)q(k+1)(x) + xq(k)(x)| ≤|q(k)(1)| , (5.18) |q(x + iy)| ≤|q(1 + iy)| , the first one is much easier to verify. 1) Theorem 5.16 was used to obtain many other estimates, the so-called Duffin-Schaeffer (DS-) inequalities for polynomials. Definition 5.21. The polynomial q(x) = Qn i=1(x −ti) and the mesh δ = (τi)n i=0 such that τ0 ≤t1 ≤τ1 ≤· · · ≤tn ≤τn are said to admit the DS-inequality if sup |p(x)|δ≤|q(x)|δ ∥p(k)∥= ∥q(k)∥. (5.19) Two typical results (see for further references). a) Bojanov and Nikolov showed that (5.19) is true for the ultraspherical polynomial q = P (α,α) n , and for the mesh δ consisting of the points of its local extrema. Actually, we may take any polynomial q whose (k−1)-st derivative has a positive Chebyshev expansion, i.e., q(k−1) = P aiTi with ai ≥0. b) Milev and Nikolov obtained a refinement of Schur’s inequality for the polynomials vanishing at the endpoints. Let b Tn(x) := Tn(x cos π 2n) be the Chebyshev polynomial stretched to satisfy b Tn(±1) = 0 and let (τi) be the points of its local extrema. Then p(±1) = 0, |p(τi)| ≤1 ⇒ ∥p(k)∥≤b T (k) n (1) . 
Aleksei Shadrin 55 2) In principle, Theorem 5.15 can be applied to obtain Duffin-Schaeffer inequalities with majorant, but it is unlikely that one can get here anything more than results for µ(x) = (1 −x2)m/2 for small values of m. 3) Whereas Duffin-Schaeffer’s inequality gives only a uniform bound, The-orem 5.18 provides also the pointwise estimate inside the interval [−1, 1]. This was used by Eriksson to derive the Landau-Kolmogorov inequality ∥f (k)∥≤n −k n ∥T (k) n ∥ ∥Tn∥∥f∥+ k n ∥T (k) n ∥ ∥T (n) n ∥ ∥f (n)∥. 5.5 Erroneous proof by Duffin–Karlovitz In 1984, Duffin and Karlovitz revisited the Duffin-Schaeffer inequality and tried to generalize it from polynomials to arbitrary Chebyshev systems. They started with a formalization of the problem. The discrete restriction |p(x)| ≤1 on a set of n + 1 points in [−1, 1] is equivalent to bounding by the node norm ∥f∥δ := max i |f(τi)| that is defined for any given knot-sequence δ = (τi)n i=0, where −1 ≤τ0 < τ1 < . . . < τn ≤1. Problem 5.22 (Duffin–Schaeffer inequality). For integer n, k, find Dk := inf δ∈[−1,1] sup ∥p∥δ≤1 ∥p(k)∥. (5.20) In these notations the Markov–Duffin–Schaeffer results state that Mk = Dk = ∥T (k) n ∥, and the Chebyshev polynomial Tn, which equioscillates (n + 1) times between ±1, is extremal for both problems. In particular, the n+1 points of its equioscil-lation form the set δ∗giving the infimum in (5.20). Duffin and Karlovitz tried to find out which properties of Tn are crucial for such a result, and they came to the following conclusion. Theorem 5.23 (Duffin–Karlovitz (1984)). Let p∗∈Pn and δ∗= (τ ∗ i ) give the infimum for Dk, i.e. Dk = ∥p(k) ∗∥, ∥p∗∥δ∗= 1. Then τ ∗ 0 = −1, τ ∗ n = +1, and p′ ∗(τ ∗ i ) = 0, i = 2, . . . , n−1. 56 Twelve Proofs of the Markov Inequality “Proof”. Duffin and Karlovitz reasoned as follows. 1) Denote by ℓi the Lagrange fundamental polynomials corresponding to the knot-sequence δ. Then, since p(x) = Pn i=0 p(τi)ℓi(x), we have Dk,δ := sup x∈[−1,1] sup ∥p∥δ=1 |p(k)(x)| = sup x∈[−1,1] n X i=0 |ℓ(k) i (x)| =: n X i=0 |ℓ(k) i (xδ)| , and it is clear that the polynomial that attains the value Dk,δ is given by pδ(x) = n X i=0 pδ(τi)ℓi(x) , pδ(τi) = sgn ℓ(k) i (xδ) = ±1, (5.21) so that Dk,δ = ∥p(k) δ ∥= |p(k) δ (xδ)| . 2) Two remarks. Firstly, the polynomials ℓ(k) i vanish of course at certain x, but one can show that, at the point x = xδ that is a local maximum of the polynomial p(k) δ , all the values ℓ(k) i (xδ) are non-zero (as we wrote in (5.21)). Secondly, an optimal δ∗contains n+1 disinct nodes (for if the distance between two consecutive nodes in δ tends to zero, then, the value Dk,δ becomes arbitrary large). 3) Let p∗and δ∗be optimal for Dk, i.e. Dk = inf δ ∥p(k) δ ∥= ∥p(k) ∗∥= p(k) ∗(x∗) , ∥p∗∥δ∗= 1. Now, we perturb δ∗by an amount ϵ, i.e., τ ϵ i = τ ∗ i ± ϵi, and let Dk,ϵ = ∥p(k) ϵ ∥= p(k) ϵ (xϵ) . Then the inequality Dk ≤Dk,ϵ reads p(k) ∗(x∗) ≤p(k) ϵ (xϵ) . (5.22) Since p(k) ∗ has a global maximum at x∗, we also have p(k) ∗(xϵ) ≤p(k) ∗(x∗), hence p(k) ∗(xϵ) ≤p(k) ϵ (xϵ). (5.23) 4) We may assume that, if ϵ →0, then xϵ →x∗, pϵ →p∗, ℓi,ϵ →ℓi,∗, therefore sgn p∗(τ ϵ i ) = sgn p∗(τ ∗ i ) (5.21) = sgn ℓ(k) i,∗(x∗) = sgn ℓ(k) i,ϵ (xϵ) (5.21) = sgn pϵ(τ ϵ i ) ̸= 0. (5.24) 5) Now suppose that, for some i0, |p∗(τ ∗ i0)| = 1, p′ ∗(τ ∗ i0) > 0. Aleksei Shadrin 57 Take τ ϵ i0 = τ ∗ i0 + ϵi0 and τ ϵ i = τ ∗ i otherwise, so that |p∗(τ ϵ i0)| > 1 = |pϵ(τ ϵ i0)| , p∗(τ ϵ i ) = pϵ(τ ϵ i ) = ±1. 
Then, taking in account the sign pattern in (5.24), we obtain |p(k) ∗(xϵ)| = n X i=0 p∗(τ ϵ i )ℓ(k) i,ϵ (xϵ) > n X i=0 pϵ(τ ϵ i )ℓ(k) i,ϵ (xϵ) = |p(k) ϵ (xϵ)| , a contradiction to (5.23). Similarly, if p′(τ ∗ i0) < 0, then we take τ ϵ i0 = τ ∗ i0 −ϵi0, and arrive at the same contradiction. □ It is a very nice “proof” and it is not easy to find what is wrong. I provide my explanation in the comments below, so that an interested reader may attempt this exercise. Comment 5.24. The proof is correct if we assume that, for an optimal δ∗, there is a unique optimal polynomial p∗, and it seems that Duffin & Karlovitz overlooked that this could not be the case. Formally, wrong is the sequel of the arguments. In Step 4, we may assume that if ϵ →0, then the sign pattern (5.24) is valid, but one should add “going to a subsequence if necessary”. And going to a subsequence of (ϵ) (with p∗and x∗fixed) means that, in Step 5, we are not free to choose ϵ as we want to. Say, if p′ ∗(τi0) > 0, then a subsequence may turn out to be with the entries ϵi0 < 0. The assumption that an optimal p∗satisfies p′ ∗(τi0) > 0 is not contradictory if (and only if), for the sequence of δϵ defined as in Step 5, the sequences of pϵ and xϵ would tend to some other b p∗and b x∗, respectively, which are also optimal for Dk. Moreover, such b p∗must satisfy b p′ ∗(τi0) < 0 Comment 5.25. Theorem 5.23 is of course true for polynomials (by Duffin– Schaeffer’s inequality). It may well be true for the Chebyshev systems, although Duffin–Karlovitz failed to prove it. However, its analogue for the Duffin– Schaeffer problem with majorant is no longer true. Consider the two values Mk,µ = sup |p(x)|≤µ(x) ∥p(k)∥, Dk,µ := inf δ∈[−1,1] sup |p(x)|δ≤µ(x)δ ∥p(k)∥ Then, for the majorant µ(x) = √ 1 −x2, and for k = 1 we have D1,µ > M1,µ = ∥ω′ µ∥. This is the result of Rahman–Schmeisser mentioned in Theorem 5.9. 5.6 Inequality for the oscillating polynomials Generally, in the Markov–Duffin–Schaeffer problem with majorant we want to find the values Mk,µ = sup |p(x)|≤µ(x) ∥p(k)∥, Dk,µ := inf δ∈[−1,1] sup |p(x)|δ≤|µ(x)|δ ∥p(k)∥. 58 Twelve Proofs of the Markov Inequality For any µ ≥0 there is a unique polynomial ωµ ∈Pn, the so-called snake-polynomial, that oscillates n + 1 times between ±µ, i.e., such that |ωµ(x)| ≤µ(x) , x ∈[−1, 1] , and on some set of n + 1 points δ∗= (τ ∗ i )n i=0 we have ωµ(τ ∗ i ) = (−1)iµ(τ ∗ i ) . The question of interest is for which majorants µ it is this polynomial ωµ that gives the supremum to both values above (as in the case µ ≡1), in particular, whether it is the set δ∗that gives the infimum to Dk,µ. Notice that, for any majorant µ, ∥ω(k) µ ∥≤Mk,µ ≤Dk,µ ≤D∗ k,µ, where D∗ k,µ := sup |p(x)|δ∗≤|µ(x)|δ∗ ∥p(k)∥. so it may well be sufficient to estimate from above the value D∗ k,µ only. However, even with the simplest majorants, the location of the nodes τ ∗ i is not known, and we have to find some general arguments (as Duffin–Karlovitz tried to). In 1996, we tried to revive approach of Duffin-Karlovitz, where instead of varying knots along the graph of the majorant µ, we decided to vary them along the graph of ωµ. It is clear that the snake-polynomial ωµ has n zeros inside the interval [−1, 1], ωµ(x) = c Qn i=1(x −ti), and that these zeros interlace with (τ ∗ i ), i.e., −1 ≤τ ∗ 0 < t1 < τ∗ 1 < · · · < tn < τ∗ n ≤1 . Denote by ∆ω the class of knot-sequences δ = (τi) with the same interlacing properties. 
Then D∗ k,µ = sup |p(x)|δ∗≤|µ(x)|δ∗ ∥p(k)∥ = sup |p(x)|δ∗≤|ω(x)|δ∗ ∥p(k)∥≤sup δ∈∆ω sup |p(x)|δ≤|ω(x)|δ ∥p(k)∥=: Sk,ω , and we may try to evaluate the value Sk,ω in terms of ∥ω(k)∥. It turns out that the pointwise problem, Sk,ω(x) := sup δ∈∆ω sup |p(x)|δ<|ω(x)|δ |p(k)(x)| , has a remarkable solution. Proposition 5.26 (Shadrin (1996)). Let ω(x) = n Q i=1 (x−ti), ti ∈[−1, 1]. Then Sk,ω(x) = max  |ω(k)(x)|, max i |φ(k) i (x)| , Aleksei Shadrin 59 where φi(x) := ω(x)1 −xti x −ti , i = 1, . . . , n. Proof. Our original proof followed the idea of Duffin–Karlovits: we showed that variation of any single knot τi ∈[ti, ti+1] does not result in a local ex-tremum of the value |p(k)(x)|, hence the value Sk,ω is achieved when τi is either ti ot ti+1. A simpler proof was given later by Nikolov . □ The polynomials φi are quite interesting. They have the same zeros as ω except one ti, and because of the factor 1−xt x−t they satisfy the inequalities |φi(z)| > |ω(z)|, z ∈D1, |φi(z)| ≤|ω(z)|, z ̸∈D1, where D1 is the unit open disc in the complex plane. From this proposition and considerations at the beginning of the section we obtain the statement that gives a new way of deriving Markov–Duffin-Schaeffer inequalities with a majorant. Theorem 5.27. Given a majorant µ ≥0, let ωµ ∈Pn be the corresponding snake-polynomial. If max i ∥φ(k) i ∥≤∥ω(k) µ ∥, (5.25) then Mk,µ = Dk,µ = ∥ω(k) µ ∥ = Sk,ω  . (5.26) An advantage of studying the inequality (5.25) is that this is purely a poly-nomial problem on the class of polynomials ω having all their zeros in [−1, 1], with quite a simple and explicitly given polynomials φi involved. These polyno-mials may be viewed as the most extreme case of the Zolotarev-like polynomials. However, it was only recently when some real improvements have been made. 1) Nikolov proved that ω = Tn ⇒ (5.25) (hence (5.26) for µ ≡1). This gives one more proof of the classical Markov–Duffin–Schaeffer inequality. 2) Recently, in our joint paper with Nikolov , we extended this result: ω(k−1) = X aiTi, ai ≥0 ⇒ (5.25) (hence (5.26)). This allows to establish the Markov–Duffin–Schaeffer inequalities for a large class of majorants, e.g. µ2(x) = Qm i=1(1 + a2 i x2), k ≥1; or µ2(x) = (1 −x2)m, k > m. This improves results of Vidensky (Theorem 4.9) and Pierre–Rahman (Theo-rem 2.11). 60 Twelve Proofs of the Markov Inequality 5.7 Conclusion More than a hundred years have passed since Vladimir Markov, “a student of Sankt-Petersburg University”, proved his inequality. Since then it has received a dozen alternative proofs, hundreds of generalizations and it is still a lively part of Approximation Theory. So much power in just a single line: ∥p(k)∥≤∥T (k) n ∥∥p∥, ∀p ∈Pn. Acknowledgements. A part of this research was performed in Bonn in 1996, supported by the Alexander von Humboldt Foundation. It is a pleasure to thank Brad Baxter, Carl de Boor and Arieh Iserles for their comments on a draft of this paper. References Twelve proofs of the Markov innequality S. N. Bernstein, On the V. A. Markov theorem, Trudy Leningr. Industr. In-ta, no 5, razdel fiz-matem nauk, 1 (1938), 8–13 = Collected Works, v.1, 281–286 (Russian) = East J. Approx. 2 (1996), no. 2, 245–251. B. Bojanov, Another proof of V. Markov’s inequality, in , §3.1, pp. 39–46. A. Ja. Dubovickii, A. A. Miljutin, Second variations in extremal problems with constraints, Dokl. Akad. Nauk SSSR 160 (1965), 18–21 (Russian) = Soviet Math. Dokl. 6 (1965), 12–16. R. J. Duffin, L. A. Karlovitz, The Markoff–Duffin–Schaeffer inequalities ab-stracted, Proc. Nat. Acad. 
Sci. USA 82 (1985), 955–957. R. J. Duffin, A. S. Shaeffer, A refinement of an inequality of the brothers Markoff, Trans. Amer. Math. Soc. 50 (1941), 517–528. V. V. Gusev, Derivative functionals of an algebraic polynomial and V. A. Mar-kov’s theorem, Izv. Akad. Nauk SSSR, Ser. Math. 25 (1961), 371-384 (Russian) = Appendix to , pp. 179–197. V. Markov, On functions which deviate least from zero in a given interval, St-Petersburg, 1892 (Russian), available as a pdf-file at HAT = Math. Ann. 77 (1916), 213–258 (German translation with small abbreviations). E. Mohr, Elementarer Beweis einer Ungleichung von W. A. Markov, Tensor 14 (1963), 71–85 (German). A. S. Schaeffer, R. J. Duffin, On some inequalities of S. Bernstein and W. Markoff for derivatives of polynomial, Bull. Amer. Math. Soc. 44 (1938), 289–297. A. Yu. Shadrin, Interpolation with Lagrange polynomials. A simple proof of Markov inequality and some of its generalizations, Approx. Theory Applic. 8 (1992), no. 3, 51–61. A. Yu. Shadrin, On Markov-Duffin-Schaeffer inequalities with weight, a manu-script, 1996. Aleksei Shadrin 61 V. M. Tikhomirov, Inequalities of Bernstein and A. A. Markov, in , §2.3.3, pp. 109–113 (Russian) Surveys R. P. Boas, Inequalities for the derivatives of polynomials, Math. Mag. 42 (1969), 165–174. R. P. Boas, Extremal problems for polynomials, Amer. Math. Monthly, 85 (1978), no. 6, 473–475. B. Bojanov, Markov-type inequalities for polynomials and splines, in Approxi-mation Theory, X (Nashville, TN, 2002), pp. 31–90, Vanderbilt University Press, 2002. Markov inequalities with majorant R. Pierre, Q. I. Rahman, On a problem of Turan about polynomials, Proc. Amer. Math. Soc. 56 (1976), 231–238. R. Pierre, Q. I. Rahman, On a problem of Turan about polynomials. II, Canad. J. Math. 33 (1981), no. 3, 701–733. R. Pierre, Q. I. Rahman, G. Schmeisser, On polynomials with curved majorants, J. Approx. Theory 57 (1989), no. 2, 211–222. Q. I. Rahman, On a problem of Turan about polynomials with curved majorants, Trans. Amer. Math. Soc. 163 (1972), 447–455. V. S. Videnskii, On estimates of derivatives of a polynomial, Izvestiya Akad. Nauk SSSR, Ser. Mat. 15 (1951), 401–420 (Russian). V. S. Videnskii, A generalization of V. A. Markoffinequalities, Dokl. Akad. Nauk SSSR 120 (1958), 447–449 (Russian). V. S. Videnskii, Generalizations of Markov’s theorem on the evaluation of a polynomial derivative, Doklady Akad. Nauk SSSR 125 (1959), 15–18 (Russian). V. S. Videnskii, Least upper bounds for the successive derivatives of a polynomial on an interval, Izv. Vys. Uchebn. Zaved., ser. Mat. 106 (1971), no. 3, 18–22 (Russian). Markov inequalities for splines B. Bojanov, Markov interlacing property for perfect splines, J. Approx. Theory 100 (1999), no. 1, 183–201. B. Bojanov, N. Naidenov, Exact Markov-type inequalities for oscillating perfect splines, Constr. Approx. 18 (2002), no. 1, 37–59. S. Karlin, Generalized Markov Bernstein type inequalities for spline functions, in Studies in Spline Functions and Approximation Theory, pp. 461–484, Academic Press, New York, 1976. Landau-Kolmogorov problem on a finite interval B.-O. Eriksson, Some best constants in the Landau inequality on a finite interval, J. Approx. Theory 94 (1998), no. 3, 420–454. H. Kallioniemi, On bounds for the derivatives of a complex-valued function of a compact interval, Math. Scand. 39 (1976), no. 2, 295–314. 62 Twelve Proofs of the Markov Inequality H. Kallioniemi, The Landau problem on compact intervals and optimal numerical differentiation, J. Approx. Theory 63 (1990), no. 
1, 72–91. A. Pinkus, Some extremal properties of perfect splines and the pointwise Landau problem on the finite interval, J. Approx. Theory 23 (1978), no. 1, 37–64. A. Shadrin, To the Landau-Kolmogorov problem on a finite interval, in Open Problems in Approximation Theory (B. Bojanov, Ed.), SCT Publishing, Singa-pore, 1994, pp. 192-204. Markov-Duffin-Schaeffer inequalities with majorant G. Nikolov, A. Shadrin, Markov–Duffin–Schaeffer inequalities with majorant, a manuscript, 2004. Q. I. Rahman, G. Schmeisser, Markov-Duffin-Schaeffer inequality for polynomi-als with a circular majorant, Trans. Amer. Math. Soc. 310 (1988), no. 2, 693–702. Q. I. Rahman, A. O. Watt, Polynomials with a parabolic majorant and the Duffin-Schaeffer inequality, J. Approx. Theory 69 (1992), no. 3, 338–354. Duffin-Schaeffer-type inequalities for polynomials B. Bojanov, G. Nikolov, Duffin and Schaeffer type inequality for ultraspherical polynomials, J. Approx. Theory 84 (1996), no. 2, 129–138. L. Milev, G. Nikolov, On the inequality of I. Schur, J. Math. Anal. Appl. 216 (1997), no. 2, 421–437. G. Nikolov, Inequalities of Duffin-Schaeffer type, SIAM J. Math. Anal. 33 (2001), no. 3, 686–698. G. Nikolov, An extension of an inequality of Duffin and Schaeffer, a manuscript, 2003. G. Nikolov, Inequalities of Duffin-Schaeffer type. II, a manuscript, 2003. Lagrange interpolation N. P. Korneichuk, Approximation of functions and their derivatives by interpo-lation splines, Dokl. Akad. Nauk SSSR 264 (1982), no. 5, 1063–1066 (Russian). A. Shadrin, Error bounds for Lagrange interpolation. J. Approx. Theory 80 (1995), no. 1, 25–49. Zolotarev polynomials V. A. Malyshev, Algebraic solution of the Zolotarev problem, Algebra i Analiz 14 (2002), no. 4, 238–240 (Russian) = St. Petersburg Math. J. 14 (2003), no. 4, 711–712. F. Peherstorfer, Orthogonal and Chebyshev polynomials on two intervals, Acta Math. Hungar. 55 (1990), no. 3-4, 245–278. M. L. Sodin, P. M. Yuditskii, Algebraic solution of E. I. Zolotarev and N. I. Akhiezer problems on polynomials that deviate least from zero, Teor. Funktsii Funktsional. Anal. i Prilozhen. no. 56 (1991), 56–64 (Russian) = J. Math. Sci. 76 (1995), no. 4, 2486–2492. Miscellaneous books and Markov-type results S. Bernstein, Sur l’ordre de la meilleure approximation etc., Mem. Cl. Sci. Acad. Roy. Belg. 4 (1912), 1–103 (French) = Collected Works, v.1, 8–105 (Russian). Aleksei Shadrin 63 S. Bernstein, Remarques sur l’inegalite de Wladimir Markoff, Soobsch. Khark. Matem. Ob-va, 14 (1913), 81–87 (French) = Collected Works, v.1, 151–156 (Rus-sian). S. Bernstein, Lecons sur les proprietes extremales, etc., Paris, 1926 (French). S. Bernstein, Sur la limitation des derivees des polynomes, C. R. Acad. Sci, Paris 190 (1930), 338–340 (French) = Collected Works, v. 1, 497–499 (Russian) S. Bernstein, Collected Works, Izd-vo Akad. Nauk SSSR, v. 1, 1952 (Russian). N. G. de Bruijn, Inequalities concerning polynomials in the complex domain, Indag. Math. 9 (1947), 591–598. A. Markov, On a question by D. I. Mendeleev, Zapiski Imper. Akad. Nauk , 62 (1890), 1–24 = Selected Works, Gostechizdat, Moscow, 1948, pp. 51–75 (Rus-sian), both articles are available as pdf-files at HAT , together with the English translation by de Boor–Holtz. T. J. Rivlin, Chebyshev Polynomials, Second edition, Pure and Applied Mathe-matics (New York), John Wiley & Sons, Inc., New York, 1990, xvi+249 pp A. Sch¨ onhage, Approximationstheorie, Walter de Gruyter & Co., Berlin New York, 1971, 212 pp (German). V. M. 
Tikhomirov, Some Questions in Approximation Theory, Izdat. Moskov. Univ., Moscow, 1976, 304 pp (Russian). E. V. Voronovskaja, The functional of the first derivative and improvement of a theorem of A. A. Markov, Izv. Akad. Nauk SSSR, ser. Matem. 23 (1959), 951–962 (Russian) ≈, Chapter III, pp. 156–167 (English translation of a restructured version). E. V. Voronovskaja, The Functional Method and its Applications, Izdat. LEIS, Leningrad, 1963 (Russian) = Translations of Mathematical Monographs, Vol. 28 Amer. Math. Soc., Providence, R.I., 1970, 203 pp. History of approximation theory A. Pinkus, C. de Boor, A Homepage on the History of Approximation Theory, = .
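The survey closes with the one-line inequality $\|p^{(k)}\| \le \|T_n^{(k)}\|\,\|p\|$ for all $p \in \mathcal{P}_n$ on $[-1,1]$. As a purely illustrative aside (not part of the paper), the following sketch, assuming NumPy's Chebyshev utilities and grid-based approximations of the sup-norms, checks that random polynomials never violate the bound and that $p = T_n$ attains it.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Minimal numerical check of the V. Markov inequality
#   ||p^(k)||_[-1,1] <= ||T_n^(k)||_[-1,1] * ||p||_[-1,1]
# Sup-norms are approximated on a dense grid, so this is a sanity check,
# not a proof.

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 20001)

def sup_norm(cheb_coeffs, deriv=0):
    """Approximate sup-norm on [-1,1] of the deriv-th derivative of the
    polynomial given by its Chebyshev coefficients."""
    c = C.chebder(cheb_coeffs, deriv) if deriv else cheb_coeffs
    return np.max(np.abs(C.chebval(x, c)))

n, k = 7, 3
Tn = np.zeros(n + 1); Tn[-1] = 1.0       # T_n has Chebyshev coefficients (0,...,0,1)
markov_const = sup_norm(Tn, k)           # ||T_n^(k)||

worst_ratio = 0.0
for _ in range(200):
    p = rng.standard_normal(n + 1)       # random degree-n polynomial
    ratio = sup_norm(p, k) / (markov_const * sup_norm(p))
    worst_ratio = max(worst_ratio, ratio)

print(f"||T_{n}^({k})|| ~= {markov_const:.3f}")
print(f"worst observed ratio over random p: {worst_ratio:.4f}  (inequality predicts <= 1)")
print("ratio for p = T_n:", sup_norm(Tn, k) / (markov_const * sup_norm(Tn)))
```

For k = 1 the constant is $\|T_n'\| = n^2$, which is A. Markov's original bound; the general k is exactly the V. Markov inequality discussed throughout the survey, with equality reached by the Chebyshev polynomial itself.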
323
324
Published Time: Wed, 18 Jan 2023 13:53:48 GMT arXiv:0910.2148v2 [nucl-th] 11 Dec 2009

Fermion propagators in space-time

M. B. Barbaro,1 D. Berardo,1 R. Cenni,2 T. W. Donnelly,3 and A. Molinari1
1 Dipartimento di Fisica Teorica, Università di Torino and INFN, Sezione di Torino, Via P. Giuria 1, 10125 Torino, Italy
2 INFN, Sezione di Genova, Via Dodecaneso 33, I-16146 Genova, Italy
3 Center for Theoretical Physics, Laboratory for Nuclear Science and Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA

Abstract

The one- and the two-particle propagators for an infinite non-interacting Fermi system are studied as functions of space-time coordinates. Their behaviour at the origin and in the asymptotic region is discussed, as is their scaling in the Fermi momentum. Both propagators are shown to have a divergence at equal times. The impact of the interaction among the fermions on their momentum distribution, on their pair correlation function and, hence, on the Coulomb sum rule is explored using a phenomenological model. Finally, the problem of how the confinement is reflected in the momentum distribution of the system's constituents is briefly addressed.

PACS numbers: 24.10.-i, 24.10.Cn, 25.30.-c

I. INTRODUCTION

In this study we derive expressions in space-time for the one- and two-fermion propagators, considering the simple case of a non-relativistic, non-interacting infinite Fermi system where these quantities in energy-momentum space are well-known. To obtain the corresponding space-time results is not an entirely trivial job and we were unable to find any detailed discussion of these quantities in the literature. The motivation for the present study comes principally from the desire to investigate the roles played by correlations, in particular short-range correlations, in nuclear matter and finite nuclei, although our study applies as well to other fields of many-body physics. From this perspective the present work should be viewed as a first step in the direction of treating more complex systems: we present arguments later that the correlations among the constituents of a Fermi system are best appreciated in space-time, especially so when compared with the non-dynamically-correlated situation. We shall illustrate this point using a model for the dynamical correlations which modifies the non-interacting step-function momentum distribution at the Fermi surface.

Two further items are addressed because of their significance for a non-interacting system. The first relates to the propagators' second kind scaling property (independence of the Fermi momentum) [2, 3, 4, 5]: we prove that, provided an appropriate rescaling of space and time is performed, both the one-fermion and two-fermion propagators do scale in kF, the latter except at equal times. Indeed one finds that when the quantum field theory (QFT) is applied to a finite-density many-body system a divergence occurs if the propagators are evaluated at equal times. One also finds that both the one- and two-particle propagators become purely imaginary at equal times. The second item concerns the problem of the impact of confinement on the non-interacting fermion momentum distribution (and hence on the propagators).

The present paper is organized as follows. In Sec. II we deal with the one-body propagator G0(x, x′). We observe that, while the hole propagator is well-defined, at equal times the particle propagator is not.
We derive an analytic expression for the space-time hole propagator and discuss its asymptotic behaviour. We find that it vanishes as a power law at large |x − x′| and t − t′ as a consequence of the cut the function G0(k, ω) displays in the complex ω-plane just above the real axis for 0 ≤ ω ≤ ωF, where ωF = kF²/2m, with kF the Fermi momentum and m the fermion (nucleon) mass (Paley-Wiener theorem). We also prove that G0(x, x′) divided by the density scales in kF. In Sec. III we proceed to study the two-particle propagator, specifically the density-density correlation function, starting by focusing on the Coulomb Sum Rule (CSR). To illustrate how such topics can be addressed in space-time, we derive the well-known expression for the CSR using the pair correlation function (which arises in the present case simply from the Pauli principle) as input and performing the integration in the complex coordinate space. In Sec. IV we obtain the space-time expression for the density-density correlation function, usually referred to as Π0(x, x′). We do this partly in terms of the error function and partly in terms of a function for which we keep the integral representation, although it may be expressed in terms of the Meijer G-functions. We analyze the asymptotic behaviour of Π0(x, x′), compare it with that of G0(x, x′) and discuss its scaling behaviour in kF. Next we show that the imaginary part of the density-density correlation function (a branch of the two-particle propagator), unlike the CSR, cannot be expressed directly through the Pauli pair correlation function, its only ingredient being the momentum distribution. Furthermore we show that the QFT shortcoming previously found in the case of G0(x, x′) also affects Π0(x, x′). In Sec. V we introduce a phenomenological momentum distribution, which we employ for computing the CSR, and discuss the significance of the difference between the latter and the CSR of the free Fermi gas. In the Conclusions (Sec. VI) we briefly address the problem of how the confinement of our system affects the momentum distribution of its constituents, summarize our findings and outline a few further important issues we intend to address in future work.

II. THE ONE-BODY PROPAGATOR

In this section we deal with the four-dimensional Fourier transform of the well-known one-body fermion propagator G0(k, ω) in an infinite, homogeneous, non-interacting system. That is, we compute
$$
G^0(x, x') = \int \frac{d\vec{k}}{(2\pi)^3}\, e^{i\vec{k}\cdot(\vec{x}-\vec{x}')} \int_{-\infty}^{+\infty} \frac{d\omega}{2\pi}\, e^{-i\omega(t-t')} \left\{ \frac{\theta(k - k_F)}{\omega - \omega_k + i\eta} + \frac{\theta(k_F - k)}{\omega - \omega_k - i\eta} \right\}, \tag{1}
$$
where here for simplicity spin indices are suppressed. The frequency integration in Eq. (1) is easily performed in the complex ω-plane and one can recognize forward (particle) and backward (hole) propagation. In Eq. (1) the angular integrations are also immediate, and one is left with the expression
$$
G^0(x, x') = \frac{i}{2\pi^2 r} \left\{ \theta(t'-t) \int_0^{k_F} dk\, k\, e^{-i\omega_k (t-t')} \sin(kr) \;-\; \theta(t-t') \int_{k_F}^{\infty} dk\, k\, e^{-i\omega_k (t-t')} \sin(kr) \right\}, \tag{2}
$$
where $r \equiv |\vec{x}-\vec{x}'|$ and $\omega_k = k^2/(2m)$. The integral
$$
\int_0^{\infty} dk\, k\, e^{-i\omega_k (t-t')} \sin(kr) = \frac{\sqrt{\pi}}{4}\, r\, \frac{e^{\,i\,\frac{m r^2}{2(t-t')}}}{\sqrt{\left(\frac{i}{2m}\,(t-t')\right)^{3}}} \tag{3}
$$
diverges when computed at equal times t = t′ for any value of r, and thus at equal times the particle propagator in the present framework is ill-defined and cannot be computed unless regularized. For instance, this might be achieved in the context of the Wightman formulation of QFT (see ), which replaces the field function with a distribution.
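The hole (backward-propagating) piece of Eq. (2) lends itself to a quick numerical check. The short sketch below is an illustration added here, not part of the paper; it assumes SciPy and uses kF = 1.36 fm⁻¹, the non-interacting value quoted later in Sec. V. At equal times the finite integral over 0 ≤ k ≤ kF has the closed form kF² j1(kF r), which is what the equal-time limit quoted later in this section (Eq. (7)) amounts to.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

# Numerical check of the equal-time hole piece of Eq. (2):
#   I(r) = ∫_0^{kF} dk k sin(kr)  ==  kF^2 * j1(kF r),
# so that |G0(x t, x' t+)| = (1/(2π²r)) I(r) = n0 * (3/(2 kF r)) * j1(kF r),
# with n0 = kF^3 / (3π²) for spin-1/2 particles.

kF = 1.36                      # fm^-1, non-interacting nuclear-matter value
n0 = kF**3 / (3 * np.pi**2)

for r in (0.5, 1.0, 2.0, 5.0):                      # fm
    I_num, _ = quad(lambda k: k * np.sin(k * r), 0.0, kF)
    I_ana = kF**2 * spherical_jn(1, kF * r)
    g_over_n0 = (1.0 / (2 * np.pi**2 * r)) * I_num / n0
    print(f"r={r:4.1f} fm  integral {I_num:.6f} vs kF^2 j1(kF r) {I_ana:.6f}"
          f"  |G0|/n0 {g_over_n0:.4f} vs 3 j1/(2 kF r) "
          f"{1.5 * spherical_jn(1, kF * r) / (kF * r):.4f}")
```

The printed pairs agree to integration accuracy, confirming the j1(kF r) shape of the equal-time hole propagator that the paper derives analytically.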
We do not dwell here on the problem of the regularization of the particle propagator and instead limit our attention to the computation of the hole propagator. This can be done analytically and yields ( α and β are the spin indices which we temporarily reintroduce) G0 αβ (x, x ′) = δαβ iθ (t′ − t)n0 23 kF r ei∆t { (kF r)2 (kF r)2 + 2 i∆t [ j1(kF r) + cos( kF r) kF r + sin( kF r) 2i∆t ] + √π 2 kF r (2∆ t)3/2 1 − i 2 e−i [( kF r 2√∆t )2 +∆ t ] × [ erf (1 − i 2 ( kF r √2∆ t − √2∆ t )) − erf (1 − i 2 ( kF r √2∆ t √2∆ t ))]} , (4) where n0 = k3 F /3π2 is the system’s density for spin 1/2 particles (the case we are considering). Furthermore, ∆t = k2 F 2m(t′ − t) (5) is the time difference expressed in inverse Fermi frequency units (the natural choice in the present context), j1(kF r) is the spherical Bessel function of order one and the standard definition erf (z) = 2 √π z 0 e−t2 dt (6) for the error function is employed. Note that, but for the overall factor n0, the Green function G0(x, x ′) scales in the system’s density, i.e., it loses all explicit kF dependence which then enters only in defining the scale of the space at any value of ∆ t.¿From the asymptotic expansion of the erf (z) (see Appendix A) it follows that the equal-time limit of G0(x, x ′) reads G0(~xt, ~ x′t+) = in0 23 kF r j1(kF r) (7) 4which, when r → 0 and the spin trace is taken, reduces to the system’s density, as it should. Notably Eq. (7) also holds for finite ∆ t and large r, as is shown in Appendix A where it is proven that under these conditions the contribution stemming from the two error functions exactly cancels the one arising from the second and third terms inside the square brackets in Eq. (4). Thus, for fixed ∆ t the propagator in Eq. (4) asymptotically behaves as cos( kF r)/(kF r)2, reflecting the square root singularity k = √2mω − iǫ of G0(~k, ω ) in the complex k-plane. Indeed as illustrated in , which deals with the theory of potential scattering, the structure of the Fourier transform basically implies that the transform of a singularity is an asymptotic behaviour. Analogously G0(~k, ω ) displays a [∆ t]−1 behaviour at large times for fixed r, since in the complex ω-plane the propagator has a simple pole for fixed ~k.The above findings are borne out by the results shown in Figs. 1, 2 and 3, where one sees the behaviour of the modulus of G0(x, x ′) versus kF r for a few values of ∆ t. Also shown in Figs. 2 and 3 are the real and imaginary parts of G0(x, x ′) for two finite values of ∆ t. As is well-known in ( ~k, ω ) space, these are connected through a dispersion relation. Importantly for ∆ t = 0 the hole propagator becomes purely imaginary and displays an oscillatory behaviour versus kF r. This accounts for the zeros in its modulus shown in Fig. 1. From the point of view of the Heisenberg principle ∆ t = 0 corresponds to the maximum energy (in fact momentum) uncertainty for the propagating particle. Thus the wave function of the latter corresponds to a large superposition of plane waves yielding the striking zeros seen in Fig. 1. For ∆ t small, but not vanishing (Fig. 2), the propagator starts to develop a real part. This also oscillates, but its zeros differ from those of the imaginary part. Accordingly, now the particle can be found everywhere in space, but of course with a small probability when in the vicinity of the zeros appearing in Fig. 1. As ∆ t grows (Fig. 
3) so does the real part of G0(x, x ′) and the behaviour of the modulus of the propagator becomes smoother and smoother, until for ∆ t very large it becomes constant: indeed now, in accord with the Heisenberg principle, the wave function of the particle becomes a plane wave. III. THE COULOMB SUM RULE To pave the way to the more general treatment of the two-particle propagator (of which the density-density correlation function is a particular branch) to be discussed in the next 5section, we once more derive the well-known Coulomb sum rule, although this time through a somewhat different technique, namely an integration in coordinate space. For this purpose we recall (see ) that, using standard quantum mechanics, the CSR can be cast in the form S(~q) = < Ψ0|˜ρ†(−~q)˜ ρ(−~q)|Ψ0 > , (8) where |Ψ0 > is the system’s ground state and ˜ ρ(−~q) is the density deviation operator defined as ˜ρ(−~q) = ˆ ρ(−~q)− < Ψ0|ˆρ(−~q)|Ψ0 > . (9) It is then a straightforward matter to obtain from Eq. (8) the formula S(~q) = Z + d~ xd~ ye −i~ q·(~x−~y) < Ψ0| ˆΨ† α (~x) ˆΨ† β (~y) ˆΨβ (~y) ˆΨα(~x)|Ψ0 > −ρ20V δ (~q) (10) which expresses the CSR in terms of the fermion fields in the Schroedinger picture (spin indices have been reintroduced) always sticking to the model of a homogeneous system enclosed in a large volume V having Z = A/ 2 charges, with A being the total number of spin 1 /2 particles. In the second term on the right-hand side of Eq. (10) one recognizes the equal-time two-particle propagator. This can be expressed in terms of the correlations among the particles in the system and indeed, in our simple case, it reads < Ψ0| ˆΨ† α (~x) ˆΨ† β (~y) ˆΨβ (~y) ˆΨα(~x)|Ψ0 >= n20 [ 1 − 1 2 (3j1(kF r) kF r )2] = n20 [ 1 − 1 2g2(kF r) ] (11) with r ≡ | ~x − ~y|. The function 1 2 g2(kF r) is usually referred to as the Pauli pair correlation function. In the above the direct and exchange contributions clearly appear. Moreover it should be kept in mind that basic in deriving Eq. (11) have been the θ-functions entering in G0(~k, ω ). Our aim here is to show that even small variations of Eq. (11) at short distances (and hence of the θ-functions in momentum space) can produce quite dramatic changes in the CSR (see Sec. V). We proceed by inserting Eq. (11) into Eq. (10), getting for the latter S(~q) = Z + n20V δ (~q) − n20V d~ re −i~ q·~r 1 2 (3j1(kF r) kF r )2 − n20V δ (~q)= Z [ 1 − 6 π ∞ 0 dzj 21 (z)j0 ( q kF z )] , (12) where the elementary angular integrations have been performed and the term arising from the direct piece of Eq. (11) is seen to drop out. 6The integral in Eq. (12) can be computed in the complex z-plane using standard tech-niques (see Appendix B for details), yielding for the Coulomb sum rule the familiar non-relativistic expression S(~q) =  Z (3 4 q kF − 1 2 ( q 2kF )3) if q < 2kF Z if q ≥ 2kF . (13) In connection with the above derivation which is valid for a perfect Fermi gas, it should be pointed out that the result in Eq. (13) stems from the exact cancellation of two contributions, as shown in Appendix B. We shall then prove in Section V that, as anticipated above, even a minor modification induced by interactions among the system’s constituents of the θ-functions entering in G0(~k, ω ) is sufficient to disrupt the cancellation in Eq. (B9): hence such a modification induces a sizable change of the CSR for large q. Since, as will be shown in Sec. V, modifying the θ-function around the value k = kF actually corresponds to modifying the pair distribution in Eq. 
(11) for small distances, this outcome is what one should expect and it offers a nice example of how the Fourier transform works. IV. THE DENSITY-DENSITY CORRELATION FUNCTION In this section we analytically compute and explore in space-time the branch of the two-particle propagator usually referred to as the density-density correlation function or polarization propagator Π( x, y ). The expression for the latter is again well-known in energy-momentum space for a non-interacting system, as is the fact that its imaginary part provides the inelastic scattering cross section for many types of probes of a many-body system. The definition of Π( x, 0) is the following Π( x, 0) = −i < Ψ0|T (˜ ρH (x)˜ ρH (0)) |Ψ0 > , (14) where the density deviation operators are in the Heisenberg picture, unlike the case of the CSR where they were taken in the Schroedinger picture, i.e., where no T-product was introduced. To explore this point further we recast Eq. (14) in the form Π( x, 0) = −i { < Ψ0|T ( ˆΨ† α (x) ˆΨα(x) ˆΨ† β (0) ˆΨβ (0)) |Ψ0 > − < Ψ0| ˆΨ† α (x) ˆΨα(x)|Ψ0 >< Ψ0| ˆΨ† β (0) ˆΨβ(0) |Ψ0 >} . (15) 7A comparison with Eq. (10) then shows that what enters in the CSR is just the above expression, however with the T-product replaced by its equal-time specification and it is precisely this quantity which is directly connected with the pair distribution function. As we have seen (and as we will discuss in more detail later) the latter crucially affects the CSR, namely, the frequency integral of the response. The response, however, is not directly expressible in terms of the pair distribution cor-relation function, but rather the momentum distribution of the system’s constituents (in our case a θ-function) enters into its definition. It thus appears that the θ-function should be viewed as the fundamental ingredient of both the system’s response and of the CSR. Indeed in Sec. V we shall illustrate how the pair distribution function (and hence the CSR) is determined by the momentum distribution. To proceed further, observe that with a straightforward application of Wick’s theorem to Eq. (15) one obtains for the non-interacting Fermi system Π0(x, 0) = −2iG 0(x, 0) G0(x, 0) . (16) To compute this we start from the Fourier transform of its well-known expression in energy-momentum space Π0(q) = ( − 2i) d4k (2 π)4 G0(k)G0(k + q)= 2 d~k (2 π)3 θ(|~k + ~q| − kF )θ(kF − k) [ 1 q0 + ω~k − ω~q+~k + iη − 1 q0 − ω~k + ω~q+~k − iη ] , (17) where the integration on k0 has been performed in the complex plane. Next we take the inverse Fourier transform of Eq. (17) Π0(x, 0) = d4q (2 π)4 eiq ·xΠ0(q) (18) and carry out the q0-integration in the complex plane: this again distinguishes between forward and backward time propagation, corresponding to the two terms generated by the T-product. Both of these diverge for tx = 0 in accord with what was previously found for the one-fermion propagator. Here we recall that only the particle piece of the latter was seen to diverge at equal times. We shall return on this point later on. 8Furthermore, the second term of Π 0(x, 0), associated with q0 < 0, describes, the system’s response in the time-like domain. One obtains Π0(x, 0) = −2i d~ q (2 π)3 ei~ q·~x d~k (2 π)3 θ(kF − k)θ(|~k + ~q| − kF ) × { e−i ( q2 2m+1 m~q·~k )tx θ(tx) + ei ( q2 2m+1 m~q·~k )tx θ(−tx) } . 
(19) After some algebra (see Appendix C for details) the above expression can be reduced to a single integral (we focus on the first piece, since the second one is immediately derived when the first is known): Π0(x, 0) θ(tx) = − 9in 20 4∆ tkF r { ∞ 2 dse −i∆ts2 sin( sk F r)j1(2∆ ts)+ 21 dse −i∆ts2 sin( sk F r)j1(2∆ ts(s − 1))( s − 1) 2 1 2∆ t 21 ds sin( sk F r) [ e−i∆ts2 cos(2∆ ts(s − 1)) − e−2i∆ts2 (1 s cos ( ∆ts(s − 2) ) 1 2∆ ts2 sin ( ∆ts(s − 2) ))] 1 2∆ t 10 ds sin( sk F r)e−2i∆ts sin(2∆ ts2)1 s ( 1 2∆ ts + i ) cos(2 kF r) − 1 4∆ tkF r } , (20) where r ≡ | ~x| and s = q/k F . As previously anticipated, in the limit of vanishing ∆ t the above becomes purely imaginary and diverges. To show that this divergence is as severe as the one encountered for the one-particle propagator (see Eq. (3)) we consider the first term inside the curly brackets. Here we are allowed to replace the spherical Bessel function with his leading term in the small argument expansion getting 2∆ t 3 ∞ 2 dse −i∆ts2 sin( sk F r) ≃ 1 2 √ π ∆ti3 kF r 3 ei (kF r)2 4∆ t . (21) Hence our statement follows. In Figs. 4 - 6 we display the modulus of Π 0(x, 0) together with its real and imaginary parts for a few values of ∆ t. We observe that for small ∆ t a diffraction pattern emerges as in the case of the single-particle propagator: indeed, from the Heisenberg principle, here the energy is considerably spread out and, as a consequence, a wave packet can be set up which vanishes at fixed positions selected by the medium. As ∆ t increases this pattern is washed out until for very large ∆ t it turns into an almost uniform behaviour. From the formal point of view 9this evolution reflects the fact that for small time differences the particle-hole propagator is essentially imaginary, just as happened for the one-hole propagator. This imaginary part oscillates with the distance and hence the diffraction pattern follows. However, as the time difference increases a real part develops, also with an oscillatory behaviour, but with zeros displaced with respect to those of the imaginary part. Hence the zeros of the diffraction pattern are lifted up until at large ∆ t the modulus of Π 0(x, 0) becomes uniform in space. A further important feature of Π 0(x, 0) relates to its rapid decrease occurring in the range 0 < k F r . 4: we have numerically checked that in this domain all the terms in Eq. (20) contribute by roughly the same amount whereas for larger kF r the integrals with the variable running in the intervals (0 , 1) and (1 , 2), where the Pauli correlations are operative, essen-tially drop out. We thus conclude that are these correlations which damp the propagation of a density disturbance in the system. We turn now to the evaluation of the remaining integrals with respect to the variable q to complete our task of obtaining an analytic expression for Π 0(x, 0). For this purpose we introduce the dimensionless quantities z = kF r 2√∆t and ρ = √∆t (22) and the function g(a, b, c ) = ba dy eiy 2 y + c . (23) The latter can in fact be expressed in terms of the Meijer-G functions , although the resulting formulas are quite cumbersome and hence we prefer to use directly the definition expressed by Eq. (23). Even so the remaining q-integrations are, unfortunately, given by expressions which are far from simple; we report these in Appendix C for completeness. To pave the way to Appendix C here we simply rewrite Eq. (20) in terms of the variables in Eq. (22). 
It becomes Π0 a (x, x ′) = −in 20 9 8z 1 ρ3 ∞ 2 dse −iρ 2s2 sin(2 ρzs )j1(2 ρ2s) ≃ − in 20 3 2 ∞ 2 dse −iρ 2s2 s2 for ρ → 0 (24) Π0 b (x, x ′) = −in 20 9 8z 1 ρ3 21 dse −iρ 2s2 sin(2 ρzs )j1(2 ρ2s(s − 1))( s − 1) 2 ≃ − in 20 49 40 for ρ → 0 (25) 10 Π0 c (x, x ′) = −in 20 9 16 z 1 ρ5 21 ds sin(2 ρsz ) { e−iρ 2s2 cos(2 ρs (1 − s)) − e−2iρ 2s2 × [ cos( ρ2s(s − 2)) + 1 2ρ2s sin( ρ2s(s − 2)) ]} ≃ − in 20 21 16 1 ρ4 for ρ → 0 (26) Π0 d (x, x ′) = −in 20 9 16 z 1 ρ5 { cos(4 ρz ) − 1 8zρ 3 k3 F + 10 ds sin(2 ρsz )e−2iρ 2 s sin(2 ρ2s2)1 s ( 1 2sρ 2 + i )} ≃ in 20 9 8 z ρ4 for ρ → 0 (27) Concerning this all important issue, referred to as second kind scaling , much light on it is shed by the analysis of Π 0(x, 0) in space-time coordinates. Here one realizes that just as G0(x, x ′) turned out to be proportional to n0 (see Eq. (4)), Π 0(x, 0) is proportional to n20 as expected. Then, when these density factors are divided out in both propagators one finds that G0(x, x ′) and Π 0(x, 0) scale at any ∆ t providing that the space-time coordinates are in turn rescaled in terms of the Fermi momentum and frequency. This is clearly illustrated in Figs. 4, 5 and 6 where the spatial behaviour of the Π 0(x, 0) associated with three different values of kF is displayed as a function of kF r. The curves are seen to coincide at ∆ t = 0 .8, 1 and 1 .2, in accord with Eq. (20), which transparently exhibits the kF -scaling property. The above results concerning Π 0(x, 0) should, however, be viewed with some care. This is because the search for scaling in kF at vanishing ∆ t is impossible owing to the divergence discussed above which signals that the theory is ill-defined. Actually all of the terms in Eqs. (20) diverge at ∆ t = 0, with the exception of the second one. V. THE IMPACT OF A MORE REALISTIC MOMENTUM DISTRIBUTION Here we explore the impact of interactions among the constituent fermions on the pair correlation function and the CSR. This we do in a schematic frame that should, however, capture some of the relevant physics. We assume for the momentum distribution n(k) the expression n(k) = θ(kF − k)(1 − α k2 k2 F ) + θ(k − kF )β1e−β2( k kF−1) . (28) 11 The four parameters (indeed also kF should be viewed as such) entering in Eq. (28) must satisfy the normalization condition k3 F π2 [ 1 3 − α 5 + β1 β32 (β22 + 2 β2 + 2) ] = n0 , (29) where n0 is the experimental constant density of the system. Notice that for 1 − α = β1 the Fermi system becomes superconductive (the Fermi surface disappears) whereas for 1 −α > β 1 the discontinuity at the Fermi surface remains, as it should for a normal Fermi system according to the Luttinger theorem. This implies that both α and β1 should be positive. We now choose as an example nuclear matter (in this case the left-hand side of Eq. (29) should be multiplied by 2 to account for the isospin degeneracy) where one has n0 = 0 .17 fm −3. We then display in Fig. 7 the n(k) for nuclear matter for a specific choice for the four parameters, chosen to fulfill both Eq. (29) and the above-mentioned constraint. The tail at large momenta is evident and one finds that the new Fermi momentum turns out to be kF = 1 .54 fm −1, namely larger that the one associated with the non-interacting case. The pair correlation function for the momentum distribution in Eq. 
(28) is then easily computed and reads < Ψ0| ˆΨ† α (~x) ˆΨ† β (~0) ˆΨβ (~0) ˆΨα(~x)|Ψ0 >= n20 { 1 − 1 2 g2(r) } = n20 { 1 − 1 2 { 3 kF r [ j1(kF r) − α (kF r)4 ( 3(( kF r)2 − 2) sin( kF r) −kF r(( kF r)2 − 6) cos( kF r) ) β1 ( kF r (kF r)2 + β22 )2 (30) × ( sin( kF r) ( β2 + β22 + β32 (kF r)2 − 1 ) cos( kF r) ( kF r + 2β2 + β22 kF r )) ]} 2} . Using the same values of the parameters as for the momentum distribution shown in Fig. 7, this pair correlation function is displayed in Fig. 8 where it is compared with that of the pure Fermi gas from Eq. (11). What clearly appears in the figure is the marked difference between the two correlations functions at short distances, while they practically coincide at large distances: this behaviour nicely illustrates the role of the short-range correlations. Finally, we compute the CSR using Eq. (30). Although also in this case the calculation can be analytically performed using complex coordinates, as was previously done in the non-interacting situation, the resulting expression turns out to be very cumbersome; hence 12 we resort to the numerical evaluation of the formula S(~q) = Z − n20 1 2 d~ re −i~ q·~rg2(r) , (31) the function g(r) being defined in Eq. (30). The outcome is shown in Fig. 9 where it is clearly seen that results from Eq. (31) coincide with those from Eq. (13) at large q (say q > 4 fm −1), as they should, since in this domain of momenta the associated wavelengths are so small that the system appears to the probe as a collection of uncorrelated fermions. On the other hand for, say, 2 .5 < q < 4 fm −1 the two sum rules differ substantially due to the action of the correlations among the fermions. Finally, for smaller q this difference tends to disappear as both sum rules should go to zero when q vanishes. Noteworthy is that the sum rule arising from Eq. (31) reaches the asymptotic value 1 from below. VI. CONCLUSIONS In this work we have deduced expressions for the one- and two-particle propagators as functions of space-time coordinates, focusing on the hole propagator and the density-density correlator. To our knowledge these expressions were not previously available in the literature. We find that both propagators have infinities at equal time, a problem that is being addressed in other work. Next we have explored the asymptotic space-time behaviour of both Green functions and have found a transition from a diffractive regime to a uniform one as the time difference between the fields (for the one-particle propagator) or between the densities (for the two-particle propagator) grows. From the formal point of view this relates to the fact that both propagators for zero time difference are purely imaginary, but then the real parts start to develop as the time difference grows. Concerning the dependence upon kF , our analysis shows that both G0 and Π 0 scale once appropriate measures for space and time are chosen. This outcome goes in parallel with the situation in frequency-momentum space. However the divergence affecting Π 0 at ∆ t = 0 prevents the analysis of second-kind scaling for this propagator at vanishing ∆ t.We have found that the key ingredient contained in the propagators is the momentum distribution n(k). For example the Coulomb sum rule can be directly expressed in terms of the pair correlation function, which can, of course, be obtained once n(k) is known. 
We have explored the consequences arising from a modification around the Fermi surface of n(k) away from the pure Fermi gas result, finding that this induces striking effects in the pair correlation function at short distances. This in turn leads to major differences in the Coulomb sum rule for momenta between about 2.5 and 4 fm⁻¹ for the model used in the present study, suggesting that it would be interesting to explore the responses of our infinite Fermi system to external probes, employing in the calculation of the polarization propagator our modified momentum distribution function given in Eq. (28). This, of course, is meant to account for the correlations (possibly of short range) among the fermions.

A further issue, in some sense complementary to the above one, deserves consideration. It relates to the passage from an infinite to a finite system, the latter obviously of concern for nuclear physics. As a first approximation this transition can be accomplished by accounting for the modification of the density of the single-particle states induced by the presence of the system's surface according to the prescription given by Feshbach. In a preliminary investigation we have done so and found, as expected, that the impact of the surface in coordinate space on the system's response is only felt at low momenta. More specifically, the confinement on the one hand entails oscillations of the system's density in coordinate space, and on the other enlarges the momentum distribution n(k) to momenta greater than those of the corresponding infinite system with equal density. At the same time it digs a hole at low k in the momentum distribution. Thus confinement (in leading order) and short-range correlations appear to work in the same direction at large momenta. The disentangling of the interplay between the two effects is a problem we intend to address in forthcoming work. We will also explore in depth how the scaling of first kind is reflected in the space-time coordinates.

Finally, since much of the physics we are addressing here occurs at large momenta, it is imperative to extend the present treatment to the relativistic context, with the goal of providing a comparison between the results one would obtain using the present approach and those already obtained in other approaches to electroweak superscaling. For instance, it will be of interest to answer the question: How will scaling in kF of the imaginary part of the polarization propagator, occurring for all values of space-time coordinates in the non-interacting, homogeneous case, be affected (and eventually disrupted) by confinement and short-range correlations in both the non-relativistic and relativistic context?

Acknowledgments

We would like to thank Prof. G. Chanfray and Dr. H. Hansen for useful discussions. This work was partially supported (TWD) by the U.S. Department of Energy under cooperative agreement DE-FC02-94ER40818.

APPENDIX A

In this appendix we analyze the behaviour of the propagator G0(x, x′) for very small ∆t. For this purpose we use the well-known asymptotic expansion of the error function
$$
\mathrm{erf}(z) = 1 - \frac{1}{\sqrt{\pi}\,z}\, e^{-z^2} \left[ 1 + \sum_{m=1}^{\infty} \frac{(-1)^m (2m-1)!!}{(2z^2)^m} \right]. \tag{A1}
$$
Then from the above, after some algebra, in leading order of ∆t one obtains for the third term on the right-hand side of the propagator in Eq.
(4) 1 2 kF |~x − ~x′| 2∆ t ( eik F |~x−~x′| kF |~x − ~x′| + 2∆ t − e−ik F |~x−~x′| kF |~x − ~x′| − 2∆ t ) = − 1 2i∆t sin( kF |~x − ~x′|) − cos( kF |~x − ~x′|) kF |~x − ~x′| (A2) which exactly cancels the second term of the right-hand side; hence Eq. (7) follows. APPENDIX B Here we compute the integral in Eq. (12) working in the complex z-plane. Setting q = q/k F , we have I = ∞ 0 dzj 21 (z)j0(qz ) = 1 2 ∞−∞ dz (sin z z2 − cos z z )2 sin( qz ) qz = 1 2q ∞−∞ dz z5 (sin z − z cos z)2 sin( qz ) . (B1) The integrand, which behaves as ∼ z2 for z → 0, is a regular analytic function; hence the integration path along the real axis can be deformed by inserting a very small semicircle going around the origin from below. This we indicate with the symbol % . Closing the integration path along a large semicircle in the Im z > 0 region, we thus get I = i 8q & ∞−∞ dz z5 { (1 − iz )2ei(2+ q)z − (1 − iz )2ei(2 −q)z + (1 + iz )2ei(q−2) z −(1 + iz )2e−i(q+2) z − 2(1 + z2)eiqz + 2(1 + z2)e−iqz } , (B2) 15 where the fourth and the sixth term on the right-hand side do not contribute because for these the integration path has to be closed in the Im z < 0 domain where no singularities exist. It is then convenient to split Eq. (B2) into two pieces according to I = I1 + I2 (B3) with I1 = i 8q & ∞−∞ dz z5 [ (1 − iz )2ei(q+2) z − 2(1 + z2)eiqz ] (B4) and I2 = i 8q  % ∞−∞ dz z5 (1 + iz )2ei(q−2) z if q > 2 − % ∞−∞ dz z5 (1 − iz )2ei(2 −q)z if q < 2 . (B5) The straightforward, while somewhat tedious, computation of the residues then yields I1 = − π 8q { 1 4! (q + 2) 4 − 2 3! (q + 2) 3 + 1 2 (q + 2) 2 − 2 4! q4 + q2 } , (B6) I2 = − π 8q { 1 4! (q − 2) 4 + 2 3! (q − 2) 3 + 1 2 (q − 2) 2 } if q > 2 (B7) and I2 = + π 8q { 1 4! (q − 2) 4 + 2 3! (q − 2) 3 + 1 2 (q − 2) 2 } if q < 2 . (B8) As a result of the cancellation occurring between Eqs. (B6) and (B7) one thus finds I = I1 + I2 = 0 if q > 2 (B9) and I = I1 + I2 = π 8 ( 4 3 − q + 1 12 q3 ) if q < 2 . (B10) APPENDIX C In this appendix we perform the integrals in Eq. (19). We focus on the first piece, since the second one is immediately derived once the first is known. For these pieces three angular integrations are trivial whereas the fourth should be approached with care owing to the 16 presence of the θ-function, leading naturally to the splitting of the integral over the modulus of the vector ~q into three pieces: Π0(x, 0) θ(tx) = −4i r kF 0 dk (2 π)2 k2 × { ∞ 2kF dq (2 π)2 qe −i q2 2mtx sin( qr ) 1 −1 d cos θe −i qk cos θ mtx + 2kF kF dq (2 π)2 qe −i q2 2mtx sin( qr ) 1 max [ k2 F−q2−k2 2qk ,−1 ] d cos θe −i qk cos θ mtx (C1) + kF 0 dq (2 π)2 qe −i q2 2mtx sin( qr ) 1 k2 F−q2−k2 2qk d cos θe −i qk cos θ mtx θ ( 1 − k2 F − q2 − k2 2qk )} . The remaining angular integration is trivial and the integral over the modulus of the vector ~k, while somewhat cumbersome, can be performed. Introducing for convenience a time variable with the dimensions of a length squared, τ = tx/m , one arrives at the expression Π0(x, 0) θ(tx) = − i 2π4τ r { k2 F ∞ 2kF dqe −i τ 2q2 sin( qr )j1(τ qk F )+ 2kF kF dqe −i τ 2q2 sin( qr )j1(τ q (q − kF ))( q − kF )2 1 τ 2kF kF dq sin( qr ) [ e−i τ 2q2 cos( τ q (kF − q)) − e−iτ q 2 (kF q cos ( τ ( q2 2 − qk F )) 1 τ q 2 sin ( τ ( q2 2 − qk F )) )] 1 τ kF 0 dq sin( qr )e−iτ qk F sin( τ q 2) ( 1 τ q 2 + ikF q ) cos(2 kF r) − 1 2τ r } . (C2) Then, for the first term in Eq. 
(C2), the q-integration between 2 kF and ∞ yields: Π0(1) (x, 0) θ(tx) = − 2 (2 π)4 θ(tx) τ 3 1 z {√2π(1 − i)e−2iρz cos( ρ2 + z2) (C3) +( −2iρ sin( ρ + z)2 + e−i(ρ+z)2 z)g∗(−∞ , +∞, ρ + z) + ei(ρ−z)2 (2 ρ − z)g∗(−∞ , +∞, ρ − z) } ;17 for the second term in Eq. (C2) the q-integration between kF and 2 kF yields Π0(2 a)(x, 0) θ(tx) = − i (2 π)4 θ(tx) τ 3 1 z × { − sin(4 ρz ) ρ (1 − e−i8ρ2 ) + 4 e−iρ 2 ρ sin(2 ρz ) + √π 1 + i 2 [ e−i(ρ−z)2 erf (1 − i √2 (ρ + z) ) −ei(ρ+z)2 erf (1 − i √2 (ρ − z) ) − (e−i(ρ−z)2 e−i(ρ+z)2 )erf (1 − i √2 z ) −i√3 ( erf (1 + i √21 √3 (5 ρ − z) ) − erf (1 + i √21 √3 (2 ρ − z) )) e i 3(ρ+z)2 ( z → − z) ] −2 [ e−i(ρ−z)2 (ρ − z)g(z, ρ + z, ρ − z) − ei(ρ+z)2 (ρ + z)g(−z, ρ − z, ρ + z)+ ( 1 √3 e i 3(ρ+z)2 (ρ + z)g ( 3ρ − z √3 , 5ρ − z √3 , ρ + z √3 ) − (z → − z) )] −(1 + i) √ π 2 [( e−i(ρ−z)2 ( erf ( 1 − i √2 (ρ + z) ) − erf (1 − i √2 z )) − (z → − z)+ (1 i e i 3(ρ+z)2 ( erf (1 + i √21 √3(5 ρ − z) ) − erf (1 + i √21 √3 (2 ρ − z) )) − (z → − z) ) − 2 1 + i √ 2 π ρ (( e i 3(ρ+z)2 g (2ρ − z √3 , 5ρ − z √3 , ρ + z √3 ) − (z → − z) ) + ( e−i(ρ−z)2 g(ρ, 2ρ, ρ − z) − (z → − z) ))]} (C4) 18 for the term embodying the j1 and Π0(2 b)(x, 0) θ(tx) = − i (2 π)4 θ(tx) τ 3 1 z × { e i 3(ρ+z)2 ( 1 √3 − 2i√3 ) √ π 21 − i 2 [ erf (1 + i √2 ( 2√3ρ − ρ + z √3 )) −erf (1 + i √2 (√3ρ − ρ + z √3 ))] − (z → − z) −e i 3(ρ+z)2 [ρ (1 − 2i) − 2iz ] g∗ (√3ρ − ρ + z √3 , 2√3ρ − ρ + z √3 , z − ρ √3 ) +e i 3(ρ−z)2 (ρ + 2 i) g∗ (√3ρ − ρ − z √3 , 2√3ρ − ρ − z √3 , ρ − z √3 ) − [ ei(ρ+z)2 √π 21 − i 2 ( erf (1 + i √2 (ρ − z) ) − erf ( −1 + i √2 z )) −e−i(ρ+z)2 √ π 21 − i 2 ( erf (1 + i √2 (ρ + z) ) − erf (1 + i √2 z ))] +ei(ρ+z)2 [ρ(1 − 2i) − 2iz ] g∗ (2 ρ + z, 3ρ + z, −ρ − z)+e−i(ρ−z)2 [√ π 2 (1 + i) ( erf (1 + i √2 (3 ρ − z) ) − erf (1 + i √2 (2 ρ − z) )) +2 i(ρ − z)g∗ (2 ρ − z, 3ρ − z, z − ρ) − ρg ∗ (2 ρ − z, 3ρ − z, −z + ρ)] +ei z2 2 √π 21 − i 2 ( erf (1 + i √2 (√2ρ + z √2 )) − erf (1 + i √2 (√2ρ − z √2 ))) 1 2ρ [2e−iρ (ρ−2z) − e−4iρ (2 ρ−z) − 2e−iρ (3 ρ+2 z) + e−4iρ (2 ρ+z) − 2e−iρ (ρ+2 z) + e−4iρ (2 ρ+z) −2e−i(2 z2 +5 ρ2−6ρz ) + e−2i(z2+5 ρ2−4ρz ) ]} (C5) for the other terms. Finally for the third term in Eq. (C2) the q-integration between 0 and kF yields: Π0(3) (x, 0) θ(tx) = i 2 (2 π)4 θ(tx) τ 3 1 z { − sin(2 ρz ) iρ + √π 1 + i 2 [ erf (1 − i 2 (ρ + z)) + erf (z → − z) ] ×(e− i 2(ρ−z)2 − e− i 2(ρ+z)2 ) + [ − (−z + ρ(1 − i)) e− i 2(ρ−z)2 g ( − ρ − z √2 , ρ + z √2 , ρ − z √2 ) ( z → − z) ] sin(2 ρz ) ρ e−4iρ 2 √π 1 − i 2 [ e i 2(ρ−z)2 ( erf (1 + i 2 (3 ρ − z)) − erf ( 1 + i 2 (ρ − z))) − (z → − z) ] −(e i 2(ρ−z)2 g∗ ( ρ − z √2 , 3ρ − z √2 , ρ − z √2 ) − (z → − z)) + 1 − cos(4 ρz ) 2z } . (C6) 19 It is worth noticing that the above formulas embody the physics of diffraction (through the familiar erf ) and the attenuation of the propagator (through the Meijer g-functions). R. Subedi et al. , Science, Vol.320, 1476 (2008). T. W. Donnelly and I. Sick, Phys.Rev. Lett. 82 , 3212 (1999). T. W. Donnelly and I. Sick, Phys. Rev. C 60 ,065502 (1999). M. B. Barbaro, R. Cenni, A. De Pace, T. W. Donnelly, and A. Molinari, Nucl. Phys. A643 ,137 (1998). J. E. Amaro, M. B. Barbaro, J. A. Caballero, T. W. Donnelly, and A. Molinari, Nucl. Phys. A697 , 388 (2002); A723 , 181 (2003); Phys. Rept. 368 , 317 (2002). V. De Alfaro and T. Regge, “Potential scattering” (North-Holland, Amsterdam, 1965). P. Amore, R. Cenni, T. W. Donnelly and A. Molinari, Nucl. Phys. A 615 , 353 (1997). M. Abramowitz and I. 
Stegun, “Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables”, New York: Dover, ISBN 0-486-61272-4 (1965).
A. S. Wightman, “Quantum Field Theory in Terms of its Vacuum Expectation Values”, Phys. Rev. 860 (1956).
A. L. Fetter and J. D. Walecka, Quantum Theory of Many-Particle Systems (McGraw-Hill, 1971).
J. M. Luttinger, Phys. Rev. 119, 1153 (1960).
A. DeShalit and H. Feshbach, “Theoretical Nuclear Physics Vol. 1: Nuclear Structure” (Wiley, 1990).

FIG. 1: The equal-time hole propagator normalized to the density n0 as given in Eq. (7) versus kF r (r is the modulus of the relative distance).
FIG. 2: Real part, imaginary part and modulus of G0(x, x′) normalized to the density n0 for ∆t = 0.5.
FIG. 3: Same as in Fig. 2, but for ∆t = 2.
FIG. 4: Real part (a), imaginary part (b) and modulus (c) of Π0(x, 0) divided by n0² and plotted versus kF r for ∆t = 0.8 and three values of kF: 1.2, 1.36 and 1.5 fm⁻¹. Note that the same curve is obtained for any value of kF.
FIG. 5: As for Fig. 4, but now with ∆t = 1.
FIG. 6: As for Fig. 5, but now with ∆t = 1.2.
FIG. 7: The momentum distribution of an interacting Fermi system as given by Eq. (28) of the text. The following values for the parameters have been chosen: kF = 1.54 fm⁻¹, α = 0.2, β1 = 0.4 and β2 = 4. The non-interacting case (dotted line) corresponds to kF = 1.36 fm⁻¹.
FIG. 8: The pair correlation function 1 − (1/2) g²(kF r) for a free (dotted line) and for an interacting (continuous line) Fermi gas as given by our model (formula (30) of the text). The values of the parameters are the same as in Fig. 7. Note that the two curves refer to different kF.
FIG. 9: The Coulomb sum rule (see Eq. (31) of the text) for a free Fermi gas (dotted line) and for a correlated one according to our model (continuous line). The parameters are the same as in Fig. 7 and Fig. 8.
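The model behind Figs. 7 and 8, the modified momentum distribution of Eq. (28) with the parameter values quoted in the Fig. 7 caption, is easy to reproduce numerically. The sketch below is not code from the paper: the function names are mine, and the way the correlation function g(r) is obtained (as a radial transform of n(k), which reduces to 3 j1(kF r)/(kF r) for the free gas) is my reading of Eqs. (11) and (30), stated here as an assumption rather than a quotation.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the phenomenological momentum distribution of Eq. (28) and the
# pair-correlation function it generates (cf. Figs. 7 and 8).

kF, alpha, b1, b2 = 1.54, 0.2, 0.4, 4.0        # fm^-1 and dimensionless

def n_of_k(k):
    """Momentum distribution, Eq. (28)."""
    inside  = (k <= kF) * (1.0 - alpha * (k / kF) ** 2)
    outside = (k >  kF) * b1 * np.exp(-b2 * (k / kF - 1.0))
    return inside + outside

# Density for spin-1/2 particles: rho = (1/pi^2) ∫ k^2 n(k) dk  (cf. Eq. (29)).
rho, _ = quad(lambda k: k**2 * n_of_k(k) / np.pi**2, 0.0, 20.0 * kF)
print(f"density from n(k): {rho:.4f} fm^-3   (value quoted in Sec. V: 0.17)")

def g_of_r(r):
    """Exchange correlation g(r) generated by n(k); for a pure theta-function
    n(k) this reduces to the Pauli result 3 j1(kF r)/(kF r)."""
    integrand = lambda k: k**2 * n_of_k(k) * np.sinc(k * r / np.pi)  # sin(kr)/(kr)
    val, _ = quad(integrand, 0.0, 20.0 * kF, limit=200)
    return val / (rho * np.pi**2)

def g_free(r, kf=1.36):
    z = kf * r
    return 3.0 * (np.sin(z) - z * np.cos(z)) / z**3

for r in (0.2, 0.5, 1.0, 2.0):   # fm
    print(f"r={r:3.1f} fm   1 - g^2/2 (model): {1 - 0.5 * g_of_r(r)**2:6.3f}"
          f"   free gas: {1 - 0.5 * g_free(r)**2:6.3f}")
```

With these parameters the density integral comes out close to the nuclear-matter value of 0.17 fm⁻³ quoted in Sec. V, and the model's 1 − g²/2 departs from the free-gas curve mainly at short distances, which is the point illustrated by Fig. 8.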
325
Does Sprague-Grundy help solve any impartial games that don't comprise independent sub-games?

Asked Apr 30, 2020 · Modified 5 years, 2 months ago · Viewed 165 times

By "solve" I mean efficiently compute whether a given position is a P-position (first-player win). By "efficiently" I mean compared with "brute force", which involves recursively labeling every reachable position a P-position if you can directly reach at least one N-position from it. Calculating the nimber of a state with the mex formula for a general impartial game seems, if anything, slightly less efficient than brute force. Clearly Sprague-Grundy helps when the game comprises independent sub-games because you can use nimber addition on the sub-games' nimbers, but impartial games in general don't have this property. For example, the Nim variant where you also have the option of choosing one stone from every nonzero pile. Does Sprague-Grundy help at all here? Or with any other impartial games other than the ones that comprise independent sub-games? Or might you just as well use brute force when a game lacks this property?

game-theory combinatorial-game-theory

asked Apr 30, 2020 at 17:25, edited Apr 30, 2020 at 17:57, by Cosmologicon

1 Answer

For an example where the theory helps but the game does not obviously consist of independent subgames, see "Turning Turtles" in Winning Ways (Vol 2 in the original edition, Vol 3 in the new edition). Here, the game consists of a line of n coins each of which may be heads or tails. A move (for either player) consists of flipping at least one and at most k coins, provided the rightmost flipped coin goes from heads to tails (the other coins may be flipped in either direction). The values n and k are fixed parameters for the duration of the game. At first, it does not seem like this game decomposes into independent subgames, but in fact, the Sprague-Grundy theory can be applied. See Winning Ways for details.

answered May 18, 2020 at 3:33, edited May 18, 2020 at 3:52, by Ted
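To make the contrast in the question and answer concrete, here is a small illustration of the decomposable case (my own sketch, not from either post): a brute-force win/loss search over the full state space of a sum of three subtraction-game heaps, compared against computing each heap's Grundy number with mex and XOR-ing them, which is what the Sprague-Grundy theorem licenses when the sub-games are independent.

```python
from functools import lru_cache
from itertools import product

SUBTRACT = (1, 2, 3)          # each component game: remove 1, 2 or 3 tokens from one heap

def mex(values):
    """Minimum excludant: the smallest non-negative integer not in `values`."""
    m = 0
    while m in values:
        m += 1
    return m

@lru_cache(maxsize=None)
def grundy(n):
    """Grundy number of a single heap of size n in the subtraction game."""
    return mex({grundy(n - s) for s in SUBTRACT if s <= n})

@lru_cache(maxsize=None)
def brute_force_win(heaps):
    """True if the player to move wins the *whole* game (sum of heaps),
    found by exploring the product state space directly."""
    moves = [tuple(h - s if i == j else h for j, h in enumerate(heaps))
             for i, _ in enumerate(heaps) for s in SUBTRACT if s <= heaps[i]]
    return any(not brute_force_win(m) for m in moves)

# Compare the two methods on all positions with heaps of up to 6 tokens.
for heaps in product(range(7), repeat=3):
    sg_win = (grundy(heaps[0]) ^ grundy(heaps[1]) ^ grundy(heaps[2])) != 0
    assert sg_win == brute_force_win(heaps)
print("Sprague-Grundy (XOR of component nimbers) agrees with brute force on all",
      7**3, "positions.")
```

For games that do not decompose, such as the "take one from every nonzero pile" Nim variant in the question, the XOR shortcut is not available and one falls back on something like brute_force_win over the whole position, which is exactly the trade-off the question is about.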
326
Published Time: Mon, 23 Jun 2025 21:19:44 GMT Induction puzzles - Wikipedia
===============

From Wikipedia, the free encyclopedia

One type of induction puzzle concerns the wearing of colored hats, where each person in a group can only see the color of those worn by others, and must work out the color of their own.

Induction puzzles are logic puzzles, which are examples of multi-agent reasoning, where the solution evolves along with the principle of induction.
A puzzle's scenario always involves multiple players with the same reasoning capability, who go through the same reasoning steps. According to the principle of induction, a solution to the simplest case makes the solution of the next complicated case obvious. Once the simplest case of the induction puzzle is solved, the whole puzzle is solved subsequently.

Typical tell-tale features of these puzzles include any puzzle in which each participant has a given piece of information (usually as common knowledge) about all other participants but not themselves. Also, usually, some kind of hint is given to suggest that the participants can trust each other's intelligence — they are capable of theory of mind (that "every participant knows modus ponens" is common knowledge). Also, the inaction of a participant is a non-verbal communication of that participant's lack of knowledge, which then becomes common knowledge to all participants who observed the inaction.

The muddy children puzzle is the most frequently appearing induction puzzle in the scientific literature on epistemic logic. It is a variant of the well-known wise men or cheating wives/husbands puzzles. Hat puzzles are induction puzzle variations that date back to as early as 1961. In many variations, hat puzzles are described in the context of prisoners; in other cases, they are described in the context of wise men.

Muddy Children Puzzle

Description

A group of attentive children is told that some of them have muddy faces. Each child can see the faces of the others, but cannot tell if his or her own face is muddy. The children are told that those with muddy faces must step forward, but any child with a clean face who steps forward will be punished. At the count of three, every child who believes that his or her face is muddy must step forward simultaneously; any child who signals to another in any way will be punished. If any child with a muddy face has not stepped forward, the process will be repeated.

Logical solution

Assuming that each child has—and knows each of the others to have—perfect logic, all children with muddy faces (X of them) will step forward together on turn X. The children have different information, depending on whether their own face is muddy or not. Each of the X muddy children sees X − 1 muddy faces, and knows that those children will step forward on turn X − 1 if they are the only muddy faces. When that does not occur, each muddy child knows that he or she is also muddy, and steps forward on turn X. Each child who is not muddy sees X muddy faces, and will not expect anyone to step forward until at least turn X.

Assume there are two children, Alice and Bob, and that only Alice is muddy (X = 1). Alice knows that "some" children have muddy faces but that nobody else's face is muddy, meaning that her own face must be muddy, and she steps forward on turn one. Bob, seeing Alice's muddy face, has no way of knowing on turn one if his own face is muddy or not until Alice steps forward (indicating that his own face must be clean). If both Alice and Bob are dirty (X = 2), each is in the position of Bob when X = 1: neither can step forward on turn one.
However, by turn two Bob knows that Alice must have seen that his face is muddy (because she did not step forward on turn one), and so he steps forward on turn two. Using the same logic, Alice also steps forward on turn two. Assume that there is a third child, Charlie. If only Alice is muddy (X = 1), she will see no muddy faces and will step forward on turn one. If both Alice and Bob are muddy (X = 2), neither can step forward on turn one, but each will know by turn two that the other saw a muddy face—which they can see is not Charlie's—so their own face must be muddy, and both will step forward on turn two. Charlie, seeing two muddy faces, does not know on turn two whether his own face is muddy or not until Alice and Bob both step forward (indicating that his own face is clean). If all three are muddy (X = 3), each is in the position of Charlie when X = 2: when two people fail to step forward on turn two, each knows that the other sees two muddy faces, meaning that their own face must be muddy, and each steps forward on turn three. It can be proven that X muddy children will step forward at turn X.
Game-theoretic solution
(Figure: representation of the Muddy Children Puzzle for two players in extensive form. The preliminary move by nature is colored green, Alice is colored red and Bob is colored blue; the game has a single Nash equilibrium, whose predicted actions are colored black.)
The muddy children puzzle can also be solved using backward induction from game theory. It can be represented as an extensive-form game of imperfect information. Every player has two actions — stay back and step forward. There is a move by nature at the start of the game, which determines the children with and without muddy faces. The children do not communicate, as in non-cooperative games. Every stroke (count of three) is a simultaneous move by the children. It is a sequential game of unlimited length. The game-theoretic solution needs some additional assumptions: All children are rational and all children's rationality is common knowledge; this means that Alice is rational, Alice knows that Bob is rational, Alice knows that Bob knows that Charlie is rational, and so on, and vice versa. Stepping forward without having a muddy face results in a big penalty. Stepping forward with a muddy face results in a reward. Every stroke results in a minor penalty (a discount factor) for every child until any of them steps forward, and any multiple of the minor penalty is always a lesser evil than the big penalty. If only Alice is muddy, the last assumption makes it irrational for her to hesitate. If Alice and Bob are muddy, Alice knows that Bob's only reason for staying back after the first stroke is the fear of receiving the big penalty for stepping forward without a muddy face. In the case of X muddy children, receiving X times the minor penalty is still better than the big penalty.
The King's Wise Men hat puzzle
Description
The King called the three wisest men in the country to his court to decide who would become his new advisor. He placed a hat on each of their heads, such that each wise man could see all of the other hats, but none of them could see their own. Each hat was either white or blue.
The king gave his word to the wise men that at least one of them was wearing a blue hat; in other words, there could be one, two, or three blue hats, but not zero. The king also announced that the contest would be fair to all three men. The wise men were also forbidden to speak to each other. The king declared that whichever man stood up first and correctly announced the colour of his own hat would become his new advisor. The wise men sat for a very long time before one stood up and correctly announced the answer. What did he say, and how did he work it out? Solution [edit] The King's Wise Men is one of the simplest induction puzzles and one of the clearest indicators to the method used. Suppose that there was one blue hat. The person with that hat would see two white hats, and since the king specified that there is at least one blue hat, that wise man would immediately know the colour of his hat. However, the other two would see one blue and one white hat and would not be able to immediately infer any information from their observations. Therefore, this scenario would violate the king's specification that the contest would be fair to each. So there must be at least two blue hats. Suppose then that there were two blue hats. Each wise man with a blue hat would see one blue and one white hat. Supposing that they have already realized that there cannot be only one (using the previous scenario), they would know that there must be at least two blue hats and therefore, would immediately know that they each were wearing a blue hat. However, the man with the white hat would see two blue hats and would not be able to immediately infer any information from his observations. This scenario, then, would also violate the specification that the contest would be fair to each. So there must be three blue hats. Since there must be three blue hats, the first man to figure that out will stand up and say blue. Alternative solution: This does not require the rule that the contest be fair to each. Rather it relies on the fact that they are all wise men, and that it takes some time before they arrive at a solution. There can only be three scenarios: one blue hat, two blue hats or three blue hats. If there was only one blue hat, then the wearer of that hat would see two white hats, and quickly know that he has to have a blue hat, so he would stand up and announce this straight away. Since this hasn't happened, then there must be at least two blue hats. If there were two blue hats, then either one of those wearing a blue hat would look across and see one blue hat and one white hat, but not know the colour of their own hat. If the first wearer of the blue hat assumed he had a white hat, he would know that the other wearer of the blue hat would be seeing two white hats, and thus the 2nd wearer of the blue hat would have already stood up and announced he was wearing a blue hat. Thus, since this hasn't happened, the first wearer of the blue hat would know he was wearing a blue hat, and could stand up and announce this. Since either one or two blue hats is so easy to solve, and no one has stood up quickly, then they must all be wearing blue hats. Josephine's Problem [edit] Description [edit] In Josephine's Kingdom every woman has to pass a logic exam before being allowed to marry. Every married woman knows about the fidelity of every man in the Kingdom except for her own husband, and etiquette demands that no woman should be told about the fidelity of her husband. 
Also, a gunshot fired in any house in the Kingdom will be heard in any other house. Queen Josephine announced that at least one unfaithful man had been discovered in the Kingdom, and that any woman knowing her husband to be unfaithful was required to shoot him at midnight following the day after she discovered his infidelity. How did the wives manage this? Solution [edit] Josephine's Problem is another good example of a general case. If there is only 1 unfaithful husband, then every woman in the Kingdom knows that except for his wife, who believes that everyone is faithful. Thus, as soon as she hears from the Queen that unfaithful men exist, she knows her husband must be unfaithful, and shoots him. If there are 2 unfaithful husbands, then both their wives believe there is only 1 unfaithful husband (the other one). Thus, they will expect that the case above will apply, and that the other husband's wife will shoot him at midnight on the next day. When no gunshot is heard, they will realise that the case above did not apply, thus there must be more than 1 unfaithful husband and (since they know that everyone else is faithful) the extra one must be their own husband. If there are 3 unfaithful husbands, each of their wives believes there to be only 2, so they will expect that the case above will apply and both husbands will be shot on the second day. When they hear no gunshot, they will realize that the case above did not apply, thus there must be more than 2 unfaithful husbands and as before their own husband is the only candidate to be the extra one. In general, if there are n unfaithful husbands, each of their wives will believe there to be n-1 and will expect to hear a gunshot at midnight on the n-1 th day. When they do not, they know their own husband was the n th. This problem is also known as the Cheating Husbands Problem, the Unfaithful Wives Problem, the Muddy Children Problem. It is logically identical to the Blue Eyes Problem. This problem also appears as a problem involving black hats and white hats in C. L. Liu's classic textbook 'Elements of Discrete Mathematics'. Alice at the Convention of Logicians [edit] Description [edit] At the Secret Convention of Logicians, the Master Logician placed a band on each attendee's head, such that everyone else could see it but the person themselves could not. There were many different colours of band. The Logicians all sat in a circle, and the Master instructed them that a bell was to be rung in the forest at regular intervals: at the moment when a Logician knew the colour on his own forehead, he was to leave at the next bell. They were instructed not to speak, nor to use a mirror or camera or otherwise avoid using logic to determine their band colour. In case any impostors had infiltrated the convention, anyone failing to leave on time would be gruffly removed at the correct time. Similarly, anyone trying to leave early would be gruffly held in place and removed at the correct time. The Master reassured the group by stating that the puzzle would not be impossible for any True Logician present. How did they do it? Solution [edit] Alice at the convention of Logicians is general induction plus a leap of logic. Leap of logic: Every colour must appear at least twice around the circle. This is because the Master stated that it would not be impossible for any Logician to solve the puzzle. 
If any colour existed only once around the circle, the Logician who bore it would have no way of knowing that the colour even existed in the problem, and it would be impossible for them to answer. Each of the Logicians can look around the circle and count the number of times they see each colour. Suppose that you are one of the Logicians and you see another colour only once. Since you know each colour must exist at least twice around the circle, the only explanation for a singleton colour is that it is the colour of your own band. For the same reason, there can only be one such singleton colour, and so you would leave on the first bell. Likewise any Logicians who see another colour only once should be able to determine their own colour, and will either leave with dignity or be thrown out as an infiltrator. Equivalently, any colour for which there are only two bands of that colour will be eliminated after the first bell has rung. Thereafter there must be at least three bands of any remaining colour. Suppose you do not see any colour once, but you do see a colour twice. If these were the only bands of this colour, then these two Logicians ought to have left at the first bell. Since they did not, that can only be because your own band is the same colour, so you can leave at the second bell. Therefore, every logician would watch until a group of a given colour that they expected to leave failed to leave. Then they would know that they had that colour, and would leave on the next bell. When only one colour remained, that colour would all leave on the next bell, because they would know that they could not have any other colour (since then it would be impossible for them to know their colour). Basic hat puzzle [edit] Description [edit] A number of players are each wearing a hat, which may be of various specified colours. Players can see the colours of at least some other players' hats, but not that of their own. With highly restricted communication or none, some of the players must guess the colour of their hat. The problem is to find a strategy for the players to determine the colours of their hats based on the hats they see and what the other players do. In some versions, they compete to be the first to guess correctly; in others, they can work out a strategy beforehand to cooperate and maximize the probability of correct guesses. One variation received some new publicity as a result of Todd Ebert's 1998 Ph.D.thesis at the University of California, Santa Barbara. It is a strategy question about a cooperative game, which has connections to algebraic coding theory. Three players are told that each of them will receive either a red hat or a blue hat. They are to raise their hands if they see a red hat on another player as they stand in a circle facing each other. The first to guess the colour of his or her hat correctly wins. All three players raise their hands. After the players have seen each other for a few minutes without guessing, one player announces "Red", and wins. How did the winner do it, and what is the color of everyone's hats? Solution [edit] First, if two people had blue hats, not everyone's hand would have been raised. Next, if player 1 had seen a blue hat on player 2 & a red hat on player 3, then player 1 would have known immediately that his own hat must be red. Thus any player who sees a blue hat can guess at once. Finally, the winner realizes that since no one guesses at once, there must be no blue hats, so every hat must be red. 
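The elimination argument above can be checked mechanically. The short Python sketch below (the helper names are ours, purely for illustration) enumerates all eight red/blue assignments, keeps only those consistent with every hand being raised and with nobody being able to guess at once, and confirms that only the all-red assignment survives.

```python
from itertools import product

def all_hands_raised(hats):
    # Each player raises a hand iff they see a red hat on at least one other player.
    return all(any(hats[j] == "R" for j in range(3) if j != i) for i in range(3))

def can_guess_at_once(hats, i):
    # With all hands raised, a player who sees a blue hat on someone knows his own
    # hat must be red (otherwise the third player would see no red hat at all).
    return "B" in [hats[j] for j in range(3) if j != i]

consistent = [
    hats
    for hats in product("RB", repeat=3)
    if all_hands_raised(hats)                                   # rules out two or three blue hats
    and not any(can_guess_at_once(hats, i) for i in range(3))   # nobody answered immediately
]
print(consistent)   # [('R', 'R', 'R')] -- every hat must be red, as argued above
```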
In the case where every player has to make a guess, but they are free to choose when to guess, there is a cooperative strategy that allows every player to guess correctly unless all the hats are the same colour. Each player should act as follows: Count the numbers b of blue hats and r of red hats that you see. Wait b seconds or r seconds, whichever is sooner. If nobody has yet spoken, guess that your hat is blue if you can see fewer blue hats than red hats, or red if you can see fewer red hats than blue hats. If you have not yet spoken, guess that your hat is of the opposite colour to that of one of the first people to speak. Suppose that in total there are B blue hats and R red hats. There are three cases. If B = R then those players wearing blue hats see B−1 blue hats and B red hats, so wait B−1 seconds then correctly guess that they are wearing a blue hat. Similarly, those players wearing a red hat will wait R−1 seconds before guessing correctly that they are wearing a red hat. So all players make a correct guess at the same time. If B<R then those wearing a blue hat will see B−1 blue hats and R red hats, whilst those wearing a red hat will see B blue hats and R−1 red hats. Since B−1 <B ≤R−1, those players wearing a blue hat will be the first to speak, guessing correctly that their hat is blue. The other players then guess correctly that their hat is red. The case where R<B is similar. Two-hat variant [edit] Description [edit] Four prisoners each wear a hat which is either black or white. The front prisoner is concealed behind a screen. Four prisoners are arrested for a crime, but the judge offers to spare them from punishment if they can solve a logic puzzle. Three of the men stand a line. A faces the wall, B faces A, and C faces B and A. A fourth man is put behind a wall. All four men wear hats; there are two black hats and two white hats, each prisoner is wearing one of the hats, and each of the prisoners see only the hats in front of him but neither on himself nor behind him. The fourth man behind the screen can't see or be seen by any other prisoner. No communication among the prisoners is allowed. If any prisoner can figure out what color hat he has on his own head with 100% certainty (without guessing) he must then announce it, and all four prisoners go free. Solution [edit] The prisoners know that there are only two hats of each color. So if C observes that A and B have hats of the same color, C would deduce that his own hat is the opposite color. However, if A and B have hats of different colors, then C can say nothing. The key is that prisoner B, after allowing an appropriate interval, and knowing what C would do, can deduce that if C says nothing the hats on A and B must be different; able to see A's hat, he can deduce his own hat color. In common with many puzzles of this type, the solution relies upon the assumption that all participants are rational and intelligent enough to make the appropriate deductions. After solving this puzzle, some insight into the nature of communication can be gained by pondering whether the meaningful silence of prisoner C violates the "No communication" rule (given that communication is usually defined as the "transfer of information").[citation needed] Three-hat variant [edit] Description [edit] In this variant there are 3 prisoners and 3 hats. Each prisoner is assigned a random hat, either red or blue. Each person can see the hats of two others, but not their own. On a cue, they each have to guess their own hat color or pass. 
They win release if at least one person guessed correctly and none guessed incorrectly (passing is neither correct nor incorrect).
Solution
This puzzle does not have a 100% winning strategy, but it can be won with a 75% chance: a player who sees two hats of different colours passes, while a player who sees two hats of the same colour guesses the opposite colour (this is the three-player strategy described under Ebert's version below). When the colours of the hats are considered as bits, this problem can be solved using coding theory, for example with Hamming codes.
Four-hat variant
Description
In a variant of this puzzle, the prisoners know that there are 2 black hats and 2 white hats, and there is a wall between A and B, yet prisoners B, C and D can see who is in front of them: D sees B, C and the wall, C sees B and the wall, and B sees only the wall. (A again cannot be seen and is only there to wear one of the black hats.) How can they deduce the colours of all of them without communicating?
Solution
There are two cases: in the trivial case, two of the four prisoners wear the black hats. Each of the other two prisoners can see that one prisoner is wearing the off-colour hat. In the non-trivial case, two of the four prisoners wear hats of the same colour, while A and C wear the black hats. After a while, all four prisoners should be able to deduce that, because D and B were not able to state the colour of their own hat, A and C must be wearing the black hats.
Five-hat variant
Description
In another variant, only three prisoners and five hats of known colours (in this example two black and three white) are involved. The three prisoners are ordered to stand in a straight line facing the front, with A in front and C at the back. They are told that there will be two black hats and three white hats. One hat is then put on each prisoner's head; each prisoner can only see the hats of the people in front of him and not his own. The first prisoner who is able to announce the colour of his hat correctly will be released. No communication between the prisoners is allowed.
Solution
Assume that A wears a black hat: If B also wears a black hat, C can immediately tell that he is wearing a white hat after looking at the two black hats in front of him. If B wears a white hat, C will be unable to tell the colour of his hat (because he sees one black and one white), so B can quickly deduce from A's black hat and C's lack of response that he (B) is wearing a white hat. So if A wears a black hat there will be a fairly quick response from B or C. Assume that A wears a white hat: C does not see two black hats, so he is unable to tell his hat colour, and B sees only a white hat, so he cannot tell anything about his own hat. In this case A, B and C remain silent for some time, until A finally deduces that he must have a white hat because C and B have remained silent. As mentioned, there are three white hats and two black hats in total, and the three prisoners know this; in this riddle it can be assumed that all three prisoners are very clever. If C could not guess the colour of his own hat, that is because he saw either two white hats or one of each colour.
If he saw two black hats, he could have deduced that he was wearing a white hat.
Ten-hat variant
Description
In this variant there are 10 prisoners and 10 hats. Each prisoner is assigned a random hat, either red or blue, but the number of each colour of hat is not known to the prisoners. The prisoners are lined up single file, where each can see the hats in front of him but not behind. Starting with the prisoner at the back of the line and moving forward, they must each, in turn, say only one word, which must be "red" or "blue". If the word matches their hat colour they are released; if not, they are killed on the spot. A sympathetic guard warns them of this test one hour beforehand and tells them that they can formulate a plan whereby, following the stated rules, 9 of the 10 prisoners will definitely survive and 1 has a 50/50 chance of survival. What is the plan to achieve the goal?
Solution
The prisoners agree that if the first prisoner sees an odd number of red hats, he will say "red", and otherwise "blue"; this announcement encodes the parity of the red hats he can see. Every later prisoner counts the red hats he can see in front of him and the red hats that have been announced behind him (excluding the first announcement); comparing that count with the announced parity tells him his own hat colour. In this way, the nine other prisoners are guaranteed to answer correctly, while the first prisoner has a 50/50 chance.
Ten-hat variant without hearing
Description
As before, there are 10 prisoners and 10 hats. Each prisoner is assigned a random hat, either red or blue, but the number of each colour of hat is not known to the prisoners. The prisoners are distributed in the room such that they can see the hats of the others but not their own. Now, they must each, simultaneously, say only one word, which must be "red" or "blue". If the word matches their hat colour they are released, and if enough prisoners regain their liberty they can rescue the others. A sympathetic guard warns them of this test one hour beforehand. If they can formulate a plan following the stated rules, 5 of the 10 prisoners will definitely be released and be able to rescue the others. What is the plan to achieve the goal?
Solution
The prisoners pair off. In a pair (A, B), prisoner A says the colour he can see on the head of B, who says the opposite of the colour he sees on the head of A. Then, if both wear hats of the same colour, A is released (and B is not); if the colours are different, B is released (and A is not). In total, 5 prisoners answer correctly and 5 do not. This assumes the pair can communicate who is A and who is B, which may not be allowed. Alternatively, the prisoners form two groups of 5. One group assumes that the number of red hats is even, the other assumes that it is odd. Similarly to the variant with hearing, each prisoner can deduce his hat colour from his group's assumption. Exactly one group will be right, so 5 prisoners answer correctly and 5 do not. Note that the prisoners cannot find a strategy guaranteeing the release of more than 5 prisoners. Indeed, for a single prisoner, there are as many distributions of hat colours in which he says the correct answer as there are in which he does not. Hence, there are as many distributions of hat colours in which 6 or more prisoners say the correct answer as there are in which 4 or fewer do so.
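The parity strategy for the ten-hat variant with hearing is easy to simulate. The following Python sketch (function and variable names are ours, for illustration only) plays the strategy on a random hat assignment: the first prisoner announces the parity of the red hats he sees, and every later prisoner combines that announcement with what he sees ahead and hears behind.

```python
import random

def run_parity_strategy(hats):
    """hats[0] is the prisoner at the back, who speaks first and sees hats[1:]."""
    calls = []
    # The first call encodes the parity of the red hats visible to the first prisoner.
    calls.append("red" if sum(h == "red" for h in hats[1:]) % 2 == 1 else "blue")
    for i in range(1, len(hats)):
        seen_ahead = sum(h == "red" for h in hats[i + 1:])
        heard_behind = sum(c == "red" for c in calls[1:i])   # earlier prisoners' (correct) calls
        announced_odd = calls[0] == "red"
        # The announced parity, minus what is seen and heard, leaves this prisoner's own hat.
        own_is_red = (seen_ahead + heard_behind) % 2 != announced_odd
        calls.append("red" if own_is_red else "blue")
    return calls

random.seed(1)
hats = [random.choice(["red", "blue"]) for _ in range(10)]
calls = run_parity_strategy(hats)
print(sum(c == h for c, h in zip(calls, hats)))   # 9 or 10: everyone after the first is always right
```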
Countably infinite-hat variant without hearing
Description
In this variant, a countably infinite number of prisoners, each with an unknown and randomly assigned red or blue hat, line up in a single-file line. Each prisoner faces away from the beginning of the line, and each prisoner can see all the hats in front of him and none of the hats behind. Starting from the beginning of the line, each prisoner must correctly identify the colour of his hat or he is killed on the spot. As before, the prisoners have a chance to meet beforehand, but unlike before, once in line, no prisoner can hear what the other prisoners say. The question is, is there a way to ensure that only finitely many prisoners are killed?
Solution
If one accepts the axiom of choice, and assumes that the prisoners each have the (unrealistic) ability to memorize an uncountably infinite amount of information and perform computations with uncountably infinite computational complexity, the answer is yes. In fact, even if we allow an uncountable number of different colours for the hats and an uncountable number of prisoners, the axiom of choice provides a solution that guarantees that only finitely many prisoners must die, provided that each prisoner can see the hats of every other prisoner (not just those ahead of them in a line), or at least that each prisoner can see all but finitely many of the other hats. The solution for the two-colour case is as follows, and the solution for the uncountably infinite colour case is essentially the same: The prisoners standing in line form a sequence of 0s and 1s, where 0 is taken to represent blue and 1 is taken to represent red. Before they are put into the line, the prisoners define the following equivalence relation over all possible sequences that they might be put into: two sequences are equivalent if they are identical after a finite number of entries. From this equivalence relation, the prisoners get a collection of equivalence classes. Assuming the axiom of choice, there exists a set of representative sequences—one from each equivalence class. (Almost every specific value is impossible to compute, but the axiom of choice implies that some set of values exists, so we assume that the prisoners have access to an oracle.) When they are put into their line, each prisoner can see all but a finite number of hats, and can therefore see which equivalence class the actual sequence of hats belongs to. (This assumes that each prisoner can perform an uncountably infinite number of comparisons to find a match, with each class comparison requiring a countably infinite number of individual hat comparisons.) They then proceed to guess their hat colour as if they were in the representative sequence from the appropriate equivalence class. Because the actual sequence and the representative sequence are in the same equivalence class, their entries are the same after some finite number N of prisoners. All prisoners after these first N prisoners are saved. Because the prisoners have no information about the colour of their own hat and would make the same guess whichever colour it has, each prisoner has a 50% chance of being killed.
It may seem paradoxical that an infinite number of prisoners each have an even chance of being killed and yet it is certain that only a finite number are killed. The resolution of this paradox lies in the fact that the function employed to determine each prisoner's guess is not a measurable function. To see this, consider the case of zero prisoners being killed. This happens if and only if the actual sequence is one of the selected representative sequences. If the sequences of 0s and 1s are viewed as binary representations of a real number between 0 and 1, the representative sequences form a non-measurable set. (This set is similar to a Vitali set, the only difference being that equivalence classes are formed with respect to numbers with finite binary representations rather than all rational numbers.) Hence no probability can be assigned to the event of zero prisoners being killed. The argument is similar for other finite numbers of prisoners being killed, corresponding to a finite number of variations of each representative.
Countably infinite hat problem with hearing
Description
This variant is the same as the last one, except that prisoners can hear the colours called out by other prisoners. The question is, what is the optimal strategy for the prisoners such that the fewest of them die in the worst case?
Solution
It turns out that, if one allows the prisoners to hear the colours called out by the other prisoners, it is possible to guarantee the life of every prisoner except the first, who dies with a 50% probability. To do this, we define the same equivalence relation as above and again select a representative sequence from each equivalence class. Now, we label every sequence in each class with either a 0 or a 1. First, we label the representative sequence with a 0. Then, we label any sequence which differs from the representative sequence in an even number of places with a 0, and any sequence which differs from the representative sequence in an odd number of places with a 1. In this manner, we have labeled every possible infinite sequence with a 0 or a 1, with the important property that any two sequences which differ by only one digit have opposite labels. Now, when the warden asks the first person to say a colour, or in our new interpretation a 0 or a 1, he simply calls out the label of the sequence he sees. Given this information, everyone after him can determine exactly what his own hat colour is. The second person sees all but the first digit of the sequence that the first person sees. Thus, as far as he knows, there are two possible sequences the first person could have been labeling: one starting with a 0, and one starting with a 1. Because of the labeling scheme, these two sequences receive opposite labels, so based on what the first person says, the second person can determine which of the two possible strings the first person saw, and thus he can determine his own hat colour. Similarly, every later person in the line knows every digit of the sequence except the one corresponding to his own hat colour: he knows those before him because they were called out, and those after him because he can see them. With this information, he can use the label called out by the first person to determine his own hat colour.
Thus, everyone except the first person always guesses correctly.
Ebert's version and Hamming codes
Description
Ebert's version of the problem states that all players who guess must guess at the same predetermined time, but that not all players are required to guess. Now not all players can guess correctly, so the players win if at least one player guesses and all of those who guess do so correctly. How can the players maximise their chance of winning?
Solution
One strategy for solving this version of the hat problem employs Hamming codes, which are commonly used to detect and correct errors in data transmission. The probability of winning will be much higher than 50%, depending on the number of players in the puzzle configuration: for example, a winning probability of 87.5% for 7 players. Similar strategies can be applied to team sizes of N = 2^k − 1 and achieve a win rate of (2^k − 1)/2^k, so the Hamming code strategy yields greater win rates for larger values of N. In this version of the problem, any individual guess has a 50% chance of being right. However, the Hamming code approach works by concentrating wrong guesses together onto certain distributions of hats. For some cases, all the players will guess incorrectly, whereas for the other cases only one player will guess, but correctly. While half of all guesses are still incorrect, this results in the players winning more than 50% of the time. A simple example of this type of solution with three players is instructive. With three players, there are eight possibilities; in two of them all players have the same colour hat, and in the other six, two players have one colour and the other player has the other colour. The players can guarantee that they win in the latter cases (75% of the time) with the following strategy: any player who observes two hats of two different colours remains silent; any player who observes two hats of the same colour guesses the opposite colour. In the two cases when all three players have the same hat colour, they will all guess incorrectly. But in the other six cases, only one player will guess, and correctly, that his hat is the opposite of his fellow players'.
See also
Epistemic logic
Common knowledge (logic)
References
^ Stuhlmüller, A.; Goodman, N.D. (June 2014). "Reasoning about reasoning by nested conditioning: Modeling theory of mind with probabilistic programs". Cognitive Systems Research. 28: 80–99. CiteSeerX 10.1.1.361.5043. doi:10.1016/j.cogsys.2013.07.003. S2CID 7602205.
^ Lucci, Stephen; Kopec, Danny (2015). Artificial Intelligence in the 21st Century. Stylus Publishing, LLC. ISBN 978-1-944534-53-0.
^ Tagiew, Rustam (2008). "Simplest Scenario for Mutual Nested Modeling in Human-Machine-Interaction". KI 2008: Advances in Artificial Intelligence. Lecture Notes in Computer Science. Vol. 5243. Springer. pp. 364–371. doi:10.1007/978-3-540-85845-4_45. ISBN 978-3-540-85844-7.
^ Fagin, Ronald; Halpern, Joseph Y.; Moses, Yoram; Vardi, Moshe Y. (March 1999). "Common knowledge revisited". Annals of Pure and Applied Logic. 96 (1–3): 89–105. arXiv:cs/9809003. doi:10.1016/S0168-0072(98)00033-5. S2CID 59551.
^ van der Hoek, Wiebe; van Ditmarsch, Hans (2007). Dynamic Epistemic Logic. Springer. ISBN 978-1-4020-5838-7.
^ "Google Scholar "Muddy Children Puzzle"". scholar.google.com. Retrieved 11 February 2020.
^ Fagin, Ronald; Halpern, Joseph Y.; Moses, Yoram; Vardi, Moshe (2004). Reasoning About Knowledge. MIT Press. ISBN 978-0262562003.
^ Hardin, Christopher; Taylor, Alan D. (2008).
"An introduction to Infinite Hat Problems"(PDF). Mathematical Intelligencer. 30 (4): 20–25. doi:10.1007/BF03038092. S2CID24613564. Archived from the original(PDF) on 2012-04-05. ^"The Prisoners' Hats – Puzzles And Riddles". www.puzzlesandriddles.com. ^"Prisoners and Hats Puzzle". CrazyforCode. 13 August 2013. ^"Robots pass 'wise-men puzzle' to show a degree of self-awareness". techxplore.com. ^Leite, João (2005). Computational Logic in Multi-Agent Systems: 5th International Workshop, CLIMA V, Lisbon, Portugal, September 29–30, 2004, Revised Selected and Invited Papers. Springer Science & Business Media. ISBN978-3-540-28060-6. ^ abTagiew, Rustam (2011). Strategische Interaktion realer Agenten Ganzheitliche Konzeptualisierung und Softwarekomponenten einer interdisziplinären Forschungsinfrastruktur (in German). Südwestdeutscher Verlag für Hochschulschriften. pp.90–95. ISBN978-3838125121. ^Weber, Roberto A. (1 December 2001). "Behavior and Learning in the "Dirty Faces" Game". Experimental Economics. 4 (3): 229–242. doi:10.1023/A:1013217320474. ISSN1573-6938. S2CID123369018. ^Huth, Michael; Ryan, Mark (26 August 2004). Logic in Computer Science: Modelling and Reasoning about Systems. Cambridge: Cambridge University Press. ISBN978-0-521-54310-1. ^Moses, Yoram; Dolet, Danny; HaIpern, Joseph Y. (1985). "Cheating husbands and other stories (Preliminary version)"(PDF). Proceedings of the fourth annual ACM symposium on Principles of distributed computing - PODC '85. pp.215–223. doi:10.1145/323596.323616. ISBN0897911687. S2CID2519017. ^Liu, Chung Laung (1985). Elements of Discrete Mathematics (2 ed.). McGraw-Hil. pp.16–17. ISBN9780071005449. ^Charatonik, Włodzimierz J. (2010). "Alice at the logicians convention"(PDF). Missouri University of Science and Technology. Archived from the original(PDF) on 2010-07-05. Retrieved 2015-07-31. ^Brown, Ezra; Tanton, James (April 2009). "A Dozen Hat Problems"(PDF). Math Horizons. 16 (4): 22–25. doi:10.1080/10724117.2009.11974827. S2CID123345434. Archived from the original(PDF) on 2017-07-17. Retrieved 2011-10-08. ^Winkler, Peter (2004). Mathematical Puzzles: A Connoisseur's Collection. A K Peters. pp.125–126. hat puzzle todd. ^Biography of Todd Ebert at California State University, Long Beach ^Gardner, Martin (1978). Aha! Insight. Scientific American. p.102. ISBN0-89454-001-7. Retrieved 2011-10-08. ^"US Nuclear Regulatory Commission Vol. 1 No. 4"(PDF). Nuclear Regulatory Commission. 2011. Retrieved 2024-10-17. ^Guo, Wenge; Kasala, Subramanyam; Rao, M. Bhaskara; Tucker, Brian. "The Hat Problem And Some Variations"(PDF). ^Havil, Julian (2008). Impossible? Surprising Solutions to Counterintuitive Conundrums. Princeton University Press. pp.50–59. ISBN9780691131313. Retrieved 2011-10-08. Retrieved from " Categories: Logic puzzles Games of mental skill Theory of mind Game theory game classes Non-cooperative games Epistemic logic Hidden categories: CS1 German-language sources (de) Articles with short description Short description matches Wikidata EngvarB from July 2015 All articles with unsourced statements Articles with unsourced statements from October 2024 Articles needing additional references from February 2024 All articles needing additional references This page was last edited on 27 February 2025, at 18:00(UTC). Text is available under the Creative Commons Attribution-ShareAlike 4.0 License; additional terms may apply. By using this site, you agree to the Terms of Use and Privacy Policy. 
Proceedings of the International School of Physics "Enrico Fermi" Course 190 "Frontiers in Modern Optics", edited by D. Faccio, J. Dudley and M. Clerici (IOS, Amsterdam; SIF, Bologna) 2016. DOI 10.3254/978-1-61499-647-7-31
Tutorial on nonlinear optics
S. Choudhary — School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Ontario, K1N 6N5, Canada
R. W. Boyd — School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Ontario, K1N 6N5, Canada; The Institute of Optics, University of Rochester, Rochester, New York, 14627, USA; Department of Physics, University of Ottawa, Ottawa, Ontario, K1N 6N5, Canada
Summary. — Nonlinear optics deals with phenomena that occur when very intense light interacts with a material medium, modifying its optical properties. Shortly after the demonstration of the first working laser in 1960 by Maiman (Nature, 187 (1960) 493), the field of nonlinear optics began with the observation of second-harmonic generation by Franken et al. in 1961 (Phys. Rev. Lett., 7 (1961) 118). Since then, interest in this field has grown and various nonlinear optical effects are utilized for purposes such as nonlinear microscopy, switching, harmonic generation, parametric downconversion, filamentation, etc. We present here a brief overview of the various aspects of nonlinear optics and some of the recent advances in the field.
© Società Italiana di Fisica
1. – Introduction to nonlinear optics
According to ref. , nonlinear optics is the study of phenomena that occur due to the modification of material properties in the presence of light of high intensity. The nonlinearity is associated with the fact that the material response varies in a nonlinear manner with the applied optical field. To study this effect, we consider the dependence of the dipole moment per unit volume, or polarization P(t), on the applied optical field strength E(t). On application of the optical field, there is displacement of both the electrons and the nuclei with respect to the centre of mass of the molecule. In the dipole approximation, an electric dipole is formed due to the charge separation between the negatively charged electron cloud and the positively charged nucleus. At optical frequencies, due to its much larger mass, the oscillations of the nucleus are much weaker than the electronic oscillations. Hence the nuclear contributions are far weaker than the electronic contributions, at least for the linear polarizability. The nonlinear susceptibility, on the other hand (manifested in terms of Raman scattering), might be comparable or even larger, depending on whether we are on or off resonance. But for all practical purposes, we neglect the nuclear contributions for simplicity in the present discussion. The bulk polarization of the entire material is thus a vector sum of the dipole moments of all the molecules [3,4]. In the linear regime, the induced dipole oscillates with the same frequency as the driving field and each molecule of the material can be viewed as a harmonic oscillator. Due to the larger mass of the nucleus, these oscillations are very weak and occur about the mean position of the molecules.
The induced polarization in this case can be expressed as

\tilde{P}(t) = \epsilon_0 \chi^{(1)} \tilde{E}(t),   (1)

where ε0 is the permittivity of free space and χ(1) is the linear susceptibility. But for larger applied fields (comparable to inter-atomic fields) and proportionately stronger oscillations, this approximation breaks down and the behaviour deviates from that of a harmonic oscillator. In this anharmonic case, nonlinear terms come into play which give rise to different frequency components in the oscillations. To account for this, we expand the polarization P(t) as a generalized power series in E(t) and include all the nonlinear contributions as

\tilde{P}(t) = \epsilon_0 \left[ \chi^{(1)} \tilde{E}(t) + \chi^{(2)} \tilde{E}^2(t) + \chi^{(3)} \tilde{E}^3(t) + \dots \right] = \tilde{P}^{(1)}(t) + \tilde{P}^{(2)}(t) + \tilde{P}^{(3)}(t) + \dots   (2)

The constants χ(2) and χ(3) are the second- and third-order nonlinear optical susceptibilities, respectively. This is a very simplified notation and does not take into account dispersion or losses, because of the assumed instantaneous nature of the response. Under general circumstances, when losses and dispersion are present, the susceptibilities depend on frequency. If the vector nature of the fields is also taken into account, then χ(1) is a tensor of rank 2, χ(2) a tensor of rank 3, and so on. P(1)(t) is called the linear polarization, while P(2)(t) and P(3)(t) are called the second- and third-order nonlinear polarizations, respectively. Thus, the polarization is composed of linear and nonlinear components. A time-varying nonlinear polarization is a source of new electromagnetic field components and hence is key to the description of nonlinear optical phenomena. This is evident in the wave equation for nonlinear media:

\nabla^2 \tilde{E} - \frac{n^2}{c^2} \frac{\partial^2 \tilde{E}}{\partial t^2} = \frac{1}{\epsilon_0 c^2} \frac{\partial^2 \tilde{P}^{NL}}{\partial t^2}.   (3)

Here, the nonlinear polarization P^NL drives the electric field E, and the term ∂²P^NL/∂t² represents the acceleration of charges in the medium. This is consistent with Larmor's theorem that accelerating charges generate electromagnetic waves. It should be noted that under certain circumstances, such as resonant excitation of atomic systems or very high applied laser field strength, the power series representation of eq. (2) may not converge. Such cases are dealt with using a formalism that includes the possibility of saturation effects. Susceptibilities may be complex or real depending on whether the nonlinear process involves exchange of energy with the medium or not, respectively. When there is no energy exchange between the interacting waves and the medium and the quantum state of the medium remains unchanged in the end (there may be population transfers between real and virtual levels, but they have a very short lifetime), the process is called a "parametric process". Examples include SHG, SFG, DFG, OPA, THG, the Kerr nonlinearity, SPM, XPM, FWM, etc., using standard notation that will be developed within this chapter. When the quantum state of the medium is changed in the end, the process is called a non-parametric process. Examples include SRS, SBS, multi-photon absorption, saturable absorption, etc. A brief description of all these processes is provided in the sections that follow.
2. – Second-order nonlinear optical processes
The discovery of second-harmonic generation (SHG) in 1961 by Franken et al. marked the beginning of the field of nonlinear optics. In 1965, ref. reported nonlinear light scattering in a quartz crystal generating light with frequency twice that of the incident beam.
Difference-frequency generation by a KDP crystal using non-collinear light beams was also reported in 1965 in ref. . Apart from second-harmonic generation, the effects that result from second-order nonlinearity, or a non-zero χ(2), include sum- and difference-frequency generation, optical parametric oscillation and spontaneous parametric downconversion. Material symmetry plays a significant role in determining the second-order response, as only non-centrosymmetric materials, i.e. materials lacking inversion symmetry, show a second-order response. This will be elaborated later. A brief description of each of the second-order processes mentioned above is as follows.
Fig. 1. – (a) Schematic showing the SHG process. (b) Energy level diagram for the SHG process.
2.1. Second-harmonic generation (SHG). – When a monochromatic laser beam of electric field strength represented by

\tilde{E}(t) = E e^{-i\omega t} + \text{c.c.}   (4)

is incident on a material with a non-zero value of χ(2), it induces a second-order polarization given by

\tilde{P}^{(2)}(t) = \epsilon_0 \chi^{(2)} \left( E e^{-i\omega t} + \text{c.c.} \right)^2 = \epsilon_0 \chi^{(2)} \left( 2 E E^* + E^2 e^{-2i\omega t} + E^{*2} e^{2i\omega t} \right) = 2 \epsilon_0 \chi^{(2)} E E^* + \left( \epsilon_0 \chi^{(2)} E^2 e^{-2i\omega t} + \text{c.c.} \right).   (5)

The second term oscillates at frequency 2ω and is the second-harmonic contribution to the polarization, while the constant first term represents a static electric polarization developed in the material (as ∂²P^NL/∂t² vanishes for it) and is called the optical rectification term. So we see that the second-harmonic term scales quadratically with the incident electric field. It is to be noted, though, that χ(2) has an order-of-magnitude value of approximately 10⁻¹² m/V, and one might thus think that this contribution is not significant. But with proper experimental conditions, very high efficiencies can be obtained, such that nearly all the incident power is converted into the second harmonic. Figure 1b shows an energy level diagram of the SHG process. The solid line indicates the ground state while the dotted lines indicate virtual levels. This diagram illustrates that two photons of frequency ω are annihilated and one photon of frequency 2ω is created. Some results of a laboratory demonstration of SHG are shown in fig. 2.
2.1.1. Mathematical description. The mathematical treatment provided here follows those discussed in refs. [4,1] and . To develop a mathematical description of SHG, we need to derive the coupled wave equations for the incident pump field and the generated second-harmonic field within the material. We assume that the medium is lossless at the fundamental frequency ω1 as well as at the second-harmonic frequency ω2 = 2ω1, and that the input beams are collimated, monochromatic and continuous-wave.
Fig. 2. – SHG from a lithium niobate crystal. (a) Setup. (b) Screen output. (c) Trajectories of the pump and the SHG.
The total electric field within the nonlinear medium is given by

\tilde{E}(z,t) = \tilde{E}_1(z,t) + \tilde{E}_2(z,t),   (6)

where

\tilde{E}_j(z,t) = E_j(z) e^{-i\omega_j t} + \text{c.c.}, \qquad E_j(z) = A_j(z) e^{i k_j z}   (7)

with kj = njωj/c and nj = [ε(1)(ωj)]^(1/2). The amplitude of the second-harmonic wave A2(z) is taken to be a slowly varying function of z when the nonlinear source term is not too large, in the absence of which A2 is constant (as it should be for a plane-wave solution). The nonlinear polarization is

\tilde{P}^{NL}(z,t) = \tilde{P}_1(z,t) + \tilde{P}_2(z,t),   (8)

where

\tilde{P}_j(z,t) = P_j(z) e^{-i\omega_j t} + \text{c.c.}, \qquad j = 1, 2   (9)

and

P_2(z) = \epsilon_0 \chi^{(2)} E_1(z)^2 = \epsilon_0 \chi^{(2)} A_1^2 e^{2 i k_1 z}.   (10)
As each frequency component obeys the inhomogeneous wave equation (3), we can write the wave equation for the second harmonic as

\nabla^2 \tilde{E}_2 - \frac{n_2^2}{c^2} \frac{\partial^2 \tilde{E}_2}{\partial t^2} = \frac{1}{\epsilon_0 c^2} \frac{\partial^2 \tilde{P}_2}{\partial t^2}.   (11)

On expanding the first term and rewriting the equation, we get

\left[ \frac{\partial^2 A_2}{\partial z^2} + 2 i k_2 \frac{\partial A_2}{\partial z} - k_2^2 A_2 + \frac{n_2^2 \omega_2^2}{c^2} A_2 \right] e^{i(k_2 z - \omega_2 t)} = -\frac{\omega_2^2}{c^2} \chi^{(2)} A_1^2 e^{i(2 k_1 z - \omega_2 t)}.   (12)

We take the slowly varying amplitude approximation, which allows us to neglect the first term as it is much smaller than the second. Also, using k_2^2 = n_2^2 \omega_2^2 / c^2, we get

2 i k_2 \frac{\partial A_2}{\partial z} = -\frac{\omega_2^2}{c^2} \chi^{(2)} A_1^2 e^{i \Delta k z},   (13)

where Δk = 2k1 − k2 is known as the phase or wave-vector mismatch factor and is crucial in determining the efficiency of the conversion process. It accounts for the conservation of momentum in the SHG process when we consider the quantum-mechanical picture. For simplicity, we make the undepleted-pump approximation, which means that A1(z) is taken to be constant. It is a valid approximation in most cases, as at most a negligible fraction of the pump power is transferred to the generated fields. This simplifies the expression even further and we obtain

2 i k_2 \frac{d A_2}{d z} = -\frac{\omega_2^2}{c^2} \chi^{(2)} A_1^2 e^{i \Delta k z} = -\frac{4 \omega_1^2}{c^2} \chi^{(2)} A_1^2 e^{i \Delta k z}.   (14)

On integrating both sides over the length L of the medium, we obtain

A_2(L) = \frac{2 \omega_1}{n_2 c} \chi^{(2)} A_1^2 \, \frac{e^{i \Delta k L} - 1}{\Delta k}.   (15)

For the case of perfect phase-matching, Δk = 0, on taking the limit Δk → 0 in the above equation we find

A_2(L) = \frac{2 i \omega_1}{n_2 c} \chi^{(2)} A_1^2 L.   (16)

The intensity is given by I2 = 2n2ε0c|A2(L)|², where

|A_2(L)|^2 = \frac{4 \omega_1^2}{n_2^2 c^2} \left[ \chi^{(2)} \right]^2 |A_1|^4 L^2.   (17)

So the SHG intensity scales quadratically with the length of the medium or crystal. For the more general case of a nonzero Δk, we find

|A_2(L)|^2 = \frac{4 \omega_1^2}{n_2^2 c^2} \left[ \chi^{(2)} \right]^2 |A_1|^4 L^2 \, \mathrm{sinc}^2\!\left( \frac{\Delta k L}{2} \right).   (18)

Fig. 3. – Intensity of the second-harmonic wave versus wave-vector mismatch.
In this case, the intensity of the second-harmonic wave varies with the phase mismatch ΔkL as sinc²(ΔkL/2), as shown in fig. 3. The coherence length is defined as the distance at which the output goes out of phase with the pump wave and is given by

L_{\mathrm{coh}} = \frac{2}{\Delta k}.   (19)

2.2. Sum frequency generation (SFG). – Sum frequency generation is a more general situation than SHG in that the two input pump beams have different frequencies ω1 and ω2, leading to the generation of the sum frequency ω3 = ω1 + ω2. The total electric field associated with the input waves is given by

\tilde{E}(t) = E_1 e^{-i\omega_1 t} + E_2 e^{-i\omega_2 t} + \text{c.c.}   (20)

The second-order nonlinear polarization in this case is given by

\tilde{P}^{(2)}(t) = \epsilon_0 \chi^{(2)} \tilde{E}(t)^2,   (21)

which on substitution of the expression for the electric field gives

\tilde{P}^{(2)}(t) = \epsilon_0 \chi^{(2)} \left[ E_1^2 e^{-2i\omega_1 t} + E_2^2 e^{-2i\omega_2 t} + 2 E_1 E_2 e^{-i(\omega_1+\omega_2)t} + 2 E_1 E_2^* e^{-i(\omega_1-\omega_2)t} + \text{c.c.} \right] + 2 \epsilon_0 \chi^{(2)} \left[ E_1 E_1^* + E_2 E_2^* \right].   (22)

The polarization P(2)(t) can be expanded in its Fourier series and the corresponding frequency components on both sides equated to get the complex amplitudes of the different frequency components of the nonlinear polarization:

P(2\omega_1) = \epsilon_0 \chi^{(2)} E_1^2 \quad \text{(SHG)},
P(2\omega_2) = \epsilon_0 \chi^{(2)} E_2^2 \quad \text{(SHG)},
P(\omega_1 + \omega_2) = 2 \epsilon_0 \chi^{(2)} E_1 E_2 \quad \text{(SFG)},
P(\omega_1 - \omega_2) = 2 \epsilon_0 \chi^{(2)} E_1 E_2^* \quad \text{(DFG)},
P(0) = 2 \epsilon_0 \chi^{(2)} \left[ E_1 E_1^* + E_2 E_2^* \right] \quad \text{(OR)}.   (23)

As we can see from the above equations, in the most general case of mixing between two pump beams we get second-harmonic (SHG), sum-frequency (SFG), difference-frequency (DFG) and optical rectification (OR) components.
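The decomposition in eq. (23) can also be checked numerically: squaring a two-frequency field and taking its spectrum shows components only at 0, 2ω1, 2ω2, ω1 + ω2 and ω1 − ω2. The Python/NumPy sketch below uses arbitrary illustrative frequencies (not tied to any real experiment) and sets ε0χ(2) = 1.

```python
import numpy as np

# Two pump frequencies in arbitrary units, chosen so that all mixing products
# fall on exact FFT bins over the chosen time window.
f1, f2 = 10.0, 16.0
t = np.linspace(0.0, 20.0, 2 ** 14, endpoint=False)
E = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)   # real field of eq. (20)
P2 = E ** 2                                                   # second-order polarization, eps0*chi2 = 1

spectrum = np.abs(np.fft.rfft(P2)) / len(t)
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])

peaks = freqs[spectrum > 0.05 * spectrum.max()]
print(peaks)   # [ 0.  6. 20. 26. 32.] -> OR, DFG (f2-f1), SHG (2f1), SFG (f1+f2), SHG (2f2)
```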
But all these components are not present at the same time; usually one component dominates, as determined by the phase-matching condition (to be discussed later).
2.2.1. Mathematical description. The derivation of the coupled wave equations is similar to that for second-harmonic generation, except for the nonlinear source term, which in the case of two pump beams becomes

\tilde{P}_3(z,t) = P_3(z) e^{-i\omega_3 t}, \qquad P_3(z) = 2 \epsilon_0 \chi^{(2)} A_1 A_2 e^{i(k_1 + k_2) z}.   (24)

Also,

\tilde{E}_3(z,t) = A_3(z) e^{i(k_3 z - \omega_3 t)} + \text{c.c.}, \qquad \omega_3 = \omega_1 + \omega_2,   (25)

where

k_3 = \frac{n_3 \omega_3}{c}, \qquad n_3^2 = \epsilon^{(1)}(\omega_3).   (26)

Note that the complex envelope A3(z) is again a slowly varying function of z in the presence of a small nonlinear source term, in the absence of which it would have been a constant, leading to a uniform plane-wave solution. Also, we make the undepleted-pump approximation for both A1 and A2 and take them as constants in the analysis. As each frequency component of the electric field satisfies the inhomogeneous wave equation, we write the wave equation for the sum-frequency term:

\left[ \frac{\partial^2 A_3}{\partial z^2} + 2 i k_3 \frac{\partial A_3}{\partial z} - k_3^2 A_3 + \frac{n_3^2 \omega_3^2}{c^2} A_3 \right] e^{i(k_3 z - \omega_3 t)} = -\frac{2 \omega_3^2}{c^2} \chi^{(2)} A_1 A_2 e^{i[(k_1 + k_2) z - \omega_3 t]}.   (27)

Again, making the slowly varying envelope approximation and substituting the value of k3 = n3ω3/c, we obtain

\frac{d A_3}{d z} = \frac{i \chi^{(2)} \omega_3^2}{k_3 c^2} A_1 A_2 e^{i \Delta k z},   (28)

where Δk = k1 + k2 − k3 is the phase or wave-vector mismatch factor.
Fig. 4. – Schematic showing the process of difference frequency generation.
Integrating the above equation along the length L of the crystal, we obtain

A_3(L) = \frac{i \chi^{(2)} \omega_3 A_1 A_2}{n_3 c} \, \frac{e^{i \Delta k L} - 1}{i \Delta k}.   (29)

The intensity of the sum-frequency wave at the output of the crystal is given by I3(L) = 2n3ε0c|A3(L)|², where

|A_3(L)|^2 = \frac{2 \left[ \chi^{(2)} \right]^2 \omega_3^2 I_1 I_2}{n_1 n_2 n_3 \epsilon_0 c^2} \, L^2 \, \mathrm{sinc}^2\!\left( \frac{\Delta k L}{2} \right).   (30)

So the sum-frequency intensity also shows a sinc² dependence, as was observed for the second-harmonic case. Figure 3 thus also shows the variation of the sum-frequency intensity as a function of the phase-mismatch factor.
2.3. Difference Frequency Generation (DFG). – In the previous section, we saw that a difference-frequency component was one of the outcomes when two beams interact in a medium with a non-zero value of χ(2). Let us now consider such a situation in detail, as shown in fig. 4, where two waves ω3 and ω1 interact in a lossless optical medium. We use the undepleted-pump approximation for the higher-frequency input wave ω3. The coupled wave equations for the difference-frequency wave ω2 and the lower-frequency input wave ω1 are obtained by a method analogous to that for SFG and are as follows:

\frac{d A_1}{d z} = \frac{i \omega_1^2 \chi^{(2)}}{k_1 c^2} A_3 A_2^* e^{i \Delta k z}   (31)

and

\frac{d A_2}{d z} = \frac{i \omega_2^2 \chi^{(2)}}{k_2 c^2} A_3 A_1^* e^{i \Delta k z},   (32)

where

\Delta k = k_3 - k_1 - k_2.   (33)

Fig. 5. – Spatial evolution of A1 and A2 for the case of perfect phase-matching in the undepleted-pump approximation.
On solving the above set of differential equations for the case of perfect phase-matching, Δk = 0, we obtain

A_1(z) = A_1(0) \cosh \kappa z,   (34)

A_2(z) = i \left( \frac{n_1 \omega_2}{n_2 \omega_1} \right)^{1/2} \frac{A_3}{|A_3|} A_1^*(0) \sinh \kappa z,   (35)

where the coupling constant is given by

\kappa^2 = \frac{\left[ \chi^{(2)} \right]^2 \omega_1^2 \omega_2^2}{k_1 k_2 c^4} |A_3|^2.   (36)

Figure 5 shows the spatial evolution of A1 and A2 for the case of perfect phase-matching in the undepleted-pump approximation. It is observed that both A1 and A2 grow monotonically and that each field asymptotically experiences exponential growth. The input field A1 retains its initial phase, and the DFG wave A2 possesses a phase that depends on both that of the pump and that of the ω1 wave.
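A quick numerical illustration of eqs. (34)–(36): with an arbitrary, purely illustrative gain coefficient κ (for a real crystal it would follow from eq. (36) and the pump intensity), the signal and idler magnitudes grow as cosh κz and sinh κz and approach a common exponential, exp(κz)/2.

```python
import numpy as np

kappa = 2.0                     # illustrative parametric gain coefficient, 1/length
A1_0 = 1.0                      # input signal amplitude at z = 0 (arbitrary units)
z = np.linspace(0.0, 2.0, 5)

A1 = A1_0 * np.cosh(kappa * z)  # eq. (34): amplified signal
A2 = A1_0 * np.sinh(kappa * z)  # magnitude of eq. (35): generated idler

for zi, a1, a2 in zip(z, A1, A2):
    print(f"z = {zi:3.1f}   |A1| = {a1:7.2f}   |A2| = {a2:7.2f}   exp(kz)/2 = {np.exp(kappa*zi)/2:7.2f}")
```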
An intuitive explanation for this behaviour is that the presence of the ω2 wave stimulates the generation of the ω1 wave and vice versa. This process of amplification of the signal wave ω1 due to nonlinear mixing, resulting in the production of an idler, is known as "parametric amplification", as DFG is a parametric process (the initial and final quantum-mechanical states being identical).
2.4. Optical parametric oscillation (OPO). – The previous section described the process of parametric amplification by DFG. This gain can be used to produce oscillation when it is supplied with the appropriate positive feedback. This can be done by placing mirrors that are highly reflective at one or both of the signal and idler frequencies on either side of the nonlinear medium, as shown in fig. 6. If the end mirrors are reflecting at both the signal and idler frequencies, the device is called a doubly resonant oscillator; if they are reflecting at either the signal or the idler frequency, it is called a singly resonant oscillator. The OPO can be used as a source of frequency-tunable radiation for the infrared, visible and ultraviolet spectral regions and can produce either continuous-wave, nanosecond, picosecond or femtosecond pulsed outputs.
Fig. 6. – (a) Energy-level diagram for a parametric amplification process. (b) Schematic for an OPO (pump ωp = ω3, signal ωs = ω1, idler ωi = ω2, mirrors of reflectivities R1, R2 around a χ(2) medium of length L).
2.5. Parametric downconversion. – The production of simultaneous photon pairs was described as early as 1970. Also known as parametric fluorescence, parametric scattering or SPDC, it is the spontaneous splitting of the pump photon ωp into signal (ωs) and idler (ωi) photons such that ωp = ωs + ωi (energy conservation), and it is stimulated by random vacuum fluctuations. The emitted photons must also satisfy the phase-matching condition due to momentum conservation, kp = ks + ki. The emitted photon pairs are simultaneously entangled in several sets of complementary degrees of freedom. Specifically, the photon pairs can be entangled in time and energy, in position and momentum, and in orbital angular momentum. The fact that the emitted photons display entanglement has enormous implications for quantum information technologies. For example, entanglement allows one to test some of the fundamental properties of quantum mechanics such as reality and non-locality. SPDC is also used to build single-photon sources. Entanglement between successive pairs does not occur. Figure 7a shows the energy level diagram for this process and fig. 7b shows a typical experimental setup. There are two different configurations for SPDC depending on whether the signal and idler waves have the same or orthogonal polarizations; these are called type-I and type-II configurations, respectively. For type I, the emission is in the form of concentric cones of signal and idler beams such that the photons of an entangled pair lie opposite each other on the cones. In type II, on the other hand, we get two separate cones for the orthogonal polarizations, and the photons of an entangled pair are found opposite each other on the respective cones. At the points of intersection of the two cones, we get photons that are entangled in polarization.
Fig. 7. – (a) Energy level diagram for a parametric downconversion process. (b) Schematic of an experiment to perform coincidence counts for entangled photons.
Fig. 8. – (a) Angle-tuned phase-matching. (b) Dispersion curves for a negative uniaxial crystal.
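As a small worked example of the SPDC energy-conservation condition ωp = ωs + ωi of sect. 2.5, rewritten in terms of vacuum wavelengths as 1/λp = 1/λs + 1/λi (the wavelengths below are illustrative and not taken from the text):

```python
def idler_wavelength(pump_nm, signal_nm):
    # Energy conservation for SPDC: 1/lambda_p = 1/lambda_s + 1/lambda_i.
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

print(idler_wavelength(405.0, 810.0))   # 810.0 nm -- degenerate downconversion
print(idler_wavelength(405.0, 700.0))   # ~961.0 nm -- non-degenerate pair
```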
Fig. 8. – (a) Angle-tuned phase-matching. (b) Dispersion curves for a negative uniaxial crystal.

2.6. Phase-matching. – In the previous sections it was explained that the efficiency of all second-order processes depends on the crucial criterion of phase-matching. In vector form it reads ⃗k1 = ⃗k2 + ⃗k3, (37) where |⃗ki| = niωi/c. For collinearly propagating waves this reduces to the scalar relation n1ω1/c = n2ω2/c + n3ω3/c. (38) From energy conservation we have ω1 = ω2 + ω3. In a non-dispersive medium n1 = n2 = n3, and eqs. (37) and (38) are then automatically satisfied by virtue of the frequency-matching condition. In a dispersive medium, however, the refractive indices are not equal (they increase monotonically with frequency), so the frequency-matching and phase-matching conditions cannot be satisfied simultaneously and the three waves travel with different phase velocities. As a result, phase-matching cannot be achieved in an isotropic, dispersive medium.

To compensate for the dispersion, one can exploit the birefringence present in anisotropic media, that is, the dependence of the refractive index on the polarization of the waves and on their direction with respect to the principal axes of the crystal. By properly adjusting the crystal orientation and the wave polarizations, phase-matching can then be achieved. Two ways of achieving phase-matching are commonly distinguished: type I and type II. In type I both lower-frequency waves have the same polarization (the ordinary polarization, for a negative uniaxial crystal), while in type II one of them has the extraordinary polarization. Figure 8a shows how the crystal angle can be tuned to achieve phase-matching in a negative uniaxial crystal. For a uniaxial crystal there are two further possibilities, depending on whether the ordinary or the extraordinary refractive index is larger; fig. 8b shows the dispersion curves for a negative uniaxial crystal. Table I lists the phase-matching methods for all four cases.

Table I. – Phase-matching methods for uniaxial crystals.
              Positive uniaxial (ne > no)           Negative uniaxial (ne < no)
Type I:       n3^o ω3 = n1^e ω1 + n2^e ω2           n3^e ω3 = n1^o ω1 + n2^o ω2
Type II:      n3^o ω3 = n1^o ω1 + n2^e ω2           n3^e ω3 = n1^e ω1 + n2^o ω2

In cases where there is insufficient or no birefringence to compensate for the dispersion, other methods must be used to achieve phase-matching. The most important of these is quasi-phase-matching, in which a periodically poled nonlinear crystal has its optic axis reversed with a period less than or equal to twice the coherence length Lcoh given by eq. (19). Every time the output starts to run out of phase with the pump, which would cause power to flow back out of the output wave, the sign of χ(2) flips, allowing the output to grow monotonically. Figure 9b compares the quasi-phase-matched output with the perfectly phase-matched and phase-mismatched outputs.

Fig. 9. – (a) A periodically poled crystal, with arrows showing the direction of the optic axis. (b) Comparison of perfect phase-matching and quasi-phase-matching.

3. – Third-order nonlinear optical processes

The third-order contribution to the nonlinear polarization is given by ˜P(3)(t) = ϵ0χ(3)˜E(t)³, (39) where ˜E(t) is the total electric field. The polarization then has various frequency components, the simplest being the third harmonic for the case of a monochromatic input.

Fig. 10. – (a) Schematic of a THG process. (b) Energy-level diagram for third-harmonic generation.

3.1. Third-harmonic Generation (THG).
– Let us consider the case of a monochromatic beam incident on the medium, with the electric field given by ˜E(t) = Ee^{-iωt} + c.c. (40) The nonlinear polarization is then ˜P(3)(t) = ϵ0χ(3)[E³e^{-3iωt} + 3|E|²Ee^{-iωt} + c.c.]. (41) The first term, oscillating at frequency 3ω, gives the third-harmonic contribution. The energy-level diagram for the process is shown in fig. 10.

3.2. Intensity-dependent refractive index. – In eq. (41), the second term oscillates at the pump frequency ω with a coefficient that depends on the intensity of the pump. This contribution therefore leads to a refractive index that depends on the pump intensity, n = n0 + n2I. (42) It is also called the Kerr nonlinearity. There are two ways in which this nonlinear effect becomes manifest: 1) Self-Phase Modulation (SPM), when a strong pump beam modifies its own propagation, and 2) Cross-Phase Modulation (XPM), when a strong beam modifies the propagation of a weaker probe beam. Because of the degeneracy factors associated with the coefficients, the nonlinear refractive index seen by a weak beam in the presence of a strong one, ¯n2,cross, is twice that for a single beam, ¯n2,self. Using the relation between refractive index and susceptibility, n² = 1 + χeff, (43) where χeff = χ(1) + 3χ(3)|E|², we find that n2 = 3χ(3)/(4n0²ϵ0c) (44) for the case of SPM.

4. – Effect of material symmetry

Material symmetry, most importantly the presence or absence of inversion symmetry, plays a very important role in determining the value of the susceptibility. All even-order nonlinear responses vanish identically for centrosymmetric materials, that is, for materials that possess inversion symmetry. Conversely, odd-order nonlinear responses are in principle present in all materials. Figures 11a and 11b show the potential wells that confine electrons to their parent atoms for non-centrosymmetric and centrosymmetric materials, respectively.

Fig. 11. – (a) Potential well for a non-centrosymmetric medium. (b) Potential well for a centrosymmetric medium.

An intuitive explanation of this effect is obtained by examining fig. 12, which shows the response of linear, centrosymmetric, and non-centrosymmetric media to a single-frequency applied field. While the response of a linear medium has the same form as the applied field, with no distortion, the responses of both types of nonlinear media show significant distortion. For centrosymmetric media, whose potential well is shown in fig. 11b, only odd harmonics are present in the response. For non-centrosymmetric media, whose potential well is shown in fig. 11a, both odd and even harmonics are present. Hence a second-order nonlinear response is obtained only from non-centrosymmetric materials.

Fig. 12. – Response of centrosymmetric and non-centrosymmetric media to a plane-wave excitation.

5. – Nonlinear optics with focussed Gaussian beams

The preceding sections have assumed infinite plane-wave sources in the description of nonlinear effects. In actual practice we do not have infinite plane waves: the laser beam is typically Gaussian, and in this case we need to account for focussing effects, including the fact that the effective interaction length is set by the Rayleigh range of the beam. SHG using focussed Gaussian beams has been discussed in [15,16]. For a Gaussian beam with waist radius w0, the Rayleigh range is zR = πw0²/λ, (45) and the peak intensity, of the order of P/(πw0²), occurs at the waist. The peak intensity is thus inversely proportional to the length of the interaction region.
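A minimal numerical sketch of this trade-off, with an assumed wavelength and power (not values from the text), shows that tighter focussing raises the intensity at the waist but shortens the Rayleigh range, and that the product of the two is independent of the spot size:

```python
# Sketch of the focussing trade-off for a Gaussian pump: tighter focussing
# raises the intensity at the waist but shrinks the Rayleigh range, and the
# product (intensity x interaction length) is independent of the spot size.
# Wavelength, power and waist values are illustrative assumptions.
import numpy as np

lam = 1.064e-6        # assumed pump wavelength (m)
P = 1.0               # assumed pump power (W)

for w0 in (10e-6, 30e-6, 100e-6):          # assumed waist radii
    zR = np.pi * w0**2 / lam               # Rayleigh range, eq. (45)
    I0 = P / (np.pi * w0**2)               # intensity scale at the waist used in the text
    b  = 2 * zR                            # confocal parameter
    print(f"w0 = {w0*1e6:5.1f} um   zR = {zR*1e3:7.3f} mm   "
          f"I0 = {I0:9.3e} W/m^2   I0*b = {I0*b:.3e} W/m")
```

The product I0·b comes out equal to 2P/λ for every waist, which is why the conversion efficiency cannot be increased indefinitely simply by focussing harder; the optimum discussed next involves the crystal length as well.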
So for maximum efficiency, the Rayleigh range must be half the length of the medium. But ref. gives a value of L/2.84 for the confocal parameter (which is twice the Rayleigh range) for maximum efficiency of SHG. This is because we have an additional phase mismatch of Δk = 3.2/L due to the Guoy phase shift which needs to be compensated. 6. – Origin of third-order nonlinear response The nonlinear susceptibility is a characteristic of any given medium, and its value depends on the electronic and molecular structure of the material . There are different mechanisms responsible for introducing an intensity-dependent refractive index, and their relative strengths and response times are summarized in table II. Of the effects mentioned, the electronic polarizability is responsible for the generation of optical harmonics and has the fastest response. In liquids, effects due to molecular Tutorial on nonlinear optics 47 Table II. – Typical values of nonlinear refractive index (for linearly polarized light). Mechanism n2 (cm2/W) χ(3) (m2/V2) Response time (s) Electronic polarization 10−16 10−22 10−15 Molecular orientation 10−14 10−20 10−12 Electrostiction 10−14 10−20 10−9 Saturated atomic absorption 10−10 10−16 10−8 Thermal effects 10−6 10−12 10−3 Photorefractive effect (large) (large) (intensity-dependent) orientation and electrostriction dominate. Moreover, in solids with no degree of freedom for molecular orientation, electrostriction dominates. 6.1. Quantum-mechanical explanation of nonlinear optical susceptibility. – The para-metric nonlinear processes described in previous sections can be interpreted as a form of wave mixing involving energy exchange among the interacting waves of different frequen-cies. From a quantum mechanical perspective, they can be viewed as photon interaction processes involving creation of photons of some frequency and annihilation of another. This is represented in the energy-level diagrams illustrated previously. Thus, it involves electron transitions between the different energy levels which may be resonant, if they occur between real energy levels, or non-resonant, if they occur between virtual levels. Resonant transitions leads to a very large value of the susceptibility. The density-matrix formalism is the preferred means to derive expressions for the different orders of the non-linear susceptibility χ(n). A perturbation expansion is used to determine the expectation value of the induced dipole moment [9,1]. Figure 13 shows the expression and Feynman diagrams [19,1] for the each element of χ(2), which represents a three photon interaction process. 6.2. Non-resonant electronic nonlinearities. – Non-resonant nonlinearities arise due to electronic transitions involving virtual levels and are the weakest of all contributions due to their off-resonance nature . But these contributions are important as they are present in all dielectric materials. They are also extremely fast with response times of the order of 10−16 s, as the response time in this case is the time required for the atomic cloud to become distorted due to an applied optical field. We can estimate the order of magnitude of χ(3) in the far-offresonance case by considering the classical, anharmonic model for an oscillator under far-offresonance excitation. The expression obtained is χ(3) = Ne4 ϵ0m3ω06d2 . (46) 48 S. Choudhary and R. W. Boyd Fig. 13. – (a) Feynman diagrams for the electron transitions involved in a second-order process. 
(b) Expression for χ(2) in terms of the transition dipole moments of the different transitions involved. For the typical values of N = 4 × 1022 cm−3, d = 3 × 10−10 m, and ω0 = 7 × 1015 rad/s one finds that χ(3) ≃3 × 10−22 m2/V2. 6.3. Molecular orientation effect. – Molecular orientation contribution to the third or-der nonlinearity becomes important for anisotropic liquids i.e. liquids which have different polarizability along different axes. When subjected to an optical field, the molecules ex-perience a torque that twists them such that the axis with higher polarizability tends to be aligned along the direction of the applied field. An example of such a liquid is carbon disulfide (CS2), which is comprised of cigar-shaped (prolate spheroidal) molecule . The polarizability along the molecular axis, α3 is higher than along the transverse axis, α1. Due to this, the induced dipole moment has a much larger component along the molecular axis than along the transverse axis and is not parallel to the applied field as shown in fig. 14. A net torque then acts on the molecule given by ⃗ τ = ⃗ p × ⃗ E which tends to align the molecule with the applied electric field. But thermal agitation introduces a randomness in the molecular orientation. For a number density N temperature T, and neglecting the local-field effects, the first- and third-order susceptibilities for the given polarizabilities are given by χ(1) = N 1 3α3 + 2 3α1 , (47) χ(1) = 2N 45 (α3 −α1)2 kT , (48) where k is the Boltzmann constant. The response for this effect is slower as it takes some Tutorial on nonlinear optics 49 Fig. 14. – (a) The CS2 molecule. (b) Dipole moments that develop within the molecule upon application of an electric field. time for the molecules to align with the applied field, and the response time is of the order of picoseconds. 6.4. Thermal effects. – Thermal contributions to the nonlinearity occur when the incident laser power when passing through them medium is absorbed causing an increase in temperature and a change in the refractive index of the material with temperature. This change is negative for gases but may be either positive or negative for condensed matter depending on the internal structure of the material . It is a non-local optical phenomenon as the refractive index change at some point depends on the laser intensity nearby. The response time is of the order of nanoseconds and is very slow as the time taken to change the temperature of the material can be long. Mathematically, this change in refractive index with temperature can be expressed by the following relation: ˜ n = n0 + dn dT ˜ T1, (49) where (dn/dT) describes the temperature dependence of refractive index while ˜ T1 ac-counts for the change in temperature due to incident laser field and obeys the heat transport equation (ρ0C) ∂˜ T1 ∂t −κ ▽2 ˜ T1 = α˜ I(r), (50) where ρ0C denotes the heat capacity per unit volume, κ denotes the thermal conductivity and α the linear absorption coefficient of the material. There are a number of effects that can occur due to thermal contributions to the nonlinearity such as the formation of thermally induced optical grating, pattern formation etc., which have been discussed in [20-22]. 50 S. Choudhary and R. W. Boyd Fig. 15. – (a) Closed aperture scan schematic to measure real part of χ(3). (b) Open aperture scan schematic to measure imaginary part of χ(3). 7. 
– Measurement of optical nonlinearity: Z-scan Z-scan, first reported in , is a single beam technique to measure both real and imaginary components of the nonlinear refractive index coefficient. To measure the real (refraction) coefficient, a tightly focussed Gaussian beam is made incident on the sample and the transmission through the nonlinear medium (assumed to be thinner than the diffraction length of the beam) is measured at the far-field through an aperture. The setup is shown in fig. 15. To examine the effect of translation of the sample along the beam path, we consider a material with a negative value of n2. We are ignoring the losses for the moment. When the sample is far away from the focus, due to low intensity of the optical field on the sample, there is no effect on the transmitted beam as the nonlinear contribution to the refractive index n2I is very low. As the sample is moved from a negative z towards the focus, a negative lensing effect on the beam takes place prior to focus and the beam diver-gence at the aperture is reduced leading to increased transmission through the aperture. When the sample is moved beyond the focal plane towards positive z, the negative lens-ing effect causes defocussing at the aperture causing a decrease in transmission. This suggests that there is a null at the focus. The transmittance as a function of sample position for CS2 is shown in fig. 16a. The peak-valley positions are reversed if the sample has a positive value of n2 which is the case for CS2. When there are absorptive nonlinearities present, the transmittance curve shows asymmetrical peak and valley distribution. The presence of multi-photon absorption results in a larger valley while saturable absorption results in a larger peak. It is to be noted that the nonlinear refraction effect is probed by the aperture. When the aperture is removed from the far-field, the transmittance depends on absorption nonlinearities and there is no effect of nonlinear refraction. The transmittance obtained for an open Tutorial on nonlinear optics 51 Fig. 16. – (a) Closed aperture scan result for CS2 from Bahae et al. . (b) Open aperture scan result for gold-silica composite . aperture case is symmetrical with respect to the focus where the maxima (for saturable absorption) or the minima (for multi-photon absorption) occurs. Hence, the Z-scan mea-surement can not only be used to calculate the sign of nonlinear refraction, but the absorption mechanism within the sample as well. Compared to the other methods of measuring nonlinearity such as nonlinear interferometry [25, 26], degenerate four wave mixing , nearly degenerate three wave mixing , ellipse rotation and beam distortion , Z-scan is a much simpler and sensitive process. 8. – Self-action effects Self-action effects are effects in which a light beam modifies its own propagation by means of the nonlinear response of the medium. Common self-action effisre discussed briefly below. 8.1. Self-focussing. – When an intense beam of light modifies the optical properties of the medium such that it is caused to come to a focus within the medium, the phenomenon is called self-focussing of light, or catastrophic collapse [1,31]. For a positive value of n2, a beam with a varying transverse intensity profile induces refractive index variation with a maximum index at the centre of the beam that is larger than that at the periphery, creating a positive lens such that the beam comes to focus within the material. 
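The strength of this induced lens can be estimated with a simple paraxial sketch: expanding the Gaussian intensity profile to second order in the transverse coordinate turns n2I(r) into a parabolic index profile, i.e. a thin lens of focal length roughly w²/(4n2I0L) for a slab of thickness L. This is a textbook-style estimate, and the numbers below are illustrative assumptions.

```python
# Rough, paraxial estimate of the "Kerr lens" created by a Gaussian beam in a
# thin slab of Kerr medium: expanding n2*I(r) to second order in r gives a
# parabolic phase and hence an effective focal length f ~ w^2/(4 n2 I0 L).
# All numbers are illustrative assumptions.
import numpy as np

n2 = 3e-20          # assumed nonlinear index (m^2/W), of the order of fused silica
w  = 50e-6          # assumed beam radius in the sample (m)
L  = 1e-2           # assumed sample thickness (m)
P  = 1e6            # assumed peak power (W), e.g. a short pulse
I0 = 2*P/(np.pi*w**2)          # peak intensity of a Gaussian beam

f = w**2 / (4*n2*I0*L)         # effective focal length of the induced lens
print(f"peak intensity I0 = {I0:.2e} W/m^2,  Kerr-lens focal length f = {f*100:.1f} cm")
```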
This situation results when the self-focussing effect is not compensated by diffraction or other nonlinearities (like quintic nonlinearity due to χ(5)). Also, the beam power P must be greater than the critical power for self-trapping, called Pcr, so that the self-focussing effect is larger than diffraction and other defocussing effects. Chiao et al. give the following expression for Pcr assuming a circular beam of uniform intensity and radius w0: Pcr = π(0.61)2λ0 2 8n0n2 , (51) 52 S. Choudhary and R. W. Boyd where λ0 is the vacuum wavelength of the applied optical field. The value of Pcr depends not only on the input beam profile, but is also different for bulk media and waveg-uides . The distance at which the intensity becomes anomalously large is called the self-focussing length, zsf or collapse length, Lcol . The expression for the self-focussing length zsf given by Kelley is zsf = 2n0w02 λ0 1  P/Pcr −1 . (52) Note that zsf scales with power as approximately 1/P 1/2. For sufficiently high pow-ers though, this collapse distance scales with 1/P as was demonstrated for cw beams propagating in CS2 [34,35]. In the previous cases, it was assumed that the input beams have no noise. But when noise is present, there is a second collapse threshold much greater than Pcr, called PMF, where the input beam breaks up into multiple filaments for powers higher than PMF as discussed by Fibich et al. in . 8.2. Optical solitons. – An optical soliton is any optical field that does not change its shape (spatially or temporally) during propagation due to exact cancellation of nonlinear and linear focussing and defocussing effects within the medium. We can have two kinds of solitons depending on which profile, spatial or temporal, is preserved during propagation. A spatial soliton is formed due to exact cancellation of self-focussing and diffraction, while a temporal soliton is formed when there is cancellation of self-phase modulation and dispersion within the medium. We may also have a spatio-temporal soliton when all these effects balance simultaneously. We describe the spatial and temporal solitons in the following sections. 8.2.1. Self-trapping and spatial solitons. When there is an exact balance between self-focussing and diffraction, the beam of light propagates with a constant diameter and the phenomenon is called self-trapping of light . The power carried by the beam is exactly equal to the Pcr, the critical power for self-trapping. Under these conditions, the beam forms its own waveguide and propagates without spreading. The nonlinear pulse propagation for this case is given by ▽T 2A + 2ik ∂A ∂z = −2k2n2 n0 |A|2A, (53) which is also called the nonlinear Schr¨ odinger equation (NLSE). The first term on the left accounts for diffraction while the term on the right accounts for self-focussing. When A(x, y, z) varies along only one transverse dimension, say x, (or the case of a slab-shaped beam) the solution is called a spatial soliton and is given by A(x, z) = A0 sech(x/x0)eiγz, (54) Tutorial on nonlinear optics 53 Fig. 17. – Radial profile of the self-focussed beam, also called the Townes profile . where x0 is the width of the field distribution of the soliton. For a cylindrical beam, where the transverse field variation has both x and y components, there is no analytic solution to the NLSE. The NLSE in cylindrical coordinates is written as d2A(r) dr2 + 1 r dA(r) dr −A(r) + A3(r) = 0 (55) and the numerical solution, shown in fig. 17, is called the Townes profile . 
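Equation (55) is easy to solve numerically by a shooting method: one integrates outward from r = 0 with A'(0) = 0 and adjusts A(0) until the solution decays to zero instead of either crossing zero or turning back upward. A minimal sketch is given below (the accepted on-axis value is A(0) ≈ 2.206).

```python
# Shooting solution of the Townes-profile equation (55):
#   A'' + A'/r - A + A^3 = 0,  A'(0) = 0,  A(r) -> 0 as r -> infinity.
# Bisection on A(0): if the trajectory dips below zero the launch value is too
# high; if it turns back upward before reaching zero it is too low.
import numpy as np
from scipy.integrate import solve_ivp

def shoot(a0, r_max=20.0):
    def rhs(r, y):
        A, dA = y
        # regularise the 1/r term near the origin
        return [dA, A - A**3 - (dA/r if r > 1e-12 else 0.0)]
    def hit_zero(r, y): return y[0]          # A crosses zero  -> a0 too high
    def turn_up(r, y):  return y[1]          # A' crosses zero -> a0 too low
    hit_zero.terminal = turn_up.terminal = True
    hit_zero.direction, turn_up.direction = -1.0, +1.0
    sol = solve_ivp(rhs, (1e-8, r_max), [a0, 0.0],
                    events=(hit_zero, turn_up), rtol=1e-10, atol=1e-12)
    if len(sol.t_events[0]): return +1       # went negative: too high
    if len(sol.t_events[1]): return -1       # turned upward: too low
    return 0                                 # still decaying at r_max: near-critical

lo, hi = 1.5, 3.0
for _ in range(40):
    mid = 0.5*(lo + hi)
    if shoot(mid) >= 0:
        hi = mid
    else:
        lo = mid
print("Townes on-axis amplitude A(0) ~", 0.5*(lo + hi))   # ~2.206
```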
In the absence of any saturation effects or plasma defocussing, this solution is not sta-ble and is susceptible to perturbations which might cause the beam to either diffract or self-focus . However, when a beam self-focusses, the on-axis component evolves into the circularly symmetric Townes profile irrespective of the initial beam profile as discussed in and and the collapsed on axis portion carries exactly Pcr power. But for super-Gaussian beams, the beam self-focusses into a ring profile as reported in . Spatial solitons can also be viewed as stationary wave packets that are localized in space. As such, they have the unique property that their energy and momentum is conserved even when they interact with each other leading to a number of interesting effects like soliton fusion, fission and annihilation . The first spatial soliton was observed in a sodium vapor cell by Ashkin and Bjorkholm in 1974 . Later on, spatial solitons were also observed in CS2 in 1985 , in AlGaAs waveguides , and in nematic liquid crystals . 8.2.2. Temporal solitons. When short optical pulses propagate within a non-dispersive, nonlinear medium, it experiences a nonlinear phase shift due to the medium’s Kerr response . If we assume that response of the medium is instantaneous, then the nonlinear phase shift experienced by an optical pulse of instantaneous pulse intensity I(t) travelling through a medium of length L and central frequency ω0 is φNL(t) = −n2I(t)ω0L/c. (56) 54 S. Choudhary and R. W. Boyd This is known as self-phase modulation as a propagating optical pulse modifies its own phase due to the medium’s nonlinearity as it propagates. This leads to spectral broad-ening. But in most instances, we also need to take into account the dispersion within a medium. For a pulse ˜ E(t) = ˜ A(z, t)ei(k0z−ω0t) + c.c., (57) the pulse propagation equation for a dispersive and nonlinear medium is given by ∂˜ As ∂z + 1 2ik2 ∂2 ˜ As ∂τ 2 = iγ    ˜ As    2 ˜ As, (58) where ˜ As(z, τ) = ˜ A(z, t), τ = t −z vg , (59) with vg being the group velocity. The second term on the left hand side of eq. (58) takes account of group velocity dispersion while the term on the right hand side takes account of self-phase modulation. Under proper circumstances, there can be an exact cancellation of the pulse spreading due to the two effects and the pulse shape is preserved as it propagates. These pulses are called temporal optical solitons. The fundamental solution for eq. (58) is given by ˜ As(z, τ) = A0 s sech(τ/τ0)eiκz. (60) Higher-order solutions to (58) have been discussed in and . Existence of temporal solitons in optical fibre was proposed in 1973 by Hasegawa and Tappert in . Since then, many demonstrations of temporal solitons propagating over long distances have been demonstrated in [49,50]. 8.3. Small-scale filamentation. – Small-scale filamentation, also known as beam breakup, is the breakup of an intense laser beam (with powers much higher than the Pcr) into multiple filaments, due to amplification of modulational instabilities and noise present in the optical wavefront by four-wave mixing . The transverse intensity pro-duced as a result may have a random distribution and reduced spatial coherence or may have highly regular pattern as shown in fig. 18 . Each of the filaments produced are almost ideal solitons, have the cylindrically sym-metric Townes profile and carry the power Pcr . Figure 19 illustrates the amplification process of wavefront perturbations. 
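That a beam of the right shape and power indeed propagates without spreading can be seen from a (1+1)-dimensional split-step sketch of the nonlinear Schrödinger equation in its standard dimensionless form, i u_z + (1/2)u_xx + |u|²u = 0, whose fundamental solution is the sech profile of eq. (54). The reduced dimensionality and the grid parameters are illustrative choices; the full filamentation problem is of course (2+1)-dimensional.

```python
# Split-step Fourier sketch of the (1+1)-dimensional NLSE in dimensionless form
#   i u_z + (1/2) u_xx + |u|^2 u = 0,
# whose fundamental solution u = sech(x) exp(iz/2) is the spatial soliton of
# eq. (54).  With the nonlinear term switched off the same beam simply diffracts.
import numpy as np

N, L = 1024, 40.0
x = (np.arange(N) - N//2) * (L/N)
kx = 2*np.pi*np.fft.fftfreq(N, d=L/N)
dz, steps = 0.01, 500                        # propagate to z = 5

def propagate(u, nonlinear=True):
    half_disp = np.exp(-0.25j * kx**2 * dz)  # half step of diffraction
    for _ in range(steps):
        u = np.fft.ifft(half_disp * np.fft.fft(u))
        if nonlinear:
            u = u * np.exp(1j * np.abs(u)**2 * dz)   # Kerr phase step
        u = np.fft.ifft(half_disp * np.fft.fft(u))
    return u

u0 = (1/np.cosh(x)).astype(complex)
rms = lambda u: np.sqrt(np.sum(x**2*np.abs(u)**2)/np.sum(np.abs(u)**2))
print("initial width    :", rms(u0))
print("with Kerr term   :", rms(propagate(u0.copy(), True)))    # essentially unchanged
print("diffraction only :", rms(propagate(u0.copy(), False)))   # beam spreads
```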
The field within the medium is composed of a strong on-axis component and weak, side-modes with non-collinear but symmetric k-vectors. The variation of gain coefficient of these side-modes with the magnitude of their wave vectors is shown in fig. 20. The peak value of the gain coefficient occurs when the four-wave mixing process is phase-matched. Tutorial on nonlinear optics 55 Fig. 18. – (a) Schematic of the experimental setup used in . (b) Honeycomb pattern obtained in far-field. Fig. 19. – Amplification of wavefront perturbations to give multiple filaments. q/qopt Λ/γ 0 0 1 1 Fig. 20. – Gain coefficient of side-modes vs. wave vectors. Beam breakup into multiple filaments have been reported by many groups in [54-60]. A possible application reported in suggests that loss of spatial coherence can be used as a power limiter by reducing intensity at the focus (see fig. 21). 9. – Local-field effects The treatment described above for calculating susceptibilities was based on macro-scopic Maxwell equations which considers the spatial average of microscopic electric fields. But the actual atomic transitions within the material are dependent on the local field 56 S. Choudhary and R. W. Boyd Fig. 21. – (a) Small-scale filamentation in CS2. Top: near-field intensity distributions; bottom: far-field intensity distributions with increasing pulse energy from left to right. (b) Far-field diffraction angle vs. incident pulse energy showing a square-root variation of the angle. which acts on the transition dipole moments associated with the material. For condensed matter, with atomic densities of the order of 1015 atoms/cm3, the difference between the local field and the macroscopic field becomes significant and local field needs to be con-sidered . There are different models for performing local-field corrections depending on the optical medium under consideration. For a homogeneous medium, for example, we multiply the local-field correction factor L to the macroscopic field to calculate the local field. Different models applied to calculate L are: 1) the Lorentz local-field model, 2) the Onsager model, and 3) the real-cavity model. The Lorentz model and the Onsager model are applicable for homogeneous media, with Lorentz model used specifically for solids while Onsager model is used for polar liquids. The real-cavity model is used to describe composite materials . The Lorentz model is the most commonly used model and is described in the following subsection. 9.1. Lorentz local field. – The Lorentz-Lorenz law gives the following expression for the linear susceptibility : χ(1) = Nα 1 −4π 3 Nα or ϵ(1) −1 ϵ(2) + 2 = 4π 3 Nα, (61) where N is the number density of dipoles within the medium (assumed to be a rectangular lattice) and α is the polarizability for a single dipole. The local field is expressed as the sum of local-field contributions for dipoles within the assumed cavity (with radius greater than dipole separation but less than optical wavelength), and the average macroscopic polarization for dipoles outside the cavity. Hence, the local field ˜ Eloc is given (in Gaussian Tutorial on nonlinear optics 57 Fig. 22. – Examples of nanocomposite geometries that have been used to construct materials with enhanced nonlinear optical response . units) by ˜ Eloc = ˜ E + 4π 3 ˜ P. (62) Since ˜ P = χ(1) ˜ E, the expression for local field is given by ˜ Eloc = ϵ(1) + 2 3 ˜ E. (63) 9.2. Nanocomposite materials for nonlinear optics. – Local-field effects can substan-tially boost the nonlinear response. 
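As a quick numerical illustration of eq. (63), the sketch below evaluates the Lorentz correction factor L = (ϵ(1) + 2)/3 for a few assumed refractive indices, together with the factor L⁴ that multiplies the third-order response when all four interacting frequencies experience the same correction (cf. the expression quoted next).

```python
# Lorentz local-field factor of eq. (63), L = (eps + 2)/3, and the |L|^4
# enhancement of the third-order response when all four frequencies see the
# same factor.  The refractive indices are illustrative assumptions.
for n in (1.5, 2.0, 2.5):
    eps = n**2                      # eps^(1) = n^2 far from resonance
    L = (eps + 2) / 3.0             # Lorentz local-field correction factor
    print(f"n = {n:3.1f}   L = {L:5.2f}   L^4 = {L**4:6.2f}")
```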
For example, it was shown in that the expression for the third order susceptibility with local-field effects taken into account is χ(3)(ωk = ωl + ωm + ωn, ωl, ωm, ωn) = Nγ(3)(ωk)L(ωk)L(ωl)L(ωm)L(ωn), (64) where γ(3) is the hyperpolarizability leading to the generation of the sum-frequency ωk and the local-field correction factor is given by L(ωi) = [ϵ(1)(ω1) + 2]/3. Composite materials are made of two or more constituents with different susceptibil-ities, and they can alter the local field substantially depending on the choice of materials and the configuration. Some examples of composite material structures are shown in fig. 22. We can tailor these composites to exhibit the desired optical properties. In fact, the composite material can possess an enhanced nonlinearity that can even exceed those of individual materials. Especially important are nanocomposite materials; these are nanoscale mixtures of different materials in which the individual particles are much smaller than the optical wavelength, but nonetheless are large enough so that they can be characterized by their own dielectric constants. Optical properties such as n2 and χ(3) 58 S. Choudhary and R. W. Boyd Fig. 23. – (a) Layered geometry and experimental setup used in . (b) Predicted susceptibility enhancement curve. of such materials are characterized by their effective or volume-averaged values. Some of these geometries are described in the following subsections. 9.2.1. Layered composite materials. An example of a material with a layered geometry is shown in fig. 23a. It is composed of alternating layers of materials, say a and b, that have different optical properties and different thicknesses, which are assumed to be much smaller than the optical wavelength. The structural properties of each constituent are assumed to be essentially the same as for a bulk sample of such a material. The optical properties of the composite structure are dependent on the volume average of each constituent. For example, to enhance the contribution of material a to the nonlinear optical response of the composite, material b, must have a larger refractive index than material a. The enhancement of the χ(3) response occurs as a result of the non-uniform distribution of the incident electric field between constituents a and b . It was shown theoretically that to have such an enhancement, the more nonlinear material, for instance material a, must have the smaller linear refractive index. For p-polarized light incident on the layered composite, the effective permittivity ϵeff is given in terms of the volume fractions fa and fb and the permittivities of individual materials by 1 ϵeff = fa ϵa + fb ϵb . (65) Moreover, for the limiting case in which component b has a vanishingly small nonlinear response, the effective nonlinear response of the material becomes χ(3) eff=     ϵeff ϵa     2 ϵeff ϵa 2 faχa (3). (66) Tutorial on nonlinear optics 59 For s-polarized light, the effective permittivities are ϵeff= faϵa + fbϵb and χ(3) eff= faχ(3) a . (67) In , a layered geometry with alternating layers of titanium dioxide (material b) and the nonlinear optical polymer PBZT (material a) was investigated and a maximum enhance-ment of 35% of the third-order susceptibility was obtained which was experimentally measured in terms of the acquired nonlinear phase-shift by a propagating laser beam. 
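A short sketch of eqs. (65)-(67) shows how the effective third-order response of such a layered composite depends on polarization and fill fraction. The permittivities below are illustrative real-valued assumptions, not the measured parameters of the PBZT/titania system.

```python
# Effective response of the layered composite of eqs. (65)-(67) for p- and
# s-polarized light, as a function of the fill fraction f_a of the more
# nonlinear (lower-index) constituent a.  Permittivities and chi^(3) are
# illustrative assumptions.
import numpy as np

eps_a, eps_b = 2.25, 6.25          # assumed permittivities, eps_a < eps_b
chi3_a = 1.0                       # chi^(3) of constituent a, arbitrary units

for fa in np.linspace(0.1, 0.9, 5):
    fb = 1.0 - fa
    eps_p  = 1.0 / (fa/eps_a + fb/eps_b)                          # eq. (65), p polarization
    chi3_p = abs(eps_p/eps_a)**2 * (eps_p/eps_a)**2 * fa * chi3_a # eq. (66)
    chi3_s = fa * chi3_a                                          # eq. (67), s polarization
    print(f"fa = {fa:.1f}   chi3_eff(p)/chi3_a = {chi3_p:5.2f}   "
          f"chi3_eff(s)/chi3_a = {chi3_s:5.2f}")
```

For p polarization the field is concentrated in the low-index, more nonlinear constituent, which is the origin of the enhancement; for s polarization the response is simply the volume average.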
In , the third-order susceptibility representing the electro-optic response of a layered composite material made of alternating layers of barium titanate and doped polycarbon-ate was investigated for different volume fill fractions. The predicted enhancement curve vs. fill fraction of the polycarbonate is shown in fig. 23b. One sees that an enhancement of the electro-optic response by a factor as large as 3.2 can be obtained. 9.2.2. Metal-dielectric photonic crystals. Metals possess a very large and fast intrinsic nonlinear response. For example, the χ(3) value of noble metals is 106 times higher than fused silica and has a sub-picosecond response . However, it has proven difficult to access this nonlinearity due to high attentuation associated with metals. Due to this high attenuation, metals structures with a thickness larger than tens of nm are non-transmitting. Attempts to circumvent this high loss using local-field effects have been made by using colloidal metal nanoparticles , granular metal films , glasses doped with nanoparticles and metal-dielectric composites in Maxwell-Garnett and Bruggeman geometries [70,71]. It has been shown that a metal-dielctric photonic crystal (MDPC) can be highly transmissive within a certain controllable spectral range for metal thicknesses even larger than the skin depth. Such a MDPC was proposed and demonstrated as a nonlinear photonic material. It was argued [72, 73], that, since the large attenuation of light in metals is more due to re-radiation than absorption, a method akin to Bragg reflection can be employed to redirect the light in the forward direction. Figures 24a and 24b compare the electric field distribution within a bulk Cu sample of thickness 40 nm and a MDPC with alternating layers of gold and silica of thicknesses 16 nm and 98 nm respectively, having resonance at 650 nm. Due to the resonance nature of the structure, the nonlinearity was measured in terms of the fractional change in nonlinear transmission and reflection and the comparison with bulk metal values is shown in figs. 24c and d. 9.3. Counterintuitive consequence of local-field effects. – In ref. , it was demon-strated that local-field effects can be used for sign reversal of the nonlinear absorption process. A colloid of metal nanoparticles in a glassy matrix showed saturable absorption, due to local-field correction even though the metal and glass themselves showed induced absorption. For a composite material consisting of a host material h (with permittivity ϵh) and inclusions i (with permittivity ϵi) the effective permittivity of the medium as a whole can be written in terms of the fill-fraction f of the inclusions ϵ = ϵh 1 + 2ηf 1 −ηf , (68) 60 S. Choudhary and R. W. Boyd Fig. 24. – (a) Electric field distribution within bulk copper, (b) electric field distribution within the MDPC, (c) normalized transmission measured for bulk copper and the MDPC using the Z-scan method, and (d) measured fractional nonlinear change in reflection and transmission for bulk copper and for the MDPC. where η = ϵi −ϵh ϵi + 2ϵh . (69) The third-order susceptibility can be written as χ(3) = fqi 2|qi|2χi (3) + qh 2|qh|2 [(1 −f) + xf] χh (3), (70) where x = 8 5η2|η|2 + 6 5η|η|2 + 8 5η3 + 18 5 (η2 + |η|2) (71) qi and qh are the local-field factors for the host and inclusions, respectively, and are given by qi = ϵ + 2ϵh ϵi + 2ϵh , (72) qh = ϵ + 2ϵh 3ϵh . (73) Tutorial on nonlinear optics 61 Fig. 25. 
– Normalized transmission curves obtained from Z-scan measurements showing reversal of the sign of Im[χ(3)]. For small fill-fractions, ϵ ≃ϵh and qh ≃1. So the effective χ(3) becomes χ(3) = fqi 2|qi|2χ(3) i + χ(3) h . (74) Even though the sign of both contributions to χ(3) is the same, we can have cancellation of the two at surface-plasmon resonance due to the condition Re[ϵi(ωs)] = −2 Re[ϵh], (75) where ωs is the surface plasmon resonance frequency. The local-field factor for the inclusions, qi, then becomes purely imaginary since qi ≈3 Re[ϵh]/i Im[ϵ]. Thus, at the surface plasmon resonance, qi2 < 0. If χ(3) i and χ(3) h have the same sign, for a particular fill-fraction f we have sign reversal of χ(3). Physically, we have a phase-difference between the field within inclusion and the ex-ternally applied field which is essentially given by the phase of qi. This phase-difference becomes π/2 at surface plasmon resonance making qi imaginary. This phase-shift occurs due to coupling of the p-polarized component of incident light with arbitrary polarization into surface-plasmons at resonance. If χ(3) i and χ(3) h have the same sign, the sign-reversal occurs at two fill-fractions f, as can be seen from eq. (74). But only the lower fill-fraction is feasible as higher values of f lead to higher nonlinear absorption. In , a col-loid of gold in 1, 1′, 3, 3, 3′, 3′-hexamethylindotricarbocyanine iodide (HITCI) (a reverse-saturable absorber), methanol and water showed this sign-reversal in Im[χ(3)] which can be seen in fig. 25 showing open-aperture Z-scan traces. Curves 1-5 have a valley indicating reverse-saturable absorption, whereas 6-9 have a peak, showing saturable absorption. 62 S. Choudhary and R. W. Boyd 10. – Nonlinear plasmonics Reasons for using plasmonic response in the context of nonlinear optics and photonics include the following: 1) Strong local-field enhancement: Surface plasmon polaritons (SPP) and localized surface plasmons (LSP) can provide very strong local-field enhancements . 2) Ultrafast response: Plasmonic excitations can respond on the scale of femtoseconds, making ultrafast signal processing possible . 3) Plasmon resonances are very sensitive to the dielectric constant of surrounding media . This fact allows for the tailoring of the plasmonic response. 4) Sub-wavelength dimensions: At the nanoscale, plasmonic structures have very sub-wavelength dimensions and phase-matching is not important. Thus the nonlinear optical signal is emitted in all directions, irrespective of the propagation direction of the incident field and is incoherent . Limiting factors to plasmonic responses are ohmic and radiative losses, which not only reduce the propagation length of SPP but also the local-field enhancement. SHG using plasmonic structures has been achieved using different configurations. The very first example employed SHG from surface enhancement using roughened silver sur-faces where there was considerable enhancement of the SHG signal compared to a flat surface. Other methods employing surface enhancement have been reported in [80, 81]. Third-harmonic generation due to surface enhancement has also been reported in . SHG from individual nano-particles such as gold nano-spheres , nano-cones , nano-apertures and nano-cups have also been reported. Structured plasmonic surfaces which are non-centrosymmetric like arrays of split ring resonators (SRR) , arrays of L-shaped nano-antennas have also been reported to have enhanced SHG. 
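Before turning to the intrinsic response of surface plasmon polaritons, the sign-reversal mechanism of eqs. (68)-(75) can be checked with a minimal numerical sketch. All material values below are illustrative assumptions (not the measured parameters of the gold/HITCI colloid): the host permittivity is real, the inclusion permittivity satisfies the resonance condition Re[ϵi] = −2ϵh with an assumed imaginary part, and both constituents are given positive Im[χ(3)] (induced absorption).

```python
# Sign reversal of Im[chi^(3)] in a metal-colloid composite, following
# eqs. (68)-(74) in the small-fill-fraction limit.  Near the surface-plasmon
# resonance Re[eps_i] = -2 eps_h the local-field factor q_i is almost purely
# imaginary, so q_i^2 |q_i|^2 < 0 and the inclusion term enters with reversed
# sign.  All values are illustrative assumptions.
import numpy as np

eps_h = 1.85                       # assumed host permittivity (real)
eps_i = -2*eps_h + 2.0j            # inclusion at resonance, assumed Im part
chi3_i = 1.0j                      # both constituents taken as induced
chi3_h = 1.0j                      # absorbers: Im[chi^(3)] > 0 (arbitrary units)

for f in np.linspace(0.0, 0.04, 9):
    eta = (eps_i - eps_h) / (eps_i + 2*eps_h)          # eq. (69)
    eps = eps_h * (1 + 2*eta*f) / (1 - eta*f)          # eq. (68)
    q_i = (eps + 2*eps_h) / (eps_i + 2*eps_h)          # eq. (72)
    chi3 = f * q_i**2 * abs(q_i)**2 * chi3_i + chi3_h  # small-f limit, eq. (74)
    print(f"f = {f:.3f}   Im[chi3_eff] = {chi3.imag:+7.2f}")
```

With these assumed values Im[χ(3)eff] changes sign at a small fill fraction (here near f ≈ 0.02), reproducing the qualitative behaviour described above.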
The intrinsic nonlinear response of SPPs has been explored for gold films. These results showed a strong wavelength dependence of the nonlinear refraction as well as in-crease in the nonlinear absorption with larger pulse durations. This increase in nonlinear absorption was attributed to the “hot-electron” effect or “Fermi-smearing” mechanism, which is a kind of thermal effect with a sub-picosecond response. 11. – Slow and fast light The group velocity of light is the velocity of propagation of the envelope of a light pulse. It can be represented mathematically as vg = c/ng where we have introduced the group index ng = n + ω dn dω , (76) where n is the refractive index and ω is the frequency of light. The phase velocity is the ve-locity with which points of constant phase of an optical field propagate within the medium and is equal to c/n. When light propagates in a medium for which the group velocity vg is much smaller than the speed of light in vacuum, that is, for vg ≪c, the phenomenon is Tutorial on nonlinear optics 63 Fig. 26. – Dispersion curves for absorption and gain resonances. called slow light. Fast light occurs when the group velocity becomes larger than c, which is also called superluminal propagation, or when vg is negative, which is also known as backward propagation. From the expressions for the group index and group velocity, it is clear that a higher group index results in a lower group velocity, which is possible if the value of dn/dω is large and positive, which is possible in the case of normal dispersion. For fast light, dn/dω must be large and negative, which is possible for anomalous disper-sion [91,92]. Thus, resonant systems having an absorption (gain) resonance can be used to achieve slow (fast) light. To examine this argument more fully, let us consider the plots of the absorption, gain α and refractive index n vs. ω as shown in fig. 26. The motivation is distortion-free propagation of pulses through media with different group indices. At the resonance, the absorption (gain) has a maxima and due to Kramers-Kronig relations, the refractive index makes a transition from maxima (minima) to minima (maxima). This steep transition results in a large value of dn/dω and consequently in a lower or higher group velocity depending on the sign of dn/dω. For resonances in an atomic vapor, this group index can become as large as 104. But close to resonance the absorption also becomes large, and the slow (fast) is no longer easily measurable. The first experimental observation of slow light and fast light in resonant systems with negli-gible pulse distortion was by Carruthers and Bieber in 1969 . But these results were limited by the presence of strong resonant absorption. To counteract the effect of large absorption, many schemes have been employed such as electromagnetically induced trans-parency (EIT) , coherent population oscillation (CPO) [95-98], stimulated Brillouin scattering (SBS) [97-99], stimulated Raman scattering (SRS) and couple resonator pptical waveguides (CROWs) . A very important experiment using Bose-Einstein 64 S. Choudhary and R. W. Boyd condensates achieved slow light with group velocity of 17 m/s using EIT . EIT was first described theoretically by Harris et al. and is a technique in which, under the influence of a large saturating optical field, the material is rendered transparent to resonant laser light. In the experiment of Hau et al. 
, the nanokelvin temperatures of the sample caused reduced Doppler broadening making the dispersion curve very steep leading to such a low group velocity. EIT was also used later by Budker et al. in a Rb vapor cell to achieve group velocities as low as 8 m/s . Smilar technique was used later to achieve “stopped-light” . Similarly, electromagnetically induced absorption has been used to achieve superlumi-nal propagation, or fast light in with group velocity of −c/23 000. Since slow light has possible applications for tunable optical delays, optical memories, and data storage, a slow light source at room temperature is desirable. Some techniques to achieve slow light at room temperatures are described in the following subsections. 11.1. Slow light using SBS. – Slow light using SBS in single-mode optical fibres at telecommunication wavelengths has been demonstrated [97, 99]. In this case, we have counterpropagating signal waves (ω) and pump waves (ωp) within the fiber, and the maximum delay is produced when the signal frequency corresponds to Brillouin resonance frequency, i.e. ω = ωp−ΩB, where ΩB is the Brillouin frequency. Due to a lowered group velocity, one observes a delay in the pulse propagation time, which can be adjusted by varying the intensity Ip of the pump beam. The SBS process is a gain process in which the generated Stokes wave undergoes amplification by means of its coupling with the pump wave and an acoustic wave [1,47]. Mathematically, the signal intensity variation with pump and signal is expressed as dIs dz = −gIsIp where g = g0 (ΓB) ΓB + 2i(ω −ωp) . (77) Here g is the complex gain factor associated with the SBS process. The nonlinear re-fractive index n2 thus depends on the imaginary part of g from which the propagation vector, ks, and subsequently the group velocity can be calculated as vg = (dks/dω)−1. The transit time difference for a medium of length L can be subsequently calculated, as discussed in ref. . Figure 27 shows the temporal evolution of Stokes pulses for a given gain value and different pulse lengths. There are several limiting processes that limit the observed delay, such as higher-order dispersion effects for very short pulse lengths, gain saturation for very high input Stokes pulse intensities, and spontaneous Brillouin scattering for very high gain values. 11.2. Slow light by coherent population oscillations. – Coherent population oscillations are a quantum effect that lead to the creation of a spectral hole in the absorption profile of a probe beam passing through an appropriate medium. These population oscillations are a periodic modulation of the ground state populations at the beat frequency δ between the pump and probe waves. For δ ≤(1/T1), with T1 being the population relaxation time, these population oscillations have a significant magnitude. This method of introducing a Tutorial on nonlinear optics 65 Fig. 27. – Temporal evolution of Stokes pulses for (a) 63 ns duration pulse, (b) 15 ns duration pulse. spectral hole in a homogeneously broadened absorption spectrum was first predicted by Schwartz and Tan in and was demonstrated by Hillman et al. for the case of a ruby crystal pumped by an Ar ion laser . Slow light using this method of introducing a spectral hole was demonstrated in a ruby crystal where group velocities as low as 57.5 ± 0.5 m/s was observed in . 
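The chain "narrow spectral hole → steep normal dispersion → large group index" can be made quantitative with a minimal sketch. The hole is modelled as a complex Lorentzian added to the background susceptibility; its width is the 35.8 Hz (HWHM) value quoted for the ruby experiment described next, while the background index and the hole depth are illustrative assumptions.

```python
# Minimal sketch of CPO slow light: a narrow Lorentzian hole in the absorption
# produces steep normal dispersion and hence a very large group index.  The
# hole width is the 35.8 Hz HWHM value quoted in the text; the background
# index and the hole depth are illustrative assumptions.
import numpy as np

c = 3e8
lam = 514.5e-9                     # pump/probe wavelength
w0 = 2*np.pi*c/lam
n0 = 1.76                          # assumed background refractive index
gamma = 2*np.pi*35.8               # hole half-width (rad/s)
dalpha = 25.0                      # assumed hole depth in alpha (1/m)

A = n0*c*dalpha*gamma/w0           # strength of the Lorentzian hole
delta = np.linspace(-20*gamma, 20*gamma, 40001)    # detuning grid
dchi = A/(delta + 1j*gamma)        # hole contribution to chi (reduced absorption)
n = n0 + dchi.real/(2*n0)          # refractive index across the hole

dn_dw = np.gradient(n, delta)      # numerical dn/domega
ng = n + w0*dn_dw                  # group index, eq. (76)
print(f"peak group index ng ~ {ng.max():.3e}")
print(f"group velocity  vg ~ {c/ng.max():.1f} m/s")
```

With these assumed numbers the group velocity comes out at the level of tens of m/s, the same order of magnitude as the measured value.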
Here, a laser input at 514.5 nm from an Ar ion laser with pulse duration of the order of 1 ms was amplitude modulated to create frequency-shifted pump beams which were then focussed tightly within the crystal. A very narrow spectral hole of linewidth (HWHM) 35.8 Hz was observed which broadens with increased power. It is this narrow dip in the absorption profile that leads to very large values of Fig. 28. – Normalized input and output pulse intensities for different pulse durations. 66 S. Choudhary and R. W. Boyd Fig. 29. – Left: Conceptual prediction of superluminal propagation. Right: Laboratory results of Bigelow et al. (2003) . dn/dω and hence very low group velocities. Figure 28 shows the different pulse delays with increased pulse durations. Coherent population oscillations have also been used to achieve superluminal propa-gation in alexandrite due to formation on an anti-hole (increased absorption in a narrow spectral region) . The superluminal response obtained in laboratory is shown in fig. 29. 11.3. Slow and fast light in Erbium-Doped Fibre Amplifiers (EDFAs). – Slow and fast light has been successfully demonstrated using the nonlinear optical response of EDFAs. The mechanism is that of coherent population oscillations (CPOs) involving the erbium ground-state population. Because of the widespread use of EDFAs in telecom-munications, a slow-light source using EDFA has many potential important applications. Also, the use of fibre allows longer interaction lengths causing even larger delays. The width of the spectral hole is determined by the frequency of the population oscillations. Figure 30 shows the dependence of the fractional pulse advancement on both modulation frequency and on laser power. Tutorial on nonlinear optics 67 Fig. 30. – Dependence of fractional pulse delay after propagation through EDFA on the pump frequency and power. 12. – Spontaneous and stimulated light scattering Until now, we have dealt with parametric processes which involve light-by-light scat-tering. We will now discuss inelastic scattering of light by various material media. Light scattering occurs due to fluctuations and inhomogeneities in optical properties of the medium. A completely homogeneous medium cannot scatter light into directions other than the exact forward direction, as a consequence of complete destructive interference that occurs in other directions . Scattering into the forward direction is fully coher-ent and is the origin of the index of refraction . Figure 31 illustrates this concept where we see that if the density of the material is uniform, the contribution due to molecules in volume dV1 exactly cancels that due to molecules in dV2 in all other directions except forward, while for a non-uniform material density, these contributions do no exactly cancel out. Light scattering can be classified Fig. 31. – Light scattering in a material medium. 68 S. Choudhary and R. W. Boyd Fig. 32. – (a) A general light scattering experiment, (b) Spectrum of the scattered light showing source of various frequency components. as stimulated or spontaneous depending on whether or not the fluctuations responsible for the scattering are induced by the incident laser field. Let us next consider the most general case of a light scattering experiment as shown in fig. 32a. When we examine the spectrum of the scattered light, as shown in fig. 32b, we find contributions from different scattering mechanisms such as Rayleigh, Raman, Bril-louin and the distant wing of the Rayleigh line. 
The frequency components of scattered light which are lower (higher) than that of the incident field are called Stokes (anti-Stokes) . Raman scattering occurs due to interaction of light with the vibrational modes of molecules of the medium and is equivalent to scattering from optical phonons. Brillouin scattering occurs due to scattering of light from propagating density waves or sound waves and is equivalent to scattering from acoustic phonons. Rayleigh scattering on the other hand occurs due to static or non-propagating density fluctuations and is quasi-elastic in nature as it induces no frequency shift. Rayleigh-wing scattering occurs in anisotropic molecules due to fluctuations in molecular orientation and due to a very rapid reorientation of molecules, has a very broad spectrum. Table III states the typical linewidth, frequency-shifts, relaxation times and gain for the different light scattering processes. 12.1. Stimulated light scattering. – Spontaneous light scattering is a weak process and the efficiency is quite low even for condensed matter. Stimulated processes on the other hand can be highly efficient. Also, the emission from spontaneous scattering is in the form of a dipole, while that for a stimulated light scattering is in the form of a narrow cone in the forward or the backward direction . Conceptually, there are two separate configurations for studying stimulated light scattering : 1) The generator configuration: In this case, only the pump beam is applied externally Tutorial on nonlinear optics 69 Table III. – Typical values of parameters for different light scattering processes. Process Shift (cm−1) Linewidth (cm−1) Relaxation Time (s) Gain (m/MW) Raman 1000 5 10−12 5 × 10−5 Brillouin 0.1 5 × 10−3 10−9 10−4 Rayleigh 0 5 × 10−4 10−8 10−6 Rayleigh-wing 0 5 10−12 10−5 to the scattering medium, and the Stokes signal wave and phonon wave are created from noise within the medium. This process is shown in fig. 33a. For stimulated Brillouin scattering (SBS), the Stokes wave is amplified in all directions except in the exact forward direction, although it is usually observed only in the backward direction due to maximum spatial overlap with the pump in this case. Conversely, for stimulated Raman scattering (SRS), the Stokes signal is emitted in both the forward and backward directions. 2) The amplifier configuration: In this configuration, as shown in fig. 33b, both the pump and a weak Stokes seed signal are applied externally to the medium, and both the Stokes signal and the phonon waves are amplified. A strong coupling between the Stokes beam and pump occurs only when the frequency of the seed is close to the Stokes frequency of the generator case. Hellwarth in has explained the fundamental relation between spontaneous and stimulated light scattering in terms of the photon occupation numbers in different field modes. He argues that the probability per unit time PS for a photon to be emitted into Stokes mode S is given by PS = DmL(mS + 1), (78) where mL is the mean number of photons per mode in the incident laser field, mS is the number of photons in the Stokes mode and D is the proportionality constant that depends Fig. 33. – (a) Generator configuration for SBS; (b) amplifier configuration for SBS. 70 S. Choudhary and R. W. Boyd on the physical properties of the medium. 
From this assumption, one can deduce that the rate of change of the number of photons in a given Stokes mode for a wave traveling in the positive z direction with velocity c/n is given by dmS dz = 1 c/nDmL(mS + 1). (79) For the case of spontaneous emission, the occupation number in Stokes mode can be assumed to be much smaller than unity and the solution of eq. (79) becomes mS(z) = mS(0) + 1 c/nDmLz. (80) Hence, the Stokes intensity increases linearly with the length of the Raman medium. For the case of stimulated scattering, the number of photons contained in the Stokes mode can be assumed to be much larger than unity, which leads to the prediction mS(z) = mS(0)eGz where G = DmL c/n , (81) where G is the Raman gain coefficient. Thus, the Stokes intensity for a stimulated scattering case increases exponentially with z. The significance of this result is that Hell-warth was able to obtain an equation that relates the gain coefficient G of the stimulate process to the quantity D that quantifies the efficiency of the spontaneous process. For this reason, Hellwarth’s result is sometimes said to show that for any spontaneous light scattering process there is a stimulated analog. 12.1.1. Stimulated Brillouin scattering (SBS). Spontaneous Brillouin scattering was first predicted theoretically in 1918 by Mandelstam and then later independently by Brillouin in 1922. Gross provided the first experimental evidence of Brillouin scattering in crystals and liquids. Figure 34 shows the scattering of an incident laser beam of frequency ωL with a travelling pressure (or density wave) i.e. a sound wave of frequency Ω. Due to the acoustic wavefronts travelling away from the incident laser wave, the scattered light is shifted downward in frequency leading to a Stokes wave with frequency ωS = ωL −Ω. The interference of this pump wave and the Stokes wave leads to a wave of frequency ωL −ωS which is of course equal to Ω and thus the acoustic wave is reinforced. This acoustic wave further beats with the incident laser field leading to Stokes wave and so on. This situation leads to a kind of positive feedback system which under proper circumstances leads to amplification of both the Stokes wave and the acoustic wave exponentially . There are two different mechanisms for Stokes wave amplification due to the acoustic wave and the laser field: 1) Electrostriction: In the presence of a high optical intensity, materials have the tendency to become more compressed, leading to increased density. Here, the Tutorial on nonlinear optics 71 Fig. 34. – Scattering of an incident laser beam with sound wave. interference between the Stokes wave and the laser field leads to fringes of high and low light intensity which show density variation due to electrostriction and hence lead to a propagating density wave or acoustic wave. 2) Optical Absorption: In regions of high optical intensity, heat generation can cause material expansion leading to decreased density on those regions. This process also leads to the generation of an acoustic wave. Let us consider the case of an SBS generator as shown in fig. 33a. From the phonon dispersion relation ΩB = |qB|v and momentum conservation, we get the expression for Brillouin frequency as ΩB = 2v c/nω1 1 + v c/n . (82) Since nv/c is very small for most cases, we can approximate the Brillouin frequency as ΩB = 2v c/nω1. (83) For the case of an SBS amplifier, the Stokes frequency ω2 is determined by the lab-oratory settings and the acoustic wave frequency is given by Ω = ω1 −ω2. 
In a sense the Stokes frequency ω2 is arbitrary, but the acoustic wave is efficiently excited only when the Stokes seed frequency is chosen such that Ω lies within the Brillouin linewidth ΓB. If we consider the coupled-amplitude equations for the SBS amplifier case, we see that there is no phase mismatch term, indicating that SBS is a pure gain process and is automatically phase-matched. Hence, we can write the coupled-intensity equations as dI1 dz = −gI1I2, (84) dI2 dz = −gI1I2, (85) 72 S. Choudhary and R. W. Boyd Fig. 35. – Real and imaginary parts of Raman susceptibility. where g is the SBS gain factor given by g = g0 (ΓB/2)2 (ΩB −Ω)2 + (ΓB/2)2 , g0 = γ2 eω2 nvc3ρ0ΓB . (86) For a constant pump intensity, the output intensity for a medium of length L is given by I2(z) = I2(L)egI1(L−z). (87) 12.1.2. Stimulated Raman scattering (SRS). C.V. Raman discovered the spontaneous Raman scattering in 1930 . Stimulated Raman scattering occurs when the incident optical field within a medium interacts with the vibrational modes of molecules. Let us consider the simplest, classical explanation of SRS as discussed in , where each vibrational mode is described by a simple harmonic oscillator with time-varying inter-nuclear distance as ˜ q(t), resonance frequency ωv, damping constant γ and equilibrium inter-nuclear separation as q0. The equation of motion can be written as d2˜ q dt2 + 2γ d˜ q dt + ωv 2˜ q = ˜ F(t) m (88) with ˜ F(t) being the restoring force and m the reduced nuclear mass. It is assumed that the optical polarizability depends on the inter-nuclear separation ˜ q(t) according to ˜ α(t) = α0 + ∂α ∂q 0 ˜ q(t), (89) where α0 is the equilibrium polarizability. Oscillations in the molecular coordinate ˜ q(t) lead to periodic modulations in the polarizability with time which in turn leads to vari-ation in the refractive index with time as ˜ n(t) =  ˜ ϵ(t) = [1 + N ˜ α(t)]1/2 . (90) This modulation in refractive index with time forms frequency sidebands on the trans-mitted light with frequency ±ωv. These frequency sidebands then beat with the incident laser field to generate a Stokes wave with frequency ωS = ωL −ωv and modulate the Tutorial on nonlinear optics 73 intensity at the same frequency. This modulated intensity in turn coherently excites the molecule to oscillate at ωv. From the expression for the polarizability α in eq. (89), we can derive the expression for Raman susceptibility which is given by χR(ωS) = ϵ0(N/6m)(∂α/∂q)0 2 ωv2 −(ωL −ωS)2 + 2i(ωL −ωS)γ . (91) The real and imaginary parts of Raman susceptibility are shown in fig. 35. The valley of the imaginary part of susceptibility denotes the Raman resonance. ∗∗∗ The research was supported by the Canadian Excellence Research Chair (CERC) program. SC also acknowledges support through Societ a Italiana di Fisica. REFERENCES Boyd R. W., Nonlinear Optics, third edition (Academic Press, Boston, MA) 2008. Hellwarth R., Cherlow J. and Yang T., Phys. Rev. B, 11 (1975) 964. Wright J. K., Contemp. Phys., 6 (1964) 1. Bloembergen N., Nonlinear Optics (Benjamin, New York) 1964. Franken P. A., Hill A. E., Peters C. W. and Weinrich G., Phys. Rev. Lett., 7 (1961) 118. Terhune R. W., Maker P. D. and Savage C. M., Phys. Rev. Lett., 14 (1965) 681. Van Tran N., Spalter J., Hanus J., Ernest J. and Kehl D., Phys. Lett., 19 (1965) 4. Dolgaleva K., Lepeshkin N. and Boyd R. W., Frequency doubling, in Encyclopedia of Nonlinear Science, edited by Alwyn Scott (Routledge, New York) 2004. Shen Y. R., The Principles of Nonlinear Optics (Wiley, New York) 1984. 
Burnham D. C. and Weinberg D. L., Phys. Rev. Lett., 25 (1970) 84.
Harris S. E., Oshman M. K. and Byer R. L., Phys. Rev. Lett., 18 (1967) 18.
Rubin M. H., Klyshko D. N., Shih Y. H. and Sergienko A. V., Phys. Rev. A, 50 (1994) 5122.
Kwiat P. G., Mattle K., Weinfurter H., Zeilinger A., Sergienko A. V. and Shih Y., Phys. Rev. Lett., 75 (1995) 4337.
Midwinter J. E. and Warner J., Br. J. Appl. Phys., 16 (1965) 1135.
Kleinman D. A., Ashkin A. and Boyd G. D., Phys. Rev., 145 (1966) 338.
Kleinman D. A., Phys. Rev., 128 (1962) 1761.
Boyd G. D. and Kleinman D. A., J. Appl. Phys., 39 (1968) 3597.
Armstrong J. A., Bloembergen N., Ducuing J. and Pershan P. S., Phys. Rev., 127 (1962) 1918.
Bentley S. J., Boyd R. W., Butler W. E. and Melissinos A. C., Opt. Lett., 26 (2001) 14.
Bentley S. J., Boyd R. W., Butler W. E. and Melissinos A. C., Opt. Lett., 25 (2000) 16.
Martin G. and Hellwarth R. W., Appl. Phys. Lett., 34 (1979) 371.
Sheik-Bahae M., Said A. A., Wei T. H., Hagan D. J. and Van Stryland E. W., IEEE J. Quantum Electron., 26 (1990) 760.
Lepeshkin N. N., Schweinsberg A., Piredda G., Bennink R. S. and Boyd R. W., Phys. Rev. Lett., 93 (2004) 123902.
Weber M. J., Milam D. and Smith W. L., Opt. Eng., 17 (1978) 463.
Moran M. J., She C. Y. and Carman R. L., IEEE J. Quantum Electron., 11 (1975) 259.
Friberg S. R. and Smith P. W., IEEE J. Quantum Electron., 23 (1987) 2089.
Adair R., Chase L. L. and Payne S. A., J. Opt. Soc. Am. B, 4 (1987) 875.
Owyoung A., IEEE J. Quantum Electron., 9 (1973) 1064.
Williams W. E., Soileau M. J. and Van Stryland E. W., Opt. Commun., 50 (1984) 256.
Kelley P. L., Phys. Rev. Lett., 15 (1965) 1005.
Chiao R. Y., Garmire E. and Townes C. H., Phys. Rev. Lett., 13 (1964) 15.
Fibich G. and Gaeta A. L., Opt. Lett., 25 (2000) 5.
Campillo A. J., Shapiro S. L. and Surdyam B. R., Appl. Phys. Lett., 24 (1974) 178.
Surdyam B. R., IEEE J. Quantum Electron., 10 (1974) 837.
Fibich G., Eisenmann S., Ilan B., Erlich Y., Fraenkel M., Henis Z., Gaeta A. L. and Zigler A., Opt. Express, 13 (2005) 15.
Stegman G. I. and Segev M., Science, 286 (1999) 1518.
Moll K. D., Gaeta A. L. and Fibich G., Phys. Rev. Lett., 90 (2003) 20.
Fibich G. and Ilan B., J. Opt. Soc. Am. B, 17 (2000) 1749.
Gross B. and Manassah J. T., Phys. Lett. A, 169 (1992) 371.
Grow T. D., Ishaaya A. A., Vuong L. T. and Gaeta A. L., Opt. Express, 14 (2006) 12.
Bjorkholm J. E. and Ashkin A., Phys. Rev. Lett., 32 (1974) 4.
Barthelemy A., Maneuf S. and Froehly C., Opt. Commun., 55 (1985) 3.
Aitchison J. S. et al., Electron. Lett., 28 (1992) 1879.
Beeckman J., Neyts K., Hutsebaut X., Cambournac C. and Haelterman M., Opt. Express, 12 (2004) 1011.
Zakharov V. E. and Shabat A. B., JETP, 34 (1972) 63.
Agrawal G. P., Nonlinear Fiber Optics (Academic Press, Boston) 1989.
Hasegawa A. and Tappert F., Appl. Phys. Lett., 23 (1973) 142.
Mollenauer L. F., Stolen R. H. and Gordon J. P., Phys. Rev. Lett., 45 (1980) 1095.
Mollenauer L. F. et al., IEEE J. Quantum Electron., 22 (1986) 157.
Bennink R. S., Wong V., Marino A. M., Aronstein D. L., Boyd R. W., Stroud C. R. jr., Lukishova S. and Gauthier D. J., Phys. Rev. Lett., 88 (2002) 113901.
Bespalov V. I. and Talanov V. I., JETP Lett., 3 (1966) 307.
Kip D., Soljacic M., Segev M., Eugenieva E. and Christodoulides D. N., Science, 290 (2000) 495.
Bergé L., Skupin S., Lederer F., Méjean G., Yu J., Kasparian J., Salmon E., Wolf J. P., Rodriguez M., Wöste L., Bourayou R. and Sauerbrey R., Phys. Rev. Lett., 92 (2004) 225002.
Fibich G., Eisenmann S., Ilan B. and Zigler A., Opt. Lett., 29 (2004) 15.
Dubietis A., Tamošauskas G., Fibich G. and Ilan B., Opt. Lett., 29 (2004) 10.
Méchain G., Couairon A., Franco M., Prade B. and Mysyrowicz A., Phys. Rev. Lett., 93 (2004) 3.
Vidal F. and Johnston T. W., Phys. Rev. Lett., 77 (1996) 7.
Schroeder H., Liu J. and Chin S. L., Opt. Express, 12 (2004) 20.
Vuong L. T., Grow T. D., Ishaaya A. A., Gaeta A. L., 'T Hooft G. W., Eliel E. R. and Fibich G., Phys. Rev. Lett., 96 (2006) 13.
Schweinsberg A., Kuper J. and Boyd R. W., Phys. Rev. A, 84 (2011) 053837.
Dolgaleva K. and Boyd R. W., Adv. Opt. Photon., 4 (2012) 1.
Nelson R. L. and Boyd R. W., Appl. Phys. Lett., 74 (1999) 2417.
Fischer G. L., Boyd R. W., Gehr R. J., Jenekhe S. A., Osaheni J. A., Sipe J. E. and Weller-Brophy L. A., Phys. Rev. Lett., 74 (1995) 1871.
Boyd R. W. and Sipe J. E., J. Opt. Soc. Am. B, 11 (1994) 297.
Tokizaki T., Nakamura A., Kaneko S., Uchida K., Omi S., Tanji H. and Asahara Y., Appl. Phys. Lett., 65 (1994) 941.
Hache F., Ricard D., Flytzanis C. and Kreibig U., Appl. Phys. A, 47 (1988) 4.
Shalaev V. M., Nonlinear Optics of Random Media (Springer-Verlag, Berlin) 2000.
Maxwell Garnett J. C., Philos. Trans. R. Soc. Lond. A, 203 (1904) 359.
Bruggeman D. A. G., Ann. Phys., 24 (1935) 637.
Gehr R. J., Fischer G. L. and Boyd R. W., J. Opt. Soc. Am. B, 14 (1997) 2310.
Bloemer M. J. and Scalora M., Appl. Phys. Lett., 72 (1998) 1676.
Bennink R. S., Young-Kwon Yoon, Boyd R. W. and Sipe J. E., Opt. Lett., 24 (1999) 1416.
Smith D. D., Fischer G., Boyd R. W. and Gregory D. A., J. Opt. Soc. Am. B, 14 (1997) 1625.
Kauranen M. and Zayats A. V., Nat. Photon., 6 (2012) 737.
Stockman M. I., Opt. Express, 19 (2011) 22029.
Homola J., Chem. Rev., 108 (2008) 462.
Anceau C., Brasselet S., Zyss J. and Gadenne P., Opt. Lett., 28 (2003) 713.
Chen C. K., de Castro A. R. B. and Shen Y. R., Phys. Rev. Lett., 46 (1981) 145.
Smolyaninov I. I., Zayats A. V. and Davis C. C., Phys. Rev. B, 56 (1997) 9290.
Wokaun A. et al., Phys. Rev. B, 24 (1981) 849.
Kim E. M. et al., Phys. Rev. Lett., 95 (2005) 227402.
Dadap J. I., Shan J., Eisenthal K. B. and Heinz T. F., Phys. Rev. Lett., 83 (1999) 4045.
Bouhelier A., Beversluis M., Hartschuh A. and Novotny L., Phys. Rev. Lett., 90 (2003) 013903.
Nahata A., Linke R. A., Ishi T. and Ohashi K., Opt. Lett., 28 (2003) 423.
Hanke T. et al., Nano Lett., 12 (2012) 992.
Lindennet S. et al., Phys. Rev. Lett., 109 (2012) 015502.
Tuovinen H. et al., J. Nonlinear Opt. Phys. Mater., 11 (2002) 421.
De Leon I., Shi Z., Liapis A. C. and Boyd R. W., Opt. Lett., 39 (2014) 2274.
Brillouin L., Wave Propagation and Group Velocity (Academic Press, New York) 1960.
Boyd R. W. and Gauthier D. J., Slow and fast light, in Progress in Optics (Elsevier) 2002.
Chiao R. Y. and Milonni P. W., Opt. Photon. News, 13 (2002) 26.
Carruthers J. A. and Bieber T. J., Appl. Phys., 40 (1969) 426.
Hau L. V., Harris S. E., Dutton Z. and Behroozi C. H., Nature, 397 (1999) 594.
Bigelow M. S., Lepeshkin N. N. and Boyd R. W., Science, 301 (2003) 200.
Bigelow M. S., Lepeshkin N. N. and Boyd R. W., Phys. Rev. Lett., 90 (2003) 11.
Okawachi Y., Bigelow M. S., Sharping J. E., Zhu Z., Schweinsberg A., Gauthier D. J., Boyd R. W. and Gaeta A. L., Phys. Rev. Lett., 94 (2005) 153902.
Schweinsberg A., Lepeshkin N. N., Bigelow M. S., Boyd R. W. and Jarabo S., Europhys. Lett., 73 (2006) 218.
Song K. Y., Herráez M. G. and Thévenaz L., Opt. Express, 13 (2005) 82.
Okawachi Y., Foster M. A., Sharping J. E., Gaeta A. L., Xu Q. and Lipson M., Opt. Express, 14 (2006) 2317.
Yariv A., Xu Y., Lee R. K. and Scherer A., Opt. Lett., 24 (1999) 711.
Harris S. E., Field J. E. and Imamoglu A., Phys. Rev. Lett., 64 (1990) 10.
Budker D., Kimball D. F., Rochester S. M. and Yashchuk V. V., Phys. Rev. Lett., 83 (1999) 1767.
Liu C., Dutton Z., Behroozi C. H. and Hau L. V., Nature, 409 (2001) 490.
Akulshin A. M., Barreiro S. and Lezama A., Phys. Rev. Lett., 83 (1999) 4277.
Schwarz S. E. and Tan T. Y., Appl. Phys. Lett., 10 (1967) 4.
Hillman L. W., Boyd R. W., Krasinski J. and Stroud C. R. jr., Opt. Commun., 45 (1983) 6.
Fabelinskii I. L., Molecular Scattering of Light (Plenum Press, New York) 1968.
Feynman R. P., Leighton R. B. and Sands M., The Feynman Lectures on Physics, Vol. I (Addison-Wesley, Reading, MA) 1963.
Hellwarth R. W., Phys. Rev., 130 (1963) 1850.
Mandelstam L. I., Zh. Russ. Fiz. Khim. Ova., 58 (1926) 381.
Brillouin L., Ann. Phys. (Paris), 17 (1922) 88.
Gross E., Nature, 126 (1930) 400.
Raman C. V., Indian J. Phys., 2 (1930) 387.
Garmire E., Pandarese F. and Townes C. H., Phys. Rev. Lett., 11 (1963) 160.
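To make fig. 35 concrete, the following Python sketch (referenced in sect. 12.1.2 above) evaluates the real and imaginary parts of the Raman susceptibility of eq. (91) around resonance. The resonance frequency, damping constant and prefactor used here are arbitrary illustrative values, not material parameters taken from the text.

```python
import numpy as np

# Illustrative (assumed) parameters, in arbitrary angular-frequency units
omega_v = 1.0      # vibrational resonance frequency
gamma = 0.05       # damping constant
prefactor = 1.0    # stands in for eps0 * (N/6m) * (d alpha / d q)_0^2

# Detuning of the pump-Stokes difference frequency around the resonance
delta = np.linspace(0.8, 1.2, 9)               # values of (omega_L - omega_S)

# Raman susceptibility, eq. (91)
chi_R = prefactor / (omega_v**2 - delta**2 + 2j * delta * gamma)

for d, chi in zip(delta, chi_R):
    print(f"omega_L - omega_S = {d:.3f}: Re(chi_R) = {chi.real:+.3f}, Im(chi_R) = {chi.imag:+.3f}")
```

At Ω = ωL − ωS = ωv the real part passes through zero while the imaginary part reaches its negative extreme, reproducing the valley that marks the Raman resonance in fig. 35.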
Published Time: 2020-05-05T11:49:09+00:00
9 Chinese Wedding Customs in Singapore - SENICA Productions
===============
blog | May 5, 2020

9 Chinese Wedding Customs in Singapore

Feature photos from Kang Wei & Jamie's wedding.

In current times, most couples adopt a modern take on the flow of their wedding by either incorporating some traditions or none at all. Some do it for the experience, but there are also others who continue to carry on the practices based on the traditions unique to their dialect groups. Although modern adaptations of the Chinese customs are becoming increasingly common in Singapore, it's always great to backtrack and better understand why some traditions were done, and which ones continue to be part of modern Chinese weddings. In this article, we will be diving into the traditional Chinese customs practiced by the four main dialect groups in Singapore – Hokkien, Teochew, Cantonese, and Hakka – to explain the meaning behind some of the customs and how they have withstood the test of time.

The typical flow of events, and some minor differences unique to each dialect group, are as follows:

Betrothal Gift Ceremony (过大礼 Guò Dà Lǐ)
- The Four Dialect Groups
- Guo Da Li Items (Download your free checklist!)
- Hui Li Items (Download your free checklist!)
Matrimonial Bed Setup (安床 Ān Chuáng)
Hair Combing Ritual (上头 Shàng Tóu)
Gatecrashing (闯门 Chuǎng Mén) & Fetching the Bride (接新娘 Jiē Xīn Niáng / 迎亲 Yíng Qīn)
Leaving the Bride's Home (出阁 Chū Gé)
Entering the Groom's Home (过门 Guò Mén)
Tea Ceremony (敬茶 Jìng Chá)
Returning to the Bride's Home (三朝回门 Sān Cháo Huí Mén)
Wedding Banquet (喜酒 Xǐ Jiǔ)

1. Betrothal Gift Ceremony – 过大礼 Guo Da Li

Guò Dà Lǐ (过大礼)

Traditionally, Guò Dà Lǐ (过大礼) is considered to be the official wedding proposal. This is also the very first part of the Chinese customs, involving the gifting and receiving of betrothal gifts. Based on Chinese superstitions, the ceremony must be conducted on an auspicious day. Naturally, almost all the other parts of the wedding, like the Matrimonial Bed Set-up (An Chuang) and the actual wedding day itself, are done on auspicious days too. In modern times, 过大礼 happens 1-2 weeks before the actual wedding day. The groom will also be accompanied by an elder with good fortune (福气 Fú qi) from the groom's side. This elder should be someone who already has a few grandchildren, lives a decent life, is financially well to do, and is well-liked.

When it comes to matters of financing, both the bride's and groom's parents may share the cost of the wedding banquet, or in most cases these days, the couple may choose to pay for the expenses themselves instead. Rather than purchasing furniture items and bridal essentials (basin, spittoon, etc. necessary for child birthing in the past) for the couple, parents may also give the couple a sum of money to buy these items themselves.

Other Preparations

Other aspects of the wedding preparation also include the shopping for wedding jewellery by the bride's parents, and the compilation of both sets of parents' guests for the guest list. Then, the wedding invitation cards can be printed. Also not forgetting the appointment of the very important people who will ensure the success of the wedding – the bridesmaids and groomsmen.
Past Practices

In the past, Guo Da Li would happen on an arranged meeting day. Together with his family members, the groom would deliver the betrothal gifts to the bride's family to express his sincerity in marrying their daughter. During the meeting, both families would pick an auspicious day for the wedding and for the returning of gifts (回礼 Huí lǐ). But of course, this practice is extremely rare in Singapore now. As mentioned before, Guo Da Li usually happens 1-2 weeks before the actual wedding day, rather than months before. Additionally, there would also be a discussion during the meeting about whether the bride's parents would prefer to receive betrothal money, or whether the groom's family would cover the costs of the wedding banquet. If it's the latter, the groom's family would proceed to source for a banquet venue and make a reservation.

Other Chinese Superstitions

Before proceeding with the marital process, the Chinese may also consult a Feng Shui master or even the Eight Characters of Life 八字 Bā zì (a.k.a. the "Four Pillars of Destiny"). The Ba Zi study believes that your birth time and date will determine your future – career, marriage, fortune, studies, and health to be precise. Hence, even today, some couples' eight characters need to match before they can get married.

Huí Lǐ (回礼)

During Huí Lǐ (回礼), the bride's family will return half the items that were gifted during Guo Dà Lǐ, except the wine and red packet, to the groom's family. This is an expression of their hopes that both families will maintain a good relationship with one another. In addition to these items, the bride's dowry will also be included in the gift basket. The dowry items include personal items for her and the household/couple's new home, the tea set, and the bride's wedding jewellery (golden bangles or 四点金 Sì diǎn jīn). The tea set and wedding jewellery are used and presented during the tea ceremony respectively. It is also interesting to note that the bride's dowry items should not be touched by pregnant ladies or children, to avoid any clash in fortunes (撞喜 Zhuàng xǐ). This practice is still done today as it has a deep symbolic meaning of good luck and prosperity, and performing it properly is necessary to show respect and sincerity for the union between both families.

I. The Four Dialect Groups

Stemming from different parts of China, the dialect groups all share a similar flow of events, but their differences are most apparent in the items included in the betrothal gift (Guo Da Li).

Hokkien & Teochew

The Hokkiens' preference for sugarcane dates back to the Song Dynasty, when people evaded a massacre ordered by the emperor by hiding in sugarcane fields. Since then, it has been a symbol of protection to them. In a Hokkien gift basket, you can also find pig trotters and rice candies. The Teochews, on the other hand, generally prefer flaky pastries and peanut candies (my grandmother is Teochew so I really enjoy these too), and also "Old Grandma Cake" (老嫲糕 lǎo ma gāo), but only if the bride's grandmother is still alive. Out of the four groups, the Teochews seem to be the only group who continue to closely follow the tradition of gifting "4 pieces of Jewellery" (四點金 sì diǎn jīn) to the bride. This set consists of a necklace, a pendant, a pair of earrings, and a bangle, all made of gold.

Four-piece jewellery set (四點金). From Ephraim & Natasha's Wedding.

Dragon & Phoenix Earrings.
From Kang Wei & Jamie’s Wedding Although some Hokkiens do gift this jewellery set, most would only gift two golden or jade bangles – one with a dragon and the other a phoenix (龙凤镯 Lóng fèng zhuó). Golden bangles (including Dragon & Phoenix bangles). From Mitchell & Elsie’s Wedding. Hokkien & Teochew Origin Both originating from the south, the Hokkien came from Fujian province and Teochew from Chaozhou prefecture in Guangdong province. Do you know? The first Teochew dialect group to arrive in Singapore after 1819 came from the Riau Islands of Indonesia and Siam (now Thailand). Cantonese & Hakka For both Cantonese and Hakka, a matchmaker (媒婆 Méi pó), an elderly female with good fortune, was hired in the past to bring both families together to pick out the auspicious date and time for Guo Da Li. As there are not many around these days, the role is undertaken by an older female relative with a husband and children who are all alive. This could be the groom’s elderly female relative (usually a married aunt with children), or his older married female cousin if no one fits the bill. For the Cantonese, it could also be the eldest uncle’s wife from the bride’s family (大妗姐 “Dai Kam Jie” in Cantonese). It has been a long-standing tradition to give a whole roast suckling pig to the bride’s mother, and it’s still a common practice today. This is usually either on the actual wedding day or when returning to the bride’s home. Giving a roast suckling pig whole is a sign of the bride’s chastity, and if she has lost her chastity, the pig’s tails or ears will be broken off. However, a whole roast suckling pig hasn’t always been used for this significance: Traditionally, a red dot (宫纱珠 Gōng shā zhū) placed on the bride’s forearm was used to represent her chastity and disappears when she loses it. As pig (猪“zhu”) sounds like a pearl (珠 “zhu”) in 宫纱珠 “Gōng shā zhū”, the tradition has then been replaced with the gifting of a whole roast pig. Though in current times, the meaning doesn’t hold as much importance anymore, it still remains a must-have gift to mothers-in-law. Unique to the Hakka is the gifting of the Hakka Abacus Beads (算 盘子 Suàn pán zi), which is a Hakka delicacy that is included in the groom’s betrothal gift to the bride’s family. Cantonese & Hakka Origin Also originating from the southern Guangdong province, the Cantonese share some similarities with their southern counterparts (Hokkien & Teochew) and the Hakka. The Hakka (客家 Kè jiā), which means “guest families” in mandarin, is a migratory group found all over China and thus, do not belong to a specific province or city. Thought to originate from the north, most Hakka have migrated to the south and have been practicing southern wedding customs since then. II. Guo Da Li Items Usually, the items given in the gift baskets are tailored according to the requests of the bride or groom’s family for both Guo Da Li and Hui Li respectively. It has also become more common to gift items that are meaningful to the family members. Hence, the following list is a guide of what is commonly included in Guo Da Li. A Hokkien Guo Da Li for the bride and her family. _Pictured from left to right, Back row: a pair of Golden Coconuts, Traditional wedding cakes (喜饼 Xǐ bǐng), Dragon & Phoenix Candles (wax & electronic versions), cans of Pig Trotter. Front row: Longan, Red Dates, Lotus Seeds, Dowry red packets, Golden jewellery wrapped in red cloth, 2 bottles of Hard Liquor._ A Large red packet containing dowry/betrothal gift money. 
Bolded are the items that are unique to the respective dialect groups Hokkien Teochew Cantonese Hakka Download Checklist – Guo Da Li (Hokkien) Black & Red or Straw Basket (过大礼盛篮 Guò dà lǐ shèng lán)Can be rented from shops 1 Large red packet with betrothal gift money (聘金 Pìn jīn) Amount given must have the number “8”, and usually ranges between $6888 to $8888. Prepared by groom’s parents in the past, but now by the groom himself. Most of the time, the bride’s family will only take a small amount to show their appreciation, and return the rest to the groom. 1 Red packet with diaper money (洗屎喜包 Xǐ shǐ xǐ bāo) To thank bride’s parents for her upbringing. Optional depending on families. 2 pairs of Dragon & Phoenix wedding candles (龙凤烛 Lóng fèng zhú) To be used during Hair-combing ritual, and actual wedding day. Minimum 6 cans of Pig trotter (猪蹄 Zhū tí)Gift for mother-in-law 2 bottles of Hard Liquor (烈性酒 Liè xìng jiǔ) or Red/White Wine (红/白葡萄酒 Hóng/bái pú táo jiǔ) Gift for father-in-law Traditional Wedding Cakes (喜饼 Xǐ bǐng) The type of cake will vary depending on bride’s family, and will be distributed amongst her family and relatives. The bride is recommended not to eat any as it’s considered inauspicious. Rice Candy (大米糖 Dà mǐ táng)Symbolizes prosperity Black Moss (发菜 Fā cài) Symbolizes striking rich 8-12 oranges (橘子 Jú zi) Symbolizes good luck Charcoal (旺炭 Wàng tàn) Symbolize a good life after marriage for the bride. Double Happiness Stickers (喜字貼紙 Xǐ zì tiē zhǐ) Used for home decoration. 2 sets of red banners [3 metres each] (红彩两套 Hóng cǎi liǎng tào) 1 for groom and 1 for bride to hang over their main door A pair of coconuts (椰子 Yē zi) Symbolize a future with multi-generation (有爷有子 Yǒu yé yǒu zǐ) 2 cans of tea leaves (茶叶 Chá yè) & 2 packets of white sesame seeds (白芝麻 Bái zhī ma) Symbolize seeds growing into trees 1 Gift box containing the following for An Chuang For presentation and are to be brought back to the groom’s home: Jewellery for the bride: Golden/Jade Dragon & Phoenix bangles (龙凤镯 Lóng fèng zhuó) Some families still prefer Si Dian Jin (四点金). Presented during Guo Da Li but will only be given by groom’s parents during tea ceremony. Dried Longan (龙眼干 Lóng yǎn gān) Symbolise blessings for a dragon boy (早生贵子 Zǎo shēng guì zǐ) Red Dates (红枣 Hóng zǎo) Symbolise good fortune (鸿运当头 Hóng yùn dāng tóu) Lotus Seeds (莲子 Lián zǐ) Symbolise having many children (连连生子 Lián lián shēng zǐ) Dried Cantaloupe (干哈密瓜 Gān hā mì guā) Symbolise having a sweet life together (甜甜蜜蜜 Tián tián mì mì) Lily bulbs (百合 Bǎi hé) Symbolise harmonious union for years to come (百年好合 Bǎi nián hǎo hé) Walnut (核桃 Hé táo) / Peanut (花生 Huā shēng) Symbolise harmony between families (和和气气 Hé hé qì qì) Dried Tangerine (干橘子 Gān jú zi) Symbolise great awesome luck (大吉大利 Dà jí dà lì) 2 boxes of 5 element seeds (五谷丰收 Wǔ gǔ fēng shōu) Symbolise blessings for the couple to have bountiful harvests (百年好合五谷 Bǎi nián hǎo hé wǔ gǔ). These are different grains like red beans, green beans, wheat, soy beans, barley or rice. Groom’s family buys these together with the rest of the the Guo Da Li items, but they will be left at the groom’s house for the Hair Combing Ritual. These will not be included in the gift basket. 
Sharp comb (尖头梳 Jiān tóu shū) Red string (红头绳 Hóng tóu shéng) Mirror (镜子 Jìng zi) Download Checklist – Guo Da Li (Teochew) Black & Red or Straw Basket (过大礼盛篮 Guò dà lǐ shèng lán) Can be rented from shops 1 Large red packet with betrothal gift money (聘金 Pìn jīn) Amount given must have the number “8”, and usually ranges between $6888 to $8888. Prepared by groom’s parents in the past, but now by the groom himself. Most of the time, the bride’s family will only take a small amount to show their appreciation, and return the rest to the groom. 1 Red packet with diaper money (洗屎喜包 Xǐ shǐ xǐ bāo) To thank bride’s parents for her upbringing. Optional depending on families. 2 pairs of Dragon & Phoenix wedding candles (龙凤烛 Lóng fèng zhú) To be used during Hair-combing ritual, and actual wedding day. Minimum 6 cans of Pig trotter (猪蹄 Zhū tí)Gift for mother-in-law 2 bottles of Hard Liquor (烈性酒 Liè xìng jiǔ) or Red/White Wine (红/白葡萄酒 Hóng/bái pú táo jiǔ) Gift for father-in-law Traditional Wedding Cakes (喜饼 Xǐ bǐng) The type of cake will vary depending on bride’s family, and will be distributed amongst her family and relatives. The bride is recommended not to eat any as it’s considered inauspicious. Peanut & sesame candy (花生芝麻糖 Huā shēng zhī ma táng)Symbolise having offspring soon Old Grandma Cake “Lao Ma Gor” (老嫲糕 Lǎo ma gāo)Wedding pastry for Teochew. Only applicable if grandmother is still around. Banana (香蕉 Xiāng jiāo)Symbolise bringing children in (连招贵子 Lián zhāo guì zǐ) Black Moss (发菜 Fā cài) Symbolizes striking rich 8-12 oranges (橘子 Jú zi) Symbolizes good luck Charcoal (旺炭 Wàng tàn) Symbolize a good life after marriage for the bride. Double Happiness Stickers (喜字貼紙 Xǐ zì tiē zhǐ) Used for home decoration 2 sets of red banners [3 metres each] (红彩两套 Hóng cǎi liǎng tào) 1 for groom and 1 for bride to hang over their main door A pair of coconuts (椰子 Yē zi) Symbolize a future with multi-generation (有爷有子 Yǒu yé yǒu zǐ) 2 cans of tea leaves (茶叶 Chá yè) & 2 packets of white sesame seeds (白芝麻 Bái zhī ma) Symbolize seeds growing into trees 1 Gift box containing the following for An Chuang For presentation and are to be brought back to the groom’s home: Jewellery for the bride: 4 pieces of gold jewellery – 四点金 sì diǎn jīn(Ring, earrings, necklace & bangle) Presented during Guo Da Li but will only be given by groom’s parents during tea ceremony. Dried Longan (龙眼干 Lóng yǎn gān) Symbolise blessings for a dragon boy (早生贵子 Zǎo shēng guì zǐ) Red Dates (红枣 Hóng zǎo) Symbolise good fortune (鸿运当头 Hóng yùn dāng tóu) Lotus Seeds (莲子 Lián zǐ) Symbolise having many children (连连生子 Lián lián shēng zǐ) Dried Cantaloupe (干哈密瓜 Gān hā mì guā) Symbolise having a sweet life together (甜甜蜜蜜 Tián tián mì mì) Lily bulbs (百合 Bǎi hé) Symbolise harmonious union for years to come (百年好合 Bǎi nián hǎo hé) Walnut (核桃 Hé táo) / Peanut (花生 Huā shēng) Symbolise harmony between families (和和气气 Hé hé qì qì) Dried Tangerine (干橘子 Gān jú zi) Symbolise great awesome luck (大吉大利 Dà jí dà lì) 2 boxes of 5 element seeds (五谷丰收 Wǔ gǔ fēng shōu) Symbolise blessings for the couple to have bountiful harvests (百年好合五谷 Bǎi nián hǎo hé wǔ gǔ). These are different grains like red beans, green beans, wheat, soy beans, barley or rice. Groom’s family buys these together with the rest of the the Guo Da Li items, but they will be left at the groom’s house for the Hair Combing Ritual. These will not be included in the gift basket. 
Sharp comb (尖头梳 Jiān tóu shū) Red string (红头绳 Hóng tóu shéng) Mirror (镜子 Jìng zi) Download Checklist – Guo Da Li (Cantonese) Black & Red or Straw Basket (过大礼盛篮 Guò dà lǐ shèng lán) Can be rented from shops 1 Large red packet with betrothal gift money (聘金 Pìn jīn) Amount given must have the number “8”, and usually ranges between $6888 to $8888. Prepared by groom’s parents in the past, but now by the groom himself. Most of the time, the bride’s family will only take a small amount to show their appreciation, and return the rest to the groom. 1 Red packet with diaper money (洗屎喜包 Xǐ shǐ xǐ bāo) To thank bride’s parents for her upbringing. Optional depending on families. 2 pairs of Dragon & Phoenix wedding candles (龙凤烛 Lóng fèng zhú) To be used during Hair-combing ritual, and actual wedding day. Minimum 6 cans of Pig trotter (猪蹄 Zhū tí) or a Whole Roast Suckling Pig (全体燒乳猪 Quán tǐ shāo rǔ zhū)If given on actual wedding day, groom still has to prepare pig trotter cans for Guo Da Li. Roast pig symbolizes virginity. Gift for mother-in-law 2 bottles of Hard Liquor (烈性酒 Liè xìng jiǔ) or Red/White Wine (红/白葡萄酒 Hóng/bái pú táo jiǔ) Gift for father-in-law Traditional Wedding Cakes (喜饼 Xǐ bǐng) The type of cake will vary depending on bride’s family, and will be distributed amongst her family and relatives. The bride is recommended not to eat any as it’s considered inauspicious. Seafood (海鲜 Hǎi xiān)(Sea cucumber, Abalone, Scallop, Shark fin, Cuttlefish, Dried prawn, Dried oyster, Dried mushroom, Dried fish maw) Black Moss (发菜 Fā cài) Symbolizes striking rich 8-12 oranges (橘子 Jú zi) Symbolizes good luck Charcoal (旺炭 Wàng tàn) Symbolize a good life after marriage for the bride. Double Happiness Stickers (喜字貼紙 Xǐ zì tiē zhǐ) Used for home decoration 2 sets of red banners [3 metres each] (红彩两套 Hóng cǎi liǎng tào) 1 for groom and 1 for bride to hang over their main door A pair of coconuts (椰子 Yē zi) Symbolize a future with multi-generation (有爷有子 Yǒu yé yǒu zǐ) 2 cans of tea leaves (茶叶 Chá yè) & 2 packets of white sesame seeds (白芝麻 Bái zhī ma) Symbolize seeds growing into trees 1 Gift box containing the following for An Chuang For presentation and are to be brought back to the groom’s home: Jewellery for the bride: Golden Dragon & Phoenix Bangles (龙凤镯 Lóng fèng zhuó) Presented during Guo Da Li but will only be given by groom’s parents during tea ceremony. Dried Longan (龙眼干 Lóng yǎn gān) Symbolise blessings for a dragon boy (早生贵子 Zǎo shēng guì zǐ) Red Dates (红枣 Hóng zǎo) Symbolise good fortune (鸿运当头 Hóng yùn dāng tóu) Lotus Seeds (莲子 Lián zǐ) Symbolise having many children (连连生子 Lián lián shēng zǐ) Dried Cantaloupe (干哈密瓜 Gān hā mì guā) Symbolise having a sweet life together (甜甜蜜蜜 Tián tián mì mì) Lily bulbs (百合 Bǎi hé) Symbolise harmonious union for years to come (百年好合 Bǎi nián hǎo hé) Walnut (核桃 Hé táo) / Peanut (花生 Huā shēng) Symbolise harmony between families (和和气气 Hé hé qì qì) Dried Tangerine (干橘子 Gān jú zi) Symbolise great awesome luck (大吉大利 Dà jí dà lì) 2 boxes of 5 element seeds (五谷丰收 Wǔ gǔ fēng shōu) Symbolise blessings for the couple to have bountiful harvests (百年好合五谷 Bǎi nián hǎo hé wǔ gǔ). These are different grains like red beans, green beans, wheat, soy beans, barley or rice. Groom’s family buys these together with the rest of the the Guo Da Li items, but they will be left at the groom’s house for the Hair Combing Ritual. These will not be included in the gift basket. 
Sharp comb (尖头梳 Jiān tóu shū) Red string (红头绳 Hóng tóu shéng) Mirror (镜子 Jìng zi) Download Checklist – Guo Da Li (Hakka) Black & Red or Straw Basket (过大礼盛篮 Guò dà lǐ shèng lán) Can be rented from shops 1 Large red packet with betrothal gift money (聘金 Pìn jīn) Amount given must have the number “8”, and usually ranges between $6888 to $8888. Prepared by groom’s parents in the past, but now by the groom himself. Most of the time, the bride’s family will only take a small amount to show their appreciation, and return the rest to the groom. 1 Red packet with diaper money (洗屎喜包 Xǐ shǐ xǐ bāo) To thank bride’s parents for her upbringing. Optional depending on families. 2 pairs of Dragon & Phoenix wedding candles (龙凤烛 Lóng fèng zhú) To be used during Hair-combing ritual, and actual wedding day. Minimum 6 cans of Pig trotter (猪蹄 Zhū tí) or a Whole Roast Suckling Pig (全体燒乳猪 Quán tǐ shāo rǔ zhū)If given on actual wedding day, groom still has to prepare pig trotter cans for Guo Da Li Roast pig symbolizes virginity. Gift for mother-in-law 2 bottles of Hard Liquor (烈性酒 Liè xìng jiǔ) or Red/White Wine (红/白葡萄酒 Hóng/bái pú táo jiǔ) Gift for father-in-law Traditional Wedding Cakes (喜饼 Xǐ bǐng) Type of cake will vary depending on bride’s family, and will be distributed amongst her family and relatives. The bride is recommended not to eat any as it’s considered inauspicious. Seafood (海鲜 Hǎi xiān)(Sea cucumber, Abalone, Scallop, Shark fin, Cuttlefish, Dried prawn, Dried oyster, Dried mushroom, Dried fish maw) Abacus Seeds (算盘子 Suàn pán zi)A Hakka delicacy Black Moss (发菜 Fā cài) Symbolizes striking rich 8-12 oranges (橘子 Jú zi) Symbolizes good luck Charcoal (旺炭 Wàng tàn) Symbolize a good life after marriage for the bride. Double Happiness Stickers (喜字貼紙 Xǐ zì tiē zhǐ) Used for home decoration 2 sets of red banners [3 metres each] (红彩两套 Hóng cǎi liǎng tào) 1 for groom and 1 for bride to hang over their main door A pair of coconuts (椰子 Yē zi) Symbolize a future with multi-generation (有爷有子 Yǒu yé yǒu zǐ) 2 cans of tea leaves (茶叶 Chá yè) & 2 packets of white sesame seeds (白芝麻 Bái zhī ma) Symbolize seeds growing into trees 1 Gift box containing the following for An Chuang For presentation and are to be brought back to the groom’s home: Jewellery for the bride: Golden Dragon & Phoenix Bangles (龙凤镯 Lóng fèng zhuó) Presented during Guo Da Li but will only be given by groom’s parents during tea ceremony. Dried Longan (龙眼干 Lóng yǎn gān) Symbolise blessings for a dragon boy (早生贵子 Zǎo shēng guì zǐ) Red Dates (红枣 Hóng zǎo) Symbolise good fortune (鸿运当头 Hóng yùn dāng tóu) Lotus Seeds (莲子 Lián zǐ) Symbolise having many children (连连生子 Lián lián shēng zǐ) Dried Cantaloupe (干哈密瓜 Gān hā mì guā) Symbolise having a sweet life together (甜甜蜜蜜 Tián tián mì mì) Lily bulbs (百合 Bǎi hé) Symbolise harmonious union for years to come (百年好合 Bǎi nián hǎo hé) Walnut (核桃 Hé táo) / Peanut (花生 Huā shēng) Symbolise harmony between families (和和气气 Hé hé qì qì) Dried Tangerine (干橘子 Gān jú zi) Symbolise great awesome luck (大吉大利 Dà jí dà lì) 2 boxes of 5 element seeds (五谷丰收 Wǔ gǔ fēng shōu) Symbolise blessings for the couple to have bountiful harvests (百年好合五谷 Bǎi nián hǎo hé wǔ gǔ). These are different grains like red beans, green beans, wheat, soy beans, barley or rice. Groom’s family buys these together with the rest of the the Guo Da Li items, but they will be left at the groom’s house for the Hair Combing Ritual. These will not be included in the gift basket. Sharp comb (尖头梳 Jiān tóu shū) Red string (红头绳 Hóng tóu shéng) Mirror (镜子 Jìng zi) III. 
Hui Li Items As mentioned before, Hui Li gifts are also tailored to the requests of the groom’s family, and meaningful gifts may be included as well. _A Teochew Hui Li for the groom and his family. Includes a portion of Guo Da Li gifts which are returned to the groom’s family._ From left to right: _Back row: Dragon & Phoenix candles, Sewing kit, a basket of an even number of Mandarin Oranges, a Watch for the groom, Prosperity Wedding Lamps for An chuang, Spittoon & metallic Washbasin. Front row: Tea ceremony set, Dining Set (Plates & Bowls), Cakes, various Red packets_ 0 2 Left: Red packets for groom and his family members, and one red packet with a portion of Dowry money returned. Right: Watch for the groom Bolded are the items that are unique to the respective dialect groups Hokkien/Teochew Cantonese/Hakka Download Checklist – Hui Li (Hokkien/Teochew) Watch (手表 Shǒu biǎo), Cufflinks (袖扣 Xiù kòu), Belt (腰带 Yāo dài), Gold Ring (黄金戒指 Huáng jīn jiè zhǐ), or wallet (钱包 Qián bāo) with Red Packet (红包 Hóng bāo) Pants (裤子 Kù zi) or suit (西服 Xī fú) Symbolizes lifelong good fortune Bottles of orange juice or syrup (罐装橙汁 Guàn zhuāng chéng zhī) In exchange for the liquor Symbolizes good luck Fortune Cake (发糕 Fā gāo / “Huat Kueh” in Hokkien) Symbolizes prosperity Dowry Items (嫁妆 jià zhuāng) Placed in couple’s bedroom/bridal chambers during An Chuang Furniture (家具 Jiā jù) (Bed, mattress or dressing table) 1 Dowry sewing kit (针线包 Zhē xiàn bāo) / Sewing machine (缝纫机 Féng rèn jī) Used during Hair-combing ceremony Symbolize being bound together 1 Wedding Ruler (子孙尺 Zǐ sūn chǐ) Used during Hair-combing ceremony Symbolise having many children and grandchildren (得寸进尺 Dé cùn jìn chǐ) Bridal Essentials Symbolizes fertility Descendant pail set (子孙宝桶 Zǐ sūn bǎo tǒng) Baby bathtub, potty/spittoon, washbasin Mug set (家翁家婆对杯 Jiā wēng jiā pó duì bēi) Face Towel set (家翁家婆面巾 Jiā wēng jiā pó miàn jīn) Toothbrush (结婚牙刷 Jié hūn yá shuā) 2 pairs of slippers for wedding couple (夫妻同鞋 Fū qī tóng xié) Linen covers for pillows and bed sheets (床单 Chuáng dān) Tea ceremony set (孝心茶具 Xiào xīn chá jù) Dining Set (家翁家婆对碗 Jiā wēng jiā pó duì wǎn) (2 bowls, 2 pairs of spoons & chopsticks) Symbolizes having ample food and clothes (丰衣足食碗 fēng Yī zú shí wǎn) 1 pair of prosperity lamp (添丁灯 Tiān dīng dēng) 1 red umbrella (红伞 Hóng sǎn) For when the bride leaves her home (Chu Ge) Fate coins (大缘小缘 Dà yuán xiǎo yuán) For An Chuang Charcoal (旺炭 Wàng tàn) Sugarcane (甘蔗 Gān zhè)Symbolizes going through thick and thin (同甘共苦 Tóng gān gòng kǔ) Hokkien only Dowry Items (嫁妆 Jià zhuāng) Bride’s family buys these together with the rest of the Hui Li items, but they will be left at the bride’s house. These are not part of the Hui Li items. 
Lady Fan (玉女扇 Yù nǚ shàn) Used during Chu Ge Round Comb (圆头梳 Yuán tóu shū), Red string (红头绳 Hóng tóu shéng) and Mirror (镜子 Jìng zi) set Used during hair combing ceremony Download Checklist – Hui Li (Cantonese/Hakka) Watch (手表 Shǒu biǎo), Cufflinks (袖扣 Xiù kòu), Belt (腰带 Yāo dài), Gold Ring (黄金戒指 Huáng jīn jiè zhǐ), or wallet (钱包 Qián bāo) with Red Packet (红包 Hóng bāo) Pants (裤子 Kù zi) or suit (西服 Xī fú) Symbolizes lifelong good fortune Bottles of orange juice or syrup (罐装橙汁 Guàn zhuāng chéng zhī) In exchange for the liquor Symbolizes good luck Fortune Cake (发糕 Fā gāo / “Fatt Koh” in Cantonese) Symbolizes prosperity Dowry Items (嫁妆 Jià zhuāng) Placed in couple’s bedroom/bridal chambers during An Chuang Furniture (家具Jjiā jù) Bed, mattress or dressing table 1 Dowry sewing kit (针线包 Zhē xiàn bāo) / Sewing machine (缝纫机 Féng rèn jī) Used during Hair-combing ceremony Symbolize being bound together 1 Wedding Ruler (子孙尺 Zǐ sūn chǐ) Used during Hair-combing ceremony Symbolise having many children and grandchildren (得寸进尺 Dé cùn jìn chǐ) Bridal Essentials Symbolizes fertility Descendant pail set (子孙宝桶 Zǐ sūn bǎo tǒng) Baby bathtub, potty/spittoon, washbasin Mug set (家翁家婆对杯 Jiā wēng jiā pó duì bēi) Face Towel set (家翁家婆面巾 Jiā wēng jiā pó miàn jīn) Toothbrush (结婚牙刷 Jié hūn yá shuā) 2 pairs of slippers for wedding couple (夫妻同鞋 Fū qī tóng xié) Linen covers for pillows and bed sheets (床单 Chuáng dān) Tea ceremony set (孝心茶具 Xiào xīn chá jù) Dining Set (家翁家婆对碗 Jiā wēng jiā pó duì wǎn) 2 bowls, 2 pairs of spoons & chopsticks Symbolizes having ample food and clothes (丰衣足食碗 Fēng yī zú shí wǎn) 1 pair of prosperity lamp (添丁灯 Tiān dīng dēng) 1 red umbrella (红伞 Hóng sǎn) For when the bride leaves her home (Chu Ge) Fate coins (大缘小缘 Dà yuán xiǎo yuán) For An Chuang Charcoal (旺炭 Wàng tàn) Dowry Items (嫁妆 Jià zhuāng) Bride’s family buys these together with the rest of the Hui Li items, but they will be left at the bride’s house. These are not part of the Hui Li items. Lady Fan (玉女扇 Yù nǚ shàn) Used during Chu Ge Round Comb (圆头梳 Yuán tóu shū), Red string (红头绳 Hóng tóu shéng) and Mirror (镜子 Jìng zi) set Used during hair combing ceremony 2. Matrimonial Bed Set-up – 安床 An Chuang Ān chuáng (安床) is generally done on an auspicious day (around 3 days to 1 week before the wedding day), and it is an important ritual of decorating and setting up nuptial beds. As “安 Ān” in 安床 Ān chuáng, means safe or secure in mandarin, completing this ritual will bless the couple with fertility so they can have a whole and complete family, sharing a harmonious relationship with each other. Most parts of the custom also have a traditional symbolic meaning of blessing the bride with sons. For Teochew & Hokkien, this ritual is preferably done by the groom’s parents or grandparents. On the other hand, for Cantonese and Hakka, the role is undertaken by a lady of good fortune (好命婆 Hǎo mìng pó), who is blissfully married, has many children and grandchildren who are all alive, to help set up the bed. Having a good fortune lady conduct the ritual symbolizes the passing of good fortune to the couple and their future offsprings. It can also be done by the groom’s parents or a married couple with good fortune as well. Steps for An Chuang This process begins with the changing of the bed linen to a bright, auspicious colour such as red, pink, or lavender, for a brand new bed. Based on Chinese superstitions, it is important to avoid darker and chrysanthemum-related colours, as these are associated with bad omens like death and funerals respectively. 
In the same vein, sleeping alone on the new bed is considered taboo, as the act of sleeping alone symbolizes the death of one of the couple. Also, the bride shouldn't lie on the bed until the wedding day, as it is believed that this will lead to poor health. If the groom has to sleep on the bridal bed before the wedding, some Chinese cultures believe that he must be accompanied by a young boy, as this represents fertility.

Next, items from Guo Da Li and other special items are placed on the bed. The following items are placed on a big plate (from Chin Wee & Jorin's wedding):
- Even number of Oranges
- 2 Ang baos
- 1 packet of candy
- Items from the gift box: Dried longans (blessings for a dragon boy), Red dates (good fortune), Lotus seeds (having many offspring), Dried melon slice (a sweet life together for the couple), Lily bulbs (a harmonious union for years to come), Walnut/peanut (harmony between families), Dried tangerine (great awesome luck)
- Pine tree leaf
- 2 boxes of 5 element seeds – different grains like red beans, green beans, wheat, soybeans, barley or rice (bountiful harvests)

After this, the bed is moved to be slightly slanted to symbolize that it has been set up, and adjusted back after the wedding day. Some phrases will be chanted to bless the couple with a happy marriage and life together:
- 百年好合 (Blissful Marriage)
- 早生贵子 (To be blessed with offspring)
- 白头偕老 (To grow old together)
- 永浴爱河 (Forever in Love)

Chinese Feng Shui could play a role in the positioning of the furniture in the wedding chamber. For example, the bed shouldn't be facing the door but the mirror should, and both should not be facing each other. Fate coins (大缘小缘 Dà yuán xiǎo yuán) from Guo Da Li are placed at all four corners of the bed, of furniture items that have four corners – dressers and wardrobes – and of the room itself. Each antique Chinese coin is inserted top-facing into an individual red packet; one is placed at every corner of the mattress and one under each pillow.

Battery-operated wedding lamps. From Wilfred & Jing Yeu's wedding.

Wedding Lamps are switched on and placed on the bedside table to complete the ritual. In the past, oil lamps were lit because the phrase for lighting the lamps (添灯 Tiān dēng) sounds similar to adding sons (添丁 Tiān dīng). In the present day, electric lamps powered by batteries or directly from the socket are used, and they have to be left lit throughout the night until after the wedding banquet or until the 3rd day after the wedding. During the tea ceremony on the wedding day, the couple can eat the sweets prepared on the tray, and young boys will be encouraged to jump and roll around the bridal bed to bless the couple with fertility – provided their Chinese Zodiacs do not clash with the bride's or groom's. At the end of the ritual, red packets are given out to everyone who helped to set up the bed, especially the good fortune lady and the children who rolled and jumped on the bed.

3. Hair Combing Ritual – 上头 Shang Tou

The Hair Combing Ritual (上头 Shàng Tóu) remains important in Chinese culture as it symbolizes the coming of age of the bride and groom. Some Chinese parents even consider their child an adult only after marriage. This custom is done at the bride's and groom's respective homes, usually on the night before the wedding day or at dawn on the actual wedding day. The rituals must be conducted separately, as the couple are not allowed to see each other before the wedding.
Traditionally, it is done at an auspicious time, but nowadays a good gauge of when to begin would be around 11 pm or at midnight, with the groom beginning 15 minutes or 1 hour ahead of the bride.

Steps for Shang Tou

First, the table (or vanity) is set with the following items:

Shang Tou Items. Image credits to Carousell.

| Bride | Groom |
| --- | --- |
| 1 Rounded Comb | 1 Pointed Comb |
| 1 piece of Red String (tied to the bride's hair at the end of the ceremony) | 1 piece of Red String (placed in the groom's pajama pocket at the end of the ceremony) |
| 1 Mirror | |
| 1 new set of Pyjamas and Slippers | |
| 1 plate of Lotus Seeds, Red Dates & Dried Longans | |

Note: The groom's family will purchase his Shang Tou items together with the Guo Da Li, and the bride will purchase hers with the Hui Li.

The following are placed at the altar table (2 sets – one for the bride, one for the groom):
- 1 pair of Dragon & Phoenix Candles (龙凤烛 Lóng fèng zhú)
- 3 Joss Sticks
- 3 bowls of cooked glutinous rice balls (汤圆 Tāng yuán) (6 to 9 pieces)
- 2 Eggs with Mee Sua (instead of Tang Yuan for Teochews)

After the tables are set, the couple would then bathe with pomelo or pomegranate leaf-infused water and don a new set of pajamas and slippers. The ritual begins when the candles are lit, and the bride and groom are seated next to a window where the moon is visible so that the Lunar God of Fate (月佬 Yuè lǎo) can watch over them. A good fortune lady or man, or the parents of the bride and groom, will then comb through their hair 4 times, reciting the following with each stroke:

一梳梳到尾 (Yī shū shū dào wěi) – "May your marriage last a lifetime"
二梳百年好合 (Èr shū bǎi nián hǎo hé) – "May you be blessed with a happy and harmonious marriage until old age"
三梳子孙满堂 (Sān shū zǐ sūn mǎn táng) – "May you be blessed with an abundance of children and grandchildren"
四梳白发齐眉 (Sì shū bái fà qí méi) – "May you be blessed with longevity"

A red string is then tied to the bride's hair and placed in the groom's pocket respectively, signifying the end of the ceremony. This red string is also known as the string of fate, through which Yue Lao "ties" the couple and binds their fate together. After this, the couple will be served a bowl of Tang yuan to symbolize a long-lasting marriage through the good times and bad.

A pair of wax Dragon & Phoenix wedding candles (龙凤烛 Lóng fèng zhú). From Ben & Celine's wedding.

Teochew & Hokkien

The door must be locked, and no one should enter while the ceremony is ongoing. During the ritual, the groom faces the inside of the room, while the bride faces the ancestors at the altar.

Cantonese & Hakka

During the ceremony, the groom faces the wall, with his back against the door, while the bride does the opposite (faces the door, back against the wall). Additionally, silk pyjamas are preferred.

Stay curious and read on: Wedding Photography in Singapore through the ages – a time capsule gallery of traditional wedding customs in Singapore.

4. Gatecrashing – 闯门 Chuang Men & Fetching the Bride – 接新娘 Jie Xin Niang

Gatecrashing (闯门 Chuǎng mén) is probably one of the most memorable and exciting parts of the wedding process, and it is an expression of the bride's family's reluctance to simply marry their precious daughter off. Hence, the groom has to jump through all these hurdles and overcome these challenges to prove his sincerity in marrying her. At the groom's place, he will prepare a bridal bouquet for the bride, and the gifts from Guo Da Li (scroll down for the list for the respective dialect groups).
At the same time, the bride gets her makeup done, puts on her wedding gown, and her parents will place the veil over her face. From Ivan & Amanda’s Wedding. The bridesmaids (姊妹Zǐ mèi or 姐妹 Jiě mèi) will have to arrive at the bride’s place early to prepare activities or tasks that the groom has to complete before he’s able to see his bride. From Mitchell & Elsie’s Wedding And when the groom arrives, he must wait in his car for a younger male member of the bride’s family to open his car door. When the car door is opened, the groom will give him an Ang Bao and in turn, the groom will receive two mandarin oranges for good luck. The mandarin oranges received are to be left in the car. From Mitchell & Elsie’s Wedding With the help of his groomsmen (兄弟 Xiōng dì), the groom will participate in gatecrashing activities prepared by the bridesmaids. These challenges usually involve dancing and singing (or even stunts!) and reciting his vows to his bride. Claron singing to Shu Han’s family members and her through a video call. From Claron & Shu Han’s wedding. Wen Jun and his groomsmen forming a human pyramid. From Wen Jun & Phoebe’s wedding. Sean reading out his vows to Li Rong at her door. From Sean & Li Rong’s wedding. The most important challenge is the tasting of four requisite flavours – sour, sweet, bitter, and spicy (酸甜苦辣 Suān tián kǔ là). Parallel to the phases of the newlywed’s relationship, this challenge is a measure of how much the groom can endure so it should not be skipped to ensure a smooth-sailing marriage. 酸甜苦辣 Suān tián kǔ là – Jamie’s bridesmaids prepared lemon slices, heart-shaped watermelon slices, dark chocolates and chili padi for the Flavour-tasting Challenge. From Kang Wei & Jamie’s Wedding. Other challenges we have seen bridesmaids prepare also include the quizzing of the groom and his entourage on how well they know the bride, like guessing which lipstick mark belongs to her! Mitchell guessing which lipstick stain belongs to Elsie. From Mitchell & Elsie’s wedding. At the door, the groom will negotiate with the bridesmaids for their Door Opening Ang Pao (开门 红包 Kāi mén hóng bāo) to compensate for all their effort, and will only be allowed through if they are satisfied with the amount given. After the groom successfully enters the bride’s home, he will present the bride with her bridal bouquet, lift her veil, and kiss her. After these, the couple and their entourage leave for the groom’s home. Here’s a video of how one of our couple’s gatecrash went! Teochew & Hokkien: Traditionally, prayers will be involved during this process. The groom will pray to his ancestors at his place before putting on his suit and setting off with his groomsmen, usually in even numbers, to fetch the bride. Similarly, the bridesmaids will also arrive earlier at the bride’s place, also in even numbers, to prepare for the gatecrash. If the couple decides to follow an auspicious timing, the activities should be scheduled accordingly as well. For traditional families, the brides will be required to have breakfast with her whole family to bid her farewell. After lifting the bride’s veil and presenting the bouquet to her, the couple will pray to heaven, earth, and their ancestors at the altar, and bow to the bride’s parents to thank them, before leaving for the groom’s place. 
Items to Prepare As mentioned above, the groom has to prepare a bridal bouquet and gifts for the bride’s family: Placed on a big red tray: 1 Ang Bao Even number of cans of Pig Trotters Dried Lily Bulb Lotus Seed Dried Longan 18 Mandarin Oranges 1 bottle of rice wine 2 bottles of wine More Information Some traditional Teochew families may find the act of haggling for red packet money to be rude and will object to it. So the bride’s father will lead the bride out to the living area, and directly to the groom instead. Cantonese & Hakka: In the past, instead of a bouquet of flowers, the groom had to give a big red wedding ball (花球 Huā qiú) to the bride. It’s also customary for the bridesmaids to hide the bride’s shoes for the groom to find, and put them on for her before taking her away. This symbolizes the couple walking through their marriage together for a long time. Similar to Teochew and Hokkien, when the couple is ready, they will pray at the altar to the heaven, earth and ancestors, and bow to the bride’s parents to thank them before leaving for the groom’s place. Items to Prepare The gift items prepared by the groom are slightly different in contrast to Teochew and Hokkien. Groom has to prepare a bridal bouquet and the following gifts for the bride’s family: Placed on a big red tray: 1 Ang Bao 1 Whole Roasted Pig 2 bags of Peanuts 2 Chicken 2 Lettuce 2 Spring Onion 2 Celery 18 Mandarin Oranges 1 bottle of rice wine 2 bottles of wine From Jun Hao & Lai Cheng’s Wedding. 5. Leaving the Bride’s Home – 出 阁 Chu Ge Usually, before the couple leaves for the groom’s home, the bride’s family will prepare Mee Sua (vermicelli) with hard-boiled eggs which symbolizes longevity. From Ephraim & Natasha’s wedding. As the couple leaves the bride’s home, the bride is sheltered under a red umbrella in open areas to ward off any negative elements. The person holding the umbrella differs depending on the bride’s dialect group. For Teochew & Hokkien, it’s usually a male elder like the bride’s father. For Cantonese & Hakka, besides the bride’s father, it could also be the matchmaker or bridesmaid. From Chun Long & Adeline’s wedding. As the bride leaves in the wedding vehicle, she will throw a red foldable fan out of the car window, leaving behind her past, bad habits and negative aspects to start a new chapter. As the car drives off, her family members will pick the fan up. From Chun Long & Adeline’s wedding. It is also taboo for the brides to look back at their home on the way to the groom’s place as this signifies a failed marriage. Traditionally, only single bridesmaids will accompany the couple to the groom’s place, while the rest who are not single including the groomsmen, will not follow along. More Information On the way to the bridal car, the matchmaker, bridesmaid, or family members may throw red beans or rice for good luck. From Derrick & Elyssa’s Wedding. After the bride leaves her home, her mother may also pour water out of their house door, which symbolizes the Chinese saying – “spilled water cannot be recovered” (泼出去的水,不能回收 Pō chū qù de shuǐ, bù néng huí shōu). Cantonese & Hakka In the past, the matchmaker will carry the bride on her back while the bridesmaid or helper holds the umbrella out for them on the way to the bridal sedan. Additionally, brides would also cry to express their gratitude to her parents for raising her. 6. 
Entering the Groom’s Home – 过门 Guo Men [Applies to all dialect groups] When the couple arrives at the groom’s home, it’s customary for his family to “hide” from the couple to avoid seeing them enter the house, and only appear after the couple has entered the bridal chamber. This is to prevent any future disputes between the bride and her new family members. Before heading to the bridal room, the couple has to pray once more to heaven, earth, and ancestors. Inside the room, they will then either be served a “sweet soup” which usually contains longans, red dates, lotus seed, hard-boiled egg and/or glutinous rice ball (汤圆 Tāng yuán) – ingredients cooked together to symbolize a blissful marriage. Sweet Soup consisting of longans, red dates, and tang yuan. From Kang Wei & Jamie’s wedding. The bride and groom then change into their Traditional Chinese Wedding clothes – Qún guà (群褂) or Guà (褂) (aka Kua) but now, some couples will proceed with the Tea Ceremony without changing into the Gua. These outfits are usually rented in Singapore, but can also be bought. The Traditional Chinese Wedding Dress – 群褂 Qún guà Originating from the Ming dynasty as the royal wedding dress for the females so you can expect the details of these two-piece sets to be exquisite. Usually made of red fabric embellished with gold and silver embroidery, this set takes from 3 to 8 months for a master tailor to embroider. The denser the embroidery, and the lesser the traces of red fabric, the more expensive they are and the longer they will take to make. They are also embroidered with various symbols to represent auspicious things: Dragon & Phoenix: Represents a happy and successful marriage, with the dragon (masculine) symbolizing auspicious power, complemented by the phoenix (feminine) which symbolizes prosperity and happiness. Pomegranate: Symbolizes fertility as it is a fruit that’s filled with seeds Peony/Lotus Flower: Symbols of spring and summer, representing beauty, prosperity, and fertility. Other animals (bats, goldfish, butterflies or birds): Represent a good pairing, wealth, and luck. Shu Han’s Kua is embroidered with Dragon & Phoenix, and Peony flowers while Claron’s is embroidered birds. From Claron & Shu Han’s Wedding. Jamie’s Kua is also embroidered with Dragon & Phoenix, and Peony flowers. From Kang Wei & Jamie’s Wedding. Stephanie’s Kua features dragons, phoenixes, bats, birds, and peony flowers. Her matching pair of red shoes are embroidered with peony flowers, and a dragon and phoenix on each side. From Sam & Stephanie’s wedding. These dresses are also tailored to be loose-fitting to represent bountiful years of marriage ahead, as the bride is expected to put on more weight each year. 7. Tea Ceremony – 敬茶 Jing Cha [Applies to all dialect groups] From Ephraim & Natasha’s wedding. A counterpart to Western solemnization, the Tea Ceremony is when the bride and groom meet each other’s families – not just their immediate family. Dressed in Guà (or Kua), the couple will first serve tea to the groom’s family. The sweet tea, symbolizing a harmonious relationship between the newly-wed and their respective families, is brewed from red dates, and longans or lotus seeds with the tea set from Guo Da Li. Red dates and lotus seeds tea (莲子红枣茶 Lián zǐ hóng zǎo chá) symbolizes blessing the couple with early childbirth and lots of offsprings, while the red dates and longans tea (龙眼红枣茶 Lóng yǎn hóng zǎo chá) represents wishes for the couple to have male children as the longans represent “dragon”. 
After the tea ceremony, a young boy is usually asked to jump and roll around on the couple’s bed to bless them with many children. This is called 压床 Yā chuáng or 翻床 Fān chuáng, which literally means “pressing” and “rolling over” the bed. As mentioned in the An Chuang section before, the boy’s Chinese Zodiac should not clash with the bride or groom’s. Tea Ceremony Guideline There are also other rules governing this ceremony: The bride has to be seated to the right of the groom (For elders, the males will be seated to the left of the female elders.) The couple may or may not kneel as they serve tea to the elders The teacup should be served to and received from the elders with two hands, and not with one hand only. The teacup should not be filled to the brim – only be 2/3 full Lotus seeds should not be halved as this signifies separation. The couple’s parents are served first, followed by relatives starting from the eldest. Male elders are served first. Formal titles should be used to address the relatives. An extra cup of tea will be served if a living member of the elder couple is not present, and the other will drink on their behalf. However, an extra cup will not be poured for a deceased spouse. After drinking the tea, tea ceremony gifts (red packets or jewellery) for the bride and groom will be presented on a serving plate. Some relatives will put the jewellery gifts on them as well. However, unmarried older siblings of the couple are not expected to gift the couple gifts after drinking the tea. If there are younger cousins or siblings around, they will serve tea to the couple instead and will be given red packets or gifts. Between each tea-serving, a female relative or matchmaker can also help with rinsing the cup and pouring the tea. Claron’s parents presenting gifts to the couple after tea has been served to them. FromClaron & Shu Han’s wedding. Relatives wearing jewellery gift on Adeline. From Chun Long & Adeline’s wedding. Child receiving a red packet from Chun Long after serving tea to the couple. From Chun Long & Adeline’s wedding. Chinese Superstitions Based on Chinese superstitions, anyone born in the year of the Tiger is not recommended to enter the bridal room and the room where the Tea Ceremony is held. Some couples or their family who strongly believe in the superstition may even choose not to invite anyone born in the year of the Tiger altogether, including children. This is because while Tiger babies are regarded as courageous, they can also be deemed aggressive or over-sensitive, and that might bring harm to family members. 8. Returning to the Bride’s Home – 三朝回门 San Chao Hui Men From Kang Wei & Jamie’s wedding. Traditionally, the bride returns home for a visit three days after the wedding. But now, it is common for the newlywedded couple to return to the bride’s family home on the same day to serve tea. Usually, the bride will change into another outfit, like the Gua or a simpler dress which signifies that three days have passed. Gifts to prepare Teochew Roast pig or pork Even number of mandarin oranges Peanut Candy Sesame Candy Hokkien Roast pig or pork Even number of mandarin oranges Traditional Hokkien Candies – Rice candy, peanut candy, popped rice and sesame rolls, bean paste pastries (豆沙 饼 Dòu shā bǐng) Cantonese & Hakka Roast Pig Even number of mandarin oranges After the tea ceremony, the roast pig will be divided into three sections – head, middle, and tail. 
The middle section will be kept for the bride’s family, and remaining sections will be wrapped in red paper or cloth, and returned to the groom’s family, symbolizing a perfect union (有 头有尾 Yǒu tóu yǒu wěi). The mandarin oranges will also be exchanged at the bride’s house for the couple to bring them back to the groom’s family. More Information In the olden days, sugarcanes and a pair of live hen and rooster were given to the couple when they return to the groom’s place after the ceremony. The sugarcane stalks were given to wish the couple a happy and sweet marriage (甜甜蜜蜜 Tián tián mì mì), and the pair of “Route-leading” Chickens(带路鸡 Dài lù jī) symbolize a blissful pairing and were to be left under the nuptial bed to predict the gender of the couple’s child. If the rooster comes out first, it’s believed that the couple is due for a boy. 9. Wedding Banquet – 喜酒 Xi Jiu [Applies to all dialect groups] Lastly, the most extravagant part of the wedding is the wedding banquet which lasts for two or more hours. Venue Back then in our parents’ days, wedding banquets were held at restaurants. Now, it has become increasingly common to hold one at a hotel due to its convenience and reputation.In recent years, we have also observed the shift in trend back to holding one at restaurants like Peony Jade. We’ve also seen couples holding celebrating their wedding at alternative venues such as CHIJMES, or even on a yacht for a more intimate celebration. Order of Events When the guests arrive, they will be invited to sign their names and well-wishes in the guest book at the reception table and present their red packets. At the reception, the guests will also be given their assigned seats. Once the couple is ready, they may also mingle around with the guests at the cocktail reception or snap some fun photos with them at the photobooth! Wen Jun & Phoebe snapping photos at our photobooth! From Wen Jun & Phoebe’s wedding. Chinese banquets usually feature emcee(s) who will entertain the wedding guests and ensure that the banquet runs smoothly and on schedule. As they announce the grand march-in of the couple, the wedding celebration begins. It is also common for the couple to proceed with the cake cutting ceremony after their first march-in. From David & Delia’s wedding. Depending on the time the bride requires to prepare for her next look, the couple may need to leave shortly after the cake cutting ceremony to get ready for the second march-in. Due to this short time-frame, it is also very common for the wedding couple to only have a few bites during the wedding banquet. Hence, we have also observed a trend of couples preferring to stick to only one march-in. If that’s the case, they may choose to do away with the cake-cutting ceremony or ask the emcee to re-invite them up on stage for the champagne-popping ceremony. With the cost of a table averaging between $1.3k (luncheon) to $1.8k (dinner), it is only understandable that modern couples would like to experience dining at the wedding banquet themselves. Before the couple’s entrance, the wedding highlights (be it photo or video) will be screened. Here’s a couple who got creative with theirs! After their second march-in (during the fourth course), the couple would proceed up on stage for the Champagne-popping ceremony,or more locally known as “Yam Seng“. From Ben & Fionne’s wedding. 
When family and relatives are up on stage, the toast begins and there’s usually a total of three toasts, with each ending with a very long and loud “Yam Seng!”: 百年好合Bǎi nián hǎo hé Wishes the couple a blissful marriage 2. 永浴 爱 河Yǒng yù ài hé For the couple to have an everlasting love 3. 早生 贵 子Zǎo shēng guì zǐ Wishes the couple to have an early childbirth At the end of it all, the couple may also give a thank you speech. From Ben & Fionne’s wedding. During the final few courses of the banquet, most of the time, the couple will move around the ballroom from table to table to take photographs with their guests, starting with their families, relatives then friends. _Customized table photo print-outs. From Kenneth & Josephine’s wedding._ When dessert is served, it is a signal to the guests that they may leave as the banquet comes to an end. More often, wedding emcees will also make the announcement that the wedding banquet has ended and thank the guests for their attendance on behalf of the wedding couple. At the end of the celebration, the bride and groom and their families will line up outside the ballroom and anticipate guests to thank them for their attendance. Chinese Banquet Courses Throughout the banquet, guests will usually be treated to an 8 to 9-course meal. Number 8 (八 Bā) because it sounds like “good luck” (发 Fā) in Mandarin and 9 (九 Jiǔ) as it sounds like “long life” (久 Jiǔ). Most if not of the following meals are served, with each holding different significance: First course – “Dragon Phoenix Plates” or Cold Appetisers Cold appetizers are usually served first and they will usually include lobster, chicken feet, jellyfish, abalone, sliced pork and beef, seaweed, and bean curd. The Phoenix (the Yin) symbolizes luck, beauty, and femininity, while the Dragon (the Yang) symbolizes strength, creativity, and masculinity. Hence, this course represents the union between the bride and groom. Lobster and chicken feet are commonly served during this course as lobster literally translates into “dragon shrimp”, and chicken feet into “phoenix feet” in mandarin. In Singapore, we can also see baby octopus and Ngoh Hiang served as an appetizer too. Second course – Soup Traditionally, this is when soup with rare and expensive ingredients will be served, like Shark Fin Soup with crab meat or Eight Treasures Soup. However, since shark finning is illegal, restaurants and hotels will usually offer alternative soup options like fish maw soup with chicken and crab, or seafood soup instead. Third Course – Seafood and/or Vegetables Seafood is essential for any Chinese wedding banquets. One of the seafood dishes you may expect is Scallops served with vegetables. Scallops (扇贝 Shàn bèi or 带子 Dài zi) symbolize fertility, and the second translation could mean “raising children”. Hence this dish blesses the couple with many offsprings in the future. Large succulent prawns (虾 Xiā) could also be served during this course. Pronounced “Haa” in Cantonese, it sounds a lot like laughter, and hence, eating them is meant to create more laughter and happiness between friends and family. Even though they are not traditional, honey prawns are said to be better as you get “sweet laughter”. Fourth Course –Whole Bird or Poultry Chicken, Duck, Quails, or pigeons are symbols of peace and unity. Serving them whole represents a lasting union between the couple. Similar to the roast suckling pig, the red colour of a roasted duck signifies good luck. 
After this course has been served, there will be a break for the couple’ssecond march-in. Fifth Course – Fish Symbolically, Fish represents fertility and abundance for the couple (due to its similar pronunciation to the word “abundance” in mandarin), and are usually served with the head and tail intact to symbolize wholeness and that their marriage will come to successful completion. In Chinese culture, a marriage will not be entirely fulfilled until the couple bears children. Hence, this dish is a must-have in all Chinese wedding banquets. Sixth Course – Premium Seafood Abalone (鲍鱼 Bào yú)is a homophone to “abundance” (保 Bǎo meaning “assurance” and 裕 yù meaning “abundance”). And because it’s also an expensive dish, serving it symbolizes a blessing of abundance for the couple throughout the years. Sea cucumber(海參 Hǎi shēn or “Hoi Sam”)in Cantonese, sounds like “good heart” (“Hou Sam“). Hence this dish serves as a reminder to be good-hearted in the face of conflicts. Due to their meanings, serving this dish shows a sign of respect and “face” (面子 Miàn zi) from both families, towards their guests. These also usually have a smooth texture and are from the ocean, so they symbolize a “smooth sailing” relationship between the couple and their families. In Cantonese, “sea cucumber” also sounds similar to “good heart”. Additionally, as abalone is an expensive dish, it also symbolizes yearly abundance for the couple. Some hotels may also serve Roast Suckling Pig during this course. However, due to its price-tag, it’s more likely to be reserved for more intimate settings. As mentioned in Guo Da Li, roast suckling pig is a symbol of the bride’s virtue and purity. Its rosy-red colour is also a symbol of good luck in Chinese culture. Seventh & Eighth Courses – Noodles & Rice As Noodles come in long strands, they symbolize longevity, and a blessing to the couple for a long and happy marriage and life together. Additionally, when using chopsticks to eat noodles, you should also be careful not to point your chopsticks at others or sticking them upright in the bowl as these are practices done to honour the dead. A large dish of rice symbolizes a plentiful supply of food throughout the couple’s life. Dessert Course Last but not least, desserts are served as not just a sign of the end of a meal, but also a sweet marriage. While there are many dessert options to choose from, Red bean and lotus seed soup is most commonly served. While red is the colour of happiness, beans and seeds are elements of fertility and growth. This dessert is very sweet as you wouldn’t want the relationship to turn sour. Note: The order of courses may differ between different hotels or restaurants. Additionally, dishes served during wedding dinners are likely to contain more luxurious ingredients than wedding luncheon. Post-Wedding Banquet At the end of the wedding banquet, the newlywed’s close friends and relatives would visit them at their bridal chamber or hotel room. They would play tricks on them as an expression of their well wishes. Traditionally, after sharing a glass of wine, the newlyweds would each cut off a lock of their hair symbolizing that they’re now of one heart. Three, seven or nine days after the wedding is when the bride returns to her maiden home to visit her family, but some couples who have already paid their visit earlier, may also choose to go on a honeymoon instead. 
Editor’s note: Having documented so many weddings, we have come across a wide array of customs and traditions practiced by clients from various dialect groups, and our clients often ask about the meaning and purpose behind some of the practices their seniors have requested them to carry out. We hope this article provides a deeper understanding of Chinese wedding customs and traditions. We have also observed that the Chinese wedding traditions that remain widely practiced are those that are genuinely meaningful to the couple, rather than those based on superstition. Over the years, we have also seen a shift from elaborate Chinese wedding celebrations to more westernized, simple, and intimate celebrations that focus on the couple and their love story. This is only natural, as Singaporeans have long been exposed to Western culture, and our focus on the nuclear family has probably contributed to the preference for simpler, more intimate wedding celebrations.
Contemporary Mathematics Rational Curves on Grassmannians: systems theory, reality, and transversality Frank Sottile Abstract. We discuss a particular problem of enumerating rational curves on a Grassmannian from several perspectives, including systems theory, real enumerative geometry, and symbolic computation. We also present a new transversality result, showing this problem is enumerative in all characteristics. While it is well-known how this enumerative problem arose in mathemati-cal physics and also its importance to the development of quantum cohomology, it is less known how it arose independently in mathematical systems theory. We describe this second story. Published in ”Advances in Algebraic Geometry Motivated by Physics”, ed. E. Previato, Contemp. Math., 276, AMS, 2001. pp. 9–42. 1. Introduction The enumerative geometry of curves on algebraic varieties has become an im-portant theme in algebraic geometry. One motivation for this development was to understand (and prove) remarkable formulae from theoretical physics, including a formula of Vafa and Intriligator [30, 62] involving curves on Grassmannians. The story of this direct influence of theoretical physics on algebraic geometry is well-known. What is less known is how the problem of enumerating rational curves on Grassmannians also arose and was solved in systems theory. Our purpose is to make that story better known and to relate the different solutions, from physics and from systems theory, of this enumerative problem. We also discuss some related work in algebraic geometry inspired by systems theory. We describe this enumerative problem. Let m, p ≥1 be integers. The space Mq m,p of maps M of degree q from P1 to Grass(p, Cm+p), the Grassmannian of p-planes in Cm+p, has dimension N := q(m + p) + mp [14, 58]. Given a point 2000 Mathematics Subject Classification. 13P10, 14-02, 14M15, 14N15, 14N35, 14P99, 65H20, 93B55. Key words and phrases. quantum cohomology, Schubert Calculus, pole placement, dynamic compensation, real enumerative geometry, Gr¨ obner basis. Research supported in part by NSF grant DMS-0070494. Based upon a talk by the author in the Special Session on Enumerative Geometry in Physics at the AMS sectional meeting in Lowell, Massachusetts, April 1-2, 2000. Corrected version of 29 November 2001. c ⃝2000 (Frank Sottile) 1 2 FRANK SOTTILE s ∈P1 and an m-plane L in Cm+p, the set of maps M which satisfy M(s)∩L ̸= {0} (the p-plane M(s) meets the m-plane L non-trivially) is a divisor on this space of maps. We consider the following enumerative problem: Question 1. Given general points s1, s2, . . . , sN ∈P1 and general m-planes L1, L2, . . . , LN ⊂ Cm+p, how many degree q maps M : P1 →Grass(p, Cm+p) satisfy (1.1) M(si) ∩Li ̸= {0} for i = 1, 2, . . . , N ? This is a special case of the more general enumerative problem considered by Vafa and Intriligator [30, 62] who replaced the Schubert condition M(s) ∩ L ̸= {0} by general Schubert conditions and the map M : (P1, s1, s2, . . . , sN) → Grass(p, Cm+p) by a map of a general pointed curve. There, a formula was pro-posed involving residues. This formula was justified by Siebert and Tian by computing the (small) quantum cohomology ring of the Grassmannian, whose struc-ture was also conjectured by Vafa and Intriligator. We describe this part of our story in Section 5. A completely different approach (and motivation) to this enumerative problem came from systems theory. 
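Before turning to that story, note that the incidence condition in Question 1 is concretely a determinantal condition: a p-plane and an m-plane in (m+p)-space meet non-trivially exactly when the square matrix obtained by concatenating bases of the two subspaces is singular. The following minimal sketch checks this numerically; the random data are illustrative only and not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 3, 2

# A p-plane and an m-plane in K^(m+p), each given by a basis in the columns.
M_basis = rng.standard_normal((m + p, p))
L_basis = rng.standard_normal((m + p, m))

# The two subspaces meet non-trivially iff det[M_basis | L_basis] = 0.
det = np.linalg.det(np.hstack([M_basis, L_basis]))
print("meet non-trivially:", np.isclose(det, 0.0))   # generically False

# Force an intersection: make one column of L a vector inside the p-plane.
L_basis[:, 0] = M_basis @ rng.standard_normal(p)
print(np.isclose(np.linalg.det(np.hstack([M_basis, L_basis])), 0.0))  # True
```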
Briefly, conditions of the form M(s) ∩ L ≠ {0} arise in the problem of stabilizing a given linear system using dynamic output compensation. In the critical dimension when there are finitely many compensators, the problem of enumeration was solved by Ravi, Rosenthal, and Wang [40, 41], who gave the closed formula for the intersection number d(m, p; q) of Question 1:

(1.2)   (-1)^{q(p+1)}\, N! \cdot \sum_{\nu_1+\cdots+\nu_p=q} \frac{\prod_{j<k}\bigl(k-j+(\nu_k-\nu_j)(m+p)\bigr)}{\prod_{j=1}^{p}\bigl(m+j+\nu_j(m+p)-1\bigr)!}\,.

One of their motivations was to determine when this number is odd, for then there exists a real compensator stabilizing a given real linear system. We describe how this problem in systems theory is a special case of the general enumerative problem described above, and also how Ravi, Rosenthal, and Wang solved this enumeration in Section 2. We remark that the quantum cohomology of the Grassmannian also has applications to matrix interpolation problems [3, 38].

The geometric formulation from systems theory (and ideas from numerical homotopy continuation) were exploited to prove the following result in real enumerative geometry: there exist real points s_1, s_2, ..., s_N ∈ P^1_R and real m-planes L_1, L_2, ..., L_N ⊂ R^{m+p} such that there are d(m, p; q) rational maps M : P^1 → Grass(p, C^{m+p}) of degree q satisfying (1.1), and each of these maps is real. Thus the enumerative problem of Question 1 is fully real (in the sense of ). A variant of this argument gives the new result that Question 1 makes enumerative sense in any characteristic: if K is any algebraically closed field, then for general points s_1, s_2, ..., s_N ∈ P^1_K and general m-planes L_1, L_2, ..., L_N ⊂ K^{m+p} there are exactly d(m, p; q) degree q rational maps M : P^1 → Grass(p, K^{m+p}) satisfying (1.1). The point here is that the corresponding varieties intersect transversally, and so the solutions occur without multiplicities. We give a proof of these results in Section 3, where we also solve the enumerative problem of Question 1 without reference to the Chow or quantum Chow rings, the usual tools of enumerative geometry.

Ravi, Rosenthal, and Wang [40, 41] also showed that d(m, p; q) equals the number of saturated chains in a certain poset of quantum Plücker coordinates. This is the degree of the singular Uhlenbeck compactification [47, 7] of the space of rational curves in the Grassmannian in a natural projective embedding, also called the quantum Grassmannian. Its degree may be computed from its defining ideal. In , quantum Plücker relations for this ideal were constructed, giving a different proof that this degree equals the number of chains in the poset of quantum Plücker coordinates. We describe that in Section 4 and give another proof that d(m, p; q) equals the number of chains in that poset.

In the last section, we not only describe some of the classical story motivated by physicists, but also relate the formula (1.2) of Ravi, Rosenthal, and Wang to the formula of Vafa and Intriligator. This involves another, intermediate formula (5.10). We conclude by discussing some further aspects of the quantum cohomology ring of the Grassmannian, including how it arose in representation theory and open problems involving quantum Littlewood-Richardson numbers.

2. Dynamic Control of Linear Systems

In control theory, the greatest interest is to obtain results valid over the real numbers R. As in algebraic geometry, the strongest and most elegant results are true only for the complex numbers C.
Also as in algebraic geometry, much of the theory may be developed over any field. To that end, we let K denote an arbitrary field, keeping in mind the special cases of when K = R or K = C. Suppose we have a time-invariant physical system with m inputs u ∈Km and p outputs y ∈Kp whose evolution is governed by a system of constant coefficient linear differential equations 0 = F(u, u′, . . . ; y, y′, . . .) . One important way in which such a linear system arises is from a linear perturbation of a non-linear system. Introducing auxiliary variables or internal states x ∈Kn, we can transform this into a first order system of linear differential equations (2.1) d dtx = Ax + Bu y = Cx + Du , where A, B, C, and D are matrices of the appropriate size. The matrix D represents a direct linear dependence of y on u. Systems with D = 0, where the dependence of y on u is purely dynamic, are called strictly proper. The representation (2.1) is called a state space form or state space realization of the original system. There are many ways to realize a given system in state-space form and a fundamental invariant, the McMillan degree, is the minimal number n of internal states needed to obtain such a first order linear evolution equation. The McMillan degree measures the complexity of a linear system. A system is observable if the joint kernel of the matrices CAk for 0 ≤k < n is zero, which implies that the internal states (x) may be recovered from knowledge of y(t) and u(t). It is controllable if the matrices AkB for 0 ≤k < n span Kn, which implies that the system may be driven to any fixed internal state. A state space realization (2.1) of a system is minimal (n is its McMillan degree) if and only if it is both observable and controllable [19, §13]. 4 FRANK SOTTILE 2.1. Rational curves on Grassmannians. We give another fundamental representation of a linear system that links systems theory to the (quantum) coho-mology of the Grassmannian. Consider the Laplace transform of (2.1) s · x(s) = A · x(s) + B · u(s) , y(s) = C · x(s) + D · u(s) . We eliminate x and solve y(s) = ¡ C(sIn −A)−1B + D ¢ u(s) . This p by m matrix Γ(s) := C(sIn −A)−1B + D of rational functions is called the transfer function of the original system. It represents the response of the system in the frequency domain. The transfer function determines a curve in Grass(p, Km+p) by P1 ∋s 7− →column space · Im Γ(s) ¸ , whenever this is well-defined. This Hermann-Martin curve extends to P1 and its degree is equal to the McMillan degree of the system. Recall that the degree of a curve M : P1 →Grass(p, Km+p) has three equivalent descriptions: (1) The number of points s ∈P1 such that M(s) ∩L ̸= {0}, where L is a general m-plane. (2) The maximum degree of the (rational-function) minors of any (m + p) by p matrix of rational functions whose column space gives the map M. (3) The degree of the pullback of the generator O(1) of the Picard group of Grass(p, Km+p). One concrete way to see that the transfer function defines a curve in the Grass-mannian is via the algebra of polynomial matrices. A matrix Γ(s) of rational functions is proper if lims→∞Γ(s) exists and strictly proper if that limit is zero. The transfer function of the linear system (2.1) is proper, since lims→∞Γ(s) = D, and strictly proper linear systems have strictly proper transfer functions. 
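To make these notions concrete, the sketch below builds a small toy realization (2.1) with n = 2, m = p = 1 (the matrices are our own illustrative choices, not from the paper), checks observability and controllability by the rank criteria just stated, and forms the transfer function Γ(s) = C(sI_n − A)^{-1}B + D symbolically.

```python
import numpy as np
import sympy as sp

# Toy strictly proper realization (2.1): n = 2 states, m = 1 input, p = 1 output.
A = np.array([[0, 1], [-2, -3]])
B = np.array([[0], [1]])
C = np.array([[1, 0]])
D = np.zeros((1, 1), dtype=int)
n = A.shape[0]

# Observable: the joint kernel of C, CA, ..., CA^{n-1} is zero (full column rank).
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
# Controllable: B, AB, ..., A^{n-1}B span K^n (full row rank).
R = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
print(np.linalg.matrix_rank(O) == n, np.linalg.matrix_rank(R) == n)  # True True: minimal

# Transfer function Gamma(s) = C (sI_n - A)^{-1} B + D, a strictly proper 1x1 matrix.
s = sp.symbols('s')
Gamma = (sp.Matrix(C.tolist()) * (s * sp.eye(n) - sp.Matrix(A.tolist())).inv()
         * sp.Matrix(B.tolist()) + sp.Matrix(D.tolist()))
print(sp.cancel(Gamma[0, 0]))   # 1/(s**2 + 3*s + 2): McMillan degree 2
```

Since this realization is both observable and controllable, it is minimal, and the McMillan degree 2 can be read off as the degree of the denominator of the (coprime) transfer function.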
Given a proper matrix of rational functions Γ(s) of size p by m, consider factorizations Γ(s) = P(s)Q(s)−1 where P(s) is a p by m matrix of polynomials and Q(s) is a m by m matrix of polynomials with non-zero determinant. There are many ways to do this: One could, for instance, let Q(s) be the diagonal matrix with entries f(s), the least common multiple of the denominators of the entries of Γ(s). There is a unique minimal, or (right) coprime factorization. Theorem 2.1. Suppose Γ(s) is a proper transfer function of a linear system of McMillan degree n. Then there exist matrices P(s), Q(s) of polynomials such that (i) P(s) and Q(s) are coprime in that there exist matrices of polynomials X(s) and Y (s) satisfying X(s)Q(s) + Y (s)P(s) = Im . (ii) Any other factorization Γ(s) = N(s)D(s)−1 into matrices of polynomials has deg det D(s) ≥deg det Q(s) = n . (iii) P(s) and Q(s) are unique up to multiplication on the right by elements of GLm(K[s]). RATIONAL CURVES ON GRASSMANNIANS 5 Theorem 2.1 is proven, for instance in any of or [16, §22] or [19, §4]. By (i) and the factorization, P(s)Q(s)−1 = C(sIn−A)−1B + D, the deter-minants of Q(s) and of sIn−A have the same roots. We call P(s)Q(s)−1 a right coprime factorization of Γ(s). By (i), the Hermann-Martin curve is also represented by (2.2) s 7− →column space · Q(s) P(s) ¸ , which has dimension m for all s ∈P1, as Γ(s) is proper. Since lims→∞Γ(s) = D, the value of the curve at infinity is the column space of lim s→∞ · Q(s) P(s) ¸ = · Im D ¸ . Thus the maximal minors of the matrix · Q(s) P(s) ¸ have degree at most the degree of the principal minor det Q(s), which is n. This shows that the Hermann-Martin curve has degree n. In this way, a linear sys-tem (2.1) with m inputs and p outputs of McMillan degree n corresponds to a rational curve M : P1 →Grass(p, Km+p) of degree n. In fact, every such rational curve comes from a linear system . An informal way to see this is to first observe that the entries of the matrices A, B, C, and D in (2.1) give the set of all possible state-space realizations of m-input p-output linear systems with n internal states the structure of affine space of dimension n2 + nm + np + mp. The conditions of controllability and observability for the system to be minimal are the non-vanishing of certain polynomials in the entries of A, B, C, and so the set of all such systems of McMillan degree n is an open subset of this affine space. Changing coordinates of the internal states x gives a free GLn(K)-action on these minimal realizations whose orbits are exactly the fibres of the map {Minimal state-space realizations} − → {Proper transfer functions} . Thus the space of Hermann-Martin curves of m-input p-output linear systems of McMillan degree n has dimension nm + np + mp, which is equal to the dimension of the space Mq p,m of degree n rational maps to Grass(m, Km+p). In fact, the Hermann-Martin curves constitute an open subset of this space of rational curves, and there are very natural objects from systems theory that yield the full space of rational curves, as well as various compactifications of this space. The work of Hermann and Martin continued work of Clark , who showed that the space of transfer functions is a smooth manifold. Later, Helmke stud-ied topological properties of this space and Hazewinkel and Byrnes studied compactifications of this space. This work was revived by Rosenthal, who intro-duced the quantum Grassmannian into systems theory in his 1990 PhD thesis . See for a discussion and further references. 2.2. 
Feedback control and Schubert calculus. Given a strictly proper linear system

(2.3)   \frac{d}{dt}x = Ax + Bu, \qquad y = Cx,

we would like to control its behavior using dynamic output feedback. That is, we couple its inputs u to its outputs y through a p-input, m-output linear system of McMillan degree q, called a dynamic compensator. Consider a minimal state-space realization of this compensator

(2.4)   \frac{d}{dt}z = Fz + Gy, \qquad u = Hz + Ky,

where z ∈ K^q are the internal states, and F, G, H, and K are matrices of the appropriate size, K representing a constant (residual) linear feedback law.

[Schematic: the given system (internal states x ∈ K^n) and the compensator (internal states z ∈ K^q) in a feedback loop, the inputs u ∈ K^m of one fed by the outputs y ∈ K^p of the other.]

We obtain a closed-loop or autonomous system from (2.1) and (2.4) by eliminating y and u:

(2.5)   \frac{d}{dt}\begin{bmatrix} x \\ z \end{bmatrix} = \begin{bmatrix} A + BKC & BH \\ GC & F \end{bmatrix}\begin{bmatrix} x \\ z \end{bmatrix}.

The behavior of this autonomous system is determined by the n+q eigenvalues of the matrix, that is, by the zeroes of the (monic) characteristic polynomial

(2.6)   \varphi(s) := \det\left( sI_{n+q} - \begin{bmatrix} A + BKC & BH \\ GC & F \end{bmatrix} \right).

The pole placement problem asks the inverse question:

Pole Placement Problem. Given a strictly proper m-input p-output linear system of McMillan degree n (2.3) and a desired behavior represented by a monic characteristic polynomial ϕ(s) of degree n + q, for which dynamic compensators (2.4) does the corresponding autonomous system (2.5) have characteristic polynomial ϕ?

The reason for the word pole is that the zeroes of the characteristic polynomial are the poles of a transfer function. A linear system of McMillan degree n is arbitrarily pole-assignable by degree q compensators (over K) if the pole placement problem may be solved for all monic polynomials ϕ of degree n + q.

Remark 2.2. Pole placement is a fundamental design problem for linear systems. When K = R, an important property of an autonomous real linear system is whether or not it is stable, that is, whether or not all of the roots of its characteristic polynomial have negative real parts. In other situations, the control engineer may wish to destabilize a system. For discrete-time systems (which have an identical formalism), stability is achieved by placing the roots of the characteristic polynomial within the unit circle. These questions of placing poles in subsets of the complex plane are strictly weaker than the pole placement problem, yet little is known about them. Here is an important related question concerning stability.

Minimal Stability. Given a strictly proper m-input p-output real linear system of McMillan degree n, what is the minimal McMillan degree q of a real dynamic compensator (2.4) for which the corresponding autonomous system (2.5) is stable?

When K is algebraically closed, the pole placement problem may be solved for q ≥ n − 1, and q ≥ (n − mp)/(m + p − 1) is necessary and sufficient for generic systems. Thus for q large enough there exist stabilizing dynamic compensators. The minimal stability problem is particularly important when the original system arises as a linear perturbation of a non-linear system. In this case, it asks how cheaply we may damp linear perturbations of McMillan degree n.

We investigate the pole placement problem. Given a strictly proper system (2.3) and a monic characteristic polynomial ϕ(s) with distinct roots s_1, s_2, ..., s_{n+q}, we seek matrices F, G, H, and K for which

\det\left( s_i I_{n+q} - \begin{bmatrix} A + BKC & BH \\ GC & F \end{bmatrix} \right) = 0 \quad\text{for } i = 1, 2, \ldots, n + q.

This gives n+q equations in the q^2 + pq + mq + mp entries of F, G, H, and K.
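The closed-loop matrix in (2.5) and its characteristic polynomial (2.6) are straightforward to form numerically. The sketch below wires a toy plant (2.3) to a toy degree q = 1 compensator (2.4); all matrices are illustrative choices of ours, not data from the text.

```python
import numpy as np

# Toy plant (2.3): n = 2 states, m = 1 input, p = 1 output (strictly proper).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
# Toy compensator (2.4) with q = 1 internal state.
F = np.array([[-1.0]]); G = np.array([[2.0]]); H = np.array([[1.0]]); K = np.array([[-0.5]])

# Closed-loop matrix of (2.5).
closed_loop = np.block([[A + B @ K @ C, B @ H],
                        [G @ C,         F    ]])

# Monic characteristic polynomial (2.6) of degree n + q = 3, and its roots (the poles).
phi = np.poly(closed_loop)
print(np.round(phi, 6))   # coefficients, leading coefficient 1
print(np.roots(phi))      # the n + q closed-loop eigenvalues
```

The pole placement problem runs this computation in reverse: the coefficients of phi are prescribed, and F, G, H, K are the unknowns.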
Since GL_q(K) acts on these data, giving equivalent systems and fixing ϕ, we expect that the pole placement problem is solvable over the complex numbers when

n + q ≤ pq + mq + mp.

This is in fact the case for generic systems, as we shall see. We reformulate the dynamic pole placement problem geometrically. Each step below involves only row or column operations applied to the matrix involved.

\varphi(s) = \det\begin{bmatrix} sI_n - A - BKC & -BH \\ -GC & sI_q - F \end{bmatrix}
 = \det\begin{bmatrix} sI_n - A - BKC & -BH & BK & -B \\ -GC & sI_q - F & 0 & 0 \\ 0 & 0 & I_p & 0 \\ 0 & 0 & 0 & I_m \end{bmatrix}
 = \det\begin{bmatrix} sI_n - A & 0 & 0 & -B \\ 0 & sI_q - F & G & 0 \\ C & 0 & I_p & 0 \\ 0 & -H & K & I_m \end{bmatrix}
 = \det[sI_n - A] \times \det[sI_q - F] \times \det\begin{bmatrix} I_n & 0 & 0 & -(sI_n - A)^{-1}B \\ 0 & I_q & (sI_q - F)^{-1}G & 0 \\ C & 0 & I_p & 0 \\ 0 & -H & K & I_m \end{bmatrix}

This becomes

\det[sI_n - A] \times \det[sI_q - F] \times \det\begin{bmatrix} I_n & 0 & 0 & -(sI_n - A)^{-1}B \\ 0 & I_q & (sI_q - F)^{-1}G & 0 \\ 0 & 0 & I_p & C(sI_n - A)^{-1}B \\ 0 & 0 & H(sI_q - F)^{-1}G + K & I_m \end{bmatrix}

and thus we obtain

\varphi(s) = \det\begin{bmatrix} I_p & C(sI_n - A)^{-1}B \\ H(sI_q - F)^{-1}G + K & I_m \end{bmatrix} \times \det[sI_n - A] \times \det[sI_q - F].

The off-diagonal entries in the first matrix are the transfer functions of the original system (2.3) and of the compensator (2.4). Consider coprime factorizations

N(s)D(s)^{-1} = C(sI_n - A)^{-1}B, \qquad P(s)Q(s)^{-1} = H(sI_q - F)^{-1}G + K.

Because n and q are the respective McMillan degrees, we have

\det D(s) = \det[sI_n - A] \quad\text{and}\quad \det Q(s) = \det[sI_q - F],

and so our characteristic polynomial becomes

(2.7)   \varphi(s) = \det\begin{bmatrix} Q(s) & N(s) \\ P(s) & D(s) \end{bmatrix}.

The first column of this 2 by 2 block matrix represents the Hermann-Martin curve M : P^1 → Grass(p, K^{m+p}) of the compensator and the second column the Hermann-Martin curve L : P^1 → Grass(m, K^{m+p}) of the original system. The determinant (2.7) must vanish at each root of the characteristic polynomial. Since, for every s, the columns giving the Hermann-Martin curves have full rank, we obtain the following version of the pole placement problem, when the characteristic polynomial has distinct roots.

Geometric Version of the pole placement problem. Suppose we have a strictly proper m-input p-output linear system (2.3) of McMillan degree n with Hermann-Martin curve L and a monic polynomial ϕ(s) of degree n+q with distinct roots s_1, s_2, ..., s_{n+q}. Which rational curves M : P^1 → Grass(p, K^{m+p}) of degree q satisfy

M(s_i) ∩ L(s_i) ≠ {0}   for i = 1, 2, ..., n + q ?

Thus we are looking for rational curves M which satisfy n + q Schubert conditions of the type in Question 1. Note that when q = 0 (the case of static compensators), Q(s) = I_p and P(s) = K, so a static compensator is represented by the matrix

\begin{bmatrix} I_p \\ K \end{bmatrix},

whose column space is just a point in the Grassmannian Grass(p, K^{m+p}). This observation of Byrnes was the point of departure for the subsequent application of Schubert calculus to the pole placement problem.

2.3. Number of dynamic compensators in the critical dimension. Let M^q_{m,p} be the space of degree q maps M : P^1 → Grass(p, K^{m+p}), which is also the space of Hermann-Martin curves of possible degree q dynamic compensators for m-input, p-output linear systems (2.3). (This includes both proper and improper compensators.) An important geometric perspective on the characteristic equation (2.7) is that a given strictly proper linear system of McMillan degree n (represented by its Hermann-Martin curve L : P^1 → Grass(p, K^{m+p})) determines a pole placement map

Λ_L : M^q_{m,p} → {Polynomials of degree n + q}    by    Λ_L : M ↦ \det[M(s) : L(s)].

This map gets its name from the fact that Λ_L^{-1}(ϕ(s)) is the set of Hermann-Martin curves of degree q dynamic compensators giving characteristic polynomial ϕ(s).
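Identity (2.7) and the pole placement map Λ_L are easy to see in the smallest case m = p = 1. In the sketch below, [Q; P] is a made-up compensator curve of degree q = 1 and [N; D] a made-up plant curve of degree n = 2, so Λ_L(M) = det[M(s) : L(s)] is a polynomial of degree at most n + q = 3. The data are illustrative, not from the paper.

```python
import sympy as sp

s = sp.symbols('s')

# m = p = 1: curves in Grass(1, K^2) are columns of two polynomials.
Q, P = s + 2, 3*s - 1             # compensator curve M(s) = [Q; P], degree q = 1
N, D = s - 4, s**2 + 3*s + 2      # plant curve L(s) = [N; D], degree n = 2

# Pole placement map: Lambda_L(M) = det[M(s) : L(s)], identity (2.7).
phi = sp.expand(sp.Matrix([[Q, N], [P, D]]).det())
print(phi, sp.degree(phi, s))     # s**3 + 2*s**2 + 21*s, degree n + q = 3
```

Prescribing ϕ(s) and solving for the coefficients of Q and P is exactly the pole placement problem in this toy setting.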
Thus a strictly proper linear system is arbitrarily pole assignable when the corresponding pole placement map is surjective. Consider expanding this determinant along the columns of M(s): ΛL(M) = X α∈( [m+p] p ) Mα(s) · Lα(s) . Here ¡[m+p] p ¢ is the collection of subsets of {1, 2, . . . , m+p} of size p, Mα(s) is the αth maximal minor of M(s) (given by the rows of M(s) indexed by α), and Lα(s) is the appropriately signed complementary maximal minor of L(s). The point of this exercise is that the pole placement map is a linear function of the coefficients of the polynomials Mα(s). Thus we are led to consider the Pl¨ ucker map (2.8) Mq m,p − →P(∧pKm+p ⊗Kq+1) which associates a m+p by p matrix M(s) of polynomials (representing a degree q compensator or degree q curve) to its ¡m+p p ¢ maximal minors Mα(s), which are polynomials of degree q. A more intrinsic definition of this map is given in Section 3 just before (3.1). This gives a map to projective space as multiplying M(s) by an invertible p by p matrix F multiplies each minor by the factor det F but does not change the curve. This Pl¨ ucker map is an embedding, and one compactification of Mq m,p is the closure Kq m,p of the image, which we call the quantum Grassmannian. This space was introduced to systems theory by Rosenthal . In this way, we see that the pole placement map factors Mq m,p − →Kq m,p πL − − →Pn+q , with the last map πL a linear projection on P(Vp Km+p ⊗Kq+1). Here, Pn+q is the space of polynomials of degree at most n + q, modulo scalars. (If the compensator is on the boundary of the compactification, then the polynomial has degree less than n+q.) Thus a necessary condition for arbitrary pole assignability of a strictly proper linear system L is that πL is surjective. The surjectivity of πL is sufficient for solving the pole placement problem for L and for generic polynomials ϕ(s). Rosenthal shows that if q(m + p) + mp ≤n + q and K is algebraically closed, then πL is surjective for generic strictly proper linear systems L. This gives the criterion n + q ≤q(m + p) + mp for a generic m-input, p-output system (2.3) of degree n to be arbitrarily pole assigned with degree q compensators. For generic systems L in the critical dimension (q(m + p) + mp = n + q so that dim Kq m,p = n + q) the map πL is finite and hence surjective, again, when K is algebraically closed. Thus #(π−1 L ϕ(s) ∩Mq m,p) is the number of compen-sators solving the pole placement problem for ϕ(s). Since πL is a linear projection, 10 FRANK SOTTILE this number is bounded by the degree of the quantum Grassmannian Kq m,p in its Pl¨ ucker embedding. Suppose K is algebraically closed. Since Mq m,p is open in Kq m,p, for generic ϕ(s) the degree of Kq m,p equals the number of dynamic compensators, possibly counted with multiplicity. This is the main theorem of . When K = R so that A, B, C, and ϕ(s) are real, π−1ϕ(s) ∩Mq m,p gives the complex dynamic compensators which solve the pole placement problem for these data. If n + q ≤q(m + p) + mp and Kq m,p has odd degree, then the set of dynamic compensators is a projective variety defined over the real numbers of odd degree, and hence contains a real point. We deduce the following result. Theorem 2.3. Suppose n ≤q(m + p −1) + mp, and deg Kq m,p is odd. Then a general strictly proper real linear system (2.3) with m inputs, p outputs, and McMil-lan degree n is arbitrarily pole assignable by real degree q dynamic compensators. When the degree of Kq m,p is even the strongest result is due to Rosenthal and Wang. Theorem 2.4 (). 
A generic strictly proper linear system (2.3) with m inputs, p outputs, and McMillan degree n is arbitrarily pole assignable by real degree q compensators if

n < q(m + p - 1) + mp - \min\{\, r_m(p - 1),\ r_p(m - 1) \,\},

where r_p and r_m are the remainders of q upon division by p and m, respectively.

The special case when q = 0 of static compensation has an interesting history (see the excellent survey of Byrnes). In this case, the Grassmannian Grass(p, K^{m+p}) plays the rôle of K^q_{m,p}, and once it was discovered that the equations for pole placement were linear equations on the Grassmannian in its Plücker embedding, significant progress was made. This included Brockett and Byrnes’ calculation of the number of static compensators for a generic m-input p-output linear system of McMillan degree mp as the degree of the Grassmannian:

(2.9)   \frac{(mp)!\,\prod_{1\le j<k\le p}(k-j)}{\prod_{j=1}^{p}(m+j-1)!} \;=\; (mp)!\;\frac{1!\,2!\,3!\cdots(p-2)!\,(p-1)!}{m!\,(m+1)!\cdots(m+p-1)!}

We can deduce the analog of Theorem 2.3 from this; unfortunately, this number is odd only when min(m, p) = 1 (and then it is 1), or else min(m, p) = 2 and max(m, p) + 1 is a power of 2. The analog of Theorem 2.4 is due to Wang: n < mp is sufficient to guarantee arbitrary pole assignability over R, for generic systems.

2.4. Formulae for deg K^q_{m,p}. Let z_{α(a)} be the coefficient of s^a in the αth maximal minor M_α(s) of M(s). These coefficients provide quantum Plücker coordinates for P(\wedge^p K^{m+p} ⊗ K^{q+1}). Let C^q_{m,p} := { α(a) | α ∈ \binom{[m+p]}{p} and 0 ≤ a ≤ q } be the indices of these quantum Plücker coordinates. This index set has a natural partial order

α(a) ≤ β(b)  ⟺  a ≤ b and α_i ≤ β_{b-a+i} for i = 1, 2, ..., p - b + a.

The poset C^q_{m,p} is graded with the rank |α(a)| of α(a) equal to a(m+p) + \sum_i (α_i - i). It is also a distributive lattice. Figure 1 shows C^1_{3,2} on the left. Given α(a) ∈ C^q_{m,p}, define the quantum Schubert variety

(2.10)   Z_{α(a)} := { z ∈ K^q_{m,p} | z_{β(b)} = 0 if β(b) ≰ α(a) }.

[Figure 1. The poset C^1_{3,2}, its image J(C^1_{3,2}), and the degrees deg Z_{α(a)}.]

From this definition, we see that Z_{α(a)} ∩ Z_{β(b)} = Z_{γ(c)} (set-theoretically), where γ(c) is the greatest lower bound of α(a) and β(b). Let H_{α(a)} be the hyperplane defined by z_{α(a)} = 0. We write β(b) ⋖ α(a) to indicate that β(b) < α(a) and that there is no other index γ(c) with β(b) < γ(c) < α(a). The main technical lemma of [40, 41] is the following.

Proposition 2.5 ([40, 41]). Let α(a) ∈ C^q_{m,p}. Then
(i) Z_{α(a)} is an irreducible subvariety of K^q_{m,p} of dimension |α(a)|.
(ii) The intersection of Z_{α(a)} and H_{α(a)} is generically transverse and we have

Z_{α(a)} ∩ H_{α(a)} = \bigcup_{β(b) ⋖ α(a)} Z_{β(b)}.

This result is proven essentially by working in local coordinates for Z_{α(a)}. Part (ii) is the geometric version of the (codimension-1) Pieri formula. It generalizes the result of Schubert, who proved it for the classical Grassmannian (a = 0). By Bézout’s Theorem (see [20, §8]), we deduce the following fundamental recursion

(2.11)   \deg Z_{α(a)} = \sum_{β(b) ⋖ α(a)} \deg Z_{β(b)}.

The minimal quantum Schubert variety is a point, so we deduce a formula for deg K^q_{m,p}.

Theorem 2.6 ([40, 41]). The degree d(m, p; q) of K^q_{m,p} is the number of maximal chains in the poset C^q_{m,p}.
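Theorem 2.6 and the closed formula (1.2) can both be checked mechanically for small cases. The sketch below (helper names are ours) builds the poset C^q_{m,p} exactly as defined above, counts maximal chains via the recursion (2.11), and compares this with a direct evaluation of (1.2), using the convention 1/l! = 0 for l < 0 to truncate the sum. Both give 2 in the classical case m = p = 2, q = 0, and 55 for m = 3, p = 2, q = 1.

```python
from fractions import Fraction
from functools import lru_cache
from itertools import combinations, product
from math import factorial, prod

def chains(m, p, q):
    """deg K^q_{m,p} as the number of maximal chains of C^q_{m,p} (Theorem 2.6)."""
    elems = [(al, a) for a in range(q + 1)
             for al in combinations(range(1, m + p + 1), p)]
    rank = lambda x: x[1] * (m + p) + sum(x[0][i] - (i + 1) for i in range(p))
    def leq(x, y):                         # the partial order on C^q_{m,p}
        (al, a), (be, b) = x, y
        return a <= b and all(al[i] <= be[b - a + i] for i in range(p - (b - a)))
    by_rank = {}
    for x in elems:
        by_rank.setdefault(rank(x), []).append(x)
    @lru_cache(maxsize=None)
    def deg(x):                            # recursion (2.11); the minimal element has degree 1
        r = rank(x)
        return 1 if r == 0 else sum(deg(y) for y in by_rank.get(r - 1, []) if leq(y, x))
    return deg((tuple(range(m + 1, m + p + 1)), q))

def closed_formula(m, p, q):
    """Formula (1.2); terms with a negative factorial argument vanish."""
    n, N = m + p, q * (m + p) + m * p
    lows = [-((m + j - 1) // n) for j in range(1, p + 1)]
    rngs = [range(lows[i], q - (sum(lows) - lows[i]) + 1) for i in range(p)]
    total = Fraction(0)
    for nu in product(*rngs):
        if sum(nu) != q:
            continue
        num = prod(k - j + (nu[k - 1] - nu[j - 1]) * n
                   for j in range(1, p + 1) for k in range(j + 1, p + 1))
        den = prod(factorial(m + j + nu[j - 1] * n - 1) for j in range(1, p + 1))
        total += Fraction(num, den)
    return int((-1) ** (q * (p + 1)) * factorial(N) * total)

print(chains(2, 2, 0), closed_formula(2, 2, 0))   # 2 2
print(chains(3, 2, 1), closed_formula(3, 2, 1))   # 55 55
```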
For example, the degree of K1 3,2 is 55, as shown by the diagram on the right in Figure 1, which recursively computes the degrees of the quantum Schubert varieties Zα(a). In Section 4 we give an alternative proof of Theorem 2.6 using Gr¨ obner bases. Ravi, Rosenthal, and Wang also solve this recursion to obtain the closed for-mula (1.2). A first step is to change the indexing of the quantum Pl¨ ucker coordi-nates, embedding Cq m,p into the set of increasing sequences 0 < i1 < i2 < · · · < ip 12 FRANK SOTTILE of positive integers of length p. Given α(a) ∈Cq p,m, write a = pl + r with p > r ≥0 and define a sequence J(α(a)) by (2.12) J(α(a))k := ½ l(m+p) + αr+k if 1 ≤k ≤p −r (l+1)(m+p) + αk−p+r if p −r < k ≤p . For instance, when m = p = 5, we have J((2, 3, 5, 6, 9)(7)) = (15, 16, 19, 22, 23). Note that we have J(α(a))p < J(α(a))1 + m + p. This gives an order isomorphism of the poset Cq p,m with the poset of sequences i1 < i2 < · · · < ip of positive integers where ip < i1 + m + p. This is illustrated in the middle diagram of Figure 1, which shows the image of C1 3,2. This isomorphism (of course) preserves the rank function of the two posets: (2.13) |α(a)| := a(m + p) + p X j=1 (αj −j) = p X i=1 ¡ J(α(a))i −i ¢ =: |J(α(a))| . Observe that J(α(a)) is congruent to α modulo m + p. Lemma 2.7. Let d(i1, i2, . . . , ip) be a function defined for all weakly increasing sequences of non-negative integers i1, i2, . . . , ip with ip ≤i1 + m + p. Suppose that for any sequence 0 < i1 < · · · < ip with ip < i1+m+p, d(i1, i2, . . . , ip) satisfies the recursion (2.14) d(i1, i2, . . . , ip) = p X k=1 d(i1, i2, . . . , ik −1, . . . , ip) , is subject to the initial condition (2.15) d(1, 2, . . . , p) = 1 , and the boundary conditions d(. . . , l, l, . . .) = 0 , (2.16) d(0, . . .) = 0 , and (2.17) d(i1, i2, . . . , ip) = 0 if ip = i1 + m + p . (2.18) Then d(J(α(a))) = deg Zα(a). Proof. Let j1, j2, . . . , jp = J(α(a)) for α(a) ∈Cq m,p. Then the sequence I := j1, j2, . . . , jk−1, . . . , jp fails to equal J(β(b)) for some β(b) ∈Cq m,p only if I has either two repeated indices (2.16), or if i1 = 0 (2.17), or else if ip = i1 + m + p (2.18). In each of these cases d(I) = 0, and so the function d(J(α(a))) defined for α(a) ∈Cq m,p satisfies the recursion (2.11) for deg Zα(a). Since the index of a minimal quantum Schubert variety (which is a point) is (1, 2, . . . , p)(0), and J(α(0)) = α, the function d(J(α(a))) also satisfies the initial condition for deg Zα(a). □ Sequences I : 0 < i1 < i2 < · · · < ip ≤l index Schubert varieties ΩI of Grass(p, Kl). Schubert showed that the degree g(I) of the Schubert variety ΩI in the Pl¨ ucker embedding satisfies the recursion, initial condition, and boundary conditions (2.16) and (2.17) of Lemma 2.7. He later gave the following closed formula for this degree (compare with (2.9)): (2.19) |I|! Q j<k(ik −ij) Q j(ij −1)! , RATIONAL CURVES ON GRASSMANNIANS 13 where |I| = P j ij −j. This formula (2.19) defines g(I) as an alternating function on all sequences of integers if we set 1/l! = 0 when l < 0. Theorem 2.8 (). Let α(a) ∈Cq m,p and set I := J(α(a)). Then we have d(I) = X b1+···+bp=0 g(i1 + b1(m + p), i2 + b2(m + p), . . . , ip + bp(m + p)) . Observe that the sum is in fact finite, as only sequences b1, b2, . . . , bm for which every term ij + bj(m + p) is positive contribute to the sum. Proof. Let δ(I) be the function defined by the sum. First observe that if max I ≤m + p, then there is only the trivial summand (all bi = 0) and so δ(I) = g(I). 
Also, since g is alternating, δ is an alternating function. We show that the function δ satisfies the conditions of Lemma 2.7, when I is a weakly increasing sequence of non-negative integers with ip ≤i1 + m + p. First, δ satisfies the recursion of Lemma 2.7 because the function g satisfies the recursion. Second, δ(1, 2, . . . , p) = g(1, 2, . . . , p) = 1, giving the initial condition. Next, since δ is alternating, it satisfies (2.16). Suppose ip = i1 +m+p. Then every summand indexed by b1, b2, . . . , bp with bp −1 = b1 vanishes as g is alternating, and every summand with bp−1 ̸= b1 is paired with another summand indexed by bp+1, b2, . . . , b1−1, which has the same absolute value, but opposite sign, as g is alternating. Thus (2.18) holds for δ. Finally, if i1 = 0, then either ip = m + p and so δ(I) = 0 or else ip < 0 and so δ(I) = g(I) = 0, giving (2.17) and proving the theorem. □ We now deduce the formula (1.2) from these results. The quantum Grassman-nian Kq m,p is the maximal quantum Schubert variety Z(m+1,...,m+p)(q). Let q = pl+r with 0 ≤r < p. Set α := (m+1, . . . , m+p) and n := m + p. Then J(α(q)) is the sequence (ln + m + r + 1, . . . , ln + m + p, (l + 1)n + m + 1, . . . , (l + 1)n + m + r) and we have |α(q)| = |J(α(q))| = pln + rn + mp = q(m+p) + mp . By Theorem 2.8, the degree d(J(α(q))) of Kq m,p is the sum over all sequences of integers b1, b2, . . . , bp satisfying b1 + b2 + · · · + bp = 0 of the terms g((l + b1)n + m + r + 1, . . . , (l + bp−r)n + m + p, (l + bp−r+1 + 1)n + m + 1, . . . , (l + bp + 1)n + m + r). This term equals (−1)r(p−r) g(ν1n + m + 1, ν2n + m + 2, . . . , νpn + m + p) , where (ν1, ν2, . . . , νp) = (l + bp−r+1 + 1, . . . , l + bp + 1, l + b1, . . . , l + bp−r) , and the sign (−1)r(p−r) comes from the resulting permutation of the argument of g. Since ν1 + · · · + νp = q, we obtain the formula deg Kq m,p = (−1)r(p−r) X ν1+···+νp=q g(ν1n + m + 1, ν2n + m + 2, . . . , νpn + m + p) . 14 FRANK SOTTILE Finally, to obtain the formula (1.2), we use Schubert’s formula (2.19) for g and the following identity r(p −r) = (q −pl)((l + 1)p −q) = 2pql + pq −q2 −p2l(l + 1) ≡ pq + q2 mod 2 ≡pq + q mod 2 . 3. Reality and Transversality Traditionally, intersection theory and enumerative geometry (both classical and quantum) treat the case of complex solutions to enumerative problems, for it is in this case that the most general and elegant results hold. The real numbers pose special problems as the number of real figures satisfying conditions imposed by general (fixed) real figures depends subtly on the configuration of the fixed real figures. Algebraically closed fields of positive characteristic also pose special prob-lems in enumerative geometry as the number of solutions may depend upon the characteristic of the field. One reason for this is that the solutions may occur with multiplicities; the subvarieties defined by the conditions may not intersect transver-sally in positive characteristic. In characteristic zero, Kleiman’s Theorem on the transversality of a general translate may be invoked to show that each solution to many enumerative problems (including that of Question 1) occurs without multi-plicities. In positive characteristic, general translates are not necessarily transverse, and other techniques must be employed to determine whether the solutions occur without multiplicity. 
For the enumerative problem of Question 1, both these difficulties may be overcome using the same elementary arguments, which are a version of the theory of adapted to this particular enumerative problem. These arguments are based upon the Pieri homotopy algorithm of and related to a numerical homotopy continuation algorithm for computing numerical solutions to these enumerative problems when K = C. See [29, §5] for details of this algorithm. Theorem 3.1 ([54, 53]). Let m, p > 1 and q ≥0 be integers. Set N := q(m + p) + mp. Suppose K is an infinite field with algebraic closure K. Then there exist points s1, s2, . . . , sN ∈P1 K and m-planes L1, L2, . . . , LN ⊂Km+p for which there are exactly d(m, p; q) maps M : P1 →Grass(p, K m+p) of degree q satisfying M(si) ∩Li ̸= {0} for i = 1, . . . , N . When K = R, we may further choose the real points and real m-planes so that all of the resulting maps are real. Thus the enumerative problem of Question 1 is enumerative in all characteristics and when K = R, there is some choice of points and m planes for which all of the a priori complex solutions are real. Suppose K is an infinite field. Let L ⊂Km+p be an m-plane, none of whose Pl¨ ucker coordinates vanish. That is, if L is the column space of a m + p by m matrix, also written L, then none of the m by m maximal minors of L vanishes. This choice is possible as K is infinite. Let e1, e2, . . . , em+p be the distinguished basis of Km+p corresponding to the rows of such matrices. We equip Km+p with an action of K×. For s ∈K×, set (s, ei) 7− →s.ei := sm+p−iei . RATIONAL CURVES ON GRASSMANNIANS 15 For s ∈K×, let H(s, L) be the hyperplane in Pl¨ ucker space P(Vp Km+p⊗Kq+1) whose intersection with the space Mq m,p of rational curves of degree q consists of those maps M : P →Grass(p, Km+p) satisfying M(sm+p) ∩s.L ̸= {0} . This is just the set of maps satisfying M(t) ∩K ̸= {0}, where t = sm+p and K = s.L. This condition is equivalent to the vanishing of the determinant det[M(sm+p) : s.L] . If we expand this determinant along the columns of M(sm+p), we obtain X α Mα(sm+p) Lα s( m 2) s|α| , where Mα(s) is the αth maximal minor of M(s) and Lα is the appropriately signed complementary maximal minor (Pl¨ ucker coordinate) of L. If we now expand the polynomials Mα(sm+p) in terms of the quantum Pl¨ ucker coordinates zα(a) of M and divide out the common factor s( m 2), we obtain the following equation for the hyperplane H(s, L): Φ(s, L) = X α(a)∈Cq m,p zα(a) Lα s|α(a)| , since a(m + p) + |α| = |α(a)|. We prove Theorem 3.1 by first showing that given any m-plane L ⊂Km+p with no vanishing Pl¨ ucker coordinates, there exist points s1, s2, . . . , sN ∈K such that there are exactly d(m, p; q) points in Kq m,p common to the hyperplanes H(si, L), and then argue that none of these d(m, p; q) points lie in the boundary Kq m,p −Mq m,p. We first study the boundary of Kq m,p, using results of Bertram . While Bertram works over the complex numbers C, his results we invoke remain valid over any field. A smooth compactification of Mq m,p is provided by a quot scheme Qq m,p . By definition, there is a universal exact sequence 0 →S →Km+p ⊗OP1×Qq m,p →T →0 of sheaves on P1 × Qq m,p where S is a vector bundle of degree −q and rank p. Twisting the determinant of S by OP1(q) and pushing forward to Qq m,p induces a Pl¨ ucker map (3.1) Qq m,p →P ¡Vp Km+p ⊗H0(OP1(q))∗¢ . The restriction to Mq m,p is the Pl¨ ucker map (2.8) and the image is the quantum Grassmannian Kq m,p. 
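The quantum Plücker coordinates z_{α(a)} appearing in Φ(s, L) are simply the coefficients of the maximal minors of a polynomial matrix representing the curve M, as in Section 2.4. Here is a small SymPy sketch for m = 3, p = 2, q = 1; the matrix entries are arbitrary illustrative choices (one constant column and one linear column, so that every maximal minor has degree at most q = 1).

```python
import sympy as sp
from itertools import combinations

s = sp.symbols('s')
m, p, q = 3, 2, 1

# A degree q = 1 curve in Grass(2, K^5): column 0 constant, column 1 linear in s.
M = sp.Matrix(m + p, p, lambda i, j: sp.Integer(1 + 2 * i + 3 * j) + (i + 1) * s * j)

# z_{alpha(a)} = coefficient of s^a in the maximal minor M_alpha(s),
# for alpha a p-subset of the rows (0-indexed here) and 0 <= a <= q.
z = {}
for alpha in combinations(range(m + p), p):
    minor = sp.expand(M.extract(list(alpha), list(range(p))).det())
    for a in range(q + 1):
        z[(alpha, a)] = minor.coeff(s, a)

print(len(z), "quantum Pluecker coordinates")
print(z[((0, 1), 0)], z[((0, 1), 1)], z[((3, 4), 1)])
```

These z_{α(a)} are the coordinates in which the form Φ(s, L) displayed above is linear, one coordinate for each index α(a) ∈ C^q_{m,p}.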
The Pl¨ ucker map fails to be injective on the boundary Qq m,p −Mq m,p of Qq m,p. Indeed, Bertram constructs a Pp−1 bundle over P1 × Qq−1 m,p that maps onto the boundary of Qq m,p, with its restriction over P1 × Mq−1 m,p an embedding. On this projective bundle, the Pl¨ ucker map factors through the base P1 × Qq−1 m,p and the image of a point in the base is s · M, where s is the section of OP1(1) vanishing at s ∈P1 and M is the image of a point in Qq−1 m,p under its Pl¨ ucker map. This identifies the image of the exceptional locus of the Pl¨ ucker map, which is the boundary of Kq m,p, with the image of P1 × Kq−1 m,p in Kq m,p under a map π which 16 FRANK SOTTILE we now describe. Let xα(a) be the quantum Pl¨ ucker coordinates for Kq−1 m,p. Then the boundary of Kq m,p is the image of the map π : P1 × Kq−1 m,p →Kq m,p defined by (3.2) ³ [A, B], (xβ(b) : β(b) ∈Cq−1 m,p ) ´ 7− →(Axα(a) −Bxα(a−1) : α(a) ∈Cq m,p) , where xα(q) = xα(−1) = 0. For a variety X defined over K, let X(K) be the K-points of X. Theorem 3.1 is a consequence of the following two theorems. Theorem 3.2. Suppose L ⊂Km+p is an m-plane with no vanishing Pl¨ ucker coordinates. Then there exist s1, s2, . . . , sN ∈K so that the intersection (3.3) Zα(a)(K) ∩H(s1, L) ∩H(s2, L) ∩· · · ∩H(s|α(a)|, L) is transverse for any α(a) ∈Cq m,p. If K = R, then we may further choose these numbers s1, s2, . . . , sN so that for any α(a) ∈Cq m,p, all points in the intersection (3.3) are real. Theorem 3.3. Suppose L ⊂Km+p is an m-plane with no vanishing Pl¨ ucker coordinates. If s1, s2, . . . , sk ∈K are distinct, then for any α(a) ∈Cq m,p the inter-section (3.4) Zα(a) ∩H(s1, L) ∩H(s2, L) ∩· · · ∩H(sk, L) is proper in that it has dimension |α(a)| −k. Proof of Theorem 3.1. By Theorem 3.2, there exist s1, s2, . . . , sN ∈K so that the intersection (3.5) Kq m,p(K) ∩H(s1, L) ∩H(s2, L) · · · ∩H(sN, L) is transverse and consists of exactly d(m, p; q) points, and when K = R, these points of intersection are real. Furthermore, we may choose these numbers si so that their (m+p)th powers are distinct. To prove Theorem 3.1, we show that these points all lie in Mq m,p. Thus each point in (3.5) represents a map M : P1 →Grass(p, Km+p) of degree q satisfying M(sm+p i ) ∩si.L ̸= {0} for i = 1, 2, . . . , N. Let π : P1 × Kq−1 m,p →Kq m,p be the map (3.2) whose image is the complement of Mq in Kq. Then π∗Φ(s, L) = X α(a)∈Cq m,p (Axα(a) −Bxα(a−1)) Lα s|α(a)| = (A −Bsm+p) X β(b)∈Cq−1 m,p xβ(b) Lβ s|β(a)| = (A −Bsm+p) Φ′(s, L) , where Φ′(s, L) is the linear form for Kq−1 m,p analogous to Φ(s, L). Let H′(s, L) be the hyperplane given by the linear form Φ′(s, L). Any point in the intersection (3.5) but not in Mq m,p is the image of a point ([A, B], x) in P1 × Kq−1 m,p satisfying π∗Φ(si, L) = (A −Bsm+p i )Φ′(si, L) for each i = 1, 2, . . . , N. As the (m+p)th powers of the si are distinct, such a point can satisfy A−Bsm+p i = 0 for at most one i. Thus x ∈Kq−1 m,p lies in at least N−1 of the hyperplanes H′(si, L). Since N−1 exceeds the dimension N−(m+p) of Kq−1 m,p, there are no such points x ∈Kq−1 m,p, by Theorem 3.3 applied to maps of degree q−1. □ RATIONAL CURVES ON GRASSMANNIANS 17 Proof of Theorem 3.3. For any s1, s2, . . . , sk, the intersection (3.4) has di-mension at least |α(a)| −k. We show it has at most this dimension, if s1, s2, . . . , sk are distinct. Suppose k = |α(a)| + 1 and let z ∈Zα(a). Then zβ(b) = 0 if β(b) ̸≤α(a) and so the form Φ(s, L) defining H(s, L) evaluated at z is X β(b)≤α(a) zβ(b) Lβ s|β(b)| . 
This is a non-zero polynomial in s of degree at most |α(a)| and thus it vanishes for at most |α(a)| distinct values of s. It follows that (3.4) is empty for k > |α(a)|. If k ≤|α(a)| and s1, s2, . . . , sk are distinct, but (3.4) has dimension exceed-ing |α(a)| −k, then we may complete s1, s2, . . . , sk to a set of distinct numbers s1, s2, . . . , s|α(a)|+1 which give a non-empty intersection in (3.4), a contradiction. □ Proof of Theorem 3.2. We prove both parts of the theorem simultaneously, making note of the differences when K = R. We construct the sequence si inductively. The unique element of rank 1 in Cq m,p is α(0), where α is the sequence 1 < 2 < · · · < p−1 < p+1. The quantum Schubert variety Zα(0) is a line in Pl¨ ucker space. Indeed, it is isomorphic to the set of p-planes containing a fixed (p−1)-plane and lying in a fixed (p+1)-plane. By Theorem 3.3 or direct observation, Zα(0) ∩H(s, L) is then a single point, for any non-zero s. When K = R, this point is real. Let s1 ∈K× be arbitrary. Suppose s1, s2, . . . , sk ∈K are distinct points with the property that for any β(b) with |β(b)| = k, Zβ(b) ∩H(s1, L) ∩H(s2, L) ∩· · · ∩H(s|β(b)|, L) is transverse. When K = R, we suppose further that all points of intersection are real. Let α(a) be an index with |α(a)| = k + 1 and consider the 1-parameter family Z(s) of schemes defined by Zα(a) ∩H(s, L), for s ∈K×. If we restrict the form Φ(s, L) to z ∈Zα(a), then we obtain X β(b)≤α(a) zβ(b) Lβ s|β(b)| , a polynomial in s with leading term zα(a) Lα s|α(a)|. Since the Pl¨ ucker coordinate Lα is non-zero, Z(∞) ⊂Zα(a) is defined by zα(a) = 0, and so Z(∞) equals Zα(a) ∩Hα(a) = [ β(b)⋖α(a) Zβ(b) , by Proposition 2.5 (ii). Claim: The cycle Z(∞) ∩H(s1, L) ∩H(s2, L) ∩· · · ∩H(sk, L) is free of multiplicities. If not, then there are two components Zβ(b) and Zγ(c) of Z(∞) such that Zβ(b) ∩Zγ(c) ∩H(s1, L) ∩H(s2, L) ∩· · · ∩H(sk, L) 18 FRANK SOTTILE is non-empty. But this contradicts Theorem 3.3, as Zβ(b) ∩Zγ(c) = Zδ(d), where δ(d) is the greatest lower bound of β(b) and γ(c) in Cq m,p, and so dim Zδ(d) < dim Zβ(b) = k. Because the intersection of Z(∞), the fibre of Z at infinity, with the cycle H(s1, L) ∩H(s2, L) ∩· · · ∩H(sk, L) is zero dimensional and free of multiplicities, it is transverse, and so the general fibre of Z meets H(s1, L)∩H(s2, L)∩· · ·∩H(sk, L) transversally. Thus there is a non-empty Zariski open subset Oα(a) of A1 K consisting of points s for which Zα(a) ∩H(s1, L) ∩H(s2, L) ∩· · · ∩H(sk, L) ∩H(s, L) is transverse. Choose sk+1 to be any point common to all Oα(a) for |α(a)| = k + 1. When K = R, the claim implies there is a real number Nα(a) > 0 such that if s > Nα(a), then Z(s) ∩H(s1, L) ∩· · · ∩H(sk, L) is transverse with all points of intersection real. Set Nk+1 := max{Nα(a) : |α(a)| = k + 1} and let sk+1 be any real number satisfying sk+1 > Nk+1. □ Remark 3.4. While these results rely upon work from systems theory, the result when K = R unfortunately does not give any insight into the dynamic pole placement problem: In the dynamic pole placement problem, the planes L(s) lie on a rational curve L(s) of degree mp + q(m + p) −q while the planes si.L of Theorem 3.2 lie on the rational curve s.L, which has degree mp. Thus there is overlap only when q = 0, which is the static pole placement problem. Remark 3.5. Theorem 3.2 proves reality and transversality for the enumera-tive problem of Question 1. 
There are more general enumerative problems involving rational curves on a Grassmannian obtained by replacing the Schubert condition M(s) ∩L ̸= {0} with more general Schubert conditions. It is not known, but is ex-pected, that the transversality and reality properties established in Theorem 3.2 for the enumerative problem of Question 1 hold also for these more general enumerative problems. Remark 3.6. The argument given at the end of the Proof of Theorem 3.1, that the intersection (3.5) contains no points in the boundary and hence lies in Mq m,p, may be generalized to show that when a = q, the intersection (3.3) similarly lies in Mq m,p. 4. Equations for the Quantum Grassmannian In Section 3, we solved the enumerative problem of Question 1 by arguing directly from the equations describing the conditions (1.1). This is an unusual feature of that enumerative problem: despite the fact that algebraic geometry is ostensibly concerned with solutions to polynomial equations, enumerative geometric problems are not typically solved in this manner. What is more unusual is that this enumerative problems admits a second solution also based upon equations, in this case equations for the quantum Grassmannian. We first argue that the number of solutions to the enumerative problem is the degree deg Kq m,p of the quantum Grassmannian, and then use the form of a Gr¨ obner basis for the ideal Iq m,p = I(Kq m,p) of the quantum Grassmannian to give another RATIONAL CURVES ON GRASSMANNIANS 19 proof of Theorem 2.6, that deg Kq m,p is the number of maximal chains in the poset Cq m,p of quantum Pl¨ ucker coordinates. 4.1. The enumerative problem of Question 1. Given an m-plane L ⊂ Km+p and a point s ∈P1, the set of degree q maps M ∈Mq m,p which satisfy (4.1) M(s) ∩L ̸= {0} is a hyperplane section of Mq m,p in its Pl¨ ucker embedding. This was shown both in Section 2.3 and, for special versions of (4.1), in Section 3. Thus the enumerative problem of Question 1 asks for the number of points common to Mq m,p and to N := q(m + p) + mp( = dim Mq m,p) hyperplanes. Hence if there are finitely many solutions, their number is bounded by deg Kq m,p and it equals this degree if there are no points in the boundary Kq m,p −Mq m,p common to all the hyperplanes. Given a point s ∈P1, the evaluation map evs : Mq m,p →Grass(p, Km+p) asso-ciates a map M to the p-plane M(s). The evaluation map extends to the quantum Grassmannian. One way to see this is that the evaluation map is defined on the Quot scheme Qq m,p and it factors through the Pl¨ ucker map Qq m,p →Kq m,p . Concretely, points M of Kq m,p are represented (possibly non-uniquely) by matrices of homogeneous forms whose minors are forms of degree q, and a point is on the boundary if the minors are not relatively prime. The (classical) Pl¨ ucker coordinates of evs(M) ∈Grass(p, Km+p) are given by first dividing each minor by the common polynomial factor, and then evaluating at the point s ∈P1. Since Grass(p, Km+p) is a homogeneous space, Kleiman’s Properness Theo-rem [32, Theorem 2(i)] shows that for distinct points s1, s2, . . . , sN ∈P1 and general m-planes L1, L2, . . . , LN ∈Km+p, the collection of hyperplanes given by M(si) ∩Li ̸= {0} meet the boundary properly. Since the boundary has dimension N −m −p + 1, there are no points common to the boundary and these hyperplanes. Again by Kleiman’s Properness Theorem, there are finitely many points M in Kq m,p (and hence in Mq m,p) common to these hyperplanes, as N is the dimension of Kq m,p. 
Since this set is a particular complementary linear section of Kq m,p, its number of points (possibly counted with multiplicity) is deg Kq m,p. When K is algebraically closed of characteristic zero, Kleiman’s Transversality Theorem [32, Theorem 2(ii)] implies that the solutions appear without multiplicity, and so deg Kq m,p solves the enumerative problem of Question 1. 4.2. The degree of Kq m,p via Gr¨ obner bases. For basics on Gr¨ obner bases, we recommend either or , whose Chapter 11 has a description of the classical q = 0 version of the results discussed here. Let K[Cq m,p] be the ring generated by the quantum Pl¨ ucker coordinates zα(a) for α(a) ∈Cq m,p, the coordinate ring of the Pl¨ ucker space P(Vp Km+p ⊗Kq+1). Let ≺be the degree reverse lexicographic term order on ring K[Cq m,p] induced by an ordering of the variables zα(a) corresponding to any (fixed) linear extension of the poset Cq m,p. The poset Cq m,p is in fact a distributive lattice, with α(a) ∧β(b) the meet (greatest lower bound) and α(a) ∨β(b) the join (least upper bound) of the indices α(a) and β(b). Theorem 4.1 (). The reduced Gr¨ obner basis of the Pl¨ ucker ideal Iq m,p of the quantum Grassmannian Kq m,p consists of quadratic polynomials in K[Cq p,m] 20 FRANK SOTTILE which are indexed by pairs of incomparable variables γ(c), δ(d) in the poset Cq p,m, S(γ(c), δ(d)) = zγ(c) · zδ(d) −zγ(c)∨δ(d) · zγ(c)∧δ(d) + lower terms in ≺, and all lower terms λzα(a)zβ(b) of S(γ(c), δ(d)) satisfy α(a) < γ(c) ∧δ(d) and γ(c) ∨ δ(d) < β(b). By Theorem 4.1, the initial ideal in≺(Iq m,p) of the Pl¨ ucker ideal is generated by all monomials zα(a)zβ(b) with α(a), β(b) ∈Cq m,p incomparable. We write this initial ideal as an intersection of prime ideals. For this, let Qq m,p be the set of (saturated) chains in the poset Cq m,p. Lemma 4.2. Let Iq m,p be the Pl¨ ucker ideal. Then in≺(Iq m,p) = \ q∈Qq m,p ⟨zδ(d) | δ(d) ̸∈q⟩. Proof. If zα(a)zβ(b) is a generator of in≺(Iq m,p), then α(a) and β(b) are incom-parable in the poset Cq m,p. Thus if q ∈Qq m,p is a saturated chain, at most one of α(a) or β(b) lies in the chain q, and so zα(a)zβ(b) lies in the ideal ⟨zδ(d) | δ(d) ̸∈q⟩. Suppose now that z is a monomial not in the initial ideal in≺(Iq m,p). Then the variables appearing in z have indices which are comparable in the poset Cq m,p. Thus we may write z = zα(a) · zβ(b) · · · zγ(c) with α(a) ≤β(b) ≤· · · ≤γ(c). There is some chain q ∈Qq m,p containing the indices α(a), β(b), . . . , γ(c) and so the monomial z does not lie in the ideal ⟨zδ(d) | δ(d) ̸∈q⟩. This proves the equality of the two monomial ideals. □ Each ideal ⟨zδ(d) | δ(d) ̸∈q⟩defines the coordinate subspace of Pl¨ ucker space spanned by the coordinates zα(a) with α(a) ∈q, which is isomorphic to Pq(m+p)+mp, as every maximal chain q of Cq m,p has length q(m + p) + mp + 1. Thus the zero scheme of in≺(Iq m,p) is the union of these coordinate subspaces, and so it has degree equal to their number. Alternative Proof of Theorem 2.6. The degree of the quantum Grass-mannian is the degree of its ideal Iq m,p. By Macaulay’s Theorem (see also [18, §1.10]), this is the degree of the initial ideal, deg(in≺(Iq m,p)), which is equal to the number of chains in Qq m,p, by Lemma 4.2. □ Remarks 4.3. (1) In , reduced Gr¨ obner bases for the quantum Schubert varieties Zα(a) which are restrictions of the Gr¨ obner basis of Theorem 4.1 are also con-structed, and a consequence is that the definition (2.10) is in fact ideal-theoretic: I(Zα(a)) = Iq m,p + ⟨zβ(b) | β(b) ̸≤α(a)⟩. 
The form of these Gr¨ obner bases also significantly strengthens Proposi-tion 2.5 (ii) to the level of homogeneous ideals. (2) The reduced Gr¨ obner basis for the Pl¨ ucker ideal of the classical Grass-mannian (q = 0) may be constructed as follows [27, 59]: First a Gr¨ obner basis consisting of linearly independent quadratic polynomials, one for each incomparable pair, is constructed using invariant theory. Then this RATIONAL CURVES ON GRASSMANNIANS 21 basis is reduced to obtain the desired reduced Gr¨ obner basis. In con-trast to that approach, the reduced Gr¨ obner basis of Theorem 4.1 was constructed explicitly using a double induction on the poset Cq m,p. For α(a) ≥β(b) in Cq m,p, define the skew quantum Schubert variety Zα(a)/β(b) := {(zγ(c)) ∈Kq m,p : zγ(c) = 0 unless α(a) ≥γ(c) ≥β(b)} . A first step is to show that if α(a) = γ(c) ∨δ(d), then zγ(c) · zδ(d) −zγ(c)∨δ(d) · zγ(c)∧δ(d) ∈I(Zα(a)/β(b)) , by downward induction on β(b). When β(b) ∈Cq m,p is minimal, Zα(a)/β(b) = Zα(a). Then the forms S(γ(c), δ(d)) of Theorem 4.1 are constructed by increasing induction on α(a). (3) An important part of was to study the rational parameterization of Kpn m,p given by m+p by p matrices whose entries are generic polynomials of degree n, and also by an intermediate variety, the Grassmannian of p-planes in K(pn+1)(m+p). This ‘long Grassmannian’ was used by Byrnes to obtain a different compactification of Mq m,p than Kq m,p. It was also used to prove Proposition 2.5(ii) [40, 41], and the indices of its Schubert varieties appeared implicitly in the indexing scheme of Section 2.5. Lastly, the classical (ideal-theoretic) version of Proposition 2.5(ii) for Schubert varieties in the long Grassmannian was used in the inductive steps of item (2) above. (4) We expect this approach and these results to generalize to other flag man-ifolds, giving an analog of standard monomial theory for spaces of rational curves in all flag manifolds. 5. Quantum Cohomology and the Formula of Vafa and Intriligator We describe some of the standard story of the enumerative problem of Ques-tion 1.We first briefly review some history of the formula of Vafa and Intriligator. Next, we visit the classical cohomology ring of the Grassmannian and its quantum deformation, and then give the formula of Vafa and Intriligator. We then show how this formula of Vafa and Intriligator agrees with the formula (1.2) of Ravi, Rosen-thal, and Wang. We next give an alternative way to view the quantum cohomology ring of the Grassmannian, and discusses how this same ring arose in two different contexts in representation theory. This survey concludes with some open problems concerning quantum Littlewood-Richardson coefficients. Inspired by Donaldson’s invariants of 4-manifold , Gromov proposed that topological invariants of moduli spaces of pseudo-holomorphic curves in a symplectic manifold X would give invariants of the symplectic structure of X. Fol-lowing ideas of Witten , Vafa proposed so-called quantum multiplications in the cohomology rings of symplectic manifolds with structure constants certain correlation functions, and conjectured remarkable residue formulae for these cor-relation functions when X is a Grassmannian. This was made more precise by Intriligator . 
Ruan (see ) was perhaps the first to link this work in theo-retical physics to the work of Gromov, realizing that Witten’s correlation functions were in fact Gromov’s invariants, and hence the formula of Vafa and Intriligator computes intersection numbers of curves of all genera on Grassmannians. Siebert 22 FRANK SOTTILE and Tian generalized the program of Vafa and Intriligator from the Grass-mannian to certain Fano manifolds—in particular, they proved the formula of Vafa and Intriligator and constructed the (small) quantum cohomology rings of these manifolds. Previously (and with different methods), Bertram, Daskalopoulos, and Wentworth had proven this formula for genus 1 invariants of high degree curves in Grassmannians of 2-planes, and Bertram later developed a quantum Schubert calculus which enabled the computation of intersection numbers involving arbitrary Schubert conditions. 5.1. The cohomology ring of the Grassmannian. The cohomology ring of the complex Grassmannian Grass(p, Cm+p) has a standard presentation (5.1) H∗(Grass(p, Cm+p)) ∼ − →C[c1, c2, . . . , cp]/⟨hm+1, hm+2, . . . , hm+p⟩, where deg ci = 2i and h1, h2, . . . , hm+p are defined recursively in terms of the ci as follows (5.2) hj −c1hj−1 + · · · + (−1)j−1cj−1h1 + (−1)jcj = 0 , with ci = 0 for i > p. The isomorphism is given by associating ci to the ith Chern class of the dual S∗of the tautological rank p subbundle S over the Grassmannian. Then hi is the ith Chern class of the rank m quotient bundle Q, and these classes vanish for i > m. The relation (5.2) between these classes ci and hi is succinctly expressed via the splitting principle, 1 = c(Cm+p) = c(S∗)c(Q∗) , where Cm+p is the trivial bundle and c(·) is the total Chern class. (Here, c(Q∗) = 1 −h1 + h2 −· · · .) Because the cohomology ring is a complete intersection and the hj are homoge-neous of degree 2j, it is Gorenstein with socle in dimension 2mp = P j(deg hm+j − deg cj). A generator of the socle is the image of cm p , and the degree map (used to compute intersection numbers) is simply the coefficient of cm p in an element of this quotient ring. Thus, given some classes ξ1, ξ2, . . . , ξl in cohomology which are Poincar´ e dual to cycles X1, X2, . . . , Xl in general position, the coefficient of cm p in the product ξ1 · ξ2 · · · ξl is the number of points in the intersection X1 ∩X2 ∩· · · ∩Xl , when there are finitely many such points. What is less known is that the degree map may be computed using the local residue associated to the map H := ((−1)p−1hm+1, (−1)p−2hm+2, . . . , hm+p): Cp → Cp. (See [51, §4] for details.) This residue is ResH(F) = 1 (2πi)p Z Γǫ F (−1)p−1hm+1(−1)p−2hm+2 · · · hm+p dc1 · dc2 · · · dcp , for F ∈C[c1, c2, . . . , cp]. Here Γǫ is a smooth canonically oriented cycle in the region where no component hm+i of H vanishes. Standard properties of residues [23, §5] imply that the residue vanishes on the ideal of (5.1), and so gives a well-defined map on the cohomology ring. Furthermore, when F is homogeneous, the residue vanishes unless deg F = 2mp, for otherwise the form is exact. Thus the residue is proportional to the degree map, and the calculations we do below show the constant of proportionality is (−1)( p 2). RATIONAL CURVES ON GRASSMANNIANS 23 The presentation (5.1) has another form. Let W = 1 m+p+1Pm+p+1, where Pm+p+1 is the (m+p+1)th Newton power sum symmetric polynomial in the vari-ables x1, . . . , xp. If we express Pm+p+1 as a polynomial in the elementary symmetric polynomials c1, c2, . . . 
, cp, then we have (see below) (5.3) (−1)1+jhm+p+1−j = ∂W ∂cj , where hi is the ith complete homogeneous symmetric polynomial in the variables xj (these satisfy (5.2) when the ci are elementary symmetric polynomials). Thus the presentation becomes H∗(Grass(p, Cm+p)) ∼ − →C[c1, c2, . . . , cp]/⟨dW = 0⟩. We derive (5.3), working in the ring Λ of symmetric functions in the indetermi-nates x1, x2, . . . [57, Sect. 7] [36, I.2] . To obtain this formula for polynomials, specialize xi to 0 for i > p. First, we have the fundamental relations 1 =  X r≥0 hr(−t)r  ·  X r≥0 ertr  , (5.4) X r>0 pr(−t)r−1 = d dt log  X r≥0 ertr  , (5.5) where ei, hi, and pi are, respectively, the elementary, complete homogeneous, and power sum symmetric functions of degree i. (Note that (5.4) gives (5.2).) Differen-tiating (5.5) with respect to ej gives X r>0 ∂pr ∂ej (−t)r−1 = d dt tj P r≥0 ertr = d dttj X r≥0 hr(−t)r . Equating coefficients of tm+p gives (−1)m+p ∂pm+p+1 ∂ej = (−1)m+p+1−j(m + p + 1)hm+p+1−j , from which (5.3) follows. 5.2. Quantum cohomology and the formula of Vafa and Intriligator. The quantum cohomology ring is a perturbation (depending on a K¨ ahler form) of the classical cohomology ring whose structure encodes the genus zero Gromov-Witten invariants. For the Grassmannian, Vafa and Intriligator began with the perturbation of W QW := W + (−1)pβc1 , where β is a complex number associated to the perturbing K¨ ahler form. For our enumerative problem, β = 1. They then proposed the following presentation for the quantum cohomology ring QH∗(Grass(p, Cm+p)) = C[c1, c2, . . . , cp]/⟨dQW = 0⟩ which is (5.6) C[c1, c2, . . . , cp]/⟨(−1)p−1hm+1, (−1)p−2hm+2, . . . , hm+p + (−1)pβ⟩. 24 FRANK SOTTILE They also proposed the following formula. Let X1, X2, . . . , Xl be special Schu-bert cycles in the Grassmannian which are in general position. Suppose β = 1. For a genus g ≥0, set ⟨X1, X2, . . . , Xl⟩g := 0 unless the sum of the codimensions of the Xj is equal to d(m + p) + mp(1 −g), for some non-negative integer d. When there is such an integer d, let the Gromov-Witten invariant ⟨X1, X2, . . . , Xl⟩g be the number of maps f : (Σ, s1, s2, . . . , sl) − →Grass(p, Cm+p) satisfying f(si) ∈Xi. Here Σ is a fixed genus g curve, s1, s2, . . . , sl are fixed, but general, points of Σ, and f∗[Σ] = d · c1. Determining when this definition is well-founded and providing a satisfactory alternative when it is not is an important and subtle story which we do not relate. When β ̸= 1, the definition involves pseudo-holomorphic curves, and we omit it. Suppose we have special Schubert classes ci1, ci2, . . . , cil with cij Poincar´ e dual to Xj. Then the formula of Vafa and Intriligator for ⟨X1, X2, . . . , Xl⟩g is (5.7) (−1)( p 2)(g−1) X dQW (c1,c2,...,cp)=0 det µ∂2QW ∂ci∂cj ¶g−1 · ci1 · ci2 · · · cil . One remarkable feature of this formula involves the determinant J = det ³ ∂2QW ∂ci∂cj ´ . The formula implies that the genus g Gromov-Witten invariant of a monomial ci equals the genus g −1 Gromov-Witten invariant of ci/J (up to a sign). We relate this to the classical intersection formula when g = 0. Since dQW is the vector ((−1)p−1hm+1, . . . , −hm+p−1, hm+p + (−1)pβ), the determinant J in the formula (5.7) is also the Jacobian of the map H, and so the summand of (5.7) becomes X c∈H−1(y) F(c) J , where y = (0, . . . , 0, (−1)p+1β) and F(c) = ci1 · ci2 · · · cil. Let resy(F) denote this number, which is a trace but also a residue as y is a regular value of the map H. 
A further property of the residue is that resy(F) extends holomorphically to a neighborhood of 0, and lim y→0 resy(F) = ResH(F) . This shows rather explicitly how this formula of Vafa and Intriligator is a deforma-tion of the classical intersection formula. 5.3. Relation between the formulae of Vafa and Intriligator and of Ravi, Rosenthal, and Wang. We relate the formula (5.7) for genus 0 curves to the formula (1.2) of Ravi, Rosenthal, and Wang. For α ∈ ¡[n] p ¢ , let Ωα = Zα(0), a Schubert subvariety of Grass(p, Cm+p). Theorem 5.1 (). Let α(a) ∈Cq m,p. Then deg Zα(a) = (−1)( p 2) X dQW =0 σα∨· c|α(a)| 1 J , where σα∨is the cohomology class Poincar´ e dual to the fundamental cycle of Ωα. RATIONAL CURVES ON GRASSMANNIANS 25 Proof. Since the Gromov-Witten invariants of genus zero curves on the Grassman-nian may be computed in the quantum cohomology ring of the Grassmannian, the obvious linear extension of the formula of Vafa and Intriligator (5.7) for genus zero curves to arbitrary cycles Xi is valid. Thus the right hand side above computes the Gromov-Witten invariant (5.8) ⟨Ωα, X1, X2, . . . , X|α(a)|⟩0 , where X1, X2, . . . , X|α(a)| are special Schubert varieties in general position, each dual to c1. Since the cohomological degree of the class σα∨·c|α(a)| 1 is 2(mp −|α|) + 2|α(a)| = 2mp −2|α| + 2(a(m + p) + |α|) = 2a(m + p) + 2mp , this is an invariant of degree a curves. We first express this Gromov-Witten invariant as the number of points in an in-tersection. Given a point s ∈P1, the evaluation map evs : Ma m,p →Grass(P, Cm+p) associates a curve M to the p-plane M(s). Each cycle Xi has the form ΩLi = {H ∈Grass(p, Cm+p) | H ∩Li ̸= {0}} , where L1, . . . , L|α(a)| are m-planes in Cm+p in general position. Thus (5.8) counts the number of points in the intersection (5.9) ev−1 s0 Ωα \ ev−1 s1 ΩL1 \ · · · \ ev−1 s|α(a)|ΩL|α(a)| , where s0, s1, . . . , s|α(a)| are general points in P1. Observe that we may choose s0 to be the point ∞at infinity in P1. Then, in the quantum Pl¨ ucker coordinates (zβ(b) | β(b) ∈Ca m,p) for a point z ∈Ma m,p, ev∞: z 7− →(zβ(a) | β ∈ ¡[m+p] p ¢ ) . Since, in the Pl¨ ucker coordinates (yβ | β ∈ ¡[m+p] p ¢ )) for Grass(p, Cm+p) we have the analog of (2.10) for Ωα, Ωα := {y ∈Grass(p, Cm+p | yβ = 0 if β ̸≤α} , we see that ev−1 ∞(Ωα) = Zα(a). Finally, observe that for an m-plane L ⊂Cm+p and a point s ∈P1, ev−1 s ΩL = {M ∈Ma m,p | M(s) ∩L ̸= {0}} which is a hyperplane section of Ma m,p. Thus the number of points in the intersec-tion (5.9) is bounded by the degree of Zα(a) and it equals this degree if all points of intersection lie in Zα(a) ∩Ma m,p. But this occurs for general m-planes Li and points si, by Remark 3.6. □ This proof is unsatisfactory in that both sides of the equation have a simple algebraic-combinatorial interpretation, yet we argued using the definition of the Gromov-Witten invariants, rather than something more elementary. We now give a more direct proof, following . For a sequence I : 0 < i1 < i2 < · · · < ip of integers, let SI be the Schur symmetric polynomial in x1, . . . , xp associated to the partition (ip −p, . . . , i2 −2, i1 −1), which is also a polynomial in the elementary symmetric polynomials. 26 FRANK SOTTILE Theorem 5.2. For α(a) ∈Cq m,p, define δ(α(a)) := (−1)( p 2) X dQW =0 c|α(a)| 1 · Sα∨ J . Then the function δ(α(a)) satisfies the recursion (2.11). This will prove the equality of the two formulae, since under the map (5.1) to cohomology we have Sα 7− →σα for α ∈ ¡[m+p] p ¢ . Proof. 
We set n := m+p and change coordinates, working in the ring of symmetric polynomials C[x1, x2, . . . , xp]Sp in x1, x2, . . . , xp. If we let each xi have cohomo-logical degree 2, then this ring is isomorphic to the ring C[c1, c2, . . . , cp] with the isomorphism given by ci = ei(x1, x2, . . . , xp). Here ei(x1, x2, . . . , xp) is the ith elementary symmetric polynomial in x1, x2, . . . , xp. This theorem is a consequence of Lemma 2.7 and the following lemma. Let y1, y2, . . . , yn be the nth roots of (−1)p+1. Lemma 5.3. For K = k1, k2, . . . , kp, define the function D(K) to be (5.10) (−1)( p 2) np X (x1 · · · xp)(x1 + · · · + xp) P j kj−j det(xn−ki j ) det(xp−j i ) , the sum over all I ∈ ¡[n] p ¢ , where xj = yij. Then (i) For sequences of integers k1 ≤k2 ≤· · · ≤kp with P j kj −j ≥0 and kp < k1 + n, the function D(k1, . . . , kp) satisfies the recursion, ini-tial condition, and boundary conditions of Lemma 2.7. In particular, D(J(α(a))) = deg Zα(a). (ii) For α(a) ∈Cq m,p, δ(α(a)) = D(J(α(a))). Proof of Lemma 5.3(i). For a sequence K = (k1, k2, . . . , kp) of integers, let f(K) := det(xn−kj i ). This determinant is the sum of terms f(π; K) := sgn(π)xn−k1 π(1) · · · xn−kp π(p) over all permutations π of {1, 2, . . . , p}, where sgn(π) is the sign of the permutation π. Multiplying this term by x1 + · · · + xp gives (x1 + · · · + xp) · f(π; K) = p X a=1 f(π; k1, . . . , ka−1, . . . , kp) . Summing over all permutations π gives the Pieri formula (x1 + · · · + xp) · f(K) = p X a=1 f(k1, . . . , ka −1, . . . , kp) , and thus D(K) satisfies the recursion (2.14) of Lemma 2.7. Since D(K) is antisym-metric in its arguments, it satisfies the boundary condition (2.16). If kp = k1 + n, then the first and last rows of the matrix (xn−ki j ) are the scalar multiples (−1)p+1 of each other, and so the function D satisfies the boundary condition (2.18). To show that D satisfies the initial condition (2.15) and the remaining boundary condition (2.17), consider the values of D(k1, . . . , kp) when k1 < · · · < kp, 0 = P j(kj −j), and kp < k1 + n. For such sequences K, we show that (5.11) D(K) = ½ 1 if K = (1, 2, . . . , p) , 0 otherwise . RATIONAL CURVES ON GRASSMANNIANS 27 The first case of this is the initial condition (2.15). We deduce the boundary condition (2.17) from the second case of (5.11). Let J = j1 < j2 < · · · < jp be sequence of integers satisfying P i ji −i ≥0 with j1 = 0 and jp < n = j1 + n. Applying the recursion (2.14) P i(ji −i) times to D(J), and the boundary conditions (2.16) and (2.18) shows that D(J) is a sum of terms D(K) for K satisfying the conditions for (5.11), but with k1 ≤0. Thus every such term vanishes, and so D(J) = 0. We prove (5.11), which will complete the proof of Lemma 5.3 (i). For the sequences K of (5.11), we have D(K) = (−1)( p 2) np X (x1 · · · xp) det(xn−ki j ) det(xp−j i ) = (−1)( p 2) np X det(xn−ki j ) det(xp+1−j i ) , where the sum is over all (ordered) p-tuples (x1, . . . , xp) of the nth roots (y1, . . . , yn) of (−1)p+1. We apply the Cauchy-Binet formula to this sum of products of deter-minants to obtain D(K) = (−1)( p 2) np det         yn−k1 1 yn−k1 2 · · · yn−k1 n . . . . . . ... . . . yn−kp 1 yn−kp 2 · · · yn−kp n   ·      yp 1 · · · y1 yp 2 · · · y2 . . . ... . . . yp n · · · yn          . Expanding this product, we obtain (5.12) D(K) = (−1)( p 2) np det    P(n + p −k1) · · · P(n + 1 −k1) . . . ... . . . P(n + p −kp) · · · P(n + 1 −kp)   , where P(b) is the sum of the bth powers of the yi. 
Since the yi are the nth roots of (−1)p+1, we have P(b) = ½ (−1)(p+1)a · n if b = an , 0 otherwise . The ith row of the determinant (5.12) has at most one non-zero entry, in the column j where j + ki ≡p + 1 modulo n. Suppose that K satisfies the conditions of (5.11) and the determinant of (5.12) does not vanish. Then each component of K is congruent to one of {1, 2, . . . , p} modulo n. Since each congruence must occur for the determinant to be non-zero (if you like, since no two components of K are congruent modulo n), we have that K ≡{1, 2, . . . , p} modulo n. In particular, no component of K vanishes. Let r be the index such that kr < 0 < kr+1. Since kp < k1 + n and P j(kj −j), we must have k1 ≤1, and also −n < k1 < kp < n. In fact the condition that K ≡{1, 2, . . . , p} modulo n implies that k1, . . . , kr ≤−m and 0 < kr+1, . . . , kp ≤p. This implies k1 ≤−m + 1 −r and p−r ≤kp, and hence p −r < kp < k1 + n ≤p + 1 −r, which implies that K = (−m + 1 −r, −m + 2 − r, . . . , −m, 1, 2, . . . , p −r) and so P j kj −j = −nr. Since this sum must equal 0, we see that K must be (1, 2, . . . , p). When K = (1, 2, . . . , p), the matrix of power sums is antidiagonal with entries (−1)p+1n, and so the determinant is (−1)p2+p+( p 2)np = (−1)( p 2)np, which shows that D(1, 2, . . . , p) = 1, as claimed. This completes the proof of Lemma 5.3(i). □ 28 FRANK SOTTILE Proof of Lemma 5.3(ii). We show that for α(a) ∈Cq m,p, we have the equality δ(α(a)) = D(J(α(a))). Since QW(x1, . . . , xp) = p X j=1 xn+1 j n + 1 + (−1)pxj , the set of solutions for dQW = 0 are just the set of p-tuples of nth roots of (−1)p+1. The Schur polynomial Sα∨is equal to the quotient of alternants det(xn−αi j ) det(xp−j i ) . The denominator is the Vandermonde determinant ∆:= Q i<j(xi −xj). The Jacobian J is the determinant of the Hessian of QW with respect to the variables ci, which we compute using the multivariate chain rule µ ∂2QW ∂xi∂xj ¶ = µ∂2QW ∂ci∂cj ¶ · µ ∂ci ∂xj ¶2 + ÃX k ∂QW ∂ck ∂2ck ∂xi∂xj ! . Since we evaluate this where dQW = 0, we obtain det µ ∂2QW ∂xi∂xj ¶¯ ¯ ¯ ¯ dQW =0 = det µ∂2QW ∂ci∂cj ¶ · · det µ ∂ci ∂xj ¶¸2 . The Hessian of QW with respect to the variables xi is the diagonal matrix with entry nxn−1 i in position (i, i) and by Lemma 5.4 below, det(∂ci/∂xj) = ∆. Since δ(α(a)) is the sum over p-tuples (x1, . . . , xp) of nth roots of (−1)p+1, we compute the value of the Jacobian J = det(∂2QW/∂ci∂cj) at the p-tuple (x1, . . . , xp) to be J = np(x1 · · · xp)n−1 ∆2 = np(x1 · · · xp)n (x1 · · · xp)∆2 = np (x1 · · · xp)∆2 , as (x1 · · · xp)n = (−1)p(p+1) = 1. Since each summand involves the Vandermonde, we may restrict the sum to be over the set I of all p-tuples of distinct roots, which we will always take to be in an order compatible with a fixed ordering of the nth roots y1, . . . , yn of (−1)p+1. We may put these calculations together and obtain the following formula for δ(α(a)) (5.13) (−1)( p 2) np X I (x1 · · · xp)(x1 + · · · + xp)|α(a)| det(xn−αi j ) det(xp−j i ) . Let (k1, . . . , kp) = J(α(a)). If we write a = pl + r with 0 ≤r < p, then this is the sequence (ln + αr+1, . . . , ln + αp, (l+1)n + α1, . . . , (l+1)n + αr) . The vector (xn−kj i ) is ((−1)(p+1)lxn−αr+1 i , . . . , (−1)(p+1)lxn−αp i , (−1)(p+1)(l+1)xn−α1 i , . . . , (−1)(p+1)(l+1)xn−αr i ) . Thus we see that det(xn−kj i ) = (−1)(p+1)a+r(p−r) det(xn−αj i ) = det(xn−αj i ) , since, as in the calculation at the end of Section 2, r(p −r) ≡pa + a modulo 2. 
RATIONAL CURVES ON GRASSMANNIANS 29 Since |α(a)| = P j kj −j, we may substitute the last formula into (5.13) and obtain δ(α(a)) = (−1)( p 2) np X I (x1 · · · xp)(x1 + · · · + xp) P j kj−j det(xn−ki j ) det(xp−j i ) = D(J(α(a))) , as claimed. We complete the proof of Lemma 5.3(ii) and hence of Theorem 5.2 with the calculation below. Lemma 5.4. det µ ∂ci ∂xj ¶ = Y i<j (xi −xj) = ∆. Proof. Let Fp(x1, . . . , xp) be this determinant. Since ∂ci ∂xj = ci−1(x1, . . . , b xj, . . . , xp) , where b xj indicates that xj is omitted, we seek the determinant of the matrix whose (i, j)th entry is ci−1(x1, . . . , b xj, . . . , xp). If we subtract the first column from each the rest, we obtain a matrix in block form µ 1 0 ∗ A ¶ , where the entries of A in position (i, j) (note the shift from the original matrix) are ci(x1, . . . , [ xj+1, . . . , xp) −ci(c x1, . . . , xp) = (x1 −xj+1)ci−1(x2, . . . , [ xj+1, . . . , xp) . Dividing the common factors of (x1 −xj+1) from the columns of A gives the matrix with entries ci−1(x2, . . . , [ xj+1, . . . , xp), and so we have the recursive formula Fp(x1, . . . , xp) = p Y j=2 (x1 −xj) · Fp−1(x2, . . . , xp) . Since F1(xp) = 1, this completes the Lemma. □ 5.4. The quantum cohomology ring of the Grassmannian. We discuss an alternative view of the quantum cohomology ring of the Grassmannian, mention how this ring arose in representation theory, and give some open problems. The presentation (5.6) of QH∗(Grass(p, Cm+p)) is not what one ordinarily sees in algebraic geometry, but rather an integral form with a parameter q Z[q][c1, c2, . . . , cp]/⟨hm+1, . . . , hm+p−1, hm+p + (−1)pq⟩. Then the genus zero Gromov-Witten invariant ⟨X1, X2, . . . , Xl⟩0 of cycles repre-sented by classes ξ1, ξ2, . . . , ξl ∈Z[c1, c2, . . . , cp] is the coefficient of cm p · qd in the product ξ1 · ξ2 · · · ξl. Here d is the degree of the curves this invariant enumerates. In this presentation, the variable q (which is the variable β of Section 5.2) keeps track of the degrees of curves. This ring is graded, if q has cohomological degree 2(m + p). The cohomology of the Grassmannian has a basis of Schubert classes, σα, given by the Giambelli formula (Jacobi-Trudi for combinatorists). (5.14) σα = det(hαi−j)1≤i,j≤p . 30 FRANK SOTTILE The class σα is Poincar´ e dual to the Schubert variety Ωα∨. The quantum coho-mology ring of Grass(p, Cm+p) may be viewed additively as polynomials in q with coefficients in the cohomology of the Grassmannian, H∗(Grass(p, Cm+p))[q], but with a deformed product, ∗, defined by (5.15) σα ∗· · · ∗σβ = X γ,d≥0 qd ⟨Ωα∨, . . . , Ωβ∨, Ωγ⟩d 0 σγ , where ⟨· · · ⟩d 0 is the genus 0 Gromov-Witten invariant for degree d curves. (The degree d is determined by the cohomological degrees of the Schubert classes.) Bertram studied this ring, and showed that the Giambelli formula (5.14) remains valid with the quantum multiplication. He also established a Pieri formula σα ∗ha = X σβ + q X σγ , the sum over all β, γ with |β| = |α| + a and |γ| = |α| + a −m −p, and satisfying α1 ≤β1 < α2 ≤β2 < · · · < αp ≤βp , γ1 ≤α1 −1 < γ2 ≤α2 −1 < · · · < γp ≤αp −1 . Like the classical Giambelli and Pieri formulae , these determine the ring struc-ture of quantum cohomology with respect to the basis of Schubert classes. In particular, the structure constants N γ α β(m, p) defined by the formula (5.16) σα ∗σβ = X γ,d qdN γ α β(m, p)σγ are completely determined. (Here, the summation is over γ, d with |γ|+d(m+p) = |α| + |β|.) These are the analogs of the classical Littlewood-Richardson coeffi-cients. 
Like them, these numbers N γ α β(m, p) are non-negative. Unlike the classical coefficients, there is as yet no quantum Littlewood-Richardson formula for these constants which proves their non-negativity. These are certain three-point Gromov Witten invariants, as combining (5.15) and (5.16) shows N γ α β(m, p) = ⟨Ωα∨, Ωβ∨, Ωγ⟩0 . These are known in the case of the Pieri formula and when q = 0; for then they are the classical Littlewood-Richardson coefficients. The only case for which there is such a positive formula is due to Tudose , when the minimum of m or p is 2. A formula for N γ α β(m, p) which involves signs (like the formula (1.2) for d(q; m, p)) was given by Bertram, Ciocan-Fontanine, and Fulton . Interestingly, a similar formula was given previously in two different contexts. The Verlinde algebra is a variant of the representation ring of slp where the usual product is replaced by the fusion product, which is the tensor product of two representations reduced at level m. Witten explained the isomorphism between the Verlinde algebra and the quantum cohomology ring of the Grassman-nian, and this was rigorously established by Agnihotri . This isomorphism is an analog of the relation between the cohomology rings of the Grassmann varieties Grass(p, Cm+p), as m varies, and the representation ring of slp. A formula similar to that of Bertram, Ciocan-Fontanine, and Fulton was given by Kac [31, Exercise 13.35] and Walton in this context, where further details may be found. The cohomology ring of the Grassmannian is likewise isomorphic to an external representation ring of the symmetric groups (see [21, 48]). Similarly, there is a family of quotients of Hecke algebras at a primitive (m + p)th root of unity RATIONAL CURVES ON GRASSMANNIANS 31 whose external representation ring is isomorphic to the quantum cohomology of the Grassmannian. This was studied by Goodman and Wenzl , and they also gave a formula for N γ α β(p, m) identical to that of Kac and Walton. They also gave another presentation of the quantum cohomology ring Z[c1, c2, . . . , cp]/Im,p , where Im,p is the ideal generated by {SK | kp −k1 = m + p} , here, K : 0 < k1 < k2 < · · · < kp is an increasing sequence of positive integers and SK is defined by the Giambelli formula SK := (hki−j)1≤i,j≤p , where the polynomials hi are defined recursively by (5.2). Goodman and Wenzl proved that this quotient ring has an integral basis consisting of the classes {SI | I : 0 < i1 < i2 < · · · < ip < i1 + m + p} . This is just the set of sequences J(α(a)) for all α(a) ∈Cm,p := S b Cb m,p, where J( · ) is the map (2.12) of Section 2.4. The relation between this basis and the basis qaσα of H∗(Grass(p, Cm+p))[q] is just qaσα = SJ(α(a)) . Bertram’s Pieri formula has a nice expression in this basis: ha ∗SI = X SJ , the sum over all sequences J with |J| = |I| + a, where i1 ≤j1 < i2 ≤j2 < · · · < ip ≤jp < i1 + m + p . We close with two additional problems concerning the quantum Littlewood-Richardson coefficients. Let f α(a) = deg Zα(a) be the number of saturated chains in the poset Cm,p from the minimal element to α(a). More generally, given β(b), α(a) ∈Cm,p, let f α(a) β(b) be the number of saturated chains that begin at β(b) and end at α(a). Write σα(a) for the class qaσα and then the expansion (5.16) becomes σα(a) ∗σβ(b) = X |γ(c)|=|α(a)|+|β(b)| N γ α β(m, p) σγ(c) . Iterating the Pieri formula with a = 1 implies that h∗l 1 ∗σβ(b) = X |γ(c)|=l+|β(b)| f γ(c) β(b) σγ(c) . 
Since h∗l 1 = P |α(a)|=l f α(a)σα(a), we may expand the left hand side to obtain X |α(a)|=l f α(a)σα(a) ∗σβ(b) = X |α(a)|=l X |γ(c)| f α(a)N γ α β(m, p) σγ(c) . Equating the coefficients of σγ(c), we obtain (5.17) X |α(a)|=l f α(a)N γ α β(m, p) = f γ(c) β(b) . 32 FRANK SOTTILE The eventual combinatorial formula for the quantum Littlewood-Richardson coef-ficients should also explain this identity. That is, there should be some algorithm to convert a path in the poset Cm,p from β(b) to γ(c) into a path of the same length that starts at the minimal element, where the multiplicity of the occurrence of any path to α(a) is the quantum Littlewood-Richardson coefficient N γ α β(m, p). In short, we ask for a a quantum version of Schensted insertion. Another open problem is to show the (apparent) inequalities: N γ α β(m, p) ≤N γ α β(m + 1, p) ≤N γ α β(m + 2, p) ≤· · · , which were conjectured by Walton . Acknowledgements We thank Joachim Rosenthal, who taught us the basics of systems theory and commented on an early version of this manuscript, Jan Verschelde for the matrix manipulations of Section 2.2, Eduardo Cattani who resolved some of our questions on residues, and Emma Previato, who solicited this survey. We also thank Anders Buch, Sergey Fomin, Christian Lenart, Sasha Postnikov, Bruce Sagan, Mark Shimozono, and Richard Stanley who each provided us with a last-minute proof of the identity (5.3). References A. Agnihotri, Quantum cohomology and the Verlinde algebra, Ph.D. thesis, Oxford Univer-sity, 1995. E.L. Allgower and K. Georg, Numerical path following, Handbook of Numerical Analysis (P. G. Ciarles and J.L. Lions, eds.), vol. 5, North Holland, 1997, pp. 3–207. J. Ball and J. Rosenthal, Pole placement, internal stabilization and interpolation conditions for rational matrix functions: a Grassmannian formulation, Linear Algebra for Control The-ory (P. Van Dooren and B. Wyman, eds.), IMA Vol. in Math. and its Appl, vol. 62, Springer-Verlag, 1993, pp. 21–30. I. Bernstein, On the Ljusternik-ˇ Schnirel’mann category of Grassmannians, Proc. Amer. Math. Soc. 79 (1976), 129–134. A. Bertram, Quantum Schubert calculus, Adv. Math. 128 (1997), no. 2, 289–305. A. Bertram, I. Ciocan-Fontanine, and Wm. Fulton, Quantum multiplication of Schur poly-nomials, J. Algebra 219 (1999), 728–746. A. Bertram, G. Daskalopoulos, and R. Wentworth, Gromov invariants for holomorphic maps from Riemann surfaces to Grassmannians, J. Amer. Math. Soc. 9 (1996), no. 2, 529–571. F.M. Brasch and J.B. Pearson, Pole placement using dynamic compensation, IEEE Trans. Aut. Control AS-15 (1970), 34–43. R.W. Brockett and C.I. Byrnes, Multivariable Nyquist criteria, root loci and pole placement: A geometric viewpoint, IEEE Trans. Automat. Control. AC-26 (1981), 271–284. C. I. Byrnes, Algebraic and geometric aspects of the analysis of feedback systems, Geometrical Methods for the Theory of Linear Systems (C. I. Byrnes and C. F. Martin, eds.), D. Reidel, Dordrecht, Holland, 1980, pp. 85–124. C.I. Byrnes, On compactifications of spaces of systems and dynamic compensation, Proc. IEEE Conference on Decision and Control, San Antonio, TX, 1983, 1983, pp. 889–894. , Pole assignment by output feedback, Three Decades of Mathematical Systems Theory (H. Nijmeijer and J.M. Schumacher, eds.), Lecture Notes in Control and Inform. Sci., vol. 135, Springer-Verlag, Berlin, 1989, pp. 31–78. A.L. 
Cauchy, M´ emoire sur les fonctions qui ne peuvent obtenir que deux valeurs ´ egales et de signes contraires par suite des transpositions op´ er´ es entre les variables qu’elle renferment, J. ´ Ecole Polyt. 10 (1815), 29–112, Also in Ouvres ser. 2, vol 1, pp. 91-169. J.M. Clark, The consistent selection of local coordinates in linear system identification, Proc. Joint Automatic Control Conference, 1976, pp. 576–580. RATIONAL CURVES ON GRASSMANNIANS 33 D. Cox, J. Little, and D. O’Shea, Ideals, varieties, algorithms: An introduction to com-putational algebraic geometry and commutative algebra, UTM, Springer-Verlag, New York, 1992. D. Delchamps, State-space and input-output linear systems, Springer-Verlag, New York-Berlin, 1988. S.K. Donaldson, The geometry of 4-manifolds, Proceedings of the International Congress of Mathematicians (Berkeley 1986) (A.M. Gleason, ed.), vol. I, Amer. Math. Soc., 1987, pp. 43– 54. D. Eisenbud, Commutative algebra with a view towards algebraic geometry, GTM, no. 150, Springer-Verlag, 1995. P. Falb, Methods of algebraic geometry in control theory II: Multivariate linear systems and projective algebraic geometry, Birkh¨ auser, 1999. Wm. Fulton, Intersection theory, Ergebnisse der Math., no. 2, Springer-Verlag, 1984. , Young tableaux, Cambridge Univ. Press, 1997. F. Goodman and H. Wenzel, Littlewood-Richardson coefficients for Hecke algebras at roots of unity, Adv. Math. 82 (1990), 244–265. P. Griffiths and J. Harris, Principles of algebraic geometry, J. Wiley and Sons, 1978. M. Gromov, Psuedo holomorphic curves in symplectic manifolds, Invent. Math. 82 (1985), 307–347. M. Hazewinkel, On families of linear systems: Degeneration phenomena, Algebraic and Geometric Methods in Linear Systems Theory (C.I. Byrnes and C.F. Martin, eds.), Lectures in Applied Mathematics, vol. 18, Amer. Math. Society, 1980, pp. 157–189. U. Helmke, Topology of the moduli space for reachable linear dynamical systems: The complex case, Math. Systems Theory 19 (1986), 155–187. W.V.D. Hodge and D. Pedoe, Methods of algebraic geometry, vol. II, Cambridge Univ. Press, 1952. B. Huber, F. Sottile, and B. Sturmfels, Numerical Schubert calculus, J. Symb. Comp. 26 (1998), no. 6, 767–788. B. Huber and J. Verschelde, Pieri homotopies for problems in enumerative geometry applied to pole placement in linear systems control, SIAM J. Control and Optim., 38, (2000), 1265– 1287. K. Intriligator, Fusion residues, Mod. Phys. Lett. A 6 (1991), 3543–3556. V. Kac, Infinite-dimensional Lie algebras, 3rd ed., Cambridge University Press, Cambridge, UK, 1990. S.L. Kleiman, The transversality of a general translate, Compositio Math. 28 (1974), 287– 297. S.L. Kleiman and Dan Laksov, Schubert calculus, Amer. Math. Monthly 79 (1972), 1061– 1082. V. Lakshmibai and C.S. Seshadri, Standard monomial theory, Proceedings of the Hyderabad Conference on Algebraic Groups (S. Ramanan, ed.), Manoj Prakashan, 1991, pp. 279–322. F.S. Macaulay, Some properties of enumeration in the theory of modular systems, Proc. London Math. Soc. 26 (1927), 531–555. I.G. Macdonald, Symmetric functions and hall polynomials, Oxford Univ. Press, 1995, second edition. C.F. Martin and R. Hermann, Applications of algebraic geometry to system theory: The McMillan degree and Kronecker indices as topological and holomorphic invariants, SIAM J. Control Optim. 16 (1978), 743–755. M.S. Ravi, Interpolation theory and quantum cohomology, SIAM Journal on Control and Optimization, 39, (2000), 981–988 (electronic). M.S. Ravi and J. 
Rosenthal, A smooth compactification of the space of transfer functions with fixed McMillan degree, Acta Appl. Math. 34 (1994), 329–352. M.S. Ravi, J. Rosenthal, and X.C. Wang, Dynamic pole assignment and Schubert calculus, SIAM J. Control and Optim. 34 (1996), 813–832. , Degree of the generalized Pl¨ ucker embedding of a quot scheme and quantum coho-mology, Math. Ann. 311 (1998), no. 1, 11–26. H. Rosenbrock, State space and multivariate theory, John Wiley, New York, 1970. J. Rosenthal, Geometric methods for feedback stabilization of multivariate linear systems, Ph.D. thesis, Arizona State University, 1990. 34 FRANK SOTTILE J. Rosenthal, On dynamic feedback compensation and compactification of systems, SIAM J. Control Optim. 32 (1994), 279–296. J. Rosenthal and X. Wang, Output feedback pole placement with dynamic compensators, IEEE Trans. Aut. Control. 41 (1996), no. 6, 830–843. Y.B. Ruan, Quantum cohomology and its applications, Proceedings of the International Con-gress of Mathematicians (Berlin 1998), Doc. Math., vol. Extra Vol. II, 1998, pp. 411–420. J. Sacks and K.K. Uhlenbeck, The existence of minimal immersions of 2-spheres, Ann. Math. 113 (1981), 1–24. Bruce Sagan, The symmetric group; representations, combinatorics, algorithms & symmetric functions, Wadsworth & Brooks/Cole, 1991. H. Schubert, Anzahl-Bestimmungen f¨ ur lineare R¨ aume beliebiger Dimension, Acta. Math. 8 (1886), 97–118. , Los¨ ung des Charakteritiken-Problems f¨ ur lineare R¨ aume beliebiger Dimension, Mit-theil. Math. Ges. Hamburg (1886), 135–155, (dated 1885). B. Siebert and G. Tian, On quantum cohomology rings of Fano manifolds and a formula of Vafa and Intrilligator, Asian J. Math. 1 (1997), 679–695. F. Sottile, Enumerative geometry for real varieties, Algebraic Geometry, Santa Cruz 1995 (J. Koll´ ar, R. Lazarsfeld, and D. Morrison, eds.), Proc. Sympos. Pure Math., vol. 62, Part 1, Amer. Math. Soc., 1997, pp. 435–447. , Elementary transversality in the Schubert calculus in any characteristic, arXiv.org/math.AG/0010319, 13 pp., 2000. , Real rational curves in Grassmannians, J. Amer. Math. Soc. 13 (2000), 333–341. , Some real and unreal enumerative geometry for flag manifolds, Mich. Math. J. 48 (2000), 573–592. F. Sottile and B. Sturmfels, A sagbi basis for the quantum Grassmannian, J. Pure Appl. Alg., 158 (2001), 347–366. R. Stanley, Enumerative combinatorics volume 2, Cambridge Studies in Advanced Mathe-matics, no. 62, Cambridge University Press, 1999, With appendix 1 by Sergey Fomin. S.A. Strømme, On parameterized rational curves in Grassmann varieties, Space Curves (F. Ghione, C. Peskine, and E. Sernesi, eds.), Lecture Notes in Math., vol. 1266, Springer-Verlag, 1987, pp. 251–272. B. Sturmfels, Algorithms in invariant theory, Texts and Monographs in Symbolic Computa-tion, Springer-Verlag, 1993. , Gr¨ obner bases and convex polytopes, University Lecture Series, vol. 8, American Math. Soc., Providence, RI, 1996. Geanina Tudose, A special case of sl(n)-fusion coefficients, Mss., arXiv.math.CO/0008034, 2000. C. Vafa, Topological mirrors and quantum rings, Essays on Mirror Manifolds, International Press, 1992, ed. by S.-T. Yau, pp. 96–119. M. Walton, Fusion rules in Weiss-Zumino-Witten models, Nuclear Phys. B 340 (1990), no. 2-3, 777–790. X. Wang, Pole placement by static output feedback, Journal of Math. Systems, Estimation, and Control 2 (1992), no. 2, 205–218. J.C. Willems and W.H. Hesselink, Generic properties of the pole placement problem, Proc. of the 7th IFAC Congress, 1978, pp. 1725–1729. E. 
Witten, Two dimensional gravity and intersection theory on moduli space, Surveys in Diff. Geom. 1 (1991), 243–310. E. Witten, The Verlinde algebra and the cohomology of the Grassmannian, Geometry, Topology, and Physics (Cambridge, MA), Conference Proceedings and Lecture Notes in Geometric Topology, vol. IV, International Press, 1995, pp. 357–422. Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003, USA E-mail address, Frank Sottile: [email protected] URL, Frank Sottile:
Understanding Reranking: Techniques, Advantages, and Disadvantages

In the ever-evolving landscape of information retrieval and artificial intelligence, reranking has emerged as a pivotal technique to enhance the accuracy and relevance of search results. This article delves into the fundamentals of reranking, its techniques, and the advantages and disadvantages it presents.

👉 We previously discussed how to run LLMs locally in the article available at the link. Now, it is time to explore efficient and advanced techniques to improve the results for RAG.

What is Reranking?

Reranking is a crucial post-processing step in information retrieval that enhances the relevance and accuracy of search results. Initially, a primary retrieval method, such as a traditional keyword-based search engine, TF-IDF, BM25, or initial embedding-based retrieval, is used to generate search results (chunks) from the database. These initial results are then re-ordered based on a more sophisticated relevance analysis. This secondary analysis refines the search results 🕵️, ensuring that the most pertinent information/content is presented to the model to answer the query.

Techniques in Reranking

1. Traditional Methods

Traditional reranking techniques often rely on heuristic methods or rule-based systems. These methods include:

2. Neural Network-based Methods

Modern reranking techniques leverage the power of neural networks, particularly deep learning models:

a. Cross-Encoders

Overview: Cross-Encoders, such as BERT-based rerankers, are designed to assess the relevance of a query and document pair by processing them together. This approach ensures a deep interaction between the query and document during the encoding process, allowing for a more nuanced understanding of their relationship.

How They Work:

Input Combination: The query and the document are concatenated into a single input sequence. For example, if the query is "climate change effects" and the document is "Climate change impacts agriculture by…", the input to the Cross-Encoder might look like:
[CLS] climate change effects [SEP] Climate change impacts agriculture by... [SEP]

Encoding: This combined input is then fed into a transformer model (e.g., BERT). The model processes the entire sequence simultaneously, considering the interactions between each token in the query and each token in the document.

Relevance Scoring: The output of the transformer model is used to produce a relevance score for the query-document pair. Typically, the [CLS] token's representation is passed through a linear layer to generate this score.

Advantages:

High Precision: Because the model can capture complex interactions between the query and the document, Cross-Encoders tend to provide highly accurate relevance scores.

Contextual Understanding: The model benefits from the full context of both the query and the document, leading to a better understanding of their relationship.

Disadvantages:

Computationally Expensive: Processing each query-document pair together is resource-intensive, making it less suitable for real-time applications or scenarios involving large document collections.

Scalability Issues: Due to the high computational cost, scaling to millions of documents or handling a large number of queries can be challenging.
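To make the cross-encoder scoring step concrete, here is a minimal sketch using the sentence-transformers library. The checkpoint name (cross-encoder/ms-marco-MiniLM-L-6-v2) and the example query and documents are illustrative choices, not something prescribed by the article.

```python
# Minimal cross-encoder reranking sketch.
# Assumes the sentence-transformers package is installed (pip install sentence-transformers).
from sentence_transformers import CrossEncoder

# A publicly available MS MARCO cross-encoder checkpoint (illustrative choice).
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "climate change effects"
candidates = [
    "Climate change impacts agriculture by altering rainfall patterns.",
    "A recipe for chocolate chip cookies.",
    "Rising sea levels are one consequence of a warming climate.",
]

# Each (query, document) pair is scored jointly, as described above.
scores = reranker.predict([(query, doc) for doc in candidates])

# Re-order candidates by descending relevance score.
for doc, score in sorted(zip(candidates, scores), key=lambda p: p[1], reverse=True):
    print(f"{score:.3f}  {doc}")
```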
b. Bi-Encoders

Overview: Bi-Encoders offer a more efficient alternative by encoding the query and the documents separately into fixed-length vectors. This method allows for faster computation of similarity scores, making it more scalable than Cross-Encoders.

How They Work:

Separate Encoding: The query and each document are encoded independently. For instance, the query "climate change effects" is encoded into one vector, and the document "Climate change impacts agriculture by…" is encoded into another vector.

Vector Similarity: Once the query and document vectors are obtained, their similarity is computed, typically using cosine similarity or dot product. The similarity score indicates how relevant the document is to the query.

Advantages:

Efficiency: Since the query and documents are encoded independently, precomputing document vectors is possible. This means that at query time, only the query needs to be encoded, significantly speeding up the retrieval process.

Scalability: Bi-Encoders can handle large-scale retrieval tasks efficiently. Precomputed document vectors can be stored and indexed, allowing for fast lookups.

Disadvantages:

Lower Precision: Because the query and document are encoded separately, Bi-Encoders might miss out on some of the finer interactions between the query and the document, potentially leading to lower precision compared to Cross-Encoders.

Less Contextual Interaction: The independent encoding process means that the model cannot fully leverage the context provided by the query when encoding the document and vice versa.

Comparison and Use Cases

3. Retrieval-Augmented Generation (RAG) with Reranking

RAG models, which combine retrieval mechanisms with generative models, benefit significantly from reranking. Initially, a vector database retrieves a broad set of relevant documents. A reranker then narrows these down to the most relevant ones, which are used as input for the language model to produce high-quality responses.

Reranking can be mathematically framed as an optimization problem. Let D = {d1, d2, …, dn} be the initial set of documents retrieved for a query q. The goal is to find a ranking function f(q, d) that assigns a relevance score to each document d given the query q; a simple linear choice is

f(q, d) = w · x(q, d)

The documents are then sorted based on these scores. Here, x(q, d) is a feature vector representing the query-document pair, and w is a vector of weights learned through training.
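Putting the two model families together, the sketch below shows the retrieve-then-rerank pattern described in this section: a bi-encoder builds the broad candidate set cheaply, and a cross-encoder rescores only those candidates. It again assumes the sentence-transformers library; the model names, the tiny corpus, and the top_k value are illustrative, not taken from the article.

```python
# Retrieve-then-rerank sketch: bi-encoder for recall, cross-encoder for precision.
# Assumes sentence-transformers is installed; model names are illustrative choices.
from sentence_transformers import SentenceTransformer, CrossEncoder, util

bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")                   # fast, separate encoding
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")   # slower, joint scoring

corpus = [
    "Climate change impacts agriculture by altering rainfall patterns.",
    "Transformers process input sequences with self-attention.",
    "Rising sea levels are one consequence of a warming climate.",
    "A recipe for chocolate chip cookies.",
]
# Document vectors can be precomputed and indexed ahead of time.
corpus_embeddings = bi_encoder.encode(corpus, convert_to_tensor=True)

query = "climate change effects"
query_embedding = bi_encoder.encode(query, convert_to_tensor=True)

# Stage 1: cheap vector search returns a broad candidate set (top_k).
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=3)[0]
candidates = [corpus[hit["corpus_id"]] for hit in hits]

# Stage 2: the cross-encoder rescores each (query, candidate) pair jointly.
scores = cross_encoder.predict([(query, doc) for doc in candidates])
reranked = sorted(zip(candidates, scores), key=lambda p: p[1], reverse=True)

for doc, score in reranked:
    print(f"{score:.3f}  {doc}")
```

The design point is that the expensive joint scoring is confined to the handful of candidates that survive the cheap vector search, which is how RAG pipelines keep reranking latency manageable.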
Advantages of Reranking

1. Improved Relevance

Reranking significantly enhances the relevance of search results. By re-evaluating the initial results with a more detailed analysis, reranking ensures that the most pertinent documents are prioritized, improving user satisfaction and trust.

2. Better Handling of Ambiguities

Complex queries often contain ambiguities or nuanced meanings. Reranking helps to capture these subtleties by employing sophisticated models that understand the context better than traditional ranking methods.

3. Enhanced User Experience

For applications requiring high precision, reranking ensures that users receive the most accurate and relevant information, thereby enhancing the overall user experience.

Disadvantages of Reranking

1. Increased Computational Cost

One of the primary drawbacks of reranking, especially with neural network-based methods, is the significant computational expense. Processing queries and documents through deep neural networks requires substantial computational resources and can lead to increased latency.

2. Latency Issues

The additional processing step introduced by reranking can result in slower response times. This is particularly critical in real-time systems where speed is essential. The trade-off between improved relevance and response time must be carefully managed.

Open source reranking models can be found on Hugging Face at:

Conclusion

Reranking is a powerful technique that plays a crucial role in enhancing the relevance and accuracy of search results in modern information retrieval systems. While it offers significant advantages, such as improved relevance and better handling of ambiguities, it also comes with challenges like increased computational cost and latency. Understanding these trade-offs is essential for leveraging reranking effectively in various applications.

By understanding and implementing reranking techniques, one can significantly improve the performance of a search or RAG system, paving the way for more accurate and user-friendly information retrieval solutions. An additional technique to further enhance the performance and quality of responses is the Long Context Reorder, which we will explore in detail in the next article.

Feel free to share your thoughts or questions in the comments below. If you enjoyed this article, please give it a clap 👏 and share it with your network!
Aminoacyl-tRNA synthetase evolution and sectoring of the genetic code
===============

Transcription. 2018 May 30;9(4):205–224. doi: 10.1080/21541264.2018.1467718

Daewoo Pak (a), Yunsoo Kim (b), and Zachary F. Burton (c)

(a) Center for Statistical Training and Consulting, Michigan State University, E. Lansing, MI 48824, USA
(b) Troy High School, Troy, MI, USA
(c) Department of Biochemistry and Molecular Biology, Michigan State University, 603 Wilson Rd, E. Lansing, MI 48824-1319, USA

CONTACT Zachary F. Burton [email protected] Department of Biochemistry and Molecular Biology, Michigan State University, E. Lansing, MI 48824-1319, USA. Supplemental data for this article can be accessed on the publisher's website.

Received 2018 Jan 9; Accepted 2018 Apr 13; Collection date 2018.

© 2018 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License, which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way.

PMCID: PMC6104698 PMID: 29727262

ABSTRACT

The genetic code sectored via tRNA charging errors, and the code progressed toward closure and universality because of evolution of aminoacyl-tRNA synthetase (aaRS) fidelity and translational fidelity mechanisms. Class I and class II aaRS folds are identified as homologs. From sequence alignments, a structurally conserved Zn-binding domain common to class I and class II aaRS was identified. A model for the class I and class II aaRS alternate folding pathways is posited.
Five mechanisms toward code closure are highlighted: 1) aaRS proofreading to remove mischarged amino acids from tRNA; 2) accurate aaRS active site specification of amino acid substrates; 3) aaRS-tRNA anticodon recognition; 4) conformational coupling proofreading of the anticodon-codon interaction; and 5) deamination of tRNA wobble adenine to inosine. In tRNA anticodons there is strong wobble sequence preference that results in a broader spectrum of contacts to synonymous mRNA codon wobble bases. Adenine is excluded from the anticodon wobble position of tRNA unless it is modified to inosine. Uracil is generally preferred to cytosine in the tRNA anticodon wobble position. Because of wobble ambiguity when tRNA reads mRNA, the maximal coding capacity of the three-nucleotide code read by tRNA is 31 amino acids + stops.

KEYWORDS: the last universal common cellular ancestor, cloverleaf tRNA, standard genetic code, aminoacyl-tRNA synthetases, class I and class II aaRS homology, anticodon wobble preference, synonymous anticodons, tRNA wobble inosine

Abbreviations: aaRS) aminoacyl-tRNA synthetase enzymes (i.e. GlyRS, glycine aminoacyl-tRNA synthetase); Hs) Homo sapiens; I) inosine; LUCA) the last universal common cellular ancestor; Pae) Pyrobaculum aerophilum; Pfu) Pyrococcus furiosis; Sma) Staphylothermus marinus; Sso) Sulfolobus solfataricus; Tth) Thermus thermophilus.

Introduction

Aminoacyl-tRNA synthetases (aaRS; i.e. GlyRS) accurately add amino acids to the 3'-CCA ends of tRNAs. Two distinct protein folds are identified for aaRS, described as class I and class II [1,2]. Class I aaRS have an active site that adenylates an amino acid at a "Rossmannoid" fold of parallel β-sheets before transferring the amino acid to tRNA. Class II aaRS, by strong contrast, have an active site of antiparallel β-sheets. The evolutionary relationship of class I and class II enzymes has not been clearly demonstrated, although the interesting suggestion has been made that class I and class II aaRS enzymes were encoded on opposing strands of a bi-directional ancestral gene [3–8]. We provide a simpler explanation. We show amino acid sequence similarity in archaea that indicates that class I and class II aaRS enzymes arose from unidirectional in-frame translation starting from different N-termini. The longer N-terminal region of class I aaRS enzymes forces the class I fold and prevents the class II fold. To detect class I and class II aaRS sequence similarity, one only has to gaze toward LUCA (the last universal common cellular ancestor; ∼3.85 billion years ago) by comparing sequences in ancient archaea.

All 64 codons are utilized in mRNA, but only a subset of matching anticodons is utilized in tRNA. A subset of tRNA anticodons is possible because of degeneracy of the genetic code and ambiguity in tRNA anticodon wobble bases reading mRNA codons, allowing (and limiting) a tRNA anticodon to read multiple synonymous mRNA codons. Potentially, therefore, ambiguity in tRNA anticodons reading mRNA codons could be positively selected in evolution, which might be reflected in anticodon wobble base preferences. It appears that tRNAomes (the collection of tRNAs for an organism) are generally selected to be small (even in complex eukaryotes) without skipping recognition of mRNA codons. Specialization of tRNAs may occur, but this was not the major driving force in early evolution. In a recent review, it was suggested that evidence was sparse for the error minimization hypothesis in standard genetic code evolution.
The error minimization theory describes sectoring of the code to minimize the impacts of random mutations in tRNAs and of tRNA charging errors. Clustering similar amino acids in the codon-anticodon table might be selected in order to reduce the impact of translation errors. Massey has argued, however, that the code likely did not sector strongly to minimize errors in translation and coding, but, rather, that clustering of similar amino acids occurred through the evolutionary sectoring mechanism . Here, we also argue against error minimization as a strong selection pressure in building the genetic code. Rather, we argue that the sectoring of the code was largely driven by tRNA charging errors, and, therefore, error minimization resulted from the pathway of code evolution, essentially as proposed by Massey. Specifically, we show that minimization of translation errors via aaRS proofreading appears to have limited sectoring of the genetic code, indicating that tRNA charging errors led to reassignments of tRNAs during early code evolution. Reassignments of tRNAs could result in subdividing a 4-codon sector of the codon-anticodon table into two 2-codon sectors and adding a newly encoded amino acid. Mutations in the anticodon loop of tRNAs can also initiate invasion of neighboring genetic code sectors, but this process moves amino acids in the table without introducing new amino acids into the code. Because tRNA charging errors drove code evolution, mechanisms ensuring tRNA charging accuracy brought the code to closure and universality. The dominant model to analyze genetic code evolution, therefore, should be that tRNA charging errors induced sectoring of the code, and evolution of accuracy mechanisms brought the code to universality and closure. The coevolution hypothesis posits that tRNAs, amino acids, the genetic code and aaRS enzymes are coevolved, an idea that we support in this paper. In recent work, we describe how a cloverleaf tRNA evolution model [13,14] is highly predictive for models of genetic code evolution . Further, we show that evolution of the genetic code is centered more on tRNA than on mRNA or the ribosome. Primitive archaea have 46 tRNAs and 3 stop codons. Translation termination signals are recognized by proteins (not tRNA) that bind to an mRNA stop codon in the ribosome decoding center and reach into the ribosome peptidyl transferase center to terminate translation . Included in sets of 46 tRNAs are encoded 44 unique tRNA anticodons. There are 3 tRNA Met (CAU anticodon) including 1 initiator tRNA iMet and 2 elongator tRNA Met [16,17]. Generally, in ancient archaea and bacteria, only a single tRNA Ile (GAU) is utilized. All other permitted anticodons are found in tRNAs except for three potential anticodon sequences corresponding to stop codons. Because only 44 unique anticodons and 3 stop codons need to be considered in early code evolution, but all 64 codons are utilized in mRNA, tRNA anticodon structure and presentation appears to have placed the greatest restrictions on expansions of the genetic code . In archaea, little or no tRNA wobble position adenine is found [9,18]. In bacteria, only tRNA Arg (ACG) generally has adenine encoded in the anticodon wobble position [18,19]. In bacteria and eukarya, tRNA wobble adenine is modified to inosine (A→I) by a tRNA adenosine deaminase. 
Wobble A in tRNA specifies U in mRNA codons, but wobble inosine pairs A, C and U, indicating that increasing ambiguity in mRNA codon interpretation was positively selected as long as the specificity of coding remained unchanged [18,19]. Because tRNA wobble A is negatively selected, according to a tRNA-centric view, only 44 unique anticodons and 3 stop codons need to be considered in earliest standard genetic code evolution rather than 64 . Because of tRNA wobble ambiguity reading mRNA, however, the maximum number of amino acids that can be encoded by a genetic code read by tRNA is 31 aas with stops. Although the initial evolution of the genetic code may have involved ribozyme-catalyzed tRNA aminoacylation [20–23], at later stages, tRNAs coevolved with aaRS enzymes that attach amino acids at the tRNA 3’-CCA ends [1,2]. Some aaRS enzymes have the capability to proofread tRNA-aa attachments by moving an improperly joined amino acid from the aaRS synthetic site, where the amino acid is linked to the tRNA 3’-CCA end, to a separate aaRS editing or proofreading site, where the non-cognate amino acid is removed [1,2]. We make observations about aaRS editing that are not noted in reviews nor, as far as we can discover, in the literature. We make the observation that aaRS editing appears to inhibit continued sectoring of the code utilizing the anticodon wobble position, giving insight into the roles of tRNA charging errors in evolution of the code. Furthermore, in eukaryotes, left half and mostly 4-codon sectors of the genetic code, for which the aaRS enzymes have proofreading capacity, are also the sectors that have introduced the adenine→inosine anticodon wobble position modification. A→I modification blocks subdivision of a 4-codon sector to two 2-codon sectors because sectoring would result in translation errors. A→I conversions and U vs C wobble preference increase the ambiguity of the tRNA wobble base to allow broader sequence contacts to synonymous mRNA codons. Results Evolution and homology of class I and class II aaRS enzymes The evolution of Pyrococcus furiosis (Pfu) aaRS enzymes is described in Figure 1. Interestingly, the apparent pathways for Pfu aaRS divergence show similarities to the proposed pathways for LUCA tRNA evolution . AaRS enzymes, amino acids and tRNAs are coevolved, as predicted by the coevolution hypothesis . To construct the pathway for aaRS evolution, NCBI (National Center for Biotechnology Information) Blast tools were used with relaxed search metrics to identify the closest apparent relationships between aaRS proteins. Pfu was selected as an example of an ancient archaea with a similar translation system to LUCA . Separate structural comparisons (structural dendograms) of class I and class II aaRS enzymes have been published [24–26], and our analysis is consistent with these. Surprisingly, however, we identify similarities in protein sequences comparing class I and class II Pfu aaRS enzymes, which, to our knowledge, have not previously been reported (however, see Figure 13 of reference ). Specifically, GlyRS-IIA and ValRS-IA are similar in amino acid sequence (e-value = 2.1). AspRS-IIB and IleRS-IA are similar (e-value = 1.4 or 1.5; depending on the alignment). HisRS-IIA and TyrRS-IC are also similar (e-value = 3.7). TyrRS-1C is more similar in sequence to HisRS-IIA (e-value = 3.7) than it is to other class I aaRS enzymes, with the sole exception of closely related TrpRS-IC (e-value = 1e-4). ThrRS-IIA is similar in sequence to IleRS-IA (e-value = 4.2). 
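The pairwise comparisons above were made with NCBI Blast under relaxed search settings. The sketch below shows one way such a relaxed pairwise protein comparison could be run with the BLAST+ command-line tools from Python; the FASTA file names are hypothetical, and since the paper specifies only "relaxed search metrics", the parameter values shown are illustrative rather than the settings actually used.

```python
# Illustrative sketch of a relaxed pairwise protein comparison with BLAST+ (blastp).
# Assumes BLAST+ is installed and on PATH, and that PfuGlyRS.faa / PfuValRS.faa are
# FASTA files for the two enzymes (hypothetical file names). The permissive E-value
# cutoff and small word size are example "relaxed" settings, not the paper's exact ones.
import subprocess

def pairwise_blastp(query_fasta: str, subject_fasta: str, evalue: float = 10.0) -> str:
    """Run blastp query-vs-subject and return tabular hits (qseqid, sseqid, evalue, bitscore, pident, length)."""
    cmd = [
        "blastp",
        "-query", query_fasta,
        "-subject", subject_fasta,
        "-evalue", str(evalue),  # permissive cutoff so weak similarities are reported
        "-word_size", "2",       # smaller seed word helps detect distant relationships
        "-outfmt", "6 qseqid sseqid evalue bitscore pident length",
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    for line in pairwise_blastp("PfuGlyRS.faa", "PfuValRS.faa").splitlines():
        print(line)
```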
The e-value scores are for the best local alignments, but Pfu GlyRS-IIA and ValRS-IA are similar in sequence over nearly the entire length of GlyRS-IIA. Figure 1. Open in a new tab Pyrococcus furiosis (Pfu) aaRS enzymes were searched using NCBI Blast tools for nearest homologs in Pfu. In some cases, Staphylothermus marinus (Sma) (archaea) and Escherichia coli (Eco) (bacteria) homologs are identified. AlaX is one of a set of tRNA Ala editing enzymes in Pfu. Interestingly, Pfu ValRS-IA, LeuRS-IA, IleRS-IA and MetRS-IA are all very similar enzymes by aaRS structural class (IA) and e-value, and Val, Leu, Ile and Met are similar neutral and hydrophobic amino acids within the first column of the codon-anticodon table. For Pfu, therefore, amino acids, tRNAs and aaRS enzymes are coevolved for the first column of the code, as expected from the coevolution hypothesis. ThrRS-IIA, ProRS-IIA and SerRS-IIA are found in the second column of the code, and these are related enzymes by aaRS class, e-value and apparent lineage. Gly, Asp, Val and Ala have been proposed to be the first four amino acids in the code . Interestingly, GlyRS-IIA, AspRS-IIB, ValRS-IA and AlaRS-IID are very different enzymes, indicating that, at the base of code evolution across rows, discrimination of tRNAs by distinct aaRS enzymes was strongly selected. Apparently, there is a greater tendency for amino acid, tRNA and aaRS coevolution within columns than across rows of the genetic code, particularly at the base of the code and at the earliest stage of code evolution. These observations appear to partly explain the distributions of similar amino acids within codon-anticodon table columns. Figure 2 and Supplementary Figure 1 show the alignment of GlyRS-IIA, ValRS-IA and IleRS-IA enzymes from ancient archaea. The alignment in Figure 2 includes a Zn-binding motif that is shared among class I and class II aaRS enzymes. Some other features of class I and class II aaRS enzymes also appear to be conserved. The entire alignment is shown in Supplementary Figure 1. A summary of the alignment data is shown in the schematic in Figure 2. Relative to Pfu GlyRS-IIA, Staphylothermus marinus (Sma) IleRS-IA has an N-terminal extension that includes essential active site β-sheets, the HIGH active site motif and a Zn-binding motif, all of which are missing in GlyRS-IIA. These unique N-terminal determinants of class I aaRS enzymes, are likely to ensure the class I fold and to block the C-terminus of the protein from assuming the class II fold. The shorter GlyRS-IIA aligns to IleRS-IA and ValRS-IA over its entire length, and the C-terminus of these proteins is reasonably conserved across aaRS classes. GlyRS-IIA, IleRS-IA and ValRS-IA share: 1) a β-sheet in the Motif 1 region of GlyRS-IIA aligning with an active site β-sheet in IleRS-IA; 2) a Zn-binding domain including 3 similarly positioned β-sheets; 3) a β-sheet of GlyRS-IIA just C-terminal to the shared Zn-binding domain aligns with an active site β-sheet of IleRS-IA; and 4) the active site Motif 2 of GlyRS-IIA and the active site KMSKS of IleRS-IA align, including a shared active site β-sheet and loop. The quality of the amino acid sequence alignment is probably sufficient to demonstrate GlyRS-IIA, ValRS-IA and IleRS-IA homology. The structural similarities, such as the shared Zn-binding motif, strongly reinforce this conclusion. Local alignments with e-values as low as 0.001-0.002 have been obtained for GlyRS-IIA and IleRS-IA (i.e. 
a 1:500 to 1:1000 chance that the alignment is due to a random event). Figure 2. Open in a new tab Similarity of class I and class II aaRS enzymes is indicated. A partial sequence alignment of GlyRS-IIA, ValRS-IA and IleRS-IA enzymes is shown demonstrating sequence similarity of a shared Zn-binding motif and GlyRS-IIA Motif 2 with IleRS-IA KMSKS motif. Red shading indicates identity comparing class I and class II aaRS. Yellow shading indicates similarity comparing class I and class II aaRS. Green shading is used to highlight Zn-binding motifs. Cyan shading indicates active site β-sheets (sss). Magenta shading indicates 3 β-sheets in GlyRS-IIA expected to block class I folding by a class II aaRS. Gray shading indicates β1-β3 of the shared Zn-binding domain. The entire alignment is shown in Supplementary Figure 1. The schematic diagram shows how Pfu GlyRS-IIA and Sma IleRS-IA align (gray lines highlight some similarities). Pae) Pyrobaculum aerophilum; Sso) Sulfolobus solfataricus. In order to analyze the shared Zn-binding motif, a homology model for Pfu GlyRS-IIA was generated (Figure 3). The closest structure identified using the Phyre2 server was human GlyRS-IIA (i.e. PDB 4KQE and 4QEI) [27,28]. Although human GlyRS-IIA lacks cysteine ligands for Zn binding, the fold of the homologous region of the protein is maintained, so a model of the conserved Pfu GlyRS-IIA Zn-binding domain was obtained. Thermus thermophilus (Tth) ValRS-IA (PDB 1GAX) includes a shortened version of the shared Zn-binding domain . In Figure 4, the shared Zn-binding regions are compared. Within the Zn-binding region, three similarly arranged β-sheets are identified (β1-β3) comparing Tth ValRS-IA, Pfu GlyRS-IIA and human GlyRS-IIA structures. We conclude from this structural comparison that class I and class II aaRS enzymes are homologous. Figure 3. Open in a new tab A homology model (Supplementary File 1) of Pyrococcus furiosis GlyRS-IIA was constructed by homology threading to human GlyRS-IIA (PDB 4KQE). The homology model (powder blue), PDB 4KQE (white) and related PDB 4QEI (magenta) were overlaid. Although human (Hs) GlyRS-IIA lacks Zn binding, the shape of the loops is maintained. Figure 4. Open in a new tab A structurally conserved Zn-binding motif among class I and class II aaRS enzymes. Similar orientations of Tth ValRS-IA (green), Pfu GlyRS-IIA (magenta) and human GlyRS-IIA (white) are shown. Incompatibility of class I and class II aaRS folds ValRS-IA and GlyRS-IIA folds are incompatible (Figure 5). The N-terminal extension of ValRS-IA helps to form the class I aaRS active site (i.e. the HIGH motif and essential active site β-sheets), and the N-terminal Zn-binding region of class I aaRS enzymes blocks class II aaRS folding. By comparison of structures, three antiparallel β-sheets in GlyRS-IIA that surround the shared Zn-binding motif establish a clash with the ValRS-IA N-terminal Zn-binding domain. The most C-terminal β-sheet of the Pfu GlyRS-IIA β-sheets (157-KAYL) corresponds to an active site β-sheet in Tth ValRS-IA (485-LVTG; Sma IleRS-IA 553-FIVEG), so formation of the β-sheets in GlyRS-IIA is incompatible with formation of the ValRS-IA active site. Formation of the ValRS-IA active site, therefore, is dependent on the N-terminal domain of Tth ValRS-IA, which includes the “HIGH” active site motif, parts of the active site parallel β-sheets (β-sheets 35-PFVIF, 73-EAVWL(P)GT, 137-DWSREAF) and the class I-specific Zn-binding domain. 
The more N-terminal class I-specific Zn-binding domain blocks class II aaRS folding. Figure 5. Open in a new tab Incompatibility of class I and class II aaRS folding patterns. An overlay of the shared Zn-binding motif of GlyRS-IIA (secondary structure representation) and ValRS-IA (green) demonstrates a clash by three antiparallel GlyRS-IIA β-sheets with the N-terminal ValRS-IA Zn-binding domain. A ValRS-IA active site β-sheet (LVLEG) is yellow. LVLEG corresponds to Pfu GlyRS-IIA β-sheet KAYL in the 3 antiparallel β-sheet cluster surrounding the shared Zn-binding motif. The standard genetic code The initial standard genetic code, which is found in many ancient archaea, is shown in Figure 6 as a codon-anticodon table (top table, for archaea). Because of the central importance of tRNA in genetic code evolution, codon-anticodon tables are more informative than simpler representations. When the standard genetic code was established (i.e. in the RNA-protein world before LUCA), the anticodons shaded in red in the top chart were disallowed, because adenine was negatively selected in the tRNA anticodon wobble position [9,18]. Adenine in the wobble position can destabilize the anticodon loop. Also, because wobble A pairs with U much better than with C in mRNA, adenine in the tRNA wobble position supports an inflexible code that was negatively selected . In addition, only one tRNA Ile (GAU) is generally utilized. Therefore, only 44 unique tRNA anticodons and 3 stop codons need to be considered in early genetic code evolution . Figure 6. Open in a new tab Codon-anticodon tables. Proofreading by aaRS enzymes in archaea is confined to the left half of the codon-anticodon table. Gray shading indicates editing by aaRS enzymes. Red shading indicates anticodons that are disallowed or strongly underrepresented. Green shading indicates adenosine→inosine conversion in bacteria and eukaryotes (tRNA Arg (ACG→ICG)). Yellow shading indicates adenosine→inosine conversion in eukaryotes (very rarely, these modifications are found in some bacteria) . AaRS enzymes that proofread In the course of studies of genetic code evolution, we analyzed archaeal, bacterial and eukaryotic aaRS enzymes with proofreading active sites versus the standard genetic code (Figure 6) [1,2]. The figure also accounts for the tRNA wobble adenine→inosine modification lacking in archaea but found in bacteria (tRNA Arg (ACG)) and eukarya (tRNA Leu (AAG), tRNA Ile (AAU), tRNA Val (AAC), tRNA Ser (AGA), tRNA Pro (AGG), tRNA Thr (AGU), tRNA Ala (AGC) and tRNA Arg (ACG)) [18,19]. Remarkably, the aaRS enzymes that proofread in archaea are restricted to the left half of the codon-anticodon table, and, in eukaryotes, aaRS enzymes that proofread correlate strongly with the wobble A→I modification. SerRS-IIA proofreads, but Ser is split between the left and right halves of the table. In bacteria and eukaryotes, LysRS-IIB proofreads, but, in archaea, LysRS-IE does not [1,2]. To our knowledge, near restriction of aaRS editing to the left half of the codon-anticodon chart is not recorded in recent reviews or in the literature on aaRS enzymes, tRNAs or the genetic code, although this observation is informative about code structure and evolution. Because the A→I wobble modification strongly correlates with aaRS enzymes that proofread, this added structure of the code requires explanation. 
Synonymous anticodon preferences

I>>G>>A

In part because of strong G>>A anticodon wobble preference in archaea (Figure 6), we considered wobble preference more generally in tRNA (Figures 7–9). Unlike codon preference, anticodon wobble preference does not appear to be largely driven by gene regulation (i.e. to match codon bias). Inspection of anticodon wobble base frequencies indicated that, for each synonymous ANN and GNN pair (encoding the same amino acid) (Figure 7), there was a strong preference for wobble G>>A, unless A was deaminated to inosine, in which case, interestingly, the preference was strongly I>>G. In archaea, A is largely excluded in the wobble position. In bacteria, only tRNA Arg (ACG→ICG) is strongly favored over tRNA Arg (GCG). In all other cases in bacteria, G is strongly favored over A, as in archaea. In eukaryotes, for tRNA Leu (AAG vs GAG), tRNA Ile (AAU vs GAU), tRNA Val (AAC vs GAC), tRNA Ser (AGA vs GGA), tRNA Pro (AGG vs GGG), tRNA Ala (AGC vs GGC) and tRNA Arg (ACG vs GCG), for which anticodons with encoded wobble A are modified A→I, inosine is strongly favored over G. The A→I conversion is expected to increase the encoding of tRNAs with ANN anticodons but is not necessarily expected to so strongly suppress the use of synonymous GNN anticodons, which are functional in archaea. In this regard, tRNA wobble G can pair with mRNA codon C or U, but tRNA wobble I can pair with A, C or U. Apparently, I>>G preference in the anticodon wobble position reflects strong positive selection for broader recognition by tRNA of synonymous mRNA codons.

Figure 7. Anticodon wobble preferences comparing synonymous ANN and GNN tRNA anticodons. A→I conversion is indicated. Synonymous anticodons: Ser1 (AGA vs GGA), Ser2 (ACU vs GCU).

Figure 9. A representation that combines purine and pyrimidine anticodon wobble preference data. The down arrow indicates Ile (UAU) utilization in eukaryotes. The black lines indicate interesting differences comparing Arg anticodons in bacteria and eukarya.

We note that in eukaryotes tRNA Ser (ACU<<GCU) shows G preference over A in the anticodon wobble position. For tRNA Ser (ACU), A is not converted to I. If tRNA Ser (ACU) converted A→I, this would cause recognition by tRNA Ser (ICU) of AGA Arg codons in mRNA, causing translation errors (Ser replacement of Arg in proteins). The A→I conversion, therefore, only occurs in 4-codon sectors to prevent spillover of tRNA specificity to a 2-codon sector encoding a different amino acid. In eukaryotes, the only 4-codon sector of the genetic code for which there is no A→I conversion is tRNA Gly (ACC).

U>C

In Figure 8, pyrimidine wobble preferences are analyzed. Generally, U is preferred over C. In archaea, U is slightly preferred over C for all synonymous anticodon pairs. Trp (CCA) is a special case, because anticodon UCA represents a UGA stop codon that is read in mRNA by a protein. In bacteria, U is generally preferred over C, except for Leu2 (UAG<CAG) and Arg1 (UCG<<CCG). For Val (UAC>>CAC), Ser (UGA>>CGA), Pro (UGG>>CGG), Thr (UGU>>CGU), Ala (UGC>>CGC), Gln (UUG>>CUG), Lys (UUU>>CUU), Glu (UUC>>CUC) and Gly (UCC>>CCC), wobble U is strongly preferred over wobble C in bacteria. In eukaryotes, the U>C tRNA anticodon wobble preference is apparent for sets of synonymous anticodons for Ser (UGA>CGA), Pro (UGG>>CGG), Thr (UGU>>CGU), Ala (UGC>>CGC), Gln (UUG>CUG), Glu (UUC>>CUC), Arg1 (UCG>>CCG) and Arg2 (UCU>CCU). Interestingly, U versus C bias is opposite for eukaryotic Arg1 (UCG>>CCG) and bacterial Arg1 (UCG<<CCG).
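The comparisons summarized in Figures 7–9 are tallies of tRNA gene counts for synonymous anticodon pairs, judged for significance with a chi-square goodness-of-fit test (see Methods). The sketch below shows how such a comparison could be computed; the counts are invented toy numbers, not the paper's data, and the pair labels are only examples.

```python
# Toy sketch of a synonymous anticodon wobble-preference comparison.
# The counts below are invented placeholders, NOT data from the paper; the real
# analysis used tRNA gene counts from tRNAdb/GtRNAdb and a chi-square
# goodness-of-fit test against an equal-use expectation.
from scipy.stats import chisquare

# Gene counts for synonymous anticodon pairs (hypothetical numbers).
pairs = {
    "Val, eukaryotes (IAC vs GAC)": {"IAC": 420, "GAC": 35},
    "Val, archaea (GAC vs AAC)":    {"GAC": 60,  "AAC": 0},
}

for label, counts in pairs.items():
    observed = list(counts.values())
    total = sum(observed)
    expected = [total / len(observed)] * len(observed)  # null hypothesis: no wobble preference
    stat, p = chisquare(observed, f_exp=expected)
    preferred = max(counts, key=counts.get)
    print(f"{label}: prefer {preferred}, chi2={stat:.1f}, p={p:.2e}")
```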
Also, U<C anticodon wobble preference is observed in eukaryotes for Leu1 (UAA<CAA), Leu2 (UAG<CAG), Val (UAC<<CAC), Lys (UUU<<CUU) and Gly (UCC<CCC). Figure 8. Open in a new tab Anticodon wobble preferences comparing synonymous UNN and CNN tRNA anticodons. Synonymous anticodons: Leu1 (UAA vs CAA), Leu2 (UAG vs CAG), Arg1 (UCG vs CCG), Arg2 (UCU vs CCU). Ile, Met and Trp are special cases. Ile and Met occupy the same 4-codon sector in the codon-anticodon table. Anticodon preference for Ile is shown in Figure 9. For archaea and bacteria, tRNA Ile (GAU) is highly used and tRNA Ile (AAU and UAU) are very rarely used. Generally, archaea and bacteria utilize a single tRNA Ile (GAU). In eukaryotes tRNA Ile (AAU→IAU) is strongly favored and tRNA Ile (GAU) is suppressed, as expected. Interestingly, tRNA Ile (UAU) is commonly utilized in eukaryotes, although tRNA Ile (UAU) can potentially be ambiguous with tRNA Met (CAU). Trp (CCA) shares a 2-codon sector with a stop codon, which is read in mRNA by a protein rather than a tRNA . Synonymous anticodon wobble preference in mitochondria To maintain a small organelle genome, mitochondria encode a subset of tRNAs, potentially limiting available anticodons. Mitochondrial anticodon wobble preference, indeed, is strange and limited in coding capacity (Figures 7 and 8). In particular, Leu, Val, Pro, Thr, Ala, Arg and Gly tRNAs have scant or no mitochondria-encoded wobble anticodon A or G (Figure 7). In terms of codon usage and preference, however, mitochondria utilize mRNA codons with wobble C and U encoding these amino acids. Because mitochondria import cytosolic tRNAs [30–32], deficiencies in mitochondrial coding can be compensated, and, perhaps, all of the apparent mitochondrial anticodon wobble deficiencies are compensated by imported cytosolic tRNAs. We note that import of tRNAs with inosine in the anticodon wobble position (encoding Leu, Val, Pro, Thr, Ala and Arg) would be almost sufficient to compensate for limiting mitochondrial tRNAs. Import of tRNA Ile (IAU) is less important, because tRNA Ile (GAU) is encoded in the mitochondria and can suffice to read mRNA codons AUC and AUU. Import of a cytosolic tRNA Gly (GCC) also appears necessary, and cytosolic tRNA Gly (GCC) is imported into mitochondria . Furthermore, mitochondrial-encoded tRNAs are heavily biased toward wobble U rather than C (Figure 8). Interestingly, tRNA Trp (UCA>>CCA) in mitochondria utilizes the UCA anticodon corresponding to the UGA stop codon in place of the CCA anticodon, which is utilized to encode Trp in archaea, bacteria and eukaryotes. Of course, tRNA wobble U reads a broader spectrum of synonymous mRNA codons than tRNA wobble C because wobble U can pair with mRNA wobble A or G but wobble C strongly prefers to pair with mRNA wobble G. Arg coding Figure 9 shows interesting features of tRNA Arg distributions in eukarya and bacteria. Notably, tRNA Arg (CCG) is somewhat limiting in eukaryotes and tRNA Arg (UCG) is limiting or absent in bacteria. In Figure 10, the consequences of these tRNA limitations are reviewed. It appears that eukaryotes primarily use tRNA Arg (UCG) to read CGG codons. Bacteria primarily use tRNA Arg (CCG) to read CGG codons. Eukaryotes read CGA codons using tRNA Arg (ICG and UCG). Bacteria read CGA codons primarily using tRNA Arg (ICG). Absence of tRNA Arg (UCG) in some bacteria, therefore, appears to explain the evolution of the A→I tRNA wobble modification, which, in the case of missing tRNA Arg (UCG), is required to read CGA codons. Figure 10. 
Open in a new tab Consequences of apparent limiting of tRNA Arg (CCG) in eukarya and tRNA Arg (UCG) in bacteria. Ac for anticodon. The editing hypothesis Based on the genetic code table left side biased distribution of 4-codon sectors correlating with proofreading aaRS enzymes (Figure 6), sectoring of the genetic code utilizing the tRNA anticodon wobble position was likely inhibited by aaRS proofreading. Furthermore, editing appears to be limited to hydrophobic and neutral amino acids with limited charge and absent or limited side chain hydrogen bonding potential. Proofreading generally occurs for amino acids that are smaller than, or very similar to, the cognate amino acid [1,2]. Smaller amino acids may attach to a non-cognate tRNA and require editing because they can fit the synthetic aaRS active site and, therefore, can be linked to a non-cognate tRNA [1,2]. Aminoacylation errors are less likely for amino acids with charged side chains and/or with more hydrogen bonding groups, because more readily distinguished amino acids are more fully specified in their cognate aaRS synthetic active site. Interestingly, the densest sectoring employing the tRNA anticodon wobble position is observed for the third column (Glu, Asp, Lys, Asn, Gln, His, Ter (stop), Tyr) and the uppermost 4-codon sector of the fourth column (Trp, Ter, Cys). None of the corresponding aaRS enzymes proofread in archaea (Figure 6). In bacteria, LysRS-IIB proofreads to reject amino acids that are from outside the code (homocysteine, homoserine and ornithine). Lys and Arg are readily discriminated in the LysRS and ArgRS active sites. Lys has a flexible side chain with a localized positive charge. Arg, by contrast, has a much stiffer side chain with a distributed positive charge and hydrogen bonding potential. We posit that, as the code evolved, right half tRNAs initially did not require editing by an aaRS because encoded amino acids with more identifying functional groups were easier to specify through interactions in the aaRS synthetic active site. Because accurate specification of a cognate amino acid in the aaRS synthetic site limits tRNA charging errors, it is likely that the right half of the genetic code sectored more completely to 2-codon sectors prior to full evolution of accurate amino acid selectivity by aaRS enzymes. A slightly different but related view might be that amino acids with more identifying characteristics were more aggressive at invading 4-codon sectors compared to neutral amino acids with limited hydrogen bond forming potential. Evolution of aaRS anticodon recognition domains (in all aaRS except for AlaRS and SerRS) also enhanced the accuracy of tRNA charging and brought the code to universality. Fidelity mechanisms, therefore, continued to evolve and potentially take precedence over one another as the code continued to sector. Because aaRS proofreading appears to inhibit sectoring, and because archaeal aaRS enzymes from the right half of the code do not edit, sectoring is more innovative on the right half of the codon-anticodon table than the left half. We posit the editing hypothesis that aaRS proofreading inhibited genetic code sectoring. Invasion to reassign a 4-codon sector encoding a single amino acid to two 2-codon sectors, each encoding a distinct amino acid, for instance, was initiated by aminoacylation errors on existing tRNAs. During code evolution, invasion could be by an amino acid that was not yet encoded, resulting in an increase in the complexity of the code. 
Note that accuracy of translation and tRNA charging continued to improve as the code evolved, and editing and specificity, therefore, became ever more important later in evolution as additional amino acids became encoded. Also, metabolism generates amino acids that are not encoded but could be charged to tRNAs in error and could be removed by aaRS proofreading. Alternatively, amino acids could be attached to tRNAs in error, and, then, through selection, could be added to the genetic code by dividing a 4-codon sector into two 2-codon sectors. With the exceptions of Met and Trp, only a stop codon (UGA), which is recognized in mRNA by a protein not a tRNA, can occupy a 1-codon sector. Only in eukaryotes does Ile strongly occupy the UAU anticodon (adjacent to Met (CAU)) (Figure 9), by which time in evolution, mechanisms were developed for tRNA modifications to support accurate tRNA Ile (UAU) and tRNA Met (CAU) discrimination [33,34]. Met, which appears to have invaded a partially occupied 4-codon Ile sector, may be an apparent exception to the rule that proofreading aaRS enzymes resist sectoring around the wobble anticodon position. MetRS proofreads to remove homocysteine, which is not part of the genetic code [1,2]. The Ile-Met 4-codon sector, however, appears to be a special case (see Discussion). In evolution, Phe may have invaded a 4-codon Leu sector (AAA, GAA, UAA, CAA), perhaps being recruited from outside the code. Arg appears to have invaded a 4-codon Ser sector (ACU, GCU, UCU, CCU) (Figure 6), apparently demonstrating movement of amino acids within the code. Coevolution of aaRS enzymes and tRNAs and the editing hypothesis Figure 11 shows how aaRS enzymes may have coevolved with tRNAs [1,2]. Before the code was substantially evolved, it is difficult to imagine tRNA recognition by proteins, and initial aminoacyl transfers may have been catalyzed by ribozymes [20,22,23]. A proposed sequence of events was developed according to the aaRS mechanisms now used to discriminate different archaeal tRNAs. As aaRS enzymes evolved, acceptor stems of tRNAs and the discriminator base (position 76; 73 in historic numbering) [9,13] may have been the most important initial determinants for discrimination. In archaea, most discriminators are A, so the discriminator base is only used for a subset of amino acids (i.e. generally A in archaea except: G) Asp, Ser, Arg, Asn; U) Thr, Cys; and C) His) . We posit that recognition of the anticodon of tRNAs by aaRS enzymes subsequently became a mechanism for tRNA specification that restricted further sectoring of the code. Only AlaRS and SerRS lack anticodon recognition domains. Later, the aaRS enzyme class (i.e. class I versus class II aaRS) became a determinant [1,2]. Without knowing the exact order of events, at some stage, longer V loops in tRNA Leu and tRNA Ser became important as determinants and anti-determinants for tRNA charging. From Figure 1 of a recent paper , many tRNAs appear to be derived from tRNA Leu and tRNA Ser, which could have driven tRNA Leu and tRNA Ser V loop expansions in the evolving code in order to discriminate partially radiated tRNAs that may attach related amino acids. In archaea, generally, other tRNAs do not have V loop expansions (i.e. tRNA Tyr and tRNA Sec (Sec for selenocysteine) in bacteria). Along the pathway, active sites of aaRS enzymes continued to evolve to exclude attachment of incorrect amino acids. This exclusion is more difficult for amino acid side chains that are uncharged and that form only one hydrogen bond (i.e. 
the left half of the codon-anticodon table), explaining why aaRS proofreading became such a dominant mechanism for the left half of the code. At a late stage, therefore, proofreading by aaRS enzymes is posited to have been recruited as a mechanism for discrimination mostly restricted to the left half of the codon-anticodon table (Figure 6), consistent with the editing hypothesis that aaRS proofreading maintained 4-codon sectors in the genetic code by suppressing further sectoring. For bacteria, only LysRS-IIB from the right half of the table is capable of proofreading. AaRS enzyme classes are structurally related enzymes for archaea, bacteria and eukaryotes, except for LysRS, which is typically structural class IE in archaea and class IIB in bacteria and eukarya [1,2]. GlyRS is class IIA in archaea and eukaryotes but class IID (or historically classified IIC) in many bacteria . Figure 11. Open in a new tab An approximate sequence of events for the requirement of different mechanisms for discrimination of tRNA identities by aaRS enzymes and the evolution of ribosome fidelity. Green text indicates aaRS proofreading (in archaea). Proofreading is not utilized for GlyRS, ArgRS and archaeal/eukaryotic ProRS (Figure 6). Glycine is the smallest amino acid, so the GlyRS synthetic active site is constrained to block loading of larger amino acids (PDB 4KR2) . Arginine is a large amino acid that is much less flexible than lysine. As with lysine, arginine is charged (+1), and arginine has significant hydrogen bonding potential. These distinguishing features of arginine are utilized in the ArgRS synthetic active site to exclude incorrect amino acids (PDB 1F7U) . Proline is the only encoded imino acid, so proline is readily distinguished in the ProRS active site from other encoded amino acids. ProRS proofreads in bacteria because of addition of a bacterial-specific editing domain to ProRS-IIA that is missing in ProRS-IIA of archaea and eukaryotes. Of course, aaRS editing and accurate cognate amino acid specification also suppress inaccurate charging of tRNAs with amino acids that are generated from metabolism but are not encoded. When the code was evolving, aaRS enzymes were likely more error-prone in attaching amino acids, supporting sectoring of the code via tRNA charging errors. Interestingly, the tRNAs added by eukaryotes (compared to bacteria) with adenine→inosine in the anticodon wobble position are mostly proofread by aaRS enzymes and, also, generally occupy 4-codon sectors on the left half of the genetic code table (Figure 6) [18,19]. In eukaryotes, all tRNAs on the left half of the chart utilize the A→I conversion except tRNA Met, which lacks an anticodon with encoded wobble A, and tRNA Phe, which occupies a 2-codon sector and, therefore, cannot adopt the A→I modification without substituting Phe for Leu in proteins. It appears that eukaryotes adopted a mechanism evolved in bacteria for tRNA Arg (ACG→ICG) in order to modify and stabilize the left half of the eukaryotic genetic code table. Perhaps, most interestingly, when wobble inosine is utilized, the synonymous GNN anticodon is suppressed (Figures 7 and 9), indicating that the broader mRNA synonymous codon recognition of inosine compared to G is positively selected. The adenine→inosine modification has only invaded sectors with 4 codons because inosine pairs A, C and U in mRNA codons. In eukaryotes, the only 4-codon sector that is not altered with the adenine→inosine modification is the Gly sector . 
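The restriction of A→I to 4-codon sectors can be checked mechanically from the wobble-pairing rules stated in the text (anticodon wobble I reads codon A, C and U; G reads C and U; U reads A and G; C reads G; unmodified A reads U). The sketch below applies those rules to the tRNA Ser (ACU→ICU) example discussed above; the helper names are ours, and only the standard assignments for the AGN codon box are included.

```python
# Sketch: which mRNA codons a tRNA anticodon reads, given the wobble rules in the text.
# Anticodons are written 5'->3' (positions 34-36); the anticodon pairs antiparallel with
# the codon, so codon positions 1-2 are the complements of anticodon positions 36 and 35,
# and codon position 3 comes from the wobble expansion of anticodon position 34.
WOBBLE_READS = {"I": "ACU", "G": "CU", "U": "AG", "C": "G", "A": "U"}  # anticodon 34 -> codon 3
COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def codons_read(anticodon: str) -> set:
    wobble, b35, b36 = anticodon[0], anticodon[1], anticodon[2]
    first_two = COMP[b36] + COMP[b35]                  # codon positions 1-2
    return {first_two + third for third in WOBBLE_READS[wobble]}

# Standard assignments for the AGN codon box:
AGN = {"AGU": "Ser", "AGC": "Ser", "AGA": "Arg", "AGG": "Arg"}

print(codons_read("GCU"))   # {'AGC', 'AGU'}: Ser codons only, so tRNA Ser (GCU) is safe
print(codons_read("ICU"))   # {'AGA', 'AGC', 'AGU'}: spills into the Arg 2-codon sector
print({c: AGN[c] for c in codons_read("ICU")})  # shows the Ser->Arg misreading risk
```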
In mitochondria, it appears that tRNAs encoding Leu, Val, Pro, Thr, Ala and Arg, which all convert wobble A→I in eukaryotes, must be imported from the cytosol, indicating a strong preference for utilizing tRNAs encoded in the cell nucleus with inosine in the wobble anticodon position. Because the mitochondrion was derived from a α-proteobacterial endosymbiont, mitochondria would encode tRNAs with wobble G to specify these amino acids. It appears that the mitochondria prefer to import nuclear-encoded eukaryotic tRNAs with inosine in the wobble position rather than to utilize mitochondrial tRNAs with G in the wobble position, indicating once again the importance of increasing ambiguity in tRNA reading synonymous mRNA codons. Import of tRNAs is a fascinating process supporting the mitochondria-eukaryote symbiotic relationship. Without a eukaryotic host to supply missing tRNAs, mitochondria would not be able to translate mitochondria-encoded mRNA. Discussion Alternate class I and class II aaRS folding Class I and class II aaRS enzymes are related by amino acid sequence homology identified in archaeal species (Figures 1–2; Supplementary Figure S1). A shared Zn-binding domain in GlyRS-IIA and ValRS-IA is identified (Figure 4). Class I aaRS enzymes have a N-terminal extension that can include a second Zn-binding domain, which may have been a determinant in distinct class I aaRS folding. Additionally, class I aaRS active site β-sheets and the active site “HIGH” motif are found within the class I-specific N-terminus, so the class I aaRS active site cannot be assembled without the N-terminal domain. Because class I and class II enzymes have largely incompatible protein folds and bind to opposite faces of tRNA, we posit that an ancestral aaRS enzyme folded in distinct class I and class II conformations for three reasons. First, the N-terminal extension in class I aaRS enzymes that includes the HIGH active site motif and active site β-sheets and that can include a Zn-binding domain helped to enforce the class I fold. Second, a set of three antiparallel β-sheets in class II aaRS enzymes would clash with the N-terminal Zn motif found in class I aaRS and block assembly of the class I aaRS active site (Figure 5). Third, opposite faces of cloverleaf tRNA bind class I and class II aaRS enzymes, and tRNA binding may have helped direct the alternate aaRS folds. As domains evolved to take on the appropriate fold, Zn-binding disappears from some domains that initially evolved around Zn binding (Figure 4). The more complex model that class I and class II aaRS enzymes arose from transcription and translation of an ancestral bi-directional gene [3–6] we find less likely. Early in evolution, Zn-binding appears to have directed the stability and folding conformations of large proteins such as aaRS enzymes and RNA polymerase. Over time, some Zn domains hardened in conformation so that Zn binding was no longer necessary (Figure 4). Based on the determinants for class I and class II aaRS folding identified here, domain swap experiments can likely switch the folding of the two aaRS structural forms. The maximal size of the genetic code The standard genetic code is generally considered to potentially encode 64 amino acids. Because adenine is not utilized in the anticodon wobble position in archaea, however, this reduces the number of utilized anticodons to 48 at the base of code evolution . When wobble A is encoded in tRNA, A→I modification occurs, and wobble G is suppressed in the synonymous anticodon (Figures 7 and 9). 
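The anticodon arithmetic used in this section and in the Introduction can be laid out explicitly. The short sketch below simply reproduces the counts stated in the text (64 codons; 48 anticodons once wobble A is excluded; 44 unique anticodons plus 3 stop codons once the three stop-corresponding anticodons and the generally unused tRNA Ile (UAU) are removed; and a ceiling of 32 code letters, i.e. 31 amino acids plus stops, once wobble ambiguity limits each 4-codon box to at most two meanings). It is bookkeeping for the paper's numbers, not an independent derivation.

```python
# Bookkeeping sketch of the anticodon counts stated in the text.
from itertools import product

BASES = "ACGU"
all_anticodons = ["".join(p) for p in product(BASES, repeat=3)]   # 64
no_wobble_A = [ac for ac in all_anticodons if ac[0] != "A"]       # 48: wobble A excluded
stop_anticodons = {"UUA", "CUA", "UCA"}                           # would pair UAA, UAG, UGA stops
rarely_used = {"UAU"}                                             # tRNA Ile (UAU), generally absent in archaea/bacteria
considered = [ac for ac in no_wobble_A if ac not in stop_anticodons | rarely_used]

print(len(all_anticodons), len(no_wobble_A), len(considered))     # 64 48 44

# Ceiling on code size: wobble ambiguity (U reads A/G; G or I reads the pyrimidine-ending
# codons) means each of the 16 NN- codon boxes can carry at most 2 distinct meanings.
codon_boxes = 4 * 4
max_letters = codon_boxes * 2
print(max_letters, "letters =", max_letters - 1, "amino acids + stops")  # 32 letters = 31 aas + stops
```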
The most heavily divided 4-codon sectors of the standard genetic code, those that encode amino acids rather than stop codons or Met, are divided into two 2-codon sectors. The reason that 2-codon sectors resisted further subdivision into 1-codon sectors encoding two different amino acids is that tRNA anticodons with wobble U and wobble C are read ambiguously to recognize mRNA codons with both wobble A and G. Anticodon wobble C is thought to mostly recognize codon wobble G but may have recognized mRNA codon A well enough to have supported ambiguous reading of mRNAs during the early evolution of the code. Interestingly, anticodon wobble C was not excluded from tRNA as strictly as was anticodon wobble A, and this observation requires further explanation. Because of ambiguous reading of mRNA codons by tRNAs, the largest number of amino acids that could be encoded using a triplet tRNA code is 32 (or 31 aas with stops). Because division of 4-codon sectors was limited by aaRS proofreading, evolutionary refinement of aaRS active sites, aaRS anticodon recognition and the A→I modification, the standard genetic code has only 20 aas with stops, and, with minor partial exceptions, the code has remained universal in the three domains of life.

Coevolution of aaRS accuracy and genetic code universality

There is a "chicken and egg" problem to consider in terms of aaRS evolution. Notably, there is no known mechanism to generate aaRS proteins until the code has evolved, and, to our knowledge, there is no clear model for making functional proteins with subsets of amino acids. At this time, we offer no simple solution to this problem. Ribozymes as small as 5 nt created in vitro can aminoacylate tRNAs [20,22,23], but these ribozyme functions appear now to be fully replaced by aaRS enzymes, so a natural record of aminoacylating ribozymes may not now exist. Because tRNA and aaRS enzymes must be coevolved [1,2], however, aaRS enzymes and proofreading by aaRS enzymes are considered with regard to evolution of the code. We note that aaRS proofreading, in archaea, is limited to the left half of the codon-anticodon table, which encodes only hydrophobic and neutral amino acids (Figure 6). ProRS, from the left half of the code, does not edit in archaea and eukarya, but ProRS edits in bacteria (Figure 6). Another partial exception to the left half rule is tRNA Ser (GCU). Ser is the only amino acid that is split into both the left and right halves of the table, and SerRS-IIA edits. We posit that a 4-codon sector encoding Ser (anticodons ACU, GCU, UCU, CCU) may have been invaded by Arg, probably before SerRS proofreading evolved to adequately resist sectoring. Also, because Ser is encoded within separated genetic code sectors, SerRS did not recognize the tRNA Ser anticodon for discrimination in accurate Ser attachment, which may have increased tRNA Ser charging errors, leading to Ser sensitivity to invasion by Arg [1,2]. In bacteria, LysRS (class IIB (editing) in most bacteria; class IE (non-editing) in most archaea) is also a partial exception (Figure 6). We observe that the third column and also the uppermost 4-codon sector of the fourth column of the codon-anticodon table are the most heavily innovated, indicating that evolution of aaRS proofreading inhibited code sectoring, limiting the expansion of the code. Based on this observation, we posit that errors in amino acid attachment to tRNA were important to continue sectoring the code by utilizing the wobble anticodon position. As errors become more difficult to make or to sustain, i.e.
because of aaRS synthetic site specificity (mostly the right half of the code) or because of aaRS editing (left half of the code), the code evolved toward closure and universality. Also, the tRNA cloverleaf structure and rugged RNA evolution may limit the potential size of the code. The advent of specific tRNA modifications (i.e. in bacteria and eukaryotes) can be assessed in expanding permitted anticodon contacts to mRNA [33,37,38]. Rugged evolution occurs when many or most substitutions are disruptive for structure, as expected for tRNA [39–41]. Expanding the code beyond 20 amino acids, therefore, may strain the capacity of tRNAs and aaRS enzymes to coevolve for adequate accuracy and discrimination. Evolution of tRNA covalent modifications supported innovation and refinement of the code (i.e. discrimination of tRNA Ile (UAU) and tRNA Met (CAU) in eukaryotes) [33,37,42]. Positive selection of tRNA anticodon wobble ambiguity Because tRNA wobble bases make ambiguous contacts with mRNA, a single tRNA can recognize multiple synonymous mRNA codons, but tRNA wobble ambiguity also limited the capacity for code expansions to encode new amino acids. Although there may be selection for tRNAs with specific purposes, particularly in complex eukaryotes , generally, selection was for increased ambiguity in reading tRNA anticodons. In evolution of the genetic code, tRNA anticodon wobble inosine is strongly preferred to guanine, which is strongly preferred to adenine (Figures 7 and 9). Anticodon wobble inosine recognizes A, C and U in mRNA. Anticodon wobble G recognizes C and U. Anticodon wobble A recognizes U, but tRNA wobble A recognizes mRNA wobble C poorly. We posit that the I>>G>>A preference reflects positive selection of increasing ambiguity in the tRNA anticodon wobble position without affecting the reading of synonymous mRNA codons. Because inosine recognizes A, C and U in mRNA codons, the A→I substitution strongly selects for, and can only occur in, 4-codon sectors. Similarly, anticodon wobble U can pair with both mRNA codon A and G. Anticodon wobble C pairs much more strongly with G than with A. We posit that U is generally preferred to C in the anticodon wobble position because tRNA wobble U recognizes synonymous mRNA wobble A and G more rapidly and readily than tRNA wobble C recognizes mRNA wobble A. It is also possible that G = C wobble pairs are (or were) too stable to be optimal for translation (i.e. gave slow tRNA release on the ribosome). The selection pressures at the inception of the code were different than subsequent selection pressures. Resistance to forming 1-codon sectors The reason that 4-codon sectors of the genetic code split into two 2-codon sectors around purine and pyrimidine wobble bases is that tRNA wobble bases are read ambiguously. The only 1-codon sectors are for tRNA Met (CAU) and tRNA Trp (CCA). The Ile-Met 4-codon sector is a special case (see below). In the Cys-Ter-Trp 4-codon sector, tRNA Trp (CCA) shares a 2-codon sector with a stop codon UGA (anticodon UCA), which is recognized in mRNA by a protein, not a tRNA. In mitochondria, however, anticodon UCA (corresponding to stop codon UGA) is utilized to encode Trp (Figure 8). Because of tRNA wobble ambiguity, the maximum coding potential of the standard genetic code is for 32 letters: 31 aas + stops. The Ile-Met sector Questions remain with regard to early evolution of the Ile-Met 4-codon sector of the standard genetic code. 
In archaea, typically only a single tRNA Ile (GAU) (Figure 9), two elongator tRNA Met (CAU) and one tRNA iMet (CAU) are found. The sectoring and early proliferation of tRNA Met (CAU) is unusual so near the base of code evolution and requires explanation. From analysis of archaeal tRNA radiations from the primordial cloverleaf tRNA Pri, it appears that tRNA Met and tRNA iMet may be derived from tRNA Ile, as might be expected from code structure . Furthermore, one tRNA Met and tRNA iMet appear to radiate further and further from tRNA Ile in more derived archaeal species. Perhaps the 4-codon Ile-Met sector can be viewed as a partially occupied 4-codon Ile sector, partly invaded by Met. Invasion of the Ile 4-codon sector by Met probably involved recruitment of Met from outside the code via inaccurate tRNA Ile (i.e. CAU) charging. Met invasion of Ile and tRNA Met proliferation were partly driven to establish the start signal for translation. Because Met (CAU) evolved at LUCA to discriminate three tRNA Met (CAU; 2 elongator and 1 initiator), at eukaryogenesis, discrimination of potentially synonymous Met (CAU) and Ile (UAU) could be supported by previously evolved tRNA modifications [33,37,42]. Evolution of the standard genetic code Three main hypotheses for evolution of the standard genetic code include: 1) variations on the Gamow hypothesis (the stereochemical hypothesis: that amino acids interact directly with RNAs, i.e. codons or anticodons, leading to matching of codons and anticodons with amino acids and evolution of the code); 2) the coevolution theory (that code complexity coevolved with advances in amino acid metabolism); and 3) the error minimization theory (that the code evolved to minimize tRNA charging and translation errors) . Recently, it was pointed out that these long-standing hypotheses may have limitations for furthering our understanding of code evolution . Here, we give a simple hypothesis partly relating to, and slightly at odds with, the error minimization theory. We posit that the standard genetic code evolved through mechanisms of inaccurate tRNA charging, tRNA anticodon mutation and tRNA diversification. Mechanisms that enforced tRNA charging accuracy, therefore, brought the code to universality. We posit that similar amino acids are encoded in neighboring sectors and often in the same column of the codon-anticodon table because sectoring was driven by two mechanisms. First, errors in aaRS-catalyzed amino acid attachments to tRNAs induced the division of sectors, generally involving recruitment of similar amino acids, from outside the code, that attached to initially similar tRNAs. Secondly, tRNA anticodon mutations could result in local migrations to a neighboring sector, moving similar amino acids to nearby positions within the code. Selection for incorporation of a new amino acid into proteins drove tRNAs to diverge and discriminate amino acid attachments, leading to a more complex code with an increased number of sectors encoding different amino acids. We posit that the code was built by sectoring in a series of stages described in a recent paper . Koonin and Novozhilov ask why the code is a triplet code . The code is triplet because of the structure of the tRNA anticodon loop, which forces a triplet register for two adjacent tRNAs bound to adjacent mRNA codons . 
In strict terms of coding, however, the code is almost a 2-nucleotide code, because of degeneracy in the anticodon wobble position, explaining why there are 20 amino acids + stops in the standard genetic code rather than a larger number (up to 31 aas + stops). Koonin and Novozhilov suggest that translation systems should be analyzed to understand code evolution . We identify two features of translation systems that are relevant. First of all, in the decoding center of the ribosome, proofreading of anticodon base pair attachments to mRNA codons, involving small ribosomal subunit conformational closure enabling EF-Tu and GTP hydrolysis, applies to the second and third anticodon positions only, not the first (wobble) position . For most amino acids, the tRNA wobble position was selected to broaden recognition of mRNA codons, supporting code degeneracy and making tRNAs more readily available for insertion of the encoded amino acid. Secondly, translation systems evolved around tRNA, so a focus on tRNA evolution helps to interpret genetic code evolution. The tRNA-centric view significantly simplifies the problem of standard genetic code evolution, i.e. by shrinking the relevant number of anticodons. Because the genetic code is degenerate, analyzing code evolution from the point of view of mRNA is deceptive, because all 64 codons are utilized in mRNA, but only 44 unique tRNA anticodons and 3 stop codons were utilized at the inception of the standard genetic code (LUCA and ancient archaea). Furthermore, because of tRNA wobble ambiguity, the maximal capacity of the genetic code only expands to 31 amino acids + stops, but aaRS proofreading, accurate aaRS synthetic site specification of amino acid substrates, aaRS anticodon recognition, ribosome conformational proofreading of the anticodon-codon interaction and perhaps the A→I modification limited code expansions to 20 amino acids by preserving 4-codon sectors. Evolving 1-codon sectors of the genetic code was strongly resisted particularly via aaRS and ribosome fidelity mechanisms. Ribosome proofreading the anticodon-codon interaction In a recent paper, we posit that the genetic code sectored from a 1→4→8→16→21 letter code (20 aas + stops) . The initial code evolved to utilize any mRNA sequence to synthesize polyglycine, used to stabilize protocells. According to this view, conformational tightening and EF-Tu and GTP proofreading of Watson-Crick base pairing between the anticodon and the codon in the second and third anticodon positions became necessary at the 8→16 letter stage. The 8 letter stage is characterized by resolution of purines and pyrimidines only, but not individual bases, in the first mRNA codon position and the corresponding third tRNA anticodon position. At the 8 letter stage of code evolution, reading the third anticodon position is similar to the sectoring of the wobble position of the standard code, indicating that ribosome proofreading was not yet evolved at this stage. In order to fully resolve A, G, C and U in the first codon position and the corresponding third anticodon position, conformational tightening and EF-Tu and GTP hydrolysis proofreading was essential. The model for sectoring of the genetic code, therefore, makes a prediction about the evolution of translational fidelity mechanisms that brought the code to universality. Correlation of aaRS proofreading and A→I modification In eukaryotes, there is strong correlation between aaRS editing and tRNA wobble A→I modification (Figure 6; bottom panel). 
Primarily, we attribute this correlation to 4-codon sectors. Proofreading by aaRS enzymes maintains 4-codon sectors by inhibiting tRNA charging errors that could lead to further sectoring. A→I modification is most utilized by eukaryotes, which are about 2.2 billion years old. The standard genetic code, by vast contrast, is probably >3.8 billion years old. Because the code is ancient and universal, eukaryotic innovations do not bear on the birth of the code, although eukaryotic innovations may have stabilized the eukaryotic code to prevent further sectoring and a possible escape by eukaryotes from code universality. A→I conversion is limited to 4-codon sectors, because tRNA wobble inosine recognizes mRNA wobble A, C and U. A→I modification in a 2-codon sector, therefore, spills into a neighboring 2-codon sector, causing translation errors. Much earlier in code evolution, tRNA charging errors induced sectoring, adding amino acids to the code. Now such errors are lethal because they induce translation errors. In bacteria, the Arg (ACG, GCG, UCG, CCG) 4-codon sector was protected by the A→I modification, but, in archaea, the Arg 4-codon sector was faithfully preserved without the A→I modification, perhaps because of the high specificity of the ArgRS synthetic active site, ArgRS anticodon recognition and EF-TU proofreading on the ribosome. Because Gly occupies a 4-codon sector of the code, this raises the question of why tRNA Gly (ACC) is not modified A→I in eukaryotes . GlyRS resists charging errors because of the small size of the synthetic active site, which made the Gly (ACC, GCC, UCC, CCC) sector resistant to subdivision. A similar argument can be made for the Arg (ACG, GCG, UCG, CCG) sector. ArgRS does not have a proofreading active site. The ArgRS synthetic active site accurately specifies Arg, however, because of the distinctive Arg side chain. Specificity of charging is enhanced because ArgRS recognizes the tRNA Arg anticodon. The Arg 4-codon sector resists further division in bacteria and eukarya because ArgRS charging is accurate and because the A→I modification limits sectoring. It is possible that the standard genetic code is universal (i.e. in archaea, bacteria and eukaryotes), in part, because aaRS proofreading, high aaRS synthetic site specificity, anticodon recognition by aaRS, EF-TU proofreading and A→I modification prevented introduction of new 2-codon sectors in the bacterial and eukaryotic genetic codes. Because tRNA charging errors resulted in code sectoring, evolving mechanisms that enhanced the accuracy of amino acid attachments to tRNAs led to closure and universality of the genetic code. The tRNA-centric view We advocate a tRNA-centric view of genetic code and ribosome evolution [9,13]. The complexity of the genetic code was limited by tRNA anticodon loop structure and tRNA wobble degeneracy reading mRNA. The primitive ribosome might have been a decoding scaffold and a mobile peptidyl transferase center. According to our view, cloverleaf tRNA was the essential biological intellectual property leading to the evolution of the code and to the encoding of proteins including aaRS enzymes. According to this view, cloverleaf tRNA Pri was a prerequisite to the coevolution of tRNAomes, aaRS enzymes, ribosomes and the genetic code. 
It appears to us that a small collection of ribozymes, most of which have been generated in vitro, is sufficient to convert a strange polymer and minihelix world into a cloverleaf tRNA world that leads inevitably to an RNA-protein world and cellular life. As described previously, tRNA-Pri evolved initially as an improved mechanism to synthesize polyglycine to stabilize protocells (as in bacterial cell walls) before the coevolution of tRNAomes, aaRS enzymes, ribosomes and the genetic code. Alternate views have been expressed by others [3,4,7,8,44].

Methods

NCBI Blast
NCBI Blast tools were used to analyze the relatedness of Pfu aaRS enzymes (Figure 1) and to obtain alignments (Figure 2; Supplementary Figure S1).

Anticodon wobble preference
Sequences for tRNAs were collected from the tRNA database and the genomic tRNA database [16,17]. Anticodon wobble position preference was analyzed for synonymous anticodons with A and G (ANN vs GNN) or U and C (UNN vs CNN).

Homology modeling
Pfu GlyRS-IIA was modeled on human GlyRS-IIA (PDB 4KQE) using the program Phyre2 [45,46]. Atomic coordinates were refined using the YASARA energy minimization server. The PDB file for Pfu GlyRS-IIA is Supplementary File 1 for this paper. UCSF Chimera was used to visualize molecules [47,48]. Zn was oriented to ligands as previously described. Because of low sequence similarity in shared Zn fingers, Pfu GlyRS-IIA and Tth ValRS-IA Zn fingers were aligned manually using Chimera.

Statistical methods
Anticodon wobble preference data sets were analyzed using a chi-square goodness of fit test. Because of the large datasets used and the differences observed, all comparisons were judged to be significant (p-value < 0.0001).

Supplementary Material
Supplemental_Data.pdf (ktrn-09-04-1467718-s001.pdf, 371.3 KB)

Acknowledgments
We thank Bruce Kowiatek (Blue Ridge Community and Technical College, WV) and Robert Root-Bernstein (Michigan State University, MI) for encouragement and helpful suggestions. Kristopher Opron (University of Michigan, Bioinformatics Core) helped with sequence alignments.

Disclosure of potential conflicts of interest
No potential conflicts of interest were disclosed.

References
.Perona JJ, Gruic-Sovulj I. Synthetic and editing mechanisms of aminoacyl-tRNA synthetases. Top Curr Chem. 2014;344:1–41. PMID:23852030 [DOI] [PubMed] [Google Scholar] .Giege R, Eriani G. Transfer RNA recognition and aminoacylation by synthetases. John Wiley & Sons, Ltd; 2014. [Google Scholar] .Carter CW., Jr. Coding of class I and II aminoacyl-tRNA synthetases. Adv Exp Med Biol. 2017;966:103–148. doi: 10.1007/5584_2017_93. PMID:28828732 [DOI] [PMC free article] [PubMed] [Google Scholar] .Rodin AS, Rodin SN, Carter CW., Jr On primordial sense-antisense coding. J Mol Evol. 2009;69:555–567. doi: 10.1007/s00239-009-9288-4. PMID:19956936 [DOI] [PMC free article] [PubMed] [Google Scholar] .Pham Y, Li L, Kim A, et al. A minimal TrpRS catalytic domain supports sense/antisense ancestry of class I and II aminoacyl-tRNA synthetases. Mol Cell. 2007;25:851–862. doi: 10.1016/j.molcel.2007.02.010. PMID:17386262 [DOI] [PubMed] [Google Scholar] .Rodin SN, Ohno S. Two types of aminoacyl-tRNA synthetases could be originally encoded by complementary strands of the same nucleic acid. Orig Life Evol Biosph. 1995;25:565–589. doi: 10.1007/BF01582025. PMID:7494636 [DOI] [PubMed] [Google Scholar] .Carter CW, Jr., Wills PR. Interdependence, reflexivity, fidelity, impedance matching, and the evolution of genetic coding. Mol Biol Evol. 2018;35:269–286.
doi: 10.1093/molbev/msx265. PMID:29077934 [DOI] [PMC free article] [PubMed] [Google Scholar] .Wills PR, Carter CW., Jr. Insuperable problems of the genetic code initially emerging in an RNA world. Biosystems. 2018;164:155–166. doi: 10.1016/j.biosystems.2017.09.006. PMID:28903058 [DOI] [PMC free article] [PubMed] [Google Scholar] .Pak D, Du N, Kim Y, et al. Rooted tRNAomes and evolu tion of the genetic code. Transcription. 2018. doi: 10.1080/21541264.2018.1429837. [DOI] [PMC free article] [PubMed] [Google Scholar] .Percudani R. Restricted wobble rules for eukaryotic genomes. Trends Genet. 2001;17:133–135. doi: 10.1016/S0168-9525(00)02208-3. PMID:11314654 [DOI] [PubMed] [Google Scholar] .Koonin EV, Novozhilov AS. Origin and evolution of the universal g enetic code. Annu Rev Genet. 2017. doi: 10.1146/annurev-genet-120116-024713. PMID:28853922 [DOI] [PubMed] [Google Scholar] .Massey SE. A neutral origin for error minimization in the genetic code. J Mol Evol. 2008;67:510–516. doi: 10.1007/s00239-008-9167-4. PMID:18855039 [DOI] [PubMed] [Google Scholar] .Pak D, Root-Bernstein R, Burton ZF. tRNA structure and evolution and standardization to the three nucleotide genetic code. Transcription. 2017;8:205–219. doi: 10.1080/21541264.2017.1318811. PMID:28632998 [DOI] [PMC free article] [PubMed] [Google Scholar] .Root-Bernstein R, Kim Y, Sanjay A, et al. tRNA evolution from the proto-tRNA minihelix world. Transcription. 2016;7:153–163. doi: 10.1080/21541264.2016.1235527. PMID:27636862 [DOI] [PMC free article] [PubMed] [Google Scholar] .Bertram G, Innes S, Minella O, et al. Endless possibilities: translation termination and stop codon recognition. Microbiology. 2001;147:255–269. doi: 10.1099/00221287-147-2-255. PMID:11158343 [DOI] [PubMed] [Google Scholar] .Juhling F, Morl M, Hartmann RK, et al. tRNAdb 2009: compilation of tRNA sequences and tRNA genes. Nucleic Acids Res. 2009;37:D159–D162. doi: 10.1093/nar/gkn772. PMID:18957446 [DOI] [PMC free article] [PubMed] [Google Scholar] .Chan PP, Lowe TM. GtRNAdb 2.0: an expanded database of transfer RNA genes identified in complete and draft genomes. Nucleic Acids Res. 2016;44:D184–D189. doi: 10.1093/nar/gkv1309. PMID:26673694 [DOI] [PMC free article] [PubMed] [Google Scholar] .Saint-Leger A, Bello C, Dans PD, et al. Saturation of recognition elements blocks evolution of new tRNA identities. Sci Adv. 2016;2:e1501860. doi: 10.1126/sciadv.1501860. PMID:27386510 [DOI] [PMC free article] [PubMed] [Google Scholar] .Rafels-Ybern A, Torres AG, Grau-Bove X, et al. Codon adaptation to tRNAs with I nosine modification at position 34 is widespread among Eukaryotes and present in two Bacterial phyla. RNA Biol. 2017:1–8. [DOI] [PMC free article] [PubMed] [Google Scholar] .Xiao H, Murakami H, Suga H, et al. Structural basis of specific tRNA aminoacylation by a small in vitro selected ribozyme. Nature. 2008;454:358–361. doi: 10.1038/nature07033. PMID:18548004 [DOI] [PubMed] [Google Scholar] .Rodin SN, Rodin AS. Origin of the genetic code: first aminoacyl-tRNA synthetases could replace isofunctional ribozymes when only the second base of codons was established. DNA Cell Biol. 2006;25:365–375. doi: 10.1089/dna.2006.25.365. PMID:16792507 [DOI] [PubMed] [Google Scholar] .Lee N, Bessho Y, Wei K, et al. Ribozyme-catalyzed tRNA aminoacylation. Nat Struct Biol. 2000;7:28–33. doi: 10.1038/71225. PMID:10625423 [DOI] [PubMed] [Google Scholar] .Turk RM, Chumachenko NV, Yarus M. Multiple translational products from a five-nucleotide ribozyme. Proc Natl Acad Sci U S A. 2010;107:4585–4589. 
doi: 10.1073/pnas.0912895107. PMID:20176971 [DOI] [PMC free article] [PubMed] [Google Scholar] .Valencia-Sanchez MI, Rodriguez-Hernandez A, Ferreira R, et al. Structural insights into the polyphyletic origins of glycyl tRNA synthetases. J Biol Chem. 2016;291:14430–14446. doi: 10.1074/jbc.M116.730382. PMID:27226617 [DOI] [PMC free article] [PubMed] [Google Scholar] .Smith TF, Hartman H. The evolution of class II aminoacyl-tRNA synthetases and the first code. FEBS Lett. 2015;589:3499–3507. doi: 10.1016/j.febslet.2015.10.006. PMID:26472323 [DOI] [PubMed] [Google Scholar] .O'Donoghue P, Luthey-Schulten Z. On the evolution of structure in aminoacyl-tRNA synthetases. Microbiol Mol Biol Rev. 2003;67:550–573. doi: 10.1128/MMBR.67.4.550-573.2003. PMID:14665676 [DOI] [PMC free article] [PubMed] [Google Scholar] .Qin X, Deng X, Chen L, et al. Crystal structure of the wild-type human GlyRS Bound with tRNA(Gly) in a productive conformation. J Mol Biol. 2016;428:3603–3614. doi: 10.1016/j.jmb.2016.05.018. PMID:27261259 [DOI] [PubMed] [Google Scholar] .Deng X, Qin X, Chen L, et al. Large conformational changes of insertion 3 in human glycyl-tRNA synthetase (hGlyRS) during catalysis. J Biol Chem. 2016;291:5740–5752. doi: 10.1074/jbc.M115.679126. PMID:26797133 [DOI] [PMC free article] [PubMed] [Google Scholar] .Fukai S, Nureki O, Sekine S, et al. Structural basis for double-sieve discrimination of L-valine from L-isoleucine and L-threonine by the complex of tRNA(Val) and valyl-tRNA synthetase. Cell. 2000;103:793–803. doi: 10.1016/S0092-8674(00)00182-3. PMID:11114335 [DOI] [PubMed] [Google Scholar] .Salinas-Giege T, Giege R, Giege P. tRNA biology in mitochondria. Int J Mol Sci. 2015;16:4518–4559. doi: 10.3390/ijms16034518. PMID:25734984 [DOI] [PMC free article] [PubMed] [Google Scholar] .Salinas T, Duby F, Larosa V, et al. Co-evolution of mitochondrial tRNA import and codon usage determines translational efficiency in the green alga Chlamydomonas. PLoS Genet. 2012;8:e1002946. doi: 10.1371/journal.pgen.1002946. PMID:23028354 [DOI] [PMC free article] [PubMed] [Google Scholar] .Schneider A. Mitochondrial tRNA import and its consequences for mitochondrial translation. Annu Rev Biochem. 2011;80:1033–1053. doi: 10.1146/annurev-biochem-060109-092838. PMID:21417719 [DOI] [PubMed] [Google Scholar] .Agris PF, Narendran A, Sarachan K, et al. The importance of being modified: the role of RNA modifications in translational fidelity. Enzymes. 2017;41:1–50. doi: 10.1016/bs.enz.2017.03.005. PMID:28601219 [DOI] [PMC free article] [PubMed] [Google Scholar] .Agris PF, Vendeix FA, Graham WD. tRNA's wobble decoding of the genome: 40 years of modification. J Mol Biol. 2007;366:1–13. doi: 10.1016/j.jmb.2006.11.046. PMID:17187822 [DOI] [PubMed] [Google Scholar] .Qin X, Hao Z, Tian Q, et al. Cocrystal structures of glycyl-tRNA synthetase in complex with tRNA suggest multiple conformational states in glycylation. J Biol Chem. 2014;289:20359–20369. doi: 10.1074/jbc.M114.557249. PMID:24898252 [DOI] [PMC free article] [PubMed] [Google Scholar] .Delagoutte B, Moras D, Cavarelli J. tRNA aminoacylation by arginyl-tRNA synthetase: induced conformations during substrates binding. EMBO J. 2000;19:5599–5610. doi: 10.1093/emboj/19.21.5599. PMID:11060012 [DOI] [PMC free article] [PubMed] [Google Scholar] .Agris PF, Eruysal ER, Narendran A, et al. Celebrating wobble decoding: half a century and still much is new. RNA Biol. 2017:1–17. [DOI] [PMC free article] [PubMed] [Google Scholar] .Vare VY, Eruysal ER, Narendran A, et al. 
Chemical and conforma tional diversity of modified nucleosides affects tRNA structure and function. Biomolecules. 2017;7. doi: 10.3390/biom7010029. PMID:28300792 [DOI] [PMC free article] [PubMed] [Google Scholar] .Novozhilov AS, Wolf YI, Koonin EV. Evolution of the genetic code: partial optimization of a random code for robustness to translation error in a rugged fitness landscape. Biol Direct. 2007;2:24. doi: 10.1186/1745-6150-2-24. PMID:17956616 [DOI] [PMC free article] [PubMed] [Google Scholar] .Curtis EA, Bartel DP. Synthetic shuffling and in vitro selection reveal the rugged adaptive fitness landscape of a kinase ribozyme. RNA. 2013;19:1116–1128. doi: 10.1261/rna.037572.112. PMID:23798664 [DOI] [PMC free article] [PubMed] [Google Scholar] .Kun A, Szathmary E. Fitness landscapes of functional RNAs. Life (Basel). 2015;5:1497–1517. PMID:26308059 [DOI] [PMC free article] [PubMed] [Google Scholar] .Vendeix FA, Dziergowska A, Gustilo EM, et al. Anticodon domain modifications contribute order to tRNA for ribosome-mediated codon binding. Biochemistry. 2008;47:6117–6129. doi: 10.1021/bi702356j. PMID:18473483 [DOI] [PubMed] [Google Scholar] .Demeshkina N, Jenner L, Westhof E, et al. A new understanding of the decoding principle on the ribosome. Nature. 2012;484:256–259. doi: 10.1038/nature10913. PMID:22437501 [DOI] [PubMed] [Google Scholar] .Zamudio GS, Jose MV. Phenotypic graphs and evolution unfold the standard genetic code as the optimal. Orig Life Evol Biosph. 2018;48:83–91. doi: 10.1007/s11084-017-9552-3. PMID:29082465 [DOI] [PubMed] [Google Scholar] .Kelley LA, Mezulis S, Yates CM, et al. The Phyre2 web portal for protein modeling, prediction and analysis. Nat Protoc. 2015;10:845–858. doi: 10.1038/nprot.2015.053. PMID:25950237 [DOI] [PMC free article] [PubMed] [Google Scholar] .Kim Y, Benning N, Pham K, et al. Homology threading to generate RNA polymerase structures. Protein Expr Purif. 2018;147:13–16. doi: 10.1016/j.pep.2018.02.002. PMID:29444461 [DOI] [PubMed] [Google Scholar] .Yang Z, Lasker K, Schneidman-Duhovny D, et al. UCSF Chimera, MODELLER, and IMP: an integrated modeling system. J Struct Biol. 2012;179:269–278. doi: 10.1016/j.jsb.2011.09.006. PMID:21963794 [DOI] [PMC free article] [PubMed] [Google Scholar] .Pettersen EF, Goddard TD, Huang CC, et al. UCSF Chimera–a visualization system for exploratory research and analysis. J Comput Chem. 2004;25:1605–1612. doi: 10.1002/jcc.20084. PMID:15264254 [DOI] [PubMed] [Google Scholar] Associated Data This section collects any data citations, data availability statements, or supplementary materials included in this article. 
KOZENY-CARMAN EQUATION REVISITED
Jack Dvorkin -- 2009

Abstract
The Kozeny-Carman equation is often presented as permeability versus porosity, grain size, and tortuosity. When it is used to estimate permeability evolution versus porosity, some of these arguments (e.g., the grain size and tortuosity) are held constant. Here we theoretically explore the internal consistency of this assumption and offer alternative forms for the Kozeny-Carman equation. The only advantage of these forms over the one traditionally used is their internal consistency. Such analytical solutions cannot replace measurements, physical and digital, but can rather serve for quality control of physical and digital data.

1. Problem Formulation
Traditionally, the Kozeny-Carman equation relates the absolute permeability k_absolute to porosity φ and grain size d as

    k_absolute ~ d²φ³.   (1.1)

This form is frequently employed to mimic permeability versus porosity evolution in datasets, such as in Fontainebleau sandstone (Bourbie and Zinszner, 1985) or the Finney pack (Finney, 1970). During such calculations, the grain size d is typically kept constant. We find at least two inconsistencies in this approach: (a) the Kozeny-Carman equation has been derived for a solid medium with pipe conduits, rather than for a granular medium, and (b) even if a grain size is used in this equation, it is not obvious that it does not vary with varying porosity (Figures 1.1 and 1.2). Bearing this argument in mind, we explore how permeability can be predicted consistently within the Kozeny-Carman formalism, by varying the radii of the conduits, their number, and type. We find that such a consistent approach is possible. However, it requires additional assumptions, specifically regarding tortuosity evolution during porosity reduction. In the end we arrive at alternative forms of the Kozeny-Carman equation, which still should not be used to predict permeability, but instead to quality-control physical and digital experimental data.

Figure 1.1. Cross-sections of four Fontainebleau sandstone samples with decreasing porosity (porosity values posted on top of each image: 0.072, 0.109, 0.167, 0.250). The scale bar in each image is 0.5 mm. We argue that it is not obvious which parameters (grain size, conduit size, or the number of conduits) change during porosity reduction.

Figure 1.2. Cross-sections of a Finney pack for uniformly increasing radius of each sphere (from 1.00 to 1.45 mm with 0.05 mm increment, left to right and top to bottom) and respectively decreasing porosity (posted on top of each image). As in the images in Figure 1.1, it is not immediately obvious which parameters (grain size, conduit size, or the number of conduits) change during porosity reduction.

2. Definition of Absolute Permeability
The definition of the absolute permeability k_absolute of porous rock comes from Darcy's equation (e.g., Mavko et al., 1998)

    Q = -(k_absolute A / µ) (dP/dx),   (2.1)

where Q is the volume flux through the sample (in, e.g., m³/s); A is the cross-sectional area of the sample (in, e.g., m²); µ is the dynamic viscosity of the fluid (in, e.g., Pa·s, with 1 cP = 10⁻³ Pa·s); and dP/dx is the pressure drop across the sample divided by the length of the sample (in, e.g., Pa/m).
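To make the sign and unit conventions in Equation (2.1) concrete, here is a minimal Python sketch that inverts Darcy's law for the permeability. The flux, area, viscosity, and pressure-gradient values below are illustrative assumptions, and the millidarcy conversion (1 mD ≈ 9.869·10⁻¹⁶ m²) is the standard definition, not a value from this paper.

    # Minimal sketch of Equation (2.1): given a measured volume flux Q, sample
    # geometry, fluid viscosity, and pressure gradient, back out k_absolute.
    def darcy_permeability(Q, A, mu, dP_dx):
        """Absolute permeability from Darcy's law, Q = -(k*A/mu)*dP/dx."""
        return -Q * mu / (A * dP_dx)

    if __name__ == "__main__":
        Q = -1.0e-8      # volume flux, m^3/s (negative: flow down the pressure gradient)
        A = 1.0e-4       # cross-sectional area, m^2
        mu = 1.0e-3      # water viscosity, Pa*s
        dP_dx = 1.0e5    # pressure drop per unit length, Pa/m
        k = darcy_permeability(Q, A, mu, dP_dx)
        print(f"k = {k:.3e} m^2 = {k / 9.869e-16:.1f} mD")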
3. Flow Through a Circular Pipe
The equation for laminar viscous flow in a pipe of radius b is

    ∂²u/∂r² + (1/r) ∂u/∂r = (1/µ) dP/dx,   (3.1)

where u is the velocity of the fluid in the axial (x) direction; µ is the dynamic viscosity of the fluid; dP/dx is the pressure gradient in the axial direction; and r and x are the radial and axial coordinates, respectively. A general solution of Equation (3.1) is

    u = Ã + B̃r² + C̃ ln r,   (3.2)

where Ã, B̃, and C̃ are constants. It follows from Equation (3.2) that

    ∂u/∂r = 2B̃r + C̃/r,   ∂²u/∂r² = 2B̃ − C̃/r².   (3.3)

By substituting the expressions from Equation (3.3) into Equation (3.1) we find that

    (2B̃ − C̃/r²) + (2B̃ + C̃/r²) = (1/µ) dP/dx,   (3.4)

which means that

    B̃ = (1/(4µ)) dP/dx.   (3.5)

To avoid a singularity at r = 0 we need to assume that C̃ = 0 in Equation (3.2). Next, we employ the no-slip condition u = 0 at r = b:

    u = Ã + B̃r² = Ã + (1/(4µ)) (dP/dx) r², and at r = b, Ã + (1/(4µ)) (dP/dx) b² = 0,   (3.6)

which means

    Ã = −(1/(4µ)) (dP/dx) b²   (3.7)

and

    u = −(1/(4µ)) (dP/dx) b² (1 − r²/b²).   (3.8)

The total volume flux through the pipe is

    q = −(πb⁴/(8µ)) (ΔP/l),   (3.9)

where l is the length of the pipe, ΔP is the pressure head along the length of the pipe, and the pressure gradient dP/dx is replaced with ΔP/l.

4. Absolute Permeability – Round Pipe
Assume that a pore space is made of N identical parallel round pipes embedded in a solid block at an angle α to its horizontal face (Figure 4.1). The horizontal pressure head across the block is ΔP. The length of each pipe inside the block is

    l = L/sin α = Lτ,   (4.1)

where L is the horizontal length of the block and, by definition, τ = 1/sin α is the tortuosity. Using Equations (3.9) and (4.1), we obtain the total flux through N pipes as

    Q = Nq = −N (πb⁴/(8µ)) (ΔP/(Lτ)) = −N (πb⁴/(8µτ)) (dP/dx) = −(Nπb²τ) (b²/(8µτ²)) (dP/dx),   (4.2)

where ΔP/(Lτ) is the pressure gradient across the pipe. The porosity of the block due to the pipes is

    φ = Nπb²l/(AL) = Nπb²τ/A,   (4.3)

where A is the cross-sectional area of the block, the same as used in Equation (2.1).

Figure 4.1. Solid block with a pipe used for the Kozeny-Carman derivations (left). Notations are explained in the text. To the right, we show a cross-section of an open pipe and that of the same pipe with a concentric solid kernel.

By combining Equations (4.2) and (4.3), we obtain

    Q = −φ (b²/(8τ²)) (A/µ) (dP/dx),   (4.4)

which means (using the definition of absolute permeability) that

    k_absolute = b²φ/(8τ²).   (4.5)

Let us next introduce another characteristic of the pore space, the specific surface area s, which, by definition, is the ratio of the pore surface area to the total volume of the block. For the block permeated by pipes,

    s = 2Nπbl/(AL) = 2Nπbτ/A = (Nπb²τ/A)(2/b) = 2φ/b,   (4.6)

and, therefore, b = 2φ/s and

    k_absolute = (1/2) φ³/(s²τ²).   (4.7)

Finally, let us combine Equations (2.1) and (4.2) to obtain

    k_absolute = (N/(Aτ)) (πb⁴/8).   (4.8)

5. Absolute Permeability – Concentric Pipe
Consider fluid flow through a round pipe of radius b with a concentric solid kernel of radius a inside (Figure 4.1). Equations (3.1) and (3.2) are still valid for the flow inside the annular gap formed by the pipe and kernel. The solution for the velocity u inside the annular gap is obtained from Equation (3.2) using the no-slip (u = 0) boundary conditions at r = b and r = a:

    u = −(1/(4µ)) (dP/dx) b² [(1 − r²/b²) − (1 − a²/b²) ln(b/r)/ln(b/a)].
(5.1) A comparison of the radial velocity field according to Equation (5.1) and (3.8) is displayed in Figure 5.1. Figure 5.1. Normalized velocity of fluid versus the normalized radius of a circular pipe for (a) flow in a circular pipe without a kernel and (b) annular floe in a pipe with a kernel for the radius of the kernel 0.1 of that of the pipe. By integrating the right-hand part of this equation times € 2πr from € a to € b and with respect to € r we obtain the flux through the annular gap: € q = −π 8µ ΔP l b 4(1−a 2 b 2 )[1+ a 2 b 2 + (1−a 2 b 2 ) 1 ln(a/b)]. (5.2) Let us remember that for a pipe without a kernel, € q = −πb 4 8µ ΔP l . (5.3) We can arrive at this expression from Equation (5.2) if € a = 0 and € lna →−∞. 8 Also, if € a →b, the infinity in the denominator of the third term in the square brackets in Equation (5.2) has the same order as that in the numerator, and € q →0. In Figure 5.2 we display the ratio € ξ of the flux computed according to Equation (5.2) to that according to Equation (5.3): € ξ = (1−a 2 b 2 )[1+ a 2 b 2 + (1−a 2 b 2 ) 1 ln(a/b)], (5.4) which behaves predictably. Figure 5.2. Ratio of annular flux to that through a round pipe versus the normalized radius of a kernel. The total flux through € N pipes with a kernel is € Q = Nq = −N π 8µ ΔP Lτ b 4(1−a 2 b 2 )[1+ a 2 b 2 + (1−a 2 b 2 ) 1 ln(a/b)]. (5.5) Hence the absolute permeability is € kabsolute = N Aτ πb 4 8 (1−a 2 b 2 )[1+ a 2 b 2 + (1−a 2 b 2 ) 1 ln(a/b)]. (5.6) The porosity of this block is now € φ = Nπ(b 2 −a 2)l AL = Nπ(b 2 −a 2)τ A . (5.7) The specific surface area is 9 € s = N2π(b + a)l AL = N2π(b + a)τ A = Nπ(b 2 −a 2)τ A 2 b −a = 2φ b −a . (5.8) As a result, € kabsolute = φ 8τ 2 b 2[1+ a 2 b 2 + (1−a 2 b 2 ) 1 ln(a/b)]. (5.9) 6. Permeability versus Porosity Within the above formalism, one may envision at least three porosity variation scenarios: (a) the number of the pipes € N varies, (b) the number € N remains constant, but the radius of the pipes € b varies, and (c) the number € N remains constant and so does the radius of the pipes € b, but concentric kernels of radius € a grow inside the pipes (Figure 6.1). Original Block Porosity Reduces with the Number of Pipes Porosity Reduces with the Radius of Pipes Porosity Reduces with the Radius of Kernels Figure 6.1. Porosity reduction from that of the original block (left) by the reduction of the number of the pipes (middle), the radius of a pipe (third to the right), and radius of the kernels. Consider a solid block with a square cross-section with a 10-3 m side and the resulting cross-sectional area € A = 10-6 m2. It is penetrated by € N = 50 identical round pipes with radius € b = 2.83·10-5 m. Also assume € τ = 2.5. The resulting porosity is € φ = € Nπb 2τ /A = 0.30. The resulting permeability, according to Equation (4.8), is € kabsolute = 4.772·10-5 m2 = 4772 mD. Next, we alter the original block (left image in Figure 6.1) according to the proposed three porosity reduction scenarios. The results are shown in Figure 6.2, where we also display the classical Fontainebleau sandstone data as well as data for North Sea sand (Troll field). This figure also explains our choice of the solid block and pipe parameters earlier in this section – the numbers selected helped match the porosity and permeability 10 of the original block to those of the highest-porosity Fontainebleau sample. Figure 6.2. Permeability versus porosity according to scenarios a, b, and c. Open symbols are for measured permeability in Fontainebleau sandstone. 
Filled symbols are for measured permeability in sand samples from Troll field offshore Norway. Figure 6.2 also indicates that none of the proposed simple models reproduces the trends present in real data. 7. Third Dimension and Tortuosity Modeling rock as a block with fixed cross-section, clearly does not produce permeability results that match laboratory data. Let us hypothesize that the effect of the third dimension can be modeled by varying the tortuosity € τ . Specifically, within the Kozeny-Carman formalism, let us assume that € τ varies with porosity € φ. Consider two candidate equations for this dependence: € τ = φ −1.2, (7.1) derived from laboratory contaminant diffusion experiments by Boving and Grathwohl (2001) and € τ = (1+ φ −1)/2, (7.2) theoretically derived by Berryman (1981). 11 At € φ = 0.3, these two equations give € τ = 4.24 and 2.17, respectively. Let us next repeat out calculations of permeability for the three porosity evolution scenarios, but this time with the tortuosity varying versus porosity. Also, to keep the scale consistent, we will scale Equations (7.1) and (7.2) to yield € τ = 2.5 at € φ = 0.3 as follows € τ = 0.590φ −1.2 (7.3) and € τ = 0.576(1+ φ −1). (7.4) The results shown in Figure 7.1 indicate that both the above equations produce similar results. Permeability calculated for porosity reduction scenario with contracting pipe size matches the Fontainebleau data in the medium-to-high porosity range but fails for low porosity. Figure 7.1. Permeability versus porosity according to scenarios a, b, and c with varying tortuosity. Left, according to Equation (7.3) and right, according to Equation (7.4). Our final attempt to match the Fontainebleau data is to assume that the tortuosity becomes infinity at some small percolation porosity € φ p (following Mavko and Nur, 1997). The resulting equations for tortuosity become € τ = 0.590(φ −φ p) −1.2 (7.5) 12 and € τ = 0.576[1+ (φ −φ p) −1]. (7.6) The resulting permeability curves for the case of shrinking pipes and € φ p = 0.025 are displayed in Figure 7.2. This final match appears to be satisfactory. Figure 7.2. Permeability versus porosity for shrinking pipes and for tortuosity given by Equation (7.4) – top curve, Equation (7.5) – middle curve, and Equation (7.6) – bottom curve. The resulting forms of the Kozeny-Carman equation obtained by combining Equations (4.5) and (4.7) with Equation (7.6) are, respectively, € kabsolute = 0.357b 2 φ [1+ (φ −φ p) −1] 2 (7.7) and € kabsolute =1.507s −2 φ 3 [1+ (φ −φ p) −1] 2 . (7.8) Some other reported relations between € τ and € φ are: € τ = 0.67φ −1, (7.9) theoretically derived based on the assumption of fractal pore geometry (Pape et al., 1998) and 13 € τ =1.8561−0.715φ, 0.1< φ < 0.5; τ = 2.1445 −1.126φ, 0.3 < φ < 0.5; τ = −2.1472 + 5.244φ, 0.6 < φ <1.0; (7.10) derived from laboratory fluid flow experiments on textiles, kaolinite, and soil by Salem and Chilingarian (2000). Tortuosity according to Equations (7.1), (7.2), (7.9), and (7.10) is plotted versus porosity in Figure 7.3). The large difference between individual curves is likely due to the materials and models these curves were derived for. Figure 7.3. Tortuosity versus porosity according to equations in the text. Equation numbers are posted next to the curves. 8. Kozeny-Carman Equation with Grain Size The Kozeny-Carman formalism operates with pipe conduits. However, its common interpretation for clastic sediment attempts to operate with the grain size. 
Such a transformation is possible if one considers permeability Equation (4.7) € kabsolute = 1 2 φ 3 s 2τ 2 (8.1) and relates the specific surface area € s to the grain size. This is possible if we consider a dense random pack of € M identical spherical grains with radius € r and porosity € φ0. The volume of each individual grain is € (4 /3)πr 3 and the total volume of the pack is hence € (4 /3)Mπr 3 /(1−φ0). The surface area of the pore 14 space is € 4Mπr 2. Therefore, for this specific case, € s = 3(1−φ0)/r = 6(1−φ0)/d, (8.2) where € d = 2 € r. Note that this equation is only valid for a sphere pack with porosity € φ0 ≈ 0.36. If we assume that the same equation applies to the entire porosity range, which is clearly invalid since € s should be generally decreasing with decreasing porosity, we obtain form Equations (8.1) and (8.2) € kabsolute = r 2 18 φ 3 (1−φ) 2τ 2 = d 2 72 φ 3 (1−φ) 2τ 2 , (8.3) where € d = 2 € r. We can also modify this last equation by introducing a percolation porosity € φ p (Mavko and Nur, 1997) as € kabsolute = d 2 72 (φ −φ p) 3 (1−φ + φ p) 2τ 2 , (8.4) The corresponding curve for € φ p = 0.025, € d = 0.25 mm = 0.00025 m, and constant tortuosity € τ = 0.25 is plotted in Figure 8.1. Figure 8.1. Black curve: permeability versus porosity for shrinking pipes with the tortuosity given by Equation (7.6). Red curve: permeability versus porosity according to Equation (8.4) with parameters described in the text. 15 9. Conclusion Consistency is possible in deriving and applying Kozeny-Carman equation for permeability as a function of porosity. Any such derivation inherently requires such idealized and, generally, nonexistent in real rock parameters as grain size, pore size, and tortuosity. Yet, the Kozeny-Carman equation can be made to mimic some experimental trends and, therefore, serve as a quality-control tool for physical and digital experimental results. References Berryman, J.G., 1981, Elastic wave propagation in fluid-saturated porous media, Journal of Acoustical Society of America, 69, 416-424. Bourbie, T., and Zinszner, B., 1985, Hydraulic and acoustic properties as a function of porosity in Fontainebleau Sandstone, Journal of Geophysical Research, 90, 11,524-11,532. Boving, T.B., and Grathwohl, P., 2001, Tracer diffusion coefficients in sedimentary rocks: correlation to porosity and hydraulic conductivity, Journal of Contaminant Hydrology, 53, 85-100. Finney, J., 1970, Random packing and the structure of simple liquids (The geometry of random close packing), Proceedings of the Royal Society, 319A, 479. Mavko, G., and Nur, A., 1997, The effect of a percolation threshold in the Kozeny-Carman relation, Geophysics, 62, 1480-1482. Pape, H., Clauser, C., and Iffland, J., 1998, Permeability prediction for reservoir sandstones and basement rocks based on fractal pore space geometry, SEG Expanded Abstracts, SEG 1998 Meeting. Salem, H., and Chilingarian, G.V., 2000, Influence of porosity and direction of flow on tortuosity in unconsolidated porous media, Energy Sources, 22, 207-213. 16 Appendix: Specific Surface Area of the Finney Pack with Expanding Spheres We select a cubic subset of the Finney pack and gradually and uniformly expand the radius of each sphere from 1 to 1.5 mm. The evolution of the porosity of these packs versus the radius of a sphere is shown in Figure A.1. Figure A.1. Left – porosity of a Finney pack decreasing with the increasing radius of each sphere. Middle – the specific surface area decreasing with the increasing radius of each sphere. 
Right – the specific surface area increasing with increasing porosity (black). The red curve is computed from Equation (A.1) with a fixed sphere radius of 1 mm. The blue curve is also computed from Equation (A.1) but with the respectively increasing sphere radius.

In the same figure we display the calculated specific surface area versus the radius and the specific surface area versus the porosity. In the latter frame, we also display a theoretical curve according to the equation

    s = 3(1 − φ)/r   (A.1)

for varying porosity and fixed radius r = 1 and for varying porosity and varying radius. Neither of these two theoretical curves even qualitatively matches the computed specific surface area.
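As a numerical companion to the forms discussed above, the following Python sketch evaluates the pipe-based Kozeny-Carman form (4.7) and the grain-size form with a percolation porosity (8.4), using the sphere-pack specific surface area of Equation (A.1). All input values (porosity, tortuosity, sphere radius, percolation porosity) are illustrative assumptions rather than data from this report.

    # Illustrative sketch (not the author's code) of two Kozeny-Carman forms.
    def kc_pipes(phi, s, tau):
        """Equation (4.7): k = phi^3 / (2 s^2 tau^2)."""
        return phi**3 / (2.0 * s**2 * tau**2)

    def kc_grains(phi, d, tau, phi_p=0.025):
        """Equation (8.4): k = d^2/72 * (phi - phi_p)^3 / ((1 - phi + phi_p)^2 tau^2)."""
        return d**2 / 72.0 * (phi - phi_p)**3 / ((1.0 - phi + phi_p)**2 * tau**2)

    if __name__ == "__main__":
        phi, tau = 0.30, 2.5
        r = 1.0e-4                      # sphere radius, m (illustrative)
        s = 3.0 * (1.0 - phi) / r       # specific surface area from Equation (A.1)
        print("pipe form:  k = %.3e m^2" % kc_pipes(phi, s, tau))
        print("grain form: k = %.3e m^2" % kc_grains(phi, 2.0 * r, tau))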
Revisiting Fermat's Factorization Method

Gajraj Kuldeep and Rune Hylsberg Jacobsen
{gkuldeep,rhj}@ece.au.dk, Aarhus University, Denmark

Abstract. This paper addresses the problem of factoring composite numbers by introducing a novel approach to represent their prime divisors. We develop a method to efficiently identify smaller divisors based on the difference between the primes involved in forming the composite number. Building on these insights, we propose an algorithm that significantly reduces the computational complexity of factoring, requiring half as many iterations as traditional quadratic residue-based methods. The presented algorithm offers a more efficient solution for factoring composite numbers, with potential applications in fields such as cryptography and computational number theory.

1 Introduction

Integer factorization has been one of the most fascinating problems in mathematics and computer science, and it has been studied extensively for centuries. The difficulty of decomposing a large composite number into its prime factors forms the foundation of many cryptographic systems. Among these, the RSA public key algorithm stands out as one of the most well-known applications of integer factorization. RSA relies on the hardness of factorizing the product of two large prime numbers, a task which is computationally infeasible for classical computers when the numbers involved are sufficiently large.

Despite its ancient roots, integer factorization has gained renewed interest in modern times due to its implications in cryptography and security. In fact, the security of much of today's digital communication, including secure web transactions, is underpinned by the assumption that efficient methods for factoring large integers do not exist. As of now, no known classical algorithms can factor large integers efficiently, making factorization a cornerstone of public-key cryptography.

Over the years, several factorization algorithms have been developed, ranging from basic methods like trial division and Fermat's method to more advanced approaches such as the quadratic sieve and the general number field sieve (GNFS), which is currently the fastest known classical algorithm for large numbers [1, 2]. However, with the advent of quantum computing, factorization has become a topic of intense scrutiny. Quantum algorithms, particularly Shor's algorithm, offer a polynomial-time solution to the integer factorization problem, potentially threatening the security of classical cryptographic schemes [3, 4].

Existing methods, such as those based on quadratic residues, often require a significant number of iterations to achieve results. This motivates the need for more efficient algorithms that can reduce computational effort. In this work, we seek to develop a more effective approach by representing prime divisors and identifying smaller divisors of the difference between the primes. Our aim is to design an algorithm that reduces the number of iterations required, offering a faster alternative to traditional methods.

2 Fermat's Method

Fermat's factorization method is based on the idea that any odd integer N can be represented as the difference of two squares:

    N = x² − y² = (x − y)(x + y).

Where N = PQ for two primes P and Q, Fermat's method seeks to express N as N = (x − y)(x + y). This implies

    x = (P + Q)/2,   y = (P − Q)/2.

The key idea is to search for integers x and y such that x² − N becomes a perfect square. Fermat's method works efficiently if P and Q are close to each other, i.e., when P ≈ Q. The algorithm proceeds by starting with x = ⌈√N⌉ and incrementing x until x² − N = y² for some integer y. Once such an x and y are found, the factors of N can be computed as P = x + y and Q = x − y.
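A minimal Python sketch of the classical Fermat search just described is given below; the toy semiprime used in the demonstration is an illustrative choice, not an example from this paper.

    import math

    def fermat_factor(N):
        """Classical Fermat factorization of an odd composite N = P*Q.

        Start at x = ceil(sqrt(N)) and increase x until x*x - N is a perfect
        square y*y; then P = x + y and Q = x - y.
        """
        x = math.isqrt(N)
        if x * x < N:
            x += 1
        while True:
            y2 = x * x - N
            y = math.isqrt(y2)
            if y * y == y2:
                return x + y, x - y
            x += 1

    if __name__ == "__main__":
        N = 9973 * 9511          # toy semiprime for illustration
        P, Q = fermat_factor(N)
        print(P, Q, P * Q == N)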
3 Difference Based Method

Let N be a composite odd number such that

    N = PQ,   (1)

where P and Q are primes. Without loss of generality, it is assumed throughout this article that Q is greater than P. Since Q > P and both are odd primes, we can write Q as

    Q = P + T,   (2)

where T is an even number. By placing the value of Q in (1), we get

    N = P² + PT.   (3)

First we demonstrate the relation with Fermat's factoring equation, i.e., N = ((Q+P)/2)² − ((Q−P)/2)². We can write T = 2t for some positive integer t. Now, we define P using t and α ≥ 0 in the following way,

    P = αt + s,   (4)

where s is an integer, i.e., s can take both positive and negative values depending on αt. Similarly we write Q in terms of t and s as

    Q = (α + 2)t + s.   (5)

Using (4) and (5), (1) can be written as

    N = α(α + 2)t² + 2(α + 1)ts + s² = x A xᵀ,   (6)

where x = (t, s) and A = [[α(α+2), α+1], [α+1, 1]]. It can be noted that the matrix A is symmetric and invertible. Using (4), (5), and (6), it is easy to establish the link with Fermat's factoring equation. We have the following relations,

    (Q − P)/2 = t,   (7)

and

    (Q + P)/2 = (α + 1)t + s.   (8)

By adding t² to (6), we have

    N + t² = ((α + 1)t + s)²,
    N = ((α + 1)t + s)² − t²,
    N = ((Q + P)/2)² − ((Q − P)/2)².   (9)

This demonstrates the relation with Fermat's factoring equation. We can use (6) and solve for t. The solutions are given as

    t = [−(α + 1)s ± √(s² + α(α + 2)N)] / (α(α + 2)).   (10)

Equation (10) can be simplified as

    P = αt + s = [s ± √(s² + α(α + 2)N)] / (α + 2).   (11)

Similarly, solving for s we get

    s = −(α + 1)t ± √(t² + N),
    Q = (α + 2)t + s = t ± √(t² + N) = (T ± √(T² + 4N)) / 2.   (12)

T is the difference of two odd primes, which makes T always even. Therefore, α = 2 is chosen to find information about T. In this case P = 2t + s = T + s, and inserting this value of P in (3), we get

    N = s² + 2T² + 3sT.   (13)

To find the factors of T and s, let T and s be defined as

    T = Gt1 + t0,   s = Gs1 + s0,   (14)

where G is a positive integer. By inserting these values into (13), we get

    N = G²(2t1² + 3t1s1 + s1²) + G(4t1t0 + 3t1s0 + 3s1t0 + 2s1s0) + (2t0² + 3t0s0 + s0²),
    (2t0² + 3t0s0 + s0²) ≡ N (mod G).   (15)

We can find the possible values of t0 and s0 for a given G. If there is only one possible value of t0 or s0 for a given G, then there is no ambiguity.

Algorithm 1: Find special pairs of t0 and s0 for a G
 1: ite ← 1
 2: t0Spece ← [ ]
 3: s0Spece ← [ ]
 4: NmodG ← N mod G
 5: for t0 ← 0 to G − 1 do
 6:   for s0 ← 0 to G − 1 do
 7:     if (2·t0² + 3·t0·s0 + s0²) mod G == NmodG then
 8:       t0Spece[ite] ← t0
 9:       s0Spece[ite] ← s0
10:       ite ← ite + 1
11:     end if
12:   end for
13: end for
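Algorithm 1 translates directly into Python. The sketch below is an illustrative reimplementation that returns a list of (t0, s0) pairs instead of the two arrays indexed by ite; the toy semiprime used in the demonstration is an assumption for illustration only.

    def special_pairs(N, G):
        """Algorithm 1: all residue pairs (t0, s0) with
        2*t0^2 + 3*t0*s0 + s0^2 == N (mod G)."""
        n_mod_g = N % G
        pairs = []
        for t0 in range(G):
            for s0 in range(G):
                if (2 * t0 * t0 + 3 * t0 * s0 + s0 * s0) % G == n_mod_g:
                    pairs.append((t0, s0))
        return pairs

    if __name__ == "__main__":
        # Toy example: N = 101 * 151, G = 24 (illustrative, not an RSA number).
        N, G = 101 * 151, 24
        pairs = special_pairs(N, G)
        t0_values = sorted({t0 for t0, _ in pairs})
        print(len(pairs), "pairs; distinct t0 residues:", t0_values)
        # Sanity check: the true T = Q - P = 50 must reduce to one of these t0 values.
        print((151 - 101) % G in t0_values)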
Let L = ⌊√N⌋. We have assumed that P < Q, and therefore P < L < Q. Let a, b ∈ Z and let P, Q, and N be defined as

    P = L − a,   Q = L + b,   N = L² + L(b − a) − ab.   (16)

Let b − a = r; then (16) for N can be written as

    N = L² + Lr − a² − ar.   (17)

Fermat's factoring equation, N = ((Q+P)/2)² − ((Q−P)/2)², involves Q + P and Q − P. Using (16), Q + P = 2L + r, and using (5) and (4), we get Q − P = T. To find the factors of r and a, let r and a be defined as

    r = Gr1 + r0,   a = Ga1 + a0,   (18)

where G is a positive integer. By inserting these values into (17), we get

    (L² + Lr0 − a0² − a0r0) ≡ N (mod G).   (19)

Algorithm 1 can be used to find the values of r0 and a0 by changing only the if condition to the one given in (19). The focus is on T and r, but the values s and a are also studied. We list some interesting values of t0 and r0, where they are unique, for different RSA numbers for a given G. First, we take RSA-250 so that the values can be verified. The t0 and r0 for different values of G are given in Table 1.

Table 1. Values of t0 and r0 for the RSA-250 number
        G = 2   G = 3    G = 4   G = 6    G = 8    G = 12   G = 24
  t0    0       0        0       0        0        0        0
  r0    0       {0, 1}   0       {0, 4}   {0, 4}   {0, 4}   {0, 4, 12, 16}

We can write T = 24t1 and r = 4r1 and form additional constraints on P and Q as Q − P ≡ 0 (mod 24) and Q + P ≡ 2 (mod 4), and Q and P can be written in linear form as

    Q = L + 12t1 + 2r1,   P = L − 12t1 + 2r1.   (20)

Similarly, P and Q can also be written as

    P = 24p1 + a,   Q = 24p1 + 24t1 + a,   (21)

where a ∈ {1, 5, 7, 11, 13, 17, 19, 23} for some positive integer p1. We present the values for the RSA numbers RSA-260 and RSA-270 in Tables 2 and 3.

Table 2. Values of t0 and r0 for the RSA-260 number
        G = 2   G = 3    G = 4   G = 6    G = 8     G = 12    G = 24
  t0    0       0        2       0        {2, 6}    6         {18, 6}
  r0    0       {1, 2}   2       {2, 4}   6         {2, 10}   {14, 22}

Table 3. Values of t0 and r0 for the RSA-270 number
        G = 2   G = 3    G = 4   G = 6    G = 8     G = 12   G = 24
  t0    0       0        2       0        {2, 6}    6        {18, 6}
  r0    0       {0, 1}   0       {0, 4}   0         {0, 4}   {0, 16}

Fermat's method has been improved using quadratic residues. The basic idea is to focus the search for factors by leveraging congruences modulo a small number m, which helps reduce the number of trial steps, i.e., to try only those values of x for which x² − N ≡ a (mod m) with a a quadratic residue modulo m. For example, take m = 24 and the RSA-250 number. For this number we have N ≡ 1 (mod 24), so x² (mod 24) should be 1 and x = 24x1 + c, where c ∈ {1, 5, 7, 11, 13, 17, 19, 23}. Therefore, there are 8 possible values of c for each try. On the other hand, if (2L + r)² − 4N is checked for being a perfect square, then only 4 tries are needed, as can be observed from Table 1. Furthermore, one can write T² = (2L + r)² − 4N. We demonstrate the advantage of this way of representing the primes in r and T for the RSA numbers. Again we take the same numbers RSA-250, RSA-260, and RSA-270 and compare with the quadratic residue based method in Figs. 1, 2, and 3. Simulations are shown for G = 100 to G = 199 in all figures. We observe that the number of tries either reduces by half or remains the same compared to the quadratic residue based method.

Now we propose an algorithm to factor N based on G. Suppose we have found all the possible values of r0 for a large G, represented as the set R. The factoring algorithm is given as Algorithm 2. A large value of G should be chosen in such a way that the number of possible values of r0 is small. For a different value of N, the exercise of finding a large G with few possible values of r0 has to be done again, because the value of G depends on N.

Algorithm 2: Factoring algorithm based on the G
 1: r1 ← 0
 2: while r0 ∈ R do
 3:   if (2L + Gr1 + r0)² − 4N is a square then
 4:     f² ← (2L + Gr1 + r0)² − 4N
 5:     P ← gcd(f + 2L + Gr1 + r0, N)
 6:     Q ← gcd(−f + 2L + Gr1 + r0, N)
 7:   end if
 8: end while
 9: r1 ← r1 + 1
10: Goto step 2.
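Read literally, Algorithm 2 scans r1 = 0, 1, 2, ... and, for each r1, tests every residue r0 in R until (2L + G·r1 + r0)² − 4N becomes a perfect square T². The Python sketch below is one illustrative reading of that loop, with R built from the congruence (19); the helper names and the toy semiprime are assumptions for demonstration, not the authors' code or data.

    import math

    def residues_r0(N, G):
        """Candidate residues r0 from Equation (19): keep r0 if some a0 gives
        L^2 + L*r0 - a0^2 - a0*r0 == N (mod G)."""
        L = math.isqrt(N)
        n_mod_g = N % G
        R = set()
        for r0 in range(G):
            for a0 in range(G):
                if (L * L + L * r0 - a0 * a0 - a0 * r0) % G == n_mod_g:
                    R.add(r0)
                    break
        return sorted(R)

    def factor_with_G(N, G):
        """Algorithm 2: scan r = G*r1 + r0 over candidate residues until
        (2L + r)^2 - 4N is a perfect square, i.e. equals T^2."""
        L = math.isqrt(N)
        R = residues_r0(N, G)
        r1 = 0
        while True:
            for r0 in R:
                m = 2 * L + G * r1 + r0
                d = m * m - 4 * N
                if d >= 0 and math.isqrt(d) ** 2 == d:
                    f = math.isqrt(d)
                    return math.gcd(m + f, N), math.gcd(m - f, N)
            r1 += 1

    if __name__ == "__main__":
        N = 10007 * 10103        # toy semiprime, not an RSA challenge number
        print(factor_with_G(N, G=24))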
Fig. 1. Comparison between improvements using the quadratic residue method and the proposed method for RSA-250 (number of tries versus G, for G = 100 to 199).
Fig. 2. Comparison between improvements using the quadratic residue method and the proposed method for RSA-260 (number of tries versus G, for G = 100 to 199).
Fig. 3. Comparison between improvements using the quadratic residue method and the proposed method for RSA-270 (number of tries versus G, for G = 100 to 199).

4 Conclusion

In this paper, we present a method to represent the prime divisors of a composite number. Additionally, we propose an approach to identify the smaller divisors of the difference between the primes forming the composite number. Finally, we introduce an algorithm for factoring composite numbers, demonstrating that it requires half the iterations compared to the quadratic residue-based method.

References

1. J. McKee: Speeding Fermat's factoring method, Math. Comp. 68 (1999), 1729-1737.
2. W. R. Alford and C. Pomerance: Implementing the self initializing quadratic sieve on a distributed network, Number Theoretic and Algebraic Methods in Computer Science (Moscow) (A. van der Poorten, I. Shparlinski, and H. G. Zimmer, eds.), 1993, pp. 163-174.
3. P. Shor: Algorithms for quantum computation: discrete logarithms and factoring, Proceedings of the Thirty-Fifth Annual Symposium on the Foundations of Computer Science, 1994, pp. 124-134.
4. S. S. Wagstaff, Jr.: The Joy of Factoring. Student Mathematical Library, vol. 68, Providence, Rhode Island: Amer. Math. Soc.
5. RSA Numbers, last accessed 2024/9/10.
An introduction to symmetry methods in the solution of differential equations that occur in chemistry and chemical biology Peter E Hydon Dept. Mathematics & Statistics University of Surrey Guildford GU2 7XH, UK [email protected] June 3, 2005 Synopsis This paper is a short overview of the main ways in which symmetries can be used to obtain exact information about differential equations. It is written for a general scientific audience; readers do not need any previous knowledge of symmetry methods. The information yielded by symmetry methods may include the general solution of a given differential equation, special ‘invariant solutions’ (such as similarity solutions), and conservation laws. Several symmetry methods have been implemented as computer algebra packages, which can be used by nonspecialists. Towards the end of the paper, there is a brief outline of some recent devel-opments in symmetry methods that await translation into symbolic algebra. Key words Symmetry, differential equation, computer algebra, difference equation, invari-ant. 1 Introduction In the second half of the 19th century, the Norwegian mathematician Sophus Lie began to create a remarkable body of work that unified virtually all known methods of solving differential equations. He discovered that symmetries of dif-ferential equations can be found and exploited systematically. Over many years, 1 0 x y 6 -Figure 1: Some solutions of y′ = 0. considerable research effort has been directed at understanding the elegant al-gebraic structure of symmetry groups, but Lie’s methods for determining and using symmetries were largely neglected until fairly recently. With the advent of powerful symbolic computation packages, it has become possible to apply Lie’s methods to explore the symmetries and conservation laws of a wide range of physical systems. This article is a straightforward introduction to symmetry methods. Simple examples are used to illustrate each of the major ideas; indeed, §2 is devoted to the simplest of all differential equations. The majority of the article is con-tained in §3, which deals with the problem of finding symmetries, and §4, which describes various ways of using symmetries. Some extensions of these themes are given in §5, and §6 is a brief description of some newly-developed methods that have not yet been implemented as symbolic packages. The article concludes with some suggestions for further reading. 2 Symmetries of the simplest differential equa-tion Some important concepts in symmetry methods can be explained with the aid of the simplest differential equation, y′ = 0. (1) The solutions of this ordinary differential equation (ODE) can be represented on the (x, y) plane by the parallel straight lines y = c, as shown in Fig. 1. (Here and throughout the paper, arbitrary constants are denoted by c or ci.) Roughly speaking, a point symmetry of an ODE is a smooth invertible map-ping Γ of the (x, y) plane to itself, that maps every solution of the ODE to a solution. Here are some examples of symmetries of (1): 1. reflection in the x-axis, Γ1 := (x, y) 7→(x, −y), which maps the solution y = c to the solution y = −c; 2 2. translations in the x-direction, Γ2 := (x, y) 7→(x + ǫ, y), each of which maps each solution to itself; 3. translations in the y-direction, Γ3 := (x, y) 7→(x, y + ǫ), which map the solution y = c to the solution y = c + ǫ. These are not the only symmetries of the ODE (1) – in fact, there are infinitely many. However, each of the above represents an important aspect of symmetries. 
First note that Γ1 maps almost every solution to a different solution; the only exception is y = 0, which is mapped to itself. Any solution that is mapped to itself by a symmetry is said to be invariant. Translations in the x-direction move points along solution curves, so every solution is invariant. Symmetries that map every solution to itself are called trivial symmetries. By contrast, translations in the y-direction map each solution to a different solution. The translations Γ2 and Γ3 each depend on a continuous parameter, ǫ. In each case, ǫ = 0 corresponds to the identity map. These are examples of Lie point symmetries. By contrast, Γ1 does not depend on a continuous parameter; therefore it is said to be a discrete symmetry. The set of all solutions of the ODE can be obtained by finding all solutions in the upper half-plane, and then applying the reflection Γ1 to each of these solutions. (This yields the set of solutions in the lower half-plane.) A more efficient way to generate all solutions is to find one solution and then apply all possible translations Γ3, allowing ǫ to vary over the real numbers. In this way the dimension of the problem is reduced by one. Instead of having to find a one-parameter family of solutions, we need only find a single solution. This idea is at the heart of symmetry methods for ODE’s. Note that the trivial symmetries Γ2 do not reduce the number of solutions that we have to find. For this reason, trivial symmetries are of no use to us. Point symmetries are examples of point transformations, which are trans-formations of the independent and dependent variables. There may also be symmetries that depend additionally on derivatives of the dependent variables. These symmetries are usually less obvious than point symmetries, but they can still be very useful. 3 The linearized symmetry condition This section describes how to obtain Lie symmetries of a given scalar differential equation. (For brevity, we do not consider systems of differential equations, but everything in the remainder of this paper is applicable to systems as well as scalar equations.) Consider the problem of finding the Lie point symmetries of the ODE y(n) = ω  x, y, y′, . . . , y(n−1) . (2) Let us seek conditions under which a smooth invertible mapping Γ : (x, y) 7→ ˆ x(x, y), ˆ y(x, y)  3 is a symmetry of the ODE. Let y = f(x) be a curve in the (x, y) plane. The image of this curve under the mapping Γ is the parametric curve ˆ y = ˆ y(x, f(x)), ˆ x = ˆ x(x, f(x)). In regions in which the second of these equations is invertible, there exists a function ˜ f such that ˆ y = ˜ f(ˆ x). It is usual to identify the (ˆ x, ˆ y) plane with the (x, y) plane; thus the image of the curve y = f(x) is y = ˜ f(x). The mapping Γ is a symmetry of the ODE if each solution is mapped to a solution. Therefore y = ˜ f(x) satisfies (2) whenever y = f(x) does. Equivalently, ˆ y(n) = ω  ˆ x, ˆ y, ˆ y′, . . . , ˆ y(n−1) when (2) holds. (3) This equation is called the symmetry condition for the ODE (2). In principle, the symmetry condition can be solved by writing out the derivatives of ˆ y with respect to ˆ x in full, For instance, ˆ y′ = dˆ y dˆ x = ˆ yx + y′ˆ yy ˆ xx + y′ˆ xy . (The subscripts x and y denote partial derivatives with respect to these vari-ables.) For higher derivatives, the expressions are much messier, and it is hard to solve the symmetry condition for the unknown functions ˆ x(x, y) and ˆ y(x, y). 
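Although solving the symmetry condition for the unknown functions x̂(x, y), ŷ(x, y) is hard in general, checking it for a candidate map is mechanical. The SymPy sketch below applies the formula for ŷ′ to the finite rotation (x̂, ŷ) = (x cos ε − y sin ε, x sin ε + y cos ε) and verifies the symmetry condition for the ODE y′ = −x/y, an illustrative equation chosen here (its solution curves are circles), not one taken from the text.

    import sympy as sp

    x, y, yp, eps = sp.symbols('x y yp epsilon')

    # Finite rotation of the plane.
    xh = x * sp.cos(eps) - y * sp.sin(eps)
    yh = x * sp.sin(eps) + y * sp.cos(eps)

    # Transformed derivative: yh' = (yh_x + y'*yh_y) / (xh_x + y'*xh_y).
    yhp = (sp.diff(yh, x) + yp * sp.diff(yh, y)) / (sp.diff(xh, x) + yp * sp.diff(xh, y))

    # Symmetry condition for y' = -x/y: substitute y' = -x/y and check yh' = -xh/yh.
    omega = -x / y
    residual = yhp.subs(yp, omega) - (-xh / yh)
    print(sp.simplify(residual))   # 0, so rotations map solutions to solutions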
The problem of solving the symmetry condition becomes very much easier if we restrict attention to one-parameter local Lie groups of point symmetries that are near-identity transformations of the plane. These Lie point symmetries of a given ODE (2) are symmetries for which ˆ x = x + ǫξ(x, y) + O(ǫ2), ˆ y = y + ǫη(x, y) + O(ǫ2). (4) Here ǫ is a real parameter, and the Lie symmetries are defined for each ǫ suffi-ciently close to zero. The set of points (ˆ x, ˆ y) that can be reached from (x, y) by varying ǫ is called the orbit through (x, y); Fig. 2 illustrates part of a typical orbit. By substituting (4) into the symmetry condition (3) and expanding the result in powers of ǫ, it is possible to derive a linear partial differential equa-tion (PDE) for ξ(x, y) and η(x, y). This PDE is called the linearized symmetry condition (LSC). Perhaps surprisingly, once the LSC has been solved, the Lie point symmetries can be calculated to all orders in ǫ. To do this, define the infinitesimal generator of the Lie symmetries to be the first-order partial differential operator X = ξ(x, y) ∂ ∂x + η(x, y) ∂ ∂y . This operator can be interpreted as the tangent vector field (at ǫ = 0) to the orbits of (4), as illustrated in Fig. 2. Consequently (ˆ x, ˆ y) are solutions of the initial-value problem dˆ x dǫ = ξ(ˆ x, ˆ y), dˆ y dǫ = η(ˆ x, ˆ y), (ˆ x, ˆ y) = (x, y) when ǫ = 0. 4 (x, y) (ˆ x, ˆ y) ǫ 0  ξ(x,y) η(x,y)   ξ(ˆ x,ˆ y) η(ˆ x,ˆ y)  R q       7 q -Figure 2: Part of the orbit through (x, y), showing the tangent vectors at ǫ = 0 and at a general value of ǫ. The solution to this problem can be expressed as a power series, as follows: ˆ x = eǫXx, ˆ y = eǫXy, where eǫX = ∞ X n=0 ǫn n! Xn. Thus, once we know X, it is possible to calculate (ˆ x, ˆ y); in other words, the orbits can be found. For nontrivial symmetries, most orbits are curves that are transverse to solu-tion curves, so that points on one solution curve will be mapped onto different solution curves. There are two important exceptions. A point (x, y) on the plane is fixed by the Lie symmetries if and only if the infinitesimal generator is zero there, i.e. ξ(x, y) = η(x, y) = 0. Points that satisfy this condition are called invariant points; each one is a zero-dimensional orbit. The other exception occurs when an orbit coincides with a solution curve. As the orbit is determined by the infinitesimal generator, we can write this as a condition on ξ(x, y) and η(x, y), as follows. The characteristic of the Lie symmetries (4) is the function Q(x, y, y′) = η(x, y) −ξ(x, y) y′, (5) which is zero wherever the tangent to a curve (x, y(x)) is parallel to the tan-gent to the orbit. Any curve on which Q vanishes is invariant under the Lie symmetries. Example 1 The Lie symmetries of the ODE y′ = y3 + x2y −y −x xy2 + x3 + y −x (6) 5 include the rotations about the origin, (ˆ x, ˆ y) = (x cos ǫ −y sin ǫ, x sin ǫ + y cos ǫ), which are generated by X = −y ∂ ∂x + x ∂ ∂y . Note that the only invariant point is the origin. The characteristic is Q(x, y, y′) = x + yy′, which vanishes on the circles x2 + y2 = c. The invariant solutions of the ODE are the common solutions of Q(x, y, y′) = 0 and the ODE; there is only one such solution, namely x2 + y2 = 1. If the Lie symmetries of the ODE (2) are trivial, the characteristic vanishes on every solution. It is possible to factor out the trivial symmetries by insisting that ˆ x = x. The following result enables us to do this without losing any generality. 
Theorem 1 The Lie point symmetries (4) of the ODE (2) are equivalent (up to a trivial symmetry) to the following dynamical symmetries: ˆ x = x, ˆ y = y + ǫQ(x, y, y′) + O(ǫ2), (7) where Q(x, y, y′) = η(x, y) −ξ(x, y)y′. Generally speaking, dynamical symmetries are not point symmetries, be-cause ˆ y depends on y′. However, when (7) is substituted into the symmetry condition, it yields the same LSC as (4). It turns out that (7) is easier to work with than (4); moreover, this formulation is easily extended to other types of symmetry. To calculate the LSC for the ODE (2), we must calculate the derivatives of ˆ y with respect to ˆ x. Define the total derivative with respect to x, restricted to solutions of the ODE, to be the operator Dx = ∂ ∂x + y′ ∂ ∂y + y′′ ∂ ∂y′ + . . . + ω  x, y, y′, . . . , y(n−1) ∂ ∂y(n−1) . Then, letting Q denote Q(x, y, y′) (on solutions of the ODE), dˆ y dˆ x = dˆ y dx = Dxˆ y = y′ + ǫDxQ + O(ǫ2). Similarly, on solutions of the ODE, ˆ y(k) = dkˆ y dˆ xk = y(k) + ǫ(Dx)kQ + O(ǫ2). k = 1, 2 . . .. 6 By substituting these results into the symmetry condition, and looking only at the terms that are first-order in ǫ, we derive the LSC: (Dx)nQ −ωy(n−1)(Dx)n−1Q −ωy(n−2)(Dx)n−2Q −. . . −ωyQ = 0. (8) If n ≥2 then the LSC depends upon y′, y′′, . . . , y(n−1), whereas ξ and η are independent of these variables. Therefore (8) can be split into an overdetermined system of PDEs, as the following simple example shows. Example 2. Consider the ODE y′′ = 0. The total derivative operator is Dx = ∂ ∂x + y′ ∂ ∂y , and therefore the LSC is Qxx + 2y′Qxy + y′2Qyy = 0. By substituting (5) into the LSC, and splitting the resulting equation into pow-ers of y′, we obtain the overdetermined system ηxx = 0, 2ηxy −ξxx = 0, ηyy −2ξxy = 0, ξyy = 0. The general solution of this system is ξ(x, y) = c1x2 + c2xy + c3x + c4y + c5, η(x, y) = c1xy + c2y2 + c6x + c7y + c8. Therefore every infinitesimal generator of Lie point symmetries of y′′ = 0 is a linear combination of X1 = x2 ∂ ∂x + xy ∂ ∂y , X2 = xy ∂ ∂x + y2 ∂ ∂y , X3 = x ∂ ∂x , X4 = y ∂ ∂x , X5 = ∂ ∂x , X6 = x ∂ ∂y , X7 = y ∂ ∂y , X8 = ∂ ∂y . The process used above can also be applied to more complicated ODEs. The basic step of splitting the LSC into an overdetermined system is easily accomplished with the aid of computer algebra. Hereman reviews a wide variety of packages for doing this. Some packages also use various heuristics to try to solve the overdetermined system. A nicer approach uses differential algebra to simplify the overdetermined system first . Within the computer algebra system Maple , for example, this can be done by using the package rifsimp, which reduces the system to a simple ‘involutive’ form . Example 3. In this example, we use Maple to find the Lie point symmetries of the nonlinear ODE y′′ = y′/y2. 7 The symmetries can be found very quickly with a few lines of Maple code, which are listed in the Appendix. It is instructive to follow the solution process in some detail. The LSC is (Dx)2Q −1 y2 DxQ + 2y′ y3 Q = 0, where Dx = ∂ ∂x + y′ ∂ ∂y + y′ y2 ∂ ∂y′ . As before, the LSC is split into a system of PDEs by equating terms that have the same dependence on y′. The overdetermined system that results from this process is y3ηxx −yηx = 0, 2y3ηxy −y3ξxx −yξx + 2η = 0, y3ηyy −2y3ξxy −2yξy = 0, y3ξyy = 0. This rather untidy system is reduced by rifsimp to the equivalent form ηx = 0, ξx = 2η/y, ηy = η/y, ξy = 0, which is easily solved: ξ = 2c1x + c2, η = c1y. 
The Maple code listed in the Appendix is adapted from the documentation for rifsimp. It is short and simple, and is easily changed to determine symmetries of other ODEs. Newcomers to symmetry methods who have access to Maple may wish to experiment by trying to find symmetries of various ODEs of order two or more. Readers with other computer algebra systems should consult their documentation for help on finding symmetries. Hereman's review covers most of the add-on packages that are available.

So far we have focused on ODEs. However, the same approach can be used to find Lie symmetries of PDEs. For simplicity, we shall restrict attention to PDEs with one dependent variable, $u$, and two independent variables, $x$ and $t$. Then the infinitesimal generator of Lie point symmetries is of the form
\[
X = \xi(x,t,u)\frac{\partial}{\partial x} + \tau(x,t,u)\frac{\partial}{\partial t} + \eta(x,t,u)\frac{\partial}{\partial u}.
\]
The characteristic is
\[
Q = \eta(x,t,u) - \xi(x,t,u)\,u_x - \tau(x,t,u)\,u_t. \tag{9}
\]
Once again, invariant solutions satisfy the condition $Q = 0$, and trivial symmetries may be factored out by looking for symmetries of the form
\[
\hat{x} = x, \qquad \hat{t} = t, \qquad \hat{u} = u + \epsilon\,Q + O(\epsilon^2).
\]
As before, the symmetry condition requires that the PDE must hold in the transformed variables whenever it holds in the original variables. The LSC is obtained by retaining only the first-order terms.

Example 4. The symmetry condition for the heat equation, $u_t = u_{xx}$, is
\[
\hat{u}_{\hat{t}} = \hat{u}_{\hat{x}\hat{x}} \quad\text{when}\quad u_t = u_{xx}.
\]
Therefore the LSC is
\[
D_t Q = (D_x)^2 Q \quad\text{when}\quad u_t = u_{xx},
\]
where
\[
D_x = \frac{\partial}{\partial x} + u_x\frac{\partial}{\partial u} + u_{xx}\frac{\partial}{\partial u_x} + u_{xt}\frac{\partial}{\partial u_t} + \cdots, \qquad
D_t = \frac{\partial}{\partial t} + u_t\frac{\partial}{\partial u} + u_{xt}\frac{\partial}{\partial u_x} + u_{tt}\frac{\partial}{\partial u_t} + \cdots,
\]
are the total derivatives with respect to $x$ and $t$ respectively. After replacing $u_{xx}$ by $u_t$ wherever it occurs, one can split the LSC into an overdetermined system by equating powers of $u_{xt}$, $u_{tt}$, $u_x$ and $u_t$. This system can be solved by hand, but it is easier to use computer algebra. The infinitesimal generator is a linear combination of
\[
X_1 = \frac{\partial}{\partial x}, \quad
X_2 = \frac{\partial}{\partial t}, \quad
X_3 = u\frac{\partial}{\partial u}, \quad
X_4 = x\frac{\partial}{\partial x} + 2t\frac{\partial}{\partial t}, \quad
X_5 = 2t\frac{\partial}{\partial x} - xu\frac{\partial}{\partial u}, \quad
X_6 = 4xt\frac{\partial}{\partial x} + 4t^2\frac{\partial}{\partial t} - (x^2+2t)u\frac{\partial}{\partial u},
\]
\[
\Bigl\{\, X_U = U(x,t)\frac{\partial}{\partial u} \;:\; U_t = U_{xx} \,\Bigr\}. \tag{10}
\]
Note that there is an infinite family of infinitesimal generators, which depend upon solutions of the heat equation. The effect of these symmetries is to add an arbitrary multiple of one solution to the original solution:
\[
\hat{u} = u + \epsilon\,U(x,t), \tag{11}
\]
where $U_t = U_{xx}$. This corresponds to the principle of linear superposition. Similarly, every PDE that is linear (or linearizable by a point transformation) has an infinite family of Lie point symmetries.

Example 5. The LSC for the Thomas equation, $u_{xt} = u_x u_t - 1$, is
\[
D_x D_t Q = u_t\,D_x Q + u_x\,D_t Q \quad\text{when}\quad u_{xt} = u_x u_t - 1.
\]
Once again, there is an infinite family of Lie point symmetry generators, spanned by
\[
X_1 = \frac{\partial}{\partial x}, \quad
X_2 = \frac{\partial}{\partial t}, \quad
X_3 = \frac{\partial}{\partial u}, \quad
X_4 = x\frac{\partial}{\partial x} - t\frac{\partial}{\partial t}, \quad
\Bigl\{\, X_V = V(x,t)\,e^{u}\frac{\partial}{\partial u} \;:\; V_{xt} = V \,\Bigr\}. \tag{12}
\]
This suggests that the Thomas equation is linearizable to $v_{xt} = v$ by a point transformation. The required transformation is obtained by looking for variables in which $X_V$ generates linear superpositions. In this case $v = -\exp\{-u\}$ will do, for then
\[
e^{u}\frac{\partial}{\partial u} = \frac{\partial}{\partial v}.
\]
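The linearization claim can be checked in one line (a check added here, not part of the original article). With $v = -e^{-u}$, one has $v_x = u_x e^{-u}$ and $v_t = u_t e^{-u}$, hence
\[
v_{xt} = \bigl(u_{xt} - u_x u_t\bigr)e^{-u} = (-1)\,e^{-u} = v
\qquad\text{whenever } u_{xt} = u_x u_t - 1,
\]
so the Thomas equation is indeed mapped to the linear equation $v_{xt} = v$; moreover $\partial v/\partial u = e^{-u}$, which is precisely the statement $e^{u}\partial_u = \partial_v$ used above.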
4 Some uses of Lie point symmetries

4.1 Reduction of order

For ODEs, a one-parameter local Lie group of symmetries can be used to reduce the order of the ODE by one. In particular, first-order ODEs can be solved completely. This is done by introducing a new set of coordinates that are suited to the symmetries. In terms of these new canonical coordinates, nontrivial symmetries become translations between solutions (similar to $\Gamma_3$ in §2). Let $(r,s)$ be a pair of canonical coordinates, where $s$ is the direction of translation. Then the Lie symmetries are $(\hat{r},\hat{s}) = (r, s+\epsilon)$, and so a first-order ODE $y' = \omega(x,y)$ that admits such symmetries may be rewritten in the form
\[
\dot{s} \equiv \frac{ds}{dr} = \Omega(r). \tag{13}
\]
Note that $\Omega$ is independent of $s$, because $s$ varies with $\epsilon$, whereas $r$ and $\dot{s}$ are invariant. The transformed equation (13) is easy to solve:
\[
s + c = \int \Omega(r)\,dr.
\]
The effect of the symmetry group is clear: each symmetry changes the arbitrary constant of integration.

Canonical coordinates can be constructed systematically from the infinitesimal generator $X$. This represents the tangent vector field, which is independent of the coordinate system that is used. In canonical coordinates, $X = \partial/\partial s$. Therefore
\[
\xi(x,y)\frac{\partial r}{\partial x} + \eta(x,y)\frac{\partial r}{\partial y} = Xr = 0, \qquad
\xi(x,y)\frac{\partial s}{\partial x} + \eta(x,y)\frac{\partial s}{\partial y} = Xs = 1. \tag{14}
\]
This system of first-order linear PDEs can be solved by the method of characteristics, which is a simple task for most symmetries of mathematical models of physical systems. Thus, it is usually easy to obtain canonical coordinates; any nondegenerate solution $(r,s)$ of (14) will do. Canonical coordinates cannot be defined at an invariant point; as $X$ is zero there, the second equation of (14) cannot be satisfied. Therefore it is usually necessary to use several sets of canonical coordinates to cover all regions of the plane.

Example 1 (cont.) Recall that the ODE
\[
y' = \frac{y^3 + x^2 y - y - x}{x y^2 + x^3 + y - x} \tag{15}
\]
has Lie symmetries generated by
\[
X = -y\frac{\partial}{\partial x} + x\frac{\partial}{\partial y}.
\]
In the region $x > 0$, the equations (14) for canonical coordinates have a well-known solution, namely the polar coordinates
\[
r = \sqrt{x^2 + y^2}, \qquad s = \tan^{-1}\{y/x\}.
\]
In these coordinates, (15) is transformed to
\[
\dot{s} = \frac{1}{r(1-r^2)},
\]
which is undefined at the invariant point $r = 0$ and on the invariant solution $r = 1$. The general solution of the transformed ODE is
\[
s + c = \frac{1}{2}\ln\frac{r^2}{1-r^2},
\]
which can easily be rewritten in terms of $x$ and $y$ to yield the general solution of (15) in the region $x > 0$. The remainder of the plane can be treated similarly.
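It is worth confirming (this check is an addition to the original text) that the polar coordinates really do satisfy the canonical-coordinate equations (14) for $X = -y\,\partial_x + x\,\partial_y$:
\[
Xr = -y\,\frac{x}{\sqrt{x^2+y^2}} + x\,\frac{y}{\sqrt{x^2+y^2}} = 0,
\qquad
Xs = -y\,\frac{-y}{x^2+y^2} + x\,\frac{x}{x^2+y^2} = 1,
\]
so $r$ is constant along each orbit, while $s$ increases at unit rate; in other words, the rotations act as the translations $s \mapsto s + \epsilon$, exactly as required of canonical coordinates.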
For a second-order ODE, the introduction of canonical coordinates enables the ODE to be written in the form
\[
\ddot{s} = \Omega(r,\dot{s}),
\]
which is equivalent to a first-order ODE for $v = \dot{s}$. If the solution of this 'reduced' ODE is $v = f(r,c_1)$ then
\[
s + c_2 = \int f(r,c_1)\,dr.
\]
Of course, there is no guarantee that the solution of the reduced ODE can be found, unless a one-parameter Lie group of its symmetries is known. However, if there are at least two independent infinitesimal generators for the original ODE, it is almost always possible to arrange the reduction so that the reduced ODE inherits some Lie symmetries, as follows. Calculate the commutator of each pair of infinitesimal generators, which is the first-order partial differential operator
\[
[X_1, X_2] = X_1 X_2 - X_2 X_1.
\]
It can be shown that each commutator is an infinitesimal symmetry generator, and therefore the set of all infinitesimal generators is a Lie algebra. If one can find a pair of generators $X_i$, $X_j$ whose commutator is a multiple of $X_i$, write the ODE in terms of the canonical coordinates obtained from $X_i$; the reduced ODE is then guaranteed to inherit the symmetries generated by $X_j$, which can be used to solve it.

Example 3 (cont.) Earlier, we found that the Lie point symmetry generators of the second-order ODE
\[
y'' = \frac{y'}{y^2}
\]
are linear combinations of
\[
X_1 = 2x\frac{\partial}{\partial x} + y\frac{\partial}{\partial y}, \qquad X_2 = \frac{\partial}{\partial x}.
\]
The commutator of $X_1$ with $X_2$ is
\[
[X_1, X_2] = -2\frac{\partial}{\partial x} = -2X_2.
\]
Therefore, according to the above recipe, we should use canonical coordinates determined by $X_2$ to reduce the ODE. The simplest choice of such coordinates is $(r,s) = (y,x)$. Therefore $\dot{s} = 1/y'$, and
\[
\ddot{s} = -\frac{y''}{(y')^3} = -\frac{1}{y^2 (y')^2} = -\frac{(\dot{s})^2}{r^2}.
\]
Let $v = \dot{s}$; then the reduced ODE is
\[
\dot{v} = -\frac{v^2}{r^2}.
\]
It is worth noting that, in terms of these canonical coordinates, the symmetries generated by $X_1$ are
\[
(\hat{r},\hat{s}) = (e^{\epsilon} r,\; e^{2\epsilon} s).
\]
Therefore
\[
\hat{v} = \frac{d\hat{s}}{d\hat{r}} = e^{\epsilon}\frac{ds}{dr} = e^{\epsilon} v,
\]
and so the infinitesimal generator for these symmetries on the $(r,v)$ plane is
\[
\tilde{X}_1 = r\frac{\partial}{\partial r} + v\frac{\partial}{\partial v}.
\]
As promised, these are symmetries of the reduced ODE. This ODE happens to be separable, so it can be solved without using canonical coordinates for $\tilde{X}_1$, but the recipe ensures that such coordinates are available. At this stage, the solution is easy to complete, and is left as an exercise.

Suppose that we had reduced the original ODE using the 'wrong' generator $X_1$. Then, in terms of the canonical coordinates $(r,s) = (y/\sqrt{x},\, \ln(x)/2)$, the original ODE becomes
\[
\ddot{s} = -(2/r + r)(\dot{s})^3 - 2(\dot{s})^2/r^2.
\]
The reduced ODE does not inherit the symmetries generated by $X_2$, and it appears to be intractable.

For simplicity, we have restricted attention to first- and second-order ODEs. However, the same ideas are equally applicable to higher-order ODEs. The structure of the Lie algebra determines whether or not there exists a pair of generators such that $[X_i, X_j]$ is a multiple of $X_j$. More generally, the Lie algebra determines whether or not an ODE can be integrated step-by-step. For further details, consult [5, 6].

4.2 Invariant solutions

Most PDEs do not have a 'general solution,' but symmetries can be used to find families of invariant solutions. Just as for invariant solutions of ODEs, we seek solutions of the differential equation that also satisfy $Q = 0$. Invariant solutions commonly include travelling waves and similarity solutions (which can be found almost by inspection), but they also include solutions that are not obvious. It is possible to classify all invariant solutions, using the structure of the Lie algebra. In the following, attention is restricted to a few examples, in order to convey the basic method.

Given an infinitesimal generator,
\[
X = \xi(x,t,u)\frac{\partial}{\partial x} + \tau(x,t,u)\frac{\partial}{\partial t} + \eta(x,t,u)\frac{\partial}{\partial u},
\]
the solutions of $Q = 0$ are first integrals of the characteristic equations
\[
\frac{dx}{\xi} = \frac{dt}{\tau} = \frac{du}{\eta}.
\]
All such first integrals are invariant under the symmetries generated by $X$. If $r$ and $v$ are functionally independent first integrals, and if $v$ depends nontrivially on $u$, then we can substitute first integrals of the form $v = F(r)$ into the original PDE. In general, the PDE will reduce to an ODE for $F$. The solution of this ODE yields a family of invariant solutions to the original PDE. If $r$ depends on $u$, it is also necessary to seek solutions of the form $r = c$, because such solutions cannot be written in the form $v = F(r)$.

Example 4 (cont.) We shall first seek invariant solutions of the heat equation, $u_t = u_{xx}$, under the symmetries generated by
\[
X_5 = 2t\frac{\partial}{\partial x} - xu\frac{\partial}{\partial u}.
\]
The characteristic equations are
\[
\frac{dx}{2t} = \frac{dt}{0} = \frac{du}{-xu},
\]
which have two functionally independent first integrals:
\[
r = t, \qquad v = u\exp\Bigl\{\frac{x^2}{4t}\Bigr\}.
\]
Therefore, we substitute
\[
u = \exp\Bigl\{-\frac{x^2}{4t}\Bigr\}F(t)
\]
into the heat equation, which yields the reduced ODE
\[
F'(t) = -\frac{1}{2t}F(t).
\]
The general solution of this ODE is $F(t) = c/\sqrt{t}$; hence the invariant solutions under the symmetries generated by $X_5$ are the Gaussian profiles
\[
u = \frac{c}{\sqrt{t}}\exp\Bigl\{-\frac{x^2}{4t}\Bigr\}.
\]
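Direct substitution confirms that these profiles satisfy the heat equation (a check added here, not in the original): for $u = c\,t^{-1/2}\exp\{-x^2/(4t)\}$,
\[
u_x = -\frac{x}{2t}\,u, \qquad
u_{xx} = \Bigl(\frac{x^2}{4t^2} - \frac{1}{2t}\Bigr)u, \qquad
u_t = \Bigl(\frac{x^2}{4t^2} - \frac{1}{2t}\Bigr)u,
\]
so $u_t = u_{xx}$, as required.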
Applying the same procedure for the symmetries generated by
\[
X_6 = 4xt\frac{\partial}{\partial x} + 4t^2\frac{\partial}{\partial t} - (x^2+2t)u\frac{\partial}{\partial u}
\]
yields the invariants
\[
r = \frac{x}{t}, \qquad v = u\sqrt{t}\,\exp\Bigl\{\frac{x^2}{4t}\Bigr\}.
\]
The ODE that is obtained by substituting
\[
u = \frac{1}{\sqrt{t}}\exp\Bigl\{-\frac{x^2}{4t}\Bigr\}F\Bigl(\frac{x}{t}\Bigr)
\]
into the heat equation is $F'' = 0$, whose solution is $F(r) = c_1 r + c_2$. Therefore invariant solutions of the symmetries generated by $X_6$ are a linear superposition of
\[
u = \frac{x}{t^{3/2}}\exp\Bigl\{-\frac{x^2}{4t}\Bigr\}
\]
and the solutions obtained from $X_5$. The heat equation has several large families of invariant solutions, which have been classified. The same is true of many important physical systems [7, 8].

4.3 Some other uses

The methods that have been described up to this point are very powerful, and can be applied to almost any differential equation. They are based on the simple idea of a one-parameter Lie group. However, there are also many ways of using symmetries that use information about all of the Lie point symmetries. Here are two examples; they are not discussed in detail, but the interested reader is referred to the literature.

• Provided that the number of linearly independent symmetry generators exceeds the order of the ODE, it is possible to construct the first integrals of the ODE directly from the symmetries; there is no need to consider the structure of the Lie algebra.

• Lie symmetries can be used to construct the discrete symmetries of a given differential equation [9, 10]. It is very hard to construct discrete symmetries directly from the symmetry condition; I know of only one substantial example in which this has been achieved. However, it is possible to find discrete symmetries indirectly by looking at their action on the Lie algebra. Such actions have been classified for almost all Lie algebras of symmetries of ODEs. Discrete symmetries have many uses; most notably, they affect the stability of nonlinear dynamical systems.

5 Higher symmetries

To find Lie point symmetries, one must split the LSC into an overdetermined system of PDEs. For ODEs of order $n \geq 3$, there is no need for $Q$ to be linear in $y'$; the LSC can be split by equating powers of $y''$. More generally, $Q$ may depend on any of the variables $x, y, y', \ldots, y^{(n-1)}$, provided that the form of $Q$ enables the LSC to be split in a way that enables any unknown functions to be determined. All such symmetries are collectively known as dynamical symmetries; as in Theorem 1, the independent variables are fixed.

A similar idea can be extended to PDEs; here $Q$ may depend upon arbitrarily many derivatives of the dependent variable. Such symmetries are called generalized or Lie-Bäcklund symmetries. For PDEs that come from a variational formulation, Noether's Theorem enables conservation laws to be derived from symmetries that leave the variational problem unchanged; typically, these are generalized symmetries. The nontrivial conservation laws of a PDE for $u(x,t)$ are expressions of the form
\[
D_t(F) + D_x(G) = 0
\]
that hold on solutions of the PDE, but do not hold identically. Integrable systems (such as the Korteweg–de Vries equation) are partly characterized by the existence of an infinite number of conservation laws.
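For instance (an added illustration, not taken from the article), the heat equation $u_t = u_{xx}$ possesses the conservation laws
\[
D_t(u) + D_x(-u_x) = u_t - u_{xx} = 0,
\qquad
D_t(xu) + D_x(u - x\,u_x) = x\,(u_t - u_{xx}) = 0,
\]
both of which hold on solutions but not identically; they express conservation of $\int u\,dx$ (the total heat) and of its first moment $\int xu\,dx$, respectively.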
Even if a PDE does not have a known variational formulation, its conservation laws can be found systematically by a direct method that is analogous to the search for symmetries. There are several different ways of implementing this approach with computer algebra [14, 15].

6 Some recent developments

This section highlights two new areas of symmetries research that promise to be widely applicable, for which computer algebra will be needed.

6.1 Symmetries of initial-value problems

So far, we have not referred to initial conditions or boundary conditions, but such conditions are usually stated in the formulation of a physical problem. Surprisingly, it is not generally true that the symmetries of an initial-value problem are also symmetries of the unconstrained differential equation. For example, the set of solutions of $y''' = 0$ subject to the initial condition $y''(0) = 0$ is the same as the set of solutions of $y'' = 0$. However, $y'' = 0$ has symmetries generated by
\[
X = y\frac{\partial}{\partial x},
\]
whereas $y''' = 0$ has no such symmetries. It has been shown that (subject to technical conditions) the Lie point symmetries of ODEs with specified initial conditions can be constructed with the aid of Taylor series. Whilst this is far more computationally intensive than the methods described in §3, it is a way of solving some problems that cannot be solved by the standard approach.
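The claim above about $X = y\,\partial_x$ can be verified from the LSC (8) (a verification added here, not part of the original article). For $y''' = 0$ the right-hand side is $\omega \equiv 0$, so the LSC reduces to $(D_x)^3 Q = 0$ with $Q = -y\,y'$ and, on solutions, $D_x = \partial_x + y'\partial_y + y''\partial_{y'}$. Then
\[
D_x Q = -\bigl((y')^2 + y\,y''\bigr), \qquad
(D_x)^2 Q = -3\,y'\,y'', \qquad
(D_x)^3 Q = -3\,(y'')^2,
\]
which does not vanish on solutions with $y'' \neq 0$; so $y\,\partial_x$ generates Lie point symmetries of $y'' = 0$ (it is $X_4$ of Example 2) but not of $y''' = 0$.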
6.2 Difference equations

Within the numerical analysis community, there is a rapidly-growing interest in geometric integration, which describes the transfer of geometric structures from a given differential equation to its numerical approximation. Such structures include symmetries, conservation laws, and symplectic structures. The geometric structure of difference equations is also important for integrable systems, as there are large classes of discrete integrable systems. At present, far more is known about continuous integrable systems than about their discrete counterparts.

The problem of finding local symmetries of difference equations can be tackled in much the same way as for differential equations, but the LSC is a functional equation, rather than a PDE. Nevertheless, a technique for obtaining the solutions of the LSC has recently been developed; this technique has also been used to determine conservation laws of partial difference equations. At present, the calculations cannot usually be done entirely by hand or by computer algebra (due to weaknesses in routines for solving differential equations). It remains to be seen whether it is possible to develop a computational package that will do this type of calculation reliably.

7 Conclusions and further reading

This article has shown something of the power and scope of symmetry methods. Of necessity, it has only touched the surface of what is possible; indeed, current research on symmetries suggests that there are simple, widely-applicable methods still to be discovered.

For readers who would like a fuller introduction, I recommend the texts by Stephani and Hydon. The outstanding advanced text by Olver is essential reading for anyone who is interested in research into symmetry methods. Ovsiannikov and Bluman & Kumei each include a number of useful results that do not appear in other texts. The Lie symmetries and conservation laws of many physically-important systems have been classified; the first two volumes of a handbook of symmetry analysis edited by Ibragimov [7, 8] are excellent sources for such classifications.

Acknowledgements

This investigation was partly supported by NIH Research Grant No. 1 R01 HL070542-01A1.

Appendix

Here is the Maple code that was used to obtain the Lie point symmetries of the ODE $y'' = y'/y^2$. For brevity, the output is omitted here, as it has been included in §3 of the main text. For more information, refer to the Maple documentation for the commands rifsimp and odepde.

> restart:
> with(DEtools):

First define the ODE whose symmetries are to be found.

> ODE := diff(y(x),x,x) - diff(y(x),x)/y(x)^2;

The DEtools command "odepde" creates the LSC, whose numerator is split into an overdetermined system by "coeffs."

> overdetsys := {coeffs(numer(odepde(ODE,xi,eta,y(x))),_y1)};

The above system of determining equations is greatly simplified by "rifsimp."

> simplesys := rifsimp(overdetsys);

The simplified system is easily solved by hand; however, "pdsolve" will also do the job.

> pdsolve(simplesys['Solved']);

References

[1] Hereman, W. In: CRC Handbook of Lie Group Analysis of Differential Equations, Vol. III: New Trends in Theoretical Developments and Computational Methods (ed. Ibragimov, N. H.); CRC Press: Boca Raton, 1996; Chapter XII.
[2] Mansfield, E. L.; Clarkson, P. A. J Symb Comp 1997, 23, 517–533.
[3] Waterloo Maple Software. Maple 9.50; Waterloo, Ontario, Canada, 2004.
[4] Reid, G. J.; Wittkopf, A. D.; Boulton, A. Eur J Appl Math 1996, 7, 604–635.
[5] Hydon, P. E. Symmetry Methods for Differential Equations; Cambridge University Press: Cambridge, 2000.
[6] Olver, P. J. Applications of Lie Groups to Differential Equations (2nd edn); Springer-Verlag: New York, 1993.
[7] Ibragimov, N. H. (ed.) CRC Handbook of Lie Group Analysis of Differential Equations, Vol. I: Symmetries, Exact Solutions, and Conservation Laws; CRC Press: Boca Raton, 1994.
[8] Ibragimov, N. H. (ed.) CRC Handbook of Lie Group Analysis of Differential Equations, Vol. II: Applications in Engineering and Physical Sciences; CRC Press: Boca Raton, 1995.
[9] Hydon, P. E. Proc Roy Soc Lond A 1998, 454, 1961–1972.
[10] Hydon, P. E. Eur J Appl Math 2000, 11, 515–527.
[11] Reid, G. J.; Weih, D. T.; Wittkopf, A. D. In: Modern Group Analysis: Advanced Analytical and Computational Methods in Mathematical Physics (ed. Ibragimov, N. H.; Torrisi, M.; Valenti, A.); Kluwer: Dordrecht, 1993; pp. 311–316.
[12] Laine-Pearson, F. E.; Hydon, P. E. Stud Appl Math 2003, 111, 269–299.
[13] Golubitsky, M.; Stewart, I.; Schaeffer, D. G. Singularities and Groups in Bifurcation Theory, Vol. II; Springer-Verlag: New York, 1988.
[14] Wolf, T. Eur J Appl Math 2002, 13, 129–152.
[15] Wolf, T.; Brand, A.; Mohammadzadeh, M. J Symb Comp 1999, 27, 221–238.
[16] Hydon, P. E. Symmetry analysis of initial-value problems, J Math Anal Appl 2005 (in press, currently available online from the journal's webpage).
[17] Budd, C. J.; Iserles, A. Phil Trans Roy Soc Lond A 1999, 357, 945–956.
[18] Hydon, P. E. Proc Roy Soc Lond A 2000, 456, 2835–28.
[19] Hydon, P. E. J Phys A 2001, 34, 10347–10355.
[20] Stephani, H. Differential Equations: Their Solution Using Symmetries; Cambridge University Press: Cambridge, 1989.
[21] Ovsiannikov, L. V. Group Analysis of Differential Equations; Academic Press: New York, 1982.
[22] Bluman, G. W.; Kumei, S. Symmetries and Differential Equations; Springer-Verlag: New York, 1989.
Cohomology of commutative monoid acting on module
===============

(MathOverflow question, asked May 4, 2021 by xir; viewed 333 times; tags: reference-request, homological-algebra, group-cohomology, semigroups-and-monoids)

I have some naive questions about how to define the cohomology of a commutative monoid. One way to express the cohomology of a group $G$ with coefficients in a module $A$ is as $\operatorname{Ext}^i_{\mathbb{Z}[G]}(\mathbb{Z},A)$. If we have a commutative monoid $M$ (you can also assume it's cancellative if you want), we can follow the exact same recipe over the monoid algebra $\mathbb{Z}[M]$; I think this gives derived functors in the category of $M$-modules of "taking $M$-invariants," which is what I'd expect and want. I was wondering if this theory was developed anywhere, in terms of what analogues of standard group cohomology constructions/theorems exist, vanishing theorems, etc.

Incidentally, I've found by googling that there are various other monoid cohomologies, but constructed in ways that seem arcane to me, e.g. Leech or Grillet symmetric cohomology. I guess you could also take the cohomology of the classifying space of the monoid as a category. Do any of them restrict to/agree with the construction above when restricted to some nice class of commutative monoids? What are the relations between them?

Comments on the question:

– Dmitry Vaintrob: What is the use case that you have in mind? The most straightforward generalization of group cohomology would be $\operatorname{Ext}_M(\mathbb{Z},\mathbb{Z})$, which is the same as the categorical cohomology. The problem is that this is a boring invariant in the commutative cancellative case since it is invariant under group completion. Indeed, transposing to geometry, this is the self-Ext of the skyscraper sheaf at $1$ in $\operatorname{Spec}(\mathbb{Z}[M])$, which can be computed inside the open chart $\operatorname{Spec}(\mathbb{Z}[M^{\mathrm{gp}}])$ corresponding to the group completion.
– Dmitry Vaintrob: Maybe more generally, a very good first thing to do when working with a commutative monoid $M$ is to pass to the corresponding "toric" geometry object $X = \operatorname{Spec}\mathbb{Z}[M]$. The category of $M$-representations is equivalent to the category of quasicoherent sheaves on $X$, and invariants computed in this category have geometric meaning. If you want to encode the Hopf algebra of $M$ and not just its underlying ring, this is equivalent to considering its spectrum $X$ as a semigroup object. Most interesting monoid invariants should have meaning as invariants of commutative geometric semigroups.

– xir: I mostly asked because I was just curious, but the original use case I had was figuring out something about $\operatorname{Ext}_M(\mathbb{Z},A)$ for commutative cancellative $M$ and a certain non-trivial module $A$ which definitely can't be computed via pullback to the group completion. And yeah, that geometric picture seems very interesting and promising to me.

– xir: I'm not sure how to use it to compute an actual specific group of this sort, though. Also, by "the same as the categorical cohomology," what do you mean?

– Dmitry Vaintrob: The cohomology $\operatorname{Ext}_M(\mathbb{Z},A)$ most certainly is computable by pullback to the group completion! The module $\mathbb{Z}$ is supported in the open $\mathbb{Z}[M^{\mathrm{gp}}]$, and so any Ext can be computed after localizing to this open. By "same as the categorical cohomology" I mean it is the same as the cohomology of the trivial representation of the corresponding category (not sure if this is what you mean by "classifying space as a category").

[Six further comments not shown.]

Answer (by Benjamin Steinberg, answered May 4, 2021; edited May 4, 2021):

There are many different cohomology theories for monoids. Since you are using commutative monoids, you might be interested in Grillet's symmetric cohomology, but I am not very familiar with it. If we ignore Grillet (due to my ignorance mostly) then there are three cohomology theories that are popular for monoids: left/right Eilenberg-Mac Lane cohomology, Hochschild-Mitchell cohomology and Leech cohomology. Let me note that all three theories are also studied in the context of cohomology of small categories, but Leech cohomology is called Baues and Wirsching cohomology in that context since they rediscovered the same theory as Leech (but in the context of categories instead of monoids) without knowing it. For groups, all these cohomology theories give the same answer.

The cohomology theory that you mention, where you look at the trivial $\mathbb{Z}M$-module $\mathbb{Z}$, or equivalently the derived functors of the invariants, is somewhat studied due to its connections with string rewriting systems discovered by Anick-Squier-Groves. The problem with it is that the theory has a number of flaws. First, it is not left/right dual. For commutative monoids it makes no difference, but for noncommutative monoids the cohomology theories obtained by studying left versus right modules are radically different. This also explains why the classifying space approach is not very good. The classifying space for the left version of the theory and the right version are the same. So you can have a contractible classifying space and have interesting cohomology on one or both sides.
Probably this is due to the fact that cohomology with coefficients in a module doesn't, generally speaking, have a natural topological interpretation as local coefficient systems on the classifying space like it does for groups.

Another problem is that because monoids can have many idempotents, the trivial module can be projective, for example, if the monoid has a one-sided zero on the appropriate side. I think few semigroup theorists will respect a theory where adjoining a zero makes the whole thing trivial. (There is 0-cohomology introduced by Novikov that tries to rectify this.) The biggest problem I think with the cohomology is that $H^2(M,A)$ in this setting does not classify a reasonable notion of extension for monoids. It classifies extensions of $M$ by $A$ that are extensions in a very strong sense and don't really come up much in practice. The main use of $H^2$ is to define twisted monoid algebras over a commutative ring as far as I know.

A better cohomology theory is Hochschild-Mitchell cohomology, which for a monoid $M$ amounts to looking at the Hochschild cohomology of $\mathbb{Z}M$. So you take free resolutions of $\mathbb{Z}M$ as a bimodule and the coefficients will be a bimodule. It is the derived functor of taking the "center" of the bimodule. I never thought much about what this means for commutative monoids. The nice thing about Hochschild-Mitchell cohomology is that it is left-right dual and does not get affected in a serious way by adjoining a zero. It classifies a slightly more interesting notion of extension than Eilenberg-Mac Lane cohomology but still isn't great.

Leech cohomology is the most powerful cohomology theory. I mean this in the sense that Eilenberg-Mac Lane cohomology can be computed from Hochschild-Mitchell and Hochschild-Mitchell can be computed from Leech. In Leech cohomology you take coefficients in something more complicated than a module. Baues-Wirsching call it a natural system. I can't remember what Leech calls it. The basic idea is you replace your monoid $M$ by a small category that I think Leech calls $D(M)$ but I don't remember. Modules become functors. The objects of $D(M)$ are elements of $M$ and the arrows are more complicated. I think category theorists call it something like Quillen's twisted arrow category or something along those lines. The category structure of $D(M)$ contains a lot of important information about $M$.

The problem with the other cohomology theories is that if $e$ is an idempotent of $M$, then $eMe$ has a group of units $G_e$ and we might want to have our extension do something with $G_e$ that depends on $e$. More generally, attached to any monoid element $m$ is a group $G_m$ called the Schutzenberger group of $m$, and to classify extensions of $M$ that are determined by grouplike information you want to allow any Schutzenberger group to be extended. In Leech's category $D(M)$ the automorphism groups of the objects are the Schutzenberger groups of $m$ and your extensions can basically treat all Schutzenberger groups of $D$-classes (= isomorphism classes in $D(M)$) separately. So in Leech theory $H^2$ classifies something much more interesting. Also Leech's theory doesn't suffer these left-right issues or problems caused by zeroes. But I do think it is hard to absorb.
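[Editorial illustration, not part of the answer above: the remark about one-sided zeros can be made concrete. Suppose $M$ contains an element $z$ with $mz = z$ for all $m \in M$, give $\mathbb{Z}$ the trivial left $\mathbb{Z}M$-module structure, and let $\varepsilon : \mathbb{Z}M \to \mathbb{Z}$ be the augmentation. Then the map
\[
\sigma : \mathbb{Z} \to \mathbb{Z}M, \quad \sigma(1) = z,
\qquad\text{satisfies}\qquad
m\cdot\sigma(1) = mz = z = \sigma(m\cdot 1), \qquad \varepsilon(\sigma(1)) = 1,
\]
so $\sigma$ is a $\mathbb{Z}M$-module splitting of $\varepsilon$. Hence the trivial module $\mathbb{Z}$ is a direct summand of the free module $\mathbb{Z}M$, so it is projective and $\operatorname{Ext}^n_{\mathbb{Z}M}(\mathbb{Z},A) = 0$ for all $n \geq 1$ and every coefficient module $A$.]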
Comments on the answer:

– xir: Thanks for your very thorough answer! Just to be clear, which cohomology is "Eilenberg-MacLane cohomology": the cohomology of the classifying space, or the derived functor cohomology? Or do these coincide for some reason?

– Benjamin Steinberg: The derived functor. But the cohomology of the classifying space is the same as the derived functor if you use coefficients with a trivial monoid action.

– xir: That makes sense, thanks!
Generalized Inverses: Theory and Applications
=============================================
Adi Ben-Israel, Thomas N.E. Greville
Springer Science & Business Media, Apr 18, 2006 - Mathematics - 420 pages

1. The Inverse of a Nonsingular Matrix

It is well known that every nonsingular matrix $A$ has a unique inverse, denoted by $A^{-1}$, such that
\[
AA^{-1} = A^{-1}A = I, \tag{1}
\]
where $I$ is the identity matrix. Of the numerous properties of the inverse matrix, we mention a few. Thus,
\[
(A^{-1})^{-1} = A, \qquad
(A^{T})^{-1} = (A^{-1})^{T}, \qquad
(A^{*})^{-1} = (A^{-1})^{*}, \qquad
(AB)^{-1} = B^{-1}A^{-1},
\]
where $A^{T}$ and $A^{*}$, respectively, denote the transpose and conjugate transpose of $A$. It will be recalled that a real or complex number $\lambda$ is called an eigenvalue of a square matrix $A$, and a nonzero vector $x$ is called an eigenvector of $A$ corresponding to $\lambda$, if $Ax = \lambda x$. Another property of the inverse $A^{-1}$ is that its eigenvalues are the reciprocals of those of $A$.

2. Generalized Inverses of Matrices

A matrix has an inverse only if it is square, and even then only if it is nonsingular or, in other words, if its columns (or rows) are linearly independent. In recent years needs have been felt in numerous areas of applied mathematics for some kind of partial inverse of a matrix that is singular or even rectangular.
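As a small illustration of such a 'partial inverse' (this example is an editorial addition, not part of the publisher's description), the rectangular matrix $A$ below has no ordinary inverse, but it does have a unique Moore–Penrose inverse $A^{+}$:
\[
A = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad
A^{+} = (A^{T}A)^{-1}A^{T} = \begin{pmatrix} \tfrac12 & \tfrac12 \end{pmatrix}, \qquad
AA^{+} = \begin{pmatrix} \tfrac12 & \tfrac12 \\ \tfrac12 & \tfrac12 \end{pmatrix}, \qquad
A^{+}A = \begin{pmatrix} 1 \end{pmatrix}.
\]
One checks directly that $AA^{+}A = A$, $A^{+}AA^{+} = A^{+}$, and that $AA^{+}$ and $A^{+}A$ are symmetric; these are the four Penrose conditions that single out $A^{+}$.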
About the author (2006)

Adi Ben-Israel is Professor of Operations Research, Business and Mathematics at Rutgers University, New Brunswick, NJ. Previously he was Professor of Applied Mathematics at the University of Delaware, Northwestern University, and the Technion-Israel Institute of Technology. The late Thomas N.E. Greville was Professor of Mathematics, and a member of the US Army Mathematics Research Center at the University of Wisconsin, Madison, WI.

Bibliographic information

Title: Generalized Inverses: Theory and Applications (CMS Books in Mathematics)
Authors: Adi Ben-Israel, Thomas N.E. Greville
Edition: 2, illustrated
Publisher: Springer Science & Business Media, 2006
ISBN: 0387216340, 9780387216348
Length: 420 pages
Subjects: Mathematics / Algebra / General; Mathematics / Algebra / Linear; Mathematics / Functional Analysis; Mathematics / Numerical Analysis
Law of the unconscious statistician
===============

In probability theory and statistics, the law of the unconscious statistician, or LOTUS, is a theorem which expresses the expected value of a function $g(X)$ of a random variable $X$ in terms of $g$ and the probability distribution of $X$.

The form of the law depends on the type of random variable $X$ in question. If the distribution of $X$ is discrete and one knows its probability mass function $p_X$, then the expected value of $g(X)$ is
\[
\operatorname{E}[g(X)] = \sum_{x} g(x)\,p_X(x),
\]
where the sum is over all possible values $x$ of $X$. If instead the distribution of $X$ is continuous with probability density function $f_X$, then the expected value of $g(X)$ is
\[
\operatorname{E}[g(X)] = \int_{-\infty}^{\infty} g(x)\,f_X(x)\,\mathrm{d}x.
\]
Both of these special cases can be expressed in terms of the cumulative probability distribution function $F_X$ of $X$, with the expected value of $g(X)$ now given by the Lebesgue–Stieltjes integral
\[
\operatorname{E}[g(X)] = \int_{-\infty}^{\infty} g(x)\,\mathrm{d}F_X(x).
\]
In even greater generality, $X$ could be a random element in any measurable space, in which case the law is given in terms of measure theory and the Lebesgue integral. In this setting, there is no need to restrict the context to probability measures, and the law becomes a general theorem of mathematical analysis on Lebesgue integration relative to a pushforward measure.

Etymology

This proposition is (sometimes) known as the law of the unconscious statistician because of a purported tendency to think of the aforementioned law as the very definition of the expected value of a function $g(X)$ and a random variable $X$, rather than (more formally) as a consequence of the true definition of expected value. The naming is sometimes attributed to Sheldon Ross' textbook Introduction to Probability Models, although he removed the reference in later editions. Many statistics textbooks do present the result as the definition of expected value.
Joint distributions

A similar property holds for joint distributions, or equivalently, for random vectors. For discrete random variables $X$ and $Y$, a function of two variables $g$, and joint probability mass function $p_{X,Y}(x,y)$:
\[
\operatorname{E}[g(X,Y)] = \sum_{y}\sum_{x} g(x,y)\,p_{X,Y}(x,y).
\]
In the absolutely continuous case, with $f_{X,Y}(x,y)$ being the joint probability density function,
\[
\operatorname{E}[g(X,Y)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x,y)\,f_{X,Y}(x,y)\,\mathrm{d}x\,\mathrm{d}y.
\]

Special cases

A number of special cases are given here. In the simplest case, where the random variable $X$ takes on countably many values (so that its distribution is discrete), the proof is particularly simple, and holds without modification if $X$ is a discrete random vector or even a discrete random element. The case of a continuous random variable is more subtle, since the proof in generality requires subtle forms of the change-of-variables formula for integration. However, in the framework of measure theory, the discrete case generalizes straightforwardly to general (not necessarily discrete) random elements, and the case of a continuous random variable is then a special case by making use of the Radon–Nikodym theorem.

Discrete case

Suppose that $X$ is a random variable which takes on only finitely or countably many different values $x_1, x_2, \ldots$, with probabilities $p_1, p_2, \ldots$. Then for any function $g$ of these values, the random variable $g(X)$ has values $g(x_1), g(x_2), \ldots$, although some of these may coincide with each other. For example, this is the case if $X$ can take on both values $1$ and $-1$ and $g(x) = x^2$. Let $y_1, y_2, \ldots$ enumerate the possible distinct values of $g(X)$, and for each $i$ let $I_i$ denote the collection of all $j$ with $g(x_j) = y_i$. Then, according to the definition of expected value, there is
\[
\operatorname{E}[g(X)] = \sum_{i} y_i\,p_{g(X)}(y_i).
\]
Since a $y_i$ can be the image of multiple, distinct $x_j$, it holds that
\[
p_{g(X)}(y_i) = \sum_{j\in I_i} p_X(x_j).
\]
Then the expected value can be rewritten as
\[
\sum_{i} y_i\,p_{g(X)}(y_i) = \sum_{i} y_i \sum_{j\in I_i} p_X(x_j) = \sum_{i}\sum_{j\in I_i} g(x_j)\,p_X(x_j) = \sum_{x} g(x)\,p_X(x).
\]
This equality relates the average of the outputs of $g(X)$ as weighted by the probabilities of the outputs themselves to the average of the outputs of $g(X)$ as weighted by the probabilities of the outputs of $X$. If $X$ takes on only finitely many possible values, the above is fully rigorous. However, if $X$ takes on countably many values, the last equality given does not always hold, as seen by the Riemann series theorem. Because of this, it is necessary to assume the absolute convergence of the sums in question.
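A small worked example (added here; it does not appear in the article) makes the point explicit. Let $X$ take the values $-1$, $0$ and $1$ with probabilities $\tfrac14$, $\tfrac12$ and $\tfrac14$, and let $g(x) = x^2$. LOTUS computes the expectation directly from the distribution of $X$:
\[
\operatorname{E}[g(X)] = (-1)^2\cdot\tfrac14 + 0^2\cdot\tfrac12 + 1^2\cdot\tfrac14 = \tfrac12,
\]
whereas the long way first finds the distribution of $Y = X^2$, namely $P(Y=1) = \tfrac12$ and $P(Y=0) = \tfrac12$, and then computes $\operatorname{E}[Y] = 1\cdot\tfrac12 + 0\cdot\tfrac12 = \tfrac12$; the two answers agree, as the theorem guarantees.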
Continuous case

Suppose that $X$ is a random variable whose distribution has a continuous density $f$. If $g$ is a general function, then the probability that $g(X)$ is valued in a set of real numbers $K$ equals the probability that $X$ is valued in $g^{-1}(K)$, which is given by
\[
\int_{g^{-1}(K)} f(x)\,\mathrm{d}x.
\]
Under various conditions on $g$, the change-of-variables formula for integration can be applied to relate this to an integral over $K$, and hence to identify the density of $g(X)$ in terms of the density of $X$. In the simplest case, if $g$ is differentiable with nowhere-vanishing derivative, then the above integral can be written as
\[
\int_{K} f\bigl(g^{-1}(y)\bigr)\,(g^{-1})'(y)\,\mathrm{d}y,
\]
thereby identifying $g(X)$ as possessing the density $f(g^{-1}(y))\,(g^{-1})'(y)$. The expected value of $g(X)$ is then identified as
\[
\int_{-\infty}^{\infty} y\,f\bigl(g^{-1}(y)\bigr)\,(g^{-1})'(y)\,\mathrm{d}y = \int_{-\infty}^{\infty} g(x)\,f(x)\,\mathrm{d}x,
\]
where the equality follows by another use of the change-of-variables formula for integration. This shows that the expected value of $g(X)$ is encoded entirely by the function $g$ and the density $f$ of $X$.

The assumption that $g$ is differentiable with nonvanishing derivative, which is necessary for applying the usual change-of-variables formula, excludes many typical cases, such as $g(x) = x^2$. The result still holds true in these broader settings, although the proof requires more sophisticated results from mathematical analysis such as Sard's theorem and the coarea formula. In even greater generality, using the Lebesgue theory as below, it can be found that the identity
\[
\operatorname{E}[g(X)] = \int_{-\infty}^{\infty} g(x)\,f(x)\,\mathrm{d}x
\]
holds true whenever $X$ has a density $f$ (which does not have to be continuous) and whenever $g$ is a measurable function for which $g(X)$ has finite expected value. (Every continuous function is measurable.) Furthermore, without modification to the proof, this holds even if $X$ is a random vector (with density) and $g$ is a multivariable function; the integral is then taken over the multi-dimensional range of values of $X$.
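A concrete continuous illustration (also an added example, not part of the article): let $X$ be uniformly distributed on $[0,1]$ and $g(x) = x^2$. LOTUS gives
\[
\operatorname{E}[X^2] = \int_0^1 x^2\cdot 1\,\mathrm{d}x = \tfrac13,
\]
which agrees with the longer computation that first finds the density of $Y = X^2$, namely $f_Y(y) = \tfrac{1}{2\sqrt{y}}$ on $(0,1)$, and then evaluates $\operatorname{E}[Y] = \int_0^1 y\,\tfrac{1}{2\sqrt{y}}\,\mathrm{d}y = \int_0^1 \tfrac{\sqrt{y}}{2}\,\mathrm{d}y = \tfrac13$.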
Measure-theoretic formulation

An abstract and general form of the result is available using the framework of measure theory and the Lebesgue integral. Here, the setting is that of a measure space $(\Omega,\mu)$ and a measurable map $X$ from $\Omega$ to a measurable space $\Omega'$. The theorem then says that for any measurable function $g$ on $\Omega'$ which is valued in real numbers (or even the extended real number line), there is
\[
\int_{\Omega} g\circ X\,\mathrm{d}\mu = \int_{\Omega'} g\,\mathrm{d}(X_{\sharp}\mu)
\]
(interpreted as saying, in particular, that either side of the equality exists if the other side exists). Here $X_{\sharp}\mu$ denotes the pushforward measure on $\Omega'$. The 'discrete case' given above is the special case arising when $X$ takes on only countably many values and $\mu$ is a probability measure. In fact, the discrete case (although without the restriction to probability measures) is the first step in proving the general measure-theoretic formulation, as the general version follows therefrom by an application of the monotone convergence theorem. Without any major changes, the result can also be formulated in the setting of outer measures.

If $\mu$ is a σ-finite measure, the theory of the Radon–Nikodym derivative is applicable. In the special case that the measure $X_{\sharp}\mu$ is absolutely continuous relative to some background σ-finite measure $\nu$ on $\Omega'$, there is a real-valued function $f_X$ on $\Omega'$ representing the Radon–Nikodym derivative of the two measures, and then
\[
\int_{\Omega'} g\,\mathrm{d}(X_{\sharp}\mu) = \int_{\Omega'} g\,f_X\,\mathrm{d}\nu.
\]
In the further special case that $\Omega'$ is the real number line, as in the contexts discussed above, it is natural to take $\nu$ to be the Lebesgue measure, and this then recovers the 'continuous case' given above whenever $\mu$ is a probability measure. (In this special case, the condition of σ-finiteness is vacuous, since Lebesgue measure and every probability measure are trivially σ-finite.)

References

1. DeGroot & Schervish 2014, pp. 213–214.
2. Casella & Berger 2001, Section 2.2; Ross 2019.
3. Casella & Berger 2001, Section 2.2.
4. Ross 2019.
5. Feller 1968, Section IX.2.
6. Papoulis & Pillai 2002, Chapter 5.
7. Bogachev 2007, Section 3.6; Cohn 2013, Section 2.6; Halmos 1950, Section 39.
8. Federer 1969, Section 2.4.
9. Halmos 1950, Section 39.

Bogachev, V. I. (2007). Measure Theory, Volume I. Berlin: Springer-Verlag. doi:10.1007/978-3-540-34514-5. ISBN 978-3-540-34513-8. MR 2267655. Zbl 1120.28001.
Casella, George; Berger, Roger L. (2001). Statistical Inference. Duxbury Advanced Series (second edition of 1990 original). Pacific Grove, CA: Duxbury. ISBN 0-534-11958-1. Zbl 0699.62001.
Cohn, Donald L. (2013). Measure Theory. Birkhäuser Advanced Texts: Basler Lehrbücher (second edition of 1980 original). New York: Birkhäuser/Springer. doi:10.1007/978-1-4614-6956-8. ISBN 978-1-4614-6955-1. MR 3098996. Zbl 1292.28002.
DeGroot, Morris H.; Schervish, Mark J. (2014). Probability and Statistics (fourth edition of 1975 original). Pearson Education. ISBN 0-321-50046-6. MR 0373075. Zbl 0619.62001.
Federer, Herbert (1969). Geometric Measure Theory. Die Grundlehren der mathematischen Wissenschaften, Vol. 153. Berlin–Heidelberg–New York: Springer-Verlag. doi:10.1007/978-3-642-62010-2. ISBN 978-3-540-60656-7. MR 0257325. Zbl 0176.00801.
Feller, William (1968). An Introduction to Probability Theory and Its Applications, Volume I (third edition of 1950 original). New York–London–Sydney: John Wiley & Sons, Inc. MR 0228020. Zbl 0155.23101.
Halmos, Paul R. (1950). Measure Theory. New York: D. Van Nostrand Co., Inc. doi:10.1007/978-1-4684-9440-2. MR 0033869. Zbl 0040.16802.
Papoulis, Athanasios; Pillai, S. Unnikrishna (2002). Probability, Random Variables, and Stochastic Processes (fourth edition of 1965 original). New York: McGraw-Hill. ISBN 0-07-366011-6.
Ross, Sheldon M. (2019). Introduction to Probability Models (twelfth edition of 1972 original). London: Academic Press. doi:10.1016/C2017-0-01324-1. ISBN 978-0-12-814346-9. MR 3931305. Zbl 1408.60002.
Gwendolyn Brooks
1917—2000

Gwendolyn Brooks is one of the most influential and widely read 20th-century American poets. The author of more than 20 books, she was highly regarded even during her lifetime and had the distinction of being the first Black poet to win the Pulitzer Prize. She was also the first Black woman to hold the role of Consultant in Poetry to the Library of Congress, a position now referred to as the Poet Laureate Consultant in Poetry, and served as the Illinois poet laureate for 32 years. Her body of work gave her, according to critic George E. Kent, "a unique position in American letters. Not only has she combined a strong commitment to racial identity and equality with a mastery of poetic techniques, but she has also managed to bridge the gap between the academic poets of her generation in the 1940s and the young Black militant writers of the 1960s."

Brooks was born in Topeka, Kansas, but her family moved to Chicago when she was young. Her father was a janitor who had hoped to become a doctor; her mother was a schoolteacher and a classically trained pianist; both supported their daughter's passion for reading and writing. Brooks was 13 when she published her first poem, "Eventide," in American Childhood; by the time she was 17, she was publishing poems frequently in the Chicago Defender. After attending junior college and working for the National Association for the Advancement of Colored People (NAACP), she developed her craft in poetry workshops and completed her first collection, A Street in Bronzeville (Harper & Brothers, 1945).

The poems in A Street in Bronzeville and the Pulitzer Prize–winning Annie Allen (Harper & Brothers, 1949) are "devoted to small, carefully cerebrated, terse portraits of the Black urban poor," commented Richard K. Barksdale in Modern Black Poets: A Collection of Critical Essays (Prentice-Hall, 1973). Several critics welcomed Brooks as a new voice in poetry. Fellow poet Rolfe Humphries wrote in the New York Times Book Review that "we have, in A Street in Bronzeville, a good book and a real poet," and Langston Hughes, in a review of Annie Allen for Voices, remarked that "the people and poems in Gwendolyn Brooks' book are alive, reaching, and very much of today."

In the 1950s, Brooks published her only novel, Maud Martha (Harper & Brothers, 1953), which details its title character's life in short vignettes. Maud suffers prejudice not only from white people but also from lighter-skinned African Americans, something that mirrored Brooks's experience. Her later work took on politics more overtly, displaying what National Observer contributor Bruce Cook termed "an intense awareness of the problems of color and justice." Toni Cade Bambara reported in the New York Times Book Review that at the age of 50 something happened to Brooks, a something most certainly in evidence in In the Mecca [Harper & Row, 1968] and subsequent works—a new movement and energy, intensity, richness, power of statement and a new stripped, lean, compressed style. A change of style prompted by a change of mind. This shift or change is often attributed to Brooks's attendance at a gathering of Black writers at Fisk University in 1967; however, more recently, scholars such as Evie Shockley and Cheryl Clarke challenge the idea that Brooks's career can be so neatly divided.
Clarke, for example, described In the Mecca as Brooks’s “final seminar on the Western lyric.” Brooks herself noted this shift as quoted in The New York Times: “Those young black writers seemed so proud and committed to their own people. The poets among them felt that black poets should write as blacks, about blacks, and address themselves to blacks.” She later wrote, “If it hadn't been for these young people, these young writers who influenced me, I wouldn't know what I know about this society. By associating with them I know who I am.” From that time forward, Brooks thought of herself as an African determined not to compromise social comment for the sake of technical proficiency. Essayist Charles Israel suggested that In the Mecca’s title poem, for example, shows “a deepening of Brooks’s concern with social problems.” A mother loses her small daughter in the block-long ghetto tenement, the Mecca; this long poem traces her steps through the building, revealing her neighbors to be indifferent or insulated by their own personal obsessions. The mother finds her little girl, who “never learned that black is not beloved.” Critic R. Baxter Miller, writing in Black American Poets between Worlds, 1940-1960 (University of Tennessee Press, 1986), observed, “In the Mecca is a most complex and intriguing book; it seeks to balance the sordid realities of urban life with an imaginative process of reconciliation and redemption.” Other poems in the book, such as those occasioned by the death of Malcolm X and the dedication of a mural of Black heroes painted on a Chicago building, express Brooks’s commitment to her community’s awareness of itself as a political as well as a cultural entity. Brooks’s activism led her to leave major publisher Harper & Row in favor of fledgling Black publishing companies. In the 1970s, she worked with Dudley Randall’s Broadside Press to publish her poetry collections Riot (1969), Family Pictures (1970), Aloneness (1971), Aurora (1972), and Beckonings (1975) and the first volume of her autobiography, Report from Part One (1972). She also edited two collections of poetry—A Broadside Treasury (1971) and Jump Bad: A New Chicago Anthology (1971)—for the Detroit-area press. The Chicago-based Third World Press, run by Haki R. Madhubuti, a young poet Brooks met during the 1960s, also brought many Brooks titles into print. Brooks was the first writer to read in the Broadside Press original Poet’s Theatre series and the first poet to read in the second opening of the series when the press was revived under new ownership in 1988. Brooks, however, felt that Riot, Family Pictures, Beckonings, and other books Black publishers brought out received only brief notice from critics of the literary establishment because they “did not wish to encourage Black publishers.” Among Brooks’s major prose works are her two volumes of autobiography. When Report from Part One was published, some reviewers expressed disappointment that it did not provide the level of personal detail or the insight into Black literature they had expected. “They wanted a list of domestic spats,” remarked Brooks. Bambara noted that it “is not a sustained dramatic narrative for the nosey, being neither the confessions of a private woman/poet or the usual sort of mahogany-desk memoir public personages inflict upon the populace at the first sign of a cardiac. … It documents the growth of Gwen Brooks.” Other critics praised the book for explaining the poet’s new orientation toward her racial heritage and her role as a poet. 
In a passage she presented again in later books as a definitive statement, Brooks wrote I—who have ‘gone the gamut’ from an almost angry rejection of my dark skin by some of my brainwashed brothers and sisters to a surprised queenhood in the new Black sun—am qualified to enter at least the kindergarten of new consciousness now. New consciousness and trudge-toward-progress. I have hopes for myself… I know now that I am essentially an essential African, in occupancy here because of an indeed ‘peculiar’ institution… I know that Black fellow-feeling must be the Black man’s encyclopedic Primer. I know that the Black-and-white integration concept, which in the mind of some beaming early saint was a dainty spinning dream, has wound down to farce… I know that the Black emphasis must be not against white but FOR Black… In the Conference-That-Counts, whose date may be 1980 or 2080 (woe betide the Fabric of Man if it is 2080), there will be no looking up nor looking down. Brooks put some of the finishing touches on the second volume of her autobiography at the age of 68 while serving as Consultant in Poetry to the Library of Congress. Of her many duties there, the most important, in her view, were visits to local schools. Similar visits to colleges, universities, prisons, hospitals, and drug rehabilitation centers characterized her tenure as poet laureate of Illinois. In that role, she sponsored and hosted annual literary awards ceremonies at which she presented prizes funded, as related by Reginald Gibbons in the Chicago Tribune, “out of [Brooks’s] own pocket, which, despite her modest means, is of legendary depth.” Because of the wide recognition of her service and achievements, several schools were named for her, and she was similarly honored in 1970 by the founding of Western Illinois University’s Gwendolyn Brooks Cultural Center. In 2017, the centenary of Brooks’s birth was celebrated at the University of Chicago and the University of Illinois, Urbana-Champaign, where her papers are held. “Brooks Day” is celebrated annually in her hometown of Chicago.
340
The Ultimate Guide to Transfer Operators
A Comprehensive Resource for Understanding Dynamical Systems
Sarah Lee (AI generated, Llama-4-Maverick-17B-128E-Instruct-FP8), 5 min read, May 28, 2025

Fundamentals of Transfer Operators

Transfer operators are a powerful tool for analyzing and understanding complex dynamical systems. In this section, we explore the mathematical definition and properties of transfer operators, their relationship to other dynamical systems concepts, and examples of transfer operators in simple systems.

Mathematical Definition and Properties

A transfer operator is a linear operator that describes the evolution of a probability density function under the action of a dynamical system. Let $T: X \to X$ be a measurable map on a measure space $(X, \mu)$. The transfer operator $\mathcal{L}: L^1(X) \to L^1(X)$ associated with $T$ is defined by

$$\int_A \mathcal{L}f(x)\, d\mu(x) = \int_{T^{-1}(A)} f(x)\, d\mu(x)$$

for all measurable sets $A \subset X$ and all $f \in L^1(X)$.

The transfer operator has several important properties:

- Linearity: $\mathcal{L}$ is a linear operator, meaning that $\mathcal{L}(af + bg) = a\mathcal{L}f + b\mathcal{L}g$ for all $f, g \in L^1(X)$ and $a, b \in \mathbb{R}$.
- Positivity: $\mathcal{L}$ is a positive operator, meaning that $\mathcal{L}f \geq 0$ whenever $f \geq 0$.
- Conservation of mass: $\mathcal{L}$ preserves the integral of $f$, meaning that $\int_X \mathcal{L}f(x)\, d\mu(x) = \int_X f(x)\, d\mu(x)$.

Relationship to Other Dynamical Systems Concepts

Transfer operators are closely related to other concepts in dynamical systems:

- Frobenius-Perron operator: the transfer operator is also known as the Frobenius-Perron operator, particularly in the context of measure-preserving transformations.
- Koopman operator: the Koopman operator is the adjoint of the transfer operator and is used to study the evolution of observables under the action of a dynamical system.
- Invariant measures: the transfer operator can be used to study the existence and properties of invariant measures, that is, measures that are preserved under the action of the dynamical system.

Examples of Transfer Operators in Simple Systems

To illustrate the concept, first consider the doubling map $T(x) = 2x \bmod 1$ on the unit interval $[0,1)$. The transfer operator associated with this map (with respect to Lebesgue measure) is

$$\mathcal{L}f(x) = \tfrac{1}{2} f\!\left(\tfrac{x}{2}\right) + \tfrac{1}{2} f\!\left(\tfrac{x+1}{2}\right).$$
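As an illustration (not part of the original article), the doubling-map formula above can be checked numerically. The following minimal NumPy sketch, assuming Lebesgue measure on $[0,1)$ and an arbitrarily chosen grid and sample density, applies $\mathcal{L}$ to a density and verifies conservation of mass and the invariance of the uniform density.

```python
import numpy as np

def transfer_doubling(f, x):
    """Apply the doubling-map transfer operator to a density f evaluated at points x."""
    # Lf(x) = 1/2 f(x/2) + 1/2 f((x+1)/2)
    return 0.5 * f(x / 2) + 0.5 * f((x + 1) / 2)

# Example density on [0, 1): f(x) = 2x (integrates to 1).
f = lambda x: 2 * x
x = np.linspace(0.0, 1.0, 10_001, endpoint=False)
dx = x[1] - x[0]

Lf = transfer_doubling(f, x)

# Conservation of mass: the integral of Lf equals the integral of f (both close to 1).
print("mass before:", np.trapz(f(x), dx=dx))
print("mass after :", np.trapz(Lf, dx=dx))

# The uniform density is invariant under the doubling map: L1 = 1.
ones = transfer_doubling(lambda x: np.ones_like(x), x)
print("uniform density invariant:", np.allclose(ones, 1.0))
```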
As a second example, consider the logistic map $T(x) = 4x(1-x)$ on the unit interval $[0,1)$. With respect to Lebesgue measure, the associated transfer operator is

$$\mathcal{L}f(x) = \frac{1}{4\sqrt{1-x}} \left[ f\!\left(\frac{1-\sqrt{1-x}}{2}\right) + f\!\left(\frac{1+\sqrt{1-x}}{2}\right) \right],$$

since the two preimages of $x$ under $T$ are $(1 \pm \sqrt{1-x})/2$ and $|T'| = 4\sqrt{1-x}$ at both of them.

Transfer Operators in Practice

In this section, we discuss the numerical computation of transfer operators, their applications to chaotic systems and attractors, and their use in prediction and control.

Numerical Computation of Transfer Operators

Computing transfer operators numerically is a challenging task, particularly for high-dimensional systems. Common methods include:

- Ulam's method: a popular technique for approximating the transfer operator by a finite-dimensional matrix, obtained by partitioning the state space into cells and estimating transition probabilities between cells.
- Galerkin approximation: approximating the transfer operator on a finite-dimensional subspace of $L^1(X)$.
- Monte Carlo methods: approximating the transfer operator by sampling the underlying measure.

Applications to Chaotic Systems and Attractors

Transfer operators have been used to study a wide range of chaotic systems and attractors, including:

- Strange attractors: studying properties such as fractal dimension and Lyapunov exponents.
- Chaotic mixing: studying the mixing properties of chaotic systems, including the rate of mixing and the distribution of mixing times.

Using Transfer Operators for Prediction and Control

Transfer operators can be used for prediction and control in several ways:

- Predicting probability densities: forecasting the evolution of probability densities under the action of a dynamical system.
- Optimal control: designing optimal control strategies for complex systems.

Advanced Topics in Transfer Operators

In this section, we discuss spectral analysis and decomposition, transfer operators for non-autonomous systems, and open problems and future directions.

Spectral Analysis and Decomposition

Spectral analysis and decomposition are powerful tools for understanding the properties of transfer operators. Key concepts include:

- Spectrum: the set of eigenvalues (and associated eigenfunctions) of the transfer operator. For many systems, the leading eigenvalue 1 corresponds to an invariant density, and eigenvalues inside the unit circle govern the rate of mixing.
- Spectral decomposition: decomposing a transfer operator into its spectral components.

[Figure: spectral decomposition of a transfer operator (rendered as a Mermaid diagram in the original article).]
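To make Ulam's method and the spectral picture concrete, here is a short sketch (not from the original article) that builds an Ulam matrix for the logistic map $T(x) = 4x(1-x)$ by Monte Carlo sampling and inspects its leading eigenvalues. The number of cells and samples per cell are arbitrary choices for illustration.

```python
import numpy as np

def ulam_matrix(T, n_cells=200, samples_per_cell=500, seed=0):
    """Ulam approximation of the transfer operator of T on [0, 1)."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(0.0, 1.0, n_cells + 1)
    P = np.zeros((n_cells, n_cells))
    for i in range(n_cells):
        # Sample points in cell i, push them forward, and record which cells they land in.
        x = rng.uniform(edges[i], edges[i + 1], samples_per_cell)
        j = np.minimum((T(x) * n_cells).astype(int), n_cells - 1)
        np.add.at(P[i], j, 1.0 / samples_per_cell)
    return P  # row-stochastic: P[i, j] ~ Prob(T(x) in cell j | x in cell i)

T = lambda x: 4.0 * x * (1.0 - x)
P = ulam_matrix(T)

# The eigenvalues of the (transposed) Ulam matrix approximate the transfer-operator spectrum.
eigvals, eigvecs = np.linalg.eig(P.T)
order = np.argsort(-np.abs(eigvals))
print("leading eigenvalues:", np.round(eigvals[order][:5], 3))

# The eigenvector for the eigenvalue near 1 approximates the invariant density 1/(pi*sqrt(x(1-x))).
rho = np.real(eigvecs[:, order[0]])
rho = rho / (rho.sum() * (1.0 / len(rho)))  # normalize so the density integrates to 1
print("density near x = 0.5:", rho[len(rho) // 2])  # exact value is 2/pi, about 0.64
```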
Transfer Operators for Non-Autonomous Systems

Non-autonomous systems are systems that depend explicitly on time. Transfer operators can be generalized to non-autonomous systems by letting both the map and the measure vary with time:

$$\mathcal{L}f(x, t) = \int_{T_t^{-1}(x)} f(y, t-1)\, d\mu_t(y),$$

where $T_t$ is the time-dependent map and $\mu_t$ is the time-dependent measure.

Open Problems and Future Directions

Despite the many advances in transfer operator theory, many open problems and future directions remain, including:

- High-dimensional systems: developing efficient methods for computing transfer operators in high-dimensional systems.
- Non-smooth systems: developing transfer operator theory for non-smooth systems, such as systems with discontinuities or singularities.
- Machine learning: exploring the connection between transfer operators and machine learning techniques.

References

- "A Concise Introduction to Transfer Operators" by S. Klus et al.
- "Transfer Operators, Endomorphisms, and Measurable Partitions" by S. Bezuglyi et al.
- "Spectral Theory of Dynamical Systems" by M. Blank et al.

FAQ

What is a transfer operator? A transfer operator is a linear operator that describes the evolution of a probability density function under the action of a dynamical system.

What are the properties of a transfer operator? A transfer operator has several important properties, including linearity, positivity, and conservation of mass.

How are transfer operators used in practice? Transfer operators are used in a variety of applications, including predicting probability densities, optimal control, and studying chaotic systems and attractors.

What are some open problems in transfer operator theory? Open problems include developing efficient methods for computing transfer operators in high-dimensional systems, developing transfer operator theory for non-smooth systems, and exploring the connection between transfer operators and machine learning techniques.
341
Derivation of Basquin Constants from S-N curve

From a given material's fatigue strength S-N curve, you can derive the Basquin equation constants, or let the program calculate the Basquin constants by specifying the number of data points on the S-N curve to include in the curve-fitting calculations.

Some materials available from the SOLIDWORKS Material database and the SOLIDWORKS Material Web Portal have fatigue S-N curve data. For example, the S-N curve of the material Ti-6Al-4V (Metal_Ti Alpha-Beta Alloy) downloaded from the SOLIDWORKS Material Web Portal (material database format .sldmat) is shown in a log S - log N scale, and the numerical values of the first four S-N data points are given in the accompanying table.

Basquin's equation is a power-law relationship that describes the linear relationship, on a log-log plot, between the applied cyclic stress S on the y-axis and the number of cycles to failure N on the x-axis. It can be written as

S_r = B * N^(-1/m),

where N is the number of cycles to failure (usually more than 10^4), S_r is the reference value of fatigue strength (in Simulation this is the stress range, taken as 2 x the alternating stress), m is the slope of the log S - log N fatigue strength curve, and B is the value of the stress at one cycle.

To calculate the slope m of the Basquin equation, take two S-N data points (N_1, S_1) and (N_2, S_2) and solve the system of equations

S_1 = B * N_1^(-1/m),  S_2 = B * N_2^(-1/m).

Taking the log of both expressions and subtracting gives

m = log(N_2 / N_1) / log(S_1 / S_2).

Substituting the first two S-N data points from the table, you calculate first m and then B = S_1 * N_1^(1/m).

For the constant B, the program considers the stress range value (from the maximum cyclic stress to the minimum cyclic stress). If the stress values of the S-N curve are given as alternating stresses (which is the common practice), multiply these stresses by 2 to calculate the constant B (stress range = 2 x alternating stress, assuming a zero mean stress and full reversal of the cyclic load). If the S-N curve data are given as stress range values, apply them directly in the equation for estimating the constant B. For the calculation of the slope constant m, multiplying the stresses by 2 does not alter the slope value.
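The two-point derivation above can be sketched in a few lines of Python. This is an illustration, not part of the SOLIDWORKS documentation; the sample S-N points are made up for demonstration, and the Basquin form S_r = B * N^(-1/m) is the one assumed in the text above.

```python
import math

def basquin_constants(n1, s1, n2, s2):
    """Derive the Basquin slope m and constant B from two S-N points.

    Assumes S_r = B * N**(-1/m), with stresses given as stress ranges
    (2 x alternating stress), as described above.
    """
    m = math.log(n2 / n1) / math.log(s1 / s2)   # slope of the log S - log N line
    b = s1 * n1 ** (1.0 / m)                    # stress at one cycle
    return m, b

# Hypothetical S-N points (stress ranges in MPa): 900 MPa at 1e4 cycles, 600 MPa at 1e6 cycles.
m, b = basquin_constants(1e4, 900.0, 1e6, 600.0)
print(f"m = {m:.2f}, B = {b:.0f} MPa")

# Eurocode 9 example from the text below: stress range 100 MPa at 2e6 cycles, slope m = 7.
b_eurocode = 100.0 * (2e6) ** (1.0 / 7.0)
print(f"B (Eurocode detail category, m = 7) = {b_eurocode:.0f} MPa")  # about 795 MPa
```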
You can also find plots of fatigue strength curves in design codes such as Eurocode 9: Design of aluminum structures - Structures susceptible to fatigue, Ref. EN 1999-1-3:2007/A1.

Example of Fatigue Strength S-N Curve

In Eurocode 9 you can find numerical values of the constant slope m for different detail categories, and then calculate B. For example, from Table J.2 - Detail categories for plain members, EN 1999-1-3:2007/A1, for a simple plate with holes the stress range is Δσ = 100 MPa at N = 2 x 10^6 cycles and the slope is m = 7; then B = Δσ * N^(1/m) = 100 x (2 x 10^6)^(1/7), which is approximately 795 MPa.

To let the program perform the curve-fitting on a given set of S-N data to a straight line, select Estimate Basquin constants from S-N curve. In this case, make sure that Interpolate is set to Log-log, and select the last S-N data point to consider for the curve-fitting in Consider the cut-off point for the S-N curve at row. The two graphs show the superposition of an original S-N curve (red line) with the Basquin equation curve-fitting line (green line) for 2 (a) and 22 (b) S-N data points, respectively. It is recommended to check the quality of the Basquin curve-fitting before you proceed with the analysis. The quality of the curve-fitting line in approximating the original S-N curve is best for the portion of the S-N curve up to the cut-off point.

(a) Basquin curve-fitting with 2 S-N data points (green line). (b) Basquin curve-fitting with 22 S-N data points (green line).

Fatigue Strength S-N Curve Reference: Figure J.1, EN 1999-1-3:2007, Annex J, Eurocode 9: Design of aluminum structures - Structures susceptible to fatigue
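The curve-fitting option described above (fitting a straight line to the S-N data in log-log space up to a chosen cut-off row) can be imitated with an ordinary least-squares fit. The sketch below is illustrative only, uses made-up S-N data, and assumes the same Basquin form as above; it is not the SOLIDWORKS implementation.

```python
import numpy as np

def fit_basquin(cycles, stress_range, cutoff_row=None):
    """Least-squares fit of log S = log B - (1/m) * log N up to a cut-off row.

    Returns (m, B) for the assumed Basquin form S_r = B * N**(-1/m).
    """
    n = np.asarray(cycles[:cutoff_row], dtype=float)
    s = np.asarray(stress_range[:cutoff_row], dtype=float)
    slope, intercept = np.polyfit(np.log10(n), np.log10(s), 1)  # straight line in log-log space
    m = -1.0 / slope
    B = 10.0 ** intercept
    return m, B

# Made-up S-N data (stress range in MPa vs cycles to failure).
cycles = [1e4, 3e4, 1e5, 3e5, 1e6, 3e6, 1e7]
stress = [850, 760, 660, 590, 510, 455, 400]

m, B = fit_basquin(cycles, stress, cutoff_row=5)   # consider only the first 5 rows
print(f"fitted slope m = {m:.2f}, B = {B:.0f} MPa")
```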
342
Work Energy Theorem For a Variable Force
Sharath Gore, NEET / JEE lecturer at Vibrant Academy, Moodbidri (VAIL, call: 7411417028). Posted: 31 Dec 2021.

Transcript: Hi everybody, we will now prove the work-energy theorem for a variable force. The theorem states that the change in kinetic energy of a particle is equal to the work done on it by the net force.

To derive this, start from the expression for kinetic energy, K = (1/2) m v^2. Differentiate with respect to time to get the rate of change of kinetic energy: dK/dt = d/dt[(1/2) m v^2]. Since m is constant and the derivative of v^2 is 2v dv/dt, the factor of 2 cancels with the 1/2 and we get dK/dt = m v dv/dt. Now dv/dt is the acceleration a, so dK/dt = m a v. Since m a is the force F, and v can be written as dx/dt, this becomes dK/dt = F dx/dt. The dt on both sides cancels, so a very small change in kinetic energy is dK = F dx.

Now suppose the position changes from x_i to x_f, with the kinetic energy correspondingly changing from K_i to K_f. Integrating both sides, the integral of F dx between the limits x_i and x_f equals the integral of dK between the limits K_i and K_f. The integral of dK between those limits is K_f minus K_i, and the definite integral of force over displacement is, by definition, the work done. Applying the limits, K_f - K_i = W. Hence we have proved the work-energy theorem for a variable force. Thank you very much.
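As a quick numerical check (not part of the video), the theorem can be verified for a specific variable force. The sketch below assumes a spring-like force F(x) = -kx acting on a mass released from rest away from the origin, integrates the motion for a short time, and compares the change in kinetic energy with the accumulated work, the integral of F dx.

```python
import numpy as np

# Assumed example: mass m on a spring, F(x) = -k x (a force that varies with position).
m, k = 2.0, 5.0          # kg, N/m
x, v = 1.0, 0.0          # initial position (m) and velocity (m/s)
v0 = v
dt, steps = 1e-5, 50_000

work = 0.0
for _ in range(steps):
    F = -k * x
    # Semi-implicit Euler step for the motion.
    v += (F / m) * dt
    dx = v * dt
    x += dx
    work += F * dx       # accumulate W = integral of F dx along the path

delta_ke = 0.5 * m * v**2 - 0.5 * m * v0**2
print(f"work done by the force: {work:.6f} J")
print(f"change in kinetic energy: {delta_ke:.6f} J")   # the two agree to numerical accuracy
```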
343
On binding domains Martin Everaert Utrecht University Proceedings of the 12th International Conference on Head-Driven Phrase Structure Grammar Department of Informatics, University of Lisbon Stefan Müller (Editor) 2005 Stanford, CA: CSLI Publications pages 503–518 Everaert, Martin. 2005. On binding domains. In Stefan Müller (ed.), Proceedings of the 12th International Conference on Head-Driven Phrase Structure Grammar, Department of Informatics, University of Lisbon , 503–518. Stanford, CA: CSLI Publications. DOI: 10.21248/hpsg.2005.29. Abstract In this paper I want to explore reasons for replacing Binding Theory based on the anaphor-pronoun dichotomy by a Binding Theory allowing more domains restricting/defining anaphoric dependencies. This will, thus, have consequences for the partitioning of anaphoric elements, presupposing more types of ‘anaphors’/‘pronouns’ than standard Binding Theory offers us. 1. Introduction Mainstream generative accounts (Chomsky 1981; Pollard & Sag 1994; Manning & Sag 1999; Bresnan 2002, and Reinhart & Reuland 1993) sketch a very clear, uniform picture of anaphoric dependencies. Binding in the syntactic sense of the word is primarily limited to the predicational domain, formulated as in binding conditions A (cf. 1) and B (cf. 2): 1 (1) a. An anaphor is bound in its Governing Category. b. A locally a-commanded short-distance reflexive must be locally a-bound. c. A nuclear (reflexive) pronoun must be bound in the minimal nucleus that contains it. (2) a. A pronominal is free in its Governing Category. b. A pronoun must be locally a-free. c. A nonnuclear pronoun must be free in the minimal nucleus that contains it ‘Reflexives’ are subject to condition (1), i.e. they are referentially dependent upon a hierarchically superior NP (cf. 3a), and the antecedent must be found within a certain domain (cf. 3b). 2 I would like to thank the organizers of the workshop, António Branco and Manfred Sailer, and the editor of this volume, Stefan Müller, for giving me the opportunity to present my work, and their patience. Alexis Dimitriadis, Shakuntala Mahanta, Eric Reuland, Anca Sevcenco, Giorgos Spathas have contributed, in different ways, to this paper, for most without knowing it. 1Limiting myself to ‘condition A/B’, following Reinhart (1983). 2Anaphoric dependencies are indicated by italics. 504 (3) a. John ’s plans failed himself b. John thinks that Mary hates himself ‘Pronominals’ obey condition (2), the reverse from (1): whatever the reference of the pronoun may be, it is not able to take a co-argument for an antecedent. These standard generative binding conditions (cf. Everaert 2003 for acomparison of Binding Theories in several generative frameworks) describe recurrent patterns in the various languages of the world. Examples from Finnish (4a), Sakha (4b, personal communication Nadya Vinokurova), and Spanish (4c) illustrate that, in many languages, reflexives and pronominals are, indeed, in complimentary distribution: (4) a. Pekka näki itsensä / hänet ‘Pekka saw himself/him’ b. Misha bejetin/kinini taptyyr Misha himself/him loves ‘Misha loves himself/him’ c. Juan se/lo admira ‘Juan admires himself/him’ The examples in (5), from Italian, Dutch, Russian, and Icelandic, respectively, show that, in addition, reflexives must be locally bound, while pronominals allow non-local binding: (5) a. Gianni pensava che Maria si/lo ammirasse ‘Gianni thought that Maria admired him’ b. 
Jan vroeg mij voor zich/hem te werken Jan asked me for himself/him to work ‘John asked me to work for him’ c. Vanja dumaet to Maša uvažaet sebja/ego ‘Vanja thinks that Maša admires him’ d. Jón veit aδ María elskar sig/hann John knows that Maria loves-IND himself/ him ‘John knows that Maria loves him’ In all generative accounts (HPSG, LFG, P&P, etc.) there seems to be general agreement on the following properties being encoded in Binding Theory: (6) i. Reflexivization is local. 505 ii. A distinction must be drawn between two types of anaphoric element: anaphors (= reflexives and reciprocals) and pronouns. iii. Any anaphoric dependency that is non-local is either exceptional, marked or does not fall under Binding Theory proper. In other words, anaphor resolution (as it is used in the literature on discourse) is outside the scope of Binding Theory. In this paper I will focus on (6ii). However, it will become clear that this is only possible if we also address (6i). In other words, I will discuss: (7) i. the notion ‘domain’/’locality’. ii. the partitioning of elements that are sensitive to binding restrictions. It is important to observe that I will be guided by the principle in (8), which is inspired by a view, formulated in (9), on what syntax might be: (8) Binding Theory deals with those nominal expressions that encode their referential properties in the morpho-syntactic vocabulary (feature system) of a specific language. (9) “One of the prerequisites for attaining the goals of the Minimalist Program (MP) developed in Chomsky 1995, 2000, to appear, is to draw the boundaries of syntax in a principled way. The MP proposes that the computational system of human language (C HL ) reflects the combinatorial properties of a purely morpho-syntactic vocabulary.” Reuland (2001: 440) My starting point is that any grammatical feature that is morpho-syntactically encoded might be, in principle, be relevant for binding. Taking (8) as a fundamental principle will significantly widen the empirical scope of the Binding Theory. It defines it as an interface system, as discussed in Reuland (2001).Although what I propose is compatible with Reuland’s position, the focus is slightly different. Reuland (2001) is focused on the binding principles A and B, both part of syntax, replacing syntactic ‘identity derived by co-indexation’ from ‘identity derived by movement’. I am arguing that there might be reason to extend Binding Theory to discourse. 2. Partitioning of anaphoric elements Nominals are generally partitioned as follows (Pollard & Sag 1994): 506 (10) nominals pronouns nonpronouns anaphors pronominals reflexives reciprocals Since we generally accept that reflexives and reciprocals behave the same with respect to binding conditions, (10) is reduced to (10’), with the three binding conditions indicated: (10’) nominals pronouns nonpronouns anaphors pronominals | |A B CLet us, for the moment, focus on binding condition A (cf. 1). It restricts elements classified as ‘anaphors’ to be bound locally. And local is defined in several ways: 3 (11) ‘subcat-list’, ‘arg-structure’, ‘complete functional complex’, ‘predicate’, etc. Condition A, however, is not without exceptions. Quite early on it was noted that, cross-linguistically, there were many anaphors with antecedents essentially beyond the regular domain (Thráinsson 1976, Reis 1976, Inoue 1976, Yang 1983, Harbert 1983, and many others since). The examples in (12), Norwegian, Dutch, Japanese and Icelandic, respectively, illustrate this: (12) a. 
Jon bad oss hjelpe seg Jon asked us help himself ‘John asked us to help him’ b. Jan laat mij voor zich werken Jan made me for himself work ‘John made me work for him’ 3A very different take on locality is the assumption that anaphora domains and NP-movement domains coincide (Reuland 2001, Hornstein 2001). 507 c. Bill-wa John-ga zibun-o seme-ta to omot-ta Bill John himself blamed that thought ‘Bill thought that John blamed him’ d. Jón segir a δ Péturi raki sig á hverjum degi Johnn says that Peter shave himself at every day ‘John says that Peter shaves him every day’ Following the terminology of Koster & Reuland (1991) we will classify the exceptions to binding condition A in (12a,b) as medium distance binding, and those in (12c,d) as long distance binding. Medium distance is reflexivization that is non-local, but the non-locality is restricted to a reanalysis/small clause domain. The phenomenon of long distance binding, a binding relation between an anaphor and a non co-argument antecedent, is tackled in different ways: (13) Long distance binding is: a. reduced to locality, and thus condition A, through LF-movement: Pica (1984), Cole & Sung (1994), a.o. b. relegated to non-syntactic binding: Reinhart & Reuland (1991, 1993), Pollard & Sag (1994), Reuland (2001), a.o. c. accounted for by introduction of a fourth binding condition, principle Z: cf. (14) for a formulation of the principle (14) Principle Z (Xue et al. 1994, and others; formulation from Branco 2005) An o-commanded long-distance reflexive must be o-bound. It is this fourth binding condition, principle Z, that allows Branco & Marrafa (1997) and Branco (2005) to explore the possibility of deriving the binding conditions from a more general principle of quantification structure. Branco (2005) argues that the empirical generalizations captured in the definition of the four binding principles, conditions A,B,C and principle Z, are “just the effect of the specific quantificational force of the anaphors lexically encoded in their semantic values” (Branco 2005: 166). So, the question whether the four-way partitioning of binding conditions is motivated, and linked to well-motivated partitioning of lexical elements, becomes an important one. In the way Principles A and Z are formulated a distinction is made between short-distance and long-distance binding. The question, of course, is whether such a distinction is motivated. And if so, could it be that this distinction is derived from other principles of grammar. Many have argued that it could be derived from the morphology of anaphoric elements. Pica (1985) argued that long distance anaphors are heads, short distance anaphors are ‘complex’. 508 Everaert (1986) argued that the fact that certain anaphors require strict local binding follows from their morpho-syntactic make-up. 4 Alternatively, we could derive the distinction between short distance anaphors and long distance anaphors from a well-defined feature specification. Everaert (1991) argues that short distance anaphors could be seen as +A,-P specified, to be distinguished from +A,+P long distance anaphors. Defining the notions ‘governing category’/ ’minimal governing category’ relative to the A(naphor)- and P(ronominal)-features, respectively, Everaert derives that <+A,+P> reflexives, bound in some governing category and in their minimal governing category, are necessarily locally bound, while <+A,-P> reflexives, bound in some governing category and not bound in their minimal governing category, are not. 
I will assume that, indeed, something like principle Z exists, but that it is, perhaps, the only binding principle in the traditional sense of the word that exists. Following Everaert (1986) I would like to suggest that binding condition A is, a priori, non-local, but limited to the sentence-internal domain. 3. Domains What would be a priori domains relevant for anaphoric dependencies? The first distinction seems to be the distinction between the domain in which syntax is relevant, sentence grammar (cf. 15a,b), and the domain where syntax is only indirectly relevant, discourse (cf. 15c,d). Within sentence grammar we might make a distinction between the domain in which predicate-based grammatical processes like passive apply (cf. 15a) versus the domain in which processes like wh-movement apply (cf. 15b). At the discourse level we distinguish discourse (15c) from deixis (cf. 15d), the latter being the more ‘local’ option in discourse. (15) For y = reflexive, x = antecedent of y:a. (complex) predicate/clause ...........[ CP/IP ... x... y...] ............. b. sentence [CP .... x… [ CP ..... y....] ...........] c. deixis [CP ..... y....] ................. x............................ d. discourse [CP .... x...] [ CP ..........] [ CP .... y...] 4Whether or not such generalizations hold true is not at issue here (cf. Everaert 1991). 509 In the Principles and Parameters theory, Lexical-Functional Grammar, Head-Phrase Structure Grammar, Binding Theory is focused on syntactic binding, limited to the predicational domain. Reflexives encode referential dependencies in the clausal domain, i.e. (15a). In all Binding Theories that I am acquainted with, with the exception of Reflexivity, there is room for debate whether (15b) could still be taken as a possible domain for regular ‘syntactic’ binding. But for all Binding Theories mentioned above, reference outside the sentence, i.e. (15c,d) is forbidden ground for anaphors (cf. Kang 1988 for discussion). For the domain of discourse, we exclusively have elements called pronouns, and the binding conditions have nothing to say about anaphoric dependencies in this domain. Is there a reason to assume that anaphora are partitioned this way? In other words, is there reason to assume that we need more than the simple anaphor (for 15a) – pronoun (for 15b,c,d) distinction of BT? If we look at what defines an element as an ‘anaphor’ it is not straightforward that the anaphoric dependencies in (15a) and (15b) would be morpho-syntactically encoded differently from those in (15c) and (15d). It is not evident that a definition of anaphors rooted in Chomsky (1986) and Keenan (1988) according to which anaphors are referentially defective NPs predicts that reflexives could, for instance, never be taken as discourse anaphora (15d). 5 Only if reflexive anaphors were necessarily interpreted as bound variables, subject to a c-command/o-command/ syntactic rank restriction, the predicted discourse restrictions on reflexive anaphors would follow naturally from whatever explains the (un)grammaticality of the examples in (16): (16) a. Every ex-husband feared that he would be neglected b. Because she hated every ex-husband , Mary would certainly tell Zelda why she left him c. Every ex-husband feared that I would be neglected. He …In other words, we generally assume that the preferred domain for a ‘reflexive’ is (15a). There is no a priori reason that this should be the case, but most languages (like Dutch, Spanish, Russian, etc.) 
mentioned above offer us this as the primary distinction. In a sense, English is rather atypical, because its reflexive anaphor can be used in all domains. That is, it is often used in more structural configurations than we might consider calling reflexive environments: 5It has been observed that in various languages reflexives are used as honorifics. See Siewierska (2004: 224-228) for an overview on this particular, deictic, use of reflexives. 510 (17) a. Predicate: Mary thinks that [ John saw himself ]b. Sentence: And that was exactly it, he thought. [ He really didn’t care too much [what happened to himself ]c. Deixis: There were five tourists in the room apart from myself d. Discourse: [Whom he [=Philip] was supposed to be fooling, he couldn’t imagine]. [Not the twins, surely, because Désirée, in the terrifying way of progressive American parents, believed in treating children like adults] and [had undoubtedly explained to them the precise nature of her relationship with himself ]. With the fourfold distinction given in (15), we could, in principle, expect a language to make the following partitioning, giving every domain its unique identifiable anaphoric element: (18) a. anaphor 1 for (15a) b. anaphor 2 for (15b) c. pronoun 3 for (15c) d. pronoun 4 for (15d) As far as I can tell there is no language that straightforwardly offers us this picture - four different forms - but there are many languages that offer a morpho-syntactic partitioning of anaphoric elements that is clearly different from the simple anaphor-pronoun distinction. In the following section I will give a very limited sketch of some of the diversity one may find. 4. Anaphoric elements and their domains The literature gives us overwhelming evidence that most/all languages seem to have an anaphor 1-type. To give an example, take the Norwegian reflexive seg selv , which contrary to seg , can only be bound in its most immediate domain, as is shown by the contrast between (12a), here repeated, and (19): (12) a. Jon bad oss hjelpe seg ‘John asked us to help him’ (19) Jon bad oss hjelpe seg selv ‘John asked us to help himself’ 511 Likewise, reciprocals seem to be primarily clause-bound, as has been observed in Yang (1981). 6 This is illustrated for Kannada in (20) (Amritavalli 2000: 67,89): (20) a. [shyaama tannannu i priitisuttaane anta] raama i heeLidanu Shyama self acc loves that Rama said ‘Rama said that Shyama loves him (=Rama)’ b. makkaLu i [naanu obbaranna obbaru i baide anta] heeLidaru children I one acc one nom scolded that said ‘The children said that I scolded one another’ But what about the other anaphor/pronoun types that could, potentially, exist? A language like Tamil gives a good illustration of the point I want to make. 7 4.1 Tamil Tamil, as described in (Lehmann 1989, Annamalai 2000), has two pronouns referring to 3 rd person antecedents: avan (that one, he; 3 rd Person, Masculine, Accusative, -Proximate) and ivan (this one, he; 3rd Person, Masculine, Accusative, +Proximate). In addition Tamil has a pronominal form taan (3 rd Person, -Plural, not specified for gender), which could be taken as the equivalent of English himself .(21-22) illustrate the binding properties of taan : taan cannot be discourse bound (cf 21), but intra-sentential reference is not restricted to the local domain (cf. 22a,b) (21) a. kamalaa avan tann-ai veru-kkir-aan en-ru ninai-tt-aa Kamala he self-acc hate-pres-3sm say-vbp think-pst-3sf ‘Kamala thought that he hated him(=Kumaar)’ b. 
kumaar kaDekki poonan; ange tanakku oNNum piDikkale Kumar shop to go-pst-agr there self to anything like not ‘Kumar went to the shop; he did not like anything there.’ (22) a. kamalaa avan tann-ai veru-kkir-aan en-ru ninai-tt-aa Kamala he he-acc hate-pres-3sm say-vbp think-pst-3sf ‘Kamala thought that he hated himself’ b. kamalaa avan tann-ai veru-kkir-aan en-ru ninai-tt-aa Kamala he she-acc hate-pres-3sm say-vbp think-pst-3sf ‘Kamala thought that he hated her’ 6Cf. Everaert 2005 for a discussion of this generalization. 7A similar partitioning of anaphopric elements and similar distributional facts hold for Malayalam, Bangla, Telugu (cf. Jayaseelan & Haripasad 2001). 512 In Lehmann (1989) taan is described as a 4th person pronoun: “the occurrence of taan in a reflexive construction is only one of its occurences and there is, therefore, no justification to call it a reflexive pronoun […] just because it can occur in a reflexive construction.” (p.97) In other words, because taan is not limited to the smallest domain (21a), but is regularly used in a wider domain (21b), like an anaphor 2 type, Lehmann does not want to call it a reflexive, contrary to Annamalai (2000). In some cases, however, taan seems to behave like a true anaphor 1-type, necessarily clause bound, as is shown in (23): (23) a. kumaar umaa tanne tiTTikiTTaaNNu sonnaan Kumar Uma self-acc scold-pst-VR-pst-agr-that say-pst-agr ‘Kumar said that Uma scolded himself’ b. kumaar umaa tanne tiTTikiTTaaNNu sonnaan Kumar Uma self-acc scold-pst-VR-pst-agr-that say-pst-agr ‘Kumar said that Uma scolded himself’ Note, however, that it is the verbal auxiliary kiDu reflexive marking the embedded predicate, resulting in local binding (23a), blocking long-distance binding (cf. 23b). The pronoun avan is the designated element for discourse binding (cf. 24a); local binding is excluded (24b), unless modified by an emphasis marker (24c): (24) a. kumaar kaDekki poonan; ange avanukku oNNum piDikkale Kumar shop to go-pst-agr there he to anything like not ‘Kumar went to the shop; he did not like anything there.’ b. kumaar avan-ai veru-kkir-aan Kumar he-Acc hate-pres-3sm Kumar hates himself c. kumaar avaneyee verukaan Kumar he-acc-emph hate-prst-agr ‘Kumar i hates himself i/him i’The differences/similarities between the proximate/obviative pronouns becomes clear in (25-26). (25) shows that both pronouns can be used deictically, but that for sentence internal reference ivan , the proximate element, is excluded: (25) a. ivan en tampi 513 (this)-he I(OBL) brother ‘He is my brother’ b. avan en tampi (that)-he I(OBL) brother ‘He is my brother’ (26) a. kumaar va-nt-aal naan avan-iTam collu-v-een Kumar come-cond I he-loc say-fu-1s ‘If Kumar comes I will tell him’ b. kumaar va-nt-aal naan ivan-iTam collu-v-een Kumar come-cond I he-loc say-fu-1s ‘If Kumar comes I will tell him’ Summarizing we can say that taan is an anaphor 2 element that is used for sentence internal reference (cf 15b); ivan is a pronoun 3 element, used for deictic contexts only (15d) 8; avan can be used for deixis, discourse binding and sentence internal binding (15b,c,d). Strict local binding (cf 15a) is only realized when the anaphor 2 element taan is combined with a verbal reflexive marker, making it a reflexively marked predicate in the sense of Reinhart & Reuland (1993). 
4.2 Roumenian and Mupun There are other languages that, like, Tamil, seem to have a anaphor 2 element, whose distribution is defined as in (15b): the ‘reflexive’ sine in Roumenian (Sevcenco 2004) and the ‘logophoric pronoun’ émì in Fon (Kinyalolo 1993) and ì in Mupun (Frajzyngier 1997). 9 I will limit my brief discussion here to Roumenian and Mupun. The distribution of the Romanian anaphor sine (Sevcenco 2004) shows that it can be bound in both local and long distance contexts, as in (27), which involves the occurrence of sine in a clitic doubling structure, and (28), which is ambiguous between the reading in which Alex is the antecedent of sine and another reading in which George is the antecedent: 10 (27) Directorul se admir pe sine. Director-the se REFL CL ACC admires 3SG pe PREP ACC self. ‘The director admires himself’. 8All languages seem to morpho-syntactically encode indexicals like I, we, you of the pronoun 4type. 9The fourth person pronouns in Mabaan as described in Andersen (1999) might offer another example. 10 What is interesting is that Romanian seems to have no ‘logophoricity’ constraints, in the semantic sense. But does have blocking effects. 514 (28) George vrea ca Alex s conteze on sine. George wants that COMP SUBJ Alex s SUBJ count on self. ‘George wants that Alex count on Alex/George’. Logophoric systems are, generally, also defined by the domain given in (15b). The case of Mupun (Frajzyngier 1997) illustrates this: (29) a. wu/wa/mo sat n ta ì/ è/  ee n-jos he/she/they say COMP stop he/she/they stay prep-Jos ‘He/she/they i said that he/she/they i stopped over in Jos’ b. wu/wa/mo sat n ta wù/wà/wà ee n-jos he/she/they say COMP stop he/she/they stay prep-Jos ‘He/she/they i said that he/she/they j stopped over in Jos’ In (29a) the logophoric pronouns refer, necessarily, to the matrix subject. If one want to encode sentence external reference a regular pronoun is chose, as illustrated in (29b). 5. Conclusion In the preceding section I have given some evidence for a richer classification of anaphoric elements that the anaphor-pronoun distinction. This is based on the assumption that we should distinguish four types of domains, as sketched in (15). Many languages indeed reflect these domains by morpho-syntactic encoding domain with dedicated anaphoric elements. The consequences for a proper formulation of the Binding theory are substantial. Given the postulation of four domains of anaphoric dependencies, and four anaphoric types, we might also need four binding conditions. However, not in the traditional sense of the word. Anaphoric dependencies outside the scope of sentence grammar I leave undiscussed here. But, clearly, notions like Source, Self and Pivot, as introduced in Sells (1987) will play a crucial role. For sentence grammar we, at least, need the equivalent of Principle Z, for instance: (30) An anaphor is bound (=c-commanded by a co-indexed element) This condition applies to any element that is standardly called a reflexive/ reciprocal, but it also holds for logophors, or ‘4 th person’pronouns. This condition gives no domain restriction other than that the antecedent must be a sentence internal c-commanding NP. The fact that certain anaphors have a 515 restricted choice of antecedents, a co-argument, is the result of reflexive marking of the predicate of which the anaphors is an argument. Reflexive marking is either overtly visible through verbal morphology, or covertly through incorporation of a reflexive-marker (cf. 
Reinhart & Reuland 1991, Anagnostopoulou & Everaert 1999), generally morpho-syntactically encoded on the anaphoric element itself. One could take (31) as a binding condition, (31) A reflexive marked predicate must be reflexive but this condition is different from (30) in that it not directly refers to the anaphoric element itself. References Anagnostopoulou, E. and M. Everaert. 1999. Towards a more complete typology of anaphoric expressions. Linguistic Inquiry 30.1, 97-118 Andersen, T. 1999 Anti-logophoricity and indirect mode in Mabaan, Studies in Language 23, 499-530. Annamalai, E . 2000. Lexical Anaphors and pronouns in Tamil. In Lexical pronouns an anaphors in selected South Asian languages: a principled typology , ed. B. Lust, K. Wali, J. Gair and K.V. Subburao. Mouton De Gruyter. Branco, António and Palmira Marrafa. 1999. Long-distance Reflexives and the Binding Square of Opposition. In Gert Webelhuth, Jean-Pierre Koening, and Andreas Kathol, editors, Lexical and Constructional Aspects of Linguistics Explanation . CSLI Publications, Stanford, chapter 11, pages 163–177. Branco, A. 2005. Anaphoric Constraints and Dualites in the Semantics of Nominals, Journal of Logic, Language and Information , 14, 149-171. Bresnan, J. 2002. Lexical-Functional Syntax . Oxford: Blackwell. Chomsky, N. 1981. Lectures on Government and Binding. Dordrecht: Foris Chomsky, N. 1986. Knowledge of Language: its nature, origin and use , New York: Praeger. Cole, P., G. Hermon and C.-T. J. Huang. 2001. Long-distance reflexives , Syntax and Semantics 33, San Diego, Academic Press. Everaert, M. 1986. The Syntax of Reflexivization. Foris, Dordrecht. Everaert, M. 1991. Contextual Determination of the Anaphor/Pronominal Distinction. In J. Koster and E. Reuland (eds.), Long-distance Anaphora . 49-76). Cambridge, UK: Cambridge University Press. 516 Everaert, M. 2003. Binding Theories in the Generative Research Tradition, Research in Language 1, 33-52. Everaert, M. 2005. Long-Distance Reciprocals, ms Utrecht University Frajzyngier, Z. 1997. Pronouns and agreement: systems interaction in the coding of reference. Atomism and binding . H. Bennis, P. Pica, and J. Rooryck (eds). Dordrecht: Foris, 115-140. Harbert, W. 1983. On the Definition of Binding Domains. In Proceedings of the West-Coast Conference of Formal Linguistics 2 , 102-113. Hornstein, N. 2001. Move! A minimalist theory of construal . Blackwell, 2001. Inoue, K. 1976, Reflexivization: an interpretive approach, In: M. Shibatani (ed.) Syntax & Semantics 5, Academic Press, New York. Jayaseelan, K.A. & M. Haripasad (2001) Deixis in Pronouns and Noun Phrases, Linguistic Analysis 31, 132-149. Kang, B.-M. 1988. Unbounded reflexives, Linguistics & Philosophy 13, 415-456. Keenan, E. 1988. On Semantics and the Binding Theory, In : J. Hawkins (ed.) Explaining language universals . Oxford: Blackwell, 105-144. Kinyalolo, K. 1993. The logophoric pronoun émi in Fon as an LF operator/ anaphor. Proceedings of NELS 23 . Amherst: GSLA, 223-237. Koster, J. & E. Reuland (eds) 1991. Long-distance Anaphora , Cambridge University Press, Cambridge. Lehmann, T. 1989. A Grammar of Modern Tamil . Pondicherry: Pondicherry Publications. Pica, P. 1985. Subject, Tense and Truth: Towards a Modular Approach to Binding. In J. Guéron, H. G. Obenauer, and J.-Y. Pollock (eds.), Grammatical Representation . Foris, Dordrecht, pp. 259-291. Pollard, C., and I. Sag. 1994. Head-Driven Phrase Structure Grammar . Stanford: CSLI, and Chicago: The University of Chicago Press. Reinhart, T. 
Reinhart, T. 1983. Anaphora and Semantic Interpretation. London & Sydney: Croom Helm.
Reinhart, T. and E. Reuland. 1991. Anaphors and logophors: an argument structure perspective. In J. Koster and E. Reuland (eds.), Long-Distance Anaphora. Cambridge: Cambridge University Press, 283-321.
Reinhart, T. and E. Reuland. 1993. Reflexivity. Linguistic Inquiry 24.4, 657-720.
Reis, M. 1976. Reflexivierung in deutschen A.c.I.-Konstruktionen. Ein transformationsgrammatisches Dilemma. Papiere zur Linguistik 9, 5-82.
Reuland, E. and M. Everaert. 2001. Deconstructing binding. In M. Baltin and C. Collins (eds.), The Handbook of Contemporary Syntactic Theory. Oxford: Blackwell, 634-670.
Sells, P. 1987. Aspects of logophoricity. Linguistic Inquiry 18, 445-479.
Sevcenco, A. 2004. Long distance Romanian anaphors and the blocking effect. Talk presented at Discourse Anaphora and Anaphora Resolution, S. Miguel, Azores, September 23-24, 2004.
Siewierska, A. 2004. Person. Cambridge: Cambridge University Press.
Thráinsson, H. 1976. Reflexives and subjunctives in Icelandic. In Proceedings of NELS 6, 225-239.
Yang, D.-W. 1984. The extended binding theory of anaphors. Theoretical Linguistic Research 1, 195-218.
Pseudoforest
Graph with at most one cycle per component.

In graph theory, a pseudoforest is an undirected graph in which every connected component has at most one cycle. That is, it is a system of vertices and edges connecting pairs of vertices, such that no two cycles of consecutive edges share any vertex with each other, nor can any two cycles be connected to each other by a path of consecutive edges. A pseudotree is a connected pseudoforest. The names are justified by analogy to the more commonly studied trees and forests. (A tree is a connected graph with no cycles; a forest is a disjoint union of trees.)

Gabow and Tarjan attribute the study of pseudoforests to Dantzig's 1963 book on linear programming, in which pseudoforests arise in the solution of certain network flow problems. Pseudoforests also form graph-theoretic models of functions and occur in several algorithmic problems. Pseudoforests are sparse graphs – their number of edges is linearly bounded in terms of their number of vertices (in fact, they have at most as many edges as they have vertices) – and their matroid structure allows several other families of sparse graphs to be decomposed as unions of forests and pseudoforests. The name "pseudoforest" comes from Picard & Queyranne (1982).

Definitions and structure

We define an undirected graph to be a set of vertices and edges such that each edge has two vertices (which may coincide) as endpoints. That is, we allow multiple edges (edges with the same pair of endpoints) and loops (edges whose two endpoints are the same vertex). A subgraph of a graph is the graph formed by any subsets of its vertices and edges such that each edge in the edge subset has both endpoints in the vertex subset. A connected component of an undirected graph is the subgraph consisting of the vertices and edges that can be reached by following edges from a single given starting vertex. A graph is connected if every vertex or edge is reachable from every other vertex or edge. A cycle in an undirected graph is a connected subgraph in which each vertex is incident to exactly two edges, or is a loop.

A pseudoforest is an undirected graph in which each connected component contains at most one cycle. Equivalently, it is an undirected graph in which each connected component has no more edges than vertices. The components that have no cycles are just trees, while the components that have a single cycle within them are called 1-trees or unicyclic graphs. That is, a 1-tree is a connected graph containing exactly one cycle. A pseudoforest with a single connected component (usually called a pseudotree, although some authors define a pseudotree to be a 1-tree) is either a tree or a 1-tree; in general a pseudoforest may have multiple connected components as long as all of them are trees or 1-trees.

If one removes from a 1-tree one of the edges in its cycle, the result is a tree. Reversing this process, if one augments a tree by connecting any two of its vertices by a new edge, the result is a 1-tree; the path in the tree connecting the two endpoints of the added edge, together with the added edge itself, form the 1-tree's unique cycle.
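To make the definition concrete, here is a minimal Python sketch (my own illustration, not part of the article; the class and function names are invented) that tests whether a multigraph is a pseudoforest by recording, for each connected component, whether it already contains a cycle. Self-loops and parallel edges are allowed, matching the definition above.

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
        self.has_cycle = [False] * n   # does this component already contain a cycle?

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

def is_pseudoforest(n, edges):
    # True if the multigraph on vertices 0..n-1 has at most one cycle per component.
    dsu = DSU(n)
    for u, v in edges:
        ru, rv = dsu.find(u), dsu.find(v)
        if ru == rv:
            # The edge closes a cycle (a self-loop also lands here).
            if dsu.has_cycle[ru]:
                return False            # a second cycle in one component
            dsu.has_cycle[ru] = True
        else:
            # Merging two components; the merged component has a cycle
            # if either part already had one.
            dsu.parent[ru] = rv
            dsu.has_cycle[rv] = dsu.has_cycle[rv] or dsu.has_cycle[ru]
    return True

# A triangle with a pendant vertex is a 1-tree, hence a pseudoforest:
print(is_pseudoforest(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))               # True
# Adding a chord creates a second cycle in the same component:
print(is_pseudoforest(4, [(0, 1), (1, 2), (2, 0), (2, 3), (0, 3)]))       # False

An edge whose endpoints already lie in the same component closes a cycle; a pseudoforest tolerates at most one such edge per component, which is exactly what the has_cycle flags track.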
If one augments a 1-tree by adding an edge that connects one of its vertices to a newly added vertex, the result is again a 1-tree, with one more vertex; an alternative method for constructing 1-trees is to start with a single cycle and then repeat this augmentation operation any number of times. The edges of any 1-tree can be partitioned in a unique way into two subgraphs, one of which is a cycle and the other of which is a forest, such that each tree of the forest contains exactly one vertex of the cycle.

Certain more specific types of pseudoforests have also been studied:
- A 1-forest, sometimes called a maximal pseudoforest, is a pseudoforest to which no more edges can be added without causing some component of the graph to contain multiple cycles. If a pseudoforest contains a tree as one of its components, it cannot be a 1-forest, for one can add either an edge connecting two vertices within that tree, forming a single cycle, or an edge connecting that tree to some other component. Thus, the 1-forests are exactly the pseudoforests in which every component is a 1-tree.
- The spanning pseudoforests of an undirected graph G are the pseudoforest subgraphs of G that have all the vertices of G. Such a pseudoforest need not have any edges, since for example the subgraph that has all the vertices of G and no edges is a pseudoforest (whose components are trees consisting of a single vertex).
- The maximal pseudoforests of G are the pseudoforest subgraphs of G that are not contained within any larger pseudoforest of G. A maximal pseudoforest of G is always a spanning pseudoforest, but not conversely. If G has no connected components that are trees, then its maximal pseudoforests are 1-forests, but if G does have a tree component, its maximal pseudoforests are not 1-forests. Stated precisely, in any graph G its maximal pseudoforests consist of every tree component of G, together with one or more disjoint 1-trees covering the remaining vertices of G.

Directed pseudoforests

Versions of these definitions are also used for directed graphs. Like an undirected graph, a directed graph consists of vertices and edges, but each edge is directed from one of its endpoints to the other endpoint. A directed pseudoforest is a directed graph in which each vertex has at most one outgoing edge; that is, it has outdegree at most one. A directed 1-forest – most commonly called a functional graph (see below), sometimes maximal directed pseudoforest – is a directed graph in which each vertex has outdegree exactly one. If D is a directed pseudoforest, the undirected graph formed by removing the direction from each edge of D is an undirected pseudoforest.

Number of edges

Every pseudoforest on a set of n vertices has at most n edges, and every maximal pseudoforest on a set of n vertices has exactly n edges. Conversely, if a graph G has the property that, for every subset S of its vertices, the number of edges in the induced subgraph of S is at most the number of vertices in S, then G is a pseudoforest. 1-trees can be defined as connected graphs with equally many vertices and edges.
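The directed definitions above reduce to a simple out-degree count. A small sketch, again with invented helper names, assuming edges are given as (source, target) pairs:

from collections import Counter

def is_directed_pseudoforest(edges):
    # out-degree at most one everywhere
    outdegree = Counter(u for u, _ in edges)
    return all(d <= 1 for d in outdegree.values())

def is_functional_graph(n, edges):
    # maximal directed pseudoforest: out-degree exactly one at every vertex
    outdegree = Counter(u for u, _ in edges)
    return all(outdegree[v] == 1 for v in range(n))

edges = [(0, 1), (1, 2), (2, 0), (3, 1)]
print(is_directed_pseudoforest(edges))             # True
print(is_functional_graph(4, edges))               # True
print(is_directed_pseudoforest(edges + [(3, 2)]))  # False: vertex 3 has two outgoing edges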
Moving from individual graphs to graph families, if a family of graphs has the property that every subgraph of a graph in the family is also in the family, and every graph in the family has at most as many edges as vertices, then the family contains only pseudoforests. For instance, every subgraph of a thrackle (a graph drawn so that every pair of edges has one point of intersection) is also a thrackle, so Conway's conjecture that every thrackle has at most as many edges as vertices can be restated as saying that every thrackle is a pseudoforest. A more precise characterization is that, if the conjecture is true, then the thrackles are exactly the pseudoforests with no four-vertex cycle and at most one odd cycle.

Streinu and Theran generalize the sparsity conditions defining pseudoforests: they define a graph as being (k,l)-sparse if every nonempty subgraph with n vertices has at most kn − l edges, and (k,l)-tight if it is (k,l)-sparse and has exactly kn − l edges. Thus, the pseudoforests are the (1,0)-sparse graphs, and the maximal pseudoforests are the (1,0)-tight graphs. Several other important families of graphs may be defined from other values of k and l, and when l ≤ k the (k,l)-sparse graphs may be characterized as the graphs formed as the edge-disjoint union of l forests and k − l pseudoforests.

Almost every sufficiently sparse random graph is a pseudoforest. That is, if c is a constant with 0 < c < 1/2, and P_c(n) is the probability that choosing uniformly at random among the n-vertex graphs with cn edges results in a pseudoforest, then P_c(n) tends to one in the limit for large n. However, for c > 1/2, almost every random graph with cn edges has a large component that is not unicyclic.

Enumeration

A graph is simple if it has no self-loops and no multiple edges with the same endpoints. The number of simple 1-trees with n labelled vertices is

∑_{k=3}^{n} (n − 1)!/(n − k)! · n^(n−k)/2,

where k ranges over the possible cycle lengths. The values for n up to 300 can be found in sequence OEIS: A057500 of the On-Line Encyclopedia of Integer Sequences. The number of maximal directed pseudoforests on n vertices, allowing self-loops, is n^n, because for each vertex there are n possible endpoints for the outgoing edge. André Joyal used this fact to provide a bijective proof of Cayley's formula, that the number of undirected trees on n nodes is n^(n−2), by finding a bijection between maximal directed pseudoforests and undirected trees with two distinguished nodes. If self-loops are not allowed, the number of maximal directed pseudoforests is instead (n − 1)^n.

Graphs of functions

Directed pseudoforests and endofunctions are in some sense mathematically equivalent. Any function ƒ from a set X to itself (that is, an endomorphism of X) can be interpreted as defining a directed pseudoforest which has an edge from x to y whenever ƒ(x) = y. The resulting directed pseudoforest is maximal, and may include self-loops whenever some value x has ƒ(x) = x. Alternatively, omitting the self-loops produces a non-maximal pseudoforest. In the other direction, any maximal directed pseudoforest determines a function ƒ such that ƒ(x) is the target of the edge that goes out from x, and any non-maximal directed pseudoforest can be made maximal by adding self-loops and then converted into a function in the same way. For this reason, maximal directed pseudoforests are sometimes called functional graphs. Viewing a function as a functional graph provides a convenient language for describing properties that are not as easily described from the function-theoretic point of view; this technique is especially applicable to problems involving iterated functions, which correspond to paths in functional graphs.
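The enumeration claims above can be cross-checked by brute force for small n. The script below is my own verification sketch, not part of the article: it counts the connected simple graphs on n labelled vertices with exactly n edges (by the edge-count characterization above, these are exactly the 1-trees) and compares the result with the closed-form sum; for n = 5 both evaluate to 222, the value listed in OEIS A057500.

from itertools import combinations
from math import factorial

def connected(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {0}, [0]
    while stack:
        x = stack.pop()
        for y in adj[x] - seen:
            seen.add(y)
            stack.append(y)
    return len(seen) == n

n = 5
possible_edges = list(combinations(range(n), 2))
# A connected simple graph on n vertices with exactly n edges has exactly one cycle,
# so counting such graphs counts the labelled 1-trees.
brute_force = sum(1 for es in combinations(possible_edges, n) if connected(n, es))
closed_form = sum(factorial(n - 1) // factorial(n - k) * n ** (n - k) // 2
                  for k in range(3, n + 1))
print(brute_force, closed_form)   # 222 222, the n = 5 entry of OEIS A057500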
Cycle detection, the problem of following a path in a functional graph to find a cycle in it, has applications in cryptography and computational number theory, as part of Pollard's rho algorithm for integer factorization and as a method for finding collisions in cryptographic hash functions. In these applications, ƒ is expected to behave randomly; Flajolet and Odlyzko study the graph-theoretic properties of the functional graphs arising from randomly chosen mappings. In particular, a form of the birthday paradox implies that, in a random functional graph with n vertices, the path starting from a randomly selected vertex will typically loop back on itself to form a cycle within O(√n) steps. Konyagin et al. have made analytical and computational progress on graph statistics.

Martin, Odlyzko, and Wolfram investigate pseudoforests that model the dynamics of cellular automata. These functional graphs, which they call state transition diagrams, have one vertex for each possible configuration that the ensemble of cells of the automaton can be in, and an edge connecting each configuration to the configuration that follows it according to the automaton's rule. One can infer properties of the automaton from the structure of these diagrams, such as the number of components, length of limiting cycles, depth of the trees connecting non-limiting states to these cycles, or symmetries of the diagram. For instance, any vertex with no incoming edge corresponds to a Garden of Eden pattern and a vertex with a self-loop corresponds to a still life pattern.

Another early application of functional graphs is in the trains used to study Steiner triple systems. The train of a triple system is a functional graph having a vertex for each possible triple of symbols; each triple pqr is mapped by ƒ to stu, where pqs, prt, and qru are the triples that belong to the triple system and contain the pairs pq, pr, and qr respectively. Trains have been shown to be a powerful invariant of triple systems although somewhat cumbersome to compute.

Bicircular matroid

A matroid is a mathematical structure in which certain sets of elements are defined to be independent, in such a way that the independent sets satisfy properties modeled after the properties of linear independence in a vector space. One of the standard examples of a matroid is the graphic matroid in which the independent sets are the sets of edges in forests of a graph; the matroid structure of forests is important in algorithms for computing the minimum spanning tree of the graph. Analogously, we may define matroids from pseudoforests.

For any graph G = (V,E), we may define a matroid on the edges of G, in which a set of edges is independent if and only if it forms a pseudoforest; this matroid is known as the bicircular matroid (or bicycle matroid) of G. The smallest dependent sets for this matroid are the minimal connected subgraphs of G that have more than one cycle, and these subgraphs are sometimes called bicycles. There are three possible types of bicycle: a theta graph has two vertices that are connected by three internally disjoint paths, a figure 8 graph consists of two cycles sharing a single vertex, and a handcuff graph is formed by two disjoint cycles connected by a path. A graph is a pseudoforest if and only if it does not contain a bicycle as a subgraph.
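Returning to the cycle-detection application mentioned above, the classic tortoise-and-hare method of Floyd finds the tail length and cycle length of the rho-shaped path traced by an iterated function. The sketch below is a standard textbook formulation rather than anything specific to the sources cited here, and the sample function and starting point are arbitrary illustrations.

def floyd(f, x0):
    # Return (tail_length, cycle_length) of the path x0, f(x0), f(f(x0)), ...
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    # Find the start of the cycle (the tail length mu).
    mu, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    # Measure the cycle length lam.
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return mu, lam

f = lambda x: (x * x + 1) % 255
print(floyd(f, 3))   # prints the tail length and cycle length of the orbit of 3 under f

For a random mapping on n elements, the birthday-paradox estimate quoted above says this loop is expected to finish after O(√n) evaluations of ƒ.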
Forbidden minors

Forming a minor of a pseudoforest by contracting some of its edges and deleting others produces another pseudoforest. Therefore, the family of pseudoforests is closed under minors, and the Robertson–Seymour theorem implies that pseudoforests can be characterized in terms of a finite set of forbidden minors, analogously to Wagner's theorem characterizing the planar graphs as the graphs having neither the complete graph K5 nor the complete bipartite graph K3,3 as minors. As discussed above, any non-pseudoforest graph contains as a subgraph a handcuff, figure 8, or theta graph; any handcuff or figure 8 graph may be contracted to form a butterfly graph (five-vertex figure 8), and any theta graph may be contracted to form a diamond graph (four-vertex theta graph), so any non-pseudoforest contains either a butterfly or a diamond as a minor, and these are the only minor-minimal non-pseudoforest graphs. Thus, a graph is a pseudoforest if and only if it does not have the butterfly or the diamond as a minor. If one forbids only the diamond but not the butterfly, the resulting larger graph family consists of the cactus graphs and disjoint unions of multiple cactus graphs. More simply, if multigraphs with self-loops are considered, there is only one forbidden minor, a vertex with two loops.

Algorithms

An early algorithmic use of pseudoforests involves the network simplex algorithm and its application to generalized flow problems modeling the conversion between commodities of different types. In these problems, one is given as input a flow network in which the vertices model each commodity and the edges model allowable conversions between one commodity and another. Each edge is marked with a capacity (how much of a commodity can be converted per unit time), a flow multiplier (the conversion rate between commodities), and a cost (how much loss or, if negative, profit is incurred per unit of conversion). The task is to determine how much of each commodity to convert via each edge of the flow network, in order to minimize cost or maximize profit, while obeying the capacity constraints and not allowing commodities of any type to accumulate unused. This type of problem can be formulated as a linear program, and solved using the simplex algorithm. The intermediate solutions arising from this algorithm, as well as the eventual optimal solution, have a special structure: each edge in the input network is either unused or used to its full capacity, except for a subset of the edges, forming a spanning pseudoforest of the input network, for which the flow amounts may lie between zero and the full capacity. In this application, unicyclic graphs are also sometimes called augmented trees and maximal pseudoforests are also sometimes called augmented forests.

The minimum spanning pseudoforest problem involves finding a spanning pseudoforest of minimum weight in a larger edge-weighted graph G. Due to the matroid structure of pseudoforests, minimum-weight maximal pseudoforests may be found by greedy algorithms similar to those for the minimum spanning tree problem. However, Gabow and Tarjan found a more efficient linear-time approach in this case. The pseudoarboricity of a graph G is defined by analogy to the arboricity as the minimum number of pseudoforests into which its edges can be partitioned; equivalently, it is the minimum k such that G is (k,0)-sparse, or the minimum k such that the edges of G can be oriented to form a directed graph with outdegree at most k. Due to the matroid structure of pseudoforests, the pseudoarboricity may be computed in polynomial time.
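As a rough illustration of the greedy approach mentioned above (a Kruskal-style sketch with invented names, not the linear-time algorithm of Gabow and Tarjan): scan the edges in order of increasing weight and keep an edge exactly when it leaves every component with at most one cycle. Because the pseudoforests are the independent sets of the bicircular matroid, this greedy rule returns a minimum-weight maximal pseudoforest.

def min_weight_maximal_pseudoforest(n, weighted_edges):
    parent = list(range(n))
    has_cycle = [False] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    chosen = []
    for w, u, v in sorted(weighted_edges):
        ru, rv = find(u), find(v)
        if ru == rv:
            if has_cycle[ru]:
                continue           # would create a second cycle: dependent in the matroid
            has_cycle[ru] = True   # the edge closes this component's single allowed cycle
        else:
            parent[ru] = rv
            has_cycle[rv] = has_cycle[rv] or has_cycle[ru]
        chosen.append((w, u, v))
    return chosen

edges = [(1, 0, 1), (2, 1, 2), (3, 2, 0), (4, 0, 3), (9, 1, 3), (5, 3, 3)]
# Keeps the four cheapest edges; the self-loop and the remaining chord would each
# add a second cycle to the component, so they are skipped.
print(min_weight_maximal_pseudoforest(4, edges))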
A random bipartite graph with n vertices on each side of its bipartition, and with cn edges chosen independently at random from each of the n^2 possible pairs of vertices, is a pseudoforest with high probability whenever c is a constant strictly less than one. This fact plays a key role in the analysis of cuckoo hashing, a data structure for looking up key-value pairs by looking in one of two hash tables at locations determined from the key: one can form a graph, the "cuckoo graph", whose vertices correspond to hash table locations and whose edges link the two locations at which one of the keys might be found, and the cuckoo hashing algorithm succeeds in finding locations for all of its keys if and only if the cuckoo graph is a pseudoforest.

Pseudoforests also play a key role in parallel algorithms for graph coloring and related problems.

Notes
- The kind of undirected graph considered here is often called a multigraph or pseudograph, to distinguish it from a simple graph.
- Gabow & Tarjan (1988).
- Dantzig (1963).
- See the linked articles and the references therein for these definitions.
- This is the definition used, e.g., by Gabow & Westermann (1992).
- This is the definition in Gabow & Tarjan (1988).
- See, e.g., the proof of Lemma 4 in Àlvarez, Blesa & Serna (2002).
- Kruskal, Rudolph & Snir (1990) instead use the opposite definition, in which each vertex has indegree one; the resulting graphs, which they call unicycular, are the transposes of the graphs considered here.
- Woodall (1969); Lovász, Pach & Szegedy (1997).
- Streinu & Theran (2009).
- Whiteley (1988).
- Bollobás (1985). See especially Corollary 24, p. 120, for a bound on the number of vertices belonging to unicyclic components in a random graph, and Corollary 19, p. 113, for a bound on the number of distinct labeled unicyclic graphs.
- Riddell (1951); see OEIS: A057500 in the On-Line Encyclopedia of Integer Sequences.
- Aigner & Ziegler (1998).
- Flajolet & Odlyzko (1990).
- Konyagin et al. (2010).
- Martin, Odlyzko & Wolfram (1984).
- White (1913); Colbourn, Colbourn & Rosenbaum (1982); Stinson (1983).
- Simoes-Pereira (1972).
- Matthews (1977).
- Glossary of Signed and Gain Graphs and Allied Areas.
- For this terminology, see the list of small graphs from the Information System on Graph Class Inclusions. However, butterfly graph may also refer to a different family of graphs related to hypercubes, and the five-vertex figure 8 is sometimes instead called a bowtie graph.
- El-Mallah & Colbourn (1988).
- Ahuja, Magnanti & Orlin (1993).
- Gabow & Westermann (1992). See also the faster approximation schemes of Kowalik (2006).
- Kutzelnigg (2006).
- Goldberg, Plotkin & Shannon (1988); Kruskal, Rudolph & Snir (1990).

References
Ahuja, Ravindra K.; Magnanti, Thomas L.; Orlin, James B. (1993), Network Flows: Theory, Algorithms and Applications, Prentice Hall, ISBN 0-13-617549-X. Aigner, Martin; Ziegler, Günter M. (1998), Proofs from THE BOOK, Springer-Verlag, pp. 141–146. Àlvarez, Carme; Blesa, Maria; Serna, Maria (2002), "Universal stability of undirected graphs in the adversarial queueing model", Proc. 14th ACM Symposium on Parallel Algorithms and Architectures, pp. 183–197, doi:10.1145/564870.564903, hdl:2117/97553, ISBN 1-58113-529-7, S2CID 14384161. Bollobás, Béla (1985), Random Graphs, Academic Press. Colbourn, Marlene J.; Colbourn, Charles J.; Rosenbaum, Wilf L.
(1982), "Trains: an invariant for Steiner triple systems", Ars Combinatoria, 13: 149–162, MR 0666934. Dantzig, G. B. (1963), Linear Programming and Extensions, Princeton University Press. El-Mallah, Ehab; Colbourn, Charles J. (1988), "The complexity of some edge deletion problems", IEEE Transactions on Circuits and Systems, 35 (3): 354–362, doi:10.1109/31.1748. Flajolet, P.; Odlyzko, A. (1990), "Random mapping statistics", Advances in Cryptology – EUROCRYPT '89: Workshop on the Theory and Application of Cryptographic Techniques, Lecture Notes in Computer Science, vol. 434, Springer-Verlag, pp. 329–354. Gabow, H. N.; Tarjan, R. E. (1988), "A linear-time algorithm for finding a minimum spanning pseudoforest", Information Processing Letters, 27 (5): 259–263, doi:10.1016/0020-0190(88)90089-0. Gabow, H. N.; Westermann, H. H. (1992), "Forests, frames, and games: Algorithms for matroid sums and applications", Algorithmica, 7 (1): 465–497, doi:10.1007/BF01758774, S2CID 40358357. Goldberg, A. V.; Plotkin, S. A.; Shannon, G. E. (1988), "Parallel symmetry-breaking in sparse graphs", SIAM Journal on Discrete Mathematics, 1 (4): 434–446, doi:10.1137/0401044. Konyagin, Sergei; Luca, Florian; Mans, Bernard; Mathieson, Luke; Shparlinski, Igor E. (2010), Functional Graphs of Polynomials over Finite Fields Kowalik, Ł. (2006), "Approximation Scheme for Lowest Outdegree Orientation and Graph Density Measures", in Asano, Tetsuo (ed.), Proceedings of the International Symposium on Algorithms and Computation, Lecture Notes in Computer Science, vol. 4288, Springer-Verlag, pp. 557–566, doi:10.1007/11940128, ISBN 978-3-540-49694-6. Kruskal, Clyde P.; Rudolph, Larry; Snir, Marc (1990), "Efficient parallel algorithms for graph problems", Algorithmica, 5 (1): 43–64, doi:10.1007/BF01840376, S2CID 753980. Picard, Jean-Claude; Queyranne, Maurice (1982), "A network flow solution to some nonlinear 0–1 programming problems, with applications to graph theory", Networks, 12 (2): 141–159, doi:10.1002/net.3230120206, MR 0670021. Kutzelnigg, Reinhard (2006), "Bipartite random graphs and cuckoo hashing", Fourth Colloquium on Mathematics and Computer Science, Discrete Mathematics and Theoretical Computer Science, vol. AG, pp. 403–406. Lovász, L.; Pach, J.; Szegedy, M. (1997), "On Conway's thrackle conjecture", Discrete and Computational Geometry, 18 (4): 369–376, doi:10.1007/PL00009322. Martin, O.; Odlyzko, A. M.; Wolfram, S. (1984), "Algebraic properties of cellular automata", Communications in Mathematical Physics, 93 (2): 219–258, Bibcode:1984CMaPh..93..219M, CiteSeerX 10.1.1.78.212, doi:10.1007/BF01223745, S2CID 6900060, archived from the original on 2012-02-12, retrieved 2007-10-03. Matthews, L. R. (1977), "Bicircular matroids", The Quarterly Journal of Mathematics, Second Series, 28 (110): 213–227, doi:10.1093/qmath/28.2.213, MR 0505702. Riddell, R. J. (1951), Contributions to the Theory of Condensation, Ph.D. thesis, Ann Arbor: University of Michigan, Bibcode:1951PhDT........20R. Simoes-Pereira, J. M. S. (1972), "On subgraphs as matroid cells", Mathematische Zeitschrift, 127 (4): 315–322, doi:10.1007/BF01111390, S2CID 186231673. Stinson, D. R. (1983), "A comparison of two invariants for Steiner triple systems: fragments and trains", Ars Combinatoria, 16: 69–76, MR 0734047. Streinu, I.; Theran, L. (2009), "Sparsity-certifying Graph Decompositions", Graphs and Combinatorics, 25 (2): 219, arXiv:0704.0002, doi:10.1007/s00373-008-0834-4, S2CID 15877017. White, H. S. 
(1913), "Triple-systems as transformations, and their paths among triads", Transactions of the American Mathematical Society, 14 (1), American Mathematical Society: 6–13, doi:10.2307/1988765, JSTOR 1988765. Whiteley, W. (1988), "The union of matroids and the rigidity of frameworks", SIAM Journal on Discrete Mathematics, 1 (2): 237–255, doi:10.1137/0401025. Woodall, D. R. (1969), "Thrackles and deadlock", in Welsh, D. J. A. (ed.), Combinatorial Mathematics and Its Applications, Academic Press, pp. 335–348. External links [edit] Weisstein, Eric W., "Unicyclic Graph", MathWorld Retrieved from " Categories: Matroid theory Graph families Graph theory objects Hidden categories: Articles with short description Short description is different from Wikidata Good articles
Published Time: 2004-06-01
Equatives and Deferred Reference
===============
Article. Language 80(2): 262-289, June 2004. DOI: 10.1353/lan.2004.0102. Author: Gregory Ward, Northwestern University.

Abstract
Previous accounts of deferred reference (e.g. Nunberg 1995) have argued that all (non-ostensive) deferred reference is the result of meaning transfer, a shift in the sense of a nominal or predicate expression. An analysis of deferred equatives (I'm the Pad Thai) suggests an alternative account based on the notion of pragmatic mapping: a contextually licensed mapping operation between (sets of) discourse entities, neither of which undergoes a transfer of meaning. Moreover, the use of a deferred equative requires the presence of a contextually licensed open proposition (Prince 1986) whose instantiation encodes the particular mapping between entities, both of which remain accessible to varying degrees within the discourse context. Finally, it is shown how a complete account of deferred reference must provide for transfers of reference as well as sense.

Citations (43)
... In an open-ended-relation NPC, the semantic relation between the subject and predicate NPs is hardly constrained, and the hearer needs to identify it based on contextual cues. The open-ended-relation NPC in English is discussed under the rubric of "deferred equative" by Ward (2004); the one in Japanese has been called the "eel construction (eel sentence)", after an oft-cited example involving unagi 'eel' as its predicate NP (Hoffer 1972: 220-222; Okutsu 1978; Sakahara 1996; Tokizaki 2003). ...
... 5.1.1 Ward (2004) on deferred equatives. Ward (2004) argues that English NPCs like (105) and (106B) instantiate a special construction that he terms "the deferred equative". ...
... (adapted from Nunberg 1995: 115) Given that in English it is customarily possible for an NP to stand for an entity (e.g. a person) metonymically associated with its referent (e.g. a dish), it may seem reasonable to treat (120)/(121B) as regular NPCs whose predicate NP happens to have undergone this kind of metonymic transfer. Ward (2004), however, convincingly argues that NPCs like (105)/(106B) cannot be accounted for in terms of metonymic transfer at the level of nominals. One piece of evidence that the subject and predicate NPs of a deferred equative (typically) retain their literal meaning is that a predicate NP or subject NP literally denoting a non-human but equated with a human, such as the pad thai in (108b)/(109b), still accepts a modifier selecting a non-human-denoting modifiee. ...

What a nominal predicate may mean: eel, merfolk, and other creatures. Article, Linguistics, October 2024, David Y. Oshima.
A nominal predicate construction (NPC; e.g. "Cicero is {Tully/an orator}") typically indicates the relation of identity or inclusion.
NPCs, however, may receive marked interpretations as well, as in “I’m the ham sandwich” and “Their car is a peculiar color”; the former does not entail that the speaker is identical to the sandwich, and the latter does not entail that the car belongs to a set of colors. This article identifies, classifies, and analyzes such marked NPCs in English and Japanese, thereby enriching the taxonomy of NPCs acknowledged in the existing literature. It will be argued that both Japanese and English NPCs may indicate either (i) one of a handful of relatively specific semantic relations including the one of identity/inclusion, or (ii) an unspecified relation that is to be contextually inferred. The English NPC is semantically less flexible than the Japanese one, being compatible with a narrower range of specific semantic relations and allowing the unspecified-relation interpretation less leniently. The English NPC, on the other hand, is more liberally used in comparison to its counterparts in some related languages (e.g. German). The analysis put forth makes a good vantage point for general-linguistic and typological inquiries as to how languages in general may contrast with each other in terms of what types of situations can be described with a NPC. View Show abstract ... The Japanese ONPC has been called the "eel sentence (eel construction)", after an oft-cited example involving unagi 'eel' as its PNP (Hoffer 1972;Okutsu 1978;Tokizaki 2003). The English ONPC is discussed in good detail by Ward (2004) under the rubric of the "deferred equative". The English ONPC is discoursepragmatically more constrained than the Japanese one, as illustrated in (1) This work develops semantic analyses of the two ONPCs that make accurate predictions on their discourse-pragmatic distributions. ... ... 2 The English open-ended-relation NPC (the deferred equative) 2.1 Ward (2004) on deferred equatives Ward (2004) argues that English NPCs like (3) and (4B) instantiate a special construction that he terms the deferred equative. ... ... (adapted from Nunberg 1995:115) Given that in English it is customarily possible for an NP to stand for an entity (e.g. a person) metonymically associted with its referent (e.g. a dish), it may seem reasonable to treat (3)/(4B) as regular NPCs whose predicate NP happens to have undergone this kind of metonymic transfer. Ward (2004), however, convincingly argues that NPCs like like (3)/(4B) cannot be accounted for in terms of metonymic transfer at the level of nominals ("deferred nonequatives"). One piece of evidence that the subject and predicate NPs of a deferred equative (typically) retain their literal meaning is that a predicate NP or subject NP literally denoting a non-human but equated with a human, such as the pad thai in (6b)/(7b), still accepts a modifier selecting a non-human-denoting modifiee. ... How to be a ham sandwich or an eel: The English deferred equative and the Japanese eel sentence Article Full-text available Oct 2022 David Y. Oshima In some languages including English and Japanese, a nominal predicate construction (NPC; "NP1 is NP2") has a marked variety—"open-ended-relation NPCs" (ONPCs), to label it—where the referents of the subject NP and the predicate NP are understood to be in some pragmatically prominent relation other than identity or inclusion (e.g. I'm the ham sandwich 'I'm the customer who ordered the ham sandwich'). The Japanese ONPC has been called the "eel sentence (eel construction)", after an oft-cited example involving unagi 'eel' as its predicate NP. 
The English ONPC is discussed in good detail by Ward (2004; "Equatives and deferred reference", Language 80) under the rubric of the "deferred equative". The ONPCs in the two languages can be naturally used only under limited discourse configurations, with the English one being more severely constrained than the Japanese one. This work develops semantic analyses of the two ONPCs that improve on previous accounts. View Show abstract ... A key factor in their development is the use in Middle Chinese (220-960 CE) of relative clause at the post-copula position. 2 We argue that the emergence of V de O clefts also involved analogization to the extant VP de clefts and deferred equatives (Ward 2004), which gave rise to semantic and syntactic neoanalysis. When VP de clefts came into being, the network of the cleft construction simultaneously emerged. ... ... The fact that the V de O cleft is incompatible with these grammatical categories shows that other than the VP de cleft, its occurrence might have been influenced by other constructions. We argue that the deferred reference copula, also called the deferred equative (Ward 2004), 16 is another important exemplar relevant to the development of V de O cleft. ... ... 米饭 mǐfàn rice 'I am the rice. ' Ward (2004) proposes an account for deferred equatives like (24) based on the notion of pragmatic mapping: a contextually licensed mapping operation between (sets of ) discourse entities. He suggests that the use of a deferred equative requires the presence of a contextually licensed open proposition whose instan- Zhuzi yulei (朱子語類 the 13th century) already scattered. ... The development of the Chinese V de O cleft construction: A constructional approach Article Sep 2022 Fangqiong Zhan Haihua Pan This paper addresses the development of the Chinese V de O cleft construction, and how the cleft constructional network was developed in the history of Chinese. It is argued that V de O clefts emerged in the 13th century which was about 300 years later than VP de clefts. A key factor in their development is the use in Middle Chinese of relative clause in post-copula position. We argue that the emergence of V de O clefts also involved analogization to the extant VP de clefts as well as deferred equatives. Once V de O clefts occurred, they were recruited into the cleft network as a subschema, resulting in the schematic network being augmented and expanded. This study is a contribution to the developing field of constructionalization by making more explicit the way how nodes are created in a constructional network and how the network is reorganized and expanded. View Show abstract ... Many of the properties on the list are familiar from the literature, especially Nunberg's work. 6 Here we refine them and respond to a number of worries from Ward (2004). ... ... (18) The ham sandwich left without paying; he was a jerk. (19) The ham sandwich left without paying; #it was delicious Ward (2004) challenges the claim that transferred meaning is available for anaphoric reference while untransferred meaning isn't. 12 He maintains that in some contexts anaphoric reference to the non-transferred meaning is available. ... ... Thanks to Rob Stainton for this way of putting the point. We take it that the idea that 'it' here is a deictic rather than anaphoric pronouns is what Ward has in mind when he considers the possibility that the 'it' here is a "pronoun of laziness" (Ward (2004): 270), though we ourselves would not use that term in this context. 
Ward himself dismisses this proposal on grounds of parsimony (ibid. ... Meaning Transfer Revisited Article Jul 2019 Phil Perspect David Liebesman Ofra Magidor View ... Although Nunberg presents very interesting and challenging data on the meaning transfer involved in metonymies, his approach does not explain how those meaning transfers work, that is, how the process of associating properties to objects or individuals is contextually constrained to get the intended referent. In this sense, Ward (2004) has raised some doubts about the meaning transfer operating on the common noun rather than on a reference transfer operating on the whole NP, since if the meaning of the common noun is transferred (that is, ham sandwich in (5-6)), but there is no referent of the NP evoked in the interpretation of the metonymy (i.e. a unique or salient sandwich), then it is not clear how the hearer identifies the intended contextual meaning (the ham sandwich customer). A further question that arises here is how to analyse metonymies involving proper names in this approach, since these have no meaning to be transferred. ... ... A further question that arises here is how to analyse metonymies involving proper names in this approach, since these have no meaning to be transferred. Ward (2004) states that Nunberg's nominal transfer mechanism would not work in the absence of a common noun upon which to base the transfer. To illustrate his point he provides the following example: ... A Relevance-theoretic Perspective on Metonymy Article Full-text available Feb 2015 Bárbara Eizaga-Rebollar The aim of this paper is to analyse metonymies from a relevance-theoretic perspective. Metonymy has been considered as an association process between contiguous items within the same cognitive domain or as involving a meaning transfer between properties. These approaches, however, prove inadequate in offering a complete account. I will argue that metonymies are used as reference tools to refer to individuals or objects lying outside their linguistically-specified denotation. I will outline how the intended referent might be identified by the property the speaker singles out for the hearer to focus on. Finally, different metonymic uses in communication will be analysed. View Show abstract ... The recognition that metonymy cannot be analysed along traditional Gricean lines ( 1989), as involving implicatures derived on the basis of the hearer having 'said' something patently absurd/false (e.g. in (1) above, that the object made of bread and ham is getting impatient), has led to several proposals for how metonymy can be accommodated at the truth-conditional level, hence within a compositional semantics (Sag 1981;Stallard 1993;Pustejovsky 1995a;Stern 2000Stern , 2006. 159 The challenge for more cognitively or pragmatically oriented approaches to metonymy (Nunberg 1978(Nunberg , 1979(Nunberg , 1996(Nunberg , 2004Fauconnier 1994Fauconnier [1985; Papafragou 1996; Recanati 1995Recanati , 2004 Ward 2004;Evans 2009), which are less concerned with the problems metonymy raises for semantic compositionality, is to explain the nature of the pragmatic process that leads to metonymic interpretations. Central questions are: What are the circumstances under which a speaker may take an expression a which refers to A and use it to successfully refer to B? What are the constraints on the possible relations between A and B? 
What is the cognitive and communicative motivation for using a metonymic expression, instead of a literal expression with a similar meaning? ... ... In previous chapters, I have argued that polysemy arises as a result of a pragmatic process of ad hoc concept construction, which operates at the level of individual words and whose outcomes are concepts with either a narrower or a broader denotation than those linguistically-encoded. As we have seen, relevance theory takes utterance comprehension, including the construction of ad hoc concepts, to be a wholly inferential process, with a unitary, on-line pragmatic processing system which derives Nunberg 1978Nunberg , 1979Fauconnier 1994Fauconnier [1985; Ward 2004), ... The Semantics and Pragmatics of Polysemy: A Relevance-­‐Theoretic Account Article Full-text available Jan 2011 Ingrid Lossius Falkum This thesis investigates the phenomenon of polysemy: a single lexical form with two or multiple related senses (e.g. catch the rabbit/order the rabbit; lose a wallet/lose a relative; a handsome man/a handsome gift). I develop a pragmatic account of polysemy within the framework of Sperber and Wilson’s relevance theory, where new senses for a word are constructed during on-line comprehension by means of a single process of ad hoc concept construction, which adjusts the meanings of individual words in different directions. While polysemy is largely unproblematic from the perspective of communication, it poses a range of theoretical and descriptive problems. This is sometimes termed the polysemy paradox. A widely held view in lexical semantics is that word meanings must consist of complex representations in order to capture the sense relations involved in polysemy. Contrary to this view, I argue that a conceptual atomist approach, which treats word meanings as unstructured atoms and thereby avoids the range of problems associated with decompositional theories of word meaning, may be at least as able to account for polysemy when paired with an adequate pragmatic theory. My proposed solution to the polysemy paradox is to treat polysemy as a fundamentally communicative phenomenon, which arises as a result of encoded lexical concepts being massively underdetermining of speaker-intended concepts, and is grounded in our pragmatic inferential ability. According to this approach, the role of the linguistic system in giving rise to polysemy is to provide a minimal input, or clue, which the pragmatic system uses as evidence to yield hypotheses about occasion-specific, speaker-intended meanings. I further show how this pragmatic approach can account for cases of ‘systematic polysemy’, usually seen as prime candidates for an analysis in terms of lexical rule application. Finally, I develop an account of metonymy within the overall framework of relevance-theory. View Show abstract ... That metonymic expressions such as read Dickens are as easy to process as conventional expressions such as met Dickens strongly suggests that deferred interpretation per se is not computationally costly for the language processor. It might be useful for some purposes to assume that all forms of deferred interpretation share a common linguistic mechanism, such as meaning or reference transfer (e.g., Nunberg, 1978 Nunberg, , 1979 Nunberg, , 1995 Ward, 2004). 
However, our data indicate that these constructs do not appear to provide a useful classification of compositional cost and, hence, do not appear to accurately characterize the means through which comprehenders arrive at the respective interpretations. ... ... adopt Nunberg's (2004) use of the term deferred interpretation as an umbrella expression for cases that might involve both meaning (or sense) transfer and reference transfer (see Ward, 2004). Unfortunately, the literature is rife with different terms, often used in slightly different ways (e.g., deferred reference, deferred meaning, predicate transfer, systematic polysemy). ... Deferred Interpretations: Why Starting Dickens is Taxing but Reading Dickens Isn't Article Jan 2006 Brian Mcelree Steven Frisson Martin J Pickering Comprehenders often need to go beyond conventional word senses to obtain an appropriate interpretation of an expression. We report an experiment examining the processing of standard metonymies (The gentleman read Dickens) and logical metonymies (The gentleman began Dickens), contrasting both to the processing of control expressions with a conventional interpretation (The gentleman met Dickens). Eye movement measures during reading indicated that standard (producer-for-product) metonymies were not more costly to interpret than conventional expressions, but logical metonymies were more costly to interpret than both standard metonymies and conventional expressions. These results indicate that constructing alternative senses is sometimes taxing and that not all types of deferred interpretations are processed in the same way. The results suggest that a critical factor in determining the attendant cost of constructing alternative senses is whether compositional operations must generate unexpressed semantic structure to realize an extended sense of an expression. View Show abstract ... These examples are often discussed together with genuine metasemies2, while the lexical expressions of this type are contextually too restricted to be considered lexical units, be it only ephemeral. An ad hoc metonymic semantic transfer represents a pre-linguistic operation by the Speaker: the selection of a possible metonymic meaning to attach to a chunk of the targeted extralinguistic situation (see, e.g., Ward, 2004). Such transfers are determined by psychological and/or pragmatic factors and seem to be language independent. ... A new type of linguistic sign: metasemy2 Article Full-text available Oct 2024 Igor Mel'cuk A new type of morphological expressive linguistic means and the corresponding linguistic sign are described: metasemy 1 — an expressive linguistic means that is an operation on the signified of the target lexeme; and metasemy 2 — a sign whose signifier is a metasemy 1 . Thus, the English metasemy 2 painting , when applied to a human proper name, such as, e.g., T urner , produces a derived lexeme, in this case, painting ( T urner) = [ a ] Turner ‘[a] painting by Turner’ ( I have seen two excellent Turners ). A formal description of the Russian metasemy 2 pomeščenie ‘place 2 ’ is presented, based on the analysis of the Russian phrase u otca ‘at father’s’ = ‘at father’s place 2 ’, where the underlying lexeme otec ‘father’ and the derived lexeme pomeščenie (otec) are involved, the meaning ‘place 2 ’ being expressed by the metasemy 2 pomeščenie . The English and French translational equivalents of this phrase, Eng. at father’s and Fr. 
chez le père , are shown to have different organizations: in at father’s , the meaning ‘place 2 ’ is carried by the ’s -form, and in chez le père , by the preposition chez . A tentative list of known Russian metasemies 2 is supplied, as well as the similar lists for English and French. A metasemy 2 always expresses a metonymic semantic relation between the underlying lexeme and the resulting derived lexeme; it is a derivational morphological means, parallel to derivational affixes. Metasemy 2 seems to exist universally. View Show abstract ... It must be said that the proper definition of linguistic ambiguity is not trivial, although researchers tend to agree that ambiguity must be distinguished from vagueness (Fara 2000;Kennedy 2019), context sensitivity (Donaldson and Lepore 2012), reference transfer (Nunberg 1978;Ward 2004), and generality of sense (Zwicky and Sadock 1975). Let us briefly illustrate these phenomena before presenting the contributions of this volume. ... Ambiguity in Linguistics Article Full-text available Nov 2023 Stud Ling Jordi Fortuny Lluís Payrató Ambiguity is conventionally defined in Linguistics as a property of a word or an utterance that has two meanings or two interpretations, and is usually classified as lexical, morphological, syntactic (or structural), and pragmatic. Giving an adequate definition of linguistic ambiguity is not trivial, nor is there unanimity in accepting it. Most researchers tend to agree that ambiguity should be distinguished from related concepts such as vagueness, context sensitivity, reference transfer, and underdetermination or generality of meaning. The distinction between these concepts is also related to the divergences or connections between the perspectives of analysis of ambiguity, and the aim of each work. In this introduction, we define the limits of ambiguity with respect to related concepts and summarize the studies contained within this special issue. These studies do not cover all possible approaches to linguistic ambiguity, but provide a broad overview that can be useful in different fields. We trust that they will contribute to deepening into a phenomenon that is not yet well described and that seems to be consubstantial with the use of language. View Show abstract ... The discussions between Ward, Nunberg and Sag seem to suggest the same. (see Ward, 2004;Sag, 1981;Nunberg, 1993Nunberg, , 1995Nunberg, , 2004a Thus, I take Nunberg's account of deferred reference to be an attempt to delineate a technical concept. I find this concept to be a useful tool in analyzing linguistic phenomena, and so I take the characteristics Nunberg provided as its definition. ... Descriptive Indexicals, Deferred Reference, and Anaphora Article Full-text available Jun 2020 Katarzyna Kijania-Placek The objectives of this paper are twofold. The first is to present a differentiation between two kinds of deferred uses of indexicals: those in which indexical utterances express singular propositions (I term them deferred reference proper) and those where they express general propositions (called descriptive uses of indexicals). The second objective is the analysis of the descriptive uses of indexicals. In contrast to Nunberg, who treats descriptive uses as a special case of deferred reference in which a property contributes to the proposition expressed, I argue that examples in which a general proposition is indeed expressed by an indexical cannot be treated by assuming that the property is a deferred referent of the pronoun. 
I propose an analysis of descriptive uses of indexicals by means of a pragmatic mechanism of ‘descriptive anaphora’, which attempts to explain the special kind of contribution of the property retrieved from the context to the proposition that is characteristic of the descriptive interpretation. View Show abstract ... So far, we have seen that formal semantic approaches to copredication focus on the contribution of lexical representations during composition. In contrast, pragmatic approaches focus on speaker intentions and conversational principles (Nunberg 1978, Recanati 2010, Ward 2004. Semanticists have readily acknowledged (in particular, Asher, Pustejovsky, Copestake and Briscoe) the importance that pragmatics can play in improving copredication acceptability. ... Linguistic Representation and Processing of Copredication Thesis Full-text available Jan 2021 Elliot Murphy This thesis addresses the lexical and psycholinguistic properties of copredication. In particular, it explores its acceptability, frequency, crosslinguistic and electrophysiological features. It proposes a general parsing bias to account for novel acceptability data, through which Complex-Simple predicate orderings are degraded across distinct nominal types relative to the reverse order. This bias, Incremental Semantic Complexity, states that the parser seeks to process linguistic representations in incremental stages of semantic complexity. English and Italian acceptability data are presented which demonstrate that predicate order preferences are based not on sense dominance but rather sense complexity. Initial evidence is presented indicating that pragmatic factors centred on coherence relations can impact copredication acceptability when such copredications host complex (but not simple) predicates. The real-time processing and electrophysiological properties of copredication are also presented, which serve to replicate and ground the acceptability dynamics presented in the thesis. View Show abstract ... Mandarin also has a copulative construction that takes the form of NP1 shì NP2, but the two NPs differ in reference (see (7); cf. Chao 1968, 45;Ward 2004;Shen 2008, 389-390;Zhang & Tang 2010, 20-21 for more discussion). 6 Parallel with (3a) and (4a), one would assume that there exists a Mandarin VOde focus cleft like the following: Ta shì q Běijīng de. ... On the Formation of Modern Chinese VdeO Focus Clefts Article Dec 2013 Haiping Long In this paper we argue that Mandarin VdeO focus clefts (e.g., Ta shi zuo huoche qu de Beijing ‘It was by train that he went to Beijing’ and Shi ta zuo huoche qu de Beijing ‘It was he who went to Beijing by train’) originate from bi-clausal copulative constructions in Early Modern Chinese with the interaction between particular word order (SVO order, but the relative clause before the head noun) and the adjacency effect commonly observed in the focus clefts of SVO languages. The adjacency effect is locally constrained by the presupposition effect of the particular relative clause to produce a special head-noun focus cleft in Mandarin (Ta shi qu de Beijing ‘It was Beijing that he went to’). The past time meaning, the negation restriction, and the TAM (tense, aspect, and modality) restrictions that Mandarin VdeO focus clefts exhibit all come from the syntactic requirement that O in a Mandarin VdeO focus cleft should be specific in reference. View Show abstract ... We are assuming that, at this point, the adjective is coerced to have an adverbial meaning. 
Then, we need to check whether the coerced adverbial meaning of OFFICIAL, say OFFICIALLY, 7 is compatible with the idiomatic Ward 2004). The underlined parts are coerced interpretations. ... Idioms: Formally Flexible but Semantically Non-transparent Article Full-text available Oct 2015 Hee-Rahk Chae Contrary to popular beliefs, idioms show a high degree of formal flexibility, ranging from word-like idioms to those which are like almost regular phrases. However, we argue that their meanings are not transparent, i.e. they are non-compositional, regardless of their syntactic flexibility. In this paper, firstly, we will introduce a framework to represent their syntactic flexibility, which is developed in Chae (2014), and will observe some consequences of the framework on the lexicon and the set of rules. Secondly, there seem to be some phenomena which can only be handled under the assumption that the component parts of idioms have their own separate meanings. However, we will show that all the phenomena, focusing on the behavior of idiom-internal adjectives, can be accounted for effectively without assuming separate meanings of parts, which confirms the non-transparency of idioms. View Show abstract ... Many researchers have considered how figurative interpretations for metonymies might arise (e.g., Lakoff & Johnson, 1980; Leech, 1974; Ruiz de Mendoza Ibañez, 2001; Nunberg, 1975 Nunberg, , 1995 Stallard, 1993; Ward, 2004). We will not dwell on differences across specific proposals here except to the extent that they are relevant to the hypotheses we will advance. ... When entailments abandon ship: Resolution of semantic conflict through Entailment Transfer Metonymy Article Full-text available Jan 2007 Jean-Pierre Koenig Sean Green Gail Mauner The sentence “The ship confronted the storm,” must be interpreted nonliterally since confronting something requires intention. This sentence represents Entailment Transfer Metonymy (ETM), a previously undescribed variant of me tonymy. Unlike other metonymy variants, with ETM, neither an NP nor a predicate undergoes semantic change. Instead, a literal subject NP satisfies the verb’s requirement for an agent argument, while an entity evoked by the referent of that NP (e.g. the ship’s crew) satisfies the intentionality entailment imposed by the verb. Experiment 1 examined readers’ judgments for rationale clauses requiring a grammatically available intentional agent to show that nonliteral interpretation of ETMs preserves the literal meaning of subject NPs. Experiment 2 provides indirect evidence that readers may to construct Entailment Transfer metonyms soon after encountering the verb. The results of these two experiments demonstrate that ETM can enable figurative interpretation, not through meaning change but by changing the role that semantic information plays in the interpretation of the sentence. View Show abstract ... The idea to capture aspects of a speech situation like in a conceptual-semantic structure is not in itself pragmatic. Neither is metonymy primarily a conceptual-semantic affair, and this holds also for reference transfer (Ward 2004). Second, it is not reasonable to make "no distinction between semantics and pragmatics". ... On "R" in phrasal compounds - A contextualist approach Article Full-text available Sep 2015 Jörg Meibauer In phrasal compounds of the type XP+Y, one can assume a relation “R” that holds between the head and the non-head just as in ordinary N+N compounds. The paper discusses the question how “R” should be understood. 
Three recent approaches, i.e., construction morphology, parallel architecture view, and indexicalism are discussed. It is argued that all approaches lack a pragmatic component which is necessary for modeling pragmatic inferencing with respect to phrasal compounds. Thus, an “unspecific meaning” approach to the semantics of phrasal compounds, together with contextualist views on pragmatic enrichment, is a serious alternative to the approaches discussed. View Show abstract ... The pragmatic mapping process, illustrated in naming, is considerably more complex than it might seem on the surface, and, as argued in a series of papers and books elsewhere , it forms the foundation for valid referring relations-which are invariably embedded in true narrative representations (TNRs). Valid referring relations, SπO, and all TNRs are true in the ordinary sense of "truth" because they conform to the normal conventional applications of their signs, S; they are narratives in all cases because it is impossible to refer to any particular material entity whatsoever apart from some context of experience that involves events unfolding over time; and they are representations because the S in each case invariably stands for something other than itself. ... Pragmatic information. In Marks, R., Behe, M., Dembski, W., Gordon, B., & Sanford, J.C. (Eds.) Biological Information: New Perspectives (pp. 64–86). Singapore: World Scientific, Chapter Full-text available Jan 2013 John William Oller Jr. [Here is the abstract of the published version that appeared in 2013. The uploaded full text, however, is from a draft written prior to reviewing by Sanford and others that was completed on January 11, 2011. I like my original paper better than the version that was published in 2013 by World Scientific after being stripped of much of the substance and most of the mathematics.] The goal of this paper is to define pragmatic information with a view toward measuring it. Here, pragmatic information means the content of valid signs — the key that unlocks language acquisition by babies and to human communication through language — also the content that enables biological “codes” in genetics, embryology, and immunology to work. In such systems, the inter-related layers appear to be ranked as in a hierarchy. Sounds are outranked by syllables, in turn outranked by words, and so on. In DNA, nucleotide pairs are outranked by codons, which are outranked by genes, and so on. As signs of lower rank combine to form signs of any higher rank, combinatorial “explosions” occur. With each increase in rank, the number of possible combinations grows exponentially, but the constraints on valid strings and, thus, their pragmatic value, sharpens their focus. As a result with each explosive increase in the number of possible combinations the relative proportion of meaningful ones diminishes. Consequently, random processes of forming strings or changing them must tend increasingly toward meaninglessness (invalid and nonviable) strings. The consequent outcome of random mutations is mortality of individuals and in deep time an increasing number of disorders, diseases, and the eventual extinction of populations. Read More: View Show abstract ... One final phenomenon that we want to present here is reference transfer (sometimes also called deferred reference, meaning transfer or sense transfer ) . Here, the initiator of a statement assigns a new meaning to a word with an initially different sense. ... 
Linguistically Motivated Ontology-Based Information Retrieval Thesis Jun 2013 Wolf Fischer When Tim Berners-Lee proposed his vision of the Semantic Web in 2001, he thought of machines that automatically execute specific tasks based on available knowledge. The knowledge should be captured within ontologies which provide an unambiguous and semantically rich way to capture information. The information could further be used to enhance tasks like information retrieval, i.e., the retrieval of documents which match specific criteria. Over a decade later, technologies which are required for the Semantic Web have been established in several areas, e.g., the biological and medical domains. Both share a very constant pool of knowledge, which does not change as rapidly as in other domains, i.e., neither a lot of new knowledge must be added continuously nor the existing knowledge has to be updated very often. These circumstances make both domains suitable for manually creating ontologies. However, in case of a domain with constantly incoming new knowledge, it would be a great advantage if this knowledge could automatically be added or matched to an ontology. However, there is nearly no concept available on how ontological knowledge can be mapped to natural language precisely. We therefore developed the SE-DSNL approach. It provides experts with the ability to specify how ontological knowledge can be mapped to linguistic information of any known language. The concept provides a flexible and generic meta model which captures all the relevant information. In order to use this for parsing natural language text a prototypical implementation has been developed which takes the information of a SE-DSNL model and applies it to a given input text. The result is a semantic interpretation of the input text which maps its lexical and syntactic elements to the ontology. The direct integration of semantic and linguistic information further allows using the semantic information at runtime. This yields certain advantages which are demonstrated by treating elaborate linguistic phenomena like pronominal anaphora resolution, word sense disambiguation, vagueness and reference transfer. To show the validity of the approach it has been evaluated using scenarios and two case studies. View Show abstract ... From early on, it seemed clear that true representations were critical to language acquisition and after understanding the "exact [mathematical] logic" of C. S. Peirce, I began to investigate the possibility of generating a proof. Upon examination of the process that would later become widely known as "pragmatic mapping" (Oller, 1975;Naremore, 1985;Macnamara, 2000;Ward, 2004;Goldberg & Suttle, 2010), it became evident that ordinary true representations, for instance, something as simple as a name validly applied to the person named, had a necessary three part structure and a temporal aspect to it. ... Pragmatic Information Chapter Full-text available Jan 2011 John William Oller Jr. Mathematical definitions of information, e.g., Shannon’s, set aside reference and the ordinary meanings deployed in social and biological communication. Defining information as “negative entropy” —the theoretical opposite of the absence of information (as proposed by Norbert Wiener)—led even farther from the ordinary world of experience. 
However, I will argue and endeavor to prove that Shannon’s restricted definition of information is necessarily grounded (exactly to the extent of its meaningfulness) in ordinary representations of facts of experience that are faithfully represented. These are true narrative representations (TNRs) and, it is proved that they form the only possible basis for meaningful signs and sign systems in general. Consistent with empirical applications of Claude Shannon’s abstracted definition of information (as argued by Gitt, Marks, McIntosh, and others at this symposium), and with the current findings reported by Baumgardner and Sanford at this conference and elsewhere, the theory of TNRs shows why random mutations, toxic injuries, diseases, and the interactions of such factors impacting DNA and the biological communication and control systems of the body—including its physiological architecture, biochemistry, and especially its multilayered immune systems—must lead inevitably to disorders, disease conditions, and mortality. The system of proofs on which TNR theory rests sounds the death knell of every current variant of orthodox evolutionary dogma. Surprisingly, functional TNRs contrasted with fictions, errors, lies, and nonsense are relatively “perfect” and ubiquitous in ordinary experience. They have been proved by exact logic to be critical to all scientific measurements without any possible exceptions and to all comprehensible mathematical representations. [The finished product, worked over by certain editors, appeared in 2013 under the title given above. However, this pre-print was completed by about January 11, 2011, well ahead of the Cornell conference chronicled in the publication by World Scientific.] Read More: View Show abstract ... 5. This is not what has been referred to as 'deferred reference' or 'metonymical reference' (see Nunberg 1979, 1995, 2004a; Ward 2004) but it resembles it in a superficial way. Nunberg (1995) asks us to imagine a restaurant patron who is showing a key to a valet, saying This is parked out back. ... Token-reflexive, anaphoric and deictic functions of ‘here’ Article Full-text available Dec 2011 NORD J LINGUIST Thorstein Fretheim Nana Aba Appiah Amfo Ildikó Vaskó There are basically three ways in which the reference of a token of the English proximal spatial indexical here and corresponding terms in other languages can be resolved in the context-dependent, pragmatic phase of the addressee's determination of the propositional content of an utterance that contains this adverbial adjunct. ‘Here’ may refer reflexively to the place of utterance, including minimally the spot occupied by the speaker (token-reflexive reference), it may be anaphoric upon a discourse antecedent that provides information necessary for identification of the referent (anaphoric reference), or resolution of the reference depends on information derived from processing of a perceptual stimulus (deictic reference). These three pragmatic paths to resolution of the reference of proximal spatial indexicals are not mutually exclusive, so they do not warrant postulation of lexical ambiguity, at least not the traditional kind of ambiguity based on differences in conceptual meaning. View Show abstract ...
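Several of the excerpts above and below turn on Nunberg-style deferred reference, in which a speaker demonstrates one thing (a key) in order to refer to another (the car it opens), subject to criteria such as a salient functional correspondence and contextual support. Purely as a toy illustration of how that mechanism is usually glossed, here is a minimal Python sketch; the class names, the transfer table, and the salience scores are invented for exposition and are not drawn from Ward (2004) or any of the works listed on this page.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    kind: str         # e.g. "key", "car", "book"
    salience: float   # how prominent the entity is in the utterance situation

# Hypothetical transfer relations: functional correspondences that license a
# shift from a demonstratum to a deferred referent (key -> the car it opens).
TRANSFER_RELATIONS = {
    ("key", "car"): "opens",
    ("author", "book"): "wrote",
}

def resolve_reference(demonstratum, candidates, predicate_wants):
    """Pick the entity an utterance is about.

    If the predicate fits the demonstratum itself, keep the literal reading;
    otherwise look for the most salient candidate linked to the demonstratum
    by a functional correspondence (a rough stand-in for the salience and
    functional-correspondence criteria discussed in the excerpts)."""
    if demonstratum.kind == predicate_wants:
        return demonstratum  # literal reference, no transfer needed
    linked = [c for c in candidates
              if (demonstratum.kind, c.kind) in TRANSFER_RELATIONS
              and c.kind == predicate_wants]
    if not linked:
        raise ValueError("no functionally linked referent; deferred reading unavailable")
    return max(linked, key=lambda c: c.salience)

if __name__ == "__main__":
    key = Entity("this key", "key", salience=1.0)
    car = Entity("the patron's car", "car", salience=0.8)
    # "This is parked out back": 'parked' selects a car, not a key.
    print(resolve_reference(key, [car], predicate_wants="car").name)
```

The only point of the sketch is that the shift is licensed by a conventional relation plus contextual prominence, not by any change in the meaning of the demonstrative itself, which is roughly the intuition the surrounding excerpts attribute to Nunberg and Ward.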
The following prerequisites have been formulated for this kind of meaning shift from a property to an individual: (i) the salience/noteworthiness of the property used to identify an individual, (ii) a functional correspondence between the property and the intended referent, and (iii) context support (cf. Jackendoff 1997; Nunberg 1995; Ward 2004). Context-dependence is also featured as a core criterion for meaning shift in truth-conditional pragmatics (Recanati 2010). ... Content and context in incremental processing: “The ham sandwich” revisited Article Jul 2013 Petra B Schumacher The interplay of content and context is observable in a moment to moment manner as propositional content unfolds. The current contribution illustrates this through data from real-time language comprehension indicating that propositional content is not computed in isolation but relies in important ways on context during every step of the computation of meaning. The relevant notion of context that we have to adopt includes all aspects of possible worlds and draws on a variety of knowledge representations, which in a first processing phase serve to generate expectations for upcoming words. In a second phase, the discourse representation is assessed and if necessary updated by means of inferential reasoning and enrichment to reflect the speaker’s intended meaning. View Show abstract ... On the other hand, underspecified lexical representations may interact with context and conceptual structure to select a contextually relevant meaning. Pragmatic approaches in turn focus on speaker intention and conversational principles and argue that certain expressions are used to maximize informativity and efficiency (e.g., Nunberg, 1979; Horn, 1984; Nunberg and Zaenen, 1992; Blutner, 1998; Egg, 2004; Ward, 2004; Recanati, 2010). From a processing perspective, experimental research has revealed different patterns for the examples presented above. ... When combinatorial processing results in reconceptualization: Toward a new approach of compositionality Article Full-text available Oct 2013 Petra B Schumacher Propositional content is often incomplete but comprehenders appear to adjust meaning and add unarticulated meaning constituents effortlessly. This happens at the propositional level (The baby drank the bottle) but also at the phrasal level (the wooden turtle). In two ERP experiments, combinatorial processing was investigated in container/content alternations and adjective-noun combinations transforming an animate entity into a physical object. Experiment 1 revealed that container-for-content alternations (The baby drank the bottle) engendered a Late Positivity on the critical expression and on the subsequent segment, while content-for-container alternations (Chris put the beer on the table) did not exert extra costs. In Experiment 2, adjective-noun combinations (the wooden turtle) also evoked a Late Positivity on the critical noun. First, the Late Positivities are taken to reflect discourse updating demands resulting from reference shift from the original denotation to the contextually appropriate interpretation (e.g., the reconceptualization from animal to physical object). This shift is supported by the linguistic unavailability of the original meaning, exemplified by copredication tests. Second, the data reveal that meaning alternations differ qualitatively. Some alternations involve (cost-free) meaning selection, while others engender processing demands associated with reconceptualization.
This dissociation thus calls for a new typology of metonymic shifts that centers around the status of the involved discourse referents. View Show abstract ... In all these cases, the interpretation ultimately rests on a contextual interpretation of the perceptual field. The importance of this visual context is comparable to the role of context in cases of verbal replacement (i.e., metonymical or deferred reference; see [Fauconnier 1985] or [Ward 2004]). In a sentence like "Plato is on the top shelf", the interpretation of Plato as referring to the book instead of the person is exclusively based on the context of the sentence. ... Classifying visual rhetoric: Conceptual and structural heuristics Chapter Full-text available Jan 2008 Alfons A. Maes Joost Schilperoord View ... The discussion is limited to referential choices that maintain the same perspective. Outside the scope of this paper are decisions that reflect choices of perspective (e.g., Betty vs. the professor vs. the woman; Clark and Wilkes-Gibbs 1986; Isaacs and Clark 1987; Schober and Clark 1989), word retrieval (e.g., Dell 1986; Levelt 1999), or indirect reference (e.g., Nunberg 1977, 2004; Clark 1992; Ward 2004). ... How Speakers Refer: The Role of Accessibility Article Apr 2010 Jennifer E Arnold One of the core components of language is referring, which requires the speaker to choose between expressions that are highly explicit (e.g., the UNC professor, or Peter), and reduced lexical forms (e.g., he). This paper reviews claims that this process is largely driven by the accessibility or salience of the referent, and the psychological processes that underlie these effects. Two classes of constraint are examined: (1) Discourse status, which has traditionally been identified as the determinant of referential choices and (2) Non-linguistic processing constraints that increase the use of explicit forms. These effects together support a modified version of the traditional claim that speakers choose referential explicitness so that the listener can identify the referent, and underscore the need for accessibility to be mediated by a non-linguistic representation. View Show abstract ... In particular, three criteria have been highlighted as prerequisites for successful reference transfer: (i) the salience or noteworthiness of the property denoting an individual, (ii) a functional correspondence between the source and the intended referent, and (iii) contextual support (cf. Jackendoff 1997; Nunberg 1995; Ward 2004). The following example indicates that the salience of the property is an important premise. ... The hepatitis called ...: Electrophysiological evidence for enriched composition Book Apr 2011 Petra B Schumacher In recent years, a lively debate ensued on an old issue, namely the proper distinction between semantics and pragmatics against the background of the classical Gricean distinction between ‘what is said’ and ‘what is implicated’. From a linguist’s point of view, however, there has always been a regrettable lack of empirical data in this otherwise sophisticated debate. Recently, a new strand of research emerged under the name of experimental pragmatics, the attempt to gain experimental data on pragmatic and semantic issues by using psycholinguistic and neurolinguistic methods. This volume brings together work by scholars engaging in experimental research on the semantics/pragmatics distinction.
The contribution of experimental pragmatics to pragmatic and semantic theory is discussed from a number of different angles, ranging from implicature and pragmatic enrichment to pragmatic acquisition, pragmatic impairment, and pragmatic processing. In addition, methodological issues are discussed. The contributions will appeal to theoretical linguists, psycholinguists, neurolinguists, and language philosophers. View Show abstract A semantic-syntactic analysis of Chao's sentences with a verbal subject and a nominal predicate (in Chinese) Article Full-text available Oct 2023 Zhongru Xiong 摘要: 子句可以分成题元层、形态层与话语层,这三层可用来定义逻辑主语、语法主语与心理主语(话题)等三种主语。话题化为 A'-移位,涉及论元与附加语,对赵氏"动主名谓句"来说,就是标补子句与状语子句充当话题。标补子句是谓词的论元,可提升为语法主语与话题,如"不下雨已经三个月了"中的"不下雨";状语子句是句子的附加语,不能充当语法主语,但可提升为话题,如"逃孱头"中的"逃"。汉语时制范畴缺乏形态,允许空主语,也不需要动词性宿主。前者使得汉语子句可以以动词或动词短语的形式呈现,如"谁逃"呈现为"pro 逃";后者使得汉语名词性短语可以不借助系词充当谓语,如"谁是孱头"呈现为"谁孱头"。两者的合力产生了赵氏"动主名谓句",如" pro 逃 pro 孱头"。赵氏"动主名谓句"对普遍原则不构成挑战,它的存在跟汉语时制范畴的特征相关。 ABSTRACT: The clause can be divided into three layers: the thematic layer, the inflectional layer and the discourse layer, within which there exist the corresponding logical subject, grammatical subject and psychological subject. Topicalization belongs to A'-movement which involves an argument and an adjunct. For Chao's sentences with a verbal subject and a nominal predicate (CS, for short), the topicalized elements are complement clauses and adverbial clauses. The complement clause is the nominal predicate's argument in CS, acting as the predicate's logical subject, such as ' bu xiayu ' in ' bu xiayu yijing sangeyue le '. It can be raised into the Spec of T or Top, hence, as a grammatical subject or a topic. The adverbial clause is an adjunct of the TP. It can't act as a grammatical subject, but can be raised as a topic, such as ' tao' in ' tao cantou '. Since Chinese lacks of inflection in T, the null subject can be licensed and the verbal host doesn't need. The former makes the clause represent as a bare verb or a verbal phrase, and the latter makes the nominal phrase act as a predicate without the copula's support. Hence, the CSs can be produced with the two types of strength, for instance, in ' pro tao pro cantou' , the subject is null as pro and the predicate has no copular. The CSs do not challenge the universal principles in the Chomskian linguistics, and they are related with the feature of T in Chinese. View Show abstract Nouns and Verbs in Chinese I: Facts and Theories Book Aug 2023 Shen Jiaxuan View Property Inheritance, Deferred Reference and Copredication Article Full-text available Dec 2021 Matthew Gotham There are sentences that are coherent and possibly true, but in which there is at the very least the appearance of a conflict between the requirements of two (or more) predicates that are applied to the same argument. This phenomenon, known as copredication, raises various issues for linguistic theory. In this paper I defend and develop an approach to the issues of counting and individuation in copredication put forward in previous work, in dialogue with criticisms made by Liebesman & Magidor and their own positive account of copredication. View Show abstract On the (un)interpretability of phi-agreement Research Full-text available Jan 2016 Milan Rezac Phi-features in agreement have usually been analysed as uninterpretable. This hypothesis is important evidence about the autonomy of syntax and about particular syntactic mechanisms. However, there are also interpretive approaches to phi-agreement. 
This study looks at potentials and challenges of interpretive proposals against known phi-agreement phenomena. First are taken up prototypical cases, such as local argument-predicate phi-agreement, and their natural extensions, such as phi-agreement under A'-movement. These cases are interpretable under independently motivated analyses of phi-features and syntactic structures. Long-distance phi-agreement is recalcitrant on its standard analysis, and an alternative is developed that attributes a key role to apparent expletives. Next are taken up more challenging phi-agreements: nonstandard target-controller pairs like objects agreeing with subjects; partial agreement in coordinate structures; and agreement with quantitative/qualitative controllers like a minority of jurors, the last with a case study of interpretive approaches through silent plurality/group denotation shifters. Last are discussed grammatical phi-features, which seem uninterpretable even on controllers. Available approaches are outlined and extended to agreement. The crosslinguistic variety of grammatical phi-agreement is emphasised, including heterogenous agreement patterns and constructional phi-features with missing controllers. The study suggests that independently motivated interpretations of phi-features and structures go far in interpreting phi-agreement but face a significant residue that requires unmotivated stipulations. Moreover, motivations of interpretability fail to replace syntactic conditions on agreement like person-hierarchy interactions. In the conclusion is surveyed evidence for interpreting phi-agreement, which is at present rather weak. View Show abstract Predicate order and coherence in copredication Article Full-text available Jul 2021 Elliot Murphy This article proposes that predicate order and coherence relations are the two major determining factors in copredication licensing, resolving a long-standing puzzle over the criteria for constructing acceptable copredications. The effects of predicate ordering are claimed to be anchored around semantic complexity, such that copredications with semantically Simple–Complex predicate orderings are more acceptable than the reverse. This motivates a parsing bias, termed Incremental Semantic Complexity. Particular ways of implementing this parsing bias are discussed. The effect of predicate coherence is claimed to be anchored around a sense of causality and featural commonality. Lastly, a hierarchy of possible copredications is outlined (the Copredication Hierarchy), helping to delimit the modelling of copredications to a greater extent than has previously been possible. View Show abstract The semantics and pragmatics of proper names in adverbial degree constructions in English: A corpus-driven contribution Thesis Full-text available Sep 2020 Gabriel Frazer-Mckee This thesis is an empirical and theoretical contribution to the study of Adverb-Nominal Degree Constructions (ANDCs) –adverbial degree constructions featuring nominal forms rather than adjectives (e.g. That is so you; This bar is very San Francisco). Situated broadly within the framework of Cognitive Linguistics, our study –the first large corpus-based investigation into ANDCs— investigates the expressed meaning of 4 proper names (1,500+ usage events) from four ontological categories: PLACE, TIME, PEOPLE, and FILM. While several competing models have already been proposed to handle ANDCs, three of our empirical findings highlight the need for an alternate account. 
Firstly, there are no grounds on which to claim that proper names in ANDCs are necessarily adjectival, as 1) almost all classic diagnostics for adjectivehood actually admit true N(P)s; and 2) proper names in ANDCs exhibit nouny characteristics (e.g. anaphoric binding). Secondly, ANDCs yield interpretations that cannot be accounted for by existing models. In addition to comparison (e.g. Your smile is very Mona Lisa), ANDCs express typicality (e.g. Pizza is very New York), inclination (e.g. I am in a very Harry Potter mood), and quantification (e.g. 2017 has been very Kurt Cobain), amongst others. Lastly, far from lexicalizing, proper names are exploited in ANDCs for their encyclopaedic potential, typically being used to metonymically evoke virtually any knowledge structure (gradable or otherwise) in the nominal sign’s encyclopaedic network (e.g. very Harry Potter  locations / characters / props / weather / music / plot points from the Harry Potter films). We reconcile these observations by proposing that true N(P)s can participate in ANDCs as 1) access points to knowledge networks that 2) become associated with a meaningful, gradable, pragmatic scale R during the process of conceptual combination. It is R that is intensified rather than the N(P) itself. View Show abstract Estudo sobre o uso do artigo definido do português por aprendizes coreanos Article Aug 2012 Hanchul Kim View Real-Time Commitments in Processing Individual/Degree Polysemy: Helping Teachers Develop Research Informed Practice Chapter Jan 2019 Margaret Grant Sonia Michniewicz Jessica Rett Individual/degree polysemy is a phenomenon in which individual-denoting Determiner Phrases of any type can, in certain contexts, denote a degree corresponding to some salient measure of that individual. Like deferred reference, individual/degree polysemy conditions agreement: compare Four pizzas are vegetarian to Four pizzas is more than Sue had asked for. In this paper, we test whether readers commit to a single meaning of potentially polysemous DPs during real-time sentence processing. Immediate commitments have been found for other cases of grammatical ambiguity, for example collective or distributive uses of verbs, whereas readers do not necessarily commit to one sense of a lexically polysemous element (e.g., the concrete or abstract sense of newspaper). We present the results of one study of eye movements during reading and one self-paced reading study. Our results provide evidence that there are immediate commitments to the individual sense and the degree sense, depending on the internal properties of the Determiner Phrase. In particular, there is some evidence that definite DPs like the pizzas have a commitment to an individual interpretation, and stronger evidence that numeral DPs like two pizzas have a commitment to a degree interpretation. We discuss our results in light of the Minimal Semantic Commitment hypothesis proposed by Frazier, Pacht and Rayner. View Show abstract Angels by Another Name: How "Agency Metonymy" Precludes God's Embodiment Chapter Full-text available Oct 2024 David E. S. Stein This essay explains why in seventeen biblical passages that involve Yahweh’s agents (both divine and human), neither divine embodiment nor theophany can be the text’s plain sense. It does so by first identifying a linguistic convention for succinctly expressing endeavors involving two interrelated parties, namely, a principal and an agent in the stand-in arrangement known as agency. 
This convention refers to both parties at once while naming only the principal. The author dubs this device “agency metonymy” and shows how it is encoded by referential anomalies in the text. The essay then demonstrates that agency metonymy is applied throughout the Bible to human interactions—and that applying this convention likewise to those passages involving Yahweh’s agents regularly yields a text that is both coherent and informative. By Occam’s razor, and with consideration of how the human mind processes language, this paper concludes that biblical composers depicted Yahweh and Yahweh’s agents just like human principals and agents, in that their respective identities were merged only functionally—and not ontologically as many scholars have claimed. View Show abstract Metonymy in human interaction Article Dec 2017 José Antonio Jódar Sánchez Human communication is based on mutual interaction between participants. Much of this communication is linguistic in nature. Language is structured by grammar and grammar is inherently metonymic (Langacker 2009). Thus, language and interaction must be metonymic. In this article, I explore the metonymic basis of human interaction in both its linguistic and non-linguistic aspects. First, I make a distinction between linguistic and cultural metonymy. Both have a conceptual basis. The former, extensively studied from the view of cognitive linguistics, has a linguistic source. The latter, found in fields as diverse as art, theater, and film, does not necessarily have a linguistic source. The broader concept of cultural metonymy seems to structure human interaction. Second, I delineate distinguishing factors between the two types of metonymies. Those arethe nature of the source and the (mis)match in the intentionality of producer and perceiver. Third, I make an overview and provide real examples of what aspects of human interaction are metonymic. Its elements, including the content of the message, the identity, proxemics, and kinesics of the participants, and the context of the interaction, can be metonymic. Its processes, namely those of language production and reception, are as well inherently metonymic. Overall,I show that metonymy, understood as relatedness or association, pervades human interaction and plays an important role in its success. View Show abstract Canonical gender Article Full-text available Jun 2015 Greville Corbett Sebastian Fedden Nominal classification remains a fascinating topic but in order to make further progress we need greater clarity of definition and analysis. Taking a Canonical Typology approach, we use canonical gender as an ideal against which we can measure the actual gender systems we find in the languages of the world. Building on previous work on canonical morphosyntactic features, particularly on how they intersect with canonical parts of speech, we establish the distinctiveness of gender, reflected in the Canonical Gender Principle: In a canonical gender system, each noun has a single gender value . We develop three criteria associated with this principle, which together ensure that canonically a noun has exactly one gender value; we give examples of non-canonicity for each criterion, thus gradually building the typology. 
This is the essential groundwork for a comprehensive typology of nominal classification: the Canonical Typological approach allows us to tease apart clusterings of properties and to characterize individual properties with respect to a canonical ideal, rather than requiring us to treat the entire system as belonging to a single type. This approach is designed to facilitate comparisons of different noun classification systems across languages. View Show abstract On the formation of Mandarin Vde O focus clefts Article Dec 2013 Hai-Ping Long In this paper we argue that Mandarin Vde O focus clefts (e.g., Ta shi zuo huoche qu de Beijing It was by train that he went to Beijing and Shi ta zuo huoche qu de Beijing It was he who went to Beijing by train) originate from bi-clausal copulative constructions in Early Modern Chinese with the interaction between particular word order (SVO order, but the relative clause before the head noun) and the adjacency effect commonly observed in the focus clefts of SVO languages. The adjacency effect is locally constrained by the presupposition effect of the particular relative clause to produce a special head-noun focus cleft in Mandarin ( Ta shi qu de Beijing It was Beijing that he went to). The past time meaning, the negation restriction, and the TAM (tense, aspect, and modality) restrictions that Mandarin Vde O focus clefts exhibit all come from the syntactic requirement that O in a Mandarin Vde O focus cleft should be specific in reference. View Show abstract Simpler Syntax Article Jun 2005 Peter W Culicover This book offers a perspective on the structure of human language. The fundamental issue it addresses is the proper balance between syntax and semantics, between structure and derivation, and between rule systems and lexicon. It argues that the balance struck by mainstream generative grammar is wrong. It puts forward a new basis for syntactic theory, drawing on a wide range of frameworks, and charts new directions for research. In the past four decades, theories of syntactic structure have become more abstract and syntactic derivations have become more complex. The book traces this development through the history of contemporary syntactic theory, showing how much it has been driven by theory-internal rather than empirical considerations. It develops an alternative that is responsive to linguistic, cognitive, computational, and biological concerns. At the core of this alternative is the Simpler Syntax Hypothesis: the most explanatory syntactic theory is one that imputes the minimum structure necessary to mediate between phonology and meaning. A consequence of this hypothesis is a richer mapping between syntax and semantics than is generally assumed. Through analyses of grammatical phenomena, some old and some new, the book demonstrates the empirical and conceptual superiority of the Simpler Syntax approach. © Peter W. Culicover and Ray Jackendoff 2005. All rights reserved. View Show abstract The polysemy of measurement Article May 2014 LINGUA Jessica Rett The first goal of this paper is to argue that a number of independently treated phenomena – the ‘measure’ interpretation of pseudopartitives (Landman, 2004), amount relatives (0130 and 0125), the how many ambiguity – are different instantiations of the same phenomenon, the general ability for DPs to denote an individual or a degree corresponding to the measure of that individual. I refer to this as ‘individual/degree polysemy’. 
I show that a particular semantic restriction on the degree interpretations of DPs indicates that the degree interpretation is derived from the individual interpretation (not vice-versa). And I argue that this pervasive polysemy is a natural consequence of degree semantic theories that postulate a null measure operator to measure, when appropriate, individuals, events or degrees. The second goal of this paper is to tie the behavior of this null measurement operator to the similar behavior of quantity adjectives like many and much, giving further support to the claim that quantity adjectives measure sets of degrees ( 0270 and 0275). View Show abstract Meaning shift and the purity of ‘I’ Article May 2013 Edison Barrios In this paper I defend the “Standard View” of the semantics of ‘I’—according to which ‘I’ is a pure, automatic indexical—from a challenge posed by “deferred reference” cases, in which occurrences of ‘I’ are (allegedly) not speaker-referential, and thus non-automatic. In reply, I offer an alternative account of the cases in question, which I call the “Description Analysis” (DA). According to DA, seemingly deferred-referential occurrences of the first person pronoun are interpreted as constituents of a definite description, whose operator scopes over an open sentence Rxy—where R is a contextually selected relation ranging over pairs of people and objects. The role of intentions is thus limited to the determination of R, which is posterior to the fixation of the reference of ‘I’. In support of the DA I present evidence that, in the cases in question, the (Determiner) phrase containing ‘I’ behaves in relevant ways like a description. I show that the DA can account for the problematic examples, while preserving the simplicity of the standard semantics of ‘I’. Finally, I examine a rival account of the data, offered by Nunberg (Linguist Philos 16:1–43, 1993), and argue for the superiority of the DA. View Show abstract Phi in Syntax and Phi Interpretation Article Oct 2010 Milan Rezac Chapter 6 explores the syntax-interpretation interface through phi-mismatches: arguments like French on 'we', with one set of phi-features, 1PL, for interpretation, another, 3SG, uninterpretable, for phenomena such as concord. The uninterpretable phi-features are shown to play a role in syntax, not realizational morphology alone. Therefore, the syntactic phi-specifications of some arguments and their dependencies are autonomous of interpretation, along with expletives, phi-agreement, Case and A-movement. The person of the person interactions in Chapter 4 is among them. The diachronic sources, syntactic properties, and eventual elimination of these uninterpretable phi-features are discussed. View Show abstract Reference and Accessibility from a Givenness Hierarchy Perspective Article Nov 2010 Jeanette K. Gundel Most work on reference and discourse structure appeals, in some sense, to the notion of accessibility. While the term "accessibility" itself is rarely mentioned in research within Gundel, Hedberg and Zacharski's Givenness Hierarchy (GH) framework, the GH has often been interpreted by others as an accessibility hierarchy. This paper aims to clarify the major claims and predictions of the GH theory, showing how it is fundamentally diff erent from other referential hierarchies in a number of ways, most importantly because cognitive statuses on the hierarchy are assumed to encode manner of accessibility, not degree of accessibility. 
The GH thus differs from the other referential hierarchies, not only in the kinds of facts it aims to predict and explain, but in the specific empirical predictions that can plausibly be derived from it regarding degree of accessibility, as measured by ease of processing. View Show abstract Information Structure Chapter Jan 2008 Betty J. Birner Gregory Ward This chapter contains section titled: View Show abstract Biscuit Conditionals: Quantification Over Potential Literal Acts Article Apr 2006 Muffy E. A. Siegel In biscuit conditionals (BCs) such as If you’re hungry, there’s pizza in the fridge, the if clause appears to apply to the illocutionary act performed in uttering the main clause, rather than to its propositional content. Accordingly, previous analyses of BCs have focused on illocutionary acts, and, this, I argue, leads them to yield incorrect paraphrases. I propose, instead, that BCs involve existential quantification over potential literal acts such as assertions, questions, commands, and exclamations, the semantic objects associated with declarative, interrogative, imperative, and exclamative sentences, respectively. Such an existential interpretation of BCs requires only that we add potential literal acts to our inventory of individuals, and it produces reasonable paraphrases in which if has its normal meaning: If you’re hungry,[there’s a (relevant/salient) assertion that] there’s pizza in the fridge. These potential literal act variables are introduced into semantic interpretations and then undergo Existential Closure. Hence, we would expect to see similar interpretations in contexts other than BCs, that is, with other if constructions, with connectives other than if, with potential literal acts other than assertion, and in root sentences. This prediction is borne out, along with the parallel prediction that we cannot quantify over purely illocutionary acts like offers, but only over potential literal acts, those conventionally associated with a particular morphosyntactic shape. View Show abstract A Pragmatic Analysis of So-Called Anaphoric Islands Article Full-text available Sep 1991 LANGUAGE Gregory Ward Richard Sproat It is commonly assumed that words are grammatically prohibited from containing antecedents for anaphoric elements, and thus constitute 'anaphoric islands' (Postal 1969). In this paper, we argue that such anaphora—termed OUTBOUND ANAPHORA—is in fact fully grammatical and governed by independently motivated pragmatic principles. The felicity of outbound anaphora is shown to be a function of the accessibility of the discourse entity which is evoked by the word-internal element and to which the anaphor is used to refer. The morphosyntactic status of the antecedent is but one factor affecting the accessibility of that entity. A series of psycholinguistic experiments support the analysis. View Show abstract Taking: A Study in Lexical Network Theory Conference Paper Full-text available Sep 1987 Peter Norvig George Lakoff Proceedings of the Thirteenth Annual Meeting of the Berkeley Linguistics Society (1987), pp. 195-206 View Show abstract Condition R Article Full-text available Jun 2000 Jeffrey Lidz this paper I show that this conclusion is not warranted by illustrating differences in the ways that certain anaphors depend on their antecedents for reference. Although it is true that all anaphors are referentially dependent, some anaphors require complete identity with their antecedents whereas others do not. 
Anaphors of the first class, which I call Pure-reflexives, identify the same entity in the world as their antecedents do. Anaphors from the second class, which I call Near-reflexives, do not require complete identity with their antecedents; the referent of a Near-reflexive can be loosely related to the referent of its antecedent by certain kinds of similarity to be made more precise below. This distinction between Pure- and Near-reflexives has consequences for the theory of reflexive predicates and in conjunction with this theory enables us to explain the existence of so-called "antilocal" anaphors, which appear to resist binding by a coargument. I will show that the anaphors that Reinhart and Reuland claim to have the ability to reflexivize a predicate do not, in fact, ever occur as arguments of semantically reflexive predicates. The anaphors which resist binding by a coargument can occur on reflexive predicates, but only if that reflexivity is lexically expressed. 2. Predicate-centered Binding Theory The theory of reflexivity in generative grammar has traditionally been a theory of nominal types. NPs are identified as anaphors, pronominals or R-expressions on the basis of the distributional properties of the elements that are coreferential with these NPs. Chomsky (1986) gives the following principles: (1) a. An anaphor is bound in a local domain b. A pronominal is free in a local domain c. An R-expression is free (in the domain of the head of its chain) The b... View Show abstract Contextually-Dependent Lexical Semantics Article Full-text available Sep 1997 Karin Maria Verspoor This thesis is an investigation of phenomena at the interface between syntax, semantics, and pragmatics, with the aim of arguing for a view of semantic interpretation as lexically driven yet contextually dependent. I examine regular, generative processes which operate over the lexicon to induce verbal sense shifts, and discuss the interaction of these processes with the linguistic or discourse context. I concentrate on phenomena where only an interaction between all three linguistic knowledge sources can explain the constraints on verb use: conventionalised lexical semantic knowledge constrains productive syntactic processes, while pragmatic reasoning is both constrained by and constrains the potential interpretations given to certain verbs. The phenomena which are closely examined are the behaviour of PP sentential modifiers (specifically dative and directional PPs) with respect to the lexical semantic representation of the verb phrases they modify, resultative constructions, and logic... View Show abstract Semi-Productive Polysemy and Sense Extension Article Full-text available Jan 2000 Ann Copestake Ted Briscoe In this paper we discuss various aspects of systematic or conventional polysemy and their formal treatment within an implemented constraint based approach to linguistic representation. We distinguish between two classes of systematic polysemy: constructional polysemy, where a single sense assigned to a lexical entry is contextually specialised, and sense extension, which predictably relates two or more senses. Formally the first case is treated as instantiation of an underspecified lexical entry and the second by use of lexical rules. The problems of distinguishing between these two classes are discussed in detail.
We illustrate how lexical rules can be used both to relate fully conventionalised senses and also applied productively to recognise novel usages and how this process can be controlled to account for semi-productivity by utilising probabilities. 1 Introduction Discussion of polysemy has been central to much recent work on lexical semantics. Most of the arguments for (or again... View Show abstract Attention, Intentions, And The Structure Of Discourse Article Full-text available Jun 2002 COMPUT LINGUIST Barbara J. Grosz Candace Sidner In this paper we explore a new theory of discourse structure that stresses the role of purpose and processing in discourse. In this theory, discourse structure is composed of three separate but interrelated components: the structure of the sequence of utterances (called the linguistic structure), a structure of purposes (called the intentional structure), and the state of focus of attention (called the attentional state). The linguistic structure consists of segments of the discourse into which the utterances naturally aggregate. The intentional structure captures the discourse-relevant purposes, expressed in each of the linguistic segments as well as relationships among them. The attentional state is an abstraction of the focus of attention of the participants as the discourse unfolds. The attentional state, being dynamic, records the objects, properties, and relations that are salient at each point of the discourse. The distinction among these components is essential to provide an adequate explanation of such discourse phenomena as cue phrases, referring expressions, and interruptions. The theory of attention, intention, and aggregation of utterances is illustrated in the paper with a number of example discourses. Various properties of discourse are described, and explanations for the behavior of cue phrases, referring expressions, and interruptions are explored. This theory provides a framework for describing the processing of utterances in a discourse. Discourse processing requires recognizing how the utterances of the discourse aggregate into segments, recognizing the intentions expressed in the discourse and the relationships among intentions, and tracking the discourse through the operation of the mechanisms associated with attentional state. This processing descrip... View Show abstract Two Kinds Of Metonymy Article Full-text available May 2002 David Stallard We propose a distinction between two kinds of metonymy: "referential" metonymy, in which the referent of an NP is shifted, and "predicative" metonymy, in which the referent of the NP is unchanged and the argument place of the predicate is shifted instead. Examples are, respectively, "The hamburger is waiting for his check" and "Which airlines fly from Boston to Denver". We also show that complications arise for both types of metonymy when multiple coercing predicates are considered. Finally, we present implemented algorithms handling these complexities that generate both types of metonymic reading, as well as criteria for choosing one type of metonymic reading over another. View Show abstract Inversion and Equation in Copular Sentences Article Full-text available Aug 1998 Caroline Heycock Anthony Kroch this paper that copular sentences can be either predicative or equative, and that the latter cannot be reduced to an inverted version of the former.
We have, however, claimed that this distinction should not be attributed to any lexical ambiguity in the copula itself, but rather to the existence of two types of small clause, both of which can occur as complements to the copula (as well as to some other heads). Inversion, in the sense of movment of the second element in a small clause past the subject of that small clause, does however occur. In the case of predicative small clauses, the only way that inversion can arise is through A-bar movement of the predicate to a position higher than Spec(IP)---presumably Spec(CP). This kind of predicate fronting we have seen in both English and Italian. Inversion out of equative small clauses also occurs, but this is only possible if the subject of the small clause is not forced to move (as is true in Italian) and if the language allows the operation of scrambling. References View Show abstract A System of Logic, Ratiocinative and Inductive: Being a Connected View of the Principles of Evidence, and the Methods of Scientific Investigation Book Nov 2012 John Stuart Mill This two-volume work, first published in 1843, was John Stuart Mill's first major book. It reinvented the modern study of logic and laid the foundations for his later work in the areas of political economy, women's rights and representative government. In clear, systematic prose, Mill (1806–73) disentangles syllogistic logic from its origins in Aristotle and scholasticism and grounds it instead in processes of inductive reasoning. An important attempt at integrating empiricism within a more general theory of human knowledge, the work constitutes essential reading for anyone seeking a full understanding of Mill's thought. Volume 1 contains Mill's introduction, which elaborates upon his definition of logic as 'not the science of Belief, but the science of Proof, or Evidence'. It also features discussions of the central components of logical reasoning - propositions and syllogisms - in relation to Mill's theories of inductive reasoning and experimental method. View Show abstract Uniqueness, Familiarity, and the Definite Article in English Article Jun 2014 Betty J. Birner Gregory Ward Proceedings of the Twentieth Annual Meeting of the Berkeley Linguistics Society: General Session Dedicated to the Contributions of Charles J. Fillmore (1994) View Show abstract Focus in Generative Grammar Article Sep 1988 Geoffrey J. Huck Michael S. Rochemont View Predictable meaning shift: some linguistic properties of lexical implication rules''in J Article Jan 1992 Nicholas Ostler B. T. S. Atkins Abstract Drawing on a growing database of systematic relationships between word-senses, the authors argue that a significant class of these represent Lexical Implication Rules, a set of formal rules within the domain of lexical semantics; these they distinguish from,other types of semantic,relation more,closely dependent,on metaphor,and world-knowledge. Some formal properties of Lexical Implication Rules are proposed, as evidence of their linguistic, rather than real-world, nature. 1. Introduction: Lexical Implication Rules It is a truism that people, in interpreting and producing language, make use of View Show abstract Information Status and Noncanonical Word Order in English Article Mar 2000 Betty J. Birner Gregory Ward View On (in)definite articles: Implicatures and (un)grammaticality prediction Article Sep 1991 J LINGUIST John A. 
Hawkins Since Paul Grice published ‘Logic and conversation’ in 1975, there have been a number of attempts to develop his programmatic remarks on conversational and conventional implicatures further (see Gazdar, 1979; Atlas & Levinson, 1981; Horn, 1985; Sperber & Wilson, 1986; and especially Levinson, 1983, and the references cited therein). The result has been a growing understanding of the relationship between semantics and pragmatics, and more generally of human reasoning in everyday language use. Many aspects of natural language understanding that were previously thought to be part of the conventional meaning of a given expression can now be shown to be the result of conversational inference. And with cancellability as the diagnostic test, a number of traditional problems in the study of meaning are yielding to more satisfactory analyses. Even more ambitiously, implicatures are penetrating into core areas of the syntax, as pragmatic theories of increasing subtlety are proposed for ‘grammatical’ phenomena such as Chomsky's (1981, 1982) binding principles (see Reinhart, 1983, and Levinson, 1987a, b, 1991).(Received April 23 1990)(Revised January 02 1991) View Show abstract Uniqueness Article Jun 1990 Nirit Kadmon View The Non–Uniqueness of Semantic Solutions: Polysemy Article Jan 1979 Geoffrey Nunberg View Uniqueness in Definite Noun Phrases Article Jun 2003 Craige Roberts The abstract for this document is available on CSA Illumina.To view the Abstract, click the Abstract button above the document title. View Show abstract Cognitive Status and the Form of Referring Expressions in Discourse Article Jun 1993 Jeanette K. Gundel Nancy Hedberg Ron Zacharski In this paper we propose six implicationally related cognitive statuses relevant for explicating the use of referring expressions in natural language discourse. These statuses are the conventional meanings signalled by determiners and pronouns, and interaction of the statuses with Grice's Maxim of Quantity accounts for the actual distribution and interpretation of forms when necessary conditions for the use of more than one form are met. This proposal is supported by an empirical study of the distribution of referring expressions in naturally occurring discourse in five languages-English, Japanese, Mandarin Chinese, Russian, and Spanish. View Show abstract Metaphors We Live by Article Mar 1983 George Lakoff Mark Johnson The now-classic Metaphors We Live By changed our understanding of metaphor and its role in language and the mind. Metaphor, the authors explain, is a fundamental mechanism of mind, one that allows us to use what we know about our physical and social experience to provide understanding of countless other subjects. Because such metaphors structure our most basic understandings of our experience, they are "metaphors we live by"—metaphors that can shape our perceptions and actions without our ever noticing them. In this updated edition of Lakoff and Johnson's influential book, the authors supply an afterword surveying how their theory of metaphor has developed within the cognitive sciences to become central to the contemporary understanding of how we think and how we express our thoughts in language. 
View Show abstract Preferences regarding treatments for period problems: Relationship to menstrual and demographic factors Article Jul 1994 Pamela Warner Beliefs about periods and hysterectomy and preferences regarding treatment for period problems were assessed in 362 women--patients referred for menorrhagia, premenstrual syndrome, or dysmenorrhea (n = 99, 102 and 56 respectively), and a control sample (n = 105). Overall, women were predominantly in favor of a treatment which normalized periods (89%) and which coincidentally provided reversible contraceptive effect (74%), while they marginally preferred a one-off operation to tablets. Preferences with regard to contraceptive effect of treatment, effect on periods and hypothetical treatment option were most strongly related to reproductive status (p < 0.00002), in that nulliparous or unsterilized women were least likely to rate as acceptable a treatment that affected their periods or fertility. The women's feelings about their periods and their evaluation of the utilitarian consequences of hysterectomy were most strongly related to their report of menstrual problem(s), with the potential benefits of an end to periods being most often affirmed by women reporting 'severe' menorrhagia, dysmenorrhea or multiple period problems. In contrast, women's evaluation of the reproductive consequences of hysterectomy were most strongly related to reproductive status, with nulliparous women and unsterilized parous women finding them least acceptable. Feelings about periods did not predict intentions with respect to periods, or treatment preferences, and in this regard the usefulness of menstrual attitudes is questioned. View Show abstract On the Semantics and Pragmatics of `Identifier So' Article Apr 1999 Andrew Kehler Gregory Ward In this paper, we present an analysis of identifier so based on the informational structure of the discourse in which it is used. Drawing upon a large corpus of naturally occurring data, we show that anaphoric expressions containing so impose a set of constraints on the information status of their View Show abstract
HAWKINS, JOHN A. 1978. Definiteness and indefiniteness. Atlantic Highlands, NJ: Humanities Press.
GUNDEL, JEANETTE, and THORSTEIN FRETHEIM. 2004. Topic and focus. Handbook of pragmatics, ed. by Laurence R. Horn and Gregory Ward, 175–96. Oxford: Blackwell.
WARD, GREGORY. 1988. The semantics and pragmatics of preposing. New York: Garland.
WARD, GREGORY; RICHARD SPROAT; and GAIL MCKOON. 1991. A pragmatic analysis of so-called anaphoric islands. Language 67.439–74.
WARD, GREGORY, and SAM TILSEN. 2002. Deferred equatives. Paper presented at the annual meeting of the Linguistic Society of America, San Francisco, January 2002.
KEHLER, ANDREW, and GREGORY WARD. 2004. Constraints on ellipsis and event reference. Handbook of pragmatics, ed. by Laurence R. Horn and Gregory Ward, 383–403. Oxford: Blackwell.
BURZIO, LUIGI. 1992. On the morphology of reflexives and impersonals. Theoretical analyses in Romance linguistics, ed. by Christiane Lauefer and Terrell Morgan, 399–414. Amsterdam: John Benjamins.
KEHLER, ANDREW, and GREGORY WARD. 1999. On the semantics and pragmatics of 'identifier so'. The semantics/pragmatics interface from different points of view (Current research in the semantics/pragmatics interface 1), ed. by Ken Turner, 233–56. Amsterdam: Elsevier.
BIRNER, BETTY J.; JEFFREY P. KAPLAN; and GREGORY WARD. 2001. Open propositions and epistemic would. Paper presented at the annual meeting of the Linguistic Society of America, Washington, DC, January 2001.
JOSEPH, BRIAN D. 1979. On the agreement of reflexive forms in English. Linguistics 17.519–23.
ในคำกล่าวแสดงความเห็นเชิงบวก (4) การใช้คำกล่าวให้เหตุผล โดยใช้ข้อมูลเกี่ยวกับครอบครัว (5) การนิยมเพิ่มความหนักแน่นในคำกล่าวขอบคุณ และ (6) การใช้คำกล่าวตักเตือนลูกจ้าง นอกจากนี้ยังพบว่า สถานภาพทางสังคมระหว่างผู้ปฏิเสธกับผู้ขอร้อง หรือผู้ให้ข้อเสนอนั้น มีบทบาทต่อการเลือกใช้กลวิธีการปฏิเสธของนักศึกษาไทย ทั้งในปริบทภาษาแม่และปริบทภาษาอังกฤษ เช่น เมื่อผู้ปฏิเสธมีสถานภาพทางสังคมต่ำกว่าคู่สนทนา นักศึกษาไทยมักเลือกใช้คำกล่าวให้เหตุผล คำกล่าวแสดงความกังวล คำกล่าวขอโทษ และคำกล่าวแสดงความเห็นเชิงบวกมากขึ้น ในขณะที่สถานภาพทางสังคมนี้จะมีบทบาทไม่มากนัก ต่อการเลือกใช้กลวิธีการปฏิเสธของนักศึกษาอเมริกัน To investigate the pragmatic transfer of the refusal strategies by Thai learners of English. Based on a discourse completion test, the data were collected from 50 American students and 50 Thai students. It is found that the Thais' refusal strategies differ significantly from those employed by the Americans, Thai students resorted to indirectness and hedging when they refused requests and offers. American students, on the other hand, preferred directness. The Thais also used the expression "That's OK./alright." or "I can do it myself." in greater quantity than the Americans. In addition, they often used intensifiers such as 'really' and 'greatly' in their apologies and thanks, whereas the Americans hardly did so. In terms of pragmatic transfer, it is found that the Thai learners of English performed the speech act of refusing in English in a similar manner as when they performed the same speech act in their native tongue. They employed the following strategies whic were not found in the American corpus : (1) using intensifiers in apologies, (2) hedging, (3) using the pattern "yes, but..." in expressing positive remarks, (4) giving reasons based on family and personal matters, (5) using intensifiers in thanks and (6) admonishing employees. In addition, it is found that social status played an important role in refusals in Thai and in English used by the Thai students. When the refusers were lower in social status, they tended to give reasons and employed such strategies as hedging, apologizing and expressing positive remarks. Social status seemed to influence the strategic choices made by the American students in a lesser extent than those made by the Thai students. Read more Article La « figure étrange » de la métaphore dans les scholies aux Olympiques July 2015 · Dialogues d'Histoire Ancienne Sylvie David « The “strange figure” of the metaphor in the scholia to the Olympian Odes » The aim of the paper is to study the way scholiasts understand the figure of the metaphor in the corpus of scholia to Pindar’s Olympian Odes. A first part is dedicated to the examination of the terms which indicate the metaphor, namely μεταφορά, μεταφορικῶς and μεταφέρειν, and demonstrates that the commentators give full ... [Show full abstract] sense to this terminology, the metaphor being understood as a process of « transfer ». Then is approached the work of explaining the metaphor, by which scholiasts bring to light the « connections » between what is comparing and what is being compared, and demonstrate the poet’s sensitivity to the multiplicity of meaning. The last part is about the educational aim which leads scholiasts to propose statements which try to « translate » the metaphor by equivalences, the frequent overabundance of the interpretative discourse reflecting the irreducible polysemy of the poetic statement. 
Counterexamples in Algebra

August 3, 2015

We use k, F, K to denote fields, and R to denote rings. Denote by Z the ring of rational integers, Q the field of rational numbers, R the field of real numbers, and C the field of complex numbers. Denote by A the ring of algebraic integers.

1 Groups

A Noncyclic Group of Order 4. Z/2Z × Z/2Z.

A Presentation Gives a Trivial Group. ⟨x, y, z | xyx^{-1}y^{-1} = y, yzy^{-1}z^{-1} = z, zxz^{-1}x^{-1} = x⟩.

Two Nonisomorphic Groups with the Same Character Table. D_4 and Q_8.

A Nonabelian p-Group. $G_p = \left\{ \begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix} \;\middle|\; a, b \in \mathbb{Z}/(p^2),\ a \equiv 1 \bmod p \right\}$. This is a nonabelian group of order p^3. Another example of a nonabelian group of order p^3 is $H_p = \left\{ \begin{pmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \end{pmatrix} \;\middle|\; a, b, c \in \mathbb{Z}/(p) \right\}$. In fact, these are the only nonabelian groups of order p^3. On the other hand, every group of order p^2 is abelian.

Solvable Groups. Every finite group of order < 60, every abelian group, any p-group.

Finite Simple Groups. Cyclic groups Z/pZ, alternating groups A_n with n ≥ 5, groups of Lie type, sporadic groups.

Group Homomorphisms of the Additive Group of R. There are the linear functions f(x) = ax. There are also nonlinear ones: consider a projection onto one basis element of the vector space R over Q.

A Paradoxical Decomposition of a Group. Let F_2 be the free group on two generators a, b. Let S(a), S(a^{-1}), S(b), and S(b^{-1}) be the sets of reduced words starting with a, a^{-1}, b, and b^{-1}, respectively. Then F_2 = {e} ∪ S(a) ∪ S(a^{-1}) ∪ S(b) ∪ S(b^{-1}), and also F_2 = aS(a^{-1}) ∪ S(a) and F_2 = bS(b^{-1}) ∪ S(b). These decompositions are used in the proof of the Banach–Tarski theorem.

2 Rings

A Commutative Ring with Identity that is Not an Integral Domain. Z × Z, Z/6Z.

A Commutative Ring without Identity. 2Z, {0, 2} in Z/4Z.

A Noncommutative Ring without Identity. M_2(2Z).

A Noncommutative Division Ring with Identity. The real quaternions H.

A Ring with Cyclic Multiplicative Group. R = Z/nZ with n = 2, 4, p^k, 2p^k (p an odd prime). Any finite field. Also, Z has unit group {±1}, which is isomorphic to Z/2Z and hence cyclic.

A Subring that is Not an Ideal. Z ⊂ Q.

A Ring whose Order is Larger than its Characteristic. Any GF(p^n) with n ≥ 2.

A Prime Ideal that is Not a Maximal Ideal. Let R = Z[x]. The ideal P = (x) is prime since R/P ≅ Z is an integral domain; since Z is not a field, P is not maximal. (In a PID, every nonzero prime ideal is maximal.) In fact, if R is an integral domain that is not a field, for example Z, then (0) is a prime ideal that is not maximal.

A Homomorphic Image Need Not be an Ideal. Z ⊂ Q.

An Additive Group Homomorphism that is Not a Ring Homomorphism. The derivative map D : R[x] → R[x]. We have D(f + g) = D(f) + D(g) but D(fg) = gD(f) + fD(g).

A Multiplicative Homomorphism that is Not a Ring Homomorphism. Let f : R → R be f(x) = x^2.

The Unique Ring Homomorphism from R to R. The identity.

A Commutative Ring with Infinitely Many Units. Z[√2].

A Noncommutative Ring with Infinitely Many Units. M_2(Z).

A Non-Dedekind Domain. The ring Z[√−3] is a proper subring of A ∩ Q(√−3) = Z[(1 + √−3)/2]. It is not Dedekind since it is not integrally closed.

A Dedekind Domain which is Not a UFD. Z[√−5], the ring of integers of Q(√−5). We have the non-unique factorization 6 = 2 · 3 = (1 + √−5)(1 − √−5).

A UFD which is Not Dedekind. k[x, y]. The Krull dimension of this ring is 2.

A UFD which is Not a PID. Z[x]. Since Z is a UFD, Z[x] is a UFD. However, it is not a PID because (x, 2) is not principal.

A PID which is Not a Euclidean Domain. The ring of integers of Q(√−19), namely Z[(1 + √−19)/2].
A Ring R such that R ≅ R × R. Let $R = \prod_{i=1}^{\infty} \mathbb{Z}$. Then R ≅ R × R via the isomorphism f : R → R × R defined by f(x_1, x_2, …) = ((x_1, x_3, …), (x_2, x_4, …)).

A Commutative Ring with 4 Elements that is Not Isomorphic to Z/4Z or Z/2Z × Z/2Z. The matrices $\begin{pmatrix} x & 0 \\ y & x \end{pmatrix}$ over Z/2Z = GF(2). This ring is isomorphic to GF(2)[x]/(x^2) via $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \mapsto 1 + (x^2)$, $\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \mapsto x + (x^2)$. It is not isomorphic to Z/4Z since its characteristic is not 4, and not isomorphic to Z/2Z × Z/2Z since the equation x^2 = 0 has two solutions in this ring. Another example is the 4-element subring {0, 4, 8, 12} of Z/16Z, in which the product of any two elements is zero.

A Commutative Ring with Identity for which the Converse of CRT Holds. The converse of CRT states: if I, J are ideals with I + J ≠ R, then R/(I ∩ J) ≇ R/I × R/J. It holds for Z and for F[x] with F a field; further, for any Dedekind domain.

A Commutative Ring with Identity for which the Converse of CRT does Not Hold. $R = \prod_{i=1}^{\infty} \mathbb{Z}$ with I = J = (0). Then I + J ≠ R, yet R/(I ∩ J) ≅ R/I × R/J.

A Commutative Ring with Identity that is Noetherian but Not Artinian. Z, k[x].

A Commutative Ring with Identity that is Neither Noetherian nor Artinian. A, the ring of algebraic integers; k[x_1, x_2, …], the ring of polynomials in infinitely many variables.

A Local Noetherian Ring. k[[x]], the formal power series ring over a field k. It has a unique maximal ideal (x) and is Noetherian. Furthermore, it is a DVR.

Integral Domains A, B Containing a Field F such that A ⊗_F B is Not an Integral Domain. Let A = B = GF(p)(X) and F = GF(p)(X^p). Then A and B are integral domains containing F, but X ⊗ 1 − 1 ⊗ X ∈ A ⊗_F B is a nonzero element satisfying (X ⊗ 1 − 1 ⊗ X)^p = X^p ⊗ 1 − 1 ⊗ X^p = 0. Hence A ⊗_F B is not an integral domain.

A Group Ring which is Not Semisimple. k[x]/(x^p − 1) with k = GF(p). This is the group ring kG of a cyclic group G of order p. It is not semisimple by Maschke's theorem. It is a local ring with maximal ideal I := ker(ε : kG → k) = Rad(kG), where ε is the augmentation map.

3 Fields

An Algebraically Closed Field of Finite Characteristic. $\overline{GF(p)}$, the algebraic closure of GF(p).

An Infinite Field of Finite Characteristic. $\overline{GF(p)}$; GF(p)(x), the field of rational functions over GF(p).

A Real Transcendental Extension. Q ⊂ Q(π).

A Real Field which is Not Totally Real. Q(2^{1/3}).

A Totally Real Field. Q(√2).

A Normal Extension of a Normal Extension may Not be Normal. Q ⊂ Q(√2) ⊂ Q(2^{1/4}).

An Algebraic Extension of Infinite Degree. Q(√2, √3, √5, …) over Q; $\overline{\mathbb{Q}}$ over Q; $\overline{GF(p)}$ over GF(p).

A Nontrivial Finite Extension that is Isomorphic to the Ground Field. Let F = Q(x) and k = Q(√x). Then k is a degree-2 extension of F, yet the two fields are isomorphic.

A Finite Extension which Contains Infinitely Many Subextensions. Let p be a prime, F = GF(p)(x, y) and k = GF(p)(x^{1/p}, y^{1/p}). For any f(y) ∈ GF(p)(y), K = F(x^{1/p} f(y) + y^{1/p}) is a nontrivial subextension of k.

An Irreducible Polynomial f ∈ Q[x] whose Reduction f̄ ∈ Z/pZ[x] is Reducible for Every p. Take x^4 + 1 ∈ Q[x]. If p = 2, then x^4 + 1 = (x^2 + 1)^2. If p ≠ 2, then x^4 + 1 | x^8 − 1 | x^{p^2−1} − 1, so x^4 + 1 splits over GF(p^2) and its irreducible factors over GF(p) have degree at most 2.

4 Modules

A Noetherian Module which is Not Artinian. The Z-module Z.

An Artinian Module which is Not Noetherian. The Z-module $M = \bigcup_{i=1}^{\infty} p^{-i}\mathbb{Z}/\mathbb{Z}$ (the Prüfer p-group).

A Free Module with an Infinite Basis. The Q-vector space R.

An Injective Module which is Not Torsion-Free. The Z-module Q/Z.

A Torsion-Free Module which is Not Flat. Let R = k[x, y] and I = (x, y). Then I is a torsion-free R-module. It is not flat because I ⊗ I → I ⊗ R is not injective; in fact, 0 ≠ x ⊗ y − y ⊗ x ∈ ker(I ⊗ I → I ⊗ R).

A Projective Module which is Not Free.
Let R = Z/2Z × Z/2Z, and consider Z/2Z × (0) as a submodule of the R-module R. It is projective since it is a direct summand of a free module, but it is too small to be free.

A Flat Module which is Not Projective. The Z-module Q.

A Flat Module which is Neither Projective Nor Injective. The Z-module Q ⊕ Z. It is flat because it is a direct sum of flat modules; it is not projective because of the summand Q, and not injective because of the summand Z.

A Semisimple Module which is Not Simple. $\mathbb{C}S_3 \cong \mathbb{C} \times \mathbb{C} \times M_2(\mathbb{C})$, the group algebra of the symmetric group S_3 over C.

A Module which is Faithful and Flat, but Not Faithfully Flat. The Z-module Q.
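The last field-theory example above is easy to verify by machine. The following quick SymPy check (not part of the original notes) confirms that x^4 + 1 is irreducible over Q yet factors modulo every tested prime.

```python
# Sanity check: x^4 + 1 is irreducible over Q but reducible mod every prime p.
from sympy import symbols, factor

x = symbols('x')
f = x**4 + 1

print(factor(f))                       # x**4 + 1  (irreducible over Q)
for p in [2, 3, 5, 7, 11, 13]:
    print(p, factor(f, modulus=p))     # factors over GF(p) for every tested p
```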
Efficient detection of multivariate correlations with different correlation measures

The VLDB Journal, Regular Paper, Open access, Published: 11 October 2023, Volume 33, pages 481–505 (2024)

Jens E. d'Hondt, Koen Minartz & Odysseas Papapetrou

Abstract

Correlation analysis is an invaluable tool in many domains, for better understanding the data and extracting salient insights. Most works to date focus on detecting high pairwise correlations. A generalization of this problem with known applications but no known efficient solutions involves the discovery of strong multivariate correlations, i.e., finding vectors (typically in the order of 3–5 vectors) that exhibit a strong dependence when considered altogether. In this work, we propose algorithms for detecting multivariate correlations in static and streaming data. Our algorithms, which rely on novel theoretical results, support four different correlation measures, and allow for additional constraints. Our extensive experimental evaluation examines the properties of our solution and demonstrates that our algorithms outperform the state-of-the-art, typically by an order of magnitude.

1 Introduction

Correlation analysis is one of the key tools in the arsenal of data analysts for understanding the data and extracting insights. For example, in neuroscience, a strong correlation between activity levels in two regions of the brain indicates that these regions are strongly interconnected .
In finance, correlation plays a crucial role in finding portfolios of assets that are on the Pareto-optimal frontier of risk and expected returns , and in genetics, correlations help scientists detect cause factors for potentially hereditary syndromes. In databases, similarity measures such as correlations are occasionally used in theta joins to allow for softer joining conditions than pure object equality . Furthermore, when treated as a generalization of functional dependencies, correlations are also used for optimizing access paths in databases .

Multivariate correlations, also known as high-order correlations, extend the concept of pairwise correlations to relationships among three or more variables. These variables may represent various forms of data, such as time series or other high-dimensional data stored as vectors. Multivariate correlations should not be confused with pairwise correlations of multivariate time series. The former refers to correlations involving three or more distinct variables/vectors, whereas the latter deals with correlations of only two multivariate time series.

In the last few years, multivariate correlations found extensive use in diverse domains. Detection of ternary correlations in fMRI time series improved the understanding of how different brain regions work in concert for executing different tasks [2, 3]. For instance, the activity of the left middle frontal region was found to have a high correlation with the total activity of the right superior frontal and left inferior frontal regions while the brain was processing audiovisual stimuli. This insight suggests that the left middle frontal region has an integrative role of assimilating information from the other two regions, which was not possible to find by looking only at pairwise correlations. In climate science, a ternary correlation led to the characterization of a new weather phenomenon and to improved climate models . In machine learning, multivariate information-theoretic measures have increasingly served as learning objectives or regularizers for training of neural networks aimed at optimizing the correlation among multiple variables. Usage of such regularizers leads to improved robustness, generalizability, and interpretability of the models [4, 7, 8]. It is also stipulated that a more thorough look at multivariate correlations will open doors in the fields of genomics [6, 52] and medicine [28, 32].

Accordingly, several measures and algorithms for discovering strong multivariate correlations have been proposed, such as Tripoles , Multipoles , Canonical Correlation Analysis (CCA) , and Total Correlation (TC) [35, 36, 46, 52]. However, the proposed algorithms do not sufficiently address the fundamental impediment to the discovery of strong multivariate correlations, which is the vast search space—all combinations of vectors that need to be examined. Unfortunately, apriori-like pruning techniques do not apply to the general case of multivariate correlations. For example, consider the three time series from finance presented in Fig. 1. In this example, the pairwise correlation between all pairs of the three time series is comparatively low, whereas the time series created by averaging QAN and RDF is strongly correlated to MCP. Therefore, a correlation value of any pair of vectors does not provide sufficient information as to whether these vectors may participate together in a ternary (or higher-order) correlation.
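The following small synthetic example (random data, not the stock series of Fig. 1) illustrates the effect just described: no pair of vectors is strongly correlated, yet the average of the first two is almost perfectly correlated with the third, so pairwise screening alone cannot prune candidates.

```python
# Synthetic illustration: low pairwise correlations, strong ternary correlation.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=1000)                   # shared signal
n = rng.normal(size=1000)                   # noise component

a = 0.8 * z + n                             # noisy view of z
b = 0.8 * z - n + 0.05 * rng.normal(size=1000)  # noise cancels in (a + b) / 2
c = z

corr = lambda u, v: np.corrcoef(u, v)[0, 1]
print(corr(a, c), corr(b, c), corr(a, b))   # moderate / low pairwise values
print(corr((a + b) / 2, c))                 # close to 1.0
```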
Simultaneously, an exhaustive algorithm that iterates over all possible combinations implies combinatorial complexity, and cannot scale to reasonably large datasets. Indicatively, in a small data set of 100 vectors, detection of all ternary high correlations requires iterating over 1 million candidates, whereas finding quaternary high correlations on 1000 vectors involves 1 trillion combinations. The mere generation and enumeration of these combinations already becomes challenging. Therefore, smart algorithms are needed to drastically reduce the search space and computational complexity.

Fig. 1 Normalized daily closing prices for stocks traded at the Australian Securities Exchange

Existing algorithms follow at least one of the following approaches: (a) they consider constraining definitions of multivariate correlations that enable apriori-like filtering [3, 35, 52], (b) they rely on hand-crafted assumptions of the user query, which may be too constraining for other application scenarios [2, 3, 52], or (c) they offer approximate results, with no guarantees [2, 3]. Even though these algorithms are very useful for their particular use cases, they are not generally applicable.

In this work, we follow a more general direction. First, we also consider correlation measures that are not suitable for apriori-like pruning. Second, in contrast to some of the earlier work, we abide by Ockham's razor: we prioritise discovery of the less complex multivariate correlations—the ones that contain the smallest number of vectors. We opt for this approach since correlations between a few variables are more intuitive and interpretable than their counterparts with many variables. Third, we consider different algorithmic variants: an exact threshold variant that returns all correlations higher than a threshold \(\tau\), and an exact top-\(\kappa\) variant that returns the top-\(\kappa\) highest correlations. We also discuss the case of progressively finding results, and extend the proposed algorithms to a dynamic context, for handling streaming updates. We evaluate our algorithms on 7 datasets and compare them to the state-of-the-art. Our evaluation demonstrates that we outperform the existing methods, frequently by several orders of magnitude. Finally, we show that the progressive version of the algorithm produces around 90% of the answers in 10% of the time.

The remainder of the paper is structured as follows. In the next section, we formalize the problem and discuss the preliminaries and related work. We then propose the algorithmic variants for the case of static data (Sect. 3), and the streaming extension of the algorithm (Sect. 4). Section 5 summarizes the experimental results. We conclude in Sect. 6.

2 Preliminaries

We start with a discussion of the multivariate correlation measures that we will be considering in this work. We then formalize the problem and discuss prior work on similar multivariate correlation measures.

2.1 Correlation measures

Our work focuses on both types of multivariate correlation measures: (a) bivariate correlations over aggregated vectors (two-sided), and (b) specialized multivariate measures (one-sided).

Bivariate correlations over aggregates. Given two sets of vectors X and Y, a bivariate correlation over aggregated vectors is defined as

$$Corr(X,Y) = Corr(\mathrm{Agg}(X), \mathrm{Agg}(Y)) \qquad (1)$$

with \(Corr\) being a bivariate correlation function such as Pearson Correlation, and \(\mathrm{Agg}(X)\) being a linear combination of the vectors in X.
In this work, we consider element-wise averaging combined with Pearson Correlation and Euclidean Similarity, referred to as \(PC\) and \(ES\), respectively. Pearson Correlation is defined as \(\rho(x,y) = \frac{\mathrm{cov}(x,y)}{\sigma_x \sigma_y}\), with \(\sigma_x\) denoting the standard deviation of a vector \(x\); it is a widely used measure of the linear dependence between two variables. Euclidean Similarity is defined as \(ES(x,y) = \frac{1}{1 + d(x,y)}\), with \(d(\cdot,\cdot)\) denoting the Euclidean distance; it is extensively used for k-nearest neighbors queries and range queries [13, 15].

Multipole. The multipole correlation \(MP(X)\) measures the linear dependence of an input set of vectors X. Specifically, let \(\hat{\textbf{x}}_1, \ldots, \hat{\textbf{x}}_n\) denote n z-normalized input (column) vectors, and \(\textbf{X} = [\hat{\textbf{x}}_1, \ldots, \hat{\textbf{x}}_n]\) the matrix formed by concatenating the vectors. Then:

$$MP(X) = 1 - \min_{\textbf{v} \in \mathbb{R}^n,\ \Vert\textbf{v}\Vert_2 = 1} \mathrm{var}(\textbf{X} \cdot \textbf{v}^T) \qquad (2)$$

The value of \(MP(X)\) lies between 0 and 1. The measure takes its maximum value when there exists perfect linear dependence, meaning that there exists a vector \(\textbf{v}\) with norm 1 such that \(\mathrm{var}(\textbf{X} \cdot \textbf{v}^T) = 0\). Notice that multipole is not equivalent to, nor a generalization of, \(PC\) or \(ES\). By definition, \(MP\) assumes optimal weights (vector \(\textbf{v}\) is chosen such that the variance is minimized), whereas for \(PC\) and \(ES\) the aggregation function for the vectors (e.g., averaging) is fixed in the definition of the measure. Furthermore, \(MP(\cdot)\) expresses the degree of linear dependence within a single set of vectors, whereas for bivariate measures two distinct, non-overlapping vector sets are considered.

Total correlation. Total correlation \(TC(X)\) (also known as multi-information or multivariate constraint) is a generalization of the (pairwise) mutual information measure. It measures the redundancy or dependence among a set of n random variables \(X = \{X_1, \dots, X_n\}\) as the KL-divergence from the joint distribution \(p(X_1, \dots, X_n)\) to the product of the marginal distributions \(p(X_1) \cdots p(X_n)\). This can be reduced to a difference of entropies:

$$TC(X) = \sum_{i=1}^n H(X_i) - H(X_1, \dots, X_n) \qquad (3)$$

with \(H(X_i)\) denoting Shannon's entropy of \(X_i \in X\).
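For concreteness, the sketch below implements \(PC\), \(ES\), and \(MP\) directly from Eqs. (1)–(2) using NumPy; \(TC\) (Eq. 3) is omitted since it requires entropy estimation. This is an illustrative sketch, not the paper's Correlation Detective implementation: the normalization applied before aggregation is measure-specific (see Sect. 3.1), and here z-normalization is applied only for \(PC\) and \(MP\).

```python
import numpy as np

def znorm(V):
    """z-normalize each row vector."""
    V = np.asarray(V, dtype=float)
    return (V - V.mean(axis=1, keepdims=True)) / V.std(axis=1, keepdims=True)

def pc(X, Y):
    """PC(X, Y): Pearson correlation of the element-wise averages (Eq. 1)."""
    return np.corrcoef(znorm(X).mean(axis=0), znorm(Y).mean(axis=0))[0, 1]

def es(X, Y):
    """ES(X, Y): 1 / (1 + Euclidean distance) of the element-wise averages."""
    d = np.linalg.norm(np.mean(X, axis=0) - np.mean(Y, axis=0))
    return 1.0 / (1.0 + d)

def mp(X):
    """MP(X): 1 minus the smallest achievable variance of a unit-norm linear
    combination of the z-normalized vectors, i.e. 1 minus the smallest
    eigenvalue of their pairwise correlation matrix (Eq. 2)."""
    R = np.corrcoef(np.asarray(X, dtype=float))
    return 1.0 - np.linalg.eigvalsh(R)[0]   # eigvalsh: ascending eigenvalues

# toy usage on three random walks of dimension d = 500
rng = np.random.default_rng(1)
v = rng.normal(size=(3, 500)).cumsum(axis=1)
print(pc(v[:2], v[2:]), es(v[:2], v[2:]), mp(v))
```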
2.2 Problem definition

Consider a set \(\mathcal{V} = \{\textbf{v}_1, \textbf{v}_2, \ldots, \textbf{v}_n\}\) of d-dimensional vectors, and a multivariate correlation measure \(Corr\), both provided by the data analyst. Function \(Corr\) accepts either one or two vector sets (subsets of \(\mathcal{V}\)) as input parameters, and returns a scalar. Hereafter, we will denote the correlation function with \(Corr(X, Y)\), with the understanding that for the definitions of \(Corr\) that expect one input (i.e., \(MP\) and \(TC\)), Y will be empty. We consider two query types:

Query 1: Threshold query: For a user-chosen correlation function \(Corr\), correlation threshold \(\tau\), and parameters \(p_l, p_r \in \mathbb{N}\), find all pairs of sets \((X \subset \mathcal{V}, Y \subset \mathcal{V})\) for which \(Corr(X, Y) \ge \tau\), \(X \cap Y = \emptyset\), \(|X| \le p_l\), and \(|Y| \le p_r\).

Query 2: Top-\(\kappa\) query: For a user-chosen correlation function \(Corr\) and parameters \(\kappa, p_l, p_r \in \mathbb{N}\), find the \(\kappa\) pairs of sets \((X \subset \mathcal{V}, Y \subset \mathcal{V})\) that have the highest values \(Corr(X, Y)\), such that \(X \cap Y = \emptyset\), \(|X| \le p_l\), and \(|Y| \le p_r\).

The combination of \(p_l\) and \(p_r\) controls the desired complexity of the answers. Smaller \(p_l + p_r\) values yield results that are easier to interpret, and arguably more useful to the data analyst. Complementary to the two query types, users may also want to specify additional constraints, relating to the targeted diversity and significance of the answers. We consider two different constraints, but other constraints (e.g., the weak-correlated feature subset constraint of ) can also be integrated in the algorithm in a similar manner:

Irreducibility constraint: For each (X, Y) in the result set, there exists no \((X', Y')\) in the result set such that \(X' \subseteq X\), \(Y' \subseteq Y\), and \((X', Y') \ne (X, Y)\). Intuitively, if \(Corr(X', Y') \ge \tau\), then no supersets of \(X'\) and \(Y'\) should be considered together. This constraint prioritizes simpler answers.

Minimum jump constraint: For each (X, Y) in the result set, there exists no \((X', Y')\) such that \(X' \subseteq X\), \(Y' \subseteq Y\), \((X', Y') \ne (X, Y)\), and \(Corr(X, Y) - Corr(X', Y') < \delta\). This constraint, which was first proposed in , discards solutions where a vector in \(X \cup Y\) contributes less than \(\delta\) to the increase of the correlation.

For top-\(\kappa\) queries, these constraints are ill-defined. For example, consider the irreducibility constraint, and assume \(Corr(X,Y) = 0.9\) and \(Corr(X',Y') = 0.8\), where \(X' \subset X\) and \(Y' \subset Y\). In this case, the definition of top-\(\kappa\) does not dictate which of (X, Y) or \((X', Y')\) should be in the answer set.

For conciseness, we will use \(Corr(p_l)\) and \(Corr(p_l, p_r)\) to denote the combination of the correlation measure and the user-chosen values of \(p_l\) and \(p_r\). For example, \(PC(2,1)\) will identify the combinations of sets of vectors of size 2 and 1 with high Pearson correlation, whereas pattern \(MP(4)\) will identify the combinations of 4 vectors with high multipole correlation.

2.3 Related work

Several algorithms exist for efficiently finding highly correlated pairs in large data sets of high-dimensional vectors, e.g., time series. For example, StatStream and Mueen et al. both map pairwise correlations to Euclidean distances. They then exploit Discrete Fourier Transforms, grid-based indexing, and dynamic programming to reduce the search space. Other works also enable indexing of high-dimensional vectors in the Euclidean space [11, 40]. However, these works are not applicable to multivariate correlations, since two vectors may have a low pairwise correlation with a third vector, whereas their aggregate may have a high correlation (see, e.g., the example of Fig. 1).

Prior work addressing multivariate correlations proposes algorithms that rely on additional constraints for their pruning power. Agrawal et al. investigate the problem of finding highly-correlated tripoles . Tripoles is a special case of the \(PC\) measure, where \(|X| = 2\) and \(|Y| = 1\) (i.e., \(PC(2,1)\)). Their algorithm, named CoMEt, relies on the minimum jump constraint for effective pruning.
Compared to tripoles, our work handles the more general definition of Pearson correlation over aggregated vectors, allowing more vectors on the left- and right-hand side. Moreover, our work relies on novel theoretical results to prune the search space and can scale to larger datasets regardless of the introduction of any additional constraints (e.g., minimum jump or irreducibility). Algorithms for discovering high correlations according to the Multipole measure (Eq.2) were first proposed in , with the introduction of the CoMEtExtended algorithm. Both CoMEt and CoMEtExtended are approximate and rely on clique enumeration to efficiently explore the search space. Their efficiency depends on a parameter (\rho ) that trades off result completeness for performance. The minimum jump constraint also becomes relevant to reduce computational effort. For settings of (\rho ) that result in reasonable computation times, the two algorithms yield a substantially more complete result set compared to methods like (l_1)—regularization and structure learning-based techniques. Still, the two algorithms do not come with completeness or accuracy guarantees. In contrast, our work is exact—it always retrieves all answers—and outperforms both algorithms. With respect to Total Correlation, Nguyen et al. propose an algorithm for groups of columns in a database with high Total Correlation. The method analyzes patterns in pairwise correlations (i.e., mutual-information) to identify quasi-cliques of highly correlated column groups, and compute lower bounds on their total correlation. However, it misses strongly correlated groups with low pairwise correlations, which are arguably the most interesting cases. As such, the method is effectively an approximation algorithm. In another work, Zhang et al. developed an algorithm that discovers sets of binary vectors with a high total correlation value . However, the method is again approximate, limited to data with binary features only, and relies on a limiting weak-correlated subset constraint. In contrast, our work returns a guaranteed complete set of results and works on all major data types. In the supervised learning context, subset regression appears to be closely related to multivariate correlation mining. The goal of this feature selection problem is to select the best p predictors out of n candidate features . Our problem differs from the above in that we aim to find interesting patterns in the data, rather than finding the best predictors for a given dependent variable. Furthermore, instead of finding only the highest correlated vector set, our goal is to find a diverse set of results as we argue that that will help domain expert assess the results more on qualitative aspects, gaining more insights. Another similar problem is that of similarity search on multivariate time series [49, 50]. Here, the goal is to find all pairs of multivariate time series (e.g., weather sensors measuring both temperature and wind speed) with a high similarity value, based on some specialized measure such as the PCA similarity factor , or the extended Frobenius norm . Effectively, this extends classic similarity search by adding a degree of freedom (DoF) in the number of variables per time series, increasing the search space cardinality from (O(n^2)) to (O((pn)^2)) for p-variate time series. In contrast, our problem extends classic similarity search by adding a DoF in the number of time series per combination, growing the search space to (O(n^p)). 
Although this problem seems similar, its challenges differ significantly from similarity search on multivariate time series and can lead to different results and insights. Table 1 summarizes the properties of the most closely related work out of the discussed ones.

Table 1 Comparison to the most relevant related work for multivariate correlations

3 Detection of multivariate correlations in static data

The main challenge in detecting strongly correlated vector sets stems from the combinatorial explosion of the number of candidates that need to be examined. In a dataset of n vectors, there exist at least \(O\left(\sum_{p=2}^{p_l + p_r} \binom{n}{p}\right)\) possible combinations for a correlation pattern \(Corr(p_l, p_r)\). Even if each possible combination can be checked in constant time, the enumeration of all combinations still requires significant computational effort.

Our algorithm—Correlation Detective, abbreviated as CD—exploits the insight that vectors often exhibit (weak) correlations between each other. For example, securities of companies that participate in the same conglomeration (e.g., Fig. 2a, GOOGL and GOOG) or are exposed to similar risks and opportunities (e.g., STMicroelectronics and ASML) typically exhibit a high correlation between their stock prices. CD exploits such correlations, even if they are weak, to drastically reduce the search space.

Fig. 2 a Two groups of closely related stocks: ASML and STMicroelectronics are exposed to similar risks, while GOOG and GOOGL participate in the same conglomeration; b Running example in 2 dimensions: the centroids of each cluster are depicted with darker background. All clusters are labeled for easy reference; c Illustration of pessimistic pairwise bounds of Lemma 1

Table 2 Properties of the supported multivariate correlation metrics

CD works as follows: rather than iterating over all possible vector combinations that correspond to the correlation pattern, CD clusters the vectors based on their similarity, and enumerates the combinations of only the cluster centroids. For each of these combinations, CD computes upper and lower bounds on the correlations of all vector combinations in the Cartesian product of the clusters. Based on these bounds, CD decides whether or not the combination of clusters (i.e., all combinations of vectors derived from these clusters) should be added to the result set, can safely be discarded, or, finally, if the clusters should be split into smaller subclusters for deriving tighter bounds. This approach effectively reduces the number of combinations that need to be considered, making CD at least an order of magnitude faster than existing methods.

In the remainder of this section, we will present the key elements of CD, explaining how the two types of queries presented in Sect. 2 are handled. We will start with a brief description of the initialization phase, which includes data pre-processing and clustering. In Sects. 3.2 and 3.3, we will describe how CD answers threshold and top-\(\kappa\) queries, respectively.

3.1 Initialization and clustering

First, all vectors are normalized using a measure-specific (e.g., \(PC\), \(ES\), \(MP\), \(TC\)) normalization technique (discussed in Sect. 3.2). The second part of the initialization phase considers constructing a hierarchical clustering of all vectors, again using a measure-specific distance measure (shown in Table 2). We will discuss the selection of distance measures in Sect. 3.2.2.
The clustering algorithm operates in top-down fashion. A root cluster containing all vectors is first created to initialize the hierarchy. The algorithm then consists of three steps. First, K vectors are picked from the root cluster and used as the initial top-level centroids in the hierarchy. These vectors are picked using the seeding strategy of K-means(^{++}) . The use of K-means(^{++}) (as opposed to sampling K random vectors) ensures that these initial centroids are well-distributed over the metric space, and not very close to each other. In the second step, we run the standard K-means algorithm for at most (r_1) iterations, or until convergence using the average function to recompute the cluster centroids after each iteration. The clustering is evaluated using the Within-Cluster Sum of Squares (WCSS) (the sum of the variances within all clusters). In the third step, steps one and two are repeated (r_2) times (i.e., with different centroids), and the clustering with the lowest WCSS is kept as the final clustering assignment for the first level of the hierarchy. These three steps are executed recursively on each individual cluster with non-zero radius, to construct the second, third, etc. levels of the hierarchy, until all leaf nodes contain only one vector. There is a clear tradeoff between the cost of the clustering algorithm and the clustering quality. Increasing the values of (r_1) and (r_2) will generally result in a higher clustering quality (lower WCSS), but will take longer to compute. However, the quality of the clustering does not affect the correctness of CD—in fact, regardless of the employed hierarchical clustering algorithm, CD always returns the same correct result set. A poor clustering only affects the computational efficiency of CD. Still, our experiments show that as long as the clustering is reasonable, a suboptimal clustering is not detrimental to CD’s efficiency. More precisely, we found that the value of (r_1) (max. iterations of K-means, after the initial centroids were decided) had no observable effect on CD’s efficiency. Therefore, we simply set (r_1 = 1). The same generally holds for (r_2), although to prevent ruinous effects due to coincidentally very poorly chosen initial centroids, we set (r_2 = 50). Still, the clustering takes at most a few seconds in our experiments, which is negligible compared to the total execution time of the algorithm. 3.2 Threshold queries CD receives as input the cluster tree produced by the hierarchical clustering algorithm, a correlation pattern, and a correlation threshold (\tau ). It then forms all possible combinations of the correlation pattern with the child clusters of the root. In the example of Fig.2b, for a desired correlation pattern of (PC (2,1)), the following combinations of clusters are examined: $$\begin{aligned} \forall _{C_x,C_y,C_z \in {C_1, C_2, C_3}} ((C_x,C_y),C_z) \end{aligned}$$ Note that we now present the algorithm for finding all interesting triplets following correlation pattern (PC (2,1)). In reality, CD also considers all sub-patterns of the queried correlation pattern (e.g., (PC (1,1))) by re-running the same algorithm on those sub-patterns. Algorithm 1 ThresholdQuery((\mathcal {S}_l), (\mathcal {S}_r), (Corr ), (\tau )) Full size image A combination of clusters compactly represents the combinations created by the Cartesian product of the vectors inside the clusters. 
For example, assuming that \(|C_x| = 4\) and \(|C_y| = 3\), the cluster combination \((C_x, C_y)\) represents a set of 12 vector combinations, which we will refer to as its materializations. For each cluster combination, the algorithm computes lower and upper bounds on the correlation of its materializations, denoted with LB and UB, respectively (Algorithm 1, line 1). These bounds guarantee that any possible materialization of the cluster combination, i.e., replacing each cluster with any one of the vectors in that cluster, will always have a correlation between LB and UB. The next step is to compare the bounds with the user-chosen threshold \(\tau\) (lines 2, 4, 6). If \(UB < \tau\), the combination is decisive negative—no materialization yields a correlation higher than the threshold \(\tau\). Therefore, this cluster combination does not need to be examined further. If \(LB \ge \tau\), the combination is decisive positive, guaranteeing that all possible materializations of this cluster combination will have a correlation of at least \(\tau\). Therefore, all materializations are inserted in the result. Finally, when \(LB < \tau\) and \(UB \ge \tau\), the combination is indecisive. In this case, the algorithm (lines 7–11) chooses the cluster \(C_{\max}\) with the largest radius, and recursively checks all combinations where \(C_{\max}\) is replaced by one of its sub-clusters. In the example of Fig. 2b, assume that the algorithm examined an indecisive combination of clusters \((C_1, C_2), C_3\), and \(C_2\) is the cluster with the largest radius. The algorithm will drill down to consider the three children of \(C_2\), and examine their combinations with \(C_1\) and \(C_3\). The recursion continues until each combination is decisive. We will refer to this process as traversing the comparison tree. Decisive combinations are typically found at high levels of the cluster tree, thereby saving many comparisons.

In the following, we will discuss two different approaches for deriving LB and UB for arbitrary correlation patterns. The first approach (theoretical bounds) has constant complexity in the number of materializations a cluster combination covers. The second approach (empirical bounds) extends the theoretical bounds with additional information. It has a slightly higher cost, but typically leads to much tighter bounds.
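To make the control flow concrete, the following structural sketch mirrors the traversal just described (it is not the paper's implementation): the bound computation of Sects. 3.2.1–3.2.2 is hidden behind a `bounds` callback, and cluster objects are assumed to expose `vectors`, `radius`, and `children` attributes. Since singleton clusters yield exact bounds, every branch eventually becomes decisive and the recursion terminates.

```python
# Sketch of the recursive threshold traversal (cf. Algorithm 1).
from itertools import product

def threshold_query(S_l, S_r, bounds, tau, results):
    lb, ub = bounds(S_l, S_r)
    if ub < tau:                                  # decisive negative: prune
        return
    if lb >= tau:                                 # decisive positive: keep all
        for left in product(*[c.vectors for c in S_l]):
            for right in product(*[c.vectors for c in S_r]):
                results.append((left, right))     # every materialization qualifies
        return
    # indecisive: replace the largest-radius cluster by each of its children
    clusters = S_l + S_r
    i = max(range(len(clusters)), key=lambda j: clusters[j].radius)
    for child in clusters[i].children:
        new = clusters[:i] + [child] + clusters[i + 1:]
        threshold_query(new[:len(S_l)], new[len(S_l):], bounds, tau, results)
```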
3.2.1 Theoretical bounds

We first present a lemma for bounding the cosine similarity between only two clusters, which serves as a stepping stone for bounding multivariate correlations.

Lemma 1 Let \(\cos(\theta_{\textbf{x},\textbf{y}})\) denote the cosine similarity between two vectors \(\textbf{x}\) and \(\textbf{y}\), with \(\theta_{\textbf{x},\textbf{y}}\) being the angle formed by these vectors. Consider four vectors \(\mathbf{u}_1\), \(\mathbf{u}_2\), \(\mathbf{v}_1\), and \(\mathbf{v}_2\), such that \(\theta_{\mathbf{v}_1,\mathbf{u}_1} \le \theta_1\) and \(\theta_{\mathbf{v}_2,\mathbf{u}_2} \le \theta_2\). Then, cosine similarity \(\cos(\theta_{\mathbf{u}_1,\mathbf{u}_2})\) can be bounded as follows:

$$\cos(\theta^{\max}_{\mathbf{u}_1,\mathbf{u}_2}) \le \cos(\theta_{\mathbf{u}_1,\mathbf{u}_2}) \le \cos(\theta^{\min}_{\mathbf{u}_1,\mathbf{u}_2})$$

where

$$\theta^{\min}_{\mathbf{u}_1,\mathbf{u}_2} = \max\left(0,\ \theta_{\mathbf{v}_1,\mathbf{v}_2} - \theta_1 - \theta_2\right), \qquad \theta^{\max}_{\mathbf{u}_1,\mathbf{u}_2} = \min\left(\pi,\ \theta_{\mathbf{v}_1,\mathbf{v}_2} + \theta_1 + \theta_2\right)$$

Proof All proofs are included in Appendix A of the Technical Report. \(\square\)

Lemma 1 bounds the cosine similarity between two vectors \(\mathbf{u}_1\) and \(\mathbf{u}_2\) that belong to two clusters with centroids \(\mathbf{v}_1\) and \(\mathbf{v}_2\), respectively, by using: (a) the angle between the two centroids, and (b) upper bounds on the angles between \(\mathbf{u}_1\) and \(\mathbf{v}_1\), and between \(\mathbf{u}_2\) and \(\mathbf{v}_2\). For instance, in the running example (Fig. 2b), we can bound the cosine between \(\textbf{a}\) and \(\textbf{b}\) if we have the cosine of the two cluster centroids \(\textbf{d}\) and \(\textbf{e}\), the cosine of \(\textbf{a}\) with \(\textbf{d}\), and that of \(\textbf{h}\) with \(\textbf{e}\) (as \(\textbf{h}\) is the furthest point in \(C_2\) from the centroid \(\textbf{e}\)). The bounds are tightened if the maximum angle formed by each centroid with its corresponding cluster vectors is reduced. We now extend our discussion to cover multivariate correlations, which involve three or more clusters.

Theorem 1 (Bounds for \(PC\)) For any pair of clusters \(C_i, C_j\), let \(l(C_i, C_j)\) and \(u(C_i, C_j)\) denote lower/upper bounds on the pairwise correlations \(\rho\) between the cluster pair's materializations, i.e., \(l(C_i, C_j) \le \min_{\textbf{x} \in C_i, \textbf{y} \in C_j} \rho(\textbf{x},\textbf{y})\) and \(u(C_i, C_j) \ge \max_{\textbf{x} \in C_i, \textbf{y} \in C_j} \rho(\textbf{x},\textbf{y})\). Consider sets of clusters \(\mathcal{S}_l = \{C^l_i\}_{i=1}^{p_l}\) and \(\mathcal{S}_r = \{C^r_j\}_{j=1}^{p_r}\). Let \(L(\mathcal{S}_1,\mathcal{S}_2) = \sum_{C_i \in \mathcal{S}_1, C_j \in \mathcal{S}_2} l(C_i,C_j)\) and \(U(\mathcal{S}_1,\mathcal{S}_2) = \sum_{C_i \in \mathcal{S}_1, C_j \in \mathcal{S}_2} u(C_i,C_j)\). Then, for any two sets of z-normalized vectors \(X = \{\hat{\textbf{x}}_1, \ldots, \hat{\textbf{x}}_{p_l}\}\), \(Y = \{\hat{\textbf{y}}_1, \ldots, \hat{\textbf{y}}_{p_r}\}\) such that \(\hat{\textbf{x}}_i \in C^l_i\) and \(\hat{\textbf{y}}_i \in C^r_i\), the multivariate correlation PC(X,Y) can be bounded as follows:

1. if \(L(\mathcal{S}_l, \mathcal{S}_r) \ge 0\):

$$PC(X,Y) \in \left[ \frac{L(\mathcal{S}_l, \mathcal{S}_r)}{\sqrt{U(\mathcal{S}_l, \mathcal{S}_l)}\,\sqrt{U(\mathcal{S}_r, \mathcal{S}_r)}},\ \frac{U(\mathcal{S}_l, \mathcal{S}_r)}{\sqrt{L(\mathcal{S}_l, \mathcal{S}_l)}\,\sqrt{L(\mathcal{S}_r, \mathcal{S}_r)}} \right]$$

2. if \(U(\mathcal{S}_l, \mathcal{S}_r) \le 0\):

$$PC(X,Y) \in \left[ \frac{L(\mathcal{S}_l, \mathcal{S}_r)}{\sqrt{L(\mathcal{S}_l, \mathcal{S}_l)}\,\sqrt{L(\mathcal{S}_r, \mathcal{S}_r)}},\ \frac{U(\mathcal{S}_l, \mathcal{S}_r)}{\sqrt{U(\mathcal{S}_l, \mathcal{S}_l)}\,\sqrt{U(\mathcal{S}_r, \mathcal{S}_r)}} \right]$$

3. else:

$$PC(X,Y) \in \left[ \frac{L(\mathcal{S}_l, \mathcal{S}_r)}{\sqrt{L(\mathcal{S}_l, \mathcal{S}_l)}\,\sqrt{L(\mathcal{S}_r, \mathcal{S}_r)}},\ \frac{U(\mathcal{S}_l, \mathcal{S}_r)}{\sqrt{L(\mathcal{S}_l, \mathcal{S}_l)}\,\sqrt{L(\mathcal{S}_r, \mathcal{S}_r)}} \right]$$

As Pearson correlation is equivalent to cosine similarity when computed over z-normalized vectors, we can use Lemma 1 to compute bounds on the pairwise correlations between any pair of clusters, which allows us to compute the bounds in Theorem 1. Consequently, we can bound the multivariate correlation of any cluster combination that satisfies the \(PC\) correlation pattern, without testing all its possible materializations. For example, for combination \(((C_1, C_2), C_3)\) from our running example, we first use Lemma 1 to calculate bounds for all cluster pairs in O(1) per pair, which leads to values for \(L(\cdot,\cdot)\) and \(U(\cdot,\cdot)\). The bounds on \(PC((C_1, C_2), C_3)\) then follow directly from Theorem 1.

Theorem 2 (Bounds for \(MP\)) For any pair of clusters \(C_i, C_j\), let \(l(C_i, C_j)\) and \(u(C_i, C_j)\) denote lower/upper bounds on the pairwise correlations between the clusters' materializations, i.e., \(l(C_i, C_j) \le \min_{\textbf{x} \in C_i, \textbf{y} \in C_j} \rho(\textbf{x},\textbf{y})\) and \(u(C_i, C_j) \ge \max_{\textbf{x} \in C_i, \textbf{y} \in C_j} \rho(\textbf{x},\textbf{y})\). Consider the set of clusters \(\mathcal{S} = \{C_i\}_{i=1}^p\). Furthermore, let \(\textbf{L}\) and \(\textbf{U}\) be symmetric matrices such that \(\textbf{L}_{ij} = l(C_i, C_j)\) and \(\textbf{U}_{ij} = u(C_i, C_j)\) for all \(1 \le i,j \le p\). For any set of z-normalized vectors \(X = \{\hat{\textbf{x}}_1, \hat{\textbf{x}}_2, \ldots, \hat{\textbf{x}}_p\}\) such that \(\hat{\textbf{x}}_i \in C_i\), the multipole correlation \(MP(X)\) can be bounded as follows:

$$MP(X) \in 1 - \lambda_{\min}\left(\frac{\textbf{L}+\textbf{U}}{2}\right) \pm \frac{1}{2}\left\Vert \textbf{U}-\textbf{L} \right\Vert_2$$

where \(\lambda_{\min}\left(\frac{\textbf{L}+\textbf{U}}{2}\right)\) is the smallest eigenvalue of the matrix \(\frac{\textbf{L}+\textbf{U}}{2}\).

Similar to Theorem 1 for \(PC\), we can use Lemma 1 to compute the bounds on the pairwise correlations between any pair of clusters, which allows us to compute the bounds of Theorem 2, and to analyze the \(MP\) values of all materializations of the cluster combination in one go.

Theorem 3 (Bounds for \(ES\)) For any pair of clusters \(C_i, C_j\), let \(l(C_i, C_j)\) and \(u(C_i, C_j)\) denote lower/upper bounds on the dot products \(\langle\cdot,\cdot\rangle\) between the clusters' materializations, i.e., \(l(C_i, C_j) \le \min_{\textbf{x} \in C_i, \textbf{y} \in C_j} \langle\textbf{x},\textbf{y}\rangle\) and \(u(C_i, C_j) \ge \max_{\textbf{x} \in C_i, \textbf{y} \in C_j} \langle\textbf{x},\textbf{y}\rangle\). Consider the sets of clusters \(\mathcal{S}_l = \{C^l_i\}_{i=1}^{p_l}\) and \(\mathcal{S}_r = \{C^r_j\}_{j=1}^{p_r}\). Let \(L(\mathcal{S}_1,\mathcal{S}_2) = \sum_{C_i \in \mathcal{S}_1, C_j \in \mathcal{S}_2} l(C_i,C_j)\) and \(U(\mathcal{S}_1,\mathcal{S}_2) = \sum_{C_i \in \mathcal{S}_1, C_j \in \mathcal{S}_2} u(C_i,C_j)\). Then, for any two sets of vectors \(X = \{\textbf{x}_1, \ldots, \textbf{x}_{p_l}\}\), \(Y = \{\textbf{y}_1, \ldots, \textbf{y}_{p_r}\}\) such that \(\textbf{x}_i \in C^l_i\) and \(\textbf{y}_i \in C^r_i\), the multivariate correlation ES(X,Y) can be bounded as follows:

$$ES(X,Y) \in \left[ \left(1 + \sqrt{\frac{U(\mathcal{S}_l,\mathcal{S}_l)}{p_l^2} + \frac{U(\mathcal{S}_r,\mathcal{S}_r)}{p_r^2} - 2\frac{L(\mathcal{S}_l,\mathcal{S}_r)}{p_l p_r}}\right)^{-1},\ \left(1 + \sqrt{\frac{L(\mathcal{S}_l,\mathcal{S}_l)}{p_l^2} + \frac{L(\mathcal{S}_r,\mathcal{S}_r)}{p_r^2} - 2\frac{U(\mathcal{S}_l,\mathcal{S}_r)}{p_l p_r}}\right)^{-1} \right]$$

Since \(\langle\textbf{x},\textbf{y}\rangle = \cos(\theta_{\textbf{x},\textbf{y}})\,\Vert\textbf{x}\Vert_2\,\Vert\textbf{y}\Vert_2\), we can again use Lemma 1 to compute bounds on \(L(\cdot,\cdot)\) and \(U(\cdot,\cdot)\), which allow us to compute the bounds of Theorem 3. This is done by first computing bounds on cosines with Lemma 1 for all cluster pairs in O(1) per pair, and combining those with bounds on the \(l_2\)-norms of each cluster.

Theorem 4 (Bounds for \(TC\)) For any pair of clusters \(C_i, C_j\), let \(l(C_i, C_j)\) and \(u(C_i, C_j)\) denote lower/upper bounds on the joint (Shannon) entropy \(H(\cdot,\cdot)\) between the clusters' materializations, i.e., \(l(C_i, C_j) \le \min_{\textbf{x} \in C_i, \textbf{y} \in C_j} H(\textbf{x},\textbf{y})\) and \(u(C_i, C_j) \ge \max_{\textbf{x} \in C_i, \textbf{y} \in C_j} H(\textbf{x},\textbf{y})\). Similarly, let \(l(C_i)\) and \(u(C_i)\) denote lower/upper bounds on the marginal entropies of vectors in the cluster \(C_i\). Consider the set of clusters \(\mathcal{S} = \{C_i\}_{i=1}^p\), with \(\mathcal{S}_i\) denoting the i-th cluster in the set. Then, for any set of vectors \(X = \{\textbf{x}_1, \ldots, \textbf{x}_p\}\) such that \(\textbf{x}_i \in C_i\), the multivariate correlation TC(X) can be bounded as follows:

$$TC(X) \in \left[ \sum_{i=1}^{p} l(C_i) - \sum_{i=1}^{p-1}\left(\min_{1 \le j \le i} u(C_{i+1} \mid C_j)\right) - u(C_1),\ \sum_{C_i \in \mathcal{S}} u(C_i) - \max_{C_i, C_j \in \mathcal{S}} l(C_i, C_j) \right]$$

Theorems 1–3 are built on the observation that the multivariate correlation of a set of vectors can be expressed as a function of the pairwise relations exhibited by the vectors in that set. Then, this (exact) expression of a multivariate correlation among individual vectors is extended to bounds on the multivariate correlation among clusters of vectors, which are in turn bounded by Lemma 1. Although the Total Correlation of a set of vectors X cannot be expressed as a function of cosine similarities, it can be bounded by other pairwise relations, namely conditional entropies of two variables. This enables us to express bounds on the \(TC\)-value of a set of vectors as a function of correlation bounds between pairs of clusters, similar to the previous theorems. How these bounds on cluster pairs are computed (and tightened) in the absence of Lemma 1 will be discussed in the following section. Note that the bounds of Theorem 4 apply to both discrete and continuous data, using differential entropy for the latter case. In case exact probability functions are unknown for continuous data, one can derive empirical distribution functions through discretization.

3.2.2 Tightening the bounds

Empirical pairwise bounds.
3.2.2 Tightening the bounds

Empirical pairwise bounds. The bounds of Lemma 1—which are used for deriving the bounds of Theorems 1, 2, and 3—tend to be pessimistic, as they always account for the worst theoretical case. In the example of Fig. 2c, the theoretical lower bound (resp. upper bound) accounts for the case that hypothetical vectors (depicted in pink) are located on the clusters' edges, resulting in the smallest (resp. largest) possible distance between any pair of points in the clusters. Tightening the bounds on the cosine similarities will in turn tighten the bounds on \(PC\), \(MP\), and \(ES\), which leads to more aggressive pruning by the algorithm described earlier in this section.

The empirical bounds approach builds on the observation that the cosine similarity of any pair of vectors \(\textbf{x}_i, \textbf{x}_j\) drawn from a pair of clusters \(C_i, C_j\), respectively, is typically strongly concentrated around \((l(C_i, C_j) + u(C_i, C_j))/2\), especially for high-dimensional vectors. The approach works as follows. At initialization, we compute all (pairwise) cosines and store them in an upper-triangular matrix. Then, during execution of Algorithm 1, we compute \(l(C_i, C_j)\) and \(u(C_i, C_j)\), when required, as follows:
$$\begin{aligned} l(C_i, C_j) = \min_{\textbf{x} \in C_i, \textbf{y} \in C_j} \cos (\theta_{\textbf{x},\textbf{y}}) \end{aligned}$$
and
$$\begin{aligned} u(C_i, C_j) = \max_{\textbf{x} \in C_i, \textbf{y} \in C_j} \cos (\theta_{\textbf{x},\textbf{y}}) \end{aligned}$$
with \(\cos (\theta _{\textbf{x},\textbf{y}})\) retrieved from the upper-triangular matrix. The computed \(l(C_i, C_j)\) and \(u(C_i, C_j)\) are also cached and reused whenever \((C_i, C_j)\) is encountered in another cluster combination.

It is important to note that the empirical bounds do not induce errors, since they trivially satisfy the requirements of Theorems 1–3 that \(l(C_i, C_j) \le \min_{\textbf{x}\in C_i, \textbf{y}\in C_j} \cos (\theta_{\textbf{x},\textbf{y}})\) and \(u(C_i, C_j) \ge \max_{\textbf{x}\in C_i, \textbf{y}\in C_j} \cos (\theta_{\textbf{x},\textbf{y}})\). Therefore, the bounds on multivariate correlations derived using these empirical bounds are still correct. Finally, they are at least as tight as the bounds of Lemma 1, since they account only for the vectors that are actually present in the clusters, and not for the hypothetical worst case.

There is a clear tradeoff between the cost of computing the empirical pairwise bounds (worst case, quadratic in the number of vectors) and the performance improvement of CD from the tighter bounds. Indicatively, in our experiments, the theoretical pairwise bounds computed from Lemma 1 were typically two to eight times wider than the empirical pairwise bounds. Exploiting the tighter empirical bounds reduced the width of the bounds of Theorem 1 by 50% to 90% (for \(PC(1,2)\)), which allowed CD to reach decisive combinations faster. As a result, the total execution time of the algorithm with the empirical bounds was typically an order of magnitude less than with the theoretical bounds. Therefore, all reported results use the empirical bounds. Lastly, note that the empirically-bounded versions of Theorems 1 and 2 do not require z-normalization. Still, it is performed in both cases to optimize the pairwise cache computation and to ensure that \(MP \in [-1,1]\), as suggested in . However, z-normalization does not impact relative distances, and therefore the top-\(\kappa\) query answers are identical.
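A minimal sketch of the empirical pairwise bounds is shown below, assuming the full pairwise cosine matrix has already been computed at initialization; the class and method names are illustrative, not taken from the paper's code.

```python
import itertools
import numpy as np

class EmpiricalBounds:
    """Caches l(C_i, C_j) and u(C_i, C_j) computed from the exact pairwise
    cosine matrix, as described for the empirical-bounds approach."""

    def __init__(self, cos_matrix):
        self.cos = cos_matrix          # cos[i, j] = cosine of vectors i and j
        self.cache = {}                # (cluster_i, cluster_j) -> (l, u)

    def bounds(self, ci, cj):
        """ci, cj: lists of vector indices belonging to clusters C_i and C_j."""
        key = (tuple(ci), tuple(cj))
        if key not in self.cache:
            vals = [self.cos[min(x, y), max(x, y)]   # upper-triangular access
                    for x, y in itertools.product(ci, cj) if x != y]
            self.cache[key] = (min(vals), max(vals))
        return self.cache[key]

# Usage: pairwise cosines of z-normalized rows of a data matrix D (n x d).
D = np.random.randn(6, 50)
Z = (D - D.mean(axis=1, keepdims=True)) / D.std(axis=1, keepdims=True)
C = (Z @ Z.T) / Z.shape[1]             # equals the pairwise Pearson correlations
eb = EmpiricalBounds(C)
print(eb.bounds([0, 1, 2], [3, 4, 5]))
```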
Total Correlation bounds. The empirical bounding approach can also be used to compute bounds on the (conditional) entropies between pairs of clusters, which are key to computing the \(TC\) bounds of Theorem 4. As \(H(A|B) = H(A,B) - H(B)\), this can be done by (a) pre-computing and caching all marginal entropies and (pairwise) joint entropies of the vectors, and (b) iterating over the Cartesian products of clusters to derive bounds on the entropies of the cluster materializations.

Notice that the lower bound of \(TC(X)\) (see Theorem 4) involves iterating over \({\mathcal {S}}\) in sequence, which indicates a dependency on the ordering of the clusters in \({\mathcal {S}}\). Therefore, finding the optimal permutation of \({\mathcal {S}}\), i.e., the one that produces the tightest bound, would increase the lower bound without introducing errors in the result set. The total number of permutations is O(p!), where p is the number of vectors in the correlation pattern. Here we introduce a heuristic that costs \(O(p^2)\). The heuristic, shown in Algorithm 2, computes a tight upper bound on the joint entropy H(X) by iterating over the sorted list of marginal and conditional entropies to find a selection of entropies that closely estimates H(X). Note that, for conciseness, line 3 of Algorithm 2 indicates that we always fetch a conditional entropy \(H(C_i|C_j)\) from the head of the queue \({\mathcal {H}}\). However, as \({\mathcal {H}}\) also contains marginal entropies \(H(C_i)\), the conditioning part may also be empty (i.e., the fetched entry may be a marginal entropy).

Algorithm 2: TCPermHeuristic(\({\mathcal {H}}\))

Choosing a distance measure for clustering. The empirical pairwise bounds tighten the bounds on the correlations between cluster pairs, leading to tighter multivariate correlation bounds and improved efficiency of CD. The tightness of the empirical bounds depends on the cluster radius: clusters with large radii lead to weaker, albeit correct, bounds. This is clear for \(PC\), \(ES\), and \(MP\), where the triangle inequality is also present in the theoretical bounds (see Sect. 3.2.1). However, our experiments have shown that tuning the clustering distance measure also benefits \(TC\) queries, even though \(TC\) does not satisfy the triangle inequality. Therefore, the clustering distance measure always impacts the pruning power of the algorithm. As Lemma 1 is based on angular distance, clustering for \(PC\) and \(MP\) employs the clustering loss function (WCSS) with angular radii. For \(ES\), Euclidean distance is the obvious choice, since it also considers the vector norms, which are not captured by the angular radii but are included in Theorem 3. Finally, for \(TC\), our experiments showed that the normalized information distance metric \(D(X,Y) = 1 - \frac{I(X,Y)}{H(X,Y)}\) (first introduced in ) leads to tight multivariate correlation bounds. The intuition behind this observation is that D(X,Y) measures information proximity, similar to \(TC\); in fact, D(X,Y) is simply a transformation of the pairwise total correlation (i.e., mutual information) between two variables into a strict distance metric ranging between 0 and 1. Table 2 summarizes these choices.
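As an illustration of the clustering distance used for \(TC\), the snippet below computes the normalized information distance \(D(X,Y) = 1 - I(X;Y)/H(X,Y)\) for two discrete vectors; the entropy helpers are generic plug-in estimators and the function names are our own.

```python
import numpy as np
from collections import Counter

def entropy(*vectors):
    """Empirical (joint) Shannon entropy, in bits, of one or more discrete vectors."""
    joint = list(zip(*vectors))
    counts = np.array(list(Counter(joint).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def normalized_information_distance(x, y):
    """D(X, Y) = 1 - I(X;Y) / H(X,Y); a metric in [0, 1] used to cluster for TC."""
    h_xy = entropy(x, y)
    mi = entropy(x) + entropy(y) - h_xy      # I(X;Y) = H(X) + H(Y) - H(X,Y)
    return 1.0 - mi / h_xy if h_xy > 0 else 0.0

x = np.random.randint(0, 4, size=1000)
y = (x + np.random.randint(0, 2, size=1000)) % 4   # correlated with x
print(normalized_information_distance(x, y))
```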
3.2.3 Handling of additional constraints

CD supports both the irreducibility and the minimum jump constraints, as described in Sect. 2. For irreducibility, identifying whether a simpler combination exists requires testing whether a combination of any of the subsets of \({\mathcal {S}}_l\) and \({\mathcal {S}}_r\) is already contained in the answers. To avoid the cost of enumerating all \(O(2^{|{\mathcal {S}}_l|+|{\mathcal {S}}_r|})\) subsets during the execution of Algorithm 1, only the pairwise correlations between any two clusters \(C_l \in {\mathcal {S}}_l\) and \(C_r \in {\mathcal {S}}_r\) are examined. Precisely, we use \(l(C_l,C_r)\), which is already computed for Theorems 1–4. If there exist \(C_l, C_r\) s.t. \(l(C_l, C_r) \ge \tau\), then any solution that can be derived from further examining the combination \(({\mathcal {S}}_l,{\mathcal {S}}_r)\) cannot satisfy the irreducibility constraint. Therefore, \(({\mathcal {S}}_l,{\mathcal {S}}_r)\) can be discarded. The case of minimum jump is analogous: if any \(l(C_l, C_r) \ge UB - \delta\), where UB is calculated as in line 1 of Algorithm 1, then the combination is discarded. Considering only the pairwise correlations during the pruning process may lead to the inclusion of answers that do not satisfy the constraints. Such combinations are filtered from the query result before returning it to the user. Since the number of answers is typically in the order of a few tens to thousands, this final pass takes negligible time.

Both \(MP\) and \(TC\) have the property that the correlation can only increase when an extra variable is added (i.e., \(TC(X \cup \{y\}) \ge TC(X)\)). We refer to this property as monotonicity over increasing pattern length. This reduces the relevance of unconstrained \(MP\) and \(TC\) threshold queries: for any \(TC(X)\ge \tau\) with \(X\subset {\mathcal {V}}\), all supersets of X will be in the result set, making it more cluttered. Therefore, we disallow such queries for \(MP\) and \(TC\), defaulting to the addition of the irreducibility constraint. Note that we could still answer unconstrained queries on \(MP\) and \(TC\), essentially cost-free, by expanding the result set \({\mathcal {R}}\) as follows:
$$\begin{aligned} \left\{ X \cup A: A \subseteq {\mathcal {V}},\, X \in {\mathcal {R}},\, |A|\in [1,p-|X|]\subset {\mathbb {N}}^+ \right\} \end{aligned}$$
However, we refrain from doing so, as these additional results do not provide new insights to the user.

3.3 Top-k queries

The top-\(\kappa\) variant addresses the difficulty of choosing a suitable threshold by allowing users to set the desired number of results instead of \(\tau\). The answer then includes the \(\kappa\) combinations of vectors with the highest correlation that satisfy the correlation pattern. Assuming an oracle that could predict the \(\tau\) that would yield exactly \(\kappa\) results, top-\(\kappa\) queries could be transformed into threshold queries and answered with the standard CD algorithm. Since no such oracle exists, many top-\(\kappa\) algorithms (e.g., Fagin's threshold algorithm) start with a low estimate for \(\tau\) and progressively increase it by observing the intermediate answers. The performance of these algorithms depends on how fast they can approach the true value of \(\tau\), thereby filtering candidate solutions more effectively. The top-\(\kappa\) variant of CD (see Algorithm 3) follows the same idea. The algorithm has the same core as the threshold-based variant, and relies on three techniques to rapidly increase \(\tau\).

Top-\(\kappa\) pairwise correlations. First, at initialization, the input parameter \(\tau\) is set to the value of the \(\kappa\)'th highest pairwise correlation. Since all pairwise correlations are already computed for the empirical bounds, this incurs no additional cost.
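For instance, the initial \(\tau\) can be read directly off the cached pairwise correlation matrix; the following sketch shows one way to do this (the function name and the flattening of the upper triangle are our own choices).

```python
import numpy as np

def initial_tau(pairwise, kappa):
    """Return the kappa-th highest pairwise correlation from the cached
    pairwise correlation matrix (used to seed the running threshold)."""
    iu = np.triu_indices_from(pairwise, k=1)   # strictly upper-triangular entries
    values = np.sort(pairwise[iu])[::-1]       # descending order
    return values[min(kappa, len(values)) - 1]

# Example: correlations of 5 z-normalized vectors, kappa = 3.
rng = np.random.default_rng(0)
Z = rng.standard_normal((5, 100))
Z = (Z - Z.mean(axis=1, keepdims=True)) / Z.std(axis=1, keepdims=True)
pairwise = (Z @ Z.T) / Z.shape[1]
print(initial_tau(pairwise, kappa=3))
```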
Exploiting (soft) monotonicity. The second technique is inspired by the monotonicity property of \(MP\) and \(TC\), which implies that the multivariate correlation can only increase when an additional variable (i.e., vector) is added to the set (i.e., correlation pattern). Thereby, given the top-\(\kappa\) combinations of size s, \({\mathcal {R}}_s\), one can guarantee that any combination of size \(s+1\) that is a superset of a combination in \({\mathcal {R}}_s\) will have a correlation greater than the lowest correlation in \({\mathcal {R}}_s\), and will therefore lead to an increase of the threshold \(\tau\). This observation is exploited by exhaustively computing the correlations of all possible supersets of size \(s+1\) after finding \({\mathcal {R}}_s\), in order to quickly increase \(\tau\) before traversing the comparison tree with combinations of size \(s+1\) to construct \({\mathcal {R}}_{s+1}\). This technique proved very effective for all correlation measures (despite \(PC\) and \(ES\) not possessing the monotonicity property), as many of the supersets of \({\mathcal {R}}_s\) were also included in \({\mathcal {R}}_{s+1}\).

Prioritization of candidates. The last technique is an optimistic refinement of the upper bound, aiming to prioritize the combinations with the highest correlations. The algorithm is executed in two phases. In the first phase, similar to Algorithm 1, the algorithm traverses the comparison tree in a breadth-first manner (BFS) and computes the upper and lower bound per combination. However, it now artificially tightens the bounds by decreasing the value of the upper bound as follows:
$$\begin{aligned} UB_\text {shrunk} = (1-\gamma ) \frac{UB+LB}{2} + \gamma UB \end{aligned}$$
where \(\gamma \in [-1,1]\) is a shrink factor parameter with a default value of 0. The decisiveness of cluster combinations is now determined based on \((LB, UB_\text {shrunk})\), analogous to Algorithm 1, with the exception of the case where \(UB_\text {shrunk} \le \tau < UB\) (Algorithm 3, lines 3, 7, 12). In this case, the cluster combination is postponed for further inspection and placed in a priority queue based on the combination's critical shrink factor \(\gamma^*\), i.e., the minimum value of \(\gamma\) for which \(UB_\text {shrunk}\) surpasses \(\tau\) (lines 12–14). Intuitively, a small \(\gamma^*\) means that the combination (i.e., branch in the comparison tree) is more likely to lead to high correlation values, as a large portion of its bound range \((UB-LB)\) exceeds \(\tau\). In the second phase (lines 15–18), postponed branches are traversed in a depth-first manner (DFS) by invoking Algorithm 1 on each combination sequentially. Since \(\tau\) continuously increases, and the first branches are likely to contain the highest correlation values, most lower-priority branches do not need many cluster splits to reach decisive combinations. Similar to the previous optimizations, the value of \(\gamma\) only impacts the efficiency of the algorithm, and not the completeness of the results. Our experiments (see Sect. 5) have shown that values of \(\gamma\) around 0 lead to a good balance between DFS and BFS exploration.
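The shrunk upper bound and the critical shrink factor can be computed in closed form; the sketch below shows one way to derive \(\gamma^*\) by solving \(UB_\text{shrunk}(\gamma) = \tau\) for \(\gamma\) (our own algebraic rearrangement, assuming \(UB > LB\)).

```python
def shrunk_upper_bound(lb, ub, gamma):
    """UB_shrunk = (1 - gamma) * (UB + LB) / 2 + gamma * UB, with gamma in [-1, 1]."""
    return (1.0 - gamma) * (ub + lb) / 2.0 + gamma * ub

def critical_shrink_factor(lb, ub, tau):
    """Smallest gamma for which UB_shrunk reaches tau; combinations with a
    small gamma* are the most promising and are dequeued first."""
    return (2.0 * tau - ub - lb) / (ub - lb)

lb, ub, tau = 0.60, 0.90, 0.80
g_star = critical_shrink_factor(lb, ub, tau)
print(g_star, shrunk_upper_bound(lb, ub, g_star))  # UB_shrunk(gamma*) == tau
```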
3.4 Progressive queries

The prioritization technique of Algorithm 3 can also be used as the basis for a progressive threshold algorithm. Precisely, Algorithm 3 can be initialized with a user-chosen \(\tau\) and with \(\kappa \rightarrow \infty\). This prioritizes the combinations that yield the strongest correlations, and thus also the majority of correlations larger than \(\tau\). Prioritization is frequently useful in exploratory data analytics: the user may choose to let the algorithm run until completion, which yields results identical to Algorithm 1, or interrupt the algorithm after receiving sufficient answers. Recent work also established accurate (any-time) prediction of result completeness and distance for kNN queries. Although valuable, these methods require significant adaptations for our queries and are thus deferred to future work. We evaluate CD on all proposed query types in Sect. 5.2.

Algorithm 3: Top-\(\kappa\)-Query(\({\mathcal {S}}_l\), \({\mathcal {S}}_r\), Corr, \(\tau\), \(\kappa\), \(\gamma\))

4 Detection of multivariate correlations in streaming data

Data is frequently observed as a live stream. For example, in finance, asset prices may need to be monitored in real-time to detect strong correlations in a market, e.g., for portfolio diversification. In weather monitoring, real-time detection of correlations may reveal interesting short-term weather events, whereas in server monitoring, detection of unexpected correlations, e.g., on server requests originating from many different IP addresses, may reveal attempted attacks. Similarly, in neuroscience, real-time analysis of fMRI streams to detect correlations brings novel exploitation opportunities, e.g., for neurofeedback training [22, 31, 54].

Our streaming algorithm, called CDStream, builds on top of CD and maintains CD's solution over a sliding window as new data arrives. CDStream does this efficiently by storing the decisive cluster combinations in a custom index, which is used after each streaming update to quickly identify the potential changes to the result set. Clearly, the main challenge is to construct, maintain, and utilize this index efficiently, in order to process streams with high update rates. CDStream supports the \(PC\) and \(ES\) correlation measures. In the remainder of this section, we explain the underlying stream processing model and the CDStream algorithm in detail. We also present an extension of CDStream named CDHybrid, which dynamically switches between CDStream and repeated execution of CD in order to adapt to sudden events and concept drift, and to improve robustness.

4.1 Stream processing model

CDStream builds on the basic windows model, which is widely used for processing data streams, e.g., in [19, 24, 51, 53]. The model works as follows: the sliding window, of length w, is partitioned into a set of smaller, fixed-length sub-windows (often called basic windows), each of length b. All stream updates received within a basic window are processed (typically aggregated) to generate a single value for that basic window. In other words, the basic windows define the time resolution handled by the algorithm. The introduction of basic windows offers several benefits: (a) it makes the results robust to outliers, noise in the data, and time series with small-period oscillations, e.g., stocks with high trading volumes, (b) it allows for handling time-misaligned and out-of-order arrivals, which are fairly common in real-life data streams (e.g., stock ticks, sensors with variable measurement intervals, weak/slow network connections), and (c) it allows efficient handling of streams with high update rates. At the same time, this approach introduces a—potentially significant—delay on the results, which can be as large as b time units.
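As a concrete illustration of the basic windows model (not of CDStream's actual implementation), the sketch below aggregates raw arrivals into per-basic-window values and keeps only the last \(w/b\) of them; the choice of the mean as the aggregate is an assumption.

```python
from collections import deque

class BasicWindowStream:
    """One stream under the basic windows model: raw arrivals are averaged
    per basic window of length b, and the sliding window keeps w/b aggregates."""

    def __init__(self, w, b):
        self.num_basic = w // b
        self.aggregates = deque(maxlen=self.num_basic)  # completed basic windows
        self.running = []                               # arrivals of the open window

    def add_arrival(self, value):
        self.running.append(value)

    def close_basic_window(self):
        """Called every b time units: finalize the currently open basic window."""
        if self.running:
            self.aggregates.append(sum(self.running) / len(self.running))
            self.running = []

s = BasicWindowStream(w=100, b=20)
for t, v in enumerate([1.0, 2.0, 3.0, 4.0] * 10):
    s.add_arrival(v)
    if (t + 1) % 5 == 0:          # pretend 5 arrivals span one basic window
        s.close_basic_window()
print(list(s.aggregates))
```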
This delay becomes limiting when processing periods of high activity (e.g., high-volatility periods of a stock market, or a network under a DDoS attack), where it is critical that the user observes intermediary results as soon as possible.

Fig. 3: Example of a stream representation with the BW\(^{+}\) model, with \(w=100\), \(b=20\), and \(epoch=5\). Red denotes the index/position of each basic window; the blue numbers correspond to the values of the corresponding windows. The updates in the running basic window and running epoch are shown in green.

CDStream alleviates this limitation by disentangling the period of recomputing the results (the key reason behind the stale results) from the length of the basic window b. The model, hereafter called BW\(^{+}\), offers an extra knob to the user, called the epoch, which controls the acceptable delay/lag for the algorithm to account for new data. When the epoch is set equal to b, BW\(^{+}\) degenerates to the standard basic windows model. However, by setting the epoch to be less than b, the algorithm is instructed to recompute the results more than once within the period of a basic window, accounting also for the new arrivals in the incomplete basic window. The aggregation unit remains unchanged, i.e., the basic window of size b, which allows meaningful handling of time misalignment, noise, and outliers. Furthermore, completed basic windows are not impacted by the epoch—hence their aggregate values are not recomputed. However, whenever an epoch is completed, the algorithm updates the aggregate value of the incomplete basic window and updates the multivariate correlations to include these new values.

As an example, consider the stream depicted in Fig. 3. Assume that the epoch is set to 5 msec, and the basic and sliding window lengths, b and w, are set to 20 and 100 msec, respectively. Then, at time 100, BW\(^{+}\) will have results identical to the standard basic windows model. At time 105, BW\(^{+}\) will recompute the results, accounting for the values that arrived in basic windows 1–5 and within the first five milliseconds of the (still incomplete) basic window 6. Therefore, if in the period between times 100 and 105 there were drastic changes that led to updates of the results, these will be detected by BW\(^{+}\). The same process will be repeated at times 110 and 115, whereas at time 120, basic window 1 will expire and the results of BW\(^{+}\) will again become identical to the output of the standard basic windows model (not shown in the figure).

It is important to note that BW\(^{+}\) with an epoch less than b is not equivalent to running the standard basic windows algorithm with \(b = epoch\). BW\(^{+}\) keeps the completed basic windows intact—it does not change their boundaries when an epoch is complete. As we will explain in the following section, this is leveraged by CDStream to optimize performance by avoiding storing or recomputing fine-grained partial results. We come back to the properties of BW\(^{+}\), and their impact on the computational efficiency and the accuracy/completeness of the results of the algorithm, in Sect. 4.4.

Algorithm 4: HandleEpoch(\(S, A, C, {\mathcal {I}}, \tau\))

Time-based vs arrival-based epochs. Even though our previous discussion assumed that epochs are defined in time units (seconds, minutes, etc.), this is not a requirement of the model. Epochs can also be defined in numbers of arrivals (e.g., every 10 arrivals).
A definition based on the number of arrivals may be preferred in use cases where the arrival rate of the streams changes abruptly, e.g., during a market crash.

4.2 Algorithm core

We start with a high-level description of CDStream before going over the details of the underlying custom index, which is instrumental for increasing the throughput of the algorithm. CDStream receives as input the set of streams and the configuration parameters of the algorithm: the lengths of the sliding window w and the basic window b, the epoch, and the query threshold. The algorithm starts by executing CD on the last w arrivals of the given streams, and reports the initial results to the user. A byproduct of CD is an upper-triangular matrix that stores the pairwise correlations between all pairs of streams. We will refer to this as the pairwise correlations cache.

Then, CDStream enters the monitoring phase. In this phase, whenever an epoch is completed, the algorithm (shown in Algorithm 4) first detects all streams that have at least one update and recomputes the corresponding aggregate for the last (potentially still incomplete) basic window (line 2). It then refreshes the cache of pairwise correlations to account for the new arrivals (lines 3–4). Notice that this step does not recompute the correlations from scratch; it updates them from the previous correlation values and the change in the aggregate value of the running basic window. Following that, the algorithm goes through all updates within the epoch and checks whether they could lead to changes in the result set (either new additions to the result or removals). This process is supported by a custom-built index, which returns all decisive cluster combinations with bounds impacted by the newly arrived updates. These impacted bounds are then reassessed using Algorithm 1, in order to detect the potential changes in the result set and to update the index (Algorithm 4, lines 5–9). The described steps are repeated for \(\lfloor b/epoch \rfloor\) epochs, after which a basic window is completed. In that case, CDStream additionally removes the expired basic window, adds the newly-completed basic window, and keeps repeating the above process (not depicted in Algorithm 4). In the remainder of this section, we look at the custom index and how it is maintained and utilized by CDStream.

The DCC index. In short, the index stores a collection of thresholds that, when fired, signify a potential change in the answer set. Particularly, the core idea is to store the decisive cluster combinations (abbreviated as DCCs) for all clusters, and to re-validate only these after every stream update. Recall that each stream \(\textbf{s}\) belongs to a hierarchy of clusters. For example, vector \(\textbf{e}\) in Fig. 2b belongs to \(C_2\) and \(C_7\). For a stream \(\textbf{s}\), we denote the set of these clusters as \({\mathcal {C}}(\textbf{s})\). By construction, the algorithm takes a decision concerning any stream \(\textbf{s}\) based solely on the decisive combinations that include a cluster in \({\mathcal {C}}(\textbf{s})\) (see the theoretical results in Sect. 3.2.1). As long as those decisive combinations are still valid, the final result remains correct and complete. A naive approach would be to construct an inverted index that maps each cluster to the decisive cluster combinations it participates in.
Then, after any update of a stream \(\textbf{s}\), we would look at all clusters in \({\mathcal {C}}(\textbf{s})\), and find and re-validate all their decisive combinations from the index. Such an index could become too slow for some use cases, particularly for large correlation patterns, due to the potentially large number of decisive combinations associated with each cluster that need to be checked. Two key observations can be exploited to optimize the use of this index: (a) the empirical correlation bounds described in Sect. 3.2.2 do not depend on all streams contained in the clusters, but are determined solely by \(l(C_i, C_j)\) and \(u(C_i, C_j)\), the minimum and maximum pairwise correlations between the involved clusters in the combination, and (b) the previous holds independently of the number of clusters contained in the left and right side of the cluster combination. Therefore, the DCC index is designed around these minimum and maximum pairwise correlations.

Figure 4a depicts an example of the internal organization of the DCC index. At the outer layer, the index is an inverted index that maps each stream \(\textbf{s}\) to a list of extrema pairs. A pair of streams is called an extremum pair if there exists at least one cluster combination for which this pair constitutes a determining pair, i.e., it is the pair determining the value of \(l(C_i, C_j)\) or \(u(C_i, C_j)\). For example, in Fig. 2c, the minimum and maximum extrema pairs for \((C_2, C_3)\) are \(\langle \textbf{h},\textbf{g}\rangle\) and \(\langle \textbf{b},\textbf{f}\rangle\), determining the minimum value \(l(C_i, C_j)\) and the maximum value \(u(C_i, C_j)\), respectively. At the inner layer, for each extremum pair ep we keep a list of all opposite clusters, i.e., the clusters that do not include \(\textbf{s}\) and participate in at least one decisive cluster combination having ep as an extremum pair. For example, focusing on \(\textbf{c}\) in Fig. 4a, we see that one of its extrema pairs is \(\langle \textbf{b},\textbf{f} \rangle\), which is reused by both clusters \(C_2\) and \(C_8\). The clusters are stored in decreasing order of size, i.e., the cluster at position \(i+1\) is a sub-cluster of the cluster at position i. For each cluster, we store all decisive combinations, and whether these are positive or negative. In our running example, for cluster \(C_2\) we have a negative combination \((C_2, C_3)\) and a positive combination \((C_1, (C_2, C_3))\). This way of indexing and querying ensures that we only re-validate DCCs with an actual change in bounds, and that this set is complete (i.e., we do not miss any violations).

When an update is observed at stream \(\textbf{s}\), the first step is to use the index for retrieving all extrema pairs that involve a cluster in \({\mathcal {C}}(\textbf{s})\). For each extremum pair, we check in the pairwise cache whether the pair has changed as a result of the last update. This will happen, e.g., if the update of \(\textbf{s}\) has caused \(\textbf{s}\) to form a new extremum pair with another stream, replacing an older pair. If the extremum pair has not changed, we can skip all contents grouped under this pair altogether. In our running example, if \(\textbf{c}\) has been updated, but \(\langle \textbf{b},\textbf{f}\rangle\) is still a valid extremum pair for cluster \(C_2\), no further validations are needed for any of the combinations involving \(C_2\). Furthermore, no validations are required for the combinations involving \(C_8\) (or any other cluster following \(C_2\) with the same extremum pair), since \(C_8\) is a strict subset of \(C_2\) (recall that the clusters are ordered based on their size).
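A minimal sketch of this two-layer organization and of the skip logic for unchanged extrema pairs is given below; the data-structure layout and names (`DCCIndex`, `affected_dccs`) are illustrative assumptions rather than the paper's implementation, and the ordering of sub-clusters is flattened for brevity.

```python
from collections import defaultdict

class DCCIndex:
    """Outer layer: stream -> extrema pairs; inner layer: extremum pair ->
    (cluster, DCC) entries for the decisive cluster combinations."""

    def __init__(self):
        # stream id -> {extremum pair -> [(cluster, dcc), ...]}
        self.outer = defaultdict(lambda: defaultdict(list))

    def register(self, stream, ext_pair, cluster, dcc):
        self.outer[stream][ext_pair].append((cluster, dcc))

    def affected_dccs(self, stream, pair_still_valid):
        """Collect DCCs that must be re-validated after an update of `stream`.
        `pair_still_valid(pair)` consults the pairwise cache; if the extremum
        pair is unchanged, everything grouped under it is skipped."""
        to_check = []
        for pair, entries in self.outer[stream].items():
            if pair_still_valid(pair):
                continue                      # skip the whole group
            to_check.extend(dcc for _, dcc in entries)
        return to_check

idx = DCCIndex()
idx.register("c", ("b", "f"), "C2", ("C2", "C3"))          # negative DCC
idx.register("c", ("b", "f"), "C2", ("C1", ("C2", "C3")))  # positive DCC
print(idx.affected_dccs("c", pair_still_valid=lambda p: False))
```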
If, on the other hand, the update has invalidated an extremum pair, the algorithm drills into the contents of the inner layer, and goes over the clusters sharing this extremum pair. If, e.g., \(\textbf{c}\) was updated and \(\langle \textbf{b},\textbf{f}\rangle\) is no longer an extremum pair for \(C_2\), we need to check and adjust all combinations stored for \(C_2\) (in this example, \((C_2, C_3)\) and \((C_1, (C_2, C_3))\)). This is done by adjusting the extrema pairs and bounds using Theorems 1 and 3, re-validating whether the combination is still decisive (positive or negative), and updating the solution accordingly. In this step, the algorithm may even need to break a cluster into two or more sub-clusters, until it again reaches decisive combinations. However, as soon as we find a cluster for which the extremum pair does not change after the update, we can move to the next extremum pair.

Fig. 4: a Visualization of the decisive combination index; b number of results and execution time per basic window, with BW\(^{+}\) and the standard basic window model. BW\(^{+}\) is configured with epoch size 1. The results correspond to the Stocks dataset, with \(n=1000\), \(w=120000\), and \(b=20\).

4.3 User constraints and top-\(\kappa\) queries

To support the minimum jump and irreducibility constraints, additional triggering functionalities, described below, are added to the index of CDStream.

Irreducibility constraint. Let \(X,Y,X',Y'\) denote sets of clusters. Consider combinations \((X,Y)\) and \((X'\subseteq X, Y'\subseteq Y)\), with \(|X\cup Y| > |X'\cup Y'|\); irreducibility excludes \((X,Y)\) from the results if \((X', Y')\) is included. We need to detect two additional cases: (1) \((X,Y)\) needs to be removed from the result set because the correlation of \((X',Y')\) just surpassed \(\tau\), and (2) \((X,Y)\) needs to be added to the result set because \((X',Y')\) was just removed from the result set, as its correlation dropped below \(\tau\). Both cases can be triggered by an update of a vector from X or Y (hence, also from \(X'\) or \(Y'\)). Without the irreducibility constraint, the index contains the following extrema pairs: (a) for the negative decisive combinations, the pairs required for upper-bounding the correlation, and (b) for the positive decisive combinations, all pairs required for lower-bounding the correlation. The irreducibility constraint additionally requires monitoring the upper bounds of positive decisive combinations (e.g., for case (1), an increase of \(Corr(X',Y')\) may cause \(Corr(X',Y') > \tau\) to hold, which means that \((X,Y)\) needs to be removed from the result set) and the lower bounds of negative decisive combinations with any \(Corr(X',Y') > \tau\). These decisive combinations are also added to the index, under the extrema pairs, and checked accordingly.

Minimum jump constraint. Monitoring for the minimum jump constraint is analogous to the irreducibility constraint. The following cases need to be considered: (1) \((X,Y)\) needs to be removed from the result set because \(Corr(X',Y')+\delta >Corr(X,Y)\), and (2) \((X,Y)\) needs to be added to the result set because \(Corr(X',Y') + \delta < Corr(X,Y)\). Both cases are identified using the method discussed for monitoring the irreducibility constraint.
Top-\(\kappa\) queries. Recall that CDStream is initialized with the result of CD. For a top-\(\kappa\) query, CDStream queries CD for a slightly larger number of results \(\kappa '=b_k\kappa\), where \(b_k\) is a small integer greater than 1. CDStream finds the minimum correlation in these results, and uses it as the threshold \(\tau\) in the streaming algorithm. As long as the size of the result set is at least \(\kappa\), the true top-\(\kappa\) results will always have a correlation higher than \(\tau\) and will be contained in the top-\(\kappa '\) results maintained by the algorithm. Therefore, the top-\(\kappa\) out of the detected top-\(\kappa '\) correlations are returned to the user. The scaling factor \(b_k\) controls the tradeoff between the robustness of the streaming algorithm for top-\(\kappa\) queries and its efficiency. Setting \(b_k=1\) may lead to the situation that, due to an update, fewer than \(\kappa\) results exist with correlation greater than or equal to \(\tau\). CDStream then fails to retrieve enough results, and resorts to CD for computing the correct answer and updating its index. Conversely, a large \(b_k\) leads to a larger number of intermediary results, and to more effort for computing the exact correlations of these results, which is necessary for retaining the top-\(\kappa\) results. Our experiments with a variety of datasets have shown that \(b_k=2\) is already sufficient to provide good performance without compromising the robustness of CDStream. We evaluate CDStream in Sect. 5.3.

4.4 Impact of the extended basic window model on CDStream

Recall that CDStream leverages the proposed extended basic window stream processing model (abbreviated as BW\(^{+}\)) in order to identify updates of the result set earlier. By construction, BW\(^{+}\) is at least as good as the standard basic windows model in terms of completeness of the result set, since it replicates its behavior every time a basic window is completed. The further improvement that we can expect from BW\(^{+}\)—compared to the standard basic windows model—depends on the volatility of the input streams. In periods where the input streams contain negligible changes, BW\(^{+}\) will detect very few additional correlations (if any) compared to the standard model. In periods of high volatility, such as market crashes, BW\(^{+}\) will detect updates and new correlations faster.

To examine the importance of BW\(^{+}\) and evaluate its impact on the computational efficiency of CDStream, we compared the results of CDStream with and without BW\(^{+}\). Figure 4b presents the number of results (left axis) and the runtime (right axis) of CDStream for the two models. The results correspond to processing a stream with minute-granularity prices of 1000 stocks on 16 March 2020 (the dataset is further described in Sect. 5). This day was selected because it saw the largest price drop of the 2020 Covid crash. As ground truth, we used the results of CD on the same input dataset (without basic windows), recomputed at the end of each epoch. We see that BW\(^{+}\) is able to identify jumps in the number of results significantly earlier than BW. Comparison with the ground truth revealed that BW\(^{+}\) maintained a recall of 97.8% during this period, while the recall of BW decreased to 69.0%. From epoch 0 to 60 (prior to the crash), the recall of BW\(^{+}\) was 100%.

It is also interesting to consider the execution time per basic window. Since the new model subsumes the basic window model, it is slightly more expensive to maintain.
However, the extra computation is only around 10%, for the more fine-grained epoch. This extra computation can of course be adjusted by increasing the epoch length. Therefore, all experiments hereafter focus only on the BW\(^{+}\) model.

4.5 CDHybrid: combining CD and CDStream

Recall that CDStream handles the stream updates in epochs. The algorithm exhibits high performance when the updates do not drastically change the result set. In streams where the answer changes abruptly, it may be more efficient to simply run CD after the completion of each epoch and recompute the solution from scratch, instead of maintaining CDStream's index and the result through time. CDHybrid is an algorithm that orchestrates CD and CDStream, transparently managing the switch between the two algorithms based on the properties of the input stream.

To decide between CD and CDStream, CDHybrid needs to estimate the cost of both approaches for handling an epoch. A good predictor for this is the number of arrivals in the epoch: more arrivals tend to cause more changes in the result, which take longer for CDStream to handle. Therefore, CDHybrid starts with a brief training period, during which it collects statistics on the observed arrival counts and the execution times of the two algorithms. Simple (online) linear regression is then used to model the relationship between execution time and the observed number of arrivals. Note that the coefficients of a simple linear regression model can be maintained in constant time and space. Therefore, the regression model is continuously updated, even after the training phase. Switching from one algorithm to the other works as follows.

Switching from CDStream to CD. We cache the current results of CDStream (we will refer to these as \({\mathcal {R}}_\text {CDStream}\)) and stop maintaining the index. When an epoch is completed, the vectors are updated and passed to CD for computing the result.

Table 3: Default parameters for the experiments with static and streaming data.

Switching from CD to CDStream. Since the stream index was not updated for some time, we need to update it before we can use it again. We compute the symmetric difference \(\varDelta\) of the current results of CD (denoted as \({\mathcal {R}}_\text {CD}\)) with the last results of CDStream, \({\mathcal {R}}_\text {CDStream}\). Any result r contained in \(\varDelta \cap {\mathcal {R}}_\text {CDStream}\) is due to a positive decisive combination that has now become negative, whereas any r contained in \(\varDelta \cap {\mathcal {R}}_\text {CD}\) leads to a new positive decisive combination. In both cases, the algorithm updates the index accordingly. There is also the case that a decisive combination becomes indecisive. In this case, the algorithm recursively breaks the combination further, as shown in Algorithm 1. We evaluate CDHybrid in Sect. 5.3.3.
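The cost model can be maintained with constant-space running sums; the sketch below shows one way to implement such an online simple linear regression and a switch decision. The class name, the toy measurements, and the decision rule are our own illustrative choices, not CDHybrid's actual code.

```python
class OnlineLinearRegression:
    """y ~ a + b * x, with coefficients maintained from running sums only."""

    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def update(self, x, y):
        self.n += 1
        self.sx += x; self.sy += y
        self.sxx += x * x; self.sxy += x * y

    def predict(self, x):
        var = self.n * self.sxx - self.sx ** 2
        b = (self.n * self.sxy - self.sx * self.sy) / var if var else 0.0
        a = (self.sy - b * self.sx) / self.n if self.n else 0.0
        return a + b * x

# One model per algorithm: predicted epoch cost as a function of the arrival count.
cd_cost, stream_cost = OnlineLinearRegression(), OnlineLinearRegression()
for arrivals, t_cd, t_stream in [(100, 0.8, 0.2), (500, 0.9, 0.7), (1000, 1.0, 1.6)]:
    cd_cost.update(arrivals, t_cd)
    stream_cost.update(arrivals, t_stream)

def choose_algorithm(expected_arrivals):
    """Pick the algorithm with the lower predicted cost for the next epoch."""
    if cd_cost.predict(expected_arrivals) < stream_cost.predict(expected_arrivals):
        return "CD"
    return "CDStream"

print(choose_algorithm(200), choose_algorithm(2000))
```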
5 Evaluation

The purpose of our experiments was twofold: (a) to assess the scalability and efficiency of our methods, and (b) to compare them to a series of baselines. The baselines include the state-of-the-art algorithms for multivariate correlation discovery [2, 3], two variants of an exhaustive search algorithm, as well as multiple modern database management systems (DBMS) that can be used to detect multivariate correlations. Our evaluation does not consider the practical significance of multivariate correlations, as this was already extensively demonstrated in earlier works and case studies from different domains, e.g., [2, 3, 29] (see Sect. 1 for more examples). Still, to ensure that our evaluation is conducted on data where the detection of multivariate correlations has practical significance, we also evaluate our methods on the data used in these past case studies (or data of the same type, where the original data was inaccessible).

Hardware and implementations. All experiments, except for the comparison with the DBMS systems, were executed on a server equipped with two Xeon E5-2697v2 12-core 2.70 GHz processors and 500 GB RAM. For CoMEtExtended and CONTRa, we used the original implementations, which were kindly provided by the authors [2, 3]. We additionally configured the implementation of CoMEtExtended to use all available cores of our server. Consequently, all implementations, except CONTRa and two DBMS, were multi-threaded. Algorithm performance comparisons are exclusively made under matching execution styles (e.g., comparing single-threaded CD only to CONTRa and the DBMS). All implementations, except for the UNOPT exhaustive search baseline, cached and reused the pairwise correlation computations, using our results presented in Sect. 3.2. This caching was always beneficial for performance. The reported execution time for CD and CDStream corresponds to the total execution cost, including the steps of normalizing, clustering, and calculating the pairwise correlations. All reported results correspond to medians over 10 repetitions. Due to permission constraints on the server, the DBMS experiments were executed on another machine, with an Intel i7-10750H 12-core 2.60 GHz processor and 32 GB RAM, running Ubuntu 22.04.1 LTS.

Datasets. We present extensive evaluation results on seven datasets, coming from distinct disciplines (neuroscience, finance, crypto trading, climate science, and machine learning). See GitHub for download links, pre-processing steps, instructions, and code for reading and processing the data.

Stocks. Daily closing prices of 28678 stocks over the period Jan. 2, 2016 to Dec. 31, 2020, leading to 1309 observations. For the streaming experiments, we used the minute closing prices of the stocks.

fMRI. Functional MRI data of a person watching a movie. Five datasets were extracted by mean-pooling the data with kernels of different sizes, leading to 237, 509, 1440, 3152, and 9700 time series, respectively, all with 5470 observations. A similar dataset was used in an earlier case study.

SLP & TMP. Segment of the ISD weather dataset containing sea level pressure (SLP) and atmospheric temperature (TMP) readings of 3222 sensors. CD was evaluated on the daily average values between January 1, 2016 and December 31, 2020, leading to 2927 readings per time series. Streaming experiments were run on hourly sensor measurements.

SLP-small. Sea level pressure data, as used in an earlier case study. The dataset contains 171 time series, each with 108 observations.

Crypto. 3-hour closing prices of 7075 crypto-currencies, each with 713 observations, covering the period from April 14, 2021 to July 13, 2021. Streaming experiments were run on minute-level closing prices.

Deep. A billion vectors of length 96, obtained by extracting the embeddings from the final layers of a convolutional neural network.

Whenever needed, we obtain subsets of these datasets with random sampling. To avoid repetition, in the following we mention the experimental configuration only when it deviates from the default configuration described in Table 3. The remainder of this section is organized as follows.
We start with a comparison of our methods to the baselines (Sect. 5.1), and then conduct an extensive sensitivity analysis of CD (Sect. 5.2) and CDStream (Sect. 5.3).

5.1 Comparison to the baselines

We start by comparing CD to the baselines: (a) two algorithms based on exhaustive search, (b) commercial and open-source modern database management systems, (c) CoMEtExtended, and (d) CONTRa. Our experiments compare both the efficiency and the recall of all systems for threshold queries.

Fig. 5: Scalability of CD and the exhaustive baselines for threshold queries on subsets of Stocks. The Y axis is in logarithmic scale. a \(ES(1,2), \tau =0.85\); b \(MP(3), \tau =0.85\); c \(PC(1,2), \tau =0.85\); d \(TC(3), \tau =1.7\).

Comparison to exhaustive search baselines. No other solution covers CD's range of queries and correlation measures. To provide a reference for the complexity of the problem, we constructed two baselines (UNOPT and OPT) that exhaustively compute all multivariate correlations by iterating over all possible combinations of vectors. OPT reuses cached pairwise correlations (exploiting our results presented in Sect. 3.2), whereas UNOPT recomputes them for every combination. This comparison only considers execution time, as all methods have perfect precision and recall. Figure 5 plots the time required by CD, UNOPT, and OPT to execute a threshold query on different subsets of Stocks, with sizes up to 12,800 vectors. All algorithms were given at most 8 h to complete. The thresholds were selected such that all correlation measures return approximately the same number of results on each dataset. Our first observation is that the execution time of CD grows at a much slower rate than that of both baselines, for all correlation measures. Furthermore, the difference in efficiency increases with the dataset size, which stresses the importance of an efficient solution like CD. Consequently, CD can handle significantly larger datasets than the baselines. Comparing OPT to UNOPT, we see that caching the pairwise correlations improves performance for \(ES\), \(PC\), and \(MP\), but not for \(TC\). This is because \(TC\) is not amenable to the caching optimization, i.e., the \(TC\) of three or more vectors cannot be expressed as a linear combination of the pairwise \(TC\) values. Yet, even for the other three measures, OPT still times out for the larger datasets. The fact that CD scales better than OPT indicates that its core performance boost comes from the way it utilizes the cluster bounds.

Comparison to contemporary DBMS. CD's operation can be expressed as an SQL query: Fig. 6a shows a \(PC(1,2)\) threshold query on a (z-normalized) table named "fmri". This observation allows us to compare the performance of CD with general-purpose state-of-the-art RDBMS. Our experiment used four off-the-shelf databases, all configured with RAM-stored tables for an equitable evaluation, given CD's RAM usage. DBMS1 and DBMS3 supported array attributes, so we developed array functions for the Pearson correlation calculation. The other DBMS stored the data in long format (with columns corresponding to a primary key, vector id, time, and value), using an SQL clause for the Pearson correlation. Due to limited multi-threading support, all approaches, including CD, ran single-threaded with an eight-hour query limit. Figure 6b shows the execution times for each system to detect \(PC(1,2)\) on different resolutions of the fMRI dataset.
The reported DBMS times do not include the one-off costs of loading the dataset into the DBMS and creating the indices. We see that CD outperforms all DBMS by several orders of magnitude, and the difference between CD and the baselines increases with the dataset size. In particular, the time complexity of all DBMS appears to follow \(O(n^3)\), corresponding to a triple nested loop (n is the number of vectors), whereas CD's execution time grows at a much slower rate. Furthermore, the results indicate that all DBMS perform similarly to an exhaustive search algorithm, iterating over the full search space.

Fig. 6: a \(PC(1,2)\) threshold query, implemented in SQL; the correlation measure is implemented as a stored function. b Comparison of CD with contemporary DBMS, \(PC(1,2)\), \(\tau =0.8, \delta =0.1\), fMRI.

Table 4: Comparison of CD with CoMEtExtended on the SLP-small dataset: execution time (s) and number of retrieved results.

Comparison to CoMEtExtended. Our next experiment was designed to compare CD with CoMEtExtended. CoMEtExtended's goal differs slightly from our problem statement. First, CoMEtExtended is approximate, without guarantees. Still, its recall can be tuned by the parameter \(\rho_\text{CE}\), which takes values between \(-1\) and 1; values around 0 offer a reasonable tradeoff between efficiency and recall. In contrast, CD delivers complete answers, making both execution time and recall relevant in our comparison. Second, CoMEtExtended focuses on maximal strongly correlated sets, whereas CD finds all such sets (up to a specified cardinality). To ensure a fair comparison for CoMEtExtended, we also considered all subsets of the sets returned by CoMEtExtended. When a subset of a CoMEtExtended answer satisfied the query, we added it to the results, thereby increasing CoMEtExtended's recall. This post-processing step was not included in the execution time of CoMEtExtended, i.e., it did not penalize its performance.

Table 4 presents the number of results and the execution time of CoMEtExtended and CD on the same dataset (SLP-small) and with the configuration parameters used in the original CoMEtExtended study. We only consider the \(MP\) measure, since CoMEtExtended does not support the other three measures. We see that CD is consistently faster than CoMEtExtended—at least an order of magnitude—and often returns substantially more results. Indicatively, for \(MP(4)\), CoMEtExtended with \(\rho_\text{CE}=0\) (resp. \(\rho_\text{CE}=0.02\)) is one to two (resp. two to three) orders of magnitude slower than CD. Notice that for queries with \(\delta = 0.1\), \(\rho_\text{CE} = 0.02\), and \(\tau = 0.4\), CoMEtExtended also found 281 results with 6 vectors, and one with 7. These amount to \(\sim\)0.3% of the total number of discovered results. They were not discovered by CD, which was executed with \(p_l = 5\), prioritizing the simpler and more interpretable results. Nevertheless, even for these settings, CD still found 25% more results than CoMEtExtended, in one third of the time. Moreover, the case studies presented in [2, 3], amongst others on this dataset, demonstrate the usefulness and significance of relatively simple relationships, involving at most four time series. Other works on multivariate correlations also emphasize the discovery of relationships that do not contain too many time series. For these cases, with a fixed \(l_{\max}\), CD is guaranteed to find a superset of CoMEtExtended's result set, at a fraction of its cost.
Table 5: Comparison of CD with CONTRa on the fMRI dataset (\(n=9700\)): execution time (s) and number of retrieved results.

Comparison to CONTRa. We also compared CD to CONTRa for the discovery of tripoles (i.e., \(PC(1,2) \ge \tau\)). To ensure a fair comparison, CD was parameterized to find the same results as CONTRa and to use the same hardware, as follows: (a) CD was executed with \(\tau =0\), i.e., pruning was solely due to the minimum jump constraint, and (b) CD was configured to use at most one thread/core, since the implementation of CONTRa was single-threaded. CONTRa was configured to return exact results. Table 5 lists the execution time and the number of results per method. We see that CD is more efficient than CONTRa for detecting identical results, even with \(\tau =0\). However, \(\tau =0\) yields an impractically large number of results. As such, we also evaluate CD with \(\tau = 0.5\) (corresponding to the lowest correlation reported in the case studies of CONTRa), and with \(\tau =0.9\), which gives a more reasonable number of results. This further decreases the execution time of CD by one to two orders of magnitude, while preventing an overwhelming number of results.

Fig. 7: Effect of \(\kappa\) values and dataset on execution time, with \(ES(1,2)\), \(MP(3)\), \(PC(1,2)\), \(TC(3)\). a Effect of \(\kappa\), Stocks; b effect of \(\kappa\), fMRI; c top-\(\kappa\) on all datasets.

5.2 CD on static data

The following experiments are designed to evaluate the efficiency of CD under different conditions (configurations, datasets, and queries). We first examine the impact of CD's configuration parameters (the shrink factor and the clustering distance) on CD's efficiency. We do not consider recall, since CD is exact and always gives complete answers. Then, we evaluate the performance of CD for top-\(\kappa\) and threshold queries.

5.2.1 Optimizing configuration parameters

We tested the impact of the values of \(\gamma\) and K (shrink factor and number of sub-clusters per cluster) on CD's efficiency for different configurations. The results showed that both very small (\(\gamma = -0.8\)) and very large (\(\gamma = 0.8\)) shrink factor values lead to sub-optimal performance of CD (roughly 38–72% slower than the optimal \(\gamma\) value), as they delay the increase of the running threshold \(\tau\). Similarly, extreme values of K also led to sub-optimal performance, with executions being as much as 2x slower for ES, PC, and MP queries, and up to 4x slower for TC queries, compared to the execution times with the optimal K values. Detailed results can be found in the technical report [12]. However, setting \(\gamma =0\) and \(K=10\) led to near-optimal performance at all configurations—at most \(17\%\) worse than the optimal performance in each case. Therefore, for the following experiments we set \(\gamma =0\) and \(K=10\).

5.2.2 Top-\(\kappa\) queries

Effect of \(\kappa\). Figure 7a, b show the execution time of CD for different values of \(\kappa\) for Stocks and fMRI. We see that a decrease of \(\kappa\) typically leads to increased efficiency. A low value of \(\kappa\) allows for a rapid increase of the running threshold \(\tau\), leading to more aggressive pruning at Algorithm 1, line 4. Interestingly, this effect is not equally visible among all considered correlation measures. For example, a reduction of \(\kappa\) gives a significant boost to \(ES\), but a much smaller boost to \(MP\).
This discrepancy is attributed to the correlation values in the result set and the tightness of the bounds. Indicatively, in this experiment, the lowest \(MP\) value in the result set only decreases from 0.998 (top-100) to 0.9972 (top-500) on the Stocks dataset. In contrast, the lowest \(ES\) value in the result set decreases from 0.694 (top-100) to 0.672 (top-500) on the same dataset.

Effect of the correlation pattern. Table 6 presents the execution time of CD for different correlation patterns. As expected, increasing the complexity of the correlation pattern leads to an increase in computational time. However, even though the size of the search space grows as \(O\big(\binom{n}{p_l + p_r}\big)\), the execution time of CD grows at a much slower rate. Indicatively, for the fMRI dataset, the search space grows by 5 orders of magnitude between \(PC(1,2)\) and \(PC(1,4)\), whereas CD's execution time increases by only three orders of magnitude, indicating efficient pruning of the search space.

Table 6: Execution times of CD with different correlation patterns on top-\(\kappa\) queries (seconds).

Experiments with different datasets. Figure 7c shows the execution time of CD for all correlation measures on different datasets. We see that the efficiency of CD does not vary significantly for \(ES\) and \(PC\). However, performance for queries involving \(TC\) fluctuates significantly across datasets. This is again attributed to the inherent characteristics of the datasets: analysis of the distributions of the multivariate correlation values revealed that the correlations in each dataset follow gamma-like distributions. For \(TC\), it is sometimes the case that the mean of this distribution is very close to the minimum correlation in the answer set, i.e., the correlation of the top-\(\kappa\)'th answer. In other words, total correlation is not sufficiently discriminating on these datasets. Such situations could be prevented by performing exploratory analysis on the correlation value distribution of a small sample of the dataset. If this analysis does not indicate exceptionally high correlations in the dataset, the data analyst could opt for an alternative correlation measure.

5.2.3 Threshold queries

Effect of threshold. Figure 8 shows the effect of the threshold \(\tau\) on the execution time of CD for the Stocks (left Y-axis) and fMRI datasets (right Y-axis), for each correlation measure and for different constraints. Our first observation is that increasing the threshold leads to higher efficiency for all correlation measures and both datasets. This is expected, since a higher threshold enables more aggressive pruning of candidate comparisons: the upper bounds derived by Theorems 1–4 will be below \(\tau\) more often, leading to fewer recursions. For similar reasons, the addition of stronger constraints (i.e., a higher \(\delta\) or the introduction of the irreducibility constraint) generally leads to better performance due to the increased pruning power. Furthermore, CD is noticeably faster for \(PC\) than for \(MP\) for the same \(\tau\) values. This is due to two reasons: (a) the high complexity of computing the eigenvalues of a matrix (cubic in \(p_l\)), which is required for computing the bounds for \(MP\) (Theorem 2), and (b) \(MP\) typically results in higher correlation values and more answers than \(PC\) for the same value of \(\tau\).
Fig. 8: Effect of constraint and \(\tau\) on query performance (Stocks and fMRI). a \(ES(2,2)\); b \(MP(4)\); c \(PC(2,2)\); d \(TC(3)\).

Progressive variant of CD. Progressive algorithms should collect the majority of the results quickly, in order to give the user early insights and to enable them to modify/adjust their queries. To evaluate this characteristic of progressive CD (Sect. 3.4), we modified our code to save the discovered results at different time points, and compared these intermediary results with the ground truth in order to compute recall. In this set of experiments, we focused exclusively on queries that take significant time to complete, since these are the ones that benefit most from a progressive algorithm. Figure 9 plots the number of results returned by progressive CD at different time points, for all correlation measures on the Stocks dataset. We see that, for all correlation measures, CD retrieves more than 90% of the results within the first few seconds, i.e., in less than 10% of the total execution time. This property of CD is particularly appealing for cases where approximate results suffice.

Fig. 9: Number of retrieved results over execution time, for progressive execution of queries on the Stocks dataset. a \(ES(1,3)\) query, \(\tau =0.58, \delta =0.03\); b \(MP(4)\) query, \(\tau =0.8, \delta =0.05\); c \(PC(1,3)\) query, \(\tau =0.7, \delta =0.12\); d \(TC(3)\) query, \(\tau =0.25, \delta =1.3\).

5.3 CDStream on streaming data

Fig. 10: a Effect of dataset size and correlation pattern, with \(\tau = 0.95\), Stocks; b effect of epoch size (time-based), \(PC(1,2)\) with \(\tau = 0.95\), Stocks; c effect of top-\(\kappa\), \(PC(1,2)\), Stocks.

Table 7: Effect of \(\tau\) and \(\delta\) on the average execution time per epoch (in seconds) of CD and CDStream with streaming data, Stocks.

The third set of experiments was designed to evaluate the performance of CDStream. We used the timestamps contained in all datasets (except Deep, which does not contain the notion of time) to generate the streams. Hereafter, we present detailed results for the Stocks dataset, and include results with the other datasets only when these provide additional insights. We start with experiments using a time-based epoch definition (Sect. 5.3.1), and then investigate the performance of CDStream using arrival-based epochs (Sect. 5.3.2). In the technical report [12] we present additional experiments, including an analysis of the algorithm's performance when executed over a prolonged time period, and an analysis of the impact of the sliding window size on CDStream's efficiency.

5.3.1 Experiments with time-based epochs

Effect of the number of streams. Figure 10a presents the average processing time per epoch of CDStream, for different numbers of streams. Since there is no streaming baseline for CDStream, the plot also includes the average execution time taken by CD, per epoch, to compute the answers using the same sliding window data (of course, repeated executions of CD are needed to maintain the results under streaming updates). We see that CDStream is more efficient than CD for small correlation patterns, requiring only a few milliseconds per epoch—an order of magnitude less than CD for both correlation measures. Also note that, even though the search space grows at a combinatorial rate with the number of vectors, the execution time of CDStream grows substantially slower.
This is attributed to the grouping technique in the CDStream index, which effectively reduces the work for processing each update. Also notice that CD outperforms CDStream on more complex correlation patterns. This is because of the index maintenance cost of CDStream: for more complex correlation patterns, the number of combinations that need to be maintained in the index also grows, eventually outweighing the performance boost coming from the index. Since CD does not depend on this index, it avoids this cost. This observation clearly demonstrates the importance of an automated algorithm (similar to the hybrid algorithm proposed in Sect. 4.5) that can dynamically switch between the two for optimizing performance.

Effect of the query parameters. Table 7 presents the effect of $\tau$ and the additional constraint values (minimum jump and irreducibility) on CDStream's performance. We see that the efficiency of CDStream is robust to constraints—a constraint only causes a small difference in the number of decisive combinations that need to be monitored. In contrast, an increasing value of $\tau$ leads to better performance, as decisive combinations are reached earlier, similar to the case of CD.

Effect of epoch size. For the next experiment, we fixed the basic window size to 10 min, and measured the processing time per basic window (i.e., the sum of epoch execution times) for different epoch sizes. Since the basic window size is fixed, the epoch size also determines the number of epochs per basic window. The results, presented in Fig. 10b for the Stocks dataset, demonstrate that CDStream exploits larger epochs to increase efficiency: larger epochs (alternatively, fewer epochs per basic window) allow CDStream to optimize the checks on the affected DCCs by combining multiple updates and checking each affected DCC only once. Furthermore, a larger epoch increases the probability that arrivals with outlier values (potentially due to noise)—which would otherwise cause temporary invalidations of DCCs—are dampened by other arrivals on the same stream. We also see that, for all configurations with ES, CDStream requires less cumulative time per basic window to maintain the results compared to a single execution of CD at the end of the basic window. In other words, CDStream updates the results more frequently than CD (up to 10 times more frequently in this experiment), and still requires less total execution time. With PC, CDStream with 1 and 2 epochs per basic window has comparable performance to a single execution of CD. Increasing the number of epochs further enables CDStream to provide even more frequent updates than CD, yet with a slight degradation of efficiency (up to 20% more time). This discrepancy of results for the two correlation measures is due to the inherent distribution of the correlation values—the results for PC change more rapidly than the results for ES in this dataset, which causes a higher cost for maintaining the index.

Top-$\kappa$ queries. Figure 10c plots the average processing time per epoch for top-$\kappa$ queries $PC(1,2)$ and $ES(1,2)$, for different $\kappa$ values. The results correspond to the Stocks dataset with 1000 stocks. We see that the processing time of both algorithms increases with $\kappa$ for both correlation measures, but at a much slower rate for CDStream compared to CD. In CD, execution time grows almost linearly with $\kappa$ (from 4.94 to 17.20 s for $ES(1,2)$), whereas for CDStream the time increases by only 7% for the same queries.
The reason for this notable difference in efficiency is that CDStream only maintains the top-$\kappa$ solutions, already having a good estimate of the threshold for the top-$\kappa$ highest correlations from previous runs, whereas CD has to start each run from scratch. Therefore, for CDStream, the only increase in execution time for higher $\kappa$ values comes from updating and sorting a slightly larger result set and buffer set.

5.3.2 Experiments with arrival-based epochs

Effect of epoch size. Figure 11a plots the average processing time per arrival, for varying epoch sizes. As a reference, the plot also includes the average processing time for a periodic re-execution of CD after the end of every epoch (amortized over the epoch's arrivals). We see that increasing the epoch size also increases CDStream's performance. This behavior is expected, since a larger epoch provides more opportunities for CDStream to reduce the number of DCCs that need to be checked. Therefore, similar to the case of time-based epochs (Sect. 5.3.1), the epoch size provides a knob for the user to fine-tune the trade-off between freshness of results and CPU/total execution time.

Fig. 11: Effect of query parameters on CDStream's performance with an arrival-based epoch: a Effect of epoch size, with $\tau = 0.95$, Stocks; b Effect of dataset.

Also observe that the execution time per arrival for CD approaches that of CDStream as the epoch size increases. In the case of $PC$, the processing time of the two algorithms crosses at epoch size 80, whereas for $ES$ this crossing happens at epoch size 160. This difference is due to the inherent distribution of correlations according to the two correlation measures in this dataset.

Effect of dataset. Figure 11b presents the average execution time per arrival (i.e., epoch size of 1), for $PC(1,2)$ and $ES(1,2)$ threshold queries on all datasets. The cost of a periodic execution of CD at the end of every epoch is also included in the figure as a reference. We see that, even though arrivals are processed in at most 50 msec, the processing cost is noticeably higher for the two weather sensor datasets (SLP and TMP) compared to all others. This can be attributed to the lower time resolution of these two datasets (the minimum arrival rate for these datasets is 1 h, compared to seconds/minutes for the others). This leads to a substantially higher volatility of the result set, and consequently to more frequent updates of the DCC index.

Fig. 12: a Efficiency of CDHybrid over time, $PC(1,2)$, $n=1000$, Stocks; b Impact of dataset on CDHybrid efficiency, $ES(1,2)$; c Impact of dataset on CDHybrid efficiency, $PC(1,2)$.

5.3.3 Evaluation of CDHybrid

For the final set of experiments, we test the ability of CDHybrid to switch seamlessly and efficiently between CD and CDStream, in order to minimize processing cost in the presence of stream bursts. Since our streams did not exhibit bursts significant enough to cause noticeable differences to CDStream throughout its runtime, we introduced an artificial burst on all streams between epochs 70 and 90, by temporarily increasing the arrival rate by a factor of 30 (i.e., speeding up all streams during these epochs). CDHybrid was allowed a small warmup period of 40 epochs, during which it was processing the updates, but was also switching between CD and CDStream in order to collect initial measurements and train the cost regression model.
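The paper does not spell out the exact features or regression model used by CDHybrid's cost estimator, so the following is only a minimal sketch of the switching idea under assumed inputs: a per-algorithm linear cost model fed with the epoch's arrival count, trained during a warmup phase that alternates between the two algorithms. Names such as `CDHybridController`, `run_cd`, and `run_cdstream` are illustrative placeholders, not the authors' code.

```python
import time
import numpy as np

class CDHybridController:
    """Illustrative switcher between a batch algorithm (CD) and an
    incremental one (CDStream), based on a simple per-epoch cost model."""

    def __init__(self, run_cd, run_cdstream, warmup_epochs=40):
        self.algos = {"CD": run_cd, "CDStream": run_cdstream}
        self.warmup_epochs = warmup_epochs
        self.samples = {"CD": [], "CDStream": []}   # (num_arrivals, seconds)
        self.models = {}                            # name -> (slope, intercept)
        self.epoch = 0

    def _fit(self):
        # Least-squares line: predicted cost ~= slope * num_arrivals + intercept
        for name, pts in self.samples.items():
            x, y = zip(*pts)
            self.models[name] = tuple(np.polyfit(x, y, deg=1))

    def process_epoch(self, arrivals):
        if self.epoch < self.warmup_epochs:
            # Alternate algorithms during warmup to collect cost samples for both.
            name = "CD" if self.epoch % 2 == 0 else "CDStream"
        else:
            if not self.models:
                self._fit()
            # Pick the algorithm with the lower predicted cost for this epoch.
            name = min(self.models,
                       key=lambda n: np.polyval(self.models[n], len(arrivals)))
        start = time.perf_counter()
        result = self.algos[name](arrivals)
        self.samples[name].append((len(arrivals), time.perf_counter() - start))
        self.epoch += 1
        return name, result
```

Such a controller would switch to CD when a burst makes the incremental index too expensive to maintain, and back to CDStream once the arrival rate drops, which is the qualitative behavior reported for CDHybrid below.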
Algorithm effectiveness. Figure 12a depicts the processing time per epoch (moving averages over 5 epochs) for processing Stocks with CD, CDStream, and CDHybrid. The figure also includes the number of arrivals within each epoch (right Y-axis). We observe that when the burst starts—at around epoch 70—CDStream becomes substantially slower than CD, whereas the performance of CD is not impacted. CDHybrid immediately recognizes the burst and switches to CD, thereby maintaining peak performance. After the burst ends (shortly after epoch 90), CDHybrid switches back to CDStream. This switch includes a small additional overhead for updating the DCC index; however, this overhead is insignificant.

Effect of dataset. Figure 12b, c show the average processing time per epoch for CD, CDStream, and CDHybrid on all datasets (excluding the warm-up time), for $ES(1,2)$ and $PC(1,2)$ queries, respectively. We see that CDHybrid consistently outperforms both CD and CDStream. This means that neither CD nor CDStream is the best algorithm for processing the whole stream. Yet, CDHybrid efficiently switches between the two in response to the varying arrival rate, thereby providing near-optimal performance for each epoch.

6 Conclusions

We considered the problem of detecting high multivariate correlations with four correlation measures, and with different constraints. We proposed three algorithms: (a) CD, optimized for static data, (b) CDStream, which focuses on streaming data, and (c) CDHybrid for streaming data, which autonomously chooses between the two algorithms. The algorithms rely on novel theoretical results, which enable us to bound multivariate correlations between large sets of vectors. A thorough experimental evaluation using real datasets showed that our contribution outperforms the state of the art typically by an order of magnitude. The current methods are limited to correlations over linear combinations of vectors. Future work should extend them to also accommodate nonlinear aggregations like MIN and MAX, which find applications in the discussed domains. Furthermore, detailed analysis showed that caching pairwise statistics (through 'empirical bounds') greatly boosted CD's performance. While all proposed measures suited these bounds, future ones might not. Thus, optimizing the application of the more general theoretical bounds will be vital as the proposed techniques evolve.

Notes

1. A prime example is the SPARK project for discovering gene properties related to the manifestation of the autism spectrum disorder, which led to a list of genes and their correlated symptoms.
2. Although we will mostly refer to the more general case of vectors in this paper, the data often consists of time series—possibly with live updates.
3. Weighted averages of stock prices are commonly considered in risk management to evaluate portfolio performance, diversity, and volatility.
4. Radii are computed using the distance metrics in Table 2.
5. Z-normalization involves shifting and scaling a vector such that it has zero mean and unit standard deviation.
6. Similar to z-normalization for $PC$ and $MP$, the $l_2$-norm of each vector can be computed and cached as a preprocessing step, after which bounds on the norms per cluster can be quickly derived on cluster initialization.
7. $H(X)$ is upper bounded by the factor $\sum_{i=1}^{p-1}\big(\min_{1\le j\le i} u(C_{i+1}\mid C_j)\big) - u(C_1)$ in $TC_{LB}(X)$.
8. In practice, method UpdateIndex is coded inside a custom implementation of Algorithm 1, to avoid duplicate work.
9. Similar indices were used in earlier works, e.g., , but for bounding the values of individual correlations.
10. Available online at
11. We used file sub-1_task-500daysofsummer_bold_blur_censor, which already includes the recommended pre-processing for voxel-based analytics.
12. For this experiment, the minimum jump parameter $\delta$ is defined as in , to represent the minimum difference between the squared correlations.

References

1. 2020 stock market crash - Wikipedia.
2. Agrawal, S., Atluri, G., Karpatne, A., Haltom, W., Liess, S., Chatterjee, S., Kumar, V.: Tripoles: a new class of relationships in time series data. In: Proceedings of the SIGKDD'17
3. Agrawal, S., Steinbach, M., Boley, D., Chatterjee, S., Atluri, G., Dang, A.T., Liess, S., Kumar, V.: Mining novel multivariate relationships in time series data using correlation networks. TKDE 32(9), 1798–1811 (2020)
4. Alemi, A.A., Fischer, I., Dillon, J.V., Murphy, K.: Deep variational information bottleneck. In: ICLR'17
5. Arthur, D., Vassilvitskii, S.: K-Means++: the advantages of careful seeding. In: Proceedings of the SODA'07
6. Carlborg, Ö., Haley, C.S.: Epistasis: too often neglected in complex trait studies? Nat. Rev. Genet. 5(8), 618–625 (2004)
7. Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., Abbeel, P.: InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. In: NIPS'16
8. Cheng, P., Min, M.R., Shen, D., Malon, C., Zhang, Y., Li, Y., Carin, L.: Improving disentangled text representation learning with information-theoretic guidance. In: Proceedings of the ACL'20
9. Chiang, R.H., Huang Cecil, C.E., Lim, E.P.: Linear correlation discovery in databases: a data mining approach. Data Knowl. Eng. 53(3), 311–337 (2005)
10. Das, A., Kempe, D.: Algorithms for subset selection in linear regression. In: Proceedings of the STOC'08
11. Datar, M., Immorlica, N., Indyk, P., Mirrokni, V.S.: Locality-sensitive hashing scheme based on p-stable distributions. In: Proceedings of the SCG'04
12. d'Hondt, J., Papapetrou, O., Minartz, K.: Efficient detection of multivariate correlations with different correlation measures. Technical report (2023). Available in
13. Ding, H., Trajcevski, G., Scheuermann, P., Wang, X., Keogh, E.: Querying and mining of time series data: experimental comparison of representations and distance measures. In: Proceedings of the VLDB'08
14. Echihabi, K., Tsandilas, T., Gogolou, A., Bezerianos, A., Palpanas, T.: ProS: data series progressive k-nn similarity search and classification with probabilistic quality guarantees. VLDB J. 32, 763–789 (2023)
15. Echihabi, K., Zoumpatianos, K., Palpanas, T., Benbrahim, H.: The Lernaean hydra of data series similarity search: an experimental evaluation of the state of the art. In: Proceedings of the VLDB'18
16. Fagin, R., Lotem, A., Naor, M.: Optimal aggregation algorithms for middleware. J. Comput. Syst. Sci. 66(4), 614–656 (2003)
17. Foundation, S.: SPARK for autism.
18. Garner, W.R.: Uncertainty and Structure as Psychological Concepts. Wiley, New York (1962)
19. Gedik, B., Bordawekar, R.R., Yu, P.S.: Cell Join: a parallel stream join operator for the cell processor. VLDB J. 18, 501–519 (2009)
20. Handwerker, D.A., Roopchansingh, V., Gonzalez-Castillo, J., Bandettini, P.A.: Periodic changes in fMRI connectivity. Neuroimage 63(3), 1712–1719 (2012)
21. He, Y., Ganjam, K., Chu, X.: Sema-join: joining semantically-related tables using big table corpora. In: Proceedings of the VLDB'15
22. Heunis, S., Lamerichs, R., Zinger, S., Caballero-Gaudes, C., Jansen, J.F., Aldenkamp, B., Breeuwer, M.: Quality and denoising in real-time functional magnetic resonance imaging neurofeedback: a methods review. Hum. Brain Mapp. 41(12), 3439–3467 (2020)
23. Härdle, W.K.: Applied Multivariate Statistical Analysis, 2nd edn. Springer, Berlin (2007)
24. Jiang, L., Kawashima, H., Tatebe, O.: Incremental window aggregates over array database. In: Proceedings of the IEEE BigData 2014
25. Kistler, R., Kalnay, E., Collins, W., Saha, S., White, G., Woollen, J., Chelliah, M., Ebisuzaki, W., Kanamitsu, M., Kousky, V., van den Dool, H.: The NCEP/NCAR 50-year reanalysis: monthly means CD-ROM and documentation. Bull. Am. Meteorol. Soc. 82, 247–268 (2001)
26. Kraskov, A., Grassberger, P.: MIC: mutual information based hierarchical clustering. Information Theory and Statistical Learning, pp. 101–123 (2009)
27. Li, M., Chen, X., Li, X., Ma, B., Vitányi, P.M.B.: The similarity metric. IEEE Trans. Inf. Theory 50(12), 3250–3264 (2004)
28. Licher, S., Ahmad, S., Karamujić-Čomić, H., Voortman, T., Leening, M.J.G., Ikram, M.A., Ikram, M.K.: Genetic predisposition, modifiable-risk-factor profile and long-term dementia risk in the general population. Nat. Med. 25(9), 1364–1369 (2019)
29. Liess, S., Agrawal, S., Chatterjee, S., Kumar, V.: A teleconnection between the west Siberian plain and the ENSO region. J. Clim. 30(1), 301–315 (2017)
30. Mangram, M.E.: A simplified perspective of the Markowitz portfolio theory. Glob. J. Bus. Res. 7(1), 59–70 (2013)
31. Megumi, F., Yamashita, A., Kawato, M., Imamizu, H.: Functional MRI neurofeedback training on connectivity between two regions induces long-lasting changes in intrinsic functional network. Front. Hum. Neurosci. 9, 160 (2015)
32. Mitra, I., Lavillaureix, A., Yeh, E., Traglia, M., Tsang, K., Bearden, C.E., Rauen, K.A., Weiss, L.A.: Reverse pathway genetic approach identifies epistasis in autism spectrum disorders. PLoS Genet. 13, 1–27 (2017)
33. Mueen, A.: Enumeration of time series motifs of all lengths. In: Proceedings of the ICDM'13
34. Mueen, A., Nath, S., Liu, J.: Fast approximate correlation for massive time-series data. In: Proceedings of the SIGMOD'10
35. Nguyen, H.V., Müller, E., Andritsos, P., Böhm, K.: Detecting correlated columns in relational databases with mixed data types. In: Proceedings of the SSDBM'14
36. Nguyen, H.V., Müller, E., Vreeken, J., Efros, P., Böhm, K.: Multivariate maximal correlation analysis. In: Proceedings of the ICML'14
37. National Oceanic and Atmospheric Administration: NOAA integrated surface dataset.
38. O'Sullivan, A., Sheffrin, S.M.: Economics: Principles in Action. Pearson Prentice Hall, London (2003)
39. Rostoker, C., Wagner, A., Hoos, H.: A parallel workflow for real-time correlation and clustering of high-frequency stock market data. In: Proceedings of the IPDPS'07
40. Satuluri, V., Parthasarathy, S.: Bayesian locality sensitive hashing for fast similarity search. In: Proceedings of the VLDB'12
41. Skoltech computer vision | Deep billion-scale indexing.
42. Segaran, T.: Programming Collective Intelligence: Building Smart Web 2.0 Applications. O'Reilly Media, Inc., Sebastopol (2007)
43. Studený, M., Vejnarová, J.: The multi-information function as a tool for measuring stochastic dependence. Learn. Gr. Models 89, 261–297 (1998)
44. Tan, Z., Jamdagni, A., He, X., Nanda, P., Liu, R.P.: A system for denial-of-service attack detection based on multivariate correlation analysis. IEEE Trans. Parallel Distrib. Syst. 25(2), 447–456 (2014)
45. Wang, J., Zhu, Y., Li, S., Wan, D., Zhang, P.: Multivariate time series similarity searching. Sci. World J. 2014(1) (2014)
46. Watanabe, S.: Information theoretical analysis of multivariate correlation. IBM J. Res. Dev. 4(1), 66–82 (1960)
47. Wu, Y., Yu, J., Tian, Y., Sidle, R., Barber, R.: Designing succinct secondary indexing mechanism by exploiting column correlations. In: Proceedings of the SIGMOD'19
48. Yang, K., Shahabi, C.: A PCA-based similarity measure for multivariate time series. In: Proceedings of the ACM-MMDB'04
49. Yang, K., Shahabi, C.: An efficient k nearest neighbor search for multivariate time series. Inf. Comput. 205(1), 65–98 (2007)
50. Yu, C., Luo, L., Chan, L.L.H., Rakthanmanon, T., Nutanong, S.: A fast LSH-based similarity search method for multivariate time series. Inf. Sci. 476, 337–356 (2019)
51. Zaharia, M., Das, T., Li, H., Hunter, T., Shenker, S., Stoica, I.: Discretized streams: fault-tolerant streaming computation at scale. In: Proceedings of the SOSP'13
52. Zhang, X., Pan, F., Wang, W., Nobel, A.: Mining non-redundant high order correlations in binary data. In: Proceedings of the VLDB'08
53. Zhu, Y., Shasha, D.: StatStream: statistical monitoring of thousands of data streams in real time. In: Proceedings of the VLDB'02
54. Zilverstand, A., Sorger, B., Zimmermann, J., Kaas, A., Goebel, R.: Windowed correlation: a suitable tool for providing dynamic fMRI-based functional connectivity neurofeedback on task difficulty. PLoS ONE 9(1), 1–13 (2014)

Acknowledgements

This work has received funding from the European Union's Horizon Europe research and innovation programme STELAR under Grant Agreement No. 101070122.

Author information

Authors and Affiliations: Eindhoven University of Technology, De Zaale 1, 5600 MB, Eindhoven, The Netherlands — Jens E. d'Hondt, Koen Minartz & Odysseas Papapetrou.
Corresponding author: Correspondence to Jens E. d'Hondt.

Additional information

Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit

About this article

Cite this article: d'Hondt, J.E., Minartz, K. & Papapetrou, O.: Efficient detection of multivariate correlations with different correlation measures. The VLDB Journal 33, 481–505 (2024).
Received: 20 March 2023. Revised: 22 August 2023. Accepted: 14 September 2023. Published: 11 October 2023. Issue Date: March 2024.

Keywords: Similarity search, Multivariate correlations, Time series, Streaming data.
In: Proceedings of the STOC’08 Datar, M., Immorlica, N., Indyk, P., Mirrokni, V.S.: Locality-sensitive hashing scheme based on p-stable distributions. In: Proceedings of the SCG’04 d’Hondt, J., Papapetrou, O., Minartz, K.: Efficient detection of multivariate correlations with different correlation measures. Technical Reports (2023). Available in Ding, H., Trajcevski, G., Scheuermann, P., Wang, X., Keogh, E.: Querying and mining of time series data: experimental comparison of representations and distance measures. In: Proceedings of the VLDB’08 Echihabi, K., Tsandilas, T., Gogolou, A., Bezerianos, A., Palpanas, T.: Pros: data series progressive k-nn similarity search and classification with probabilistic quality guarantees. VLDB J. 32, 763–789 (2023) ArticleGoogle Scholar Echihabi, K., Zoumpatianos, K., Palpanas, T., Benbrahim, H.: The Lernaean hydra of data series similarity search: an experimental evaluation of the state of the art. In: Proceedings of the VLDB’18 Fagin, R., Lotem, A., Naor, M.: Optimal aggregation algorithms for middleware. J. Comput. Syst. Sci. 66(4), 614–656 (2003) ArticleMathSciNetGoogle Scholar Foundation, S.: SPARK for autism. Garner, W.R.: Uncertainty and Structure as Psychological Concepts. Wiley, New York (1962) Google Scholar Gedik, B., Bordawekar, R.R., Yu, P.S.: Cell Join: a parallel stream join operator for the cell processor. VLDB J. 18, 501–519 (2009) ArticleGoogle Scholar Handwerker, D.A., Roopchansingh, V., Gonzalez-Castillo, J., Bandettini, P.A.: Periodic changes in fMRI connectivity. Neuroimage 63(3), 1712–1719 (2012) ArticlePubMedGoogle Scholar He, Y., Ganjam, K., Chu, X.: Sema-join: joining semantically-related tables using big table corpora. In: Proceedings of the VLDB’15 Heunis, S., Lamerichs, R., Zinger, S., Caballero-Gaudes, C., Jansen, J.F., Aldenkamp, B., Breeuwer, M.: Quality and denoising in real-time functional magnetic resonance imaging neurofeedback: a methods review. Hum. Brain Mapp. 41(12), 3439–3467 (2020) ArticlePubMedPubMed CentralGoogle Scholar Härdle, W.K.: Applied Multivariate Statistical Analysis, 2nd edn. Springer, Berlin (2007) Google Scholar Jiang, L., Kawashima, H., Tatebe, O.: Incremental window aggregates over array database. In: Proceedings of the IEEE BigData 2014 Kistler, R., Kalnay, E., Collins, W., Saha, S., White, G., Woollen, J., Chelliah, M., Ebisuzaki, W., Kanamitsu, M., Kousky, V., van den Dool, H.: The NCEP/NCAR 50-year reanalysis: monthly means CD-ROM and documentation. Bull. Am. Meteorol. Soc. 82, 247–268 (2001) ArticleADSGoogle Scholar Kraskov, A., Grassberger, P.: Mic: mutual information based hierarchical clustering. Information theory and statistical learning, pp. 101–123 (2009) Li, M., Chen, X., Li, X., Ma, B., Vitányi, P.M.B.: The similarity metric. IEEE Trans. Inf. Theory 50(12), 3250–3264 (2004) ArticleMathSciNetGoogle Scholar Licher, S., Ahmad, S., Karamujić-Čomić, H., Voortman, T., Leening, M.J.G., Ikram, M.A., Ikram, M.K.: Genetic predisposition, modifiable-risk-factor profile and long-term dementia risk in the general population. Nat. Med. 25(9), 1364–1369 (2019) ArticleCASPubMedPubMed CentralGoogle Scholar Liess, S., Agrawal, S., Chatterjee, S., Kumar, V.: A teleconnection between the west Siberian plain and the ENSO region. J. Clim. 30(1), 301–315 (2017) ArticleADSGoogle Scholar Mangram, M.E.: A simplified perspective of the Markowitz portfolio theory. Glob. J. Bus. Res. 
7(1), 59–70 (2013) Google Scholar Megumi, F., Yamashita, A., Kawato, M., Imamizu, H.: Functional MRI neurofeedback training on connectivity between two regions induces long-lasting changes in intrinsic functional network. Front. Hum. Neurosci. 9, 160 (2015) ArticlePubMedPubMed CentralGoogle Scholar Mitra, I., Lavillaureix, A., Yeh, E., Traglia, M., Tsang, K., Bearden, C.E., Rauen, K.A., Weiss, L.A.: Reverse pathway genetic approach identifies epistasis in autism spectrum disorders. PLoS Genet. 13, 1–27 (2017) ArticleGoogle Scholar Mueen, A.: Enumeration of time series motifs of all lengths. In: Proceedings of the ICDM’13 Mueen, A., Nath, S., Liu, J.: Fast approximate correlation for massive time-series data. In: Proceedings of the SIGMOD’10 Nguyen, H.V., Müller, E., Andritsos, P., Böhm, K.: Detecting correlated columns in relational databases with mixed data types. In: Proceedings of the SSDBM’14 Nguyen, H.V., Müller, E., Vreeken, J., Efros, P., Böhm, K.: Multivariate maximal correlation analysis. In: Proceedings of the ICML’14 Oceanic, N., Administration, A.: NOAA integrated surface dataset. O’sullivan, A., Sheffrin, S.M.: Economics: Principles in Action. Pearson Prentice Hall, London (2003) Google Scholar Rostoker, C., Wagner, A., Hoos, H.: A parallel workflow for real-time correlation and clustering of high-frequency stock market data. In: Proceedings of the IPDPS’07 Satuluri, V., Parthasarathy, S.: Bayesian locality sensitive hashing for fast similarity search. In: Proceedings of the VLDB’12 Skoltech computer vision | deep billion-scale indexing. Segaran, T.: Programming Collective Intelligence: Building Smart Web 2.0 Applications. O’Reilly Media, Inc., Sebastopol (2007) Google Scholar Studenỳ, M., Vejnarová, J.: The multi-information function as a tool for measuring stochastic dependence. Learn. Gr. Models 89, 261–297 (1998) ArticleGoogle Scholar Tan, Z., Jamdagni, A., He, X., Nanda, P., Liu, R.P.: A system for denial-of-service attack detection based on multivariate correlation analysis. IEEE Trans. Parallel Distrib. Syst. 25(2), 447–456 (2014) ArticleGoogle Scholar Wang, J., Zhu, Y., Li, S., Wan, D., Zhang, P.: Multivariate time series similarity searching. Sci. World J. 2014(1) (2014) Watanabe, S.: Information theoretical analysis of multivariate correlation. IBM J. Res. Dev. 4(1), 66–82 (1960) ArticleMathSciNetGoogle Scholar Wu, Y., Yu, J., Tian, Y., Sidle, R., Barber, R.: Designing succinct secondary indexing mechanism by exploiting column correlations. In: Proceedings of the SIGMOD’19 Yang, K., Shahabi, C.: A PCA-based similarity measure for multivariate time series. In: Proceedings of the ACM-MMDB’04 Yang, K., Shahabi, C.: An efficient k nearest neighbor search for multivariate time series. Inf. Comput. 205(1), 65–98 (2007) ArticleMathSciNetGoogle Scholar Yu, C., Luo, L., Chan, L.L.H., Rakthanmanon, T., Nutanong, S.: A fast LSH-based similarity search method for multivariate time series. Inf. Sci. 476, 337–356 (2019) ArticleGoogle Scholar Zaharia, M., Das, T., Li, H., Hunter, T., Shenker, S., Stoica, I.: Discretized streams: fault-tolerant streaming computation at scale. In: Proceedings of the SOSP’13 Zhang, X., Pan, F., Wang, W., Nobel, A.: Mining non-redundant high order correlations in binary data. In: Proceedings of the VLDB’08 Zhu, Y., Shasha, D.: Statstream: statistical monitoring of thousands of data streams in real time. 
In: Proceedings of the VLDB’02 Zilverstand, A., Sorger, B., Zimmermann, J., Kaas, A., Goebel, R.: Windowed correlation: a suitable tool for providing dynamic fmri-based functional connectivity neurofeedback on task difficulty. PLoS ONE 9(1), 1-13 (2014) ArticleGoogle Scholar Discover content Journals A-Z Books A-Z Publish with us Journal finder Publish your research Language editing Open access publishing Products and services Our products Librarians Societies Partners and advertisers Our brands Springer Nature Portfolio BMC Palgrave Macmillan Apress Discover Your privacy choices/Manage cookies Your US state privacy rights Accessibility statement Terms and conditions Privacy policy Help and support Legal notice Cancel contracts here 34.34.225.232 Not affiliated © 2025 Springer Nature
EM 3 Section 14: Electromagnetic Energy and the Poynting Vector

14.1 Poynting's Theorem (Griffiths 8.1.2)

Recall that we saw that the total energy stored in electromagnetic fields is:
$$U = U_M + U_E = \frac{1}{2}\int_{\text{all space}} \left( \frac{1}{\mu_0}B^2 + \epsilon_0 E^2 \right) dV \qquad (1)$$
Let us now derive this more generally. Consider some distribution of charges and currents. In a small time $dt$ a charge will move $\mathbf{v}\,dt$ and, according to the Lorentz force law, the work done on the charge will be
$$dU = \mathbf{F}\cdot d\mathbf{l} = q(\mathbf{E} + \mathbf{v}\times\mathbf{B})\cdot\mathbf{v}\,dt = q\mathbf{E}\cdot\mathbf{v}\,dt$$
where as usual the magnetic forces do no work. Now let $q = \rho\,dV$ (usual definition of charge density) and $\rho\mathbf{v} = \mathbf{J}$ (usual definition of current). Then dividing through by $dt$ and integrating over a volume $V$ containing the charges, we find that the rate at which work is done (i.e. the power delivered to the system) is
$$\frac{dU}{dt} = \int_V \mathbf{E}\cdot\mathbf{J}\, dV \qquad (2)$$
Thus $\mathbf{E}\cdot\mathbf{J}$ is the power delivered per unit volume. Now use MIV to express
$$\mathbf{E}\cdot\mathbf{J} = \frac{1}{\mu_0}\mathbf{E}\cdot(\nabla\times\mathbf{B}) - \epsilon_0\, \mathbf{E}\cdot\frac{\partial\mathbf{E}}{\partial t}$$
Furthermore we can use a product rule from lecture 1 to write
$$\mathbf{E}\cdot(\nabla\times\mathbf{B}) = \mathbf{B}\cdot(\nabla\times\mathbf{E}) - \nabla\cdot(\mathbf{E}\times\mathbf{B}) = -\mathbf{B}\cdot\frac{\partial\mathbf{B}}{\partial t} - \nabla\cdot(\mathbf{E}\times\mathbf{B})$$
where we used MIII in the last line. Putting it all together, and noting
$$\mathbf{B}\cdot\frac{\partial\mathbf{B}}{\partial t} = \frac{1}{2}\frac{\partial B^2}{\partial t}, \qquad \mathbf{E}\cdot\frac{\partial\mathbf{E}}{\partial t} = \frac{1}{2}\frac{\partial E^2}{\partial t},$$
yields
$$\mathbf{E}\cdot\mathbf{J} = -\frac{1}{2}\frac{\partial}{\partial t}\left( \epsilon_0 E^2 + \frac{1}{\mu_0}B^2 \right) - \frac{1}{\mu_0}\nabla\cdot(\mathbf{E}\times\mathbf{B})$$
Finally we can integrate over the volume $V$ containing the currents and charges and use the divergence theorem on the second term to obtain from (2)
$$\frac{dU}{dt} = -\frac{\partial}{\partial t}\int_V \frac{1}{2}\left( \epsilon_0 E^2 + \frac{1}{\mu_0}B^2 \right) dV - \frac{1}{\mu_0}\oint_S (\mathbf{E}\times\mathbf{B})\cdot d\mathbf{S} \qquad (3)$$
Let us now examine each term in Poynting's Theorem (3): the left hand side is the power delivered to the volume, i.e. the rate of gain in energy of the particles; the first term on the right hand side is the rate of loss of electromagnetic energy stored in fields within the volume; the second term is the rate of energy transport out of the volume, i.e. across the surface $S$. Thus Poynting's theorem reads: energy lost by fields = energy gained by particles + energy flow out of volume. Hence we can identify the vector
$$\mathbf{S} = \frac{1}{\mu_0}\mathbf{E}\times\mathbf{B} \qquad (4)$$
as the energy flux density (energy per unit area per unit time); it is known as the Poynting vector (it 'Poynts' in the direction of energy transport).

Also we can write Poynting's theorem as a continuity equation for the total energy $U = U_{em} + U_{mec}$. The left hand side of (3) is the rate of change of mechanical energy, thus
$$\frac{d(U_{em} + U_{mec})}{dt} = -\oint_S \mathbf{S}\cdot d\mathbf{A}$$
(to avoid a nasty clash of notation with $\mathbf{S}$ as Poynting vector we use $d\mathbf{A}$ rather than $d\mathbf{S}$ as the vector element of area). As usual, expressing energy as a volume integral over energy densities $u_{em}, u_{mec}$ and using the divergence theorem on the right hand side, we arrive at
$$\frac{\partial}{\partial t}(u_{em} + u_{mec}) = -\nabla\cdot\mathbf{S} \qquad (5)$$
which is the continuity equation for energy density. Thus the Poynting vector represents the flow of energy in the same way that the current $\mathbf{J}$ represents the flow of charge.

14.2 Energy of Electromagnetic Waves (Griffiths 9.2.3)

As we saw last lecture, a monochromatic plane wave in vacuo propagating in the $\mathbf{e}_z$ direction is described by the fields:
$$\mathbf{E} = \mathbf{e}_x E_0\cos(kz-\omega t), \qquad \mathbf{B} = \mathbf{e}_y B_0\cos(kz-\omega t) \qquad (6)$$
where $B_0 = E_0/c$. The total energy stored in the fields associated with the wave is:
$$U = U_E + U_M = \frac{1}{2}\int_V \left( \frac{B^2}{\mu_0} + \epsilon_0 E^2 \right) dV$$
Now since $|\mathbf{B}| = |\mathbf{E}|/c$ and $c = 1/\sqrt{\mu_0\epsilon_0}$ we see that the electric and magnetic contributions to the total energy are equal, and the electromagnetic energy density is (for a linearly polarised wave)
$$u_{EM} = \epsilon_0 E^2 = \epsilon_0 E_0^2\cos^2(kz-\omega t)$$
The Poynting vector becomes, for monochromatic waves,
$$\mathbf{S} = \frac{1}{\mu_0}(\mathbf{E}\times\mathbf{B}) = c\epsilon_0 E_0^2\cos^2(kz-\omega t)\,\mathbf{e}_z = u_{EM}\,c\,\mathbf{e}_z$$
Note that $\mathbf{S}$ is just the energy density multiplied by the velocity of the wave $c\,\mathbf{e}_z$, as it should be. Generally
$$\mathbf{S} = u_{EM}\,c\,\hat{\mathbf{k}}$$
N.B. To compute the Poynting vector it is simplest to use a real form for the fields $\mathbf{B}$ and $\mathbf{E}$ rather than a complex exponential representation.

The time average of the energy density is defined as the average over one period $T$ of the wave:
$$\langle u_{EM}\rangle = \frac{\epsilon_0 E_0^2}{T}\int_0^T \cos^2(kz-\omega t)\,dt = \frac{\epsilon_0 E_0^2}{T}\,\frac{T}{2} = \frac{1}{2}\epsilon_0 E_0^2 = \frac{1}{2}\frac{B_0^2}{\mu_0}$$
The energy density of an electromagnetic wave is proportional to the square of the amplitude of the electric (or magnetic) field.

14.3 Example of discharging capacitor

Consider a discharging circular parallel plate capacitor (plates of area $A$) in a circuit with a resistor $R$ (Figure 1: Discharging capacitor in a circuit with a resistor). Ohm's law gives
$$V_d = \frac{Q}{C} = IR \quad\text{or}\quad I = -\frac{dQ}{dt} = \frac{Q}{RC} \;\Rightarrow\; Q = Q_0 e^{-t/RC}, \qquad I = \frac{Q_0}{RC}e^{-t/RC}$$
Now assume a 'quasistatic' approximation where we treat the fields as though they were static:
$$\mathbf{E} = -\frac{Q}{A\epsilon_0}\hat{\mathbf{n}} = -\frac{Q_0}{A\epsilon_0}e^{-t/RC}\,\hat{\mathbf{n}}$$
We take the normal to the plates (direction of $\mathbf{E}$) to be $\hat{\mathbf{n}}$. Now we can compute $\mathbf{B}$ through Ampère–Maxwell, noting that the cylindrical symmetry implies that $\mathbf{B}$ is circumferential. The Amperian loop is a circle of radius $r$ between the capacitor plates, where $\mathbf{J} = 0$:
$$\oint \mathbf{B}\cdot d\mathbf{l} = \mu_0\int_S \left( \mathbf{J} + \epsilon_0\frac{\partial\mathbf{E}}{\partial t} \right)\cdot d\mathbf{S} = -\mu_0\pi r^2\epsilon_0\,\frac{\partial}{\partial t}\left( \frac{Q_0}{A\epsilon_0}e^{-t/RC} \right)$$
The left hand side is $2\pi r B_\phi$, so
$$\mathbf{B} = \frac{\mu_0 I(t)\, r}{2A}\,\mathbf{e}_\phi$$
The Poynting vector is given by
$$\mathbf{S} = \frac{1}{\mu_0}\mathbf{E}\times\mathbf{B} = -\frac{Q}{A\epsilon_0}e^{-t/RC}\,\frac{I_0\, r}{2A}e^{-t/RC}\;\mathbf{e}_z\times\mathbf{e}_\phi = \frac{I_0^2 C R}{2A^2\epsilon_0}\, r\, e^{-2t/RC}\,\mathbf{e}_r$$
(using $\mathbf{e}_z\times\mathbf{e}_\phi = -\mathbf{e}_r$ and $Q_0 = I_0 RC$). Thus the Poynting vector and the direction of energy flow point radially out of the capacitor.

14.4 *Momentum of electromagnetic radiation

Let us reinterpret the Poynting vector from a quantum perspective. Due to wave–particle duality, radiation can be thought of as photons travelling with speed $c$ with energy
$$\varepsilon = \hbar\omega = h\nu$$
The momentum of a single photon is
$$\mathbf{p} = \hbar\mathbf{k} = \frac{\varepsilon}{c}\hat{\mathbf{k}}$$
For $n$ photons per unit volume travelling at speed $c$ we can interpret the average Poynting vector as the average energy density $n\varepsilon$ multiplied by the velocity vector $c\hat{\mathbf{k}}$:
$$\langle\mathbf{S}\rangle = n\varepsilon c\,\hat{\mathbf{k}} = \langle u_{EM}\rangle\, c\,\hat{\mathbf{k}}$$
Again thinking of the energy transport as effected by photons, we must have an accompanying momentum flux $\tilde{P}$, defined as the momentum carried across a plane normal to the propagation direction, per unit area per unit time. For each photon $p = \varepsilon/c$ (along $\hat{\mathbf{k}}$), so
$$\tilde{P} = S/c$$
If light strikes an absorber (normal incidence) the momentum is absorbed; this creates a force per unit area equal to the incoming (normal) momentum flux. This causes a radiation pressure
$$p_{rad} = \tilde{\mathbf{P}}\cdot\hat{\mathbf{n}} = S/c \;\Rightarrow\; p_{rad} = \langle u_{EM}\rangle$$
If light is reflected rather than absorbed, twice the momentum is imparted; $p_{rad}$ doubles but so does $\langle u_{EM}\rangle$, and this result still holds.

To understand radiation pressure classically, let's go back to the example of an $x$-polarised wave propagating in the $\mathbf{e}_z$ direction: the electric field moves charges, on the surface the radiation strikes, in the $x$ direction; then the Lorentz force $q\mathbf{v}\times\mathbf{B}$ (with $\mathbf{v}$ in the $x$ direction and $\mathbf{B}$ in the $y$ direction) is in the $\mathbf{e}_z$ direction and creates the pressure.

The above is for a collimated light beam (i.e. a single direction). The other extreme is 'diffuse radiation', light bouncing around in all directions; this gives instead $p_{rad} = \langle u_{EM}\rangle/3$ (the factor $1/3$ arises as in the kinetic theory of gases).
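As a quick numerical check of these formulas, the worked example below plugs in a rough value for the solar irradiance at the top of Earth's atmosphere, taken here to be about $1.4\times 10^{3}\ \mathrm{W\,m^{-2}}$ (an assumed round figure, not a value quoted in these notes), to estimate the field amplitudes and the radiation pressure of sunlight:
$$\langle S \rangle = \tfrac{1}{2} c \epsilon_0 E_0^2
\;\Rightarrow\;
E_0 = \sqrt{\frac{2\langle S\rangle}{c\epsilon_0}}
    = \sqrt{\frac{2(1.4\times 10^{3})}{(3.0\times 10^{8})(8.85\times 10^{-12})}}
    \approx 1.0\times 10^{3}\ \mathrm{V\,m^{-1}},
\qquad
B_0 = \frac{E_0}{c} \approx 3.4\times 10^{-6}\ \mathrm{T},$$
$$p_{rad} = \frac{\langle S\rangle}{c}
\approx \frac{1.4\times 10^{3}}{3.0\times 10^{8}}
\approx 5\times 10^{-6}\ \mathrm{Pa}
\quad\text{(absorbing surface; roughly twice this for a perfect reflector).}$$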
Floating Point Precision and Its Limitations
Umair Akbar, Feb 23, 2025

Floating-Point Formats (FP32, FP16, BF16): Modern GPUs support multiple precision formats. FP32 (32-bit) is IEEE single-precision with 1 sign bit, 8 exponent bits, and 23 fraction bits (24-bit significand including the implicit leading 1). FP16 (16-bit half) has 1 sign, 5 exponent, and 10 fraction bits. BF16 (16-bit bfloat16) has 1 sign, 8 exponent, and 7 fraction bits (bfloat16 (BF16) range and precision). In summary: FP32 provides ~24 bits of precision, FP16 ~11 bits, and BF16 ~8 bits. BF16 sacrifices precision (only ~8 significant bits) to retain a wide exponent range (same 8-bit exponent as FP32) (bfloat16 floating-point format — Wikipedia). This means BF16 can represent very large or small magnitudes similar to FP32 (~10³⁸ range (bfloat16 (BF16) range and precision)), but with much coarser resolution between representable numbers.

Machine Epsilon (ε): Formally, machine epsilon is the smallest positive number such that 1.0 + ε > 1.0 in the given floating-point format. It effectively measures the precision of the format. For FP32, ε ≈ 2^(-23) ≈ 1.19×10^-7 (about 7 decimal digits of precision). For FP16, ε = 2^(-10) ≈ 9.76×10^-4, and for BF16, ε = 2^(-7) = 7.8125×10^-3 (bfloat16 (BF16) range and precision). In other words, BF16's precision is ~1/128 ≈ 0.008 (only ~2–3 decimal digits), far less precise than FP32. These epsilons quantify the rounding granularity: any real number must be rounded to the nearest representable value, introducing a relative error ≤ ε/2 (for round-to-nearest-even mode).

Rounding Errors: When performing arithmetic, results are generally rounded to the nearest representable value. If ⊕ denotes floating-point addition, we can model a single addition with an error term:

$$\text{fl}(a + b) = (a + b)\,(1 + \delta),$$

where $|\delta| \le \epsilon$ for that format (assuming no overflow/underflow) (Taming Floating-Point Sums | orlp.net). Here $\epsilon$ is the unit roundoff (on the order of the machine epsilon). This means the computed sum is the exact sum scaled by a factor $(1+\delta)$ very close to 1. For example, in BF16 $|\delta| \le 7.8\times10^{-3}$, so each addition can incur up to ~0.78% relative error; in FP32, $|\delta| \lesssim 10^{-7}$, or 0.00001% relative error per operation — five orders of magnitude more accurate.
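To make these epsilon values concrete, the short sketch below queries each format's machine epsilon and shows the "1 + ε" behaviour directly. It assumes a PyTorch install purely because torch exposes a bfloat16 CPU dtype; it is an illustration added here, not code from the original article.

```python
import torch

# Machine epsilon (gap between 1.0 and the next representable value) per format.
for dtype in (torch.float32, torch.float16, torch.bfloat16):
    eps = torch.finfo(dtype).eps
    one = torch.tensor(1.0, dtype=dtype)
    # Adding a full epsilon changes 1.0; adding something well below eps does not.
    print(f"{str(dtype):15s} eps={eps:.3e}  "
          f"1+eps>1: {bool(one + eps > one)}  "
          f"1+eps/4>1: {bool(one + eps / 4 > one)}")

# Values as reported by torch.finfo:
#   float32 eps ~ 1.19e-07, float16 eps ~ 9.77e-04, bfloat16 eps ~ 7.81e-03
```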
Underflow and Overflow: Finite precision also limits the range of representable numbers. If a result's magnitude is too small, it may underflow to 0 (or a subnormal number with reduced precision); if too large, it overflows to infinity. FP32 and BF16 share a similar exponent range (~1e–38 to 1e+38 for normal values (bfloat16 (BF16) range and precision)), so underflow/overflow thresholds are similar. FP16, with only 5 exponent bits, has a much narrower range (max ~6.55×10⁴, min normal ~6.10×10^-5) (Half-precision floating-point format — Wikipedia). In practice, gradient values rarely hit FP32/BF16 overflow, but FP16 could overflow if summing many large values. (For instance, summing ~10⁴ values of order 10¹ could exceed 6×10⁴.) Underflow is less of a concern in gradient summation since gradients are not extremely tiny in magnitude, but it could occur when subtracting nearly equal numbers (catastrophic cancellation yielding a tiny difference). In this discussion, we assume no overflow/underflow for simplicity, focusing on rounding error.

Key Point: FP32, FP16, and BF16 all follow the same arithmetic rules but with different precisions. BF16's coarser precision (ε~0.008) means it rounds much more aggressively than FP32 (ε~1e-7). Each arithmetic operation in BF16 can introduce a roughly 8× larger relative error than FP16, and ~65,000× larger than FP32 (bfloat16 (BF16) range and precision). These errors can accumulate when many operations (like summations) are performed.

Summation in Finite Precision Arithmetic

When we sum many numbers in floating-point, the rounding errors from each addition can accumulate. Let $x_1, x_2, \dots, x_N$ be $N$ real numbers, and consider their sum $S = x_1 + x_2 + \cdots + x_N$. A computer will perform this summation with a certain association (order of additions), each introducing a small error. For simplicity, assume a simple left-to-right summation:

1. Compute $s_2 = \text{fl}(x_1 + x_2)$ — the floating-point sum of $x_1$ and $x_2$. This yields $s_2 = (x_1 + x_2)(1 + \delta_2)$ with $|\delta_2| \le \epsilon$.
2. Add $x_3$: $s_3 = \text{fl}(s_2 + x_3) = (s_2 + x_3)(1 + \delta_3)$. We can expand this as:

$$s_3 = \big((x_1 + x_2)(1+\delta_2) + x_3\big)(1+\delta_3).$$

If we distribute and ignore second-order tiny terms ($\delta_2\delta_3$), this is approximately $(x_1+x_2+x_3) + \delta_2(x_1+x_2) + \delta_3(x_1+x_2+x_3)$. The error after 3 additions is about $\delta_2(x_1+x_2) + \delta_3(x_1+x_2+x_3)$.

Continuing this pattern, after summing $N$ terms the final result $s_N$ can be expressed as the exact sum plus accumulated error terms. To first order in the small $\delta_i$'s:

$$s_N \approx x_1 + x_2 + \cdots + x_N \;+\; \sum_{k=2}^{N} \delta_k \Big(\sum_{i=1}^{k} x_i\Big).$$

The total error $E = s_N - S$ is roughly $\sum_{k=2}^N \delta_k\, S_k$, where $S_k = \sum_{i=1}^k x_i$ is the partial sum after $k$ terms. Each $|\delta_k| \le \epsilon$, and $S_k$ is at most the final sum $S$ in magnitude (assuming all positive terms for a moment). Thus a rough worst-case bound on the error is:

$$|E| \;\le\; \sum_{k=2}^N |\delta_k|\,|S_k| \;\le\; \sum_{k=2}^N \epsilon\,|S| \;=\; (N-1)\,\epsilon\,|S|.$$

So in the worst case, the relative error in the sum could be as large as $(N-1)\epsilon$. This worst-case occurs if every rounding $\delta_k$ has the same sign and maximum magnitude (so errors compound rather than cancel) (Taming Floating-Point Sums | orlp.net). It's an extreme scenario (e.g. always rounding up) but gives an order of magnitude: naïve summation has $O(N\epsilon)$ worst-case error growth (Taming Floating-Point Sums | orlp.net).

- For FP32 (ε ≈ 1e-7): even summing $N = 10^6$ numbers, the worst-case relative error is $\approx 10^6 \times 10^{-7} = 0.1$ (10% error). For more modest $N$, the error bound is tiny (e.g. $N=1000$ gives <0.01% worst-case error).
- For BF16 (ε ≈ 7.8e-3): summing $N=1000$ could incur up to $1000 \times 7.8\times10^{-3} \approx 7.8$ (780%!) relative error in the worst case. Even $N=128$ gives ~$(128-1)\times 0.0078 \approx 0.99$ (~99% error). A concrete sketch of this saturation effect follows below.
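As an illustration of how a low-precision accumulator simply stops absorbing contributions, the snippet below repeatedly adds 1.0 in bfloat16 versus float32. It is a toy sketch (PyTorch is assumed only for its bfloat16 CPU support), not code from the article.

```python
import torch

N = 10_000
acc_bf16 = torch.tensor(0.0, dtype=torch.bfloat16)
acc_fp32 = torch.tensor(0.0, dtype=torch.float32)

for _ in range(N):
    # Each addition is rounded to the accumulator's own precision.
    acc_bf16 = acc_bf16 + 1.0
    acc_fp32 = acc_fp32 + 1.0

print(float(acc_bf16), float(acc_fp32))  # bf16 stalls at 256.0; fp32 reaches 10000.0
# Once the bf16 accumulator hits 256, the gap to the next representable value is 2,
# so 256 + 1 rounds back to 256 and every later contribution is silently dropped.
```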
In other words, with sufficiently many additions BF16 can theoretically lose almost all accuracy — the computed sum might be off by an order of magnitude. (This happens if many small contributions fall below the rounding step of a growing total.) Such worst-case scenarios are pathological (e.g. adding a long series of tiny numbers to a large number can "drop" the tiny ones entirely in low precision). In practice, rounding errors tend to be random in sign and partially cancel out rather than purely accumulate. A more realistic error estimate treats the $\delta_k$ as random zero-mean disturbances. If we assume each $\delta_k$ is an independent random variable uniformly distributed in $[-\epsilon/2, +\epsilon/2]$, then the error $E$ is a sum of $N-1$ small random terms $\delta_k S_k$. Its expected value $\mathbb{E}[E]$ would be 0 (unbiased, assuming symmetric rounding), and the variance adds up. As a rough approximation,

$$\mathrm{Var}(E) \approx \sum_{k=2}^N \mathrm{Var}(\delta_k S_k) \approx \sum_{k=2}^N \frac{\epsilon^2}{12} S_k^2.$$

If the partial sums $S_k$ are on the order of the final $S$ (say all terms are positive and of similar magnitude so $S_k \sim \frac{k}{N}S$), this gives $\mathrm{Var}(E) \approx \frac{\epsilon^2 S^2}{12} \sum_{k=2}^N (k/N)^2$. For large $N$, $\sum_{k=1}^N k^2 \approx N^3/3$, so roughly $\mathrm{Var}(E) \sim \frac{\epsilon^2 S^2}{12}\frac{N^3}{3N^2} = \frac{N \epsilon^2 S^2}{36}$. Then the standard deviation (RMS error) grows as:

$$\sigma_E \approx \frac{\epsilon |S|}{6}\sqrt{N}.$$

Thus the RMS error is ~$O(\sqrt{N}\,\epsilon)$ (rather than $N\epsilon$). For example, if $N=1000$ and using BF16 (ε ≈ 0.0078), we'd expect on the order of $\sigma_E \approx \frac{0.0078}{6}\sqrt{1000} \approx 0.041$ times the sum (≈4.1% error one-sigma). In FP32, $\sigma_E$ for $N=1000$ would be $\sim 6\times10^{-7}$ (about 0.00006% error). These stochastic estimates align with empirical observations that typical summation error is much smaller than the pessimistic bound (Taming Floating-Point Sums | orlp.net). In fact, with random data, the distribution of summation error tends to be approximately normal, centered near 0, with standard deviation on the order of that $\sqrt{N}$ scaling (Estimating Errors in Summing Arrays of Floating Point Numbers — Math, Numerics, and Software).
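A quick way to see the gap between the $O(N\epsilon)$ bound and the typical $O(\sqrt{N}\,\epsilon)$ behaviour is to measure the error of many random low-precision sums against a float64 reference. The sketch below does this with numpy's float16 (standing in for a generic low-precision format, since numpy has no bfloat16); it is an illustration, not a benchmark from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def rel_error_rms(n, trials=200, dtype=np.float16):
    errs = []
    for _ in range(trials):
        x = rng.uniform(0.0, 1.0, size=n)        # positive terms of similar magnitude
        exact = x.sum(dtype=np.float64)
        lowp = dtype(0.0)
        for v in x.astype(dtype):                # naive left-to-right accumulation
            lowp = dtype(lowp + v)
        errs.append(abs(float(lowp) - exact) / exact)
    return np.sqrt(np.mean(np.square(errs)))

for n in (100, 400, 1600):
    print(n, rel_error_rms(n))
# The RMS relative error grows roughly like sqrt(N)*eps, far below the (N-1)*eps bound.
```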
In short:

- Worst-case error: grows linearly with $N$ (summing many numbers can accumulate significant error in low precision).
- Expected error: grows more like $\sqrt{N}$ (errors partially cancel out on average).
- Summation order: The order in which we sum can affect error. A pairwise (tree) reduction or Kahan compensated summation can reduce worst-case error. For instance, summing in a balanced binary tree yields error $O((\log N)\,\epsilon)$ (Taming Floating-Point Sums | orlp.net), and Kahan's algorithm can reduce error to $O(N \epsilon^2)$ (Taming Floating-Point Sums | orlp.net), nearly eliminating first-order error. However, in distributed frameworks, the reduction is usually a simple pairwise or sequential accumulation across processes, without full compensation. We will assume the straightforward summation model (as is effectively the case with standard all-reduce operations, which perform a sequence of adds).

Takeaway: Finite precision means summing many values introduces a small round-off error at each addition. In high precision (FP32), these errors are tiny (millionths or less) and usually negligible even over thousands of elements. In low precision (BF16/FP16), each addition is much less accurate, so errors can build up to a noticeable fraction of the result when many terms are summed. Next, we examine how this impacts distributed gradient reduction.

Gradient Reduction in Distributed Training

In distributed data-parallel training (DDP or FSDP in PyTorch), each GPU worker computes gradients on its local mini-batch. These local gradients $g_1, g_2, \dots, g_N$ (for $N$ GPUs) need to be aggregated so that each worker can update with the gradient as if computed on the combined larger batch. The aggregation is typically an all-reduce operation: all GPUs sum their gradients together (and usually each GPU then gets the summed result, or the averaged result). Mathematically, if $G = \sum_{i=1}^N g_i$ is the global gradient (exact sum), we want each GPU to obtain $G$ (or $G/N$ if averaging). This summation across devices is done by the distributed communication library (NCCL in PyTorch), either via a ring-allreduce or reduce-scatter + all-gather. Regardless of the algorithm, every element of the gradient tensor undergoes a summation of $N$ contributions — conceptually, $G[j] = \sum_{i=1}^N g_i[j]$ for each coordinate $j$.

Reduction Precision: A crucial detail is in what precision this summation is performed. PyTorch's DDP by default uses FP32 gradients for the all-reduce, even when using mixed precision training, to ensure numerical stability (Does NCCL allreduce use fp16? — mixed-precision — PyTorch Forums). That is, each $g_i$ is kept as FP32 (or cast to FP32 before communication if it was in lower precision), and the sum $G$ is computed in FP32. In FSDP, there is a configurable reduce_dtype; when training in BF16, one might be tempted to also reduce in BF16 to save bandwidth. However, this means the summation of gradients across GPUs happens in BF16 precision. The difference can be dramatic: summing in FP32 vs summing in BF16 could yield slightly different $G$ due to the rounding errors discussed. Let's formalize the reduction as an addition sequence:

$$\hat{G}_{\text{BF16}} = \text{fl}_{\text{BF16}}(g_1 + g_2 + \cdots + g_N), \qquad \hat{G}_{\text{FP32}} = \text{fl}_{\text{FP32}}(g_1 + g_2 + \cdots + g_N),$$

where $\text{fl}_{p}(\cdot)$ denotes performing the sum in precision $p$. Both are attempting to compute $G = \sum_{i=1}^N g_i$, but with different rounding behaviors. We can denote the reduction error in each case as $E_{\text{BF16}} = \hat{G}_{\text{BF16}} - G$ and $E_{\text{FP32}} = \hat{G}_{\text{FP32}} - G$. From our earlier analysis, if the gradients are aggregated by summing $N$ values:

- $|E_{\text{FP32}}| \lesssim (N-1)\,\epsilon_{32}\,|G|$ (worst-case), with $\epsilon_{32} \approx 1.19\times10^{-7}$. In practice $E_{\text{FP32}}$ is negligible for typical $N$ (even $N=1000$ gives $<10^{-4}|G|$ worst-case).
- $|E_{\text{BF16}}| \lesssim (N-1)\,\epsilon_{\text{BF16}}\,|G|$, with $\epsilon_{\text{BF16}} = 7.8\times10^{-3}$. This can be much larger. For example, with $N=128$ GPUs, the worst-case $|E_{\text{BF16}}| \le 127 \times 7.8\times10^{-3}\,|G| \approx 0.99|G|$. Even if errors partly cancel, the scale of rounding noise in BF16 reduction is orders of magnitude higher than in FP32 (the simulation sketch below illustrates the gap).
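The following sketch simulates that comparison on one machine: $N$ per-worker gradient vectors are summed sequentially in bfloat16, in float32, and in float64 as the reference. It only mimics the arithmetic of an all-reduce (no actual communication), and the gradient statistics are invented for illustration.

```python
import torch

torch.manual_seed(0)
N, D = 384, 4096                       # simulated number of workers and gradient size
grads = 0.01 * torch.randn(N, D, dtype=torch.float64) + 0.02  # per-worker gradients

exact = grads.sum(dim=0)               # float64 reference for the "true" sum G

def reduce_in(dtype):
    acc = torch.zeros(D, dtype=dtype)
    for g in grads:                    # sequential accumulation, as in a ring all-reduce
        acc = acc + g.to(dtype)
    return acc.to(torch.float64)

for dtype in (torch.float32, torch.bfloat16):
    err = (reduce_in(dtype) - exact).norm() / exact.norm()
    print(dtype, f"relative reduction error ~ {err.item():.2e}")
# Typically the bfloat16 accumulation error is several orders of magnitude larger
# than the float32 one for the same N.
```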
Another way to see this: BF16 has ~8 bits of precision, so about 2–3 decimal digits. If you sum, say, 100 contributions of roughly 0.1, the true sum is ~10. But near a total of ~10, BF16 can only represent values spaced about 0.06 apart (the last-bit resolution at that exponent), so each 0.1 contribution is added with substantial rounding error, and any contribution much below ~0.03 stops registering at all. In contrast, FP32 can represent differences of about one part in 10^7 around a number ~10, easily capturing all those 0.1 contributions accurately. In effect, summing many values in BF16 can "saturate" — additional small contributions stop changing the accumulated sum once the sum grows large relative to ε. If $N$ is so large that each $g_i$ is below the noise floor of the grand total, those $g_i$ may be partially or fully lost in BF16 summation.

All-Reduce Implementation Details: The actual summation across $N$ GPUs is done by algorithms (ring or tree reductions) that add subsets of values at a time. For instance, ring all-reduce breaks the vector into chunks and each chunk passes through a sequence of $N-1$ additions. A tree reduction would do $\log_2 N$ pairwise adds per value. In either case, each element of the gradient still undergoes roughly $N-1$ additions to incorporate all contributions (in a ring, each chunk accumulates one contribution from each of the other $N-1$ ranks). Thus, the complexity (and error accumulation) is similar to summing $N$ numbers sequentially — though a tree can reduce the depth of accumulation to $\sim\log N$, it doesn't fully eliminate error, and ring algorithms essentially simulate sequential addition for each element. So it is valid to approximate that each gradient element's summation error behaves on the order of $N\epsilon$. Pairwise summation might improve the constant factors (worst-case error $\sim O((\log N)\,\epsilon)$ (Taming Floating-Point Sums | orlp.net)), but for large $N$ and coarse precision, even $O(\log N)$ can be significant if $\epsilon$ is large.

Example — 128 vs 384 GPUs: Suppose each GPU's gradient contribution is $g_i$. If $N=128$ and all $g_i$ are of similar magnitude, each is about $1/128 \approx 0.0078$ of the total $G$. BF16's epsilon is also 0.0078 — right at this threshold. This means that when adding the last few contributions, the BF16 accumulator might not change (the increment is at the level of the last representable digit). At $N=384$ (3× larger), each $g_i$ is ~0.26% of $G$; BF16 cannot reliably add values that are ~0.26% of a large total without significant rounding. Indeed, users observed that training which was stable on 96 GPUs (12×8) became unstable on 384 GPUs (48×8) when using BF16 reductions, and switching the reduce precision to FP32 fixed the issue (FP32 accumulation of bf16 gradients in FSDP · Issue #106395 · pytorch/pytorch · GitHub). This suggests that beyond a certain world size, BF16 summation error became large enough to harm the optimization process. FP32, with far smaller ε, did not have this issue. In formula terms, when $N$ is large relative to $1/\epsilon_{\text{BF16}} \approx 128$, some fraction of the $N$ contributions effectively gets lost in BF16 rounding. By contrast, $1/\epsilon_{\text{FP32}} \approx 8.4\times10^6$ — one could sum millions of values in FP32 before hitting a similar loss of significance. FP16 (ε ≈ 0.00098, $1/\epsilon_{\text{FP16}} \approx 1024$) lies between the two: summing a few hundred to a thousand values is generally fine, but beyond that (or if values vary drastically in magnitude) FP16 would start dropping increments.
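One common mitigation, and the one the FSDP issue above asks for, is to keep the communicated gradients in BF16 but carry the running sum in FP32. The sketch below shows that idea in the simplest possible form, using a hypothetical `fp32_accumulate` helper over already-gathered BF16 shards; it is not the actual FSDP implementation.

```python
import torch

def fp32_accumulate(bf16_grads):
    """Sum a list of bfloat16 gradient tensors in a float32 accumulator.

    The inputs stay in bf16 (cheap to ship over the wire); only the running
    sum is held in fp32, so each addition is rounded at fp32 granularity.
    """
    acc = torch.zeros_like(bf16_grads[0], dtype=torch.float32)
    for g in bf16_grads:
        acc += g.to(torch.float32)     # upcast each contribution before adding
    return acc                         # caller may cast back to bf16 afterwards

# Toy comparison against accumulating directly in bf16:
torch.manual_seed(0)
shards = [(0.01 * torch.randn(1024) + 0.02).to(torch.bfloat16) for _ in range(384)]
ref = torch.stack([s.to(torch.float64) for s in shards]).sum(dim=0)

acc_bf16 = torch.zeros(1024, dtype=torch.bfloat16)
for s in shards:
    acc_bf16 = acc_bf16 + s

for name, total in [("bf16 accumulator", acc_bf16),
                    ("fp32 accumulator", fp32_accumulate(shards))]:
    err = (total.to(torch.float64) - ref).norm() / ref.norm()
    print(name, f"{err.item():.2e}")
```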
Variance of Summation Error: If we model each reduction as adding a small random rounding noise, the noise variance in BF16 reduction grows with $N$. As derived, $\mathrm{Var}(E_{\text{BF16}}) \propto N\,\epsilon_{\text{BF16}}^2$, so the standard deviation of the error is $\propto \sqrt{N}\,\epsilon_{\text{BF16}}$. That means if we double the number of GPUs (doubling $N$), the typical rounding error increases by about $\sqrt{2}\approx 1.414$ (41% higher) — not linear, but still growing. Meanwhile, the signal (the true sum $|G|$) grows linearly with $N$ if each $g_i$ has similar magnitude (assuming each GPU contributes a gradient of roughly the same scale). Thus the relative rounding error might shrink roughly as $\sqrt{N}\,\epsilon$ divided by $N$ (if summing identical magnitudes). Note that averaging does not change this: if the all-reduce produces $\hat{G}$ and we then divide by $N$ for the average, the relative error in the average is $|E|/|G|$, the same as for the sum (dividing the error and the true sum by $N$ rescales both equally).

Another perspective is to treat rounding error as adding a small noise to the gradient. The noise-to-signal ratio in BF16 reduction will increase with $N$ unless the signal itself (the exact sum) also increases. If the global batch size is fixed while increasing $N$ (so each GPU has a smaller batch), the total gradient $G$ stays ~constant in expectation, but $N$ increases — in that scenario, BF16 reduction error grows with $\sqrt{N}$ (RMS) while $|G|$ stays constant, so the fractional noise grows. If the global batch grows with $N$ (common when scaling up training), $|G|$ grows with $N$, partially offsetting the per-sum error growth. Roughly, if each GPU sees a batch of size $B$, $G$ often scales ~linearly with $N$ (since more data → a proportionally larger total gradient). Then $|G| \propto N$ while $\sigma_E \propto \sqrt{N}\,\epsilon$. The relative RMS error $\sigma_E/|G| \propto \frac{\sqrt{N}\,\epsilon}{N} = \frac{\epsilon}{\sqrt{N}}$. In that idealized case, increasing $N$ reduces the relative rounding error (because the sum grows faster). However, that assumes perfectly coherent additions (all contributions adding constructively). In practice, some components of the gradient may not scale linearly with $N$ due to noise or differing data — and those components could suffer cancellation or inconsistent growth, making them more vulnerable to rounding error. For example, if gradients are noisy, the variance in $g_i$ across GPUs might mean some coordinates of $G$ grow like $\sim N$ in magnitude (signal-dominated), while others grow slower (partially canceling, noise-dominated). In the latter case, rounding error can be significant: if the true sum $G[j]$ is small due to cancellation (i.e., different GPUs had positive/negative contributions), the rounding error might dominate that coordinate. Lower precision reduction could then produce a completely wrong sign or value for that component. FP32's high precision can capture subtle cancellations accurately, whereas BF16 might round them to zero. (Notably, one reason frameworks kept FP32 gradients is to handle cases of cancellation or tiny gradient updates accurately.)
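The cancellation hazard is easy to reproduce in isolation: below, half of the simulated workers push a coordinate up and half push it down by almost the same amount, so the true sum is a small drift term. The numbers are invented purely to illustrate the risk of losing (or flipping) that drift in low precision.

```python
import torch

torch.manual_seed(1)
N = 256
# Workers disagree: +1 or -1 plus a tiny shared drift, so the exact sum is small.
signs = torch.tensor([1.0 if i % 2 == 0 else -1.0 for i in range(N)],
                     dtype=torch.float64)
contribs = signs * 1.0 + 1e-3          # exact sum = N * 1e-3 = 0.256

exact = contribs.sum().item()

def reduce_in(dtype):
    acc = torch.tensor(0.0, dtype=dtype)
    for c in contribs:
        acc = acc + c.to(dtype)
    return float(acc)

print("exact      :", exact)
print("fp32 reduce:", reduce_in(torch.float32))
print("bf16 reduce:", reduce_in(torch.bfloat16))
# fp32 lands very close to 0.256; bf16 misses it entirely because 1.0 + 0.001
# already rounds back to 1.0 at bf16 precision, wiping out the shared drift.
```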
The averaged gradient has variance $\sigma^2/N$ (the noise reduces with larger $N$). Now add rounding error: $\hat{G} = G + E_{round}$. If $E_{round}$ has variance $\approx c\,N\,\epsilon^2 M^2$ as earlier (where $M$ is the typical scale of the summands; take $M \approx 1$ for simplicity, so $\approx c\,N\,\epsilon^2$), then after dividing by $N$ to get the average, $\mathrm{Var}(E_{round}/N) \approx c\,\epsilon^2/N$. So the noise from gradient sampling is $\sigma^2/N$, and the noise from BF16 rounding is on the order of $\epsilon^2/N$ (times a constant factor dependent on gradient magnitudes). For FP32, $\epsilon^2$ is so tiny (~1e-14) that rounding noise is essentially zero compared to any realistic $\sigma^2/N$. For BF16, $\epsilon^2 \approx (7.8\times10^{-3})^2 \approx 6.1\times10^{-5}$. If $\sigma^2$ (the intrinsic gradient variance per worker) is, say, on the order of 0.01 (just as an example), then both terms shrink like $1/N$ and the sampling noise stays roughly two orders of magnitude above the rounding noise; the rounding term becomes dominant only if $\sigma^2$ itself falls to the order of $\epsilon^2 M^2$. In most practical cases, gradient noise (from finite data) is orders of magnitude larger than $10^{-5}$, so BF16 rounding noise remains the smaller effect unless the sampling noise is unusually small or the gradient component’s true value is extremely small relative to the summands.

That said, the problematic scenario is when deterministic cancellation occurs: e.g., two GPUs have gradients +X and –X on some component (perhaps due to different data producing opposing gradients). The true sum is 0, but BF16 might round each to a value where the subtraction isn’t exact. For instance, if $g_1 = 1.0$ and $g_2 = -1.0$ (exactly canceling), the sum is 0. FP32 would sum $(1 + -1)$ to exactly 0. BF16 represents 1.0 and -1.0 exactly as well — that would still sum to 0 exactly. Even 1.003 and -1.003 (true sum 0), each rounded by BF16 to roughly two decimal digits of precision, would give something like 1.00 and -1.00, summing to 0.00 — still fine here. The real danger is a long chain of partial sums with rounding: the cancellation then happens between values that have already absorbed rounding error, and BF16 can leave a spurious residual or bias. Generally, BF16 rounding is unbiased over many operations, but it adds random noise that FP32 would not. In summary, reducing gradients in lower precision inserts an additional source of noise/error into the training process. FP32’s noise is so minuscule it can be ignored; BF16/FP16’s noise is larger and in some cases can meaningfully distort the summed gradient. The difference becomes more pronounced as the number of summed values grows.

Empirical and Theoretical Comparison

To illustrate the impact, consider a Gaussian model for gradients: each GPU’s gradient element $g_i \sim \mathcal{N}(\mu, \sigma^2)$ (with $\mu$ the true gradient and $\sigma^2$ the variance due to data sampling on that GPU). The exact global sum is $G \sim \mathcal{N}(N\mu, N\sigma^2)$ and the averaged gradient is $\bar{g} = G/N \sim \mathcal{N}(\mu, \sigma^2/N)$. Now suppose we sum in BF16. We can treat the BF16 rounding error as adding an independent noise $\eta_{round}$ with $\mathbb{E}[\eta_{round}]=0$ and $\mathrm{Var}(\eta_{round}) \approx \alpha\,N\,\epsilon_{BF16}^2 M^2$, where $M$ is a scale related to the magnitude of gradients (perhaps on the order of $|\mu|$ or typical $|g_i|$), and $\alpha$ is some constant factor from the summation order (our earlier derivation yielded $\alpha \approx 1/36$ for purely random rounding, but let’s keep it symbolic). After averaging, the rounding noise contribution is $\eta_{round}/N$ with variance $\approx \alpha\,\epsilon_{BF16}^2 M^2/N$.
The total variance of the BF16-reduced averaged gradient is approximately:

$$\mathrm{Var}(\hat{g}_{BF16}) \;\approx\; \frac{\sigma^2}{N} \;+\; \alpha\,\frac{\epsilon_{BF16}^2 M^2}{N}.$$

The first term is the natural variance from having $N$ samples; the second is the numeric variance from the reduction. As $N$ grows, $\sigma^2/N$ decays (by the law of large numbers, averaging more samples reduces noise), but the rounding term is $\propto 1/N$ as well (more summation steps add more error, but we also divide by $N$). Notably, both terms scale as $1/N$, meaning that even at very large $N$ the BF16 rounding does not wash out relative to the sampling noise — its contribution stays at $\alpha\,\epsilon_{BF16}^2 M^2/N$, a fixed fraction of the first term, and it dominates only if $\sigma^2$ falls below roughly $\alpha\,\epsilon_{BF16}^2 M^2$. However, for realistic $N$ in deep learning (say up to a few thousand GPUs) and typical gradient magnitudes, $\sigma^2/N$ is likely much larger than $\epsilon_{BF16}^2 M^2/N$ unless $\sigma$ is extremely small or $M$ (the gradient scale) is large. One can solve for when they are equal:

$$\frac{\sigma^2}{N} \sim \frac{\epsilon_{BF16}^2 M^2}{N} \;\implies\; \sigma^2 \sim \epsilon_{BF16}^2 M^2.$$

Taking $\epsilon_{BF16} \approx 7.8\times10^{-3}$, this means $\sigma \sim 7.8\times10^{-3} M$. In other words, if the noise in individual gradients is on the order of 0.78% of the gradient magnitude, then BF16 rounding noise is of similar order. Many gradients in deep learning have much higher stochastic noise (due to mini-batch sampling) than 0.78% relative error, so BF16 rounding is usually a smaller effect. However, as one increases batch size (reducing $\sigma$) or scales to where $\mu$ is very accurately estimated, BF16 noise can become a limiting factor. This is analogous to how, in extremely large-batch training or very long training runs, even tiny biases or noise sources can matter.

From another angle, some experimental reports show that reduction precision usually doesn’t affect model convergence until a threshold. The anecdote from PyTorch FSDP was that using BF16 reduce was fine on 96 GPUs, but at 384 GPUs the training diverged (FP32 accumulation of bf16 gradients in FSDP · Issue #106395 · pytorch/pytorch · GitHub). This suggests that up to a certain world size, the extra noise from BF16 was tolerable (perhaps on the order of the inherent gradient noise or within the optimizer’s tolerance). But beyond that, the error compounded enough to destabilize. Indeed, 384 is 3× 128, well beyond the rule-of-thumb $1/\epsilon \approx 128$ for BF16, meaning some gradient contributions were being lost. Conversely, if gradients are very large and mostly additive (no cancellation), BF16 might still do okay because the relative error stays bounded. Problems arise particularly when many contributions or partial cancellations occur.

Variance Growth with World Size: In summary, if the global batch grows with world size, the relative impact of rounding might decrease (since the signal grows), but if the global batch is fixed or the problem inherently requires precise cancellation, the impact grows. World size $N$ increases the number of addition operations roughly linearly, which increases the total rounding variance $\propto N$ (absolute error). If $|G|$ grows $\propto N$ too, the purely additive case sees relative error roughly constant or even decreasing in the RMS sense (because $\sqrt{N}\epsilon / N = \epsilon/\sqrt{N}$). But the worst-case bound $(N-1)\epsilon$ increases with $N$.
This means the tail risk of something going wrong (all errors aligning or hitting a sensitive cancellation) grows with more processors. Thus, larger distributed training runs face a greater risk if using low precision reductions.

Variance and Batch Size: Increasing batch size (per GPU or overall) reduces the stochastic gradient variance $\sigma^2$, which can make the optimizer more sensitive to small numerical errors (since the training is less noisy inherently). In high-batch low-noise regimes, one must be careful that numerical error doesn’t become the new dominant noise. Using FP32 reduction ensures the numeric error stays negligible. Using BF16 reduction might inject a fixed noise floor of ~1e-3 relative error. If your gradients are accurate to 1e-4 (due to huge batch), a 1e-3 noise could indeed degrade convergence.

Does Reduction Precision Matter in Practice?

Theoretical Answer: Yes — reduction precision can matter when the number of summed values is large or when high accuracy is needed. Mathematically, summing in BF16 is far less accurate than summing in FP32. We derived that worst-case error grows as $O(N \epsilon)$ (Taming Floating-Point Sums | orlp.net), and since $\epsilon_{BF16}$ is ~$65,536\times$ larger than $\epsilon_{FP32}$ (bfloat16 (BF16) range and precision), the BF16 reduction’s worst-case error is that much larger. Even average-case errors grow as $O(\sqrt{N}\,\epsilon)$, so the gap remains huge (BF16’s random error ~0.01×√N vs FP32’s ~1e-7×√N). Therefore, in principle, reduction precision absolutely matters for correctness — using a lower precision can introduce non-negligible error.

Pragmatic Answer: In many real training scenarios, small rounding errors are tolerated. Stochastic gradient descent is robust to a bit of noise; in fact, techniques like Dropout or quantized training intentionally add noise. A BF16 all-reduce effectively adds a tiny random perturbation to gradients. For moderate $N$ (e.g. ≤ 32 or 64 GPUs), this perturbation (perhaps on the order of <1% of the gradient) often doesn’t noticeably hurt convergence, especially if learning rates and other settings are tuned with some slack. This is why some systems successfully use FP16 or BF16 communication to save bandwidth and still train deep networks. Empirical studies have shown models can reach the same accuracy with FP16 gradient aggregation in many cases, as long as loss scaling or dynamic scaling is used to avoid underflow of tiny gradients. However, as we push to larger scale, or when training is less forgiving (small learning rates, very low inherent noise, or models that are sensitive to exact gradient sums), reduction precision becomes critical. The PyTorch developers by default chose FP32 gradient syncing (Does NCCL allreduce use fp16? — mixed-precision — PyTorch Forums) because it’s the safe option — it ensures numerical error in gradient summation is negligible compared to other sources of error. This default avoids any surprise divergence issues due to precision. In contrast, if one force-casts the model to FP16 (older “O2” mixed precision strategies) so that gradients are FP16, one must be careful: as one forum note cautions, using FP16 gradients (instead of AMP’s FP32 gradients) can lead to instability (Does NCCL allreduce use fp16? — mixed-precision — PyTorch Forums).
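To make the “safe default” concrete, here is a minimal configuration sketch, not taken from the article, of how one might express this with PyTorch FSDP’s MixedPrecision policy: parameters and buffers stay in BF16 for compute, while reduce_dtype forces the gradient all-reduce to run in FP32. The model definition and process-group initialization are assumed to exist elsewhere.

```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision

# Conservative policy: BF16 compute, FP32 gradient reduction.
# Costs ~2x gradient-sync bandwidth but keeps summation error negligible.
safe_policy = MixedPrecision(
    param_dtype=torch.bfloat16,   # dtype of params used in forward/backward
    reduce_dtype=torch.float32,   # dtype used for the gradient all-reduce
    buffer_dtype=torch.bfloat16,  # dtype of buffers (e.g. norm statistics)
)

# Faster but riskier: reduce in BF16 to halve communication volume.
# Per the reports above, watch for divergence at large world sizes.
fast_policy = MixedPrecision(
    param_dtype=torch.bfloat16,
    reduce_dtype=torch.bfloat16,
    buffer_dtype=torch.bfloat16,
)

def wrap_model(model: torch.nn.Module, conservative: bool = True) -> FSDP:
    """Wrap `model` with FSDP; requires an initialized process group."""
    policy = safe_policy if conservative else fast_policy
    return FSDP(model, mixed_precision=policy)
```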
Real-world training reports confirm that using BF16 for inter-GPU reduction worked until a point where instability was encountered, which was resolved by switching back to FP32 (FP32 accumulation of bf16 gradients in FSDP · Issue #106395 · pytorch/pytorch · GitHub).

Trade-offs: The trade-off is performance vs precision. FP32 communication uses twice the bandwidth of FP16/BF16. In multi-node training, communication is often a bottleneck, so reducing precision cuts time (and memory) for gradient exchange. If it doesn’t hurt accuracy, that’s a win. Frameworks like NVIDIA’s NCCL support FP16 all-reduce, and researchers have used it successfully, especially on ~tens of GPUs. For very large clusters or very long training runs, the risk accumulates. One mitigation is hybrid accumulation: for example, one might communicate in BF16 but accumulate partial sums in FP32 on each node. The idea (used in some DeepSpeed configurations) is to cast gradients to BF16 for transit, but perform the actual summation in higher precision (FP32 accumulation of bf16 gradients in FSDP · Issue #106395 · pytorch/pytorch · GitHub). In practice, NCCL doesn’t natively do mixed-mode reduction — it reduces in the datatype given (so BF16 in, BF16 out) (FP32 accumulation of bf16 gradients in FSDP · Issue #106395 · pytorch/pytorch · GitHub). That means if we send BF16 gradients, the sums are done in BF16 on the GPU communicators, with all the attendant rounding error. A true “BF16 with FP32 accumulation” would require collecting gradients, upcasting to FP32 to sum, then perhaps downcasting again — which is not currently standard. Instead, what PyTorch FSDP does is accumulate across micro-batches in FP32 (if you do gradient accumulation steps), but the inter-GPU reduction is whatever reduce_dtype is set to (FP32 accumulation of bf16 gradients in FSDP · Issue #106395 · pytorch/pytorch · GitHub). So if reduce_dtype=None (the default), it will use the param’s dtype (often FP32 or BF16 depending on setup). Users have found setting reduce_dtype=torch.float32 can stabilize large runs at the cost of extra bandwidth.

Key Takeaways:

Reduction precision can affect the summed gradient accuracy: Summing in FP32 yields virtually exact results (relative error on the order of 1e-7, negligible), whereas summing in BF16 can introduce relative errors on the order of 0.1%–1% (even average-case) for large numbers of values, and worst-case could drop or mis-estimate contributions entirely (FP32 accumulation of bf16 gradients in FSDP · Issue #106395 · pytorch/pytorch · GitHub). This can translate to training divergence if the model is sensitive to those errors.

For small-to-moderate scale (typical DDP on up to a few dozen GPUs), using BF16 or FP16 for gradients often “does not matter” much for final accuracy — the natural gradient noise and SGD stochasticity dominate a 0.5% rounding noise. Many large models have been trained with mixed precision and low-precision gradient communication successfully. So in that sense, reduction precision might not visibly matter, especially if you’re safely within the regime where $N\epsilon \ll 1$. For example, on 8 GPUs, the BF16 sum error bound is $\approx 7 \times 0.0078 = 0.054$ (5.4% worst-case), but the typical error is maybe ~1–2%, which most training can absorb.

For very large scale (hundreds of GPUs) or extremely low-noise training (huge batch sizes), reduction precision does matter. As $N$ grows, the probability of hitting adverse rounding or the accumulation of small biases grows.
The cost of a divergence or loss spike late in training is far worse than the cost of extra communication, so usually one opts for safety: use FP32 reduce. The PyTorch FSDP team acknowledges half-precision accumulation is “often not super stable” (FP32 accumulation of bf16 gradients in FSDP · Issue #106395 · pytorch/pytorch · GitHub), and defaults are chosen accordingly.

Practical guidance: If using FSDP/DDP with mixed precision (FP16/BF16 forward), it’s recommended to keep the gradient all-reduce in FP32 (the default in DDP). In FSDP, if you experiment with reduce_dtype=torch.bfloat16 for speed, monitor your training closely – it might work on smaller scales, but if you see unexplained divergence on larger world sizes, consider reverting to FP32 reduction (FP32 accumulation of bf16 gradients in FSDP · Issue #106395 · pytorch/pytorch · GitHub). The trade-off is roughly doubling gradient sync time vs. risking numerical issues.

Summary: Reduction precision can be a silent source of error. FP32 gives a virtually exact reduction; BF16 gives an inexact one. Whether that error matters depends on scale and tolerance. In first-principles terms, we have shown the error grows with the number of terms and is proportional to the machine epsilon of the reduction datatype. Raising that epsilon by a factor of ~65,000 (FP32→BF16) is acceptable only if the other noise sources in training remain much larger than the correspondingly larger round-off error. At small scales, they are; at huge scales, they might not be. Thus, “does reduction precision matter?” — mostly it doesn’t, until it suddenly does. It’s safer to preserve precision in reductions to avoid hitting that cliff. Finally, it’s worth noting that deep learning is somewhat noise-tolerant — even if BF16 reduction introduces a bit of error, the SGD process might simply treat it as additional noise and still converge (some researchers even intentionally add gradient noise for generalization). But if the noise is too large or systematic, it could lead to slower convergence or instability. As hardware and algorithms evolve, techniques like error-compensated quantization or block-wise summation might allow low-precision communication with high accuracy. For now, the conservative approach (and PyTorch’s default) is to do gradient summation in full precision to eliminate this concern.

References: High-precision vs low-precision accumulation and its stability is discussed in PyTorch forums and development: it’s “well known that accumulation operations are often not super stable in half-precision” (FP32 accumulation of bf16 gradients in FSDP · Issue #106395 · pytorch/pytorch · GitHub). Empirical evidence of BF16 reduction causing instability at large scale and the switch to FP32 to fix it is given in a user report (FP32 accumulation of bf16 gradients in FSDP · Issue #106395 · pytorch/pytorch · GitHub). The magnitude of machine epsilon for FP32, FP16, BF16 (1e-7, 1e-3, 7.8e-3 respectively) is provided in John D. Cook’s comparison (bfloat16 (BF16) range and precision). These values underpin why summation in BF16 has much higher rounding error. Worst-case error growth $O(N\epsilon)$ and typical error behavior are described in numerical analysis literature (Taming Floating-Point Sums | orlp.net), confirming our derivations.
350
On Traveling Wave Solutions of Linear and Nonlinear Wave Models Seeking Solitary Waves A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science Department of Mathematical Sciences College of Arts and Sciences University of Cincinnati, March 2023 Author: Mounira Moussa Degrees: B.A. Mathematics, 2021, University of Cincinnati Chair: Deniz Bilman, Ph.D. Committee: Robert J. Buckingham, Ph.D. ii Abstract This thesis concerns traveling wave solutions of linear and nonlinear partial differential equations modeling wave propagation in different physical media. We show nonexistence of so-called solitary traveling wave solutions for a variety of linear models of wave propagation and we discuss the theory of linear waves in order to explain why these linear models do not support solitary waves. Motivated by this, we turn our attention to the famous Korteweg-de Vries equation, a nonlinear partial differential equation modeling propagation of waves with long wavelength in shallow water. We construct solitary wave solutions of the Korteweg-de Vries equation. We also introduce Jacobi elliptic functions and provide a construction of periodic traveling wave solutions of the Korteweg-de Vries equation. Finally, we recover the solitary wave solution from a degeneration of the periodic wave, in a limit where the period tends to infinity. iii © 2023 by Mounira Moussa. All rights reserved. Acknowledgments It has been a wonderful journey that could not have been possible without the support I had from my surroundings. I would like to express my deepest gratitude to my advisor, Dr. Deniz Bilman. Words cannot demonstrate enough his continuous help and expertise throughout this thesis. It was a great honor to work under someone who is knowledgeable and dedicated like him. I also would like to thank the Graduate Program Director, Dr. Robert Buckingham, for his support throughout my graduate studies. Thanks should also go to the staff and graduate students of the Department of Mathematical Sciences, University of Cincinnati, who I shared great memories with along the years. Finally, I thank God for all His graces and for giving me such an understanding and supportive family. vContents Abstract iii Copyright iv Acknowledgments vList of Figures viii 1 Introduction 1 1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.2 Organization of the thesis and results . . . . . . . . . . . . . . . . . . . . 3 2 Linear Wave Theory and Traveling Waves 5 2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52.2 The Wave equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62.2.1 D’Alembert’s solution . . . . . . . . . . . . . . . . . . . . . . . . . 92.2.2 Traveling waves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 2.2.3 Standing waves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 2.3 A linear modification of the wave equation: Klein-Gordon equation . . . . 15 2.3.1 Traveling waves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 vi 2.4 Airy’s dispersive wave equation . . . . . . . . . . . . . . . . . . . . . . . . 21 2.4.1 Traveling waves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 2.5 The Heat equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 2.5.1 Traveling waves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 2.6 Linear wave theory and dispersion . . . . . . . . . . . . . . . . . . . . . . 
28 2.6.1 Superposition perspective: the Fourier transform . . . . . . . . . . 32 2.6.2 Revisiting the linear wave models . . . . . . . . . . . . . . . . . . . 35 3 Nonlinear Waves and the Korteweg-de Vries Equation 37 3.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 3.2 Korteweg & de Vries: Seeking solitary traveling waves . . . . . . . . . . . 40 3.3 Other types of traveling waves? . . . . . . . . . . . . . . . . . . . . . . . . 44 3.4 Jacobi elliptic functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 3.5 Korteweg-de Vries equation revisited: Wavetrains . . . . . . . . . . . . . . 52 Bibliography 61 vii List of Figures 2.1 Time evolution of u(x, t ) = sech (x − ct ), a solitary wave solution of the wave equation (2.2.1). Here c = 1 . Top pane: t = 0 , bottom pane: t = 8 . . 13 2.2 Time evolution of u(x, t ) = cos (x + ct ), a solitary wave solution of the wave equation (2.2.1) . Here c = 1 . Top pane: t = 0 , bottom pane: t = 8 .See how to position of the peak marked with red changes. . . . . . . . . . 13 2.3 A standing wave solution u(x, t ) = cos (x − ct ) + cos (x + ct ) of (2.2.1) with c = 1 at different times t = kπ 16 , k ∈ Z. Opacity values of the graphs are correlated with the amplitude at a given time. . . . . . . . . . . . . . . . . 15 2.4 Sample of unbounded traveling wave solutions (2.3.9) of the Klein-Gordon equation (2.3.1) with c = 12 in (2.3.9) . Top row: C1 = 1 and C2 = 0 ,bottom row: C1 = 1 and C2 = 20 . Left column shows the profile u(x, t ) at t = 0 , right column shows the profile at t = 10 . . . . . . . . . . . . . . . . 17 2.5 Sample of bounded and periodic traveling wave solutions (2.3.14) of the Klein-Gordon equation (2.3.1) with c = 32 , C1 = 35 and C2 = 45 in (2.3.14) .Top-row: the profile u(x, t ) at t = 0 , bottom-row: the profile u(x, t ) at t = 4 . The red dot marks one of the peaks in the top-row and shows its time-evolved location at the later time t = 4 in the bottom-row. . . . . . 19 viii 2.6 Sample of bounded and periodic traveling wave solutions (2.3.14) of the Klein-Gordon equation (2.3.1) with c = 32 , C1 = 35 and C2 = 45 in (2.3.14) .Top-row: the profile u(x, t ) at t = 0 , bottom-row: the profile u(x, t ) at t = 4 . The red dot marks one of the peaks in the top-row and shows its time-evolved location at the later time t = 4 in the bottom-row. . . . . . 20 2.7 Plot of the wavetrain solution (2.4.15) of (2.4.1) , with speed c = 2 . Top row: t = 0 , bottom row: t = 4 . The propagation of a peak is marked with red. The mean of oscillations is at level u = 3 . . . . . . . . . . . . . . . . . 25 2.8 Plot of the wavetrain solution (2.4.16) of (2.4.1) , with speed c = 4 . Top row: t = 0 , bottom row: t = 4 . The propagation of a peak is marked with red. The mean of oscillations is at level u = 3 . . . . . . . . . . . . . . . . . 25 2.9 Sinusoid cos (kx − ωt ) at t = 0 for different choices of wavenumbers k > 0.From left to right, top-row: k = 8 and k = 4 , middle-row: k = 2 , k = 1 ,bottom-row: k = 12 and k = 14 . The dashed red line indicates the endpoint of the spatial interval [0 , 2π]. . . . . . . . . . . . . . . . . . . . . . . . . . 29 2.10 The dispersion relation ω(k) = k3 for the Airy’s dispersive wave equation (2.4.1). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 3.1 Solitary wave solutions of Korteweg-de Vries equation with varying speeds c at time t = 0 . Top-to-bottom: c = 1 , c = 2 , c = 3 , c = 4 . Here we took x0 = 0 . 
As c increases the amplitude of the profile becomes larger and the width of the profile becomes smaller. . . . . . . . . . . . . . . . . . . . . . 43 3.2 Solitary wave solutions of Korteweg-de Vries equation with different speeds c at time t = 4 . Top-to-bottom: c = 1 , c = 2 , c = 3 , c = 4 . Here we took x0 = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 3.3 Left pane: V (f ; c, A ) with ∆ < 0, right pane: V (f ; c, A ) with ∆ = 0 .Solution trajectories are confined below the level V = E drawn in orange. Maximum value of a solution f is denoted by ˜f0. . . . . . . . . . . . . . . 46 ix 3.4 V (f ; c, A ) with ∆ > 0. Solution trajectories are confined below the level V = E drawn in orange, and the level E must be chosen in the range [V −, V +]. f3 < f 2 < f 1 denote the simple real roots of V (f ; c, A ) = E for a value of E ∈ (V −, V +). . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 3.5 Cnodial wave solution ucn (x, t ) driven by V (f ; c, A ) = E, where c = 10 and A = 2 , resulting in f1 = 4 , f2 = 2 , f3 = −1. Peak height is f1 = 4 ,through height is f2 = 2 . Peak-to-trough measurement is f1 − f2 = 2 .Here m = 25 . Top row: ucn (x, t ) at time t = 0 , bottom row: ucn (x, t ) at time t = 32 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 3.6 The case where E = V +(c, A ) so that f2 and f3 merge at the value f = h1,which is a saddle point. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 3.7 The solution ucn( x,t ) at t = 0 obtained from choosing different values of E ≤ V +(c, A ). As E approaches V +, m approaches 1. The values of V +(c; A) − E from top to bottom are: 10 −1, 10 −2, 10 −8, 10 −12 , and 10 −16 .Observe that the spatial period of oscillations tends to infinity as m → 1−. 60 xChapter 1 Introduction 1.1 Motivation The study of waves has been a fundamental area of research in mathematics for centuries, with applications ranging from acoustics and optics to fluid dynamics and electromag-netism. Wave formations are a ubiquitous phenomenon that can be found in a wide variety of natural and human-made systems, making them an important subject of study for researchers across a range of disciplines. In this Master’s thesis, we explore the mathematics behind waves, with a particular focus on the formation of waves propagating with constant speed and without changing their shapes. In more detail, this thesis presents a study of the different types of “traveling wave” solutions of a variety of linear and nonlinear wave models which govern wave propagation in different physical contexts. Broadly speaking, a wave is a disturbance or oscillation that propagates through space or a medium, accompanied by the transfer of energy without the overall transfer of matter. Waves can take many forms, including sound waves, light waves, water waves, and electromagnetic waves. They can be characterized by their amplitude (the height of 1the wave), wavelength (the distance between two consecutive peaks or troughs of the wave), and frequency (the number of waves passing a point per unit of time). A traveling wave is a type of wave that propagates through a medium or space, moving in a specific direction. As the wave moves, the energy it carries is transported from one point to another, without any net movement of the medium itself. The speed of a traveling wave in general depends on the properties of the medium, such as its density and elasticity, and the frequency and wavelength of the wave. 
The type of the traveling wave — transverse or longitudinal — depends on the direction of the oscillation of the particles. Transverse waves are created when a disturbance causes oscillations that are perpendicular to the direction of energy transfer. We may think of ripples on the surface of water or vibrations in a guitar string as examples. In both cases the particles move up or down while the propagation direction is horizontal. Longitudinal waves occur when oscillations are parallel to the direction of energy transfer. A good example of a longitudinal wave is sound.

Traveling waves are important in many areas of science and engineering, including telecommunications, seismology, and acoustics. Examples of such traveling waves include ocean waves, sound waves, and light waves. The spatial profile of a traveling wave in general consists of a series of peaks and troughs, describing a periodic disturbance of the surface. Unlike this scenario, its profile may also be localized in space, describing a pulse-like disturbance. Such a wave is called a “solitary wave”. A solitary traveling wave remarkably maintains its shape and speed as it propagates through a medium. Solitary waves can be found in many physical systems, including water waves, electromagnetic waves, and nonlinear optics.

This thesis investigates the existence of traveling solitary wave and traveling periodic wave solutions for a variety of wave models. We find that many linear models of wave propagation do not admit solitary waves, while they admit periodic waves; and we explain why solitary waves are not supported by the linear models in general and why nonlinear dynamics are necessary for the existence of solitary traveling waves. Having done that, we consider the Korteweg-de Vries equation, a famous nonlinear model that describes the behavior of certain surface water waves with long wavelength in shallow channels. We show the existence of solitary waves for the Korteweg-de Vries equation by explicitly constructing them and also provide a construction of nonlinear periodic traveling waves supported by this wave model.

1.2 Organization of the thesis and results

Chapter 2 considers linear partial differential equations that model various physical phenomena and seeks traveling wave solutions of these model equations. More specifically, we consider the wave equation, the Klein-Gordon equation, Airy’s dispersive wave equation, and the heat equation. We find that all of these equations except for the heat equation support traveling wave solutions that are bounded. We show that the wave equation supports traveling waves of arbitrary profiles, in particular, solitary traveling waves. We find, on the other hand, that the (bounded) traveling wave solutions of the Klein-Gordon equation and Airy’s dispersive wave equation necessarily have periodic profiles, ruling out the possibility of admitting solitary wave solutions. We then cover the theory of linear waves to explain the mechanism that prohibits solitary waves for a class of linear partial differential equations, introducing the notion of dispersion. Our findings in Chapter 2 show that if a linear partial differential equation exhibits dispersion, the presence of nonlinearity becomes essential in order to support solitary waves. Indeed, one may think of the propagation of a solitary wave as the manifestation of a delicate balance between the “dispersive forces” trying to spread the wave and “nonlinear forces” trying to amplify the wave.
Motivated by this, Chapter 3 considers the Korteweg-de Vries equation, a famous nonlinear equation modeling the propagation of shallow water waves, which remarkably has the aforementioned balance between nonlinear and dispersive effects. We construct the solitary wave solution of the Korteweg-de Vries equation, which dates back to 1834 and the observation of the “Great Wave of Translation” by John Scott Russell. We also show that the Korteweg-de Vries equation admits traveling waves that have periodic profiles. It turns out that these solutions are given in terms of Jacobi elliptic functions, as opposed to the periodic traveling waves given in terms of trigonometric functions in the case of the linear models covered in Chapter 2. We continue Chapter 3 by introducing the basic theory of Jacobi elliptic functions and constructing the periodic traveling wave solutions of the Korteweg-de Vries equation explicitly. We conclude Chapter 3 by recovering the solitary wave solution from the periodic waves in a limit where the period of the elliptic functions tends to infinity.

Chapter 2 Linear Wave Theory and Traveling Waves

2.1 Overview

In this section we will consider a variety of linear partial differential equations (PDEs) modeling wave propagation and seek traveling wave solutions of these linear wave models. When seeking such solutions for a given model equation governing an unknown u = u(x, t) in one spatial dimension, we will often use the traveling wave ansatz, that is, we demand that u(x, t) is a solution of the form

u(x, t) = f(x − ct) (2.1.1)

for a sufficiently smooth function f(ξ) of a single variable ξ ∈ R. Here t denotes time and x is the space variable; c is a constant that gives the velocity of the traveling wave. As can be seen above, our focus in this work is traveling waves with constant speed, which are waves propagating by translation. In particular, the traveling wave solutions we seek may be solitary waves or periodic traveling waves. We remind the reader that we have already introduced these notions in Chapter 1. We now provide proper definitions of these structures. Most of the material in this Chapter can be found in, for example, , , or .

Definition 2.1.1 (Solitary wave). A solitary wave is a traveling wave whose profile is localized in space. In other words, for a wave model governing a quantity u(x, t), a solitary wave is a traveling wave solution of the form

u(x, t) = f(x − ct), c ∈ R, (2.1.2)

for a sufficiently smooth function f, where f(ξ) and all of its derivatives tend to 0 as ξ → ±∞.

Definition 2.1.2 (Wavetrain). A wavetrain is a traveling wave whose profile is periodic in space. In other words, for a wave model governing a quantity u(x, t), a wavetrain is a traveling wave solution of the form

u(x, t) = f(x − ct), c ∈ R, (2.1.3)

for a sufficiently smooth function f, where f(ξ) is a periodic function of ξ.

2.2 The Wave equation

We begin our discussion of exact traveling wave solutions of linear PDEs with the wave equation, which is the second-order linear PDE given by

$u_{tt} - c^2 u_{xx} = 0$, $x \in \mathbb{R}$. (2.2.1)

Here c ∈ R is an arbitrary nonzero parameter. The equation (2.2.1) models vibrations of a string in one spatial dimension: t denotes the time variable, x denotes the variable for the position on the string, which is assumed to lie horizontally, and u = u(x, t) describes the deviation of the string at time t at position x from the equilibrium state u ≡ 0.
In two spatial dimensions with coordinates (x, y ), the wave equation reads utt − c2(uxx + uyy ) = 0 , (x, y ) ∈ R2, (2.2.2) and it models the vibrations of a thin membrane (assumed to be two-dimensional). Again u = u(x, y, t ) describes deviation at time t and position (x, y ) from the equilibrium state of the membrane u ≡ 0. In three spatial dimensions with coordinates (x, y, z ), the wave equation utt − c2(uxx + uyy + uzz ) = 0 , (x, y, z ) ∈ R3, (2.2.3) models the pressure vibrations of an acoustic wave in air (e.g., a sound wave). In this case u = u(x, y, z, t ) describes again the deviation from the equilibrium (e.g. from having “no sound”) at time t at position (x, y, z ).The wave equation in one-spatial dimension (2.2.1) was formulated by French mathe-matician and music theorist Jean le Rond D’Alembert (1717–1783) in 1743. He published his study of oscillations of a vibrating string using the wave equation in his book “Traité de Dynamique” . D’Alembert also formulated the general solution of the wave equation (2.2.1), which we will reproduce below. We will limit our treatment to the model (2.2.1) in one spatial dimension. It is customary to denote the second-order linear partial differential operator in (2.2.1) by □,namely, □ := ∂2 ∂t 2 − c2 ∂2 ∂x 2 . (2.2.4) 7Using this notation, (2.2.1) simply reads □u = 0 . The “ −” sign in front of the uxx term is important and cannot be changed by a (real-valued) scaling of the dependent or independent variables ∗.A straightforward, but important observation is given by the following proposition. Proposition 2.2.1 (A family of solutions of the wave equation) . Suppose that F (ξ) is a function of a single variable ξ that is twice differentiable with continuous derivatives, that is, F ∈ C2(R). Then u(x, t ) = F (x − ct ) and u(x, t ) = F (x + ct ) (2.2.5) define solutions of (2.2.1) . Proof. The proof is by direct computation. We set ξ = ξ(x, t ) = x + σct with σ = ±1.By chain rule we have ∂u ∂x = dF dξ · ∂ξ ∂x = dF dξ (2.2.6) since ∂ξ ∂x = 1 , hence ∂2u∂x 2 = d2Fdξ 2 · ∂ξ ∂x = d2Fdξ 2 . (2.2.7) Similarly, ∂u ∂t = dF dξ · ∂ξ ∂t = σc dF dξ (2.2.8) since ∂ξ ∂t = σc . Thus, ∂2u∂t 2 = σc d2Fdξ 2 · ∂ξ ∂t = ( σc )2 d2Fdξ 2 = c2 d2Fdξ 2 (2.2.9) ∗If we had the “ +” sign in front of the uxx term in (2.2.1) , the resulting equation utt +c2uxx = 0 would just be the Laplace’s equation in R2in coordinates (t, cx ). 8since σ2 = 1 . From (2.2.7) and (2.2.9), we have ∂2u∂t 2 − c2 ∂2u∂x 2 = c2 d2Fdξ 2 − c2 d2Fdξ 2 = 0 . (2.2.10) This finishes the proof as the calculation captures both cases ξ = x − ct and ξ = x + ct through the use of σ = ±1. ■ In fact, we will now show that all solutions of the wave equation (2.2.1) are of the form u(x, t ) = A1F (x − ct ) + A2G(x + ct ) (2.2.11) for functions F, G ∈ C2(R) and arbitrary constants A1, A 2 ∈ R. 2.2.1 D’Alembert’s solution The wave operator (2.2.4) can be factored as a product of two first-order operators: □ = ( ∂∂t − c ∂∂x ) ( ∂∂t + c ∂∂x ) . (2.2.12) These operators govern transport with characteristic directions (1 , ±c) in the (t, x )-plane. Thus, it makes sense to introduce a change of variables (x, t ) 7 → (ζ, η ), where ζ := x + ct, η := x − ct. (2.2.13) Then, by the Chain Rule, we have ∂∂x = ∂∂ζ · ζx + ∂∂η ηx = ∂∂ζ + ∂∂η , (2.2.14) ∂∂t = ∂∂ζ · ζt + ∂∂η ηt = c ( ∂∂ζ − ∂∂η ) . (2.2.15) 9Then, by adding and subtracting the two expressions from each other suitably, we find that ∂∂t + c ∂∂x = 2 c ∂∂ζ (2.2.16) ∂∂t − c ∂∂x = −2c ∂∂η , (2.2.17) and this gives □ = −4c2 ∂2 ∂ζ∂η . 
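As a quick sanity check of Proposition 2.2.1 (not part of the original thesis), one can verify symbolically that an arbitrary twice-differentiable profile F propagated to the right or to the left solves the wave equation. The snippet below uses Python’s sympy, with F left as an undefined function.

```python
import sympy as sp

x, t, c = sp.symbols('x t c', real=True)
F = sp.Function('F')

# Check that u(x, t) = F(x - c t) and u(x, t) = F(x + c t)
# both satisfy u_tt - c^2 u_xx = 0 for an arbitrary C^2 profile F.
for sigma in (-1, 1):
    u = F(x + sigma * c * t)
    residual = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
    print(sigma, sp.simplify(residual))   # prints 0 in both cases
```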
(2.2.18) Therefore, in the new coordinates (ζ, η ), since c̸ = 0 , the wave equation (2.2.1) becomes wζη = 0 , (2.2.19) where w(ζ, η ) = u(x, t ). This implies that w(ζ, η ) = φ(ζ) + ψ(η) (2.2.20) for sufficiently smooth functions φ and ψ. Then we have that u(x, t ) = φ(x + ct ) + ψ(x − ct ) (2.2.21) solves the wave equation (2.2.1) and this is already in the form (2.2.11) . We now show how the arbitrary functions φ and ψ are determined from data in the context of an initial-value problem for (2.2.1). Consider utt − c2uxx = 0 , x ∈ R, t > 0,u(x, 0) = g(x), x ∈ R,ut(x, 0) = h(x), x ∈ R, (2.2.22) 10 for given functions g and h. From (2.2.21) we have u(x, 0) = φ(x) + ψ(x) (2.2.23) ut(x, 0) = cφ ′(x) − cψ ′(x) (2.2.24) which means that φ(x) + ψ(x) = g(x) (2.2.25) cφ ′(x) − cψ ′(x) = h(x). (2.2.26) Differentiating the first identity with respect to x and dividing by c in the the second yields the following two identities φ′(x) + ψ′(x) = g′(x), (2.2.27) φ′(x) − ψ′(x) = 1 c h(x). (2.2.28) These immediately give us φ′(x) = 12 g′(x) + 12c h(x), (2.2.29) ψ′(x) = 12 g′(x) − 12c h(x), (2.2.30) hence φ(x + ct ) = 12 g(x + ct ) + 12c ∫x+ct 0 h(s) ds, (2.2.31) ψ(x − ct ) = 12 g(x − ct ) − 12c ∫x−ct 0 h(s) ds. (2.2.32) 11 Recalling (2.2.21), we obtain u(x, t ) = 12 (g(x + ct ) + g(x − ct )) + 12c ∫x+ct x−ct h(s) ds. (2.2.33) This is known as D’Alembert’s solution formula for the initial value problem (2.2.22) for the wave equation in one spatial dimension. Note that for given functions g and h, the solution (2.2.33) is of the form u(x, t ) = F (x + ct ) + G(x − ct ) for functions F and G related to g and h. 2.2.2 Traveling waves Proposition 2.2.1 tells us that the wave equation (2.2.1) admits traveling wave solutions such that they are of arbitrary shape or form (choosing arbitrary function F in Proposi-tion 2.2.1) and they may travel in either direction (towards x = + ∞ or towards x = −∞ ). Indeed, let c be an arbitrary positive real number. Then for any function F ∈ C2(R), u(x, t ) = F (x − ct ) is a traveling wave with profile F propagating towards x = + ∞ and u(x, t ) = F (x + ct ) is a traveling wave with profile F propagating towards x = −∞ .We may consider, for example, the following solitary wave solution of (2.2.1) obtained by taking F (ξ) = sech( ξ): u(x, t ) = sech( x − ct ), (2.2.34) which propagates right. Time evolution of this solitary wave is provided in Figure 2.1. We may also consider the following wavetrain solution of (2.2.1) obtained by taking F (ξ) = cos( ξ) in Proposition 2.2.1: u(x, t ) = cos( x + ct ), (2.2.35) which propagates left. Time evolution of this wavetrain is provided in Figure 2.2. 12 -20 -10 0 10 20 -1.0 -0.5 0.0 0.5 1.0 -20 -10 010 20 -1.0 -0.5 0.0 0.5 1.0 Figure 2.1: Time evolution of u(x, t ) = sech( x − ct ), a solitary wave solution of the wave equation (2.2.1). Here c = 1 . Top pane: t = 0 , bottom pane: t = 8 .-20 -10 0 10 20 -1.0 -0.5 0.0 0.5 1.0 -20 -10 010 20 -1.0 -0.5 0.0 0.5 1.0 Figure 2.2: Time evolution of u(x, t ) = cos( x + ct ), a solitary wave solution of the wave equation (2.2.1). Here c = 1 . Top pane: t = 0 , bottom pane: t = 8 . See how to position of the peak marked with red changes. 13 2.2.3 Standing waves We begin with a definition. Definition 2.2.2 (Standing Wave) . A standing wave , also referred to as a stationary wave , is a type of wave that oscillates over time, but its maximum amplitude remains in a fixed position and does not propagate through space. 
We now try to answer the following question: Does the wave equation (2.2.1) admit a standing wave solution? Note that the speed of propagation of traveling waves for (2.2.1) is c and this parameter is intrinsic to the PDE (2.2.1) . Thus, we cannot simply obtain a standing wave solution of the wave equation by simply taking c = 0 as (2.2.1) would no longer be the wave equation if c = 0 . However, we can invoke Proposition 2.2.1 again and consider adding the same profiles F (ξ) with opposite velocities, that is, with the same speed but with opposite directions. Consider the periodic profile F (ξ) = cos (ξ), for instance, and the resulting solution of (2.2.1) given by u(x, t ) = cos( x − ct ) + cos( x + ct ). (2.2.36) This is a (linear) superposition of two periodic traveling waves that have the same speeds, amplitudes, and profiles. The periodic traveling waves coincide at time t = 0 and later times, whenever t = 0 mod (2 π). Figure 2.3 shows the time evolution of (2.2.36) under (2.2.1) . Notice how the locations of the peaks do not propagate in space and the motion solely consists of oscillations perpendicular to the horizontal axis. 14 -20 -10 0 10 20 -2 -1012 Figure 2.3: A standing wave solution u(x, t ) = cos (x − ct ) + cos (x + ct ) of (2.2.1) with c = 1 at different times t = kπ 16 , k ∈ Z. Opacity values of the graphs are correlated with the amplitude at a given time. 2.3 A linear modification of the wave equation: Klein-Gordon equation We now consider the Klein-Gordon equation utt − uxx + u = 0 , x ∈ R. (2.3.1) This is a so-called relativistic wave equation and it is used in mainly particle physics and quantum mechanics. It was proposed in 1926 by two physicists Water Gordon (1893–1939) and Oskar Klein (1894–1977) in [10, 13]. 2.3.1 Traveling waves We seek traveling wave solutions of (2.3.1), that is, solutions of the form u(x, t ) = f (ξ), ξ := x − ct, (2.3.2) for a constant c ∈ R.15 Noting that ∂ξ ∂t = −c and ∂ξ ∂x = 1 , we substitute the ansatz (2.3.2) in (2.3.1) and use the chain rule to obtain (after some algebra) the ordinary differential equation (c2 − 1) f ′′ (ξ) + f (ξ) = 0 (2.3.3) for the unknown function f and the independent variable ξ. First, it is clear that (2.3.3) cannot have a nonzero constant solution f (ξ) ≡ constant . This means that the traveling wave ansatz does not produce a (nonzero) constant solution of (2.3.1) . Another property we immediately notice from (2.3.3) is that f (ξ) ≡ 0 is the only solution to (2.3.3) if c2 = 1 , which yields the trivial solution u ≡ 0 for (2.3.1) . Thus, we have our first result for the Klein-Gordon equation. Proposition 2.3.1. The Klein-Gordon equation (2.3.1) does not admit a traveling wave solution propagating with unit speed ( c = 1 ) in either direction (left or right) . Accordingly, we will assume c2̸ = 1 for the remainder of the discussion in this section and seek f (ξ) that is not a constant. Under these assumptions, we rewrite (2.3.3) in the form f ′′ (ξ) = ( 11 − c2 ) f (ξ), (2.3.4) and consider the following two cases c2 < 1 and c2 > 1 separately. The reason for the separate treatment is because the characteristic equation associated with (2.3.4) is given by ζ2 − 11 − c2 = 0 , (2.3.5) which has two real simple roots if c2 < 1 and complex-conjugate pair of roots if c2 > 1. The case c2 < 1. 
In this case, the constant factor on right hand side of (2.3.4) is positive, therefore a general solution of (2.3.4) is given by f (ξ) = C1e 1√1−c2ξ C2e− 1√1−c2 ξ , (2.3.6) 16 -20 -10 0 10 20 0200 400 600 800 1000 -20 -10 0 10 20 0200 400 600 800 1000 -20 -10 0 10 20 0200 400 600 800 1000 -20 -10 0 10 20 0200 400 600 800 1000 Figure 2.4: Sample of unbounded traveling wave solutions (2.3.9) of the Klein-Gordon equation (2.3.1) with c = 12 in (2.3.9). Top row: C1 = 1 and C2 = 0 , bottom row: C1 = 1 and C2 = 20 .Left column shows the profile u(x, t ) at t = 0 , right column shows the profile at t = 10 . for arbitrary constants C1, C 2 ∈ R. Since the constant factors in the exponents are real-valued with opposite signs, f (ξ) is an unbounded function. In more detail,we have |f (ξ)| → +∞, as ξ → +∞, if C1̸ = 0 (2.3.7) and |f (ξ)| → +∞, as ξ → −∞ , if C2̸ = 0 . (2.3.8) Since we cannot allow for C1 = C2 = 0 to have a nontrivial solution f , any nontrivial solution of (2.3.4) is indeed an unbounded function in this case. Thus, we find that when c2 < 1, the traveling wave solutions of the Klein-Gordon equation (2.3.1) consist of solutions of the form u(x, t ) = C1e 1 √1−c2 (x−ct ) C2e− 1√1−c2 (x−ct ) . (2.3.9) Any such solution u(x, t ) is an unbounded function of x for any given t ∈ R. A sample of such solutions are plotted in Figure 2.4. The case c2 > 1. In this case, the constant factor on right hand side of (2.3.4) is negative. Therefore, a general solution of (2.3.4) involves complex-valued exponential 17 functions: f (ξ) = C1ei 1√c2−1 ξ C2e−i 1√c2−1 ξ , (2.3.10) where C1, C 2 ∈ C are arbitrary constants as before, but this time they are generalized to be complex-valued. Since (2.3.4) has real-valued coefficients, the real and imaginary parts of any of these complex exponential solutions also define solutions (2.3.4) as they can be obtained by taking appropriate linear combinations of two complex exponential solutions in (2.3.4): ℜ{ f (ξ)} = 12 f (ξ) + 12 f (ξ), (2.3.11) ℑ{ f (ξ)} = 12i f (ξ) − 12i f (ξ). (2.3.12) Using Euler’s formula eiθ = cos (θ) + i sin (θ), we obtain a real-valued general solution built out of {ℜ{ f (ξ)}, ℑ{ f (ξ)}} : f (ξ) = C1 cos ( 1 √c2 − 1 ξ ) C2 sin ( 1 √c2 − 1 ξ ) . (2.3.13) The solutions { cos (1√c2−1 ξ ) , sin (1√c2−1 ξ )} indeed form a set of fundamental solutions to (2.3.4) since their Wronskian is identically equal to 1√c2−1̸ = 0 . Thus, we find that when c2 > 1, the traveling wave solutions of the Klein-Gordon equation (2.3.1) consist of solutions of the form u(x, t ) = C1 cos ( 1 √c2 − 1 (x − ct ) ) C2 sin ( 1 √c2 − 1 (x − ct ) ) (2.3.14) for arbitrary constants C1, C 2 ∈ R. We note that, in contrast with the case c2 < 1, the family (2.3.14) of solutions consists of bounded traveling waves. However, none of these solutions are spatially localized: For any given C1, C 2 ∈ R and c ∈ R with c2 > 1, u(x, t ) is a periodic function of x with period 2√c2 − 1π at any time t. Such a solution is plotted in Figure 2.5. 18 -20 -10 0 10 20 -1.0 -0.5 0.0 0.5 1.0 -20 -10 010 20 -1.0 -0.5 0.0 0.5 1.0 Figure 2.5: Sample of bounded and periodic traveling wave solutions (2.3.14) of the Klein-Gordon equation (2.3.1) with c = 32 , C1 = 35 and C2 = 45 in (2.3.14). Top-row: the profile u(x, t ) at t = 0 , bottom-row: the profile u(x, t ) at t = 4 . The red dot marks one of the peaks in the top-row and shows its time-evolved location at the later time t = 4 in the bottom-row. We note that there is no sign condition on the speed parameter c as it is only required to satisfy c2 > 1. 
This means that the Klein-Gordon equation admits traveling wave solutions that propagate in either direction. The wavetrain (2.3.14) propagates right (in the direction of x = + ∞) if c > 1 and it propagates left (in the direction of x = −∞ ) if c < −1.We now study the effect of varying the value of the speed parameter c while maintaining c2 > 1. For larger values of |c|, the wavetrain (2.3.14) propagates faster but the period of spatial oscillations given by 2√c2 − 1π becomes larger. In other words, faster propagating wavetrains governed by (2.3.14) have less oscillatory profiles. We will make these notions more precise in Section 2.6. We give a plot of a wavetrain with speed parameter c = 3 in Figure 2.6. The reader may compare it with the wavetrain with c = 32 plotted in Figure 2.6 to see the difference in profiles and speeds of propagation. 19 -20 -10 0 10 20 -1.0 -0.5 0.0 0.5 1.0 -20 -10 010 20 -1.0 -0.5 0.0 0.5 1.0 Figure 2.6: Sample of bounded and periodic traveling wave solutions (2.3.14) of the Klein-Gordon equation (2.3.1) with c = 32 , C1 = 35 and C2 = 45 in (2.3.14). Top-row: the profile u(x, t ) at t = 0 , bottom-row: the profile u(x, t ) at t = 4 . The red dot marks one of the peaks in the top-row and shows its time-evolved location at the later time t = 4 in the bottom-row. We now conclude our efforts in finding traveling solutions of the Klein-Gordon equation (2.3.1) . Our procedure until now exhausted every single possibility resulting from the traveling wave ansatz (2.3.2) . None of the traveling wave solutions we have found have profiles that are localized in space: given any time t, a traveling wave solution u(x, t ) (either the one in (2.3.9) or the one in (2.3.9) ) does not decay to 0 as |x| → ∞ . Therefore, there are no solitary traveling wave solutions of (2.3.1) . We summarize our findings in the following proposition. Proposition 2.3.2. The Klein-Gordon equation (2.3.1) does not admit any solitary traveling wave solutions. Bounded traveling wave solutions propagating with speed c exist only if |c| > 1, in which case the solution is a periodic traveling wave, called a wavetrain , given by (2.3.14) .Faster traveling wavetrains have less oscillatory profiles. 20 Remark 2.3.3 (Comparison with the wave models studied previously) . Proposition 2.3.2 presents a contrast with the traveling wave solutions of the wave equation (2.2.1) . The only bounded traveling wave solutions of the Klein-Gordon equation (2.3.1) are wavetrains. Moreover, the speed of the wavetrain is correlated with the period of spatial oscillations. Thus, the shape of the profile depends on the speed for wavetrain solutions of the Klein-Gordon equation, unlike the case for the wave equation. 2.4 Airy’s dispersive wave equation We now consider Airy’s dispersive wave equation ut − uxxx = 0 . (2.4.1) This is a linear PDE mainly used in fluid dynamics. It is named after George Biddell Airy (1801–1892) who formulated the equation in 1841 in his long paper on the linear theory of water waves. The derivation of the equation (2.4.1) from a water wave equation is based on the assumption that the amplitude of the wave is small compared to the depth and wavelength of the wave. See, for example, for a study of the equation (2.4.1) . This equation is sometimes also referred to as the “linear Korteweg-de Vries” equation (See Chapter 3). 2.4.1 Traveling waves We again seek traveling wave solutions of the form u(x, t ) = f (ξ), ξ := x − ct, (2.4.2) 21 for a constant c ∈ R. 
Noting that ∂ξ ∂t = −c and ∂3ξ∂x 3 = 1 , we substitute the ansatz (2.4.2) in (2.4.1) and obtain the ordinary differential equation −cf ′(ξ) − f ′′′ (ξ) = 0 (2.4.3) for the unknown function f and the independent variable ξ. We may call g(ξ) := f ′(ξ) and work with the second-order equation g′′ (ξ) + cg (ξ) = 0 (2.4.4) instead of (2.4.3). This equation has the characteristic equation ζ2 + c = 0 , (2.4.5) so there are two cases to consider: c < 0 and c > 0. The case c < 0. In this case the roots of (2.4.5) are ζ = ±√|c| and a general solution to (2.4.4) is given by g(ξ) = C1e √|c|ξ + C2e−√|c|ξ (2.4.6) for arbitrary constants C1, C 2 ∈ R. We integrate this to obtain a general solution of (2.4.3), given by f (ξ) = C1e √|c|ξ + C2e−√|c|ξ + C3, (2.4.7) for arbitrary constants C1, C 2, C 3 ∈ R. We note that unlike the Klein-Gordon equation (2.3.1) , Airy’s dispersive wave equation (2.4.1) admits constant solution u ≡ C for any constant C ∈ R. The corresponding traveling wave solutions of (2.4.1) are given by u(x, t ) = C1e √|c|(x−ct ) + C2e−√|c|(x−ct ) + C3. (2.4.8) 22 Just as the case for Klein-Gordon equation given in (2.3.9) , none of the nontrivial (not constant, so C1̸ = 0 or C2̸ = 0 ) solutions are bounded. Stating in terms of f (ξ), we have |f (ξ)| → +∞, as ξ → +∞, if C1̸ = 0 (2.4.9) and |f (ξ)| → +∞, as ξ → −∞ , if C2̸ = 0 . (2.4.10) The case c > 0. In this case the roots of (2.4.5) are ζ = ±i√c and a general solution to (2.4.4) is given by g(ξ) = C1ei√cξ + C2e−i√cξ (2.4.11) for arbitrary constants C1, C 2 ∈ R. As in Section 2.3, we obtain a real-valued general solution by taking a linear combination of the real and imaginary parts of the complex exponential ei√cξ : g(ξ) = C1 cos( √cξ ) + C2 sin( √cξ ). (2.4.12) We can now integrate this to obtain a general solution of (2.4.3), given by f (ξ) = C1 cos( √cξ ) + C2 sin( √cξ ) + C3, (2.4.13) for arbitrary constants C1, C 2, C 3 ∈ R. Here we absorbed possible minus signs and c-dependent constant factors arising from integration in the arbitrary constants C1 and C2. Note that all of the solutions (2.4.13) are bounded as |ξ| → ∞ . The corresponding traveling wave solutions of (2.4.1) are given by u(x, t ) = C1 cos( √c(x − ct )) + C2 sin( √c(x − ct )) + C3. (2.4.14) for arbitrary constants C1, C 2, C 3 ∈ R.23 Our analysis exhausted every possibility of obtaining a traveling wave solution to (2.4.1) starting with the traveling wave ansatz (2.4.2) . We found that all of the bounded traveling wave solutions of (2.4.1) are wavetrains. Therefore, Airy’s dispersive wave equation does not admit any solitary traveling wave solutions. Moreover, bounded traveling wave solutions exist if and only if their velocity c is positive , which means that the model (2.4.1) only allows for traveling waves propagating right, towards x = + ∞. This makes Airy’s dispersive wave equation a unidirectional (also called “one-way”) model. Note that this is in contrast with the Klein-Gordon equation studied in Section 2.3. Another feature of the wavetrain solution (2.4.14) is the presence of an arbitrary additive constant C3 ∈ R, resulting from the fact that (2.4.1) is a third-order PDE. An implication of this is that we can choose any real number to be the mean of oscillation for the wavetrain solutions (2.4.14). For instance, u(x, t ) = 3 + 35 cos( √2( x − 2t)) − 45 sin( √2( x − 2t)) (2.4.15) is a wavetrain solution of (2.4.1) with velocity c = 2 and a mean of oscillation 3,corresponding to taking C3 = 3 in formula (2.4.14) along with C1 = 25 and C2 = − 45 . 
We give a plot of the solution (2.4.15) in Figure 2.7. We now study the effect of varying $c > 0$ on the profile of the wavetrain solution (2.4.14). It is clear from the formula (2.4.14) that increasing $c > 0$ makes $u(x,t)$ more oscillatory in $x$ for any given $t$, shortening the spatial period. To illustrate the effect of increasing $c > 0$, we double the speed $c = 2$ in the solution (2.4.15) and plot $u(x,t) = 3 + \tfrac{3}{5}\cos(2(x - 4t)) - \tfrac{4}{5}\sin(2(x - 4t))$ (2.4.16), which has speed $c = 4$, in Figure 2.8. We summarize our findings in the following proposition.

Figure 2.7: Plot of the wavetrain solution (2.4.15) of (2.4.1), with speed $c = 2$. Top row: $t = 0$; bottom row: $t = 4$. The propagation of a peak is marked with red. The mean of oscillations is at level $u = 3$.

Figure 2.8: Plot of the wavetrain solution (2.4.16) of (2.4.1), with speed $c = 4$. Top row: $t = 0$; bottom row: $t = 4$. The propagation of a peak is marked with red. The mean of oscillations is at level $u = 3$.

Proposition 2.4.1. Airy's dispersive wave equation (the linear Korteweg-de Vries equation) (2.4.1) does not admit any solitary traveling wave solutions. Bounded traveling wave solutions propagating with speed $c$ exist only if $c > 0$, in which case the solution is a periodic traveling wave, called a wavetrain, given by (2.4.14). Wavetrains supported by (2.4.1) can only propagate right and can have arbitrary means of oscillation. Faster traveling wavetrains have more oscillatory profiles.

Remark 2.4.2 (Comparison with the wave models studied previously). Proposition 2.4.1 presents a contrast with the traveling wave solutions of (2.2.1) and of (2.3.1). While the only bounded traveling wave solutions of both the Klein-Gordon equation (2.3.1) and Airy's dispersive wave equation (2.4.1) are wavetrains, the wavetrains of (2.4.1) are only allowed to propagate right and their speed $c > 0$ can be taken as small as desired. In contrast, wavetrains of the wave equation (2.2.1) can travel in both directions with arbitrary speeds, and wavetrains of the Klein-Gordon equation (2.3.1) can travel in both directions with speed $|c| > 1$. Faster traveling wavetrains of (2.4.1) have more oscillatory profiles, whereas faster traveling wavetrains of (2.3.1) have less oscillatory profiles. For the wave equation (2.2.1), the speed and the shape of the profile are not correlated.

2.5 The Heat equation

We now consider the heat equation $u_t - u_{xx} = 0$, $x \in \mathbb{R}$ (2.5.1), which governs the diffusion of heat (or of a substance such as a chemical) along a one-dimensional rod. The heat equation was first formulated by Joseph Fourier in the early 19th century for the purpose of modeling how heat diffuses through a given region or body. The reason we consider this equation is that our work until this point may lead the reader to think that any linear partial differential equation admits wavetrain solutions. This is far from the truth, as we will see below.

2.5.1 Traveling waves

We seek traveling wave solutions of (2.5.1), that is, solutions of the form $u(x,t) = f(\xi)$, $\xi := x - ct$ (2.5.2), for a constant $c \in \mathbb{R}$. Noting that $\partial\xi/\partial t = -c$ and $\partial\xi/\partial x = 1$, we substitute (2.5.2) in (2.5.1) and obtain $f''(\xi) + cf'(\xi) = 0$ (2.5.3). This is a second-order equation with characteristic equation $\zeta^2 + c\zeta = 0$ (2.5.4), whose roots are given by
$\zeta = 0$, $\zeta = -c$ (2.5.5). Therefore, a general solution of (2.5.3) is given by $f(\xi) = C_1 + C_2 e^{-c\xi}$ (2.5.6) for arbitrary constants $C_1, C_2 \in \mathbb{R}$. It is clear that $f(\xi)$ cannot be a nonconstant periodic function of $\xi$ for any value of $c \in \mathbb{R}$. So the heat equation (2.5.1) does not support wavetrains. Moreover, $|f(\xi)| \to +\infty$ as $\xi \to +\infty$ if $c < 0$ and $C_2 \neq 0$, and $|f(\xi)| \to +\infty$ as $\xi \to -\infty$ if $c > 0$ and $C_2 \neq 0$. (When $c = 0$, the double root $\zeta = 0$ gives instead $f(\xi) = C_1 + C_2\xi$, which is likewise unbounded unless $C_2 = 0$.) Therefore, $f(\xi)$ is an unbounded solution whenever $C_2 \neq 0$. If we seek a solitary wave solution, we are forced to set $C_2 = 0$, but that just leaves us with constant solutions. Thus, we arrive at the following result.

Proposition 2.5.1. The heat equation (2.5.1) does not admit nontrivial periodic or solitary traveling wave solutions. The only bounded traveling wave solutions are constants.

2.6 Linear wave theory and dispersion

Recall that the traveling wave solutions of the PDEs studied in Section 2.3 and Section 2.4 ended up being spatially periodic functions, hence not spatially localized: not solitary. In this section we explain the mechanism that prohibits solitary traveling waves for linear model equations such as (2.3.1) and (2.4.1). We follow and expand material from the references listed in the bibliography. We consider a plane wave of the form $u(x,t) = A e^{i(kx - \omega t)}$ (2.6.1) for constants $k \in \mathbb{R}$ and $\omega \in \mathbb{C}$. The amplitude can be taken to be $A = 1$ without loss of generality since the equations considered in this chapter are linear and homogeneous. For fixed $t$, the plane wave in (2.6.1) is a periodic function of $x$ with period $2\pi/|k|$; therefore the spatial frequency of oscillations is $|k|/(2\pi)$.

Definition 2.6.1 (Wavenumber). The absolute value $|k|$ of the real-valued parameter $k$ in the plane wave given in (2.6.1) is called the wavenumber.

To interpret the meaning of the wavenumber we can, for the moment, look at the real part of the plane wave (2.6.1) with $A = 1$: $\Re\{u(x,t)\} = \cos(kx - \omega t)$ (2.6.2).

Figure 2.9: Sinusoid $\cos(kx - \omega t)$ at $t = 0$ for different choices of wavenumbers $k > 0$. From left to right, top row: $k = 8$ and $k = 4$; middle row: $k = 2$ and $k = 1$; bottom row: $k = 1/2$ and $k = 1/4$. The dashed red line indicates the endpoint of the spatial interval $[0, 2\pi]$.

This shows that $|k|$ is precisely the number of full-period (spatial) oscillations of the cosine that fit in the spatial interval $[0, 2\pi]$ for fixed $t$. The sign of $k$, denoted by $\operatorname{sgn}(k)$, only affects the direction of propagation of the plane wave. See Figure 2.9 for the effect of varying $k$ in (2.6.2). A plane wave of the form (2.6.1) is sometimes called a monochromatic plane wave since it contains only one spatial frequency (wavenumber). Since the spatial period $2\pi/|k|$ is the length of one full period of spatial oscillation, it is given a special name.

Definition 2.6.2 (Wavelength). Given a monochromatic plane wave $e^{i(kx - \omega t)}$, the number $\lambda := 2\pi/|k|$ is called the wavelength.

Note that plane waves of large wavelength correspond to those with small wavenumbers. We now explain the meaning of the parameter $\omega$ in (2.6.1).

Definition 2.6.3 (Frequency). Given a monochromatic plane wave $e^{i(kx - \omega t)}$, the number $\omega$ is called the frequency.
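The following short numerical sketch (an editorial illustration, not part of the original text) makes Definitions 2.6.1–2.6.3 concrete for the real plane wave $\cos(kx - \omega t)$ with the sample values $k = 4$ and $\omega = 2$:

```python
import numpy as np

# Sketch: spatial period (wavelength) and temporal period of cos(k*x - w*t),
# with sample values k = 4, w = 2 (any nonzero real values would do).
k, w = 4.0, 2.0
wavelength = 2 * np.pi / abs(k)          # Definition 2.6.2
temporal_period = 2 * np.pi / abs(w)     # time between successive crests at fixed x

x = np.linspace(0.0, 2 * np.pi, 2001)
t = 0.3                                  # an arbitrary fixed time

snapshot = np.cos(k * x - w * t)
print(np.allclose(snapshot, np.cos(k * (x + wavelength) - w * t)))        # True
print(np.allclose(snapshot, np.cos(k * x - w * (t + temporal_period))))   # True

# |k| full spatial oscillations fit into [0, 2*pi], as in Figure 2.9:
downward_crossings = np.sum((snapshot[:-1] > 0) & (snapshot[1:] <= 0))
print(downward_crossings)                # 4, i.e. |k|
```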
The frequency has the following interpretation when ω ∈ R. If we place a buoy at a fixed position in space (at fixed x), ω > 0 is the number of full period temporal 29 oscillations the buoy reads in during 0 ≤ t < 2π. The sign of ω describes the direction of propagation in this case. We suppose for a moment that ω is real valued. Observe that the monochromatic plane wave (2.6.1) can be written as a traveling wave ei(kx −ωt ) = eik (x− ωk t) (2.6.3) by just factoring out the wavenumber k in the exponent. Definition 2.6.4. For ω ∈ R, the ratio vp = ωk (2.6.4) is called the phase velocity of the plane wave (2.6.3) . As the reader may suspect, a given linear PDE does not admit an arbitrary monochro-matic plane wave solution ei(kx −ωt ). The structure of the linear differential operator dictates a relation between ω and k. In other words, for given suitable k ∈ R, the frequency ω depends on k: ω = ω(k). This relationship is known as the dispersion relation of the linear wave model understudy. To illustrate this, we reconsider Airy’s dispersive wave equation (2.4.1) , and suppose that k > 0. Demanding that (2.6.1) is a solution of (2.4.1) yields the condition (−iω )ei(kx −ωt ) − (ik )3ei(kx −ωt ) = 0 , (2.6.5) which is simplified as −iωe i(kx −ωt ) + ik 3ei(kx −ωt ) = 0 . (2.6.6) 30 0 1 2 3 4012345Figure 2.10: The dispersion relation ω(k) = k3 for the Airy’s dispersive wave equation (2.4.1) . After dividing by the common exponential term and canceling out the factor i, we solve for ω to find that ω(k) = k3. (2.6.7) Thus, a monochromatic plane wave solution u(x, t ) = ei(kx −ωt ) of Airy’s dispersive wave equation with wavenumber |k| = k > 0 has the phase velocity vp(k) = ω(k) k = k3 k = k2. (2.6.8) See Figure 2.10 for a plot of vp(k) for k > 0. This tells us the following about the Airy’s dispersive wave equation (2.4.1) : Plane waves with different wavenumbers |k| have different phase velocities vp(k). This property is known as dispersion . Definition 2.6.5 (Dispersion) . Let k denote the wavenumber and ω = ω(k) denote the frequency of monochromatic plane wave solutions ei(kx −ωt ) of a linear partial differential equation. The linear partial differential equation is said to be a dispersive equation if the function k 7 → ω(k) |k| (2.6.9) is real-valued and not constant. 31 As we shall see in more detail below, presence of dispersion is precisely why the Klein-Gordon equation did not admit any solitary traveling wave solutions. Remark 2.6.6. Some sources augment the properties in Definition 2.6.5 with the condition that ω(k) |k| → +∞ as |k| → +∞ or that ω(k) |k| is monotonic in k. For our purposes the conditions in Definition 2.6.5 suffice since they are enough to imply that waves with different wavenumbers have in general different phase velocities. But, we will say more about the case vp = ω(k) |k| is monotone increasing in the next subsection. 2.6.1 Superposition perspective: the Fourier transform In this section we provide an explanation as to why presence of dispersion rules out existence of solitary wave solutions for a linear PDE. Our point of view will be a superposition of plane waves (a wave packet) provided by the Fourier integral transform. We first define the Fourier transform ˆf of a sufficiently localized function f : R → R by ˆf (k) := 1 √2π ∫∞−∞ f (x)e−ikx dx. (2.6.10) and the inverse Fourier transform of f by ˇf (y) := 1 √2π ∫∞−∞ f (k)eiky dk. 
(2.6.11) For our purposes, “sufficiently localized” means that f belongs to the Schwartz space of functions S(R), which consists of all smooth (infinitely differentiable) functions f : R → C such that the function itself together with all its derivatives decay at a rate faster than any polynomial rate at infinity. The Fourier transform is a continuous and 1-to-1 map from S(R) onto S(R). In this case, the inverse Fourier transform of the Fourier transform ˆf of f gives us back f , that is, we have f (x) = 1 √2π ∫∞−∞ ˆf (k)eikx dk. (2.6.12) 32 We now reconsider the linear model (2.4.1) and suppose that we are given an initial condition u0(x) = u(x, 0) in S(R) that has a pulse-like profile. We want to understand if it is possible for u0(x) to evolve under the PDE ut − uxxx = 0 while maintaining its pulse-like shape (propagating as a solitary wave). We let ˆu0 denote the Fourier transform of u0: ˆu0(k) = 1 √2π ∫∞−∞ u0(x)e−ikx dx. (2.6.13) Then by Fourier inversion (2.6.12), we have u0(x) = 1 √2π ∫∞−∞ ˆu0(k)eikx dk. (2.6.14) We for the purposes of this section, we should view this integral as a continuous sum of oscillatory exponential functions eikx with amplitude u0(k) for each k, as k ranges over real numbers. We let ˆu(k, t ) denote the Fourier transform (taken in x) of the solution u(x, t ) evolving from u(x, 0) = u0(x) according to (2.4.1) . Then, by the Fourier inversion (2.6.12) we have u(x, t ) = 1 √2π ∫∞−∞ ˆu(k, t )eikx dk. (2.6.15) So, if we could find out how the Fourier transform ˆu(k, t ) evolves in t as u(x, t ) evolves according to (2.4.1) in t, we would have a formula for the solution u(x, t ) by (2.6.15) . To see why this is true, we refer to [17, Chapter 12.3] and note that ( ∂u ∂t ) (k, t ) = ∂∂t ˆu(k, t ), ( ∂3u∂x 3 ) (k, t ) = ( ik )3 ˆu(k, t ). (2.6.16) Therefore, as long as u satisfies (2.4.1), ˆu satisfies for each k ∈ R fixed: ∂∂t ˆu(k, t ) − (ik )3 ˆu(k, t ) = 0 , (2.6.17) 33 which becomes ∂∂t ˆu(k, t ) = −ik 3 ˆu(k, t ). (2.6.18) This ordinary differential equation (for each k ∈ R fixed) with ˆu(k, 0) = ˆu0(k) has the solution ˆu(k, t ) = ˆ u(k, 0) e−ik 3t = ˆ u0(k)e−ik 3t. (2.6.19) Using this in (2.6.15), we find that u(x, t ) = 1 √2π ∫∞−∞ ˆu0(k)ei(kx −k3t)dk (2.6.20) is the solution of (2.4.1) with the initial profile u(x, 0) = u0(x).As before we view the integral (2.6.20) as a continuous sum (a superposition) of monochromatic plane wave solutions of (2.4.1) with wave number k and amplitude ˆu0(k),i.e., the waves ˆu0(k)ei(kx −k3t). Comparing (2.6.14) with (2.6.20) , we see that (2.6.14) is just the value of (2.6.20) at t = 0 . Thus, the initial pulse-like profile u0 is a wave packet consisting of the (continuous) sum (2.6.14) of “building blocks” ˆu0(k)eikx , k ∈ R. As time is turned on, each of the the building blocks of the wave packet evolve in time to give rise to (2.6.20) , but with different speeds! As we also observed in Section 2.6, we have for each wavenumber k ∈ R we have ˆu0(k)ei(kx −k3t) = ˆ u0(k)eik (x−k2t). (2.6.21) This shows us that the building block with wavenumber k in (2.6.20) has phase velocity vp(k) = k2, which is precisely recovering the dispersion phenomenon we covered in Section 2.6. As the building blocks of the initial profile (2.6.14) with different wave numbers propagate with different speeds as soon as t > 0, the wavepacket cannot maintain its solitary profile rigidly as t evolves. The presence of dispersion makes the initial wave 34 profile spread as time evolves. 
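This spreading can be seen concretely in a short numerical experiment. The sketch below (an editorial illustration, not part of the original text) evaluates the Fourier-space solution (2.6.19)–(2.6.20) on a periodic grid using the FFT; the domain size, resolution, and Gaussian initial profile are arbitrary choices made for the demonstration:

```python
import numpy as np

# Sketch: evolve a pulse-like profile under u_t - u_xxx = 0 by applying the
# Fourier-space solution u_hat(k, t) = u_hat(k, 0) * exp(-i k^3 t) from (2.6.19)
# mode by mode on a (large) periodic grid.
N, L = 4096, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # discrete wavenumbers

u0 = np.exp(-x**2)                               # pulse-like initial condition
u0_hat = np.fft.fft(u0)

for t in (0.0, 1.0, 2.0):
    u = np.real(np.fft.ifft(u0_hat * np.exp(-1j * k**3 * t)))
    # Each Fourier mode travels with its own phase velocity k^2, so the peak
    # decays and an oscillatory tail develops: the wave packet spreads.
    print(f"t = {t:.1f}   peak height = {u.max():.3f}")
```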
This is why dispersive linear wave models cannot support solitary wave solutions. 2.6.2 Revisiting the linear wave models We now look at the dispersion relation for the linear wave models we have studied earlier. For the wave equation (2.2.1) , a plane wave u(x, t ) = ei(kx −ωt ) is a solution if and only if ω2 = c2k2. (2.6.22) Thus, ω(k) |k| = ±| c|, (2.6.23) which is real-valued and constant! Recalling the Definition 2.6.5 of dispersion, we see that the wave equation does not exhibit dispersion. This also means that plane waves with different wave numbers k all have the phase velocity vp(k) = ±| c|. From a wave theory point, this is why we were able to find solitary wave solutions of the wave equation, as well as traveling waves of arbitrary shapes preserved in time. For the Klein-Gordon equation (2.3.1) , a plane wave u(x, t ) = ei(kx −ωt ) is a solution if and only if ω2 = k2 + 1 . (2.6.24) Thus, ω(k) |k| = ±√k2 + 1 |k| , (2.6.25) which is real-valued but not constant. Therefore, Klein-Gordon equation exhibits dis-persion, just as Airy’s dispersive wave equation. However, note that ω(k) |k| is monotone decreasing for k > 0 as opposed to the case for the Airy’s equation (see Figure 2.10). Plane waves with larger wave numbers k smaller phase velocities, but still, the wave 35 packets are distorted in time and this is why Klein-Gordon equation (2.3.1) does not support solitary waves. Finally, for the heat equation (2.5.1) , a plane wave u(x, t ) = ei(kx −ωt ) is a solution if and only if −iω + k2 = 0 , (2.6.26) which gives ω = −ik 2. (2.6.27) Thus, ω(k) |k| = −i √k2 |k| = −i|k|, (2.6.28) which is purely imaginary. This relation looks different compared to the relations we obtained for other linear wave models. Recall from Proposition 2.5.1 that the heat equation does not support any solitary waves. But it does not support any wavetrains either! Is the heat equation dispersive? Recalling (2.6.27) , we see that a monochromatic plane wave solution of the heat equation (2.5.1) is given by u(x, t ) = ei(kx −ωt ) = ei(kx +ik 2t) = eikx −k2t = e−k2teikx . (2.6.29) Since −k2 < 0, it is clear that the amplitude of any plane wave ei(kx −ωt ) solution of the heat equation goes to 0 exponentially fast as t → ∞ . This is called dissipation . The reason why heat equation does not support solitary wave solutions is not because plane waves with different numbers propagate with different velocities (dispersion), it is because any plane wave melts to 0 over time without any propagation (dissipation). The heat equation is an example of a dissipative linear PDE and it does not support any sort of traveling wave solutions. 36 Chapter 3 Nonlinear Waves and the Korteweg-de Vries Equation 3.1 Overview Solitary waves, also known as localized traveling waves, are distinguished from periodic solutions, which resemble a train of waves rather than a single wave. Our study in Chapter 2 showed that whenever dispersion is present, a linear equation modeling wave propagation cannot have any solitary traveling wave solutions. In systems that exhibit dispersion, the presence of nonlinearity is necessary for the existence of solitary waves. The propagation of a solitary wave can be understood as a dynamic balance between “dispersive forces” that attempt to separate the wave and “nonlinear forces” that try to compress it. We will continue our study with a nonlinear model that has this remarkable balance. But first, we would like to briefly review the beautiful history of discovery of solitary waves. 
The traveling waves were first believed to be limited to being periodic waves. In the 19th century, the British astronomer George Biddell Airy (1801–1892) formed the Airy wave theory, which is a linear theory of wave propagation on the surface of a fluid. The 37 traveling wave solutions describing were sinusoidal functions of horizontal position and time (see also Stokes’ work ). Many mathematicians after Airy worked on the idea of a purely periodic solutions modeling traveling waves on the water surface, and the understanding of wave propagation mainly relied on linear theory. A detailed account of developments surrounding Airy’s work and prior works on water waves can be found in . This understanding was significantly challenged by a curious observation made by John Scott Russell (1808–1882) in 1834. John Scott Russell was Scottish engineer and naval architect, mainly working on determining the most efficient design for canal boats. In 1834 Russell made an accidental discovery that would change the theory of waves forever (he was 26 years old). While riding his horse by the Union Canal between Edinburgh and Glasgow, he observed a surface water wave in the canal that appeared to be a spatially localized traveling wave. We quote his own account of the moment, published in : “I believe I shall best introduce the phenomenon by describing the circum-stances of my own first acquaintance with it. I was observing the motion of a boat which was rapidly drawn along a narrow channel by a pair of horses, when the boat suddenly stopped — not so the mass of water in the channel which it had put in motion; it accumulated round the prow of the vessel in a state of violent agitation, then suddenly leaving it behind, rolled forward with great velocity, assuming the form of a large solitary elevation, a rounded, smooth and well-defined heap of water, which continued its course along the channel apparently without change of form or diminution of speed . I followed it on horseback, and overtook it still rolling on at a rate of some eight or nine miles an hour, preserving its original figure some thirty feet long and a foot to a foot and a half in height. Its height gradually diminished, and after a chase of one or two miles I lost it in the windings of the channel.” 38 This was a tall wave in size, compared to typical ripples described in linear theory. As such a tall wave would interact with the bottom of a canal differently compared to a small wave, a wave model governing elevation u from surface that is capable of producing such a solution should not be invariant under an amplitude scaling of the form u 7 → au for a > 0.So, Russell’s observation was calling for a nonlinear model for wave propagation. Russell constructed water canals to recreate his “Great Wave of Translation” and succeeded. The observation of John Scott Russell was neglected by the scientists believed in linear wave theory. About 40 years after Russell’s report, a Dutch engineer Diederik J. Korteweg and his student Gustav de Vries derived in a nonlinear mathematical model for wave propagation on shallow water surfaces, known today as the Korteweg-de Vries (KdV) equation ut + 6 uu x + uxxx = 0 . (3.1.1) Here, u(x, t ) describes the elevation of the sea surface from rest at time t as a function of x, where the two-dimensional surface is assumed to vary in only one coordinate, in this case the x-coordinate. 
The model (3.1.1) was actually published by Joseph Boussinesq in 1877 in (about 20 years before Korteweg and de Vries), but it was not noticed broadly at the time by the community studying propagation of water waves. Korteweg and de Vries noted in their paper that for any speed c > 0, the equation (3.1.1) has spatially localized traveling wave solutions propagating right with speed c, with the exact form u(x, t ) = c 2 sech ( √c 2 (x − ct ) )2 . (3.1.2) We begin with a detailed construction of this solution. 39 3.2 Korteweg & de Vries: Seeking solitary traveling waves We seek solutions of (3.1.1) that are of the form u(x, t ) = f (ξ), ξ := x − ct (3.2.1) for a real-valued parameter c. Then we have by the chain rule ut(x, t ) = −cf ′(ξ), ux(x, t ) = f ′(ξ), uxxx (x, t ) = f ′′′ (ξ). (3.2.2) Substituting these in (3.1.1) yields the following nonlinear third-order ordinary differential equation: −cf ′(ξ) + 6 f (ξ)f ′(ξ) + f ′′′ (ξ) = 0 , (3.2.3) which can be immediately integrated once to obtain −cf (ξ) + 3 f (ξ)2 + f ′′ (ξ) + A = 0 , (3.2.4) for an arbitrary integration constant A ∈ R. We first solve in (3.2.4) for f ′′ (ξ) to have f ′′ (s) = − [ 3f (s)2 − cf (s) + A ] . (3.2.5) and then multiply by f ′(ξ) to obtain f ′(ξ)f ′′ (ξ) = cf ′(ξ)f (ξ) − 3f ′(ξ)f 2(ξ) − Af ′(ξ). (3.2.6) This equation can also be integrated exactly once to get 12 f ′(ξ)2 = −f (ξ)3 + c 2 f (ξ)2 − Af (ξ) + E, (3.2.7) 40 for another arbitrary integration constant E ∈ R. Since we are looking for a solitary wave, that is, a solution that’s localized in space, we demand that f (ξ), f ′(ξ), f ′′ (ξ) → 0 as ξ → ±∞ . Looking at (3.2.4) and (3.2.7) , we deduce that A = 0 and E = 0 need to hold under this assumption. Thus, we take the integration constants to be zero. In this case (3.2.7) is simplified to 12 f ′(ξ)2 = −f (ξ)3 + c 2 f (ξ)2. (3.2.8) Now recall our assumption that f (ξ) and its first two derivatives decay as |ξ| → ∞ . This implies that f (ξ)3 is much smaller than c 2 f (ξ)2 as |ξ| is taken to be sufficiently large. Since f (ξ)2 ≥ 0 and the left-hand side of (3.2.8) is also a square, a necessary condition for (3.2.8) to have a real-valued solution f (ξ) is the condition that c > 0. (3.2.9) We now multiply (3.2.8) by 2 and factor the right-hand side to get f ′(ξ)2 = f (ξ)2(c − 2f (ξ)) . (3.2.10) This immediately shows that an additional condition needs to hold in order to have real-valued solutions: c − 2f (ξ) ≥ 0. (3.2.11) Seeking a solution f ∈ (0 , c 2 ], we may substitute f (ξ) = c 2 sech( a(ξ − ξ0)) 2 (3.2.12) 41 in (3.2.10) for a ∈ R to be determined, and demand that it is a solution. We use the identity ddy sech( y)2 = −2 sech( y)2 tanh( y) (3.2.13) and find that f ′(ξ)2 = a2c2 sech( a(ξ − ξ0)) 4 tanh( a(ξ − ξ0)) 2 (3.2.14) and also f (ξ)2(c − 2f (ξ)) = 14 c3 sech( a(ξ − ξ0)) 4 tanh( a(ξ − ξ0)) 2 (3.2.15) Combining (3.2.14) and (3.2.15) in (3.2.10) shows that (3.2.12) solves (3.2.10) if and only if a2 = c 4 (3.2.16) Since y 7 → sech( y) is an even function, we may choose a = √c 2 and find f (ξ) = c 2 sech ( √c 2 (ξ − ξ0) )2 . (3.2.17) This gives u(x, t ) = usol (x, t ; c, x 0) := c 2 sech ( √c 2 (x − ct − x0) )2 , (3.2.18) where we have relabeled x0 := ξ0. This is the famous solitary wave solution of the Korteweg-de Vries equation (3.1.1) discovered in the original paper of Korteweg and de Vries. Profiles of sample solutions at t = 0 are in Figure 3.1 to illustrate the dependence of the profile on the parameter c > 0. 
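As a quick independent check (an editorial sketch, not part of the original derivation), the formula (3.2.18) can be substituted back into the KdV equation (3.1.1) symbolically:

```python
import sympy as sp

# Sketch: verify that the solitary wave (3.2.18) solves u_t + 6*u*u_x + u_xxx = 0.
x, t, x0 = sp.symbols("x t x0", real=True)
c = sp.symbols("c", positive=True)

u = (c / 2) * sp.sech(sp.sqrt(c) / 2 * (x - c * t - x0))**2      # formula (3.2.18)

residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(residual.rewrite(sp.exp)))                     # prints 0
```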
Plots of the solitary waves in Figure 3.1 at the later time t = 4 are provided in Figure 3.2. Remark 3.2.1. The solitary wave solution (3.2.18) depends on a single parameter c > 0 aside from the translation parameter x0 that determines the initial position of the peak, at x = x0 when t = 0 . c is the speed of propagation of the solitary wave. But we see from (3.2.18) that the width of the wave profile and its amplitude are all correlated with speed c. 42 -20 -10 0 10 20 -0.5 0.0 0.5 1.0 1.5 2.0 -20 -10 010 20 -0.5 0.0 0.5 1.0 1.5 2.0 -20 -10 010 20 -0.5 0.0 0.5 1.0 1.5 2.0 -20 -10 010 20 -0.5 0.0 0.5 1.0 1.5 2.0 Figure 3.1: Solitary wave solutions of Korteweg-de Vries equation with varying speeds c at time t = 0 . Top-to-bottom: c = 1 , c = 2 , c = 3 , c = 4 . Here we took x0 = 0 . As c increases the amplitude of the profile becomes larger and the width of the profile becomes smaller. -20 -10 0 10 20 -0.5 0.0 0.5 1.0 1.5 2.0 -20 -10 010 20 -0.5 0.0 0.5 1.0 1.5 2.0 -20 -10 010 20 -0.5 0.0 0.5 1.0 1.5 2.0 -20 -10 010 20 -0.5 0.0 0.5 1.0 1.5 2.0 Figure 3.2: Solitary wave solutions of Korteweg-de Vries equation with different speeds c at time t = 4 . Top-to-bottom: c = 1 , c = 2 , c = 3 , c = 4 . Here we took x0 = 0 . 43 We see that if c > 0 is made larger, then the factor √c 2 multiplying (x − ct − x0) becomes larger, making the wave profile narrower. At the same time, the peak amplitude c 2 also becomes larger. Thus, taller solitary wave solutions of the KdV equation (3.1.1) have narrower profiles and they travel faster. This is illustrated in Figure 3.1 and Figure 3.2. 3.3 Other types of traveling waves? We will now investigate whether there are other (not solitary) traveling wave solutions of the KdV equation (3.1.1) . It is possible that this equation also admits wavetrain solutions (spatially periodic traveling waves) like the linear equations we have studied in Chapter 2. Moreover, we do not know if there are some other types of traveling waves supported by (3.1.1) . As the solutions we are seeking are not solitary waves, we will abandon the requirement that the integration constants A and E are both zero and instead reconsider (3.2.7): 12 f ′(ξ)2 = −f (ξ)3 + c 2 f (ξ)2 − Af (ξ) + E, (3.3.1) with arbitrary A, E ∈ R. We observe that (3.3.1) is the statement that 12 (f ′)2 + V (f ) = E (3.3.2) for the cubic potential V (f ) := f 3 − c 2 f 2 + Af, V (f ) ≡ V (f ; c, A ). (3.3.3) The equation (3.3.2) is the statement of conservation of total energy (kinetic energy + potential energy) for the motion of a particle with unit mass driven by the conservative force field F (f ) with the potential V (f ). Namely, the force F (f ) is given by F (f ) = − ∂V ∂f , (3.3.4) 44 or explicitly, the conservative force field is F (f ) = − [ 3f 2f ′ − cf f ′ + Af ′] . (3.3.5) The constant E on the right-hand side of (3.3.2) is the energy level to which the motion is confined to. From this point of view, the equation (3.2.5) is the equation of motion (Newton’s Second Law of Motion): f ′′ = − ∂V ∂f or f ′′ = F (f ). (3.3.6) Since 12 (f ′)2 ≥ 0, the condition (3.3.2) implies that V (f ) ≤ E (3.3.7) must hold for any solution of (3.3.1). We will now look at different regimes of parameters c and A resulting in different types of behavior of solutions f driven by the potential V (f ; c, A ) through (3.3.2) . Our analysis will be based on whether V (f ; c, A ) has local extrema or not, which is related to the sign of the (quadratic) discriminant of derivative V ′(f ; c, A ) of V with respect to f . 
We have $V'(f; c, A) = 3f^2 - cf + A$ (3.3.8) and the discriminant is $\Delta := c^2 - 12A$ (3.3.9).

The case $\Delta \le 0$. In this case $V(f; c, A)$ does not have any real critical points, hence $V(f; c, A)$ does not have local extrema. Any solution $f$ of (3.3.1) must satisfy (3.3.7). So, as can be seen in Figure 3.3, any solution $f$ starting at $f_0 = f(\xi_0)$ for some $\xi_0$ such that $V(f_0) \le E$ is unbounded: $f$ is forced to tend to $f = -\infty$ eventually. Thus, we see that if $\Delta \le 0$, then there are no bounded real-valued solutions.

Figure 3.3: Left pane: $V(f; c, A)$ with $\Delta < 0$; right pane: $V(f; c, A)$ with $\Delta = 0$. Solution trajectories are confined below the level $V = E$, drawn in orange. The maximum value of a solution $f$ is denoted by $\tilde{f}_0$.

The case $\Delta > 0$. In this case $V(f; c, A)$ has a local maximum value $V = V^+(c, A)$ attained at $f = h_1 := \tfrac{1}{6}(c - \sqrt{\Delta})$ (3.3.10) and a local minimum value $V = V^-(c, A)$ attained at $f = h_2 := \tfrac{1}{6}(c + \sqrt{\Delta}) > h_1$ (3.3.11). See Figure 3.4 for a plot of a sample $V(f; c, A)$, with $c = 10$ and $A = 5$, satisfying $\Delta = 10^2 - 12 \cdot 5 = 40 > 0$, and the local extrema locations $f = h_1$ and $f = h_2$. As can also be seen in Figure 3.4, if the total energy level $E$ is larger than $V^+$ or less than $V^-$, the resulting solutions are unbounded: $f$ is forced to tend to $f = -\infty$ eventually. Therefore, in order to have a bounded solution of (3.3.1), the energy level $E$ in (3.3.2) must be chosen to lie in the range $[V^-(c, A), V^+(c, A)]$. Such a level is plotted in Figure 3.4. Moreover, if $E = V^-(c, A)$, then $f$ is stuck at the bottom of the potential well, resulting in the constant solution $f(\xi) \equiv h_2$. The other extreme case is when $E = V^+(c, A)$. We delay the treatment of that case to the end of this chapter since this choice results in a solitary wave solution, which we have already constructed in Section 3.2.

Figure 3.4: $V(f; c, A)$ with $\Delta > 0$. Solution trajectories are confined below the level $V = E$, drawn in orange, and the level $E$ must be chosen in the range $[V^-, V^+]$. $f_3 < f_2 < f_1$ denote the simple real roots of $V(f; c, A) = E$ for a value of $E \in (V^-, V^+)$.

In case $E$ is fixed satisfying $V^- < E < V^+$, the equation $V(f; c, A) = E$ (3.3.12) has three distinct real roots, $f = f_1, f_2, f_3$, which we order to satisfy $f_3 < f_2 < f_1$; see Figure 3.4. Any solution starting at $f = f_0$ for some $f_0 < f_3$ is again an unbounded solution, tending to $f = -\infty$. The only other possibility we have left while satisfying $V(f) \le E$ is to have $f$ confined between $f_2$ and $f_1$; see Figure 3.4. We may assume without loss of generality that $f$ starts at $f = f_1$. Then the solution $f$ oscillates periodically between $f_2$ and $f_1$. It turns out that such periodic solutions are not described in terms of trigonometric functions, as was the case for the linear wave models we studied in Chapter 2. Instead, the periodic solutions we are facing in this chapter are given in terms of the so-called elliptic functions introduced by Jacobi. We now introduce these functions and cover their basic properties.

3.4 Jacobi elliptic functions

The material presented in this section is an expanded version of standard material on the Jacobi elliptic functions. We start with an analogy with inverse trigonometric functions. Consider the integral $v = \int_0^{\phi} \frac{d\theta}{\sqrt{1 - m\sin^2\theta}}$ (3.4.1), where the parameter $m$ is taken to satisfy $0 \le m \le 1$ so that the integrand is real-valued. We compare the integral (3.4.1) with the elementary integral $w = \int_0^{\psi} \frac{dt}{(1 - t^2)^{1/2}}$.
(3.4.2) Substituting t = sin (θ) in (3.4.2) for θ ∈ [− π 2 , π 2 ], the endpoints become θ = 0 and ϕ ∈ [− π 2 , π 2 ] such that sin( ϕ) = ψ. Then (3.4.2) reads w = ∫ψ 0 d t (1 − t2)1/2 = ∫ϕ 0 d θ = ϕ = arcsin( ψ) (3.4.3) Thus, we arrived at the inverse trigonometric function w = arcsin (ψ), or ψ = sin (w).The point we are trying to make is that the integral (3.4.2) can be taken as a definition for the inverse trigonometric function such that sin( w) = ψ. (3.4.4) In 1829 Carl Gustav Jacob Jacobi defined, in a similar manner, a pair of inverse functions from (3.4.1) . Jacobi published his theory introducing what is known today as the Jacobi elliptic functions in his seminal book . The same quantities and functions were simultaneously discovered and studied by Niels Henrik Abel in and . These elliptic 48 functions are defined from (3.4.1) by the relations sn( v | m) = sin( ϕ), cn( v | m) = cos( ϕ), v = ∫ϕ 0 dθ √ 1 − m sin( θ)2 , (3.4.5) and they are often referred to as the "Jacobi-sn " and the "Jacobi-cn " functions. Here the parameter m ∈ [0 , 1] is called the squared elliptic modulus m = K2, where K is called the elliptic modulus .We look at the two edge-case values of the parameter m in the integral (3.4.5) first. The case m = 0 . In this case the integral in (3.4.5) is simply v = ∫ϕ 0 dθ = ϕ. (3.4.6) Thus, cn( v | 0) = cos( ϕ) = cos( v), (3.4.7) showing that the Jacobi-cn function degenerates to cosine. Similarly, we have sn( v | 0) = sin( ϕ) = sin( v). (3.4.8) We record this property in the following proposition. Proposition 3.4.1. The Jacobi elliptic functions cn (· | m) and sn (· | m) degenerate to the trigonometric functions cos( ·) and sin( ·), respectively, as m → 0+. We now look at the edge case m = 1 which is slightly more involved. The case m = 1 . In this case we make the substitution sin (θ) = tanh (τ ) in the integral (3.4.5) with m = 1 : v = ∫ϕ 0 dθ √ 1 − sin( θ)2 . (3.4.9) 49 Note that θ = 0 is mapped to τ = 0 and we let τ0 to be the value of τ such that θ = ϕ when τ = τ0. Then, cos( θ) dθ = sech( τ )2 dτ. (3.4.10) The left-hand side of (3.4.10) can be expressed in terms of τ by reusing the substitution and as cos( θ) dθ = √ 1 − tanh( τ )2 dθ = sech( τ ) dθ. (3.4.11) Thus, we have dθ = sech( τ ) dτ. (3.4.12) Therefore, the integrand in (3.4.9) is transformed as dθ √ 1 − sin( θ)2 = dτ (3.4.13) and we obtain v = ∫ϕ 0 dθ (1 − sin( θ)2)1/2 = τ ∣∣∣τ0 0 = τ0. (3.4.14) Now, recall that tanh (τ0) = sin (ϕ). Consider a right triangle with unit-length hy-potenuse and fix the acute angle ϕ. Then the side opposite to ϕ has length sin (ϕ) = tanh (τ0), hence the adjacent side has length √1 − tanh( τ0)2 = sech (τ0). Since the adja-cent side length is also equal to cos (ϕ), we find that cos (ϕ) = sech (τ0). Recalling that v = τ0, we have arrived at cos( ϕ) = sech( v). (3.4.15) It follows from (3.4.1) that cn( v | 1) = sech( v). (3.4.16) We record this property in the following proposition. 50 Proposition 3.4.2. The Jacobi elliptic function cn (· | m) degenerates to the hyperbolic function sech( ·) as m → 1−. One can similarly show that the Jacobi elliptic function sn (· | m) degenerates to the hyperbolic function tanh( ·) as m → 1−.The calculations above show that the cn (v | m) and similarly sn (v | m) are periodic functions of v for 0 ≤ m < 1, but the periodicity of cn (v | m) is lost when m = 1 (or, the period becomes infinite since sech (v) has “infinite period”). 
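Propositions 3.4.1 and 3.4.2 are easy to check numerically. The sketch below (an editorial illustration, not part of the original text) uses SciPy's implementation of the Jacobi elliptic functions, scipy.special.ellipj, which takes the same parameter $m$ as in (3.4.5):

```python
import numpy as np
from scipy.special import ellipj

# Sketch: cn(.|m), sn(.|m) degenerate to cos, sin as m -> 0 and to sech, tanh
# as m -> 1 (Propositions 3.4.1 and 3.4.2).  ellipj returns (sn, cn, dn, phi).
v = np.linspace(-5.0, 5.0, 1001)

sn0, cn0, _, _ = ellipj(v, 0.0)
print(np.max(np.abs(cn0 - np.cos(v))), np.max(np.abs(sn0 - np.sin(v))))          # ~1e-16

sn1, cn1, _, _ = ellipj(v, 1.0 - 1e-12)
print(np.max(np.abs(cn1 - 1.0 / np.cosh(v))), np.max(np.abs(sn1 - np.tanh(v))))  # tiny
```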
The period of $\operatorname{cn}(v \mid m)$ and $\operatorname{sn}(v \mid m)$ is related to the period $2\pi$ of cosine and sine through the integration domain in (3.4.1). Indeed, the period $T$ in the $v$-domain of $\operatorname{cn}(v \mid m)$ can be written as $T = \int_0^{2\pi} \frac{d\theta}{(1 - m\sin^2\theta)^{1/2}} = 4\int_0^{\pi/2} \frac{d\theta}{(1 - m\sin^2\theta)^{1/2}}$ (3.4.17), where the last equality holds because $\sin^2\theta$ has period $\pi$ and is symmetric about $\theta = \pi/2$. The last integral $K(m) := \int_0^{\pi/2} \frac{d\theta}{(1 - m\sin^2\theta)^{1/2}}$ (3.4.18) is called the complete elliptic integral of the first kind ("complete" because it has fixed limits of integration; compare with (3.4.1)). Clearly, $K(0) = \frac{\pi}{2}$ (3.4.19). Note that given $0 \le m_1 < m_2 \le 1$, we have for $0 \le \theta \le \frac{\pi}{2}$
$m_1 \sin^2\theta \le m_2 \sin^2\theta \implies \dfrac{1}{(1 - m_1\sin^2\theta)^{1/2}} \le \dfrac{1}{(1 - m_2\sin^2\theta)^{1/2}}$ (3.4.20),
with strict inequality for $0 < \theta \le \frac{\pi}{2}$, which implies $K(m_1) < K(m_2)$ (3.4.21). Thus, $K(m)$ is monotonically increasing.

3.5 Korteweg-de Vries equation revisited: Wavetrains

We may now resume investigating the traveling wave solutions of the KdV equation (3.1.1). Our aim in this section is to construct wavetrain solutions of (3.1.1), which are the periodic solutions of (3.3.1) mentioned at the end of Section 3.3. We proceed with the integration of (3.3.2) to find $f$ that oscillates periodically between $f_2$ and $f_1$; see Figure 3.4. This is possible when we have an energy level $E \in (V^-, V^+)$ in (3.3.2). First, we rewrite (3.3.2) as $(f')^2 = 2[E - V(f; c, A)] = 2[-f^3 + \tfrac{c}{2}f^2 - Af + E]$ (3.5.1). Recall that the polynomial $E - V(f; c, A)$ has three simple roots $f_3 < f_2 < f_1$ which, of course, depend on $c$, $A$, and $E$. Thus, we can express the right-hand side above as $(f')^2 = 2[-(f - f_1)(f - f_2)(f - f_3)]$ (3.5.2). Since $f$ is confined to $f_2 \le f \le f_1$, we may take the square root and integrate the resulting separable equation to obtain the implicitly defined solution
$s = s_1 \pm \int_{f_1}^{f} \frac{dg}{\sqrt{2[E - V(g; c, A)]}} = s_1 \pm \int_{f_1}^{f} \frac{dg}{\sqrt{2[-(g - f_1)(g - f_2)(g - f_3)]}}$ (3.5.3),
where $f(s_1) = f_1$. Due to periodicity it makes sense to write $g := f_1 + (f_2 - f_1)\sin^2\theta$, $dg = 2(f_2 - f_1)\sin\theta\cos\theta \, d\theta$ (3.5.4), which gives
$s = s_1 \pm \int_0^{\phi} \frac{2(f_2 - f_1)\sin\theta\cos\theta \, d\theta}{\sqrt{2}\,\big[(f_1 - f_2)\sin^2\theta \,(f_1 - f_2)\cos^2\theta \,\big(f_1 - f_3 - (f_1 - f_2)\sin^2\theta\big)\big]^{1/2}} = s_1 \pm \Big(-\sqrt{\tfrac{2}{f_1 - f_3}}\Big)\int_0^{\phi} \frac{d\theta}{(1 - m\sin^2\theta)^{1/2}}$ (3.5.5),
where $m := \dfrac{f_1 - f_2}{f_1 - f_3}$, $f = f_1 + (f_2 - f_1)\sin^2\phi$ (3.5.6). Thus, we obtain from (3.5.5)
$\mp \sqrt{\dfrac{f_1 - f_3}{2}}\,(s - s_1) = \int_0^{\phi} \dfrac{d\theta}{(1 - m\sin^2\theta)^{1/2}}$ (3.5.7).
Recalling the definition (3.4.5), we see that $-v = -\int_0^{\phi} \frac{d\theta}{(1 - m\sin^2\theta)^{1/2}} = \int_0^{-\phi} \frac{d\theta}{(1 - m\sin^2\theta)^{1/2}}$ (3.5.8), where we have employed $\theta \mapsto -\theta$ in the last equality. Thus, $\operatorname{cn}(-v \mid m) = \cos(-\phi) = \cos(\phi) = \operatorname{cn}(v \mid m)$. We may now suppress the $\mp$ in (3.5.7) as we recall (3.4.5) and obtain from (3.5.7) the identity
$\operatorname{cn}\!\big(\tfrac{1}{\sqrt{2}}(f_1 - f_3)^{1/2}(s - s_1) \,\big|\, m\big) = \cos(\phi)$ (3.5.9).
Finally, observe from (3.5.6) that $f = f_1 + (f_2 - f_1)\sin^2\phi = f_1 + (f_2 - f_1)(1 - \cos^2\phi) = f_2 + (f_1 - f_2)\cos^2\phi$ (3.5.10).

Figure 3.5: Cnoidal wave solution $u_{\mathrm{cn}}(x,t)$ driven by $V(f; c, A) = E$, where $c = 10$ and $A = 2$, resulting in $f_1 = 4$, $f_2 = 2$, $f_3 = -1$. The peak height is $f_1 = 4$ and the trough height is $f_2 = 2$; the peak-to-trough measurement is $f_1 - f_2 = 2$. Here $m = 2/5$. Top row: $u_{\mathrm{cn}}(x,t)$ at time $t = 0$; bottom row: $u_{\mathrm{cn}}(x,t)$ at time $t = 3/2$.
Combining these gives f (s) = f2 + ( f1 − f2) cn ( 1 √2 (f1 − f3)1/2(s − s1) ∣∣∣ m )2 , m = f1 − f2 f1 − f3 ∈ (0 , 1) , (3.5.11) which reads in the (x, t ) coordinates u(x, t ) = ucn (x, t ) := f2 + ( f1 − f2) cn ( 1 √2 (f1 − f3)1/2(x − ct − x0) ∣∣∣ m )2 , x0 := s1. (3.5.12) This is the cnodial wave solution of (3.1.1) . Note that the Jacobi-cn function is on an ambient level f = f2. The peak amplitude of ucn (x, t ) is f2 + ( f1 − f2) = f1 and the through level is f2, recall that f1 > f 2. Peak-to-trough measurement is f1 − f2. We give a plot of the cnodial wave solution of the KdV equation in Figure 3.5. 54 Figure 3.6: The case where E = V +(c, A ) so that f2 and f3 merge at the value f = h1, which is a saddle point. Recovering the solitary wave from the wavetrain Recall from Proposition 3.4.2 that cn (v | m) → sech (v) as m → 1−. Given c and A fixed, satisfying ∆ = c2 − 12 A > 0, recall the cnodial wave solution of the KdV equation (3.1.1) given by (3.5.12) . In this context, the squared elliptic modulus m is given in terms of the roots f3 < f 2 < f 1 of the polynomial equation V (f ; c, A ) = E for given E ∈ (V −, V +);see Figure 3.4. We recall from (3.5.11) that m = f1 − f2 f1 − f3 ∈ (0 , 1) . (3.5.13) This shows that the limit m → 1− is realized in the limit where the roots f2 and f3 merge. Looking at Figure 3.4 again, we see that this regime corresponds to taking E to be E = V +(c, A ). Note that c and A are fixed at the beginning, fixing the potential V (f ; c, A ). This situation is illustrated in Figure 3.6 and as can be seen, one necessarily has f2 = f3 = h1 (3.5.14) in this case, where h1 = 16 (c − √∆ ). We see from Figure 3.6 that the equilibrium 55 point f = h1 is a saddle point. It is unstable in the sense that any solution perturbed from f = h1 in the direction f < h 1 has an unbounded trajectory. On the other hand, perturbing f = h1 in the direction f > h 1 results in a trajectory that goes to f = f1,where the potential energy in (3.3.2) becomes maximal and the kinetic energy becomes zero, and then converges back to f = h1. We claim that this is the orbit of the solitary wave constructed in Section 3.2 but on a nonzero background. First, note that V (f ; c, A ) − E = V (f ; c, A ) − V +(c, A )= f 3 − c 2 f 2 + Af − V (h1; c, A )= f 3 − c 2 f 2 + Af − V (h1; c, A ) (3.5.15) and we also have V (f ; c, A ) − E = ( f − h1)2(f − f1). (3.5.16) We use these two equations to arrive at the identity f 3 − c 2 f 2 + Af − V (h1; c, A ) = ( f − h1)2(f − f1), (3.5.17) which we use to find the largest root f1 of V (f ; c, A ) = E when E = V +. The comparing the coefficients of the terms that are quadratic in f in (3.5.17) yields − c 2 = −f1 − 2h1. (3.5.18) This gives f1 = c 2 − 2h1 = c 2 − 26 (c − √∆) = c 6 + 2√∆6 = h2 + √∆6 , (3.5.19) where we recalled that h2 = 16 (c + √∆). Thus, in the limit as m → 1−, we have f2 = f3 = h1 and f1 = h2 + √∆6 . We also note the following values of other combinations 56 that appear in the formula (3.5.12). We have f1 − f2 = h2 + √∆ − h1 = 16 (c + √∆) + √∆6 − 16 (c − √∆) = √∆2 =: a > 0. (3.5.20) and also f1 − f3 = a (3.5.21) since f2 = f3. Thus, if m = 1 , using Proposition 3.4.2, we find that the solution (3.5.12) becomes u(x, t ) = h1 + a sech ( √a √2 (x − ct − x0) )2 . (3.5.22) Note that this solution is localized in space and no longer periodic. In fact, the limit m → 1− corresponds to sending the spatial period of oscillations to ∞. We illustrate this in Figure 3.7. As m → 1−, we are eventually left with just one peak, which is the sech 2 profile. 
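The limiting formula (3.5.22) can also be verified directly. The following SymPy sketch (an editorial addition, not part of the original text) substitutes (3.5.22) into the KdV equation (3.1.1), encoding $a = \sqrt{\Delta}/2$ and $h_1 = \tfrac{1}{6}(c - \sqrt{\Delta})$ through the equivalent relation $h_1 = (c - 2a)/6$:

```python
import sympy as sp

# Sketch: the sech^2 pulse (3.5.22) riding on the background h1 solves the KdV
# equation, where a = sqrt(Delta)/2 and h1 = (c - sqrt(Delta))/6 = (c - 2a)/6.
x, t, x0 = sp.symbols("x t x0", real=True)
c, a = sp.symbols("c a", positive=True)

h1 = (c - 2 * a) / 6
u = h1 + a * sp.sech(sp.sqrt(a) / sp.sqrt(2) * (x - c * t - x0))**2   # (3.5.22)

residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(residual.rewrite(sp.exp)))                          # prints 0
```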
Although the solution (3.5.22) is spatially localized, the localized sech profile is su-perimposed on an ambient level u = h1. We cannot directly subtract this background and obtain another solution of the KdV equation (3.1.1) because (3.1.1) is a nonlinear equation — the sum of two solutions is not a solution. An important invariance of the KdV equation plays a key role to remove the ambient background. Proposition 3.5.1 (Galilean invariance) . Suppose that u(x, t ) is a solution of the KdV equation ut + 6 uu x + uxxx = 0 . (3.5.23) Let X := x − λt and T := t. Define v(X, T ) := u(X + λT, T ) − B. Then v(X, T ) solves vT + 6 vv X + vXXX = 0 , (3.5.24) provided B = λ 6 . 57 Proof. By definition, we have x = X + λT, t = T. (3.5.25) Since B is a constant, the Chain Rule yields ∂v ∂X = ∂u ∂x ∂x ∂X + ∂u ∂t ∂t ∂X = ∂u ∂x + 0 = ∂u ∂x , (3.5.26) which implies that ∂3v∂X 3 = ∂3u∂x 3 . (3.5.27) On the other hand, ∂v ∂T = ∂u ∂x ∂x ∂T + ∂u ∂t ∂t ∂T = ∂u ∂x · λ + ∂u ∂t . (3.5.28) Then vT + 6 vv X + vXXX = λu x + ut + 6( u − B)ux + uxxx = ut + 6 uu x + uxxx + ( λ − 6B)ux = ( λ − 6B)ux, (3.5.29) where we have used the fact that u satisfies the KdV equation (3.1.1) by assumption. The identity (3.5.29) shows that vT + 6 vv X + vXXX = 0 provided λ = 6 B. ■ We may use Proposition 3.5.1 to remove the nonzero background (3.5.22) . To do so, we simply need to choose B = h1. This introduces a time-dependent shift in the x-coordinate as in Proposition 3.5.1, so we introduce new coordinates ˜x = x − λt = x − 6Bt = x − 6h1t, ˜t = t. (3.5.30) 58 We then find that the function v( ˜x, ˜t) := u( ˜x + 6 h1˜t, ˜t) − h1, where u is as given in (3.5.22), becomes v(˜ x, ˜t) = h1 + a sech ( √a √2 (˜ x + 6 h1˜t − c˜t − x0) )2 − h1 = a sech ( √a √2 (˜ x − (c − 6h1)˜ t − x0) )2 . (3.5.31) We note that c − 6h1 = c − (c − √∆) = √∆ =: ˜ c (3.5.32) is the speed in the new coordinates ( ˜x, ˜t). We recall the definition of a > 0 in (3.5.20) and see that a = ˜c 2 . (3.5.33) Therefore, v(˜ x, ˜t) = ˜c 2 sech ( √˜c 2 (˜ x − ˜c˜t − x0) )2 . (3.5.34) Comparing this with the formula (3.2.18) , we see that v( ˜x, ˜t) is a solitary wave solution of the KdV equation v˜t + 6 vv ˜x + v˜x˜x˜x = 0 (3.5.35) with speed of propagation ˜c.59 -20 -10 0 10 20 01234-20 -10 0 10 20 01234-20 -10 0 10 20 01234-20 -10 0 10 20 01234-20 -10 0 10 20 01234Figure 3.7: The solution ucn( x,t ) at t = 0 obtained from choosing different values of E ≤ V +(c, A ). As E approaches V +, m approaches 1. The values of V +(c; A) − E from top to bottom are: 10 −1, 10 −2, 10 −8, 10 −12 , and 10 −16 . Observe that the spatial period of oscillations tends to infinity as m → 1−. 60 Bibliography Niels Henrik Abel. “Recherches sur les fonctions elliptiques”. In: Journal für die reine und angewandte Mathematik 2 (1827), pp. 101–181. Niels Henrik Abel. “Recherches sur les fonctions elliptiques”. In: Journal für die reine und angewandte Mathematik 3 (1828), pp. 160–190. George Biddell Airy. “Tides and waves”. In: Encyclopædia Metropolitana. Mixed Sciences 3 (1841). Ed. by H.J. Rose and Et. Al. Joseph Valentin Boussinesq. Essai sur la Théorie des Eaux Courantes . Paris: Imprimerie Nationale, 1877, pp. 1–680. Walter Craig and Jonathan Goodman. “Linear dispersive equations of Airy type”. In: Journal of Differential Equations 87.1 (1990), pp. 38–61. issn : 00220396. doi : 10.1016/0022-0396(90)90014-G . Alex Craik. “The origins of water wave theory”. In: Annual Review of Fluid Mechanics 36.1 (2004), pp. 1–28. issn : 0066-4189. doi : 10.1146/annurev.fluid. 36.050802.122118 . 
Jean le Rond D’Alembert. Traité de Dynamique . Paris: Reprinted in English by Kessinger Publishing (2009), 1743, p. 272. Phillip Gerald Drazin and Robin Stanley Johnson. Solitons . Cambridge University Press, 1989. isbn : 9780521336550. doi : 10.1017/CBO9781139172059 . Lawrence Evans. Partial Differential Equations . 2nd ed. Vol. 19. Graduate Studies in Mathematics. Providence, Rhode Island: American Mathematical Society, 2010. isbn : 9780821849743. doi : 10.1090/gsm/019 . url : 019 . Walter Gordon. “Der Comptoneffekt nach der Schrödingerschen Theorie”. In: Zeitschrift für Physik 40.1-2 (1926), pp. 117–133. issn : 1434-6001. doi : 10.1007/ BF01390840 . Carl Gustav Jacob Jacobi. Fundamenta Nova Theoriae Functionum Ellipticarum .Königsberg: Borntraeger. Reprinted by Cambridge University Press (2012), 1829. isbn : 978-1-108-05200-9. Robin Stanley Johnson. A Modern Introduction to the Mathematical Theory of Water Waves . Cambridge University Press, 1997. isbn : 9780521591720. doi : 10. 1017/CBO9780511624056 .61 Oskar Klein. “Quantentheorie und fünfdimensionale Relativitätstheorie”. In: Zeitschrift für Physik 37.12 (1926), pp. 895–906. issn : 1434-6001. doi : 10.1007/BF01397481 . Diederik J. Korteweg and Gustav de Vries. “On the change of form of long waves advancing in a rectangular canal, and on a new type of long stationary waves”. In: The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 39.240 (1895), pp. 422–443. issn : 1941-5982. doi : 10.1080/14786449508620739 . John Scott Russell. Report on Waves: : Report of the fourteenth meeting of the British Association for the Advancement of Science, York, September 1844 . Tech. rep. British Association for the Advancement of Science, 1845, pp. 311–390. George G. Stokes. “On the theory of oscillatory waves”. In: Transactions of the Cambridge Philosophical Society 8 (1847), pp. 441–445. Walter A. Strauss. Partial Differential Equations: An Introduction . Second Edi. New Jersey: John Wiley and Sons Inc., 2008, 464 pages. isbn : 978-0470-05456-7. 62
Cardinal exponentiation without generalized continuum hypothesis (Mathematics Stack Exchange)

Question (asked Jan 15, 2019 by MiRi_NaE; tags: elementary-set-theory, cardinals):

First I have to confess that I don't know the language of set theory. Let $A$ and $B$ be infinite cardinals with $A > B$. My question is: is $A^B = A$? (without assuming the generalized continuum hypothesis)

Remark: assuming the generalized continuum hypothesis (GCH for short), this can be proved as follows, at least for successor (non-limit) cardinals. Suppose $A$ is a successor cardinal. Then $A = 2^C$ for some $C \ge B$ by GCH. Therefore $A^B = (2^C)^B = 2^{CB} = 2^C = A$. Unfortunately, I don't know how to prove the limit cardinal case. Please somebody help me!

Answer (Noah Schweber, Jan 15, 2019):

This is a great question! It's totally reasonable to expect - assuming GCH - that $A^B = A$ when the base $A$ is larger than the exponent $B$, since that's true in all the "simply-imaginable" situations. However, that's not the whole picture. As you've noticed, limit cardinals pose an odd difficulty, and it turns out that a particular kind of limit cardinal breaks the pattern entirely - even if GCH holds.

Some weirdness

Let me begin with a counterexample to your reasonable intuition, which works regardless of whether GCH holds, to motivate what follows: $(\aleph_\omega)^{\aleph_0} > \aleph_\omega$.
(Recall that $\aleph_\omega$ is the limit of the $\aleph_n$'s ($n \in \mathbb{N}$). Even with GCH it's a bit of a weird object, in contrast with, say, $\aleph_2$, which is just the cardinality of the set of real functions under GCH.) The fact above may look mysterious, but its proof is actually just a direct diagonalization argument. First, let's replace $(\aleph_\omega)^{\aleph_0}$ with something more meaningful. Specifically, it's not hard to show that $(\aleph_\omega)^{\aleph_0}$ is the cardinality of the set $Seq$ of increasing $\omega$-sequences of ordinals less than $\aleph_\omega$. Now let's set up our diagonalization. No need to use proof by contradiction - let's be constructive! Suppose $F : \aleph_\omega \to Seq$; I want to produce an $\omega$-sequence $S$ of ordinals $< \aleph_\omega$ which is not in the range of $F$. To do this, the trick is to "chop $\aleph_\omega$ into $\omega$-many blocks" (namely, "up to $\aleph_0$," "from $\aleph_0$ to $\aleph_1$," ..., "from $\aleph_n$ to $\aleph_{n+1}$," ...) - even though the blocks together cover all of $\aleph_\omega$, each individual block is "small" (= of size $< \aleph_\omega$). Now just let the $i$-th entry of our "antidiagonal sequence" $S$ be the smallest ordinal which isn't any of the first $i$ entries of any of the sequences $F(\kappa)$ for $\kappa < \aleph_i$. So, for example, to find $S(2)$ we look at the first $\aleph_2$-many (according to $F$) elements of $Seq$, and check all of the ordinals that occur as either the first or second terms of any of those; there are only $\aleph_2$-many of these, so there is some ordinal which doesn't appear in the first two terms of $F(\kappa)$ for any $\kappa < \aleph_2$, and the smallest of these is the ordinal we pick to be $S(2)$. It's easy to check that the sequence $S$ so built is an element of $Seq$ not in the range of $F$, so we're done!

This is really weird. What makes $\aleph_\omega$ so different from, say, $\aleph_{17}$? The answer is:

Cofinality

The distinction between limit and successor (= non-limit) cardinals isn't all there is. The limit cardinals themselves split further into two types - the regular and singular limit cardinals - and it is the singular limit cardinals that often cause all the trouble. Incidentally, it is consistent with ZFC+GCH that there are no regular limit cardinals at all - however, there are guaranteed to be lots of singular limit cardinals. Intuitively, a limit cardinal $\kappa$ is singular if we can "count up to it" in fewer than $\kappa$-many steps. For example, the sequence $\aleph_1, \aleph_2, \aleph_3, \ldots$ lets us count up to the cardinal $\aleph_\omega$ in $\omega$-many steps; since $\aleph_\omega$ is much bigger than $\omega$, this means that $\aleph_\omega$ is singular. This is exactly the "block-chopping" we did above, but phrased a bit more abstractly. By contrast, it's not hard to show that every successor (= non-limit) cardinal is regular (= non-singular): if $(\alpha_\eta)_{\eta < \delta}$ is an increasing sequence of ordinals with limit $\beta = \gamma^+$, then $\beta$ is the union of $\delta$-many sets of size $\le \gamma$, so $\beta = \delta \times \gamma$, and since $\gamma < \beta$ this means $\delta = \beta$. The number of steps you need to count up to a given cardinal is called its cofinality, and the cofinality of $\kappa$ is denoted $\operatorname{cf}(\kappa)$.

Exponentiation

So what does this have to do with exponentiation? Well, looking back at the proof that $(\aleph_\omega)^{\aleph_0} > \aleph_\omega$, the key point was that we were able to chop the "base" (= $\aleph_\omega$) into "exponent-many" (= $\aleph_0$) small blocks; that is, the cofinality of the base was no larger than the exponent.
Indeed, this turns out to be a fundamental issue - if the exponent $\lambda$ is large relative to the cofinality of the base $\kappa$ (not just the base itself!), we get $\kappa^\lambda > \kappa$ (a bit more snappily, we have $\kappa^{\operatorname{cf}(\kappa)} > \kappa$ for all $\kappa$).

Coda

Let me end by mentioning three points around this topic:

The fact that $\kappa^{\operatorname{cf}(\kappa)} > \kappa$ is a consequence of the more general König's theorem. If you want to get a handle on basic cardinal arithmetic, you should play around with this theorem until you're comfortable with it.

Interestingly, in a precise sense König's theorem is basically the only nontrivial fact about cardinal exponentiation which ZFC can outright prove - this is a consequence of Easton's theorem. This is a very technical result, but I mention it only because knowing that something like it exists gives some additional "punch" to König's theorem.

Easton's theorem (and its method of proof in general) suggests a rather bleak picture for ZFC: that basically any nontrivial question about cardinal arithmetic can't be decided from the ZFC axioms alone. This turns out to be false, and the ZFC-only investigation of cardinal arithmetic was pioneered by Shelah - I think this paper of his is a good, if quite hard, survey of the situation. I won't try to describe it here, but I'll mention one of his flashier results: if $\aleph_\omega$ is a "strong limit cardinal" (that is, $2^{\aleph_n} < \aleph_\omega$ for all finite $n$ - this is implied by, but much weaker than, GCH up to $\aleph_\omega$), then $2^{\aleph_\omega} < \aleph_{\omega_4}$. Incidentally, Shelah is on record as asking "Why the hell is it $4$?" (page 4 of the above-linked article).

Comment (MiRi_NaE): Thank you for your detailed answer! I couldn't fully understand what you wrote because I have no background in set theory, but I'll read it carefully again.
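For completeness (an editorial addition, not part of the original question or answer), the inequality $\kappa^{\operatorname{cf}(\kappa)} > \kappa$ invoked above follows from König's theorem in one line; applied to $\aleph_\omega$ it reads

$$\aleph_\omega \;=\; \sum_{n<\omega} \aleph_n \;<\; \prod_{n<\omega} \aleph_{n+1} \;\le\; \prod_{n<\omega} \aleph_\omega \;=\; (\aleph_\omega)^{\aleph_0},$$

and the same computation applied to any increasing sequence of cardinals of length $\operatorname{cf}(\kappa)$ with supremum $\kappa$ gives $\kappa^{\operatorname{cf}(\kappa)} > \kappa$.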
352
Metropolitan Opera Archives
===============
Search results for "Jacques Bars": 5 items found (sorted by date)

- [Met Performance] CID: 36410. La Favorita (10). Metropolitan Opera House; Fri, December 29, 1905
- [Met Performance] CID: 36310. La Favorita (9). Metropolitan Opera House; Sat, December 23, 1905
- [Met Performance] CID: 36200. La Favorita (8). Metropolitan Opera House; Mon, December 11, 1905
- [Met Tour] CID: 36140. La Favorita (7). Philadelphia, Pennsylvania; Tue, December 5, 1905
- [Met Performance] CID: 36070. La Favorita (6). Metropolitan Opera House; Wed, November 29, 1905

©2023 The Metropolitan Opera Archives
353
Useful Astronomical Data

Vega Flux Zeropoints

| Quantity | U | B | V | R | I | J | H | K | Notes and units |
|---|---|---|---|---|---|---|---|---|---|
| λeff | 0.36 | 0.438 | 0.545 | 0.641 | 0.798 | 1.22 | 1.63 | 2.19 | microns |
| Δλ | 0.06 | 0.09 | 0.085 | 0.15 | 0.15 | 0.26 | 0.29 | 0.41 | microns, UBVRI from Bessell (1990), JHK from AQ |
| fν | 1.79 | 4.063 | 3.636 | 3.064 | 2.416 | 1.589 | 1.021 | 0.64 | x10^-20 erg cm^-2 s^-1 Hz^-1, from Bessell et al. (1998) |
| fλ | 417.5 | 632 | 363.1 | 217.7 | 112.6 | 31.47 | 11.38 | 3.961 | x10^-11 erg cm^-2 s^-1 Å^-1, from Bessell et al. (1998) |
| Φλ | 756.1 | 1392.6 | 995.5 | 702.0 | 452.0 | 193.1 | 93.3 | 43.6 | photons cm^-2 s^-1 Å^-1, calculated from above quantities |

These are for the Vega magnitude system and the Bessell et al. (1998) Johnson-Cousins-Glass system.

AB Flux Zeropoints

| Quantity | u | g | r | i | z | Notes and units |
|---|---|---|---|---|---|---|
| λeff | 0.356 | 0.483 | 0.626 | 0.767 | 0.910 | microns, from Fukugita et al. (1996) |
| Δλ | 0.0463 | 0.0988 | 0.0955 | 0.1064 | 0.1248 | microns, from Fukugita et al. (1996) |
| fν | 3631 | 3631 | 3631 | 3631 | 3631 | Jy, or x10^-23 erg cm^-2 s^-1 Hz^-1 |
| fλ | 859.5 | 466.9 | 278.0 | 185.2 | 131.5 | x10^-11 erg cm^-2 s^-1 Å^-1, calculated from above quantities |
| Φλ | 1539.3 | 1134.6 | 875.4 | 714.5 | 602.2 | photons cm^-2 s^-1 Å^-1, calculated from above quantities |

These are for the SDSS filters on the AB system. Data from Fukugita et al. (1996) repeat their Table 1, rows 1 and 6. Note that the AB system is defined such that a source with Fν = 3.63 x 10^-20 erg cm^-2 s^-1 Hz^-1 (3631 Jy) has AB mag = 0 in every filter, and in general AB mag = -2.5 log Fν - 48.6.

Vega - AB Magnitude Conversion

| Band | λeff (Å) | mAB - mVega | MSun(AB) | MSun(Vega) |
|---|---|---|---|---|
| U | 3571 | 0.79 | 6.35 | 5.55 |
| B | 4344 | -0.09 | 5.36 | 5.45 |
| V | 5456 | 0.02 | 4.80 | 4.78 |
| R | 6442 | 0.21 | 4.61 | 4.41 |
| I | 7994 | 0.45 | 4.52 | 4.07 |
| J | 12355 | 0.91 | 4.56 | 3.65 |
| H | 16458 | 1.39 | 4.71 | 3.32 |
| Ks | 21603 | 1.85 | 5.14 | 3.29 |
| u | 3546 | 0.91 | 6.38 | 5.47 |
| g | 4670 | -0.08 | 5.12 | 5.20 |
| r | 6156 | 0.16 | 4.64 | 4.49 |
| i | 7472 | 0.37 | 4.53 | 4.16 |
| z | 8917 | 0.54 | 4.51 | 3.97 |
| Y | 10305 | 0.634 | | |

These data are mostly from Blanton et al. (2007).

ST Magnitudes

The ST magnitude system is defined such that an object with constant flux Fλ = 3.63 x 10^-9 erg cm^-2 s^-1 Å^-1 will have magnitude ST = 0 in every filter, and in general ST mag = -2.5 log Fλ - 21.1. A fairly comprehensive list of HST filters and their zeropoints is available on the WFC3 Photometric Zeropoints page at STScI (thanks to Molly Peeples for sharing the link!).
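As a quick illustration of how these zeropoints get used, here is a small Python sketch (not from the original page; it only uses the AB definition quoted above plus one row of the conversion table) that turns a flux density into an AB magnitude and then into a Vega magnitude:

```python
import math

def ab_mag_from_fnu(fnu_cgs):
    """AB magnitude from flux density F_nu in erg cm^-2 s^-1 Hz^-1.

    Uses the definition quoted above: AB = -2.5 log10(F_nu) - 48.6,
    so F_nu = 3.63e-20 erg cm^-2 s^-1 Hz^-1 gives AB = 0.
    """
    return -2.5 * math.log10(fnu_cgs) - 48.6

def vega_mag_from_ab(ab_mag, band_offset):
    """Vega magnitude from an AB magnitude and the (m_AB - m_Vega) offset
    for the band, taken from the conversion table above (e.g. 1.85 for Ks)."""
    return ab_mag - band_offset

if __name__ == "__main__":
    fnu = 6.4e-21                        # K-band flux density of Vega from the first table (0.64 x 10^-20)
    ab = ab_mag_from_fnu(fnu)
    vega = vega_mag_from_ab(ab, 1.85)    # Ks offset from the Vega-AB table
    print(f"AB = {ab:.2f}, Vega = {vega:.2f}")   # Vega comes out near 0, as expected
```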
354
Published Time: 2016-02-11T05:52:58Z
Maxwell–Boltzmann distribution - Simple English Wikipedia, the free encyclopedia
===============
From Simple English Wikipedia, the free encyclopedia

Summary of the infobox:
- Parameters: $a > 0$
- Support: $x \in (0, \infty)$
- Probability density function (pdf): $\sqrt{\tfrac{2}{\pi}}\,\dfrac{x^2 e^{-x^2/(2a^2)}}{a^3}$
- Cumulative distribution function (cdf): $\operatorname{erf}\!\left(\dfrac{x}{\sqrt{2}\,a}\right) - \sqrt{\tfrac{2}{\pi}}\,\dfrac{x\,e^{-x^2/(2a^2)}}{a}$, where erf is the error function
- Mean: $\mu = 2a\sqrt{\tfrac{2}{\pi}}$
- Mode: $\sqrt{2}\,a$
- Variance: $\sigma^2 = \dfrac{a^2(3\pi - 8)}{\pi}$
- Skewness: $\gamma_1 = \dfrac{2\sqrt{2}\,(16 - 5\pi)}{(3\pi - 8)^{3/2}}$
- Excess kurtosis: $\gamma_2 = \dfrac{4(-96 + 40\pi - 3\pi^2)}{(3\pi - 8)^2}$
- Entropy: $\ln\!\left(a\sqrt{2\pi}\right) + \gamma - \tfrac{1}{2}$

In statistics the Maxwell–Boltzmann distribution is a particular probability distribution named after James Clerk Maxwell and Ludwig Boltzmann. It was first defined and used in physics (in particular in statistical mechanics) for describing particle speeds in idealized gases. In an idealized gas the particles move freely inside a stationary container without interacting with one another, most of the time. Sometimes they collide and exchange energy and momentum with each other or with their thermal environment. Particle in this context refers to gaseous particles (atoms or molecules), and the system of particles is assumed to have reached thermodynamic equilibrium.
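To make the pdf above concrete, here is a small Python sketch (not part of the Wikipedia article) that evaluates the density and checks the quoted mean numerically for an arbitrary choice of the scale parameter a:

```python
import math

def maxwell_pdf(x, a):
    """Maxwell-Boltzmann probability density with scale parameter a > 0:
    f(x) = sqrt(2/pi) * x^2 * exp(-x^2 / (2 a^2)) / a^3 for x > 0."""
    if x <= 0:
        return 0.0
    return math.sqrt(2.0 / math.pi) * x**2 * math.exp(-x**2 / (2.0 * a**2)) / a**3

def numeric_mean(a, n_steps=200_000):
    """Crude midpoint-rule estimate of the mean, to compare with 2a*sqrt(2/pi)."""
    upper = 50.0 * a                  # the density is negligible far beyond a few times a
    dx = upper / n_steps
    return sum((k + 0.5) * dx * maxwell_pdf((k + 0.5) * dx, a) * dx
               for k in range(n_steps))

if __name__ == "__main__":
    a = 1.7                                  # arbitrary scale parameter
    print(numeric_mean(a))                   # ~ 2.7128
    print(2 * a * math.sqrt(2 / math.pi))    # closed form from the infobox: ~ 2.7128
```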
While the distribution was first derived by Maxwell in 1860 on heuristic grounds, Boltzmann later carried out significant investigations into the physical origins of this distribution.

A particle speed probability distribution indicates which speeds are more likely: a particle will have a speed selected randomly from the distribution, and is more likely to be within one range of speeds than another. The distribution depends on the temperature of the system and the mass of the particle.

The Maxwell–Boltzmann distribution applies to the classical ideal gas, which is an idealization of real gases. In real gases, there are various effects (e.g., van der Waals interactions, vortical flow, relativistic speed limits, and quantum exchange interactions) that can make their speed distribution different from the Maxwell–Boltzmann form. However, rarefied gases at ordinary temperatures behave very nearly like an ideal gas, and the Maxwell speed distribution is an excellent approximation for such gases. For this reason, it forms the basis of the kinetic theory of gases, which provides a simplified explanation of many fundamental gaseous properties, including pressure and diffusion.

References
1. Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008, ISBN 9780471915331.
2. See: Maxwell, J.C. (1860) "Illustrations of the dynamical theory of gases. Part I. On the motions and collisions of perfectly elastic spheres," Philosophical Magazine, 4th series, 19: 19-32; and Maxwell, J.C. (1860) "Illustrations of the dynamical theory of gases. Part II. On the process of diffusion of two or more kinds of moving particles among one another," Philosophical Magazine, 4th series, 20: 21-37.
3. University Physics – With Modern Physics (12th Edition), H.D. Young, R.A. Freedman (original edition), Addison-Wesley (Pearson International), 1st Edition: 1949, 12th Edition: 2008, ISBN-10 0-321-50130-6, ISBN-13 978-0-321-50130-1.
4. Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC Publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, ISBN (VHC Inc.) 0-89573-752-3.

Text is available under the Creative Commons Attribution-ShareAlike License and the GFDL; additional terms may apply.
355
reference request - articles that explain Conway's game of life - Mathematics Stack Exchange
===============
articles that explain Conway's game of life

Asked 4 years, 11 months ago; modified 4 years, 11 months ago; viewed 281 times.

For a few days I have been studying the game of life, and I would like to understand it better from a mathematical point of view. I am looking for an article that talks about it without neglecting the mathematical reasons why it is important, and that is as accessible as possible to a student with only a basic knowledge of mathematics. I write in search of advice; good day to all readers!

Tags: reference-request, recreational-mathematics, game-theory

asked Aug 26, 2020 at 11:15 by Davide La Manna

Comments:
- @AntimatterHedgehog since the OP is explicitly asking for a reference request, your three comments are good enough for an answer on their own! I would delete them and repost them as your own answer. Please include a short description of the link and why it's helpful, as you did in the comments. –Hooked (Aug 27, 2020 at 13:30)
- @Hooked Done! Thank you for pointing this out! Still learning the site, so I always feel insecure about what counts as "useful" answers, but I can see your argument makes sense. –AntimatterHedgehog (Aug 27, 2020 at 14:14)
- @AntimatterHedgehog we were all new users at one point. The Stack Exchange sites can sometimes be off-putting for new users, so try to follow the lead of others when you ask or answer a question. If you really want to contribute, find a tag and try to read everything from that tag and answer those questions you feel qualified for. Welcome!
–Hooked (Aug 27, 2020 at 14:21)

Answers (2):

1. Davide, here is an equation for Conway's game of life. Here is the m-file to see how it works. Hope this helps. (answered Aug 26, 2020 at 13:24 by graham medland)

2. I am also interested in the game of life and the more general concept of cellular automata. This article served as a nice introduction for me. It is kinda short, and contains many references that you can follow. If you are interested in the game of life, I would also recommend looking into the more general concept of cellular automata. One entry point might be this brief article. It also contains references that you can follow. By chance, I also stumbled upon this thread about cellular automata here on MSE. There you can see what references the experts on MSE recommend on the topic, and one of the links leads to a set of lecture notes which are available for free. Note that the lecture notes actually start out with discussing the game of life :) (answered Aug 27, 2020 at 14:08 by AntimatterHedgehog)
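The rule that the referenced articles analyze can be stated in a few lines of code; here is a minimal sketch (my own illustration, not the m-file linked in the first answer): a live cell survives with 2 or 3 live neighbours, and a dead cell becomes live with exactly 3.

```python
from collections import Counter

def life_step(live_cells):
    """One Game of Life step; live_cells is a set of (row, col) pairs."""
    # Count live neighbours of every cell adjacent to a live cell.
    neighbour_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live_cells
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth on exactly 3 neighbours, survival on 2 or 3.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A glider returns to its own shape, shifted by (1, 1), after four steps.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
print(state == {(r + 1, c + 1) for (r, c) in glider})  # True
```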
Samba(Linux)/Windows interaction Is it possible that death existed before the fall? In Matthew 17:4, what was Peter’s intention in proposing to make three tents for Jesus, Moses, and Elijah? How to deal with this problem in hedonism? What do you call this outfit that Japanese housewives always wear? Why does my HDD keep spinning and seeking when I power off the computer? I found that we can calculate the time of solar eclipses that will happen in the very far future. Do we need relativity in this calculation? Could a Manned Jupiter Mission use a Shadow Shield? Dropdown width with very long options Reskinning creatures without accidentally hiding how dangerous/safe they are Graphical software tools for quick and easy diagrams more hot questions Question feed Subscribe to RSS Question feed To subscribe to this RSS feed, copy and paste this URL into your RSS reader. Why are you flagging this comment? It contains harassment, bigotry or abuse. This comment attacks a person or group. Learn more in our Code of Conduct. It's unfriendly or unkind. This comment is rude or condescending. Learn more in our Code of Conduct. Not needed. This comment is not relevant to the post. Enter at least 6 characters Something else. A problem not listed above. Try to be as specific as possible. Enter at least 6 characters Flag comment Cancel You have 0 flags left today Mathematics Tour Help Chat Contact Feedback Company Stack Overflow Teams Advertising Talent About Press Legal Privacy Policy Terms of Service Your Privacy Choices Cookie Policy Stack Exchange Network Technology Culture & recreation Life & arts Science Professional Business API Data Blog Facebook Twitter LinkedIn Instagram Site design / logo © 2025 Stack Exchange Inc; user contributions licensed under CC BY-SA. rev 2025.8.13.32804 By clicking “Accept all cookies”, you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy. Accept all cookies Necessary cookies only Customize settings Cookie Consent Preference Center When you visit any of our websites, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences, or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and manage your preferences. Please note, blocking some types of cookies may impact your experience of the site and the services we are able to offer. Cookie Policy Accept all cookies Manage Consent Preferences Strictly Necessary Cookies Always Active These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information. Cookies Details‎ Performance Cookies [x] Performance Cookies These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. 
356
Medium
===============
500: Apologies, but something went wrong on our end. Refresh the page, check the Medium site status, or find something interesting to read.
357
Kenneth Burke
Kenneth Burke has been termed "simply the finest literary critic in the world, and perhaps the finest since Coleridge" (Stanley Edgar Hyman, The New Leader). Mr. Burke has published ten other works with the University of California Press: Towards a Better Life (1966); Language as Symbolic Action: Essays on Life, Literature, and Method (1966); Collected Poems, 1915-1967 (1968); The Complete White Oxen: Collected Short Fiction of Kenneth Burke (1968); A Grammar of Motives (1969); Permanence and Change: An Anatomy of Purpose (1984); The Philosophy of Literary Form (1974); A Rhetoric of Motives (1969); The Rhetoric of Religion: Studies in Logology (1970); and Attitudes Toward History, Third Edition (1984).

Ebooks:
- Essays Toward a Symbolic of Motives, 1950-1955 ($25.60)
- A Grammar of Motives ($33.95)
- Permanence and Change: An Anatomy of Purpose ($3.99)
- Permanence and Change: An Anatomy of Purpose ($0.99)
- The Philosophy of Literary Form ($36.95)
- On Human Nature: A Gathering While Everything Flows, 1967-1984 ($85.00)
- Language As Symbolic Action: Essays on Life, Literature, and Method ($38.95)
- La filosofía de la forma literaria: Y otros estudios sobre la acción simbólica ($8.99)
358
359
IteratedLog | Wolfram Function Repository
===============
Wolfram Function Repository: instant-use add-on functions for the Wolfram Language

Function Repository Resource: IteratedLog
Determine the iterated logarithm of an input
Contributed by: Wolfram|Alpha Math Team

ResourceFunction["IteratedLog"][z] gives the iterated natural logarithm of z.
ResourceFunction["IteratedLog"][b,z] gives the iterated logarithm base b of z.

Details and Options
The iterated logarithm is also known as inverse tetration or the super-logarithm. It is defined to be the smallest (integer) number of times that the logarithm must be applied to a number to yield a result less than 1.

Examples
- Basic Examples: IteratedLog is the inverse of tetration (repeated exponentiation). A slightly larger input shows a step-like jump in the value of IteratedLog. Make a table of the iterated logarithm of the first 50 integers.
- Scope: The logarithmic base can be any real number greater than 1.
- Applications: Plot the iterated logarithm for different logarithmic bases.
- Possible Issues: IteratedLog will return unevaluated in cases where evaluation might lead to numerical overflow.
(The worked notebook cells are not reproduced here.)

Publisher: Wolfram|Alpha Math Team
Related Links: Iterated Logarithm–Wolfram MathWorld; Logarithm–Wolfram MathWorld; Natural Logarithm–Wolfram MathWorld; Binary Logarithm–Wolfram MathWorld
Version History: 4.0.0 – 23 March 2023; 3.1.0 – 12 May 2021; 3.0.0 – 24 January 2020; 2.0.0 – 05 September 2019; 1.0.0 – 10 July 2019
Related Resources: BinaryIteratedLog, Tetration
Related Symbols: Log, Exp, Log10, Log2, Power, ProductLog
License Information: This work is licensed under a Creative Commons Attribution 4.0 International License
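The definition quoted above is easy to prototype outside the Wolfram Language; here is a small Python sketch of the same idea (my own illustration of the "apply the logarithm until the result drops below 1" definition, not the resource function's actual implementation):

```python
import math

def iterated_log(z, base=math.e):
    """Smallest number of times log_base must be applied to z to get a value < 1.

    Follows the definition quoted above; base must be greater than 1.
    """
    if base <= 1:
        raise ValueError("base must be greater than 1")
    count = 0
    while z >= 1:
        z = math.log(z, base)
        count += 1
    return count

print([iterated_log(n) for n in (1, 3, 20, 1000)])   # [1, 2, 3, 3] with the natural log
print(iterated_log(1000, base=2))                    # 4
```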
360
object oriented - A Python wrap-around list - Code Review Stack Exchange
===============
A Python wrap-around list

Asked 8 years, 6 months ago; modified 7 years, 8 months ago; viewed 10k times

I want to gain experience creating data structures that look and feel like Python builtin types. As a first exercise, I've written a WraparoundList class meant to be identical to the builtin list, except that accessing out-of-bounds elements "wraps around".

Goals:
- The only behavior that differs from that of a list is when explicitly indexed with [].
- Should look and feel like the Python builtin, i.e., wouldn't look too out of place in the collections module.
- Should be compatible with both Python 2.7.x and 3.x (though I only have tested on 2.7.13).

The complete source code with doctests follows:

```python
#!/usr/bin/env python

from sys import maxint as MAXINT


class WraparoundList(list):
    """A list whose index wraps around when out of bounds.

    A `WraparoundList` is the same as an ordinary `list`, except that
    out-of-bounds indexing causes the index value to wrap around. The
    wrapping behavior is as if after reaching the last element, one
    returned to the other end of the list and continued counting.

    >>> x = WraparoundList('abcd')
    >>> x
    ['a', 'b', 'c', 'd']
    >>> x[3]
    'd'
    >>> x[4]          # wraps to x[0]
    'a'
    >>> x[-6] = 'Q'   # wraps to x[-2]
    >>> x
    ['a', 'b', 'Q', 'd']
    >>> del x[7]      # wraps to x[3]
    >>> x
    ['a', 'b', 'Q']

    Indices used in out-of-range slices also wrap around. If the
    slice's `start` or `stop` is out-of-bounds, it gets wrapped around.

    >>> x = WraparoundList('abcd')
    >>> x
    ['a', 'b', 'c', 'd']
    >>> x[:10]        # wraps to x[:2]
    ['a', 'b']
    >>> x[-7:3]       # wraps to x[-3:3]
    ['b', 'c']

    The one way in which slicing a `WraparoundList` differs from
    slicing an ordinary `list` is the case of using the list length
    as the upper limit.

    >>> x = WraparoundList('abcd')
    >>> x
    ['a', 'b', 'c', 'd']
    >>> x[2:]
    ['c', 'd']
    >>> x[2:4]        # wraps to x[2:0]
    []

    Initializing a `WraparoundList` with a nested iterable does not
    cause inner indices to wrap. To have a multi-dimensional
    `WraparoundList`, all the elements of the outer `WraparoundList`
    must also be `WraparoundList`s.

    >>> x = WraparoundList([list('abc'), list('def')])
    >>> x
    [['a', 'b', 'c'], ['d', 'e', 'f']]
    >>> x[5]
    ['d', 'e', 'f']
    >>> x[5][5]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    IndexError: list index out of range
    >>> y = WraparoundList([WraparoundList(i) for i in x])
    >>> y[5][5]
    'f'
    """

    def __getitem__(self, i):
        """x.__getitem__(i) <=> x[i]"""
        if isinstance(i, slice):
            return list.__getitem__(self, self._wrap_slice(i))
        else:
            return list.__getitem__(self, self._wrap_index(i))

    def __getslice__(self, i, j):
        """x.__getslice__(i, j) <=> x[i:j]"""
        return self.__getitem__(slice(i, j, None))

    def __setitem__(self, i, y):
        """x.__setitem__(i, y) <=> x[i] = y"""
        if isinstance(i, slice):
            list.__setitem__(self, self._wrap_slice(i), y)
        else:
            list.__setitem__(self, self._wrap_index(i), y)

    def __setslice__(self, i, j, y):
        """x.__setslice__(i, j, y) <=> x[i:j] = y"""
        self.__setitem__(slice(i, j, None), y)

    def __delitem__(self, i):
        """x.__delitem__(i) <=> del x[i]"""
        if isinstance(i, slice):
            list.__delitem__(self, self._wrap_slice(i))
        else:
            list.__delitem__(self, self._wrap_index(i))

    def __delslice__(self, i, j):
        """x.__delslice__(i, j) <=> del x[i:j]"""
        self.__delitem__(slice(i, j, None))

    def _wrap_index(self, i):
        _len = len(self)
        if i >= _len:
            return i % _len
        elif i < -_len:
            return i % (-_len)
        else:
            return i

    def _wrap_slice(self, slc):
        if slc.start is None:
            start = None
        else:
            start = self._wrap_index(slc.start)
        if slc.stop is None:
            stop = None
        elif slc.stop == MAXINT:
            # __slice__ methods treat absent upper bounds as sys.maxint, which would
            # wrap around to a system-dependent (and probably unexpected) value. Setting
            # to `None` in this case forces the slice to run to the end of the list.
            stop = None
        else:
            stop = self._wrap_index(slc.stop)
        step = slc.step
        return slice(start, stop, step)


def main():
    pass


if __name__ == '__main__':
    main()
```

Tags: python, object-oriented, inheritance, circular-list

asked Jan 29, 2017 at 18:04 by Endulum

Comments:
- I'd recommend calling it a circular list. –user98809 (Jan 29, 2017 at 18:05)
- @theonlygusti Considered that, but thought that it may be too suggestive of an infinite iterable à la itertools.cycle. –Endulum (Jan 29, 2017 at 21:15)
- Interestingly, itertools.cycle does not appear to have real Python source code, apart from being plugged into the Python object API. The source is here in CPython: github.com/python/cpython/blob/master/Modules/… –Vasili Syrakis (Jan 30, 2017 at 14:07)
- In case you didn't know, negative indices already wrap around in regular Python lists and other sequence types. –mkrieger1 (Jan 30, 2017 at 22:57)
- @mkrieger1 Not sure what you mean. x=range(4); x[-10] raises IndexError, at least in 2.7.13. Do you mean "wrap" in a different sense?
–Endulum (Jan 31, 2017 at 4:34)

Answers (2):

Answer 1 (Gareth Rees; answered Jan 29, 2017 at 18:48, edited Jan 29, 2017 at 19:15):

This is well documented, well commented code.

- The docstring says:

  "The one way in which slicing a WraparoundList differs from slicing an ordinary list is the case of using the list length as the upper limit."

  but this isn't quite the whole story - an ordinary list can also be sliced using a value greater than the list length, and in that case WraparoundList also has a different behaviour:

  ```python
  >>> x = [1, 2, 3]
  >>> x[:10]
  [1, 2, 3]
  >>> x = WraparoundList(x)
  >>> x[:10]
  [1]
  ```

- The code is not portable to Python 3, because there's no `sys.maxint` (all integers in Python 3 are "long"). I suggest something like this:

  ```python
  try:
      # In Python 2.7, when __slice__ methods are called with no "stop"
      # value, sys.maxint is passed instead.
      from sys import maxint as NO_STOP
  except ImportError:
      # Python 3 does not have sys.maxint or use the __slice__ methods.
      NO_STOP = object()
  ```

  I prefer a name like NO_STOP because it communicates the intention rather than the implementation.

- _wrap_index raises ZeroDivisionError if the list is empty:

  ```python
  >>> w = WraparoundList([])
  >>> w[0]
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "cr153920.py", line 79, in __getitem__
      return list.__getitem__(self, self._wrap_index(i))
    File "cr153920.py", line 110, in _wrap_index
      return i % _len
  ZeroDivisionError: integer division or modulo by zero
  ```

  Raising an exception is the right thing to do in this case, but I would expect to get an IndexError instead.

- The code calls list.__getitem__ directly, rather than via the super function. But this has the unsatisfactory consequence that if someone has another class C also inheriting from list and overriding the __getitem__ method, and combines WraparoundList and C via inheritance, like this:

  ```python
  class D(WraparoundList, C):
      pass
  ```

  then indexing a D instance calls WraparoundList.__getitem__, which calls list.__getitem__, but C.__getitem__ is never called, contrary to what one would expect. If you want to support subclassing of WraparoundList, then you need to write:

  ```python
  return super(WraparoundList, self).__getitem__(self._wrap_slice(i))
  ```

  and so on.

- With a little refactoring, you could avoid some of the repetition. In particular, if you had a method like this:

  ```python
  def _wrap_arg(self, i):
      if isinstance(i, slice):
          return self._wrap_slice(i)
      else:
          return self._wrap_index(i)
  ```

  then you'd be able to write:

  ```python
  def __getitem__(self, i):
      """x.__getitem__(i) <=> x[i]"""
      return super(WraparoundList, self).__getitem__(self._wrap_arg(i))
  ```

  and so on.

- Once you've done the refactoring above, you'll see that _wrap_slice is only called from one place, so it could be inlined at its point of use.

- There is no need to include an empty main function or an `if __name__ == '__main__':` section - if there's nothing to do, then there's no need to write code to do it.

Comments:
- Wonderful! One clarification: Do I understand correctly that the __slice__ methods are redundant if I'm handling slices in __item__, even in 2.7.X? –Endulum (Jan 29, 2017 at 21:08)
- See the documentation: "built-in types in CPython currently still implement __getslice__(). Therefore, you have to override it in derived classes when implementing slicing."
–Gareth Rees (Jan 29, 2017 at 21:16)

Answer 2 (SylvainD; answered Dec 15, 2017 at 0:03):

On top of Gareth Rees's excellent answer: because the WraparoundList and the usual list behave differently (for instance when sliced using a value greater than the list length), your class does not respect the Liskov substitution principle (see also the less formal explanation). In a nutshell: because some code using a list would behave differently if it was to use a WraparoundList, WraparoundList should not inherit from list even if the "WraparoundList is a list" relationship is respected. A way to change this could be to stop inheriting from list but instead to use a list internally to store data (composition over inheritance).
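Putting several of the review suggestions together (super() delegation, a single _wrap_arg helper, and raising IndexError for an empty list), a Python 3 sketch of the refactor might look roughly like this; on Python 3 the sys.maxint and __getslice__ complications discussed above disappear, and this is my own assembly of the reviewer's points rather than code from the thread:

```python
class WraparoundList(list):
    """A list whose out-of-bounds indices wrap around (Python 3 sketch)."""

    def __getitem__(self, i):
        return super().__getitem__(self._wrap_arg(i))

    def __setitem__(self, i, value):
        super().__setitem__(self._wrap_arg(i), value)

    def __delitem__(self, i):
        super().__delitem__(self._wrap_arg(i))

    def _wrap_arg(self, i):
        """Wrap a subscript, which may be a plain index or a slice."""
        if isinstance(i, slice):
            start = None if i.start is None else self._wrap_index(i.start)
            stop = None if i.stop is None else self._wrap_index(i.stop)
            return slice(start, stop, i.step)
        return self._wrap_index(i)

    def _wrap_index(self, i):
        if not self:
            # The review suggests IndexError rather than ZeroDivisionError here.
            raise IndexError('list index out of range')
        n = len(self)
        if i >= n:
            return i % n
        if i < -n:
            return i % -n
        return i


x = WraparoundList('abcd')
print(x[5], x[-6:3])   # b ['c']  (5 wraps to 1, -6 wraps to -2)
```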
361
Published Time: 2023-09-24T17:40:54Z
How Electrical Energy Really Flows: A Journey through Poynting's Vector | by M&Z | Predict | Medium
===============
How Electrical Energy Really Flows: A Journey through Poynting's Vector
Learn the Truth about Electricity
By M&Z, in Predict; published Sep 24, 2023 (5 min read)

Today I have a special piece for you. I will not wear my literary mantle, but I will instead take advantage of my status as an engineer and try to reveal a scientific truth to you with simplicity and accuracy. This particular truth concerns the greatest misunderstanding in the scientific community, and that is electricity.

DISCLAIMER: This article is an inspiration preceded by this excellent video.

Imagine you have a giant circuit consisting of a battery, a switch, a light bulb, and two wires, each 300,000 kilometers long. That's the distance light travels in just one second. These wires would reach halfway to the moon before coming back to connect to the light bulb, which is only one meter away. Now, I have a quiz for you. If you close the switch, how long would it take for the bulb to light up? Is it half a second, one second, two seconds, 1/c seconds, or none of the above?

Please commit to an answer and put it down in the comments so you can't say, oh yeah, I knew that was the answer. Play by the rules, Medium Community!

In this article, we'll explore the fascinating world of electrical energy transmission, debunk some common misconceptions, and uncover the truth about how electrical energy really flows.

The Shape of the Circuit

For this circuit to function correctly, some simplifying assumptions must be made. For instance, the wires must have no resistance, and the light bulb must immediately turn on when current passes through it.

The Lies We Were Taught

To understand how electrical energy flows, we need to address the misconceptions that many of us were taught about electricity. One common belief is that electrons themselves carry potential energy and are pushed or pulled through a continuous conducting loop. Additionally, it's often assumed that electrons dissipate their energy in a device. However, these ideas are fundamentally flawed. Consider the inaccuracy of this theory from the following point of view: if electron motions really did result in energy transfer, then why would the energy transfer in only one direction, from the source to the coil, and not simultaneously from the device back to the source?

The Breakthrough: Maxwell's Equations

In the 1860s and 70s, Scottish physicist James Clerk Maxwell made a groundbreaking discovery. He realized that light consists of oscillating electric and magnetic fields that are perpendicular to each other and in phase. Maxwell formulated equations, known as Maxwell's equations, to describe the behavior of these fields and their associated waves.

(Image caption: E: electric field, B: magnetic field)

The Poynting Vector

In 1883, one of Maxwell's former students, John Henry Poynting, delved deeper into the conservation of energy. Poynting's work led to the development of an equation describing energy flux, known as the Poynting vector (S). This vector helps us understand how electromagnetic energy flows from one place to another. In simpler terms, we are trying to decipher the movement of these two fields.
We want to find out how much electromagnetic energy passes through an area per unit of time. That's what Poynting was trying to explain to us.

How Energy Really Flows

Now, let's consider a simple circuit with a battery and a light bulb. When the battery is connected to the circuit, its electric field extends through the wires at the speed of light. This electric field pushes electrons, causing them to drift slowly in one direction, creating an electric current. However, the motion of electrons is minimal, about a tenth of a millimeter per second. This current, known as conventional current, flows in the opposite direction to electron motion but is responsible for powering devices.

The Role of Electric and Magnetic Fields

As electrons move through the wires, they create both electric and magnetic fields around the circuit. According to Poynting's theory, energy flows through these fields, not through the movement of electrons. This energy is transmitted as electromagnetic waves, and it travels at the speed of light.

(Image caption: The charge on the surfaces of the conductors creates an electric field (red vectors) outside the wires, and the current inside the wires creates a magnetic field outside the wires (blue vectors).)

(Image caption: The movement of the fields from the battery is to the right, according to the Poynting vector: the result of electrical flow.)

Historical Lessons: Undersea Telegraph Cables

To further emphasize the importance of electromagnetic fields in energy transmission, we can look at historical examples. In the mid-19th century, undersea telegraph cables suffered from signal distortion over long distances. Scientists like Lord Kelvin initially believed that electrical signals traveled through the cables much like water through a tube. However, it was eventually proven that it was the electromagnetic fields around the wires that carried the energy and information.

The Reality of Electrical Energy Transmission

In conclusion, the answer to our initial question about the giant circuit and the light bulb is that the light bulb will turn on almost instantaneously, in roughly 1/c seconds. This may seem counterintuitive to some who imagine that the electric field needs time to travel through the long wire. However, what truly matters is the propagation of electric and magnetic fields, which can transmit energy at the speed of light.

The Takeaway

Understanding how electrical energy truly flows challenges common misconceptions about electricity. Instead of focusing on the movement of electrons within wires, we should recognize the role of electromagnetic fields in energy transmission. This knowledge helps us appreciate the complex yet fascinating journey of electrical energy from power plants to our homes. So, the next time you flick on a light switch, remember that it's not just the wires but also the invisible fields around them that bring light to your life.
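To make the Poynting vector concrete, here is a small Python sketch (my own illustration, not from the article) that evaluates S = E x B / mu_0 for a simple pair of perpendicular fields and reports the direction of energy flow:

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, in T*m/A

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def poynting(e_field, b_field):
    """Poynting vector S = (E x B) / mu_0, in W/m^2, for E in V/m and B in T."""
    s = cross(e_field, b_field)
    return tuple(component / MU_0 for component in s)

# A toy configuration: E pointing along +y, B along +z, as in a plane wave.
E = (0.0, 100.0, 0.0)        # V/m
B = (0.0, 0.0, 5e-6)         # T
print(poynting(E, B))        # energy flows along +x: roughly (398, 0, 0) W/m^2
```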
Hi! My name is Konstantinos, and I am studying Production Engineering and Management at the Technical University of Crete. If you liked the content of this piece, support us by subscribing for free here, and you will be the first to read our work.

Tags: Science, Technology, Writing, Learning

Responses:
- "The Lies We Were Taught": Why do you think we're not taught things more accurately? I understand that depending on the age of the learners the information has to be appropriate to their level of understanding, but what about high school though? –Benighted (Sep 25, 2023)
- I found this article truly enlightening! 😊 The analogy of the giant circuit and the moon-distance wires was mind-boggling 🌕, making the concept of electrical energy transmission crystal clear. –Alan Inkwell (Sep 25, 2023)
- Interesting read. –Mika Oka (Sep 25, 2023)
Mathematisches Forschungsinstitut Oberwolfach
Report No. 14/2020
DOI: 10.4171/OWR/2020/14

Set Theory (online meeting)

Organized by
Ilijas Farah, Toronto
Ralf Schindler, Münster
Dima Sinapova, Chicago
W. Hugh Woodin, Cambridge MA

5 April – 11 April 2020

Abstract. Set theory continues to experience dramatic progress, both in pure set theory, with its fundamental techniques of forcing, large cardinals, and inner model theory, and in applied set theory, with its deep connections to other areas of mathematics. Specific topics include: (Pure Set Theory) Forcing axioms, iteration theorems for various classes of forcings, cardinal characteristics and descriptive set theory of the continuum and of generalized Baire spaces, HOD (the hereditarily ordinal definable sets), inner model theory and the core model induction, singular cardinal combinatorics and cardinal arithmetic (pcf theory), partition theorems, Borel reducibility; (Applied Set Theory) Borel and measurable combinatorics, structural Ramsey theory, set theory and operator algebras, topological dynamics and ergodic theory, set theory and Banach spaces, metric structures.

Mathematics Subject Classification (2010): 03Exx.

Introduction by the Organizers

In this workshop we intended to explore topics in both pure and applied set theory which have experienced the most exciting developments over the recent years. The goal was to bring together researchers in set theory from over 10 countries. Unfortunately, the workshop had to be cancelled due to the SARS-CoV-2 pandemic. This report is an attempt to capture the potential content of the non-existent meeting as well as possible. Let us mention some of the recent major breakthroughs in the area of set theory which would have found a stage for being presented at that meeting.

A major progress in pure set theory would have been reported in D. Asperó's talk. In joint work with R. Schindler, they proved that the strong version of Martin's Maximum, namely MM++, implies Woodin's Pmax axiom (∗). Until now it was not known even whether these axioms were jointly consistent. This result provides unification of forcing axioms and Woodin's canonical Pmax model in which the Π2 theory of the structure H(ℵ2), of all sets whose transitive closure has cardinality at most ℵ1, is maximized. Building on this result, M. Viale defined a natural extension of ZFC with built-in absoluteness and proved an extension of Woodin's Π2-maximality results in this context.

In a technical tour de force, A. Vignati proved that forcing axioms imply that all isomorphisms between coronas of separable, non-unital C∗-algebras are trivial. This confirms the rigidity conjecture posed by Coskey and Farah in 2013.

M. Gitik developed a novel way to violate the singular cardinal hypothesis (SCH). His forcing has the advantage that it preserves cardinals and cofinalities and can also be used to obtain failure of SCH (an instance of non-compactness) together with failure of weak square and even the tree property, which are compactness-type principles. Until now the only known way to get failure of SCH and failure of weak square (or with the tree property) simultaneously involved singularizing cardinals. The new construction answers an old combinatorial question and opens up a promising direction of solving other well known open problems, including a question of Woodin from the 80s.

There are also major advances in the theory of large cardinals. G.
Goldberg expands his list of truly remarkable insights about strongly compact cardinals. He shows that above a strongly compact cardinal, the theory of large cardinals is, surprisingly, much more tractable than the slew of independence results would suggest. Some of his theorems in particular draw a stark parallel with inner model-like behavior of the large cardinal structure above a super compact cardinal. F. Schlutzenberg would have reported on striking results on Reinhardt cardinals, rank-into-rank embeddings, and new developments in the theory of iterated ultra-powers and extenders under ZF. In particular, he provides significant constraints on the possible existence of rank-into-rank embeddings just in ZF. In descriptive set theory, Gao’s contribution is particularly notable. In joint work with Etedadialiabadi, La Maˆ ıtre, and Melleray, Gao verified a conjecture of Vershik and proved that Hall’s universal countable locally finite group can be embedded as a dense subgroup in the isometry group of the Urysohn space and in the automorphism group of the random graph. This is the culmination of a work of many hands. The proposed participants are a mix of both established mathematicians and some very promising junior people. We hope that in the not too distant future we can bring them together to facilitate discussion and research collaboration for another Oberwolfach meeting of this type. Set Theory 3 Workshop (online meeting): Set Theory Table of Contents David Asper´ o (joint with Ralf Schindler) Martin’s Maximum++ and (∗) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 Omer Ben-Neria, Martin Zeman Lower Bounds for Mutual Stationarity principles with an application to the theory of iterated Distributive Forcings . . . . . . . . . . . . . . . . . . . . . . . . . . 7 J¨ org Brendle (joint with Francesco Parente) Combinatorics of ultrafilters on Boolean algebras . . . . . . . . . . . . . . . . . . . . 8 Ruiyuan Chen A universal characterization of standard Borel spaces . . . . . . . . . . . . . . . . 10 James Cummings (joint with Arthur Apter) Variations on Cohen forcing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 Mirna Dˇ zamonja On wide Aronszajn trees in the presence of MA . . . . . . . . . . . . . . . . . . . . . 12 Vera Fischer (joint with Diana C. Montoya, Jonathan Schilhan and Daniel T. Soukup) Gaps and Towers at uncountable cardinals . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Su Gao (joint with Mahmood Etedadialiabadi, Fran¸ cois La Maˆ ıtre, and Julien Melleray) Vershik’s Conjecture for Ultraextensive Spaces . . . . . . . . . . . . . . . . . . . . . . 15 Moti Gitik Some applications of Extender based forcings with overlapping extenders. 16 Gabe Goldberg Structure theorems from strongly compact cardinals . . . . . . . . . . . . . . . . . . 17 Joel D. Hamkins (joint with Alfredo R. Freire) Bi-interpretation of weak set theories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 John Krueger Entangledness in Suslin lines and trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 Paul B. Larson (joint with Saharon Shelah) Universally measurable sets may all be ∆ ∼ 1 2 . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Andrew Marks (joint with Adam Day) The decomposability conjecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 Heike Mildenberger (joint with Christian Br¨ auninger) Parametrised Miller Forcing . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . 22 4 Oberwolfach Report 14/2020 Benjamin D. Miller The Feldman-Moore, Glimm-Effros, and Lusin-Novikov theorems over quotients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 Justin T. Moore Finitely generated groups of piecewise linear homeomorphisms . . . . . . . . . 28 Alejandro Poveda (joint with Assaf Rinot and Dima Sinapova) Σ-Prikry forcings and their iterations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 Assaf Rinot (joint with Jing Zhang) Transformations of the transfinite plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 Christian Rosendal How much choice is needed to construct a discontinuous homomorphism? 35 Grigor Sargsyan The consistency of the failure of the convergence of Kc constructions . . 35 Farmer Schlutzenberg ZF rank-into-rank embeddings and non-definability . . . . . . . . . . . . . . . . . . . 36 S lawomir Solecki Transfinite sequences of topologies, descriptive complexity, and approximating equivalence relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 Stevo Todorˇ cevi´ c Ramsey degrees of products of infinite sets . . . . . . . . . . . . . . . . . . . . . . . . . . 41 Todor Tsankov Universal minimal flows of homeomorphism groups of high-dimensional manifolds are not metrizable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 Anush Tserunyan (joint with Robin Tucker-Drob) Hyperfinite subequivalence relations of treed equivalence relations . . . . . . 42 Matteo Viale Tameness for Set Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 Alessandro Vignati Rigidity conjectures in C∗-algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 Trevor Wilson Weak Vopˇ enka cardinals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 Jindrich Zapletal Coloring algebraic hypergraphs without choice . . . . . . . . . . . . . . . . . . . . . . . 51 Andy Zucker (joint with Gianluca Basso) Topological dynamics beyond Polish groups . . . . . . . . . . . . . . . . . . . . . . . . . 53 Set Theory 5 Abstracts Martin’s Maximum++ and (∗) David Asper´ o (joint work with Ralf Schindler) Classical forcing axioms are natural maximality principles asserting some degree of saturation of the universe relative to forcing axioms: if σ is a simple enough statement that can be forced via a (nice enough) forcing, then σ is in fact true. The strongest such axiom at the level of ω1 is Martin’s Maximum++ (MM++). If κ is a supercompact cardinal, then there is a semiproper forcing of size κ, obtained as the limit of an iteration of length κ, and which forces MM++ (). Another maximality principle, of a somewhat different flavour, is Woodin’s Pmax axiom (∗) (). This is the assertion that AD holds in L(R) and the inner model L(P(ω1)) is a Pmax-extension of L(R). While ADL(R) follows from large cardinals, the assertion that L(P(ω1)) is a Pmax-extension of L(R) does not. Although (∗) may look prima facie as a minimality assumption about L(P(ω1)), it turns out that this axiom implies remarkable forms of maximality for this inner model. For example: Theorem 1. (Woodin) Suppose (∗) holds and there is a proper class of Woodin cardinals. If A is a set of reals in L(R), σ is a Π2 sentence, and there is a set-forcing P such that ⊩P (Hω2; ∈, NSω1, A ˙ GP) | = σ,1 then (Hω2; ∈, NSω1, A) | = σ. 
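For readability, Theorem 1 can be restated as follows; here A_{\dot G_P} denotes the canonical interpretation of A in the P-generic extension, as explained in the abstract's footnote, and nothing is added beyond the statement itself.

```latex
% Theorem 1 (Woodin), restated: assume (*) and a proper class of Woodin
% cardinals, let A be a set of reals in L(R) and sigma a Pi_2 sentence.
% If some set forcing P forces the statement about the generic
% interpretation of A, then the statement already holds in V:
\[
\exists P\ \Big( \Vdash_P \big(H_{\omega_2};\in,\mathrm{NS}_{\omega_1},A_{\dot G_P}\big)\models\sigma \Big)
\;\Longrightarrow\;
\big(H_{\omega_2};\in,\mathrm{NS}_{\omega_1},A\big)\models\sigma .
\]
```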
Pmax ∈L(R) is a weakly homogeneous forcing which is definable in L(R) with-out parameters. It follows that, in the presence of large cardinals, (∗) completely decides the theory of L(P(ω1)) modulo forcing: Theorem 2. (Woodin) Suppose there is a proper class of Woodin cardinals. If P and Q are partial orders, G is P-generic over V , H is Q-generic over V , V [G] | = (∗), and V [H] | = (∗), then L(P(ω1))V [G] and L(P(ω1))V [H] have the same theory. Despite its nice properties, in order for (∗) to be a convincing candidate for a natural axiom, it would have to be compatible with all consistent large cardinal axioms. While L(R)[g] is trivially a model of (∗) if g is Pmax-generic over L(R), L(R)[g] cannot even have measurable cardinals. In fact, prior to our work it was open whether (∗) is compatible with large cardinals beyond the level of Woodin cardinals (). Looking back at MM++, it turned out that all natural questions about Hω2 seemed to be decided by this axiom2 and, moreover, that the answers MM++ gave to these question seemed to be the same as those provided by (∗). For example, 1By the presence of the Woodin cardinals, A is universally Baire and therefore there is a canonical interpretation AG of A in any set-generic extension V [G]. 2This is not the case for a slight weakening of MM++ denoted by MM+ω (s. ). 6 Oberwolfach Report 14/2020 both axioms have as a consequence a certain Π2 sentence about Hω2 implying that L(P(ω1)) | = ZFC and 2ℵ0 = ℵ2. This agreement between MM++ and (∗) made it natural to conjecture that MM++ implies (∗). However, the actual connection between classical forcing ax-ioms, whose models are typically obtained by means of iterated over models of ZFC with large cardinals, and (∗), whose models were produced by forcing over models of determinacy, remained unknown for a while. In 2019 we proved that the above conjecture is in fact true. Theorem 3. (Asper´ o-Schindler) MM++ implies (∗). This unifying result shows that (∗) is compatible with all large cardinals in V as it can in fact be set-forced if there is a supercompact cardinal, and thereby renders (∗) a convincing candidate for a natural axiom extending ZFC. It is notable that this natural axiom decides the cardinality of the continuum and in fact implies 2ℵ0 = ℵ2. A few words on the proof of Theorem 3: Given any A ⊆ω1 such that ω1 = ωL[A] 1 , we may define ΓA to be the set of Pmax-conditions p = (M, I, a) such there is a correct iteration of p sending a to A. It was well-known that, assuming MM++ (in fact much less) and given any A ⊆ω1 as above, ΓA is a filter of Pmax such that every subset of ω1 is in L(R)[ΓA]. Hence it was enough to prove that ΓA is generic over L(R). In other words, given a dense subset D of Pmax in L(R), which by our hypothesis is essentially a universally Baire sets, it was enough to produce, using MM++, a correct iteration of a condition (M, I, a) ∈D sending a to A. There was a natural scenario for doing this using L-forcing, i.e., adding the desired objects by finite approximations which, in some suitable outer model W, provide finite pieces of information about a certain object in W with properties mirroring the properties we want the desired objects to have. The main technical problem was in showing that some L-forcing P doing the above also preserves stationary subsets of ω1. 
This was finally accomplished through the incorporation of side conditions in the forcing consisting of countable models external to V and, more crucially, the construction of P as the union of a certain recursively defined sequence of forcing notions. (Footnote 3: This sequence is not an iteration.)

References
D. Asperó and R. Schindler, Martin's Maximum++ implies Woodin's axiom (∗). Submitted (2019).
M. Foreman, M. Magidor, and S. Shelah, Martin's Maximum, saturated ideals and non-regular ultrafilters, I, Ann. of Mathematics 127 (1988), pp. 1–47.
P. Larson, Martin's Maximum and definability in H(ω2), Annals of Pure and Applied Logic 156 (2008), pp. 110–122.
W. H. Woodin, The axiom of determinacy, forcing axioms, and the non-stationary ideal, de Gruyter, Berlin-New York 1999.

Lower Bounds for Mutual Stationarity principles with an application to the theory of iterated Distributive Forcings
Omer Ben-Neria, Martin Zeman (joint work with Dominik Adolf and Ralf Schindler)

The purpose of the talk is to present new lower bounds for the consistency strength of mutual stationarity principles at the first uncountable cardinals and for the theory of iterated forcing. The notion of mutually stationary sets was introduced by Foreman and Magidor in . The precise formulation, for instance at ℵω, reads that, given some uncountable γ = ℵk and a sequence Sn such that each Sn is a stationary subset of ℵn concentrating on ordinals of cofinality γ, there is a stationary set of substructures X of Hθ (θ large given in advance) such that X ∩ ℵn ∈ Sn on a tail-end of n's. Foreman and Magidor have shown in  that every sequence of stationary sets Sn consisting of ordinals of cofinality ω is mutually stationary. In this talk, we will address the extension of this principle to sequences of stationary sets Sn consisting of ordinals of a fixed uncountable cofinality. It is known that such a principle is consistent relative to the existence of ω-many supercompact cardinals, and lower bounds at the level of measurable cardinals have been obtained by Koepke and Welch in , and Ben-Neria and Zeman. Our first main result improves the lower bound for the mutually stationary principle to the existence of a Woodin cardinal.

The second main result centers around the theory of iterated distributive forcings on different cardinals. We consider sequences of (names of) posets where each Qn is a name (with respect to the finite iteration by Qk, k < n) for an ℵn-distributive poset of size ℵn. We study the forcing iteration principle, which asserts that for every such sequence of posets there exists a cardinal preserving generic extension which contains generic filters for each Qn. Extending our lower bound methods for the mutually stationary principle, we show that the forcing iteration principle has a similar lower bound. Finally, by building on the iteration theory of Prikry-type forcings, developed by Gitik (cf. ), we prove that the forcing iteration principle is consistent relative to ω-many supercompact cardinals.

References
M. Foreman and M. Magidor. Mutually stationary sequences of sets and the non-saturation of the non-stationary ideal on Pκ(λ), Acta Math., Volume 186, Number 2 (2001), 271–300.
P. Koepke and P. D. Welch. Global square and mutual stationarity at the ℵn, Ann. Pure Appl. Logic 162(10): 787–806, 2011.
M. Gitik. Prikry type Forcings. In Handbook of set theory. Vols. 1, 2, 3, pages 1351–1447.
Springer, Dodrecht, 2010 8 Oberwolfach Report 14/2020 Combinatorics of ultrafilters on Boolean algebras J¨ org Brendle (joint work with Francesco Parente) Combinatorial properties of free ultrafilters on ω, that is, ultrafilters on the Boolean algebra P(ω)/fin, have been extensively studied for the past half century. Central questions have been for example existence of ultrafilters with additional proper-ties like P-points (whose existence was shown to be independent by Shelah), the Rudin-Keisler ordering on ultrafilters, or cardinal invariants related to ultrafilters. Much less is known about ultrafilters on general Boolean algebras, though a strong interest in such ultrafilters has developed in recent years in the wake of the work of Malliaris and Shelah in model theory. We investigate combinatorial aspects of ul-trafilters on complete ccc Boolean algebras, with particular focus on the following two closely related topics: (1) existence and nonexistence of not Tukey maximal ultrafilters (2) the ultrafilter number Our results are mainly (but not exclusively) about Cohen and random algebras. 1. (Non-)Existence of non-maximal ultrafilters Let ⟨D, ≤⟩and ⟨E, ≤⟩be directed sets. We say that ⟨D, ≤⟩is Tukey reducible to ⟨E, ≤⟩(⟨D, ≤⟩≤T ⟨E, ≤⟩in symbols) if there are maps f : D →E and g : E →D such that for all d ∈D and e ∈E, f(d) ≤e implies d ≤g(e). If D ≤T E and E ≤T D both hold, we say D and E are Tukey equivalent and write D ≡T E. A classical result of Tukey says that if ⟨D, ≤⟩is a directed sets and κ is a cardinal at least the size of D then ⟨D, ≤⟩≤T ⟨[κ]<ω, ⊆⟩. Note that if U is an ultrafilter on a Boolean algebra A then ⟨U, ≥⟩is a directed set. In particular, ⟨U, ≥⟩≤T ⟨[A]<ω, ⊆⟩. We call U Tukey maximal if U ≡T [A]<ω. A simple characterization of maximality is (1) [DT] An ultrafilter U on A is Tukey maximal if and only if there exists a subset X ⊆U with |X| = |A| such that every infinite Y ⊆X is unbounded in U. Tukey reducibility of free ultrafilters over ω has been studied intensively for the past decade, see e.g. [DT]. It is well-known that P-points are not Tukey maximal. On the other hand, an old question of Isbell asking for non-maximal ultrafilters over ω in ZFC is still open. A connection to ultrafilters on complete ccc Boolean algebras is given by (2) Let U be an ultrafilter on a complete ccc Boolean algebra A. Then there is a free ultrafilter V on ω such that V ≤T U. Let Cκ (Bκ, respectively) denote the algebra for adding κ many Cohen reals (ran-dom reals, resp.). Theorem 1. Assume κℵ0 = κ. Then every ultrafilter on Cκ and Bκ is Tukey maximal. Set Theory 9 In fact this holds for a larger class of forcing notions defined as quotients of the Baire subsets of 2κ by an ideal obtained from an index invariant σ-ideal on 2ω as in Kunen’s framework [Ku]. Non-maximal ultrafilters over complete ccc Boolean algebras of size c are more difficult to obtain because the existence of such an ultrafilter implies the existence of a non-maximal ultrafilter over ω (by item 2), which, as mentioned, is still open in ZFC. The situation for P(ω)/fin suggests we look for “P-point like” objects. Following Star´ y [St], we say an ultrafilter U on a complete ccc Boolean algebra A is a coherent P-ultrafilter if for every maximal antichain {pi : i ∈ω} in A, the set {X ⊆ω : W{pi : i ∈X} ∈U} is a P-point over ω. Theorem 2. Let A be a complete ccc Boolean algebra whose density is strictly smaller than its size. Then any coherent P-ultrafilter on A is not Tukey maximal. 
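For readability, the two Tukey notions used throughout this abstract can be displayed as follows; this only restates the definitions given above, with no new content.

```latex
% Tukey reducibility of directed sets, as defined in the abstract:
\[
\langle D,\le\rangle \le_T \langle E,\le\rangle
\iff
\exists\, f\colon D\to E,\ g\colon E\to D\ \
\forall d\in D\ \forall e\in E\ \big(f(d)\le e \Rightarrow d\le g(e)\big).
\]
% Tukey maximality of an ultrafilter U on a Boolean algebra A
% (U is directed under the reversed order):
\[
U \text{ is Tukey maximal}
\iff
\langle U,\ge\rangle \equiv_T \langle [A]^{<\omega},\subseteq\rangle .
\]
```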
Since Star´ y [St] proved the existence of coherent P-ultrafilters on complete ccc Boolean algebras of size c under d = c, we obtain for example: Corollary 3. Assuming d = c there is a non-maximal ultrafilter on Cω. We believe this is also true for Bω. However, the approach above does not work because the density of Bω is above d, and so far we only have the following result, which uses a much stronger assumption. Theorem 4. Assuming ♦there exists a non-maximal ultrafilter on Bω. 2. Ultrafilter numbers Let A be an infinite Boolean algebra. The ultrafilter number u(A) of A is the least size of a basis of an ultrafilter on A. So u = u(P(ω)/fin). The discussion about Tukey reducibility shows (3) if u(A) < |A| then there is a non-maximal ultrafilter on A (follows from item 1) (4) if A is a complete ccc Boolean algebra, then u ≤u(A) (follows from item 2) (5) in particular, max{u, κ} ≤u(Cκ), u(Bκ) ≤κℵ0 Other known lower bounds are (6) d ≤u(Cω) (7) ([CKP] and [Bu]) cof(N) ≤u(Bω) Thus, using any model for u < d (see e.g. [BS]) we see Corollary 5. u < u(Cω) and u < u(Bω) are consistent. By an ω1-stage finite support iteration (fsi) of a σ-centered forcing over a model for large continuum we obtain Theorem 6. u(Cω) < c is consistent. This can be extended to u(Cω1) as well. By the σ-centeredness non(N) and thus cof(N) will be large in this model, and we additionally obtain the consistency of u(Cω) < u(Bω) (see item 7). Another fsi gives 10 Oberwolfach Report 14/2020 Theorem 7. u(Bω) < c is consistent. Again this also works with u(Bω1). We do not know whether u(Bω) can be strictly smaller than u(Cω). Neither do we know whether u(Cω) (u(Bω), resp.) can be strictly smaller than u(Cω1) (u(Bω1), resp.). References [BS] A. Blass and S. Shelah, Ultrafilters with small generating sets, Israel Journal of Mathe-matics 65 (1989) 259-271. [Bu] M. Burke, Weakly dense subsets of the measure algebra, Proceedings of the American Mathematical Society 106 (1989), 867-874. [CKP] J. Cicho´ n, A. Kamburelis, and J. Pawlikowski, On dense subsets of the measure algebra, Proceedings of the American Mathematical Society 94 (1985), 142-146. [DT] N. Dobrinen and S. Todorˇ cevi´ c, Tukey types of ultrafilters, Illinois Journal of Mathematics 55 (2011), 907-951. [Ku] K. Kunen, Random and Cohen reals, in: Handbook of Set-theoretic Topology (K. Kunen and J. Vaughan, eds.), North-Holland, 1984, 887-911. [St] J. Star´ y, Coherent ultrafilters and nonhomogeneity, Commentationes Mathematicae Uni-versitatis Carolinae 56 (2015), 257-264. A universal characterization of standard Borel spaces Ruiyuan Chen A standard Borel space is a measurable space that is isomorphic to a Borel sub-space of Cantor space 2N. Standard Borel spaces and Borel maps (i.e., preimages of Borel sets are Borel) are ubiquitous in descriptive set theory as a basic model of “definable sets” and “definable functions” between them. The notion of “defin-ability” here is a coarse one where, roughly speaking, all countable information is considered definable. As a result, standard Borel spaces are closed under many fa-miliar set operations of countable arity, e.g., countable products, countable unions, Borel preimages, injective (or more generally countable-to-1) Borel images. In this work, we give an abstract characterization of the category SBor of standard Borel spaces and Borel maps as the universal category equipped with some countable-arity operations, including the ones above, subject to some simple compatibility axioms. 
This gives a precise formulation of the idea that standard Borel spaces are a “canonical” notion of “definable space”. The proof combines methods from descriptive set theory, Boolean algebras, and categorical logic. The operations on SBor are formalized in terms of categorical limits and colimits, which are defined in general categories as universal objects equipped with morphisms to/from a diagram. For example, the product of two objects X, Y in a category C is by definition a universal object X × Y ∈C equipped with two morphisms π1 : X × Y →X and π2 : X × Y →Y . Other types of limits give categorical generalizations of such set operations as preimages of subsets, the equality binary relation on a set, and the kernel of a function. Similarly, colimits give categorical generalizations of disjoint unions (coproducts) and quotients by equivalence relations. A countably complete, Boolean countably extensive Set Theory 11 category is one equipped with countable limits and countable coproducts obeying a natural compatibility axiom satisfied by disjoint unions of sets1 and in addition all of whose subobjects have complements. Thus, these are categories equipped with abstract versions of familiar countable-arity set operations. Theorem 1. SBor is the initial countably complete, Boolean countably exten-sive category: for any other such category C, there is a unique-up-to-unique-isomorphism functor SBor →C preserving countable limits and countable co-products. In other words, any other category C admitting the same operations must con-tain an essentially unique “image” of SBor inside. We also have a generalization to higher cardinalities: Theorem 2. For any infinite regular cardinal κ, the dual of the category κBoolκ of κ-presented κ-complete Boolean algebras is the initial κ-complete, Boolean κ-extensive category. This result implies Theorem 1, because it follows from a classical theorem of Loomis-Sikorski (cf. ) that ω1Boolω1 is dually equivalent to SBor. The proof of Theorem 2 is by presenting κBoolop κ as the syntactic category of a theory in a restricted subset of the infinitary logic Lκκ. The main ingredient is a quantifier-elimination lemma, based on a proof of the strong amalgamation property for κ-complete Boolean algebras due to LaGrange (cf. ). References R. Sikorski. Boolean Algebras. Springer, Berlin-Heidelberg-New York, 1964. LaGrange, R. Amalgamation and epimorphisms in m complete Boolean algebras. Algebra Universalis 4, 277–279 (1974). Variations on Cohen forcing James Cummings (joint work with Arthur Apter) Cohen forcing is a flexible tool for manipulating the values of the continuum func-tion. In situations where we want to manipulate the continuum function and preserve large cardinals, we have to deal with posets of the form j(Add(κ, λ)) where j : V →M is an elementary embedding with crit(j) = κ and κM ⊆M: this is a κ+-closed poset whose properties depend on the nature of the embedding j. In connection with his work on getting failure of the SCH from weak hypothe-ses, Woodin showed that if GCH holds and U is a normal measure on κ then jU(Add(κ, κ++)) is equivalent to Add(κ+, κ++). Analogous results hold for posets of the form jU(Add(κ, λ)) up to about λ = κ+κ, at which point the argument breaks down. 
1namely, morphisms X →F i Yi are equivalent to partitions X = F i Xi together with mor-phisms Xi →Yi 12 Oberwolfach Report 14/2020 We introduce a natural forcing poset Add∗(κ, λ) which shares many of the pleasant features of Add(κ+, λ): assuming GCH it is κ+-closed, κ++-cc and adds λ many mutually generic Cohen subsets of κ+. It also has a universal property: if E is an appropriate short extender with critical point κ then Add∗(κ, λ) projects onto jE(Add(κ, λ)). Add∗forcing can be used to give an alternative proof of a result of Friedman and Honzik: Theorem 1 (Friedman and Honzik). Let GCH hold and let F be a locally definable Easton function. Then there is a class Reverse Easton forcing poset such that in the generic extension: (1) Cardinals and cofinalities are preserved. (2) 2κ = F(κ) for all regular κ. (3) Strong cardinals and supercompact cardinals from the ground model are preserved. On wide Aronszajn trees in the presence of MA Mirna Dˇ zamonja 1. Introduction We study the class T of trees of height and size ℵ1, but with no uncountable branch. We call such trees wide Aronszajn trees. A particular instance of such a tree is a classical Aronszajn tree, so the class A of Aronszajn trees satisfies A ⊆T . Definition 1. For two trees T1, T2, we say that T1 is weakly embeddable in T2 and we write T1 ≤T2, if there is f : T1 →T2 such for all x, y ∈T1 x <T1 y = ⇒f(x) <T2 f(y). We are interested in the structure of (T , ≤) and (A, ≤). Our first result is Theorem 2, which proves that under MA(ω1) there is no universal element in (A, ≤). The is a result of Todorˇ cevi´ c from to which we now give another proof. The second result is Theorem 5, which shows that under MA(ω1) every wide Aronszajn tree embeds into an Aronszajn tree. Putting the two results together, we obtain the main result of the paper, Theorem 8, which shows that under MA(ω1) the class (T , ≤) has no universal element. This resolves a question raised by . Our paper contains two main theorems. We state them and define the corre-sponding forcing notions used in the proof. Set Theory 13 2. Embeddings between Aronszajn trees and the non-existence of a universal element under MA The following theorem is due to Todorˇ cevi´ c, . We give a different proof. Theorem 2. For every tree T ∈A, there is a ccc forcing which adds a tree in A not weakly embeddable into T . In particular, under the assumption of MA(ω1) there is no Aronszajn tree universal under weak embeddings. Our proof is obtained using the following notion of forcing. Definition 3. Suppose that T ∈A, we shall define a forcing notion Q = Q(T ) to consist of all p = (up, vp, <p, cp) such that: (1) up ⊆ω1 ∪{⟨⟩}, vp ⊆T are finite and ⟨, ⟩∈vp, (2) if α ∈vp then there β ∈up with ht(α) = ht(β), (3) <p is a partial order on up such that α <p β implies ht(α) < ht(β) and which fixes α ∩ ht(y1 ∩T2 y2). The order p ≤q on Q is given by inclusion up ⊆uq, vp ⊆vq, <p⊆<q, cp ⊆cq with the requirement that if p ≤q, then the intersection and the root given by <p are preserved in <q. Remark 4. Theorem 2 gives another proof of the main result of , which is that under MA(ω1) all Aronszajn trees are special. 3. Embedding wide Aronszajn trees into Aronszajn trees The proof of the following theorem is the main method. Theorem 5. For every tree T ∈T , there is a ccc forcing which adds a tree in A into which T weakly embeds. In particular, under the assumption of MA(ω1) the class A is cofinal in the class (T , ≤). We give the definition of the forcing used to prove this theorem. 
The forcing is dual to the one in §2, in the sense that we now start with a tree T in T and generically add an Aronszajn tree that T weakly embeds to. We use the control function c to make sure that the generic tree does not have an uncountable branch. For the definition of the forcing, we represent every T ∈T by an isomorphic copy which is a subtree of <ω1ω1. Definition 6. Suppose that T ⊆ω1>ω1 is a tree of size ℵ1 and with no uncountable branches, we define a forcing notion P = P(T ) to consist of all p = (up, vp, <p, f p, cp) such that: 14 Oberwolfach Report 14/2020 (1) up ⊆T , vp ⊆ω1 are finite and ⟨⟩∈up, (2) up is closed under intersections, (3) <p is a partial order on vp, (4) f p is a surjective weak embedding from (up, ⊂) onto (vp, <p), (5) if f(ρ) <p f(σ), then there are ρ′ ⊂σ′ such that f(ρ′) = f(ρ), f(σ) = f(σ′) and ρ′ ⊂σ′, (6) for every η ∈up, we have ht(f p(η)) = lg(η), (7) cp is a function from vp into ω such that α <p β = ⇒cp(α) ̸= cp(β). The order p ≤q on P is given by inclusion up ⊆uq, vp ⊆vq, <p⊆<q and cp ⊆cq. We remark that putting Theorem 5 together with the results of , gives a nice consequence about the class of Lipschitz trees, as follows. Corollary 7. Under MA(ω1) the class L of Lipschitz trees is cofinal in the class of wide Aronszajn trees (T , ≤). 4. Conclusion Putting the results of Section §2 and Section §3 together, we obtain our main theorem, as follows. Theorem 8. Under MA(ω1), there is no wide Aronszajn tree universal under weak embeddings. References Alan Mekler and Jouko V¨ a¨ an¨ anen. Trees and Π1 1-subsets of ω1ω1. J. Symb. Log., 58(3):1052– 1070, 1993. Stevo Todorˇ cevi´ c. Walks on ordinals and their characteristics. Volume 263 of Progress in Mathematics. Birkh¨ auser Verlag, Basel, 2007. James E. Baumgartner, Jerome Malitz, and William Reinhardt. Embedding trees in the rationals. Proc. Natl. Acad. Sci. USA, 67(4):1748–1753, 1970. Gaps and Towers at uncountable cardinals Vera Fischer (joint work with Diana C. Montoya, Jonathan Schilhan and Daniel T. Soukup) In this project we study pseudo-intersection and tower numbers on uncountable regular cardinals, and particular focus on the question if these two cardinal char-acteristics are equal. Let κ be a regular uncountable cardinal. We say that a family F ⊆[κ]κ has the strong intersection property (appreviated SIP) if for every H ∈[F]<κ, the cardinality of T H is κ. A set A ⊆κ is a pseudo-intersection of a family F ⊆[κ]κ if for each F ∈F, A ⊆∗F, which means that |A\F| < κ for each F ∈F. We say that a family T ⊆[κ]κ is a κ-tower (or just tower when κ is clear from the context) if T is ≤∗well-founded, T has the SIP, but T no Set Theory 15 pseudo-intersection of cardinality κ. Recall that p(κ) is defined as the least cardi-nality of a family with SIP on κ, which does not have a pseudo-intersection and t(κ) is defined as the least cardinality of a κ-tower. In the above paper, we intro-duce a natural higher analogue of the notion of a gap (see Definition 2.6 of ), which gives us the following interesting analogue of a theorem of Malliaris-Shelah (see ), namely the following: Theorem 1. Let κ be a regular cardinal such that κ<κ = κ. Then either p(κ) = t(κ) or there is λ < p(κ) and a club-supported (p(κ), λ)-gap of slaloms. While the existence of gaps as in the above theorem is unclear, the result is a promising step in lifting the celebrated result of Malliaris-Shelah stating that p = t. As a result of our study on gaps of slaloms, we obtain: Theorem 2. 
If κ is a regular uncountable cardinal, then p(κ) is regular.

Moreover, we study the club variants pcl(κ) and tcl(κ) of p(κ) and t(κ) respectively, where pcl(κ) is defined as the least cardinality of a family of clubs on κ which has the SIP but no pseudo-intersection, and tcl(κ) is the least cardinality of a κ-tower consisting of clubs. We show that pcl(κ) = tcl(κ) = b(κ) and obtain the following result:

Theorem 3. Let κ < λ be regular uncountable cardinals, where κ = κ<κ. Then there is a κ-closed, κ+-cc forcing extension in which p(κ) = κ+ < pcl(κ) = λ = 2κ.

The consistency of p(κ) < b(κ) (= pcl(κ)) is originally due to Shelah and Spasojević, . Our techniques, however, differ significantly from theirs: we add κ-Cohen reals and successively diagonalize the club filter, while preserving a Cohen witness to p(κ) = κ+.

References
V. Fischer, D. C. Montoya, J. Schilhan, D. Soukup, Gaps and towers at uncountable cardinals, submitted.
M. Malliaris, S. Shelah, Cofinality spectrum theorems in model theory, set theory and general topology, J. Amer. Math. Soc. 29 (1), 237–297 (2016).
S. Shelah, Z. Spasojević, Cardinal invariants pκ and tκ, Publications de l'Institut Mathématique 72, 1–9, 2002.

Vershik's Conjecture for Ultraextensive Spaces
Su Gao (joint with Mahmood Etedadialiabadi, François La Maître, and Julien Melleray)

Extending previous work by Bhattacharjee, McPherson, Vershik, Pestov, Solecki, and Rosendal, we introduce a notion of ultraextensive metric spaces and state some properties of such spaces, including that their isometry groups all contain dense locally finite groups. Then we verify a conjecture of Vershik which states that Hall's universal countable locally finite group can be embedded as a dense subgroup in the isometry group of the Urysohn space and in the automorphism group of the random graph. In fact, we show the same for all automorphism groups of known infinite ultraextensive spaces. These include, in addition, the isometry group of the rational Urysohn space, the isometry group of the ultrametric Urysohn spaces, and the automorphism group of the universal Kn-free graph for all n ≥ 3. Furthermore, we show that finite group actions on finite metric spaces or finite relational structures form a Fraïssé class, where Hall's group appears as the acting group of the Fraïssé limit. We also embed continuum many non-isomorphic countable universal locally finite groups into the isometry groups of various Urysohn spaces, and show that all dense countable subgroups of these groups are mixed identity free (MIF). Finally, we give a characterization of the isomorphism type of the isometry group of the Urysohn ∆-metric spaces in terms of the distance value set ∆.

Some applications of Extender based forcings with overlapping extenders.
Moti Gitik

Let a be a set of regular cardinals, with |a| < min(a), and let J be an ideal on a. If f, g ∈ ∏a, then f <J g iff {ν ∈ a | f(ν) ≥ g(ν)} ∈ J.

Definition 1. (S. Shelah ) A regular cardinal λ is called tcf(∏a, <J) iff there exists a <J-increasing sequence of functions ⟨fα | α < λ⟩ in ∏a such that for every g ∈ ∏a there is α < λ with g <J fα.

S. Shelah (Problem (ε), Analytical Guide, ) asked:

Question 3. (Shelah) Does the König Lemma remain true if we replace 2κ by pp(κ), i.e., is cof(pp(κ)) > κ?

It turns out that the answer is negative.
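For orientation before the counterexample that follows, recall König's theorem, which Question 3 proposes to generalize, together with one standard formulation of Shelah's pseudo-power pp(κ); the exact formulation used in the source may differ, so treat the second display as background rather than as part of the abstract.

```latex
% Koenig's theorem, which Question 3 asks to generalize:
\[
\operatorname{cof}\big(2^{\kappa}\big) > \kappa .
\]
% One standard formulation of Shelah's pseudo-power of a singular cardinal
% kappa (background, not quoted from the abstract):
\[
\operatorname{pp}(\kappa)=\sup\Big\{\operatorname{tcf}\Big(\prod a,\,<_D\Big)\ :\
a\subseteq\operatorname{Reg}\cap\kappa,\ \sup a=\kappa,\
|a|=\operatorname{cf}(\kappa)<\min(a),\
D\text{ an ultrafilter on }a\text{ extending the cobounded filter}\Big\}.
\]
```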
For example it is possible, after a forcing with an extender based forcings with overlapping extenders, to have the following situation: There is a cardinal κ of cofinality ω such that cof(pp(κ)) = ω1. In the early 80s H. Woodin asked the following two questions: Question 4. (Woodin) Set Theory 17 (i) Is it possible to have a singular strong limit κ such that weak □κ fails and 2κ > κ+? (ii) Is it possible to have a singular strong limit κ such that the tree property over κ+ + 2κ > κ+? Both questions were answered affirmatively, the first one by A. Sharon and my-self in (even for a bit stronger approachability property ¬APκ+) and the second by I. Neeman . D. Sinapova extended this results to cardinals of uncountable cofinality. A. Sharon solved a similar question for the reflection property Refκ+. Recently, Ben-Neria, Hayut, Unger gave a different construction and Poveda, Rinot, Sinapova formulated a general framework. Using extenders based forcings with overlapping extenders, new constructions of models of: (1) ¬APκ+ + 2κ > κ+ (2) TPκ+ + 2κ > κ+ (3) Refκ+ + 2κ > κ+ for a singular strong limit κ, are given. Actually all three properties can hold in the same model. Preprints are available at: References O. Ben-Neria, Y. Hayut, S. Unger. Stationary reflection and the failure of SCH. M. Gitik. Prikry type forcings, in Handbook of Set Theory, Foreman, Kanamori, eds. M. Gitik. Blowing up the power of a singular cardinal of uncountable cofinality. M. Gitik. Extenders based forcings with overlapping extenders and negations of the Shelah Weak Hypothesis. M. Gitik. An other method for constructing models of not approachability and not SCH. M. Gitik. An other model with tree property and not SCH. M. Gitik and A. Sharon. On SCH and approachability property. Proc. AMS, 136(1), 2008, 311–320. I. Neeman. Aronszajn trees and failure of SCH. J. Math. Logic, Vol. 09, No. 01, 2009, pp. 139–157. A. Poveda, A. Rinot, D. Sinapova. SIGMA-PRIKRY FORCING I: THE AXIOMS. A. Sharon. Ph.D. thesis. Tel Aviv University, 2005. S. Shelah. Cardinal arithmetic. Oxford Logic Guides, vol. 29, Oxford Univ. Press, London and New York, 1994. Structure theorems from strongly compact cardinals Gabe Goldberg Expanding on ideas due to Hamkins in the context of forcing and Woodin in the context of inner model theory, we prove several theorems which suggest that the large cardinal structure of the universe of sets above a strongly compact cardinal is more tractable than the ubiquity of independence results at this level would suggest. The main idea behind our results is the following improvement on a result of Woodin, who used a supercompact cardinal instead of a strongly compact one. This Research was supported by NSF Grant DMS 1902884. 18 Oberwolfach Report 14/2020 Theorem 1. Suppose κ is a strongly compact cardinal and U is a countably com-plete ultrafilter in Vκ. Then the ultrapower of V by U has the κ-approximation and cover properties. The proof of this result has a number of applications. The first concerns the notion of a cardinal preserving embedding, introduced by Caicedo . If M is an inner model, an elementary embedding j : V →M is said to be cardinal preserving if every cardinal of M is a cardinal in V . Caicedo asked whether there can be such an embedding. We use the theorem above to prove the nonexistence of cardinal preserving embeddings assuming large cardinals. Theorem 2. Suppose there is a proper class of strongly compact cardinals. Then there are no cardinal preserving embeddings. 
This result can be viewed as a generalization of the Kunen Inconsistency The-orem, but the proof is quite different from all of the known proofs of Kunen’s theorem. Second, we give a partial answer to an old question of Silver , in spite of an independence result due to Sheard . A nonprincipal ultrafilter U on a cardinal λ is indecomposable if for any descending sequence {Aα}α<η of sets in U with ω1 ≤η < λ, T α<η Aα ∈U. Indecomposability is roughly “λ-completeness minus ω1-completeness”. If there are infinitely many measurable cardinals ⟨κn⟩n<ω, there is an indecom-posable ultrafilter on λ = supn<ωκn that is not λ-complete: if D is a nonprincipal ultrafilter on ω and for each n < ω, Un is a nonprincipal κn-complete ultrafilter on κn, then D −lim n<ω Un = {A ⊆λ : {n < ω : A ∩κn ∈Un} ∈D} is an indecomposable ultrafilter on λ. Can there be indecomposable ultrafilters on cardinals that are neither measurable nor the limit of countably many measurable cardinals? This question, posed by Silver , cannot be resolved in ZFC: in the canonical inner models, the answer is no, while in a forcing extension constructed by Sheard , the answer is yes. Once one reaches a strongly compact cardinal, however, inner model-like behaviour wins out: Theorem 3. Suppose κ is strongly compact and λ ≥κ carries an indecomposable ultrafilter. Then λ is either a measurable cardinal or a countable cofinality limit of measurable cardinals. In fact, one can completely characterize the indecomposable ultrafilters above the first strongly compact cardinal as exactly those ultrafilters resulting from the construction above. Theorem 4. Suppose κ is strongly compact, λ ≥κ, and U is an indecompos-able ultrafilter on λ. Then U is either λ-complete or else for some ultrafilter D on ω, some sequence ⟨κn⟩n<ω of distinct measurable cardinals, and some sequence ⟨Un⟩n<ω of κn-complete ultrafilters on κn, U = D −limn<ω Un. Set Theory 19 Our last result concerns the partial forms of strong compactness defined by Bagaria-Magidor . A cardinal κ is almost strongly compact if for all ν < κ, every κ-complete filter on κ extends to a ν-complete ultrafilter on κ. In , we prove that assuming an inner model principle, the first almost strongly compact cardinal is strongly compact (and in fact supercompact). Whether this is provable outright is an open question, posed by Boney and Brooke-Taylor. Our techniques give the following partial answer: Theorem 5. Assume the Singular Cardinals Hypothesis. If the least almost strongly compact cardinal has uncountable cofinality, it is strongly compact. It is not true in general that every almost strongly compact cardinal is strongly compact, since the almost strongly compact cardinals form a closed class, while every strongly compact cardinal is measurable. However, for successor almost strongly compacts, one can almost prove the equivalence of the two concepts: Corollary 6. For any ordinal α, if the (α+1)-st almost strongly compact cardinal has uncountable cofinality, it is strongly compact. The corollary follows by applying the preceding theorem in V [G] where G ⊆ Col(ω, κ) is generic for the collapse of the α-th almost strongly compact cardinal. Note that the Singular Cardinals Hypothesis holds in this model. References Andr´ es Eduardo Caicedo. Cardinal preserving elementary embeddings. In Logic Colloquium 2007, volume 35 of Lect. Notes Log., pages 14–31. Assoc. Symbol. Logic, La Jolla, CA, 2010. Jack H. Silver. Indecomposable ultrafilters and 0♯. In Proceedings of the Tarski Symposium (Proc. Sympos. 
Pure Math., Vol. XXV, Univ. Calif., Berkeley, Calif., 1971), pages 357–363, 1974. Michael Sheard. Indecomposable ultrafilters over small large cardinals. J. Symbolic Logic, 48(4):1000–1007 (1984), 1983. Joan Bagaria and Menachem Magidor. On ω1-strongly compact cardinals. J. Symb. Log.79(1):266–278, 2014. Gabriel Goldberg. The Ultrapower Axiom. PhD thesis, Harvard University, 2019. Bi-interpretation of weak set theories Joel D. Hamkins (joint work with Alfredo R. Freire) Set theory exhibits a truly robust mutual interpretability phenomenon: In any model of one set theory we can define models of diverse other set theories and vice versa. In any model of ZFC, we can define models of ZFC + GCH and also of ZFC + ¬CH and so on in hundreds of cases. And yet, it turns out, in no instance do these mutual interpretations rise to the level of bi-interpretation. Ali Enayat proved that distinct theories extending ZF are never bi-interpretable, and models of ZF are bi-interpretable only when they are isomorphic. So there is no nontrivial 20 Oberwolfach Report 14/2020 bi-interpretation phenomenon in set theory at the level of ZF or above. Never-theless, for natural weaker set theories, we prove, including ZFC−without power set and Zermelo set theory Z, there are nontrivial instances of bi-interpretation. Specifically, there are well-founded models of ZFC−that are bi-interpretable, but not isomorphic – even ⟨Hω1, ∈⟩and ⟨Hω2, ∈⟩can be bi-interpretable – and there are distinct bi-interpretable theories extending ZFC−. Similarly, using a construc-tion of Mathias, we prove that every model of ZF is bi-interpretable with a model of Zermelo set theory in which the replacement axiom fails. Entangledness in Suslin lines and trees John Krueger This talk is concerned with the property of entangledness in Suslin lines and trees. The idea of an entangled linear order was originally introduced by Abraham and Shelah in the context of ω1-dense sets of reals. Recall that an uncountable linear order L is n-entangled, where n is a positive integer, if for any pairwise disjoint sequence ⟨(aξ,0, . . . , aξ,n−1) : ξ < ω1⟩of increasing n-tuples of L and any function g : n →2, there exist ξ, β < ω1 such that for all i < n, aξ,i <L aβ,i iffg(i) = 1. And L is entangled if it is n-entangled for all positive integers n. The concept of entangledness is closely tied to topological properties of the linear order L. For example, if L is 2-entangled then L has the countable chain condition, and if L is 3-entangled then L is separable. So any 3-entangled dense linear order is order isomorphic to a set of reals. Todorcevic proved that if there exists an entangled linear order, then there exist c.c.c. forcing posets P and Q such that P × Q is not c.c.c. It follows that Martin’s axiom together with the negation of the continuum hypothesis implies that there does not exist an entangled linear order. Recall that a Suslin line is a linear order with the countable chain condition which is not separable. By the remarks above, a Suslin line cannot be 3-entangled. In this talk we introduce a natural weakening of the property of entangledness which can consistently be satisfied by a Suslin line. For any positive integer n, we say that a linear order L is weakly n-entangled if the property described in the first paragraph holds, except only for pairwise disjoint sequences ⟨(aξ,0, . . . , aξ,n−1) : ξ < ω1⟩of increasing n-tuples of L which have the property that there exist c0 <L . . . 
<L cn−1 such that for all ξ < ω1 and i < n −1, aξ,i <L ci <L aξ,i+1. We proved that it is consistent for a Suslin line to be weakly n-entangled for all positive integers n. Any dense c.c.c. linear order L is weakly 2-entangled iff it is 2-entangled, so it is consistent for a Suslin line to be 2-entangled. However, this equivalence fails if L is not dense. If L is dense and separable, then L is n-entangled iffL is weakly n-entangled. Thus, we have found a natural weakening of entangledness which coincides with entangledness for dense separable linear orders, but can consistently be satisfied by Suslin lines. It is a reasonable question to ask whether the concept of entangledness has any significance for partial orders other than linear orders. In this talk, we introduce a natural definition of entangledness in the class of ω1-trees. Recall that an ω1-tree Set Theory 21 is a tree of height ω1 all of whose levels are countable, and a Suslin tree is an ω1-tree which has no uncountable chains or antichains. As is well-known, there exists a Suslin line iffthere exists a Suslin tree. Let (T, <T) be an ω1-tree. For any distinct nodes x and y of T , define ∆(x, y) as the order type of the set {z ∈T : z <T x, y}. For any positive integer n, we say that an ω1-tree T is n-entangled if for all sequences ⟨(aξ,0, . . . , aξ,n−1) : ξ < ω1⟩of injective n-tuples which satisfy that the set of ordinals {∆(aξ,i, aξ,j) : i < j < n, ξ < ω1} is bounded in ω1, for all g : n →2 there exist ξ < β < ω1 such that for all i < n, aξ,i <T aβ,i iffg(i) = 1. The restriction on ∆is required, since any Suslin tree fails to have the property without this restriction. It turns out that an ω1-tree T is 1-entangled iffit is Suslin, and more generally, T is n-entangled iffall of its derived trees of dimension n are Suslin. We also proved that for any positive integer n, it is consistent for a Suslin tree to be n-entangled, but all of its derived trees of dimension n + 1 are special. Universally measurable sets may all be ∆ ∼ 1 2 Paul B. Larson (joint work with Saharon Shelah) A subset of a Polish space X is said to be universally measurable if it is measured by the completion of any σ-additive Borel measure on X. Equivalently, A ⊆X is universally measurable if and only if f −1[A] is Lebesgue measurable whenever f : ωω →X is a Borel function. This characterization induces the corresponding notion for category : we say that a set A ⊆X is universally categorical if and only if f −1[A] has the property of Baire whenever f : ωω →X is a Borel function. We identify a set A consisting of σ-algebras on ωω and prove the consistency of the following statement : for every A ∈A, each A ∈A is ∆ ∼ 1 2. The σ-algebras in A are induced by suitably coherent and absolute assignments of σ-ideals to the set of infinite branches through each finitely-branching tree of height ω. The set A contains the collection of universally measurable sets and the collection of universally categorial sets. The proof of our main theorem proceeds by a countable support iteration of (ω, ∞)-distributive partial orders forcing that every set of reals of cardinality ℵ1 is ∆ ∼ 1 2. The following theorem is a special case of our main theorem. In the case of the Lebesgue-null ideal, the theorem answers part of problem CG on David Fremlin’s problem list. Theorem 1. 
If, for some a ⊆ω, V=L[a], then there is a proper forcing extension in which every universally measurable subset of any Polish space is ∆ ∼ 1 2, and every universally categorical subset of any uncountable Polish space is ∆ ∼ 1 2. 22 Oberwolfach Report 14/2020 The decomposability conjecture Andrew Marks (joint work with Adam Day) Assuming Π1 2 determinacy, we prove the decomposability conjecture is true: a Borel function f is decomposable into a countable union of functions which are piecewise continuous on ∆0 n domains iffthe preimage of every Σ0 n set under f is Σ0 n. Our proof uses a new dichotomy characterizing when a set is Σ0 n complete in terms of a Baire category criterion. A central tool in this proof is Antonio Montalb´ an’s true stages machinery (cf. ). Our proof also relies a theorem of Leo Harrington in that, assuming the axiom of determinacy, there are no definable ω1 sequences of distinct Borel sets of bounded rank. References Antonia Montalb´ an Priority Arguments via True Stages The Journal of Symbolic Logic, 79(4), 1315–1335. Leo Harrington Analytic Determinacy and 0♯The Journal of Symbolic Logic Vol. 43, No. 4 (Dec., 1978), pp. 685–693 (9 pages) Parametrised Miller Forcing Heike Mildenberger (joint work with Christian Br¨ auninger) Let F be a filter over ω. Guzm´ an and Kalajdzievski introduced a parametrised version of Miller forcing called PT(F). We use PT(U) for a recursivly constructed sequence of ultrafilters. Using this type of iterands, we prove that we can specifi-cally preserve certain P-points. 1. Brief outline We use a new forcing notion introduced by Guzm´ an and Kalajdzievski in and apply it with particularly chosen parameters. We use games in order to show that conditions with some blockstructure are dense. We transfer some key arguments in the evaluation of Blass Shelah-forcing to the new forcing. The aim is to work on the old conjecture that the existence of a simple Pℵ1-and simple Pℵ2-point is consistent relative to ZFC. In order to carry the construction over iteration steps of uncountable cofinality, absoluteness properties like the ones proved in [7, §3] for countable support iterations of Mathias forcing are needed. Definition 1. Let κ be a regular uncountable cardinal. (1) An ultrafilter U over ω is called a Pκ-point if for any γ < κ, any ⊆∗-descending sequence ⟨Aβ : β < γ⟩of elements of U has a pseudointersec-tion B ∈U, that is some B such that for β < γ, B ⊆∗Aβ. A Pℵ1-point is also just called a P-point. Set Theory 23 (2) By Fr we denote the Fr´ echet filter which is the filter of cofinite subsets of ω. (3) Let F be a filter over ω that contains the Fr´ echet filter. We say just filter over ω. A subset B ⊆F is called a basis of F if for every F ∈F there is some B ∈B such that B ⊆F. (4) A Pκ-point is called simple if it has a basis B ⊆U such that B consists of a ⊆∗-descending sequence ⟨Aα : α < κ⟩. (5) The character of a filter F is the smallest size of a basis of F. Note that any Pκ-point with character κ is simple. The space 2ω is endowed with the product topology of the discrete space 2 = {0, 1}. Any subset A of ω is a point in 2ω via its characteristic function χA. Collections C of subsets of ω are said to be of descriptive complexity Γ if the set {χA : A ∈C} is contained in Γ. Definition 2. (1) The partial order Fσ is the forcing with Fσ-filters1 over ω. Stronger filters are superfilters. (2) If F is a filter, then Fσ(F) is the forcing with Fσ-filters that are compatible with F, i.e. 
G ∈Fσ(F) iffG is an Fσ-filter and G ⊆F+ = {X ⊆ω : ∀(F ∈F)(X ∩F ̸= ∅)}. Definition and Observation 3. Let G be an Fσ(F)-generic filter. We let U be a Fσ(F)-name for the union of G. By a density argument, the poset Fσ(F) forces that U is an ultrafilter that contains F as a subset. The set of finite strictly increasing sequences of natural numbers is called ω↑<ω. The length of s ∈ω↑<ω is its domain. For s, t ∈ω<ω, we say “t extends s” or “s is an initial segment of t” and write s ⊴t if dom(s) ⊆dom(t) and s = t ↾dom(s). Definition 4. A subset p ⊆ω↑<ω that is closed under initial segments is called a tree. The elements of a tree are called nodes. Given any tree p, a node s ∈p is called a splitting node of p if s has more than one direct ⊳-successor in p and ω-splitting node of p if s has infinitely many direct ⊳-successors in p. The set of splitting nodes of p is denoted by spl(p) while ω- spl(p) denotes the set of ω-splitting nodes of p. The set of finite/infinite subsets of ω is denoted by [ω]<ω/[ω]ω. Definition 5. For E ⊆[ω]ω such that for all n ∈ω and x1, . . . , xn ∈E we have x1 ∩· · · ∩xn ∈[ω]ω, we denote by filter(E) the filter generated by E ∪Fr, i.e. filter(E) = {Y ⊆ω : ∃n ∈ω∃x1, . . . , xn ∈E(Y ⊇∗x1 ∩· · · ∩xn)}. In order to define a parametrised version of Miller-Forcing we will need some notions about blocks. 1Again, we consider only those filters that contain the Fr´ echet filter as a subset. 24 Oberwolfach Report 14/2020 Definition 6. (1) For any set A we write [A]<ω = {t : t ⊆A, |t| < ω}. The elements of Fin = [ω]<ω \ {∅} are called blocks. (2) Let F be a filter over ω. We let F<ω ={[A]<ω \ {∅} : A ∈F} (F<ω)+ ={B ⊆Fin : ∀A ∈F([A]<ω ∩B ̸= ∅)} Note that F<ω is a filter over Fin. The following forcing notion was introduced by Guzm´ an and Kalajdzievski in order to prove that the ultrafilter number u may be smaller than the almost disjointness number a without using large cardinals. Definition 7. (See ) Let F be a filter over ω. The forcing PT(F) consists of all p ⊆ω<ω such that for each s ∈p there is t ⊵s, such that t ∈ω- spl(p) and sucsplp(t) := {rge(r) \ rge(t) : r a ⊳-minimal infinitely splitting node of p above t} ∈(F<ω)+. Such a t is called an F-splitting node. We furthermore require of p that each ω-splitting node is a F-splitting node 2 and there is a unique ⊳-minimal ω-splitting node called the trunk of p, tr(p). The set of F-splitting nodes of p is denoted by spl(p). Consider t to be a function, e.g. t ∈ω↑<ω. The symbol rge(t) denotes the range of a function, and vice versa en(r) ∈ω↑<ω denotes the increasing enumeration of r ∈[ω]<ω. Note that in contrast to Guzm´ an and Kalajdzievski, we do not identify t ∈p ⊆ω↑<ω with its range. The function sending s ∈ω↑<ω to its range is an isomorphism witnessing (ω↑<ω, ⊴) ∼ = ([ω]<ω, ⊑). The main result is Proposition 8. We assume CH. (A) There is a countable support iteration P = ⟨Pγ, Qβ : γ ≤ℵ1, β < ℵ1⟩that is defined as follows: (1) P0 = {0}, and (2) For β < ℵ2 we have the following: If - for γ < β, rγ is the PT(Uγ)-generic real over VPγ∗Fσ(Fγ), - Fβ = filter({rge(rγ) : γ < β}) and - Uβ is the Fσ(Fβ)-generic ultrafilter over VPβ, then Pβ ⊩Qβ = Fσ(Fβ) ∗PT(Uβ). (B) Any P as in (A) is proper, does not collapse ℵ2, and forces that any P-point from the ground model generates still a P-point (and hence there is a simple Pℵ1-point) and forces that 2We do not know whether the first condition can be waived. There might be finitely splitting nodes. The set of conditions without finitely splitting nodes is possibly not dense. 
Set Theory 25 filter({rge(rγ) : γ < ℵ1}) is a simple Pℵ1-point and is Canjar. Of course, this is a CH model and the objects could be constructed without forcing at all. The hope is that the reflection arguments that leads to the Canjar property in the conclusion would give the analogous result at limits of uncountable cofinality of the same iteration of length ℵ2. Remark 9. The iteration given in (A) is not as uniform as it may look at first sight. For β ≤ω2 such that cf(β) ≥ω1 we have Pβ ⊩Fβ is an ultrafilter. Hence for β < ω2 such that cf(β) = ω1, the forcing Fσ(Fβ) is the one point forcing {Fβ}. Definition 10. A filter F over ω is called a Canjar filter if for any sequence ⟨Xn : n < ω⟩of elements of (F<ω)+ there is a sequence sn ∈[Xn]<ω such that S{sn : n < ω} ∈(F<ω)+. A filter is Canjar iffMathias forcing with second components in the filter does not add a dominating real [6, Theorem 5]. There are more equivalent formulations, see, e.g., [1, 3, 4]. We use the following two known and crucial facts about Canjar filters. Lemma 11. (a) The forcing PT(F) is proper for any Canjar Filter F. (b) The generic filter of the forcing Fσ is a Canjar ultrafilter. Proof. See Propositions 17 and 48 of . □ We add a couple of new lemmas. Lemma 12. For cf(β) ≤ω, Pβ ∗Fσ(Fβ) forces the following: (a) Uβ is Canjar. (b) Uβ is not nearly coherent to any P-point in V Pβ. Lemma 13. For α = ω1, Pα forces that Fα is a Canjar ultrafilter that is not nearly coherent to any P-point in S γ<α V Pγ. The conjecture is that the countability and Π1 1-absoluteness argument in the proof of the previous lemma would allow to prove: For α ≤ω2, cf(α) ≥ω1, Pα forces that Fα is a Canjar ultrafilter that is not nearly coherent to any P-point in S γ<α V Pγ. The Canjar property is open strictly above ℵ1. Definition 14. Let F be a filter, p ∈PT(F) and let A be a PT(F)-name for a subset of ω. We say p decides A in pace if (∀t ∈spl(p))(∀r ∈sucsplp(t)) (∀i ≤max(rge(t))(p ↾(t⌢en(r)) decides i ∈A) 26 Oberwolfach Report 14/2020 Lemma 15. Let F be a filter, p ∈PT(F) and let A be a PT(F)-name for a subset of ω. Then there exists a trunk-preserving extension q ≤p that decides A in pace. Definition 16. Let f : ω →ω be a strictly increasing function with f(0) = 0. A condition p ∈PT(F) is said to have f-block structure if (∀t ∈spl(p))(∀r ∈sucsplp(t)) (∃k ∈ω)(rge(r) \ rge(t) ⊆[f(k), f(k + 1))). With the help of block structure and decision in pace we prove a preservation theorem that builds on . Lemma 17. (a) If U is a Canjar ultrafilter that is not nearly coherent to a P-point W, then forcing with PT(U) preserves W. (b) Let W be an P-point in the ground model. If a filter F is not almost ultra, then Fσ(F) ∗PT(U ∼) preserves W. Together with known preservation theorems for countable support iterations, by induction on α ≤ω1 the lemmata yield a proof of a technically enhanced length-α version of the main theorem. References Andreas Blass, Michael Hruˇ s´ ak, and Jonathan Verner. On strong P -points. Proc. Amer. Math. Soc., 141(8):2875–2883, 2013. Andreas Blass and Saharon Shelah. There may be simple Pℵ1- and Pℵ2-points and the Rudin-Keisler ordering may be downward directed. Annals of Pure and Applied Logic, 33:213–243, 1987. David Chodounsk´ y, Duˇ san Repovˇ s, and Lyubomyr Zdomskyy. Mathias forcing and combi-natorial covering properties of filters. J. Symb. Log., 80(4):1398–1410, 2015. Osvaldo Guzm´ an, Michael Hruˇ s´ ak, and Arturo Mart´ ınez-Celis. Canjar filters. Notre Dame J. Form. Log, 58(1):79–95, 2017. 
Osvaldo Guzmán and Damjan Kalajdzievski. The ultrafilter and the almost-disjointness numbers. Preprint, 2018.
Michael Hrušák and Hiroaki Minami. Mathias–Prikry and Laver–Prikry type forcing. Ann. Pure Appl. Logic, 165(3):880–894, 2014.
Saharon Shelah and Otmar Spinas. The distributivity numbers of P(ω)/fin and its square. Trans. Amer. Math. Soc., 352:2023–2047, 2000.

The Feldman-Moore, Glimm-Effros, and Lusin-Novikov theorems over quotients
Benjamin D. Miller

We give countably-infinite bases of minimal counterexamples to generalizations of the results mentioned in the title to quotient spaces.

For all k ≥ 2, let Fk denote the index-k subequivalence relation of E0 given by
c Fk d ⇐⇒ ∃n ∈ ℕ ∀m ≥ n (Σℓ<m c(ℓ) ≡ Σℓ<m d(ℓ) (mod k)).
A partial transversal of an equivalence relation E on X over a subequivalence relation F is a set Y ⊆ X for which E ↾ Y = F ↾ Y.

Theorem 1. Suppose that X is a Hausdorff space, E is an analytic equivalence relation on X, and F is a Borel equivalence relation on X for which every E-class is a countable union of (E ∩ F)-classes. Then exactly one of the following holds:
(1) The set X is a countable union of (E ∩ F)-invariant Borel partial transversals of E over E ∩ F.
(2) There exists F ∈ {∆(2^ℕ)} ∪ {Fp | p is prime} for which there is a continuous embedding π : 2^ℕ ↪ X of (E0, F) into (E, F).

A partial uniformization of a set R ⊆ X × Y over an equivalence relation F on Y is a subset of R whose vertical sections are contained in F-classes.

Theorem 2. Suppose that X and Y are Hausdorff spaces, E is an analytic equivalence relation on X, F is a Borel equivalence relation on Y, and R ⊆ X × Y is an (E × ∆(Y))-invariant analytic set whose vertical sections are contained in countable unions of F-classes. Then exactly one of the following holds:
(1) The set R is a countable union of ((E × F) ↾ R)-invariant Borel-in-R partial uniformizations of R over F.
(2) There exists F ∈ {∆(2^ℕ)} ∪ {Fp | p is prime} for which there are continuous embeddings πX : 2^ℕ ↪ X of E0 into E and πY : 2^ℕ ↪ Y of F into F such that (πX × πY)(E0) ⊆ R.

We say that a set R ⊆ X × X is a graph of a partial injection over an equivalence relation F on X if every horizontal and vertical section of R is contained in an F-class.

Theorem 3. Suppose that X is a Hausdorff space, E is an analytic equivalence relation on X, F is a Borel equivalence relation on X, and every E-class is a countable union of (E ∩ F)-classes. Then exactly one of the following holds:
(1) The set E is a countable union of ((E ∩ F) × (E ∩ F))-invariant Borel-in-E graphs of partial injections over E ∩ F.
(2) There exists F ∈ {∆(2^ℕ)} ∪ {Fp | p is prime} for which there is a continuous embedding π : 2^ℕ × 2 ↪ X of (E0 × I(2), (E0 × ∆({0})) ∪ (F × ∆({1}))) into (E, F).

A transversal of an equivalence relation E over a subequivalence relation F is a partial transversal of E over F that intersects every E-class, a uniformization of a set R ⊆ X × Y over an equivalence relation F on Y is a partial uniformization of R over F with the same projection onto X as R, and a graph of a bijection over an equivalence relation F on X is a graph of a partial injection over F whose horizontal and vertical sections are non-empty.
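As a purely finitary aside (not part of the abstract), the combinatorial content of the relations Fk above can be illustrated in a few lines of Python: restricted to eventually-zero binary sequences, which form a single E0-class, the condition c Fk d reduces to agreement of the total sums modulo k, so this one E0-class splits into exactly k many Fk-classes. The function names below are illustrative only.

```python
# Finitary illustration (not from the abstract) of the index-k subequivalence
# relations Fk of E0, restricted to eventually-zero binary sequences.
# Such a sequence is coded by a finite 0/1 tuple (the tail is all zeros);
# any two of them are E0-equivalent, and c Fk d holds iff their partial sums
# eventually agree mod k, i.e. iff sum(c) == sum(d) modulo k.

from itertools import product

def fk_related(c, d, k):
    """c, d: finite 0/1 tuples coding eventually-zero sequences."""
    return sum(c) % k == sum(d) % k

def fk_classes(length, k):
    """Split all 0/1 tuples of the given length (read as eventually-zero
    sequences, i.e. one E0-class) into their Fk-classes."""
    classes = {}
    for c in product([0, 1], repeat=length):
        classes.setdefault(sum(c) % k, []).append(c)
    return classes

if __name__ == "__main__":
    for k in (2, 3):
        sizes = [len(v) for v in fk_classes(4, k).values()]
        print(f"k = {k}: {len(sizes)} classes of sizes {sizes}")
        # The single E0-class splits into k many Fk-classes: Fk has index k in E0.
```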
If X is a Polish space and E is a countable Borel equivalence relation, then Theorems 1 and 2 easily imply their strengthenings in which the sets in condition (1) satisfy the stronger requirements just defined, and it is not substantially more difficult to establish the analogous strengthening of Theorem 3, since every partial injection from a set to itself gives rise to a bijection with the same orbits in a canonical fashion.

If there is no continuous embedding of E0 into E, then condition (2) fails in all three theorems. In this case, one need only require that F is co-analytic, albeit at the cost that the corresponding sets do not enjoy the same level of invariance. The further special case of Theorem 2 where E = ∆(X) previously arose in unpublished work due to Conley and Miller. The still further special case where E = F = ∆(X) is essentially the Lusin-Novikov uniformization theorem.

If there is no continuous embedding of E0 into F, then the instances of condition (2) where F ∈ {Fp | p is prime} fail in all three theorems, so the only possibility is that F = ∆(2^ℕ), and even this fails in Theorem 3. The further special case of Theorem 1 where F = ∆(X) is essentially the Glimm-Effros dichotomy for countable analytic equivalence relations, the analogous special case of Theorem 2 answers a question posed by Kechris, and the analogous special case of Theorem 3 is essentially the Feldman-Moore theorem.

Finitely generated groups of piecewise linear homeomorphisms
Justin T. Moore

The group PLoI consisting of all orientation-preserving piecewise linear homeomorphisms of the unit interval has long been an interesting source of examples in group theory. On the one hand, by work of Brin and Squier, PLoI does not contain nonabelian free groups. On the other hand, it is not an elementary amenable group and contains a rich hierarchy of groups which are. It also contains Richard Thompson's group F, which itself has served as an important example in group theory since at least the early 1980s.

More recently a program has been initiated which attempts to understand the quasiorder of all finitely generated subgroups of PLoI, ordered by homomorphic embedding. Is there a substantial initial segment of this quasiorder which is classifiable in some suitable sense? Can one identify the point(s) at which the quasiorder becomes intractable? Some more precise questions are the following:
(1) (Brin) Which finitely generated subgroups G of PLoI have the property that whenever H is a finitely generated subgroup of PLoI, then either G embeds into H or H embeds into G? Does F have this property?
(2) (Brin-Sapir) If a subgroup of PLoI does not contain a copy of F, must it be elementary amenable?
(3) (Moore) Are the finitely generated subgroups of F well quasiordered?
At present it seems reasonable to conjecture that F is a bottleneck in the sense of (1) and that it provides the dividing line for nonelementary amenability in the sense of (2). Given this, it is natural to try to identify the obstructions for when a subgroup of PLoI embeds into F. In joint work with James Hyde, I have isolated the notion of an F-obstruction and shown that, at least for one-orbital groups, F-obstructions generate groups that do not embed into F and which moreover contain F. It is currently unknown whether every subgroup of PLoI which does not embed into F contains an F-obstruction, although we conjecture that this is not true, i.e. that our working notion of F-obstruction is incomplete.
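The notion of an F-obstruction, defined next, is based on Poincaré's rotation number. As background only (this is not part of the abstract), here is a standard numerical approximation of the rotation number of an orientation-preserving circle homeomorphism, represented by a lift F : ℝ → ℝ with F(x + 1) = F(x) + 1; the sample maps are arbitrary choices made for illustration.

```python
# Background sketch (not from the abstract): approximating the Poincare
# rotation number of an orientation-preserving circle homeomorphism, the
# invariant underlying the F-obstructions discussed below.  The map is given
# by a lift F : R -> R with F(x + 1) = F(x) + 1, and the rotation number is
# the limit of (F^n(x) - x) / n, which does not depend on x.
import math

def rotation_number(lift, x0=0.0, iterations=20_000):
    x = x0
    for _ in range(iterations):
        x = lift(x)
    return (x - x0) / iterations

def rigid(x):
    # rigid rotation by an irrational angle alpha = sqrt(2) - 1
    return x + (math.sqrt(2) - 1)

def wobbly(x):
    # a small orientation-preserving perturbation of the rigid rotation
    return x + (math.sqrt(2) - 1) + 0.05 * math.sin(2 * math.pi * x)

if __name__ == "__main__":
    print(rotation_number(rigid))   # ~0.41421..., i.e. sqrt(2) - 1 itself
    print(rotation_number(wobbly))  # a nearby value, in general different
```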
The notion of an F-obstruction comes from Poincar´ e’s rotation number as-sociated to a homeomorphism of the circle. Specifically, if f, g ∈PLoI and Set Theory 29 s < f(s) ≤g(s) < f(g(s)) = g(f(s)), then we can define the rotation number of f modulo g at s to be the rotation number of the map γ : [s, f(g(s))) →[s, f(g(s))) defined by γ(t) = gm(f(t)) where m ∈Z is such that s ≤gm(f(t) < f(g(s)). If the rotation number of f modulo g at s is defined and irrational for some s, we say that (f, g) is an F-obstruction. Theorem 1. If f, g ∈PLoI is an F-obstruction and ⟨f, g⟩has one component of support, then for every embedding φ : ⟨f, g⟩→PLoI, ⟨φ(f), φ(g)⟩contains an F-obstruction. It is routine to show that the standard representations of F inside PLoI do not contain F-obstructions. Thus the theorem implies that a subgroup of PLoI generated by an F-obstruction is not embeddable into F. The next theorem gives some evidence for (1). Theorem 2. If f, g ∈PLoI is an F-obstruction, then ⟨f, g⟩contains an isomor-phic copy of F. In the direction of (3), there is the following theorem, which is joint work with Collin Bleak and Matthew G. Brin. Theorem 3. There is a transfinite sequence (Gξ | ξ < ε0) of finitely generated elementary amenable subgroups of F such that: • G0 is the trivial group and Gξ+1 ∼ = Gξ + Z; • Gξ embeds into Gη if and only if ξ ≤η; • Given 0 ≤α < ε0 and n < ω, let ξ = ω(ωα)·(2n). If α > 0, then the EA-class of Gξ is ω ·α + n+ 2. If α = 0, then the EA-class of Gξ is n+ 1. Σ-Prikry forcings and their iterations Alejandro Poveda (joint work with Assaf Rinot and Dima Sinapova) In a series of papers we introduce the class of Σ-Prikry forcing, where Σ := ⟨κn | n < ω⟩is a non decreasing sequence of regular uncountable cardinals converging to some cardinal κ. In we argue that this new concept yields an interesting class of forcing in the sense that many of the known Prikry-type posets that centers around singular cardinals of countable cofinality fall within this new paradigm. Among these forcing one can find, for instance, the standard Prikry forcing , Gitik-Sharon poset or the Extender-Based Prikry forcing . Also, in a functor A(·, ·) between the class of Σ-Prikry forcing and P-names is defined. For each Σ-Prikry forcing P and each P-name ˙ T for a non-reflecting stationary subset of Eκ+ ω , this functor produces a Σ-Prikry notion of forcing A(P, ˙ T) that This work was partially supported by the Spanish Government under grants MTM2017-86777-P and MECD FPU15/00026 and by Generalitat de Catalunya (Catalan Government) under grant SGR 270-2017. 30 Oberwolfach Report 14/2020 messes up the stationarity of ˙ T. A key feature of this functor is that the projection from A(P, ˙ T) to P splits: that is, in addition to a projection map π from A(P, ˙ T) onto P, there is a map ⋔that goes in the other direction, and the two maps commute in a very strong sense. The exact details can be found in our definition of forking projection . Our work is also narrowly tied with the broad program of finding viable iteration schemes for relevant families of forcings. The first successful transfinite iteration scheme was devised by Solovay and Tennenbaum in , who solved a problem concerning a particular type of linear orders of size ℵ1 known as Souslin lines. The Solovay-Tennenbaum technique is very useful, but it admits no generalizations that allow to tackle problems concerning objects of size > ℵ1. 
One crucial reason for the lack of generalizations has to do with the poor behavior of the higher analogues of ccc at the level of cardinals > ℵ1 (see [7, 8, 9] for a discussion and counterexamples). Still, various iteration schemes for posets having strong forms of the κ+-chain-condition for κ regular were devised in [10, 11, 12, 13]. In contrast, there is a dearth of works involving iterations at the level of the successor of singular cardinals. A few ad-hoc treatments of iterations that are centered around a singular cardinal may be found in [14, §2], [15, §10] and [16, §1], and a more general framework is offered by [17, §3]. In , the authors took another approach in which they first pursue a forcing iteration along a successor of a regular cardinal κ, and at the very end they singularize κ by appealing to Prikry forcing. This was latter generalized to the context of Radin forcing in . In our project, we propose yet another approach: we allow to put the Prikry-type forcing centered at κ as the very first step of our iteration, and then continue up to length κ++ without collapsing cardinals. In we materialize this idea by developing a general scheme for iterating Σ-Prikry posets. The motivation for this new approach is as follows. Suppose that one would like to produce a generic extension where certain combinatorial principle holds at the successor of a singular cardinal κ. The first thing that one has to be concerned about is that the resulting forcing iteration Pκ++ enjoys the κ++-chain condition. The arguments developed in guarantee that, if P is a given Σ-Prikry notion of forcing, this property is preserved along the way of defining Pκ++, the κ++-length iteration of P. Thus, in particular, Pκ++ has the κ++-cc and, actually, more than that (see [2, §1]). Provided 22κ = κ++ notice that, by using a bookkeeping enumeration, we have a way to ensure that all counterexamples for this hypothetical principle show up at some intermediate stage in the process of defining Pκ++. Thus, we fix a bookkeeping list ⟨zα | α < κ++⟩of all these problems, and shall want that, for any α < κ++, Pα+1 will amount to force over the model V Pα to solve the problem suggested by zα. The standard approach to achieve this is to set Pα+1 := Pα ∗˙ Qα, where ˙ Qα is a Pα-name for a poset that takes care of zα. However, the disadvantage of this approach is that if P is a notion of forcing that blows up 2κ, then any typical poset Q1 in V P1 which is designed to add a subset of κ+ via bounded approximations will fail to have the κ++-cc. To work around this, in our scheme, Set Theory 31 we set Pα+1 := A(Pα, zα), where A(·, ·) is a functor that to each Σ-Prikry poset P and a problem z, produces a Σ-Prikry poset A(P, z) that projects onto P and solves the problem z. At the end of this process we will have defined a poset Pκ++ which will yield the desired generic extension. A special case of our main result from may be roughly stated as follows. Theorem 1. Suppose that Σ = ⟨κn | n < ω⟩is a strictly increasing sequence of regular uncountable cardinals converging to a cardinal κ. For simplicity, let us say that a notion of forcing P is nice if P ⊆Hκ++ and P does not collapse κ+. Now, suppose that: • Q is a nice Σ-Prikry notion of forcing; • A(·, ·) is a functor that produces for every nice Σ-Prikry notion of forcing P and every z ∈Hκ++, a corresponding nice Σ-Prikry notion of forcing A(P, z) that admits a forking projection to P; • 22κ = κ++, so that we may fix a bookkeeping list ⟨zα | α < κ++⟩. 
Then there exists a sequence ⟨Pα | α ≤κ++⟩of nice Σ-Prikry forcings such that P1 is isomorphic to Q, Pα+1 is isomorphic to A(Pα, zα), and, for every pair α ≤ β ≤κ++, Pβ projects onto Pα. In [2, §5] we also present the very first application of our scheme. Here our aim is to obtain the consistency of finite simulatenous reflection of stationary subsets of κ+ joint with a genuine failure of the SCHκ. For this purpose we carry out an iteration of length κ++ where P is the Extender Based Prikry Forcing relative to Σ for making 2κ = κ++. For the definition of the later steps we invoke the functor A(P, z) from , which is devised to kill the nonreflecting stationary set z. As a corollary, we obtain a correct proof of one of the main result of A. Sharon’s dissertation [20, §3]. Theorem 2. Let ⟨κn | n < ω⟩be a strictly increasing sequence of supercompact cardinals. Set κ := supn<ω κn. Then there exists a cofinality-preserving forcing extension of the universe where κ remains strong limit, every finite collection of stationary subsets of κ+ reflects simultaneously, and 2κ = κ++. Corollary 3. If ZFC is consistent with the existence of ω-many supercompact cardinals, then ZFC is also consistent with Refl(<ω, κ+) + ¬SCHκ, where κ is a strong limit singular cardinal with cof(κ) = ω. It is worth mentioned that the following question remains open: Question 4. Is it possible to obtain the above consistency result for κ = ℵω? References Poveda, Alejandro and Rinot, Assaf and Sinapova, Dima, Sigma-Prikry forcing I: The Axioms, Submitted to Canadian Journal of Mathematics, 2019. Poveda, Alejandro and Rinot, Assaf and Sinapova, Dima, Sigma-Prikry forcing and its iteration, Part II, Submitted to Journal of Mathematical Logic, 2019. Prikry, Karel Libor, Changing measurable into accessible cardinals, Dissetationes Mathe-maticae, 1970. 32 Oberwolfach Report 14/2020 Gitik, Moti, and Assaf Sharon, On SCH and the approachability property, Proceedings of the American Mathematical Society 136.1, 311-320, 2008. Gitik, Moti and Menachem Magidor, Extender based forcings, The Journal of Symbolic Logic 59.2, 445-460, 1994. Solovay, R. M. and Tennenbaum, S., Iterated Cohen extensions and Souslin’s problem, An-nals of Mathematics. Second Series, Vol 94, 201–245, 1971. Rinot, Assaf, Chain conditions of products, and weakly compact cardinals, Bulletin of Sym-bolic Logic, Vol 20, 293–314. 2014. Lambie-Hanson, Chris and Rinot, Assaf, Knaster and friends I: Closed colorings and pre-calibers, Algebra Universalis, Vol 79, 2018. Explicit example of collapsing kappaˆ+ in iteration of kappa-proper forcings, Roslanowski, Andrzej, arXiv preprint arXiv:1808.01636, 2018. Shelah, Saharon, A weak generalization of MA to higher cardinals, Israel Journal of Math-ematics, Vol 30, 297–306, 1978. Shelah, Saharon, Not collapsing cardinals ≤κ in (< κ)–support iterations, Israel Journal of Mathematics, Vol 136, 29–115, 2003. Roslanowski, Andrzej and Shelah, Saharon, Iteration of λ-complete forcing notions not collapsing λ+., International Journal of Mathematics and Mathematical Sciences, Vol 28, 63–82, 2001. Eisworth, Todd, On iterated forcing for successors of regular cardinals, Fundamenta Math-ematicae, Vol 179, 249–266, 2003. Shelah, Saharon, Diamonds, uniformization, The Journal of Symbolic Logic, Vol 49, 1022– 1033, 1984. Cummings, James and Foreman, Matthew and Magidor, Menachem, Squares, scales and stationary reflection, Journal of Mathematical Logic, Vol 1, 35–98, 2001. 
Gitik, Moti and Rinot, Assaf, The failure of diamond on a reflecting stationary set, Transactions of the American Mathematical Society, Vol. 364, 1771–1795, 2012.
Shelah, Saharon, Successor of singulars: combinatorics and not collapsing cardinals ≤ κ in (< κ)-support iterations, Israel Journal of Mathematics, Vol. 134, 127–155, 2003.
Džamonja, Mirna and Shelah, Saharon, Universal graphs at the successor of a singular cardinal, Journal of Symbolic Logic, Vol. 68, 366–388, 2003.
Cummings, James and Džamonja, Mirna and Magidor, Menachem and Morgan, Charles and Shelah, Saharon, A framework for forcing constructions at successors of singular cardinals, Transactions of the American Mathematical Society, Vol. 369, 7405–7441, 2017.
Sharon, Assaf, Weak squares, scales, stationary reflection and the failure of SCH, Thesis (Ph.D.), Tel Aviv University, 2005.

Transformations of the transfinite plane
Assaf Rinot
(joint work with Jing Zhang)

Ramsey's theorem [Ram30] asserts that every infinite graph contains an infinite subgraph which is either a clique or an anti-clique. In other words, for every function (or coloring, or partition, depending on one's perspective) c : [ℕ]² → 2, there exists an infinite X ⊆ ℕ which is monochromatic in the sense that, for some i ∈ 2, c(x, y) = i for every pair x < y of elements of X. A strengthening of Ramsey's theorem due to Hindman [Hin74] concerns the additive structure (ℕ, +) and asserts that for every partition c : ℕ → 2, there exists an infinite X ⊆ ℕ which is monochromatic in the sense that, for some i ∈ 2, for every finite increasing sequence x0 < · · · < xn of elements of X, c(x0 + · · · + xn) = i.

A natural generalization of Ramsey's and Hindman's theorems would assert that in any 2-partition of an uncountable structure, there must exist an uncountable monochromatic subset. However, this is not the case. Already in the early 1930s, Sierpiński found a coloring c : [ℝ]² → 2 admitting no uncountable monochromatic set [Sie33]. In contrast, a counterexample concerning the additive structure (ℝ, +) was discovered only a few years ago [HLS17], by Hindman, Leader and Strauss.

In this work [RZ20], we study the existence of transformations of the transfinite plane that allow us, among other things, to reduce the additive problem to the considerably simpler Ramsey-type problem. By convention, hereafter, κ denotes a regular uncountable cardinal, and θ, χ denote (possibly finite) cardinals ≤ κ. The transformation of interest is captured by the following definition.

Definition 1. Pℓ1(κ) asserts the existence of a transformation t : [κ]² → [κ]² satisfying the following:
• for every (α, β) ∈ [κ]², if t(α, β) = (α∗, β∗), then α∗ ≤ α < β∗ ≤ β;
• for every family A consisting of κ many pairwise disjoint finite subsets of κ, there exists a stationary S ⊆ κ such that, for every pair α∗ < β∗ of elements of S, there exists a pair a < b of elements of A with t[a × b] = {(α∗, β∗)}.

Theorem 2. If Pℓ1(κ) holds, then the following are equivalent:
• There exists a coloring c : [κ]² → θ such that, for every X ⊆ κ of size κ and every τ ∈ θ, there exist x ≠ y in X such that c(x, y) = τ;
• For every Abelian group (G, +) of size κ, there exists a coloring c : G → θ such that, for all X, Y ⊆ G of size κ and every τ ∈ θ, there exist x ∈ X and y ∈ Y such that c(x + y) = τ.

As the proof of Theorem 2 will make clear, the theorem remains valid even after relaxing Definition 1 to omit the first bullet and to weaken "stationary S ⊆ κ" into "cofinal S ⊆ κ".
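Before continuing, here is a toy finitary illustration (not part of the abstract) of the monochromatic finite-sums condition in Hindman's theorem recalled above: for a concrete 2-colouring of an initial segment of ℕ, a brute-force search finds a small set X all of whose non-empty subset sums receive the same colour. The bound, set size, and colouring below are arbitrary choices made only to make the notion concrete.

```python
# Toy finitary analogue (not from the abstract) of the monochromatic
# finite-sums condition in Hindman's theorem: given a 2-colouring of an
# initial segment of N, look for a set X whose non-empty subset sums all
# receive the same colour.  Hindman's theorem itself concerns an infinite X
# inside a colouring of all of N.
from itertools import combinations

def subset_sums(xs):
    return {sum(s) for r in range(1, len(xs) + 1) for s in combinations(xs, r)}

def find_sums_monochromatic(colour, bound, size):
    """Search {1,...,bound} for a size-element set X with all non-empty
    subset sums at most bound and of a single colour."""
    for xs in combinations(range(1, bound + 1), size):
        sums = subset_sums(xs)
        if max(sums) <= bound and len({colour(s) for s in sums}) == 1:
            return xs
    return None

if __name__ == "__main__":
    def colour(n):
        return bin(n).count("1") % 2   # parity of the binary digit sum
    print(find_sums_monochromatic(colour, bound=63, size=3))
    # e.g. X = {3, 12, 48} works: every non-empty subset sum has even digit sum
```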
The reason we have added these extra requirements is to connect this line of investigation with other well-known problems, such as the problem of whether the product of any two κ-cc posets must be κ-cc (cf. [Rin14a]): Theorem 3. If Pℓ1(κ) holds, then there exists a κ-cc poset of size κ whose square does not satisfy the κ-cc. Now, let us consider a more informative variation of Pℓ1(κ). Definition 4. Pℓ1(κ, θ, χ) asserts the existence of a function t : [κ]2 →[κ]3 satisfying the following: • for all (α, β) ∈[κ]2, if t(α, β) = (τ ∗, α∗, β∗), then τ ∗≤α∗≤α < β∗≤β; • for all σ < χ and a family A ⊆[κ]σ consisting of κ many pairwise disjoint sets, there exists a stationary S ⊆κ such that, for all (α∗, β∗) ∈[S]2 and τ ∗< min{θ, α∗}, there exist (a, b) ∈[A]2 with t[a × b] = {(τ ∗, α∗, β∗)}. 34 Oberwolfach Report 14/2020 In [Rin12], by building on the work of Eisworth in [Eis13a, Eis13b], Rinot proved that Pℓ1(λ+, cf(λ), cf(λ)) holds for every singular cardinal λ.1 The proof of that theorem was a combination of walks on ordinals, club-guessing considerations, applications of elementary submodels, and oscillation of pcf scales. In this work, we replace the last ingredient by the oscillation oracle Pℓ6(. . .) from [Rin14b]. Our main result reads as follows: Theorem 5. For χ = cf(χ) ≥ω, Pℓ1(κ, θ, χ) holds in any of the following cases: (1) χ < χ+ < θ = κ and □(κ) holds; (2) χ < χ+ < θ = κ and Eκ ≥χ admits a stationary set that does not reflect; (3) χ < χ+ = θ < κ, κ is inaccessible, and Eκ ≥χ admits a stationary set that does not reflect at inaccessibles. Note that the principle Pℓ1(κ, θ, χ) is strictly stronger than Shelah’s princi-ple Pr1(κ, κ, θ, χ). Thus, Clause (1) improves the main result of [Rin14a] and Clause (2) improves the main result of [Rin14b]. Clause (2) is also consistently sharp, in the sense that it is consistent that for some strongly inaccessible cardinal κ, there exists a nonreflecting stationary subset of Eκ ω, and yet, Pℓ1(κ, 1, ω1) fails. The result of Clause (3) provides, in particular, an affirmative answer to a question posed by Eisworth to the first author at the Set Theory meeting in Ober-wolfach, January 2014. We also have some news on Shelah’s classical principles: Theorem 6. (1) For any infinite regular cardinal µ such that 2µ = µ+, if Pr1(µ+, µ+, µ+, µ) fails, then µ+ is a Mahlo cardinal in L; (2) For any infinite cardinal λ such that 22λ = λ++, if Pr0(λ++, λ++, λ++, λ+) fails, then λ++ is weakly compact in L. References [Eis13a] Todd Eisworth. Getting more colors I. J. Symbolic Logic, 78(1):1–16, 2013. [Eis13b] Todd Eisworth. Getting more colors II. J. Symbolic Logic, 78(1):17–38, 2013. [Hin74] Neil Hindman. Finite sums from sequences within cells of a partition of N. J. Combi-natorial Theory Ser. A, 17:1–11, 1974. [HLS17] Neil Hindman, Imre Leader, and Dona Strauss. Pairwise sums in colourings of the reals. Abh. Math. Semin. Univ. Hambg., 87(2):275–287, 2017. [Ram30] F.P. Ramsey. On a problem of formal logic. Proc. London Math. Soc., pages 264–286, 1930. [Rin12] Assaf Rinot. Transforming rectangles into squares, with applications to strong colorings. Adv. Math., 231(2):1085–1099, 2012. [Rin14a] Assaf Rinot. Chain conditions of products, and weakly compact cardinals. Bull. Symb. Log., 20(3):293–314, 2014. [Rin14b] Assaf Rinot. Complicated colorings. Math. Res. Lett., 21(6):1367–1388, 2014. [RZ20] Assaf Rinot and Jing Zhang. Transformations of the transfinite plane. 2020. Submitted March 2020. [Sie33] Waclaw Sierpi´ nski. Sur un probl` eme de la th´ eorie des relations. 
Ann. Scuola Norm. Sup. Pisa Cl. Sci. (2), 2(3):285–287, 1933. 1The first bullet of Definition 4 is not stated explicitly, but may be verified to hold in all the relevant arguments of [Eis13a, Eis13b, Rin12]. Set Theory 35 How much choice is needed to construct a discontinuous homomorphism? Christian Rosendal The closed graph theorem of Banach and Schauder is originally formulated for linear operators between Banach space, but has a well-known formulation also for groups. Namely, it states that a homomorphism G φ − →H between Polish groups whose graph is closed in G × H must also be continuous. I will present a recent generalisation of this result, relying on some ideas apparently inherent in unpublished work of Adian, which relaxes the condition of continuity at the identity of the homomorphism φ. More precisely, we show the following result. Theorem. Suppose G φ − →H is a homomorphism between Polish groups so that, for all identity neighbouhoods U ⊆G and V ⊆H, there is a finite set F ⊆U for which [ f∈F f · φ−1(V )f −1 is an identity neighbourhood in G. Then φ is continuous. This result in turn will allow us to address two seemingly unrelated issues. Namely, on the one hand, it provides a positive answer to an old question of JPR Christensen regarding the continuity of universally measurable homomorphisms between Polish groups. And, on the other hand, it gives general lower bounds on the amount of the axiom of choice needed to construct a discontinuous homomor-phism between Polish groups. In fact, under ZF + DC, we prove a quadrichotomy between various continuity properties of homomorphisms and colouring properties of the Hamming graph on products of finite spaces. These latter results are related to recent work by P. Larson and J. Zapletal. The consistency of the failure of the convergence of Kc constructions Grigor Sargsyan We will outline the proof of a recent result that ZFC alone is not sufficient to prove the convergence of Kc constructions. More specifically we will show that the failure of both squares at ω3 along with ωω 2 = ω3 has a consistency strength weaker than a Woodin cardinal that is a limit of Woodin cardinals. Earlier it was shown by Jensen-Schimmerling-Schindler-Steel that this particular com-binatorial configuration implies that Kc has a superstrong cardinals, provided it converges. 36 Oberwolfach Report 14/2020 The work combines techniques from Pmax forcing and HOD mice theory. Part of it is joint with Paul Larson. References Jensen, R., Schimmerling, E., Schindler, R., Steel, J. Stacking Mice. J. Symbolic Logic,Volume 74, Issue 1 (2009), 315–335. ZF rank-into-rank embeddings and non-definability Farmer Schlutzenberg Assume ZF throughout. A Reinhardt cardinal, introduced by William Reinhardt, is the critical point of an elementary embedding j : V →V . We consider here such embeddings, and variants like elementary j : Vδ →Vδ. Most of the results men-tioned below can be seen in the notes Reinhardt cardinals and non-definability, version v2 (will replace v1),1 arxiv.org/abs/2002.01215. Given a transitive structure M and A ⊆M, we say that A is definable over M from parameters iffthere is a formula ϕ in the language of set theory and p ∈M such that A = {x ∈M M | = ϕ(x, p)}. Suzuki proved in No elementary embedding from V into V is definable from parameters what is stated by its title, working in ZF. We generalize this result as follows: Theorem 1 (§3 of ). Assume ZF. 
Let δ be an ordinal and j : Vδ →Vδ be Σ1-elementary and definable over Vδ from the parameter x ∈Vδ, and j ̸= id. Then: δ = β + 1 is a successor, and if j is fully elementary then rank(x) = β. Recall that if j : Vλ+1 →Vλ+1 is elementary and λ a limit ordinal then j is definable over Vλ+1 from parameter x = j ↾Vλ, because given A ⊆Vλ, we have j(A) = S α<λ j(A ∩Vα). Assume ZF and let δ ≤OR be a limit and j : Vδ →Vδ be Σ1-elementary. Given A ⊆Vδ we define j(A) = [ α<δ j(A ∩Vα). The finite iterates jn : Vδ →Vδ are defined by setting j1 = j and jn+1 = jn(jn). In the proof of Suzuki’s fact, it is useful that if M, N | = ZF are proper classes and j : M →N is Σ1-elementary then j is fully elementary. A generalization: Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) un-der Germany’s Excellence Strategy EXC 2044-390685587, Mathematics M¨ unster: Dynamics-Geometry-Structure. 1Errata to v1: In the development of extenders, the assertion on p.21 that “for every finite set a there is a finite set b with a ⊆b, ..., and b extensional”, is false; a counterexample will be given in v2, as will a corrected development. In the introduction, Theorem 3.2 is over-stated. The reference to definability of set-grounds in Footnote 7, p. 46, is incorrect. Set Theory 37 Theorem 2 (§3 in ). Assume ZF and let δ ∈OR be a limit and j : Vδ →Vδ be Σ1-elementary. Then there is m < ω such that for all n ≥m, jn is fully elementary, and in fact for all A ⊆Vδ, jn : (Vδ, A) →(Vδ, jn(A)) is fully elementary in the expanded language with predicate for A, jn(A). Together with Andreas Lietz, we show that the m < ω is necessary: Theorem 3 (§3 in ). Assume ZF. Suppose j : Vλ+ →Vλ+ is elementary and j ̸= id. Let m < ω. Then there is a limit η < λ+ such that letting k = j ↾Vη, then k : Vη →Vη is Σ1-elementary, but k1, k2, . . . , km are not Σ2-elementary. Beyond direct definability over Vδ, we have: Theorem 4 (§8 in ). Assume ZF + V = L(Vδ) for a limit δ of uncountable cofinality. Then there is no Σ1-elementary j : Vδ →Vδ. Goldberg proved this earlier under the further assumption that δ is inaccessible, via a different method. The situation is, however, more subtle when cof(δ) = ω. The proof of the theorem above uses a development of the theory of ultrapowers by extenders under ZF; this is also used in to show that if there is a proper class of weak L¨ owenheim-Skolem cardinals, then being the critical point of an elementary j : V →M with M transitive is first-order. We next consider some connections between HOD = HODV and the iterates of (V, j) when (V, j) | = ZF and j : V →V is elementary. Let M0 = (V, j) and Mα = (Nα, jα) where Mα+1 = (Nα, jα(jα)), and we take direct limits at limit α. Hamkins observed that the usual arguments show that all Mα are wellfounded, (Nα, jα) | = ZF and jα : Mα →Mα is elementary. Let λ = supn<ω sup(crit(jn)). Theorem 5 (§9 in ). Suppose (V, j) | = ZF where j : V →V is elementary and let λ, etc, be as above. Then: (1) λ = crit(jω) is inaccessible in Nω, (2) V HOD λ = V HODNω λ and V HOD λ+1 ⊆V HODNω λ+1 and HOD ⊆HODNω jω , (3) HOD ̸⊆T α∈OR Nα, (4) V is not a set-generic extension of Nα for α ≥ω, (5) every set X is contained in a set-generic extension of Nα, for each α, (6) there is G ∈Nω which is set-generic over HOD and such that HOD[G] | =“λ is weakly compact” and V HOD[G] λ = V HOD λ = V HODNω λ , Question 6. Is λ weakly compact in HOD? 
Using the analysis of the iterates Mα, one can deduce in second-order set theory ZF2 that if X is a set and A is a class, then (i) if V = HOD(X) there is no Reinhardt cardinal, and (ii) if V = HODA(X) then V is not total Reinhardt and there is no Berkeley cardinal.² (² Goldberg and Usuba have independently proved stronger results, in particular that if there is a Reinhardt cardinal then AC is not set-forceable, via a quite different proof, which is more direct.)

Finally recall that given a set X, the mouse Mn(X) is the least proper class mouse over X with n Woodin cardinals, and M#n(X) is its sharp. The following will appear in v2 of the notes; the case n = 0 (all sets have sharps) is due to Goldberg:

Theorem 7. Suppose (V, j) ⊨ ZF and j : V → V is elementary. Then M#n(X) exists and is OR-iterable (above X) for every set X.

References
Farmer Schlutzenberg. Reinhardt cardinals and non-definability. arxiv.org/abs/2002.01215

Transfinite sequences of topologies, descriptive complexity, and approximating equivalence relations
Sławomir Solecki
(Research supported by NSF grant DMS-1800680)

The aim of the present work is to describe the following general phenomenon: under appropriate topological conditions, increasing transfinite sequences of topologies interpolating between two given topologies σ ⊆ τ stabilize at τ and, under appropriate additional descriptive set theoretic conditions, the stabilization occurs at a countable stage of the interpolation. Increasing sequences of topologies play an important role in certain descriptive set theoretic considerations; see, for example, [8, Section 1], [2, Sections 5.1–5.2], [1, Section 2], [9, Section 2], [6, Chapter 6], [5, Section 3], [10, Sections 2–4], and, implicitly, [3, Sections 3–5]. In this context, such sequences of topologies are often used to approximate an equivalence relation by coarser, but more manageable, ones. We relate our theorems on increasing interpolations between two topologies to this theme. The results of this work are expected to have applications to a Scott-like analysis of quite general Borel equivalence relations.

Filtrations. Unless otherwise stated, all topologies are assumed to be defined on a fixed set X. We write clτ and intτ for the operations of closure and interior with respect to a topology τ. If τ is a topology and x ∈ X, by a neighborhood of x we understand a subset of X that contains x in its τ-interior. A neighborhood basis of τ is a family A of subsets of X such that for each x ∈ X and each neighborhood B of x, there exists A ∈ A that is a neighborhood of x and A ⊆ B. So a neighborhood basis need not consist of open sets. A topology is called Baire if a countable union of nowhere dense sets has dense complement.

The notion of filtration defined below is the main new notion of the work. Let σ ⊆ τ be topologies and let ρ be an ordinal. A transfinite sequence (τξ)ξ<ρ of topologies is called a filtration from σ to τ if
(1) σ = τ0 ⊆ τ1 ⊆ · · · ⊆ τξ ⊆ · · · ⊆ τ
and, for each α < ρ, if F is τξ-closed for some ξ < α, then
(2) intτα(F) = intτ(F).
Note that if F ⊆ X is an arbitrary set and (τξ)ξ is a transfinite sequence of topologies fulfilling (1), then intτα(F) ⊆ intτ(F) for each α. So condition (2) says that if F is simple from the point of view of τα, that is, if F is τξ-closed for some ξ < α, then intτα(F) is as large as possible, in fact, equal to intτ(F).

We write (τξ)ξ≤ρ for (τξ)ξ<ρ+1. Each filtration from σ to τ can be extended to all ordinals by setting τξ = τ for all ξ ≥ ρ.
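As an aside (not part of the abstract), condition (2) can be tested mechanically for topologies on a finite set. The toy Python check below, with hypothetical helper names, does exactly that for a short chain of finite topologies; it is only meant to make the definition concrete, since the theorems of this work concern infinite spaces with Baire and metrizability hypotheses.

```python
# Toy finite check (not from the abstract) of the filtration condition (2):
# whenever F is tau_xi-closed for some xi < alpha, its tau_alpha-interior must
# already equal its interior in the final topology tau.  A topology on the
# finite set X is represented as a list of open sets (frozensets).
from itertools import combinations

X = frozenset(range(4))

def interior(F, topology):
    return frozenset().union(*[U for U in topology if U <= F])

def closed_sets(topology):
    return {X - U for U in topology}

def is_filtration(taus, tau):
    """taus: an increasing chain of topologies starting at sigma; tau: the target."""
    for alpha, tau_alpha in enumerate(taus):
        for xi in range(alpha):
            for F in closed_sets(taus[xi]):
                if interior(F, tau_alpha) != interior(F, tau):
                    return False                 # condition (2) fails
    return True

if __name__ == "__main__":
    discrete = [frozenset(c) for r in range(5) for c in combinations(X, r)]
    sigma = [frozenset(), X]                     # indiscrete topology
    tau1 = [frozenset(), frozenset({0, 1}), X]
    tau2 = [frozenset(), frozenset({0, 1}), frozenset({0, 1, 2}), X]
    print(is_filtration([sigma, tau1], discrete))        # True
    print(is_filtration([sigma, tau1, tau2], discrete))  # False: F = {2, 3} is
    # tau1-closed, its tau2-interior is empty, but its discrete interior is F.
```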
For this reason, it will be harmless to assume that a filtration is defined on all ordinals. Let σ ⊆τ be two topologies. The first question is to determine whether a given filtration (τξ)ξ from σ to τ reaches τ, that is, whether there exists an ordinal ξ with τξ = τ. Since all the topologies τξ are defined on the same set, there exists an ordinal ξ0 such that τξ = τξ0 for all ξ ≥ξ0; the question is whether τξ0 = τ. If the answer happens to be positive, we want to obtain information on the smallest ordinal ξ for which τξ = τ. We achieve these goals in Theorems 1 and 2 assuming that τ is regular and Baire and that it has a neighborhood basis consisting of sets that are appropriately definable with respect to σ. So, informally speaking, termination at τ of a filtration from σ to τ has to do with the attraction exerted by τ, which is expressed by τ being Baire, and with the distance from σ to τ, which is expressed by the complexity, with respect to σ, of a neighborhood basis of τ. Given an equivalence relation E on a set X, with X equipped with a topology τ, we can define a canonical equivalence relation that approximates E from above: make x, y ∈X equivalent when the τ-closures of the E equivalence classes of x and y are equal. Given a filtration, this procedure gives rise to a transfinite sequence of upper approximations of E. We consider the question of these approximations stabilizing to E, and answer it in Theorem 4. In addition to the results described above, we also define and study a canonical, slowest filtration from σ to τ. Statements of results. Recall that C-sets with respect to a topology is the smallest σ-algebra of sets closed under the Souslin operation and containing all open sets with respect to this topology. Theorem 1. Let σ ⊆τ be topologies. Assume that τ is regular, Baire, and has a neighborhood basis consisting of sets that are C-sets with respect to σ. Let (τξ)ξ be a filtration from σ to τ. If τξ0 = τξ0+1 for some ξ0, then τξ0 = τ. Theorem 2 contains a more refined version of stabilization. It makes a connec-tion with descriptive set theoretic complexity of neighborhood bases. Note that the assumptions of Theorem 2 ensure that Theorem 1 applies, but the conclusion of Theorem 2 gives an upper estimate on the smallest ξ0 with τξ0 = τ, which we do not get from Theorem 1. 40 Oberwolfach Report 14/2020 Theorem 2. Let σ ⊆τ be topologies, with τ being regular and Baire. For an ordinal α ≤ω1, let (τξ)ξ≤α be a filtration from σ to τ, with τξ metrizable, for ξ < α, and τα Baire. If τ has a neighborhood basis consisting of sets in S ξ<α Π0 1+ξ with respect to σ, then τα = τ. Remark 3. 1. Note that in Theorem 2 we do not make any separability assump-tions. 2. One can relax the assumption of metrizability; it suffices to assume that τξ are paracompact and that sets that are τξ-closed are intersections of countably many sets that are τξ-open, for all ξ < α. 3. When α = ω1, then, of course, S ξ<α Π0 1+ξ is the family of all Borel sets with respect to σ. Fix (τξ)ξ<ρ, a transfinite sequence of topologies as in (1). Let E be an equiva-lence relation on X. There exists a natural way of producing a transfinite sequence of upper approximations of E using (τξ)ξ<ρ. For each ξ < ρ define the equivalence relation Eξ on X by letting xEξy if and only if clτξ([x]E) = clτξ([y]E). Note that (3) E0 ⊇E1 ⊇· · · ⊇Eξ ⊇· · · ⊇E. The main question is when the transfinite sequence of equivalence relations in (3) stabilizes at E. Theorem 4. Let σ ⊆τ be topologies, with τ being Baire. 
Let α ≤ω1, and let (τξ)ξ<α be a filtration from σ to τ, with τξ completely metrizable for each ξ < α. Assume E is an equivalence relation whose equivalence classes are τ-open. If all E equivalence classes are in S ξ<α Π0 1+ξ with respect to σ, then E = T ξ<α Eξ. Remark 5. 1. Each E equivalence class being τ-open, as in Theorem 4, is equiv-alent to saying that E is a (τ × τ)-open subset of X × X. 2. In Theorem 4, if α < ω1 is a successor, say α = β + 1, then the conclusion reads: if all equivalence classes of E are in Π0 1+β with respect to σ, then E = Eβ. References H. Becker, Polish group actions: dichotomies and generalized elementary embeddings, J. Amer. Math. Soc. 11 (1998), 397–449. H. Becker, A. S. Kechris, The Descriptive Set Theory of Polish Group Actions, London Math-ematical Society Lecture Note Series, 232, Cambridge University Press, 1996. I. Ben Yaacov, M. Doucha, A. Nies, T. Tsankov, Metric Scott analysis, Adv. Math. 318 (2017), 46–87. O. Drucker, Hjorth analysis of general Polish group actions, arXiv:1512.06369, December 2015. I. Farah, S. Solecki, Borel subgroups of Polish groups, Adv. Math. 199 (2006), 499–541. G. Hjorth, Classification and Orbit Equivalence Relations, Mathematical Surveys and Mono-graphs, 75. American Mathematical Society, 2000. Set Theory 41 G. Hjorth, The fine structure and Borel complexity of orbits, www.math.ucla.edu/∼greg/ fineorbits.pdf, November 2010. A. Louveau, A separation theorem for Σ1 1 sets, Trans. Amer. Math. Soc. 260 (1980), 363–378. S. Solecki, Polish group topologies, in Sets and Proofs, London Mathematical Society Lecture Notes Series 258, Cambridge University Press 1999, pp. 339–364. S. Solecki, The coset equivalence relation and topologies on subgroups, Amer. J. Math. 131 (2009), 571–605. Ramsey degrees of products of infinite sets Stevo Todorˇ cevi´ c We consider finite colourings of finite products X1×X2×···×Xn of infinite sets and determine the minimal number of colours a subproduct Y1 ×Y2×···×Yn of infinite subsets could achieve. It is well known and easily seen that if X1 ×X2 ×···×Xn is a finite sequence of countable infinite sets then there is a colouring of their product Qn i=1 Xi with n! colours each of which shows up in any subproduct Qn i=1 Yi with Yi ⊆Xi are infinite. For example, letting Xi = N for all i and colouring a given one-to-one sequence (k1, k2, ..., kn) of integers by the permutation σ of {1, 2, ..., n} such that σ(i) < σ(j) is equivalent to ki < kj for all i < j, it is clear that all permutations show up in any n-product of infinite subsets of N. On the other hand, a simple application of Ramsey’s theorem shows that for every finite colouring of Qn i=1 Xi there exist infinite Yi ⊆Xi such that the subproduct Qn i=1 Yi uses no more than n! colours. As said above, we investigate this phenomenon in the case when some of the sets Xi are uncountable and in fact have different cardinalities. For example, we show that if one of the sets Xi is uncountable then we can find a subproduct of infinite sets that use no more than (n −1)! colours and that this number in general cannot be lowered. on the other hand, if among the sets Xi one can find sets of three different cardinalities then the minimal number of colours a subproduct of infinite subsets could drops to (n −1)!, and so on. More precisely, we shall see that there is a general result of this kind that naturally fits in the classical set-theoretic study of the Ramsey degree phenomenon. References P. Erd˝ os and A. Hajnal. Unsolved problems in set theory. in. D.S. 
Scot ed. Axiomatic Set Theory, Proc. Sympos. Pure Math., Vol 13, Part I. Amer, Math, Soc., Providence 1971. pp.17–48. S. Todorˇ cevi´ c. Walks on ordinals and their characteristics. Progress in Mathematics No.263, Birkh¨ auser, Basel 2007. S. Todorˇ cevi´ c. Introduction to Ramsey spaces. Annals of Mathematics Studies. No.174, Princeton University Press, Princeton 2010. N.H. Williams. Combinatorial set theory. North-Holland Publ. Co., Amsterdam, 1977. The research on this paper is partially supported by grants from NSERC(455916) and CNRS(UMR7586) 42 Oberwolfach Report 14/2020 Universal minimal flows of homeomorphism groups of high-dimensional manifolds are not metrizable Todor Tsankov The universal minimal flow (UMF) of a topological group G is a canonical ob-ject associated to the group which is of prime importance in abstract topological dynamics. For most classical groups (for example, infinite discrete and more gen-erally, locally compact, non-compact), the UMF is a non-metrizable space that is difficult to describe explicitly. Somewhat surprisingly, for many large Polish groups of interest, the UMF is a metrizable compact space and a rather concrete object that carries interesting combinatorial and dynamical information. The first interesting case of a non-trivial, metrizable UMF of a Polish group was computed by Pestov who proved that the UMF of the homeomorphism group of the circle is the circle itself. This naturally led to the question whether a similar result is true for homeomorphism groups of other manifolds (or more general topo-logical spaces). A few years later, Uspenskij proved that the action of a group on its UMF is never 3-transitive, thus giving a negative answer to the question for a vast collection of topological spaces. Still, the question of metrizability of their UMFs remained open and he asked specifically whether the UMF of the homeo-morphism group of the Hilbert cube is metrizable. We give a negative answer to this question for the Hilbert cube and all closed manifolds of dimension at least 2, thus showing that metrizability of the UMF of a homeomorphism group is es-sentially a one-dimensional phenomenon. In dimension 3 or higher, we also prove that the universal minimal flow does not have a comeager orbit (which implies non-metrizability). References V. Uspenskij, On universal minimal compact G-spaces. Proceedings of the 2000 Topology and Dynamics Conference (San Antonio, TX), (2000), 301–308. Y. Gutman, T. Tsankov, A. Zucker, Universal minimal flows of homeomorphism groups of high-dimensional manifolds are not metrizable, Preprint arXiv:1910.12220. Hyperfinite subequivalence relations of treed equivalence relations Anush Tserunyan (joint work with Robin Tucker-Drob) A large part of measured group theory studies structural properties of countable groups that hold “on average”. This is made precise by studying the orbit equiv-alence relations induced by free measurable actions of these groups on a standard probability space. In this vein, the amenable groups correspond to hyperfinite equivalence relations, and the free groups to the treeable ones. In joint work with R. Tucker-Drob, we give a detailed analysis of the structure of hyperfinite sube-quivalence relations of a treed equivalence relation on a standard probability space, deriving the analogues of structural properties of amenable subgroups (copies of Z) Set Theory 43 of a free group. 
Most importantly, just like every such subgroup is contained in a unique maximal one, we show that even in the non-pmp setting, every hyperfinite subequivalence relation is contained in a unique maximal one. We now define all the notions mentioned in the previous paragraph and explain its content in more detail. Let (X, µ) be a standard probability space, which may as well be equal to [0, 1] with Lebesgue measure. An equivalence relation E on X is said to be Borel if it is a Borel subset of X2. We say that E is countable (resp. finite) if each E-class is countable (resp. finite). Amenable groups ⇄hyperfinite equivalence relations. Recall that a count-able group is amenable if it admits an invariant mean, i.e. a finitely additive prob-ability measure defined on all subsets of the group and invariant under (left) trans-lation. An equivalence relation E on X is called hyperfinite (resp. µ-hyperfinite) if it is equal to an increasing union of finite Borel equivalence relations (resp. modulo a µ-null set). It is a theorem of Slaman and Steel , 6 that hyperfinite equiv-alence relations are precisely the orbit equivalence relations of Borel actions of Z. In the measurable context, the Ornstein–Weiss theorem states that in fact mea-surable actions of all countable amenable groups induce µ-hyperfinite equivalence relations. Free groups ⇄treeable equivalence relations. A Borel graph G on X is just an irreflexive symmetric Borel subset of X2. An equivalence relation E is called treeable (resp. µ-treeable) if there is an acyclic Borel graph T whose connected components (trees) are precisely the E-classes; call such a T a treeing of E. It is clear that any free measurable action of the free group Finn on n ≤∞generators induce a µ-treeable equivalence relation E because the action of the standard gen-erators of Fn provides a 2n-regular treeing of E. Conversely, a theorem of Hjorth says that up to, so-called, stable orbit equivalence, all probability measure preserving (pmp) µ-treeable equivalence relations arise in this fashion. (We call a Borel equivalence relation E on (X, µ) probability measure preserving (pmp) if every Borel automorphism γ of X with graph(γ) ⊆E preserves the measure µ.) Hyperfinite inside treeable. We study hyperfinite subequivalence relations F of a treeable equivalence relation E on (X, µ) and their interaction with a fixed treeing T of E. An analogy to keep in mind is: a copy of Z inside F2. The following was proven in for not just treeable, but more generally, for equivalence relations acting on a bundle of hyperbolic spaces: Theorem 1 (Bowen). Let E be a treeable equivalence relation on (X, µ). If E is pmp, then every µ-hyperfinite subequivalence relation F ⊆E admits a unique maximal µ-hyperfinite extension F ⊆E. The proof of this, and in general, any analysis of µ-hyperfinite subequivalence relations F of a treeable equivalence relation E is done using end selection: a result of Adams and Jackson, Kechris, and Louveau (Lemma 3.21) that given a 44 Oberwolfach Report 14/2020 treeing T of E, each F-class measurably selects zero, one or two ends of T , and there is a maximum such selection. Since end selection holds without the pmp assumption, it is natural to ask whether Theorem 1 is true more generally for all E (not necessarily pmp). Towards answering this question, we first realized that Theorem 1 easily follows from the following observation, which we vaguely state here: Lemma 2. 
Let E be a treeable equivalence relation on (X, µ), T a treeing of E, and F ⊆E a µ-hyperfinite subequivalence relation. If E is pmp, then each F-class spans exactly the ends it maximally selects. However, this lemma is false without the pmp assumption: the action of Fin2 on its boundary induces a hyperfinite equivalence relation F and a natural 4-regular treeing of it, so each F-class spans all of the continuum-many ends, yet selecting only one. Nevertheless, using other methods, we answer the question positively: Theorem 3 (Ts.–Tucker-Drob). Let E be a treeable equivalence relation on (X, µ). Every µ-hyperfinite subequivalence relation F ⊆E admits a unique maximal µ-hyperfinite extension F ⊆E. This is corollary of our main result: a complete structural analysis of F with respect to the geometry of a given treeing T of E and the Radon–Nikodym cocycle on E associated with µ (assuming without loss of generality that µ is E-quasi-invariant). References S. Adams. Trees and amenable equivalence relations. Ergodic Theory Dynam. Systems 10 (1990), 1–14. L. Bowen. Equivalence relations that act on bundles of hyperbolic spaces. Ergodic Theory and Dynamical Systems 38 (2018), no. 7, 2447–2492. R. Dougherty, S. Jackson and A. S. Kechris. The Structure of Hyperfinite Borel Equivalence Relations. Trans. of the Amer. Math. Soc. 341 (1994), no. 1, 193–225. G. Hjorth. A lemma for cost attained. Ann. Pure Appl. Logic 143 (2006), no. 1-3, 87–102. S. Jackson, A. S. Kechris, and A. Louveau. Countable Borel equivalence relations. Journal of Math. Logic 2 (2002), no. 1, 1–80. A. S. Kechris and B. Miller. Topics in Orbit Equivalence. Lecture Notes in Math., vol. 1852, Springer, 2004. D. Ornstein and B. Weiss. Ergodic theory of amenable group actions. I. The Rohlin lemma. Bull. Amer. Math. Soc. (N.S.) 2 (1980), no. 1, 161–164. T. A. Slaman and J. R. Steel. Definable functions on degrees. Cabal Seminar 81–85, Lecture Notes in Math., vol. 1333, Springer, Berlin, 1988, pp. 37–55. Set Theory 45 Tameness for Set Theory Matteo Viale This brief report accounts on the main results of [4, 5, 6] where it is shown that there is a recursive signature τ extending the signature {∈} for set theory and a definable recursive extension T in signature τ of the ∈-theory ZFC such that: • The universal fragment of T is provably invariant across set-sized forcing extensions of any of its models (cfr. Thm. 1). • T admits a model companion which is the provable fragment of the τ-theory of Hω2 in any model of MM++ (cfr. Thm. 4). Note that the model companion of a τ-theory T is the unique τ-theory S which satisfies exactly the same universal sentences of T and is model complete (i.e. given models M, N of S with M a substructure of N, M ≺N). The relevance of the above results is that they show that the notion of forcibility and consistency for Π2-properties in signature τ overlap (cfr. Thm. 4). Let ZFC−denote the theory ZFC without the powerset axiom. Let τST be a sig-nature containing predicate symbols Rψ of arity m for all bounded ∈-formulae ψ(x1, . . . , xm), function symbols fθ of arity k for for all bounded ∈-formulae θ(y, x1, . . . , xk), constant symbols ω and ∅. ZFCST ⊇ZFC is the τST-theory ob-tained adding axioms which force in each of its τST-models ∅to be interpreted by the empty set, ω to be interpreted by the first infinite ordinal, each Rψ as the class of k-tuples defined by the bounded formula ψ(x1, . . . , xk), each fθ as the l-ary class function whose graph is the extension of the bounded formula θ(x1, . . . 
, x_l, y) (whenever θ defines a functional relation); see [5, Notation 2] for details. We supplement [5, Notation 2] with the following:

Notation 1.
• τ_{NSω1} is the signature τ_{ST} ∪ {ω1} ∪ {NSω1}, with ω1 a constant symbol and NSω1 a unary predicate symbol.
• T_{NSω1} is the τ_{NSω1}-theory given by T_{ST} together with the axioms "ω1 is the first uncountable cardinal" and ∀x [(x ⊆ ω1 is non-stationary) ↔ NSω1(x)].
• ZFC⁻_{NSω1} is the τ_{NSω1}-theory ZFC⁻_{ST} + T_{NSω1}.
• Accordingly we define ZFC_{NSω1}.

We can immediately formulate our first main result:

Theorem 1. Assume (V, τ^V_{NSω1}) models ZFC_{NSω1} + "there are class many Woodin cardinals". Then the Π1-theory of V for the language τ_{NSω1} ∪ UB is invariant under set sized forcings.

(We follow the convention introduced in [5, Notation 2.1] to define (V, τ^V_{NSω1}). The author acknowledges support from INDAM through GNSAGA and from the project PRIN 2017-2017NWTM8R "Mathematical Logic: models, sets, computability". MSC: 03E35, 03E57, 03C25.)

To formulate our second result we need more notation and definitions. Let UB denote the family of universally Baire sets (see [5, Section 4.2] for details), and let L(UB) denote the smallest transitive model of ZF which contains UB. We briefly introduce the key definitions of MAX(UB) and (∗)-UB, which are preliminary to the formulation of our main results.

Definition 2. MAX(UB): There are class many Woodin cardinals in V, and for all G V-generic for some forcing notion P ∈ V:
(1) Any subset of (2^ω)^{V[G]} definable in (H_{ω1}^{V[G]} ∪ UB^{V[G]}, ∈) is universally Baire in V[G].
(2) Let H be V[G]-generic for some forcing notion Q ∈ V[G]. Then
(H_{ω1}^{V[G]} ∪ UB^{V[G]}, ∈) ≺ (H_{ω1}^{V[G][H]} ∪ UB^{V[G][H]}, ∈),
where elementarity is witnessed via the map defined by A ↦ A^{V[G][H]} for A ∈ UB^{V[G]} and the identity on H_{ω1}^{V[G]} (see [5, Notation 4.6] for the definition of A^{V[G][H]}).

We observe that MAX(UB) is a (slightly weaker) form of sharp for the family of universally Baire sets which holds if V has class many Woodin cardinals and is a generic extension obtained by collapsing a supercompact cardinal to become countable (see [3, Thm 3.4.17]). Moreover, if MAX(UB) holds in V, it remains true in all further set forcing extensions of V. It is open whether MAX(UB) is a direct consequence of suitable large cardinal axioms.

We now turn to the definition of (∗)-UB, a natural maximal strengthening of Woodin's axiom (∗). Key to all results of this report is an analysis of the properties of generic extensions of L(UB) by Pmax. In this analysis MAX(UB) is used to argue (among other things) that all sets of reals definable in L(UB) are universally Baire, so that most of the results established on the properties of Pmax for L(R) can also be asserted for L(UB). We will not define the Pmax forcing here; see the references below for this topic.

Definition 3. Let A be a family of dense subsets of Pmax.
• (∗)-A holds if NSω1 is saturated (see [3, Section 1.6, p. 39] for a discussion of saturated ideals on ω1) and there exists a filter G on Pmax meeting all the dense sets in A.
• (∗)-UB holds if NSω1 is saturated and there exists an L(UB)-generic filter G on Pmax.

Woodin's definition of (∗) [2, Def. 7.5] is equivalent to (∗)-A + "there are class many Woodin cardinals" for A the family of dense subsets of Pmax existing in L(R).

Notation 2.
• σ_{ST} is the signature containing a predicate symbol S_φ of arity n for any τ_{ST}-formula φ with n-many free variables.
• σ_{ω,NSω1} is the signature τ_{ST} ∪ σ_{ST}.
• T_{l-UB} is the σ_{ω,NSω1}-theory given by the axioms
∀x_1 … x_n [S_ψ(x_1, …, x_n) ↔ (⋀_{i=1}^{n} x_i ⊆ ω^{<ω} ∧ ψ^{L(UB)}(x_1, …, x_n))]
as ψ ranges over the τ_{ST}-formulae.
• ZFC*⁻_{l-UB} is the σ_ω-theory ZFC⁻_{ST} ∪ T_{l-UB};
• ZFC*⁻_{l-UB,NSω1} is the σ_{ω,NSω1}-theory ZFC⁻_{NSω1} ∪ T_{l-UB};
• Accordingly we define ZFC*_{l-UB}, ZFC*_{l-UB,NSω1}.

A key observation is that ZFC⁻_{ST}, ZFC⁻_{NSω1}, ZFC*⁻_{l-UB}, ZFC*⁻_{l-UB,NSω1} are all definable extensions of ZFC⁻; more precisely, any ∈-model (M, E) of ZFC⁻ admits a unique extension to a τ-structure satisfying the extra axioms outlined in the above items, for τ among the signatures written above (for τ_{ST} ∪ {ω1, NSω1} the ∈-model must satisfy the sentence stating the existence of a smallest uncountable cardinal). The same considerations apply to ZFC_{ST}, ZFC_{NSω1}, ZFC*_{l-UB}, ZFC*_{l-UB,NSω1}.

Theorem 4. Let T be any σ_{ω,NSω1}-theory extending
ZFC*_{l-UB,NSω1} + MAX(UB) + "there is a supercompact cardinal and class many Woodin cardinals".
Then T has a model companion T*. Moreover, the following are equivalent for any Π2-sentence ψ for σ_{ω,NSω1}:
(A) T* ⊢ ψ;
(B) For any complete theory S ⊇ T, S_∀ ∪ {ψ} is consistent;
(C) T proves ∃P (P is a partial order ∧ ⊩_P ψ^{Ḣ_{ω2}});
(D) T proves L(UB) ⊨ [Pmax ⊩ ψ^{Ḣ_{ω2}}];
(E) T_∀ + ZFC*_{l-UB,NSω1} + MAX(UB) + (∗)-UB ⊢ ψ^{H_{ω2}}.
(Here Ḣ_{ω2} denotes a canonical P-name for H_{ω2} as computed in the generic extension by P.)

Crucial to the proof of Theorem 4 is the recent breakthrough of Asperó and Schindler establishing that (∗)-UB follows from MM^{++}.

Acknowledgements: This research has been completed while visiting the Équipe de Logique Mathématique of the IMJ in Paris 7 in the fall semester of 2019. The author thanks Boban Veličković, David Asperó, and Giorgio Venturi for the many fruitful discussions held on the topics of the present report.

References
[1] D. Asperó and R. Schindler. MM^{++} implies (∗). (2019)
[2] P. B. Larson. Forcing over models of determinacy. In Handbook of set theory. Vols. 1, 2, 3, pages 2121–2177. Springer, Dordrecht, 2010.
[3] P. B. Larson. The stationary tower. Volume 32 of University Lecture Series. American Mathematical Society, Providence, RI, 2004. Notes on a course by W. Hugh Woodin.
[4] G. Venturi and M. Viale. The model companions of set theory. 2019.
[5] M. Viale. Model companionship versus generic absoluteness I. 2020.
[6] M. Viale. Model companionship versus generic absoluteness II. 2020.

Rigidity conjectures in C∗-algebras
Alessandro Vignati

We study automorphism groups of corona C∗-algebras. C∗-algebras are Banach self-adjoint subalgebras of B(H), the algebra of bounded operators on a complex Hilbert space H. Via the Gelfand transform, abelian C∗-algebras arise as algebras of continuous functions on locally compact spaces, so the study of C∗-algebras can be viewed as noncommutative topology. In the same way in which one associates to a locally compact space X its Čech-Stone compactification βX and its remainder βX \ X, to a nonunital C∗-algebra A one associates its multiplier algebra M(A) and its corona Q(A) = M(A)/A. If A = C0(X), then M(A) = C(βX) and Q(A) = C(βX \ X), hence coronas provide noncommutative analogues of Čech-Stone remainders. As automorphisms of C(βX \ X) correspond to homeomorphisms of βX \ X, the study of automorphisms of commutative coronas feeds back into topology. The interest in homeomorphism groups of Čech-Stone remainders takes its origin from the work of Rudin, Shelah, and Veličković among others ([6, 7, 8]), who proved that the existence of a nontrivial homeomorphism of βω \ ω depends on set theory.
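Before moving on, it may help to spell out the identification of commutative coronas with Čech-Stone remainders quoted above. The following display is only a reminder of standard facts (in particular, that the multiplier algebra of C0(X) is C_b(X) ≅ C(βX) is taken from the general theory and not proved here):

```latex
% For X locally compact Hausdorff and A = C_0(X):
%   M(A) \cong C_b(X) \cong C(\beta X),
% hence
\[
  Q(A) \;=\; M(A)/A \;\cong\; C(\beta X)/C_0(X) \;\cong\; C(\beta X \setminus X),
\]
% the last isomorphism being given by restricting functions to the closed
% subset \beta X \setminus X \subseteq \beta X.
```

In particular, by Gelfand duality an automorphism of the commutative corona Q(C0(X)) is the same thing as a homeomorphism of βX \ X, which is the translation used in the rest of this abstract.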
This intuition was later brought to the setting of C∗-algebras when Phillips and Weaver, and Farah ([5, 3]) showed that whether all automorphisms of the Calkin algebra Q(H) are inner depends on set theory. (Q(H) is the quotient of B(H) by the ideal of compact operators K(H), when H is a separable Hilbert space; Q(H) is the corona algebra of K(H).) A topological notion of triviality for automorphisms of general coronas was given in [ ]. (Other, algebraic, notions of triviality were introduced, discussed, and linked with Ulam stability phenomena in [ ] and [ ].)

Conjecture 1 ([ ]). Let A be a separable nonunital C∗-algebra. Then
• CH implies that there exist 2^{ℵ1} automorphisms of Q(A) that are not topologically trivial;
• PFA implies that all automorphisms of Q(A) are topologically trivial.

We confirmed the rigidity part of the conjecture. A crucial step was obtained in [ ], where the noncommutative version of the OCA lifting theorem of [ ] was proved.

Theorem 2 ([ ]). OCA + MA_{ℵ1} imply that if A is a separable C∗-algebra then all automorphisms of Q(A) are topologically trivial.

References
[1] S. Coskey and I. Farah. Automorphisms of corona algebras, and group cohomology. Trans. Amer. Math. Soc., 366(7):3611–3630, 2014.
[2] I. Farah. Analytic quotients: theory of liftings for quotients over analytic ideals on the integers. Mem. Amer. Math. Soc., 148(702):xvi+177, 2000.
[3] I. Farah. All automorphisms of the Calkin algebra are inner. Ann. of Math. (2), 173(2):619–661, 2011.
[4] P. McKenney and A. Vignati. Forcing axioms and coronas of C∗-algebras. arXiv:1806.09676.
[5] N. C. Phillips and N. Weaver. The Calkin algebra has outer automorphisms. Duke Math. J., 139(1):185–202, 2007.
[6] W. Rudin. Homogeneity problems in the theory of Čech compactifications. Duke Math. J., 23:409–419, 1956.
[7] S. Shelah. Proper forcing, volume 940 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1982.
[8] B. Veličković. OCA and automorphisms of P(ω)/fin. Topology Appl., 49(1):1–13, 1993.
[9] A. Vignati. Rigidity conjectures. arXiv:1812.01306, 2018.

Weak Vopěnka cardinals
Trevor Wilson

Vopěnka's Principle, in one of its several equivalent formulations, says that for every proper class of structures in a common signature with any number of finitary function and relation symbols (it is equivalent to consider structures with just one binary relation, i.e. graphs, but we will not need this fact), there is a homomorphism between two structures in that class. Adámek, Rosický, and Trnková observed that Vopěnka's Principle is equivalent to the statement that no sequence of structures ⟨Mα : α ∈ Ord⟩ has both of the following properties: (1) whenever α ≤ β there is a unique homomorphism Mα → Mβ, and (2) whenever α < β there is no homomorphism Mβ → Mα. They then defined Weak Vopěnka's Principle as the dual statement (obtained by reversing the arrows), which says that no sequence of structures ⟨Mα : α ∈ Ord⟩ has both of the following properties: (1) whenever α ≤ β there is a unique homomorphism Mβ → Mα, and (2) whenever α < β there is no homomorphism Mα → Mβ. They showed that this dual statement is in fact a consequence of Vopěnka's Principle, justifying its "weak" designation. Similar to the notion of a Vopěnka cardinal, we may define the notion of a weak Vopěnka cardinal as a cardinal below which the local version of Weak Vopěnka's Principle holds. First we define:

Definition 1.
A weak Vopěnka sequence for a regular cardinal κ is a sequence of structures ⟨Mα : α < κ⟩ in a common signature with fewer than κ function and relation symbols such that (1) whenever α ≤ β < κ there is a unique homomorphism Mβ → Mα, and (2) whenever α < β < κ there is no homomorphism Mα → Mβ.

Example 2. The sequence of unital rings ⟨Z/2^iZ : i < ω⟩ is a weak Vopěnka sequence for ω: the only (unital) homomorphisms among these rings are the ones mapping n + 2^jZ to n + 2^iZ for i ≤ j.

Definition 3. A regular cardinal κ is a weak Vopěnka cardinal (or has the weak Vopěnka property) if there is no weak Vopěnka sequence for κ.

Every supercompact cardinal is a weak Vopěnka cardinal (Wilson [ ]). Since the least supercompact cardinal is not a Vopěnka cardinal, this result showed the inequivalence of Weak Vopěnka's Principle with Vopěnka's Principle, which was an open problem. It can be sharpened as follows.

Theorem 4 (Wilson [ ]). Let κ be an inaccessible cardinal. Then κ has the weak Vopěnka property if and only if it is a Woodin cardinal.

Although every Vopěnka cardinal is strong limit and therefore inaccessible, the same is not necessarily true for weak Vopěnka cardinals. Indeed, the weak Vopěnka property seems to be the "algebraic essence" of Woodinness, similar to how the super tree property ITP is the combinatorial essence of supercompactness as argued by Weiss [ ]. (Perhaps amusingly, this idea suggests that Example 2 is the essential reason that ω is not a Woodin cardinal.) One can show (using AC) that the weak Vopěnka property must fail at ω1, and that under CH it must fail at ω2 also. More generally, one can use a □*_ν sequence to construct a weak Vopěnka sequence for ν^+. On the other hand, the proof that every supercompact cardinal has the weak Vopěnka property can be modified to show:

Theorem 5. For every regular cardinal κ, if ITP(κ) holds, then κ has the weak Vopěnka property.

Because ITP(ω2) holds under PFA (Weiss [ ]) and ITP(ν^+) holds whenever ν is a countable cofinality limit of supercompact cardinals (Hachtman and Sinapova [ ]), we thereby obtain two different examples of the weak Vopěnka property at successor cardinals. Naturally, the weak Vopěnka property is weaker than ITP and we have the expected upper bound on its consistency strength:

Theorem 6. If κ is a Woodin cardinal, then it retains the weak Vopěnka property after the Mitchell forcing to make κ equal to ω2 (or to the double successor of any other regular cardinal less than κ).

For consistency strength lower bounds, the picture is somewhat murky. It seems difficult to obtain anything beyond virtual large cardinals in L from the hypothesis that ω2 is a weak Vopěnka cardinal. However, it is not hard to show that if there is a weak Vopěnka cardinal larger than ω2, then 0♯ exists. Moreover, if there is a weak Vopěnka cardinal that is countably closed, it should be possible to show that there is an inner model with a Woodin cardinal, although the details of this have not yet been worked out. Nevertheless, the idea of the weak Vopěnka property as the algebraic essence of Woodinness is supported by the following result:

Theorem 7. If κ is a weak Vopěnka cardinal and W is an inner model with the countable approximation property in which κ is inaccessible, then κ is a Woodin cardinal in W.

We end with the remark that neither the weak Vopěnka property nor the tree property implies the other, which is no surprise since neither Woodinness nor weak compactness implies the other.

References
J. Adámek, J. Rosický, and V. Trnková. Are all limit-closed subcategories of locally presentable categories reflective? In Categorical algebra and its applications (Louvain-La-Neuve, 1987), volume 1348 of Lecture Notes in Math., pages 1–18. Springer, Berlin, 1988.
S. Hachtman and D. Sinapova. The super tree property at the successor of a singular. arXiv preprint arXiv:1806.00820, 2018.
C. Weiss. Subtle and Ineffable Tree Properties. Ph.D. Thesis, Ludwig Maximilians Universität München, 2010.
C. Weiss. The combinatorial essence of supercompactness. Annals of Pure and Applied Logic, 163(11):1710–1717, 2012.
T. Wilson. Weak Vopěnka's Principle does not imply Vopěnka's Principle. Advances in Mathematics, 363, 2020.
T. Wilson. The large cardinal strength of Weak Vopěnka's Principle. arXiv preprint arXiv:1907.00284, 2020.

Coloring algebraic hypergraphs without choice
Jindrich Zapletal

An algebraic hypergraph of arity n is a subset of [R^k]^n defined by an algebraic equation with integer coefficients, for some dimension k ≥ 1. The chromatic numbers of such hypergraphs have been studied for many years, notably by Erdős, Hajnal, and Komjáth. Several years ago, Schmerl completely characterized algebraic hypergraphs of countable chromatic number. He showed that for each such hypergraph Γ, exactly one of the following holds. Either ZFC proves that χ(Γ) ≤ ℵ0, or ZFC proves that χ(Γ) > ℵ0, or there is a natural number m ≥ 1 such that ZFC proves that χ(Γ) ≤ ℵ0 is equivalent to 2^{ℵ0} ≤ ℵ_m. Moreover, there is a computer algorithm which determines the appropriate slot of this multichotomy for each algebraic hypergraph Γ.

In a similar spirit, we attempt to compare algebraic hypergraphs by their chromatic number in the weaker theory ZF+DC. Such a task must result in a chart much more complex and informative than that of Schmerl. We describe the first general result and the first independence results of this program.

Definition 1. A hypergraph Γ on a set X is redundant if for each set a ⊂ X, the set {x ∈ X : a ∪ {x} ∈ Γ} is finite.

Redundant algebraic hypergraphs include the hypergraph on R^2 of arity 3 consisting of all triples of vertices of equilateral triangles, the hypergraph on R^n of arity 4 consisting of all quadruples of vertices of squares, and the hypergraph on R of arity 3 consisting of solutions to x^3 + y^3 + z^3 − 3xyz = 0. Non-algebraic redundant hypergraphs include the hypergraph on G consisting of all solutions to x_0 x_1^{-1} x_2 x_3^{-1} = 1 for any Polish group G. An example of a hypergraph which is not redundant: the triples of vertices of isosceles triangles in R^2.

Algebraic redundant hypergraphs are relatively easy to color in choiceless set theory. We construct a balanced forcing which adds a coloring to each such hypergraph over the symmetric Solovay model and obtain the following:

Theorem 2. Let Γ be a redundant algebraic hypergraph. It is consistent relative to an inaccessible cardinal that ZF+DC holds, the chromatic number of Γ is countable, and (1) there is no uncountable sequence of pairwise distinct Borel sets of bounded rank; (2) there is no discontinuous homomorphism between Polish groups; (3) no turbulent orbit equivalence relation has a selector.

Comparing chromatic numbers of specific hypergraphs, we get theorems such as:

Theorem 3. It is consistent relative to an inaccessible cardinal that ZF+DC holds, the square hypergraph in R^2 is countably chromatic, and the equilateral triangle hypergraph in R^2 is uncountably chromatic.

Theorem 4.
It is consistent relative to an inaccessible cardinal that ZF+DC holds, the square hypergraph in R^2 is countably chromatic, and the square hypergraph in R^3 is uncountably chromatic.

Theorem 5. (Joint with Paul Larson) It is consistent relative to an inaccessible cardinal that ZF+DC holds, the equilateral triangle hypergraph in R^2 is countably chromatic, and the equilateral triangle hypergraph in R^3 is uncountably chromatic.

As a final word of caution we point out that there are algebraic hypergraphs for which a countable coloring provides objects very close to a well-ordering of the reals.

Theorem 6. (ZF) Let Γ be the right triangle hypergraph in R^2. If the chromatic number of Γ is countable, then there is a countable-to-one map from R to ω1.

Topological dynamics beyond Polish groups
Andy Zucker
(joint work with Gianluca Basso)

To each topological group G, one can construct its universal minimal flow M(G), a minimal G-flow which admits a G-map onto every other minimal flow. This property characterizes M(G) up to isomorphism. In the past two decades, much work has gone into the case where G is a Polish group, i.e. a topological group whose underlying topological space is a separable, completely metrizable space. For a number of Polish groups, M(G) turns out to be trivial, for instance when G = U(H) for an infinite-dimensional Hilbert space H, or when G = Aut(Q), the group of order-preserving bijections of the rationals with the pointwise topology. Other times, M(G) is non-trivial, but still metrizable, for instance when G = Sym(ω) or G = Homeo(2^ω). In the remaining cases, M(G) is extremely large, for instance whenever G is a locally compact, non-compact Polish group. For Polish groups, the works of Kechris, Pestov, and Todorčević [ ]; Melleray, Nguyen Van Thé, and Tsankov [ ]; Zucker [ ]; and Ben Yaacov, Melleray, and Tsankov [ ] provide an almost complete understanding of when M(G) is metrizable and what M(G) looks like if so. In the case that G = Aut(K) for K a countable ultrahomogeneous structure, M(G) is trivial iff Age(K) is a Ramsey class, and M(G) is metrizable iff Age(K) has finite Ramsey degrees. In the latter case, there is a canonical expansion of the class Age(K) so that M(G) is the associated space of expansions of K. As an example, if K is the Random graph, then M(G) is the space of all linear orders of K. If G is a general Polish group with M(G) metrizable, then there is a closed, co-precompact subgroup H with M(H) trivial and with M(G) = \widehat{G/H}, the completion of G/H with respect to the metric inherited from any compatible right-invariant metric on G. Hence in the case that G is a Polish group, the property of having M(G) metrizable is a natural dividing line, capturing those groups with "nice" dynamics.

When we move beyond the class of Polish groups, far less is known. The first effort in this direction is due to Bartošová [ ], who considers groups of the form Aut(K) for K an uncountable, ω-homogeneous structure. Endowed with the pointwise topology, Aut(K) is a topological group, and Bartošová extends many of the results of [ ] to this uncountable setting. For instance, M(Aut(K)) is trivial iff Age(K) is a Ramsey class, and if K is an uncountable, ω-homogeneous graph which embeds every finite graph, then M(G) is the space of linear orders of K. This is no longer a metrizable space, but it is still somehow "nice." So if we seek to extend our dynamical dividing line to all topological groups, a new criterion is needed.
In this work, we propose a dividing line which makes sense for any topological group. If G is a topological group and X is a G-flow, then the set of almost periodic points of X is the set AP(X) := {x ∈ X : \overline{Gx} is minimal}. We say that a topological group is CAP, for "closed AP," if for every G-flow X, the set AP(X) ⊆ X is closed. While this appears to have nothing to do with our earlier discussion, one can show that when G is Polish, then G is CAP iff M(G) is metrizable. There are a number of equivalent ways of saying that a topological group G is CAP. While the definition is the easiest to state, the most useful formulation refers to how copies of M(G) can sit inside S(G), the Samuel compactification of G. While S(G) comes with a compact topology, one can also equip S(G) with a finer topology called the UEB topology. This in turn equips M(G) ⊆ S(G) with a finer topology; one can show that this will not depend on the choice of minimal subflow of S(G). We show that G is CAP precisely when these two topologies on M(G) coincide, generalizing [ ]. We show that the class of CAP groups is closed under arbitrary products, surjective inverse limits, and group extensions. If G is CAP and H is arbitrary, we have that M(G × H) = M(G) × M(H); if {G_i : i ∈ I} is a family of CAP groups and G = ∏_i G_i, then M(G) = ∏_i M(G_i). We use this to compute the universal minimal flow of the group G = Homeo(ω1), a group recently investigated by Gheysens [ ]. When G = Aut(K) for K an uncountable, ω-homogeneous structure, we show that G is CAP iff Age(K) has finite Ramsey degrees, generalizing [ ]. We also have a weak version of the result from [ ], namely, if H ⊆ G is a closed, co-precompact subgroup with M(H) trivial and \widehat{G/H} a minimal flow, then G is CAP and M(G) = \widehat{G/H}. The converse remains open, and is related to a question asked in [ ]. As an example of this question, suppose that K is an uncountable, ω-homogeneous graph which embeds every finite graph. Then is there some linear order on K so that ⟨K, <⟩ is also ω-homogeneous?

References
D. Bartošová. Topological dynamics of automorphism groups of ω-homogeneous structures via near ultrafilters. Ph.D. Thesis, University of Toronto, 2013.
I. Ben Yaacov, J. Melleray, and T. Tsankov. Metrizable universal minimal flows of Polish groups have a comeagre orbit. Geom. Funct. Anal., 27(1) (2017), 67–77.
M. Gheysens. The homeomorphism group of the first uncountable cardinal. L'Enseignement mathématique, to appear.
A. S. Kechris, V. G. Pestov, and S. Todorčević. Fraïssé limits, Ramsey theory, and topological dynamics of automorphism groups. Geom. Funct. Anal., 15(1) (2005), 106–189.
J. Melleray, L. Nguyen Van Thé, and T. Tsankov. Polish groups with metrizable universal minimal flows. Int. Math. Res. Not., 5 (2016), 1285–1307.
A. Zucker. Topological dynamics of automorphism groups, ultrafilter combinatorics, and the Generic Point Problem. Trans. Amer. Math. Soc., 368(9) (2016), 6715–6740.

Reporter: Andreas Lietz

Participants

Dr. David Aspero, School of Mathematics, University of East Anglia, Norwich Research Park, Norwich, UNITED KINGDOM
Dr. Omer Ben-Neria, Institute of Mathematics, The Hebrew University, Givat-Ram, 91904 Jerusalem, ISRAEL
Prof. Dr. Jörg Brendle, Group of Logic, Statistics and Informatics, Graduate School of System Informatics, Kobe University, Rokko-dai 1-1, Nada, Kobe, JAPAN
Dr. William Chan, Department of Mathematics, University of North Texas, 1155 Union Circle #311430, Denton, UNITED STATES
Dr. Ruiyuan Chen, Dept. of Mathematics, University of Illinois at Urbana Champaign, 273 Altgeld Hall, 1409 West Green Street, Urbana, IL 61801, UNITED STATES
Clinton T. Conley, Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, UNITED STATES
Prof. Dr. James W. Cummings, Department of Mathematical Sciences, Carnegie Mellon University, Pittsburgh, UNITED STATES
Prof. Dr. Natasha Dobrinen, Department of Mathematics, University of Denver, C.M. Knudson Hall 302, 2390 S. York St., Denver, CO 80208, UNITED STATES
Prof. Dr. Mirna Dzamonja, School of Mathematics, University of East Anglia, Norwich, UNITED KINGDOM
Prof. Dr. Ilijas Farah, Department of Mathematics and Statistics, York University, 4700 Keele Street, Toronto, ONT M3J 1P3, CANADA
Dr. Vera Fischer, University of Vienna, Institute of Mathematics, Kurt Gödel Research Center, Währinger Str. 25, 1090 Wien, AUSTRIA
Prof. Dr. Matthew D. Foreman, Department of Mathematics, University of California, Irvine, Irvine, UNITED STATES
Prof. Dr. Moti Gitik, Department of Mathematics, School of Mathematical Sciences, Tel Aviv University, P.O. Box 39040, Ramat Aviv, Tel Aviv, ISRAEL
Gabriel Goldberg, Department of Mathematics, Harvard University, Science Center, One Oxford Street, Cambridge, UNITED STATES
Prof. Dr. Joel David Hamkins, University College Oxford, High Street, Oxford, UNITED KINGDOM
Dr. Haim Horowitz, Department of Mathematics, University of Toronto, 100 St. George Street, Toronto, CANADA
Prof. Dr. Stephen C. Jackson, Department of Mathematics, University of North Texas, P.O. Box 311430, Denton, UNITED STATES
Prof. Dr. John Krueger, Department of Mathematics, University of North Texas, P.O. Box 311430, Denton, UNITED STATES
Dr. Aleksandra Kwiatkowska, Mathematisches Institut, Universität Münster, Einsteinstrasse 62, 48149 Münster, GERMANY
Prof. Dr. Paul B. Larson, Department of Mathematics, Miami University, Oxford, OH 45056, UNITED STATES
Andreas Lietz, Mathematisches Institut, Universität Münster, Einsteinstr. 62, 48149 Münster, GERMANY
Prof. Dr. Menachem Magidor, Institute of Mathematics, The Hebrew University, Edmond J. Safra Campus, 91904 Jerusalem, ISRAEL
Andrew Marks, Department of Mathematics, UCLA, P.O. Box 951555, Los Angeles, CA 90095-1555, UNITED STATES
Prof. Dr. Heike Mildenberger, Abteilung für Mathematische Logik, Universität Freiburg, Ernst-Zermelo-Str. 1, 79104 Freiburg i. Br., GERMANY
Prof. Dr. Itay Neeman, Department of Mathematics, UCLA, 405 Hilgard Ave., Los Angeles, UNITED STATES
Alejandro Poveda, Facultat de Matematiques, Universitat de Barcelona, Gran Via 585, 08071 Barcelona, Catalonia, SPAIN
Dr. Dilip Raghavan, Department of Mathematics, National University of Singapore, 10 Lower Kent Ridge Road, Singapore, SINGAPORE
Prof. Dr. Christian Rosendal, Department of Mathematics, University of Illinois at Chicago, 851 S Morgan St, Chicago, IL 60607, UNITED STATES
Dr. Marcin Sabok, Dept. of Mathematics and Statistics, McGill University, 805 Sherbrooke Street West, Montreal, CANADA
Dr. Hiroshi Sakai, Graduate School of System Informatics, Kobe University, Rokko-dai 1-1, Nada, Kobe, JAPAN
Prof. Dr. Grigor Sargsyan, Department of Mathematics, Rutgers University, Hill Center, Busch Campus, 110 Frelinghuysen Road, Piscataway, NJ 08854, UNITED STATES
Prof. Dr. Ralf Schindler, Institut für Mathematische Logik und Grundlagenforschung, Universität Münster, Einsteinstrasse 62, 48149 Münster, GERMANY
Dr. Farmer Schlutzenberg, Mathematisches Institut, Universität Münster, Einsteinstrasse 62, 48149 Münster, GERMANY
Prof. Dr. Dima Sinapova, Department of Mathematics, Statistics and Computer Science, M/C 249, University of Illinois at Chicago, Chicago, UNITED STATES
Prof. Dr. Stevo Todorcevic, Department of Mathematics, University of Toronto, Toronto, CANADA
Dr. Nam Trang, Department of Mathematics, University of North Texas, Denton, TX 76203, UNITED STATES
Dr. Todor Tsankov, Institut Camille Jordan, Université Claude Bernard - Lyon 1, 43, boulevard du 11 novembre 1918, 69622 Villeurbanne Cedex, FRANCE
Dr. Anush Tserunyan, Department of Mathematics, University of Illinois at Urbana-Champaign, 273 Altgeld Hall, 1409 West Green Street, Urbana, UNITED STATES
Dr. Spencer Unger, Einstein Institute of Mathematics, The Hebrew University, Givat Ram, 91904 Jerusalem, ISRAEL
Dr. Andrea Vaccaro, Department of Mathematics, Ben Gurion University of the Negev, 84105 Beer-Sheva, ISRAEL
Prof. Dr. Boban D. Velickovic, IMJ-PRG, Université de Paris, 8 Place Aurélie Nemours, P.O. Box 7012, 75205 Paris Cedex 13, FRANCE
Prof. Matteo Viale, Dipartimento di Matematica, Università degli Studi di Torino, Via Carlo Alberto 10, 10123 Torino, ITALY
Dr. Alessandro Vignati, Université de Paris, Institut des Mathématiques de Jussieu, Paris Rive-Gauche (IMJ-PRG), 8 Place Aurélie Nemours, 75013 Paris, FRANCE
Dr. Trevor Wilson, Department of Mathematics, Miami University, Oxford, UNITED STATES
Prof. Dr. Jindrich Zapletal, Department of Mathematics, University of Florida, 358 Little Hall, Gainesville, UNITED STATES
Prof. Dr. Martin Zeman, Department of Mathematics, University of California, Irvine, Irvine, CA 92697-3875, UNITED STATES
Andy Zucker, Université Paris Diderot, UFR de Mathématiques, Bâtiment Sophie Germain, 75205 Paris Cedex 13, FRANCE
363
The Nutcracker – Sugar Plum pas de deux: Adagio (Nuñez, Muntagirov, The Royal Ballet)
Royal Ballet and Opera · 1,480,000 subscribers · 8,029,794 views · 139,108 likes · Posted: 26 Dec 2018

Description
A Christmas treat for the whole family and a classic with a special place in the hearts of ballet fans around the world. Principals of The Royal Ballet Marianela Nuñez and Vadim Muntagirov perform the Adagio from the Sugar Plum grand pas de deux in Act II of The Nutcracker. Peter Wright's enchanting production of The Nutcracker will be performed at the Royal Opera House 22 November 2025–5 January 2026. The Nutcracker is available to watch now – along with over 80 other extraordinary productions – on Royal Ballet and Opera Stream.

Peter Wright's interpretation of The Nutcracker has been enchanting children and adults alike since its first performance by The Royal Ballet in 1984. Lev Ivanov's 1892 ballet combined with Tchaikovsky's sumptuous, iconic score are presented in a festive period setting with vivid designs to make this a charming and magical production. Loosely based on the story by E.T.A. Hoffmann, the ballet begins in the 19th-century German home of the Stahlbaums, where they are hosting a lively Christmas party. The period setting is captured in opulent detail by Julia Trevelyan Oman's designs, which include authentic Christmas tree decorations that are magically brought to life. Wright's choreography ingeniously incorporates surviving fragments of the ballet's original material, including the sublime pas de deux for the Sugar Plum Fairy and her Prince. But in emphasizing the relationship between Clara and the Nutcracker, the production also gains a touching subtext of first love.
364
Published Time: 2024-12-22T09:17:25Z

A Minimally Invasive Procedure for Fibroblasts Isolation from 1-mm Skin Punch Biopsies in Pediatric Patients
===============

Dec 22, 2024 · Version 1 · DOI: dx.doi.org/10.17504/protocols.io.81wgbrrjylpk/v1

María Heredia-Torrejón 1, Begoña Puga-López 2, Dolores M. Guerrero-López 2, Alfonso M. Lechuga-Sancho 1,3, Raúl Montañez 4

1 Department of Child and Mother Health and Radiology, Instituto de Investigación e Innovación Biomédica de Cádiz (INiBICA), Medical School, Universidad de Cádiz, Cádiz, Spain; 2 Research Unit, Instituto de Investigación e Innovación Biomédica de Cádiz (INiBICA), Hospital Universitario Puerta del Mar, Universidad de Cádiz, Cádiz, Spain; 3 Division of Endocrinology, Department of Pediatrics, Hospital Universitario Puerta del Mar, Universidad de Cádiz, Instituto de Investigación e Innovación Biomédica de Cádiz (INiBICA), Cádiz, Spain; 4 Department of Molecular Biology and Biochemistry, University of Málaga, Andalucía Tech, E-29071 Málaga, Spain.

María Heredia-Torrejón and Begoña Puga-López contributed equally. Correspondence: Raúl Montañez (Universidad de Málaga), [email protected].

Abstract
The integration of multi-omics techniques has revolutionized the diagnosis of rare genetic diseases. However, interpreting pediatric variants of uncertain significance (VUS) remains a significant challenge. Skin biopsies offer a valuable source of culturable cells for analyzing molecular phenotypes by functional and omic studies, especially in patients with rare genetic disorders. Traditional methods for isolating fibroblasts from skin biopsies often require large tissue samples, invasive procedures, and subsequent wound care, which can be particularly distressing for pediatric patients and deviates from ethical standards. To overcome this limitation, we developed a simplified protocol that minimizes patient discomfort. By reducing the punch biopsy diameter to as little as 1 mm, we significantly reduce invasiveness while achieving high yield and purity of isolated fibroblasts. This streamlined, minimally invasive approach is well-suited for molecular diagnostics laboratories and facilitates the study of rare genetic diseases in children. By enabling advanced diagnostics, therapeutic studies, and personalized medicine, this protocol represents a meaningful advancement in pediatric research and clinical care.

Before starting
Informed consent will be obtained from all participants or, in the case of minors, from their parents or legal guardians. This consent process will ensure participants and their representatives receive comprehensive and understandable information about the objectives, procedures, and potential risks and benefits of the protocol. They will also be informed about their rights to withdraw from the protocol at any time without any adverse consequences.
Introduction
The field of biomedical research has entered the "omics" era, marked by a surge in our capacity to mine molecular information. The integration of multi-omics techniques offers unprecedented power to explore genomic anomalies and regulatory alterations, far beyond the limitation of the restricted set of targets we had a decade ago. These technological improvements have enhanced our diagnostic yield in patients with rare genetic diseases, where clinical diagnosis alone does not suffice due to phenotypic overlap among different conditions, often resulting in misdiagnoses. A correct molecular diagnosis is critical for establishing the natural course of a disease and therefore for tailoring an optimal management plan. It empowers clinicians to implement preventive measures, monitor disease progression and manage symptoms effectively, ultimately leading to personalized therapies and better patient outcomes. Additionally, a confirmed diagnosis alleviates anxiety for patients and their families, facilitating informed decision-making through genetic counseling. However, a significant challenge remains in this diagnostic process: data interpretation. Advances in omics techniques enable exploration of organizational and regulatory complexity across multiple levels. For instance, next-generation sequencing (NGS) techniques give us the means to explore entire genome sequences, but they also involve the interpretation of an average of 5·10^6 variants. Because of this limitation in data integration and interpretation, genetic testing is often left with candidate variants classified as variants of uncertain significance (VUS). These variants hinder the establishment of a definitive diagnosis, becoming a serious problem that worsens as more genomes are sequenced. For instance, in the case of Noonan Syndrome, up to 65% of variants in PTPN11 and up to 87.6% in SOS1 are classified as VUS. These VUS require further evidence to be reclassified as "benign/likely benign" or "pathogenic/likely pathogenic". However, every VUS reclassified will benefit other patients harboring this variant. To overcome this limitation and achieve a definitive diagnosis, cells derived from skin biopsy emerge as an invaluable tool to analyze cellular and molecular phenotypes and gather experimental evidence to carry out this reclassification. The limited availability of statistical evidence and specialized expertise, inherently present within the context of rare diseases (RDs), highlights the critical need for reclassifying VUS to improve patient care. Patient-derived cell cultures offer a powerful tool for achieving these functional validations. However, traditional skin biopsy protocols (requiring 3–4 mm biopsy punches) are invasive, painful, and associated with significant risks such as bleeding, infection, and the need for sutures and wound care. Recent advancements in biopsy techniques have introduced smaller 1 mm punches, which, being only marginally thicker than standard blood collection needles, present a less painful and non-invasive alternative [9,10]. While these smaller samples are not suitable for anatomopathological studies, they suffice for establishing primary cultures critical for molecular biology research.
This transition to minimally invasive methods greatly enhances patient comfort and reduces psychological distress, particularly in pediatric populations. Our optimized protocol successfully isolates fibroblasts from 1 mm skin tissue explants (TE), ensuring ethical and patient-friendly research practices. To facilitate widespread adoption in clinical laboratories, we simplified the protocol, reducing procedural complexity and contamination risks, thereby promoting its utility for Bed-to-Bench studies and translational research. In over 32 applications of this method, we have observed no infections, no need for sutures, and no specific wound care requirements, with only one instance requiring a procedural repeat due to initial cell growth failure. These data underscore the efficiency, reliability, and potential for routine clinical integration of the proposed protocol.

Procedure (2h 15m)
This protocol builds upon well-established practices for fibroblast isolation from skin biopsies [8,11], refining them to incorporate the minimally invasive 1 mm punch biopsy technique. While the suggested method offers a significant improvement over traditional approaches, it remains invasive and unpleasant, particularly for young children. Therefore, the benefits must be carefully weighed against the risks for patients. The protocol was approved by the local institutional ethical committee (Code# FPS-CMER-2022), ensuring adherence to ethical guidelines. Additionally, informed consent was obtained from all participants and/or their legal guardians, ensuring they fully understood the nature of the procedure and any potential risks involved. To ensure the safety and reliability of the procedure, strict sterile techniques must be adhered to at all times to minimize the risk of pathogen transmission.

STEP 1: OBTAINING A TISSUE EXPLANT (TE) (40m)
Considerations:
1) Consistent biopsy site selection across patients is crucial to ensure reliable and comparable results, as cellular composition and gene expression can vary significantly between skin regions. We chose the area between the lower scapula and the spine for its accessibility, minimal patient discomfort, and thicker dermis, which yields sufficient tissue for analysis. Additionally, this dorsal skin originates from the somitic mesoderm, aligning it developmentally with chondrocytes.
2) Handle the skin biopsy with care to avoid contact with materials that could cause fixation, such as ethanol on tweezers or Prontosan on the skin or gauze.
3) Utilizing disposable biopsy punches with a plunger (Kai Medical, Japan) minimizes tissue handling, reduces the risk of contamination, and prevents the loss of the tissue cylinder.
4) When immediate culture is not possible, the sample may be stored at room temperature for up to 3-4 hours. For extended storage of up to 24 h, or during transport to another laboratory, the sample should be kept at 4 °C in the same conical tube used for collection.
Using a sterile gauze, cleanse the skin with a commercially available solution of purified water, betaine surfactant and 0.1% Polyaminopropyl Biguanide, Prontosan B.Braun Medical Inc Catalog #400403. Pinch the skin surrounding the biopsy site using the thumb and index finger. Carefully, punch down the 1-mm trocar with a rotating movement through the epidermis and dermis. Equipment 1-mm Biopsy Punch NAME Biopsy Punches Plunger Type BRAND BPP-10F SKU LINK After the extraction, clean the area with a sterile gauze soaked in Prontosan B.Braun Medical Inc Catalog #400403 and apply an adhesive bandage. Transfer the sample into the conical tube with 10 ml of FBM, ensuring it settles at the bottom. Characterization of Primary Fibroblast Cultures Discussion [email protected] About What is a protocol? About us Blog Ambassadors Request a demo Contact Sales Contact us Platform For institutions Premium partners We enter protocols Protocols entry methods For developers Analytics RSS Plans Billing policy Security More info Release notes Webinars Case studies Startup program Branding Help Accessibility (VPAT) Tutorials FAQ protocols.io is perfect for science methods, assays, clinical trials, operational procedures and checklists for keeping your protocols up to date as recommended by Good Laboratory Practice (GLP) and Good Manufacturing Practice (GMP). ISSN 2473-1838. Terms of Service Privacy Policy Manage Cookies Code of Conduct Trademarks Supporters Copyright 2025 The protocols.io website uses cookies. By continuing to browse the site, you accept our use of cookies, Privacy Policy and Terms of Service. Got it
365
Wind Drake · Kaladesh (KLD) #70 · Scryfall Magic The Gathering Search
===============

Wind Drake {2}{U}
Creature — Drake
Flying
Drakes prowl the skies of Kaladesh, waiting for the perfect moment to strike.
2/2 · Illustrated by Todd Lockwood

Format legality: Standard Not Legal · Alchemy Not Legal · Pioneer Legal · Historic Not Legal · Modern Legal · Brawl Not Legal · Legacy Legal · Timeless Not Legal · Vintage Legal · Pauper Legal · Commander Legal · Penny Legal · Oathbreaker Legal

Kaladesh (KLD) #70 · Common · English · Nonfoil/Foil

| Prints | USD | EUR | TIX |
| --- | --- | --- | --- |
| The List | $0.02 | €0.05 | |
| Kaladesh #70 | $0.02 | €0.03 | 0.04 |
| Tempest Remastered | | | 0.06 |
| Dragon's Maze | $0.02 | €0.04 | 0.04 |
| Magic 2013 | $0.04 | €0.04 | 0.04 |
| Magic 2010 | $0.05 | €0.10 | 0.04 |
| Ninth Edition #112 | $0.06 | €0.13 | 0.04 |
| Ninth Edition #112★ | ✶$0.24 | | |
| Eighth Edition #114 | $0.09 | €0.05 | 0.04 |
| Eighth Edition #114★ | ✶$0.25 | | |

| Variations |
| --- |
| Starter Pack Wind Drake #70† |

Buy this card: TCGplayer $0.02 (foil ✶$0.15) · Cardmarket €0.03 (foil ✶€0.12) · Cardhoarder 0.04 TIX
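The card data and prices shown above are also available programmatically. A minimal sketch follows, assuming Scryfall's public card-by-set-and-collector-number endpoint and the field names of the documented card object; check the current API documentation before relying on them.

```python
# Fetch the Kaladesh printing of Wind Drake from Scryfall's public API and print
# a few of the fields shown on the page above. The endpoint and field names are
# assumptions based on the documented card object; verify against the API docs.
import requests

def fetch_card(set_code: str, collector_number: str) -> dict:
    url = f"https://api.scryfall.com/cards/{set_code}/{collector_number}"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    card = fetch_card("kld", "70")
    print(card["name"], card["mana_cost"], card["type_line"])
    print("Legal in Modern:", card["legalities"].get("modern"))
    print("USD price:", card["prices"].get("usd"))
```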
366
Engineered AAV capsid transport mutants overcome transduction deficiencies in the aged CNS - PMC
===============

Mol Ther Nucleic Acids. 2024 Sep 12;35(4):102332. doi: 10.1016/j.omtn.2024.102332

Engineered AAV capsid transport mutants overcome transduction deficiencies in the aged CNS

Ivette M Sandoval 1, Christy M Kelley 1, Luis Daniel Bernal-Conde 1, Kathy Steece-Collier 2, David J Marmion 1, Marcus Davidsson 1, Sean M Crosson 3, Sanford L Boye 4, Shannon E Boye 3, Fredric P Manfredsson 1,∗

1 Parkinson's Disease Research Unit, Department of Translational Neuroscience, Barrow Neurological Institute, Phoenix, AZ 85013, USA
2 Department of Translational Neuroscience, Michigan State University College of Human Medicine, Grand Rapids, MI 49506, USA
3 Division of Cellular and Molecular Therapy, Department of Pediatrics, University of Florida, Gainesville, FL 32610, USA
4 Powell Gene Therapy Center, Department of Pediatrics, University of Florida, Gainesville, FL 32610, USA
∗ Corresponding author: Fredric P. Manfredsson, Parkinson's Disease Research Unit, Department of Translational Neuroscience, Barrow Neurological Institute, Phoenix, AZ 85013, USA. [email protected]

Received 2023 Nov 22; Accepted 2024 Sep 5; Collection date 2024 Dec 10.
© 2024 The Author(s). This is an open access article under the CC BY-NC-ND license.
PMCID: PMC11497394 · PMID: 39445231

Abstract
Adeno-associated virus (AAV)-based gene therapy has enjoyed great successes over the past decade, with Food and Drug Administration-approved therapeutics and a robust clinical pipeline. Nonetheless, barriers to successful translation remain. For example, advanced age is associated with impaired brain transduction, with the diminution of infectivity depending on anatomical region and capsid. Given that CNS gene transfer is often associated with neurodegenerative diseases where age is the chief risk factor, we sought to better understand the causes of this impediment. We assessed two AAV variants hypothesized to overcome factors negatively impacting transduction in the aged brain; specifically, changes in extracellular and cell-surface glycans, and intracellular transport. We evaluated a heparan sulfate proteoglycan null variant with or without mutations enhancing intracellular transport. Vectors were injected into the striatum of young adult or aged rats to address whether improving extracellular diffusion, removing glycan receptor dependence, or improving intracellular transport are important factors in transducing the aged brain. We found that, regardless of the viral capsid, there was a reduction in many of our metrics of transduction in the aged brain. However, the transport mutant was less sensitive to age, suggesting that changes in the cellular transport of AAV capsids are a key factor in age-related transduction deficiency.

Keywords: MT: Delivery Strategies, AAV, aging, receptor, transport, CNS, capsid mutation, diffusion, retrograde transduction

Graphical abstract
Age, the chief risk factor for numerous neurodegenerative disorders, impairs AAV CNS transduction. Sandoval and colleagues tested AAV2 capsid mutants in their ability to overcome this impairment in aged rats. Mutants exhibited improved transduction, albeit still reduced with age. Interestingly, one variant exhibited improved intracellular transport to projection areas.

Introduction
Adeno-associated virus (AAV)-mediated gene therapy for central nervous system (CNS) indications took a big step forward with the Food and Drug Administration approval of Zolgensma for the treatment of infantile spinal muscular atrophy.1 Nonetheless, numerous gene therapy trials in the CNS, including trials for Parkinson's disease (PD), while demonstrating a strong safety profile, have failed to demonstrate efficacy in terms of disease modification.2,3,4 Although the underlying reasons for this lack of translation are unclear, several hypotheses have been brought forth.
For instance, there is a staunch paucity of animal models that fully recapitulate the disease etiology of sporadic forms of neurological disease, bringing into question whether gene therapy payloads will provide therapeutic translation.5,6,7 Moreover, the vast majority of preclinical studies are conducted in young animals, and thus fail to account for a chief risk factor in the etiology of neurodegenerative disease, i.e., age.8 But equally important, significant changes in transduction efficacy and transgene expression have been documented in the aged CNS depending on anatomical target and serotype. For instance, we have demonstrated that AAV 2, 5, and 9 all exhibit age-dependent reductions in transduction efficacy when delivered to the rodent basal ganglia.9,10 While the higher cost and availability of aged animals can be prohibitive, advanced age of experimental subjects is a critical factor for gene therapy research aimed at age-related diseases. As such, it is imperative to identify AAV capsids that do not show reduced transduction efficiency following delivery to the aged CNS. To do so, one must first identify the mechanisms conferring the apparent resistance to productive infectivity in the aged environment. Although the reason for the age-related impairment in AAV-mediated transduction remains unclear, several factors could contribute to this phenomenon. We have previously identified that specific brain regions show age-related changes in glycans, the primary receptor of AAV. Specifically, we noted an overall reduction in N-acetylated and N-sulfated heparan sulfate (HS) disaccharides (with HS proteoglycans [HSPG] being the primary receptor for AAV2) in the aged rat striatum, as well as region-specific changes in N-glycans with terminal galactose— the primary receptor for AAV9.11 These findings suggest that the overall infectability of AAV within the aged brain declines in part as a result of altered cell receptor dynamics. Moreover, the composition of proteoglycan content changes with age and disease,12,13 a scenario that could alter the degree to which AAV particles are sequestered extracellularly, negatively impacting diffusion and effectively reducing the multiplicity of infection (MOI).14 Other crucial facets of AAV infection are the intracellular factors that mediate the infective cascade following receptor binding: internalization, intracellular trafficking through the endo-/lysosomal pathway, and subsequent nuclear translocation. All these cellular phenomena change with advanced cellular age. For instance, clathrin-mediated endocytosis (CME) is impaired with senescence and aging is associated with lysosomal dysregulation.15,16,17,18 Similarly, the final transport step, nuclear translocation, is also impaired with aging.19 All these age-related changes thus provide avenues by which AAV infectivity declines with advancing age. 
AAV capsid development has exhibited a veritable explosion in the past decade with researchers utilizing various random approaches, such as directed evolution or random/guided peptide insertion, to generate large AAV libraries.20,21,22,23,24,25 In addition, there has been a significant amount of work focused on rational design of AAV capsids, based on solved capsid structures and an improved understanding of the intracellular pathways that mediate AAV infection, and the constant identification of additional AAV receptors.26 For instance, we and others have characterized the infectivity of various AAV trafficking phosphomimetics, carrying mutations preventing phosphorylation of key exposed surface moieties that guide the capsid to endosomal escape, thus avoiding subsequent degradation.27,28,29 These mutants exhibit vastly improved transduction properties when the vector is delivered to various organ systems, including following intraparenchymal injections into rodent and non-human primate brain.27,30,31 Another engineered variant of AAV includes mutations in the binding site for the canonical receptor for AAV2: HSPG.32 Interestingly, in contrast to early reports of strong dependence of AAV2 infection on this receptor, HSPG null mutants exhibit improved transduction properties in the CNS and in retina.27,31,33 In addition, our previous work showed that incorporation of HSPG disrupting mutations vastly improved diffusion throughout the brain parenchyma, rivaling the transduction efficacy seen with, for instance, AAV5 and AAV8.27 Following intraparenchymal delivery of AAV in human PD clinical trials (average age 57–60 years4,34), infectivity and diffusion were surprisingly limited in postmortem examinations.2,3,4,35 We argued that one key aspect for this observed lack of translation from preclinical models to human patients is the limited AAV infectivity associated with the aged brain discussed above. Further, we hypothesized that, by utilizing AAV variants that can bypass such age-associated limitations, transduction efficacy and the associated therapeutic benefit would be maintained in the aged brain. Here we evaluated the infectivity and diffusion properties of two distinct AAV2 variants injected into the striatum of both young and aged rats. One variant, "HS," is an HSPG null mutant predicted to exhibit improved parenchymal diffusion and use a non-canonical receptor for entry, thus avoiding age-associated changes in HSPG receptors and extracellular matrix components. The second variant, "YH," in addition to the HS mutation, also contained tyrosine (Y) to phenylalanine (F) capsid mutations,29 aimed at overcoming deficiencies in intracellular trafficking by improving the overall efficacy of intracellular transport. These AAV2 variants were compared for infectivity, transgene expression, diffusion, and intracellular transport relative to wild-type (WT) AAV2 in young adult (3-month-old) and aged (20-month-old) rats. Our results demonstrate that both variants, while outperforming WT AAV2, still exhibited reduced transduction efficiency with age. However, the YH variant was less impaired, especially in the context of long-range retrograde transduction, suggesting that components of intracellular transport are key mediators in conferring age-related impairments in transduction.

Results

Transduction

The striatum represents a relevant clinical target for PD and is a large structure ideal for assessing vector spread.
Accordingly, to evaluate the efficacy of the mutant vectors, we delivered a single 2 μL bolus of either WT or mutant AAV variants (1.0×10^12 vg/mL) to the center of this structure in either young or aged F344 rats. To assess overall transduction, serial sections were stained for the transgene (mCherry), and the area of transgene within the confines of the striatum was outlined (Figures 1A–1F). The total numbers of transduced cells per section were enumerated using an artificial intelligence (AI) convolutional neural network (CNN) framework, and stereological principles were applied to calculate the total number of transgene+ cells per hemi-striatum (Figure 1G), which were analyzed with a two-way ANOVA (significant differences were seen for the main effects of age (F(1,37) = 11.98, p = 0.001) and vector (F(2,37) = 59.24, p < 0.0001)). Tukey honestly significant difference (HSD) post hoc tests were performed where relevant. As expected, WT AAV2 exhibited the lowest level of transduction with 17,412 and 13,088 mCherry+ cells in young and aged animals, respectively. There was not a significant difference in the number of WT AAV2 transgene+ cells between young and aged animals, similar to prior results.10 Both mutant capsids exhibited significantly higher transduction compared with WT (HS young: 136,333 cells, HS aged: 90,467 cells, YH young: 135,051 cells; YH aged: 93,841 cells; p < 0.0005 for all mutant capsid groups vs. young and aged animals treated with WT AAV2). Although not statistically significant, a notable decrease in the average number of transduced cells is observed with age for both mutants (HS: 33.6% decrease and YH: 30.5% decrease). Finally, because we were chiefly interested in the role of aging in transduction efficiency, we also ran an independent t test in a secondary analysis (as per Polinski et al.10; Figure 1H) to compare young vs. aged subjects injected in the same structure with the identical vector construct, where we noted a difference with HS (p < 0.05) but not with YH (p = 0.06) or WT (p = 0.11).

Figure 1. Striatal transduction is negatively impacted by advanced age
Young adult or aged Sprague-Dawley rats received an intrastriatal injection of WT AAV2, AAV2 HS, or AAV2 YH (2 μL of 1.0×10^12 vg/mL). One month later, animals were euthanized, and the number of striatal transgene-positive cells were enumerated using a combination of AI-based enumeration and stereological principles. (A–F) Representative images of striatal immunoreactivity in young animals injected with WT (A; n = 8), HS (C; n = 7), and YH (E; n = 7) and aged animals injected with WT (B; n = 8), HS (D; n = 6), and YH (F; n = 7). (G) Enumeration of the total number of transduced cells in the striatum analyzed with a two-way ANOVA show a main effect of age (p = 0.001) and vector (p < 0.00001) with significantly higher transduction with either mutant. Individual comparisons correspond to post hoc Tukey HSD test. (H) When analyzed using a simple t test with age as the only independent variable, the only age-related effect was seen with HS with lesser immunoreactive cells in the aged brain. (∗p < 0.05, ∗∗p = 0.001, ∗∗∗p < 0.0005, ∗∗∗∗p < 0.0001). Scale bar in (A), 2500 μm and applies to all histograms.

Given that this specific AI algorithm had not previously been validated for striatal neurons, we also performed manual cell counts (stereology) of striatal sections from animals from each group.
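As an aside, the arithmetic behind the dose and the section-to-structure scaling used above is simple to reproduce. The sketch below uses placeholder sampling parameters (the paper does not specify its section thickness or sampling interval in this excerpt), so the second function is only a structural illustration of fractionator-style scaling, not the authors' exact stereological procedure.

```python
# Back-of-the-envelope arithmetic for the experiment described above.
# The injected dose and the section-sampling correction are sketched with
# placeholder parameters; the sampling interval and per-section counts below
# are illustrative only.

def total_vector_genomes(volume_ul: float, titer_vg_per_ml: float) -> float:
    """Total vector genomes delivered in a single bolus."""
    return (volume_ul / 1000.0) * titer_vg_per_ml   # convert uL -> mL

def estimate_total_cells(cells_per_sampled_section: list[float], section_interval: int) -> float:
    """Fractionator-style estimate: counted cells scaled by the sampling interval
    (i.e., every Nth section was analyzed)."""
    return sum(cells_per_sampled_section) * section_interval

if __name__ == "__main__":
    # 2 uL of a 1.0e12 vg/mL preparation = 2.0e9 vector genomes.
    print(f"dose: {total_vector_genomes(2.0, 1.0e12):.2e} vg")

    # Hypothetical per-section counts from every 6th section (placeholder numbers).
    counts = [180, 1450, 3900, 4100, 2300, 600]
    print(f"estimated transduced cells: {estimate_total_cells(counts, 6):,.0f}")
```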
There was no difference between the number of transgene-positive cells per striatum using either methodology, as assessed using a regression analysis of the total number of cells (R² = 0.9838; Figure S1).

Volumetric spread

Lessons learned from clinical trials show that not only is transduction efficiency important, but so is the ability of the virus to diffuse extracellularly and infect target cells throughout a structure. This is particularly true in large areas such as the striatum, a putative target in diseases such as PD and Huntington's disease (HD). Striatal outlines in AIforia were enumerated for area and converted to volume to represent size differences in physiologically relevant terms (Figure 2A). Two-way ANOVA revealed a significant effect of vector (F(2,35) = 62.93, p < 0.0001) but not age (F(1,35) = 32.86, p = 0.05). Post hoc comparisons showed that WT transduced young (9.31 mm³) and aged (7.65 mm³) animals at a significantly smaller volume than HS (young: 22.78 mm³, aged: 26.92 mm³) and YH (young: 28.78 mm³, aged: 28.94 mm³); p < 0.0001 for all comparisons. There was no difference in transduction spread between YH and HS at either age, and there was no effect of age on diffusion for either vector (Figure 2B).

Figure 2. Volumetric distribution. In order to better understand the influence of age on striatal vector transduction, we utilized the distribution of immunolabeling to outline an area of transduction and used this to calculate volume. Further, we mapped transduction along the striatal rostro-caudal axis. (A) We did not observe an effect due to age, but again, both mutants transduced a much larger volume than WT. (B) No age-related effects were found when each vector was analyzed with a t test. (C) Schematic of the striatum (shaded red) inside the rat brain (shaded cyan) represented in three planes: coronal, horizontal, and sagittal. The striatum was spatially defined using a rat brain atlas as reference.36 (D) Area transduced per section and (E) mCherry+ cells per section along the rostro-caudal axis. (F–H) Transduction represented as a heatmap for visual comparison. (F) Color scale for total mCherry+ cell counts. (G) Schematic of total mCherry+ cells per section (represented by color assigned by the heatmap scale on the right) overlaid across a dorsal view of the striatum. (H) Transduction distribution in the striatum combined per vector (top panel) or per age (bottom panel).

Additionally, to qualitatively assess the differences in the pattern of the vectors' spread with aging, we mapped the distribution of transduction within the striatum. The rat brain atlas was used as reference to create a schematic of the striatum in each plane, coronal, horizontal, and sagittal (Figure 2C), and a volumetric model (Video S1). Using the data acquired from the AIforia AI analysis of striatal tissue, we plotted the area transduced (mm², Figure 2D) and total mCherry+ cells (Figure 2E) per section along the rostro-caudal axis. Data lines are superimposed on a schematic of a dorsal view of the striatum (shaded in red and akin to Figure 2C, middle panel). As expected, both HS (blue) and YH (yellow) have a significantly higher spread than WT (orange) across the striatum, as measured by both area and number of transduced cells. There is no notable difference in the extent of the spread, i.e., area transduced (Figure 2D), between the mutant vectors HS and YH, or with age.
Similarly, when looking at the number of transduced cells, i.e., mCherry+ cell counts (Figure 2E), both mutants HS and YH display enhanced transduction when compared with WT, and no difference was observed between young and aged rats treated with the WT vector. Again, the HS and YH mutants display almost identical patterns of spread in both young (dotted lines) and aged (solid lines) groups. Notably, there was a slight but nonsignificant decrease in transduction with age. Interestingly, however, an apparent caudal shift in transduction within the striatum was seen with age with all viral capsids. Finally, to facilitate visual comparison of the aforementioned results, we assigned the total count of mCherry+ cells per section to a heatmap scale (Figure 2F) and overlaid this on a spatial schematic of the striatum. Figure 2G shows counts along the rostro-caudal axis on a horizontal view, and Video S1 shows the entire 3D reconstruction. The same data grouped by either vector or age are shown in Figure 2H.

Video S1. Vector spread illustrated in 3D.

Transgene expression

Enumerating transgene-expressing cells is a binary measure and does not necessarily reflect the potency of transduction of the various AAVs, since different MOIs can give rise to the same number of transgene-positive cells while individual cells express the transgene to differing degrees. In previous work, we showed that different capsid variants exhibit differential patterns of expression (e.g., higher or lower levels of expression within a focal area).27 Thus, given the large differences in volumetric spread of striatal transduction, we measured total and average transgene expression levels per area of striatal transduction (arbitrary units [AU]/pixel units; Figure 3). To that end, using quantitative near-infrared (nIR) imaging on the LI-COR Odyssey system, we quantified protein levels within the striatum (Figure 3) as well as total transgene expression throughout the hemisphere, including afferent and efferent areas such as the substantia nigra, hippocampus, thalamus, and cortex (Figure 4).

Figure 3. Striatal transgene expression is reduced with old age. Since enumeration of transgene+ cells is binary and not a full measure of infectivity, we also measured the level of transgene expression in the striatum using densitometry of near-infrared imaging. (A) To measure transgene signal, first the full striatum was outlined using TH immunoreactivity in the 680 channel, then the area of mCherry+ expression was outlined using a heatmap scale in the 800 channel (B–H). Representative images of mCherry expression in young WT (C), HS (D), and YH (E), and aged WT (F), HS (G), and YH (H). A two-way ANOVA showed a main effect of age on both total (I) and average (signal/area; K) expression. (J and L) Individual t tests showed that age negatively impacted expression with both variants but not WT. ∗p < 0.05, ∗∗p < 0.01, ∗∗∗p < 0.001, ∗∗∗∗p < 0.0001.

Figure 4. Transduction in the midbrain is significantly impacted by age: a combination of anterograde transport of transgene to terminals and retrograde transduction of nigral cells. The midbrain region containing the SN is an important target in PD. Accordingly, we quantified the total level of transgene in this area. (A) A two-way ANOVA showed a main effect of age on mCherry expression, and individual t tests (B) showed that this age-related impairment in transduction was significant across all the capsid variants.
Representative images of mCherry immunoreactivity in the SN (low magnification; left panels), SNc (middle panels), and SNr (right panels) from young WT (C–E), aged WT (F–H), young HS (I–K), aged HS (L–N), young YH (O–Q), and aged YH (R–T). ∗p < 0.05, ∗∗p < 0.01, ∗∗∗∗p < 0.0001. Scale bar in (R), 500 μm, applies to (C), (F), (I), (L), (O), and (R). Scale bars in (S) and (T), 50 μm, apply to (D), (E), (G), (H), (J), (K), (M), (N), (P), (Q), (S), and (T).

Striatum

The striatum was first outlined (Figure 3A) based on tyrosine hydroxylase immunoreactivity, and thereafter the area of transduction was outlined within the striatum (Figures 3B–3H), thus giving us expression per area (Figures 3K and 3L) as well as total transgene levels (Figures 3I and 3J). Total expression: data normalized to the non-transduced area are reported in AU (raw data divided by 10,000 for illustration purposes). A two-way ANOVA (Figure 3I) yielded a significant Age × Vector interaction effect (F(2,35) = 3.509, p < 0.05) and main effects of Age (F(1,35) = 32.86, p < 0.0001) and Vector (F(2,35) = 62.93, p < 0.0001). In contrast to previous reports,9,10 no differences in protein levels (Figures 3I and 3J) were observed between WT AAV2-treated young (Figure 3C; 361 AU) and aged (Figure 3F; 167 AU) rats. However, an age-related difference was seen with HS between young (Figure 3D; 1089 AU) and aged (Figure 3G; 448 AU) rats (Figure 3I; p = 0.006), as well as with YH between young (Figure 3E; 1817 AU) and aged (Figure 3H; 1097 AU) rats (p < 0.005; Figure 3I). Young animals treated with WT AAV showed significantly lower expression than young YH and young HS (p < 0.0002), while the aged WT group differed only from YH (p < 0.0001) but not HS. Interestingly, we observed a significant difference between HS and YH in both young and aged groups (p = 0.0004 and 0.006, respectively), suggesting a significant effect of the extra capsid mutations on protein expression. Again, we ran simple t tests with age as the only independent variable (Figure 3J), in which case only HS and YH showed a significant decrease with age (HS, p = 0.001; YH, p = 0.004), but not WT (p = 0.08). When we assessed the average transgene expression per area (AU; raw data divided by 100 for illustration purposes), an interesting profile emerged. Specifically, a two-way ANOVA (Figure 3K) revealed a significant effect of vector (F(2,35) = 173.1, p < 0.0001) and age (F(1,35) = 141.7, p = 0.0006). There was no difference between young (155.8 AU) and aged (147.3 AU) WT animals, or between young (117.7 AU) and aged (82.32 AU) HS animals, although there was a difference between young (175.0 AU) and aged (128.7 AU) YH animals (p < 0.05). Interestingly, HS provided less focal expression than YH in young animals (p = 0.002), and this difference was more pronounced in aged animals (p < 0.05 vs. aged WT and YH), again underscoring the additional effect of the extra capsid mutations in YH on protein expression. Individual t test analyses (Figure 3L) revealed an effect of age in HS (p < 0.01) and YH (p = 0.0001), but not WT, subjects.

Substantia nigra

The midbrain area containing the substantia nigra (SN), composed of the pars reticulata (SNr) and the pars compacta (SNc), is a key target in PD, both in terms of the expression of cell-autonomous factors (i.e., in nigral neurons) and non-cell-autonomous factors (i.e., expression originating from striatal terminals). The entire SN was outlined, and mCherry expression was measured as a function of area using LI-COR.
A two-way ANOVA yielded a significant Age × Vector interaction effect (F(2,31) = 4.051, p < 0.05) and main effects of Age (F(1,31) = 46.21, p < 0.0001) and Vector (F(2,31) = 22.79, p < 0.0001). Young WT expression was generally lower than that of young HS and YH (p < 0.01 and p < 0.0001, respectively), but these differences were not significant in the aged animals. There was no age-related effect on expression with WT AAV2 (young: 25.7, aged: 12.9). Interestingly, however, both YH (young: 70.3, aged: 31.1) and HS (young: 47.3, aged: 23.0) exhibited a significant reduction in midbrain expression with age (p < 0.01 and p < 0.0001, respectively). When we analyzed expression with age as the only variable (Figure 4B), individual t test comparisons revealed significantly decreased expression in all vector groups: WT (p < 0.0001), HS (p < 0.01), and YH (p < 0.005). Representative images of the SN showing mCherry immunoreactivity are shown in Figures 4C–4T. Sporadic positive neuronal cell bodies were present in all groups, with most of the signal originating from striatonigral terminals.

Broad distribution

The ability to achieve robust retrograde transduction may be relevant in certain clinical settings. For example, in HD, the striatum is the chief anatomical target; however, additional anatomical structures such as the cortex and thalamus are also affected in HD.37 The ideal gene therapy approach in such situations should cover multiple brain loci over a broad anatomical area. To assess retrograde transduction (Figure 5), we chose to perform LI-COR densitometric analyses of areas that would not be directly influenced by the injection sphere or needle track per se, specifically the hippocampus and the thalamus. As previously described, the transduced area within each structure (as identified by anatomical landmarks) was outlined, and transgene levels (AU) were computed as a function of area (AU raw data divided by 100 for illustration purposes).

Figure 5. Retrograde transduction is significantly impacted by advanced age. Although our main anatomical target was the striatum, we also wanted to better understand the degree of infection of striatal projection areas. To that end, we outlined transduced areas of the thalamus, hippocampus (HC), and cortex (CTX) and quantified expression levels using LI-COR-assisted near-infrared densitometry. We observed a main effect of age in the thalamus (A), but not the hippocampus (C) or cortex (E). Individual t tests showed a significant reduction in transgene expression in the aged thalamus across all mutant capsids (D), an effect that was not seen in the hippocampus (B) or cortex (F). Representative images of mCherry immunoreactivity in the dorsal hippocampus (left panels), medial thalamus (middle panels), and somatosensory cortex (right panels). ∗p < 0.05, ∗∗p < 0.01, ∗∗∗p < 0.001, ∗∗∗∗p < 0.0001. Scale bar in (V), 1000 μm, applies to (G), (J), (M), (P), (S), and (V). Scale bars in (W) and (X), 250 μm, apply to (H), (I), (K), (L), (N), (O), (Q), (R), (T), (U), (W), and (X).

Hippocampus: Transduction was similar between groups (young: WT, 24.4; HS, 28.0; YH, 50.7; aged: WT, 23.3; HS, 25.2; YH, 39.2), with the only significant difference emerging between young WT and YH (p < 0.05). It is worth noting that the average expression value for YH was much higher than that of WT and HS in both age groups; however, we observed substantial variability between subjects, and therefore these differences did not reach statistical significance (images representing the mean are shown in Figure 5).
A two-way ANOVA (Figure 5A) revealed an effect of vector (F(2,36) = 8.989, p = 0.0007), but not age (F(1,36) = 1.339, p = 0.25). Individual t tests (Figure 5B) revealed no further differences due to age.

Thalamus: A two-way ANOVA (Figure 5C) yielded an effect of Vector (F(2,36) = 21.06, p < 0.0001) and Age (F(1,34) = 19.22, p < 0.0001). Young YH (61.9) exhibited significantly higher expression than young HS (44.3; p < 0.05) and WT (29.2; p < 0.0001). There was no difference between young HS and young WT, or between any of the aged treatment groups (WT, 24.7; HS, 29.1; YH, 40.9). There was a significant difference due to age in the YH groups (p < 0.01). Individual t tests (Figure 5D) revealed an effect due to age with both YH and HS, but not WT.

Cortex: The cortex represents another area with significant projections to the striatum and exhibits retrograde transduction following intrastriatal injections. As described for the other brain regions, the cortex was outlined based on morphological landmarks, and LI-COR was used to quantify expression (Figures 5E and 5F). We observed a significant effect of Vector (F(2,36) = 25.17, p < 0.0001), but not Age (F(1,36) = 0.1211, p = 0.7; Figures 5E and 5F). Multiple comparisons revealed no differences with age (WT young, 17.3, aged, 17.4; HS young, 34.5, aged, 30.6; YH young, 60.0.6, aged, 69.7). YH resulted in a higher degree of cortical expression compared with young and aged WT (p < 0.001) and aged HS (p < 0.01).

Subcellular localization

To begin to elucidate the mechanisms underlying the differences in transduction and transport, we performed RNAscope in situ hybridization against a non-transcribed portion of the viral genome (Figure 6). Qualitative observations revealed a diverse pattern of transduction with WT AAV2, with a variable number of genomes present within the nucleus, but also with genomes persisting outside the nucleus to a greater degree than with the mutant variants, possibly explaining the general impairment in transduction seen with the WT capsid. For both mutants (HS and YH) and both age groups, genomes were found widely distributed throughout the tissue and were mostly nuclear, typically with a single genome per nucleus. Although these data were not quantified, they are in agreement with previous reports using fluorescently tagged AAV2 capsids.38

Figure 6. In situ hybridization of viral genomes. Striatal tissue sections for each of the groups were processed for detection of AAV viral genomes (ISH, brown puncta) and counterstained with thionin for identification of nuclei (blue). Representative images at low and high magnification, respectively, are shown, arranged as follows: WT young (A and B), WT aged (C and D), HS young (E and F), HS aged (G and H), YH young (I and J), and YH aged (K and L). Low-magnification images depict the distribution of AAV genomes across the striatum. High-magnification images show the localization of AAV genomes relative to nuclei. Black arrows indicate examples of a single AAV genome inside a nucleus, red arrowheads indicate examples of multiple genomes inside a single nucleus, and yellow arrowheads indicate examples of non-nuclear viral genomes. Scale bar in (A), 100 μm, applies to all low-magnification images (C, E, G, I, and K). Scale bar in (B), 10 μm, applies to all high-magnification insets (B, D, F, H, and J).

Tropism

In our previous work comparing various mutants or WT AAV in young and aged animals, no overt change in vector tropism was observed.
Here we qualitatively assessed the tropism of the various capsids by co-labeling transduced cells (mCherry+) with either Olig2 (oligodendrocyte transcription factor 2; oligodendrocyte marker; Figure S2), GFAP (glial fibrillary acidic protein; astrocyte marker; Figure S3), or Iba1 (ionized calcium-binding adaptor molecule 1; microglia marker; Figure S4). Transgene+ oligodendrocytes (Figure S2) were observed across capsids and ages, most frequently within or in proximity to the corpus callosum. Transgene+ astrocytes (Figure S3) were rare but observed in all groups in areas close to the injection site (Figures S3M–S3O). Other astrocytes that appeared to be transduced at low magnification (Figures S3A–S3L) were found, upon close examination of high-magnification confocal z stacks, to be non-transduced but closely associated with mCherry+ neurons. No transgene+ microglia were observed in any group (Figure S4). At least three striatal sections per animal and three animals per group were stained and analyzed.

Discussion

The use of AAV in CNS clinical trials is becoming commonplace. However, for a variety of reasons, there has been a failure to translate preclinical findings into clinical efficacy.5,6,7 One such example was the case of neurturin, which failed to demonstrate meaningful clinical improvements despite a wealth of preclinical efficacy data.4,34 An important clue to this disconnect came from postmortem examinations of treated brains, which showed significantly less AAV transduction in PD patients compared with preclinical animal studies.2,3,4,35 Although there may be many reasons for this, two may be related to the facts that (1) essentially all preclinical studies with neurturin/GDNF were done in young animals, and (2) the brain region of vector injection, involving the nigrostriatal and striatonigral systems, exhibits reduced transduction efficacy with advanced age.9,10 It is therefore clear that translational studies must consider these important variables in order to understand and predict infectivity in therapeutically relevant populations, including PD, where the average age of onset is approximately 60 years.

In an attempt to decipher the factors that influence age-related changes in infectivity, and to assess vector transduction using the clinically relevant variable of age, we utilized two AAV2-based capsid variants with distinct properties related to the infectious process. The first set of mutations (capsid "HS") disrupts binding to the canonical HSPG receptor yet paradoxically enhances transduction. The second capsid variant (capsid "YH") includes, in addition to the HS mutations, tyrosine-to-phenylalanine (Y→F) mutations preventing endosomal escape, thus facilitating a higher number of virions reaching the nucleus and resulting in an increased effective MOI.28,29 To assess the effect of these mutations on overcoming age-related impairments in transduction, we compared these variants head-to-head with WT AAV2 using a variety of important parameters following a single injection into the striatum, a relevant anatomical target for several neurodegenerative and neurological disorders. Importantly, these studies were undertaken in 20-month-old rats, an age roughly analogous to a 60-year-old human,39 the average age in PD gene therapy clinical trials.

Changes in volume and transduction

To our knowledge, this is the first study to combine AI and stereological principles to quantify transduction.
Whereas AI has previously been utilized to enumerate cell populations on single histological sections,40 in our study we were able to estimate an entire neuronal population within a defined area, deriving a physiologically relevant count of the total number of transduced cells in the structure of interest in each brain hemisphere. This will become an important tool moving forward, facilitating comparisons with older studies relying upon stereology proper. Moreover, stereology depends on optimization of parameters at the section level, and variance can be introduced by the subjects used for optimization, the stringency of lab protocols in re-running early subjects analyzed, or the definition of the profile (e.g., the transgene+ cell of interest), which is liable to change with familiarity and screen fatigue. In contrast, using a CNN deep learning model customized to this study and further trained on the histological images used, we were able to avoid many of the human factors that introduce variability into stereological counting, while providing concrete data that future studies can use as benchmarks or for large-scale comparisons without the added bias inherent in meta-analyses.

As was seen in prior studies, both capsid mutants exhibited significantly enhanced infectivity (Figure 1G) and volumetric spread (Figure 2A) as compared with WT AAV2 in both young and aged animals. Consistent with previous results,10 using the metric of infected (i.e., immunoreactive for the transgene) cells as the key readout, we did not observe a significant impact of aging on the number of infected cells (Figure 1H) or on spread (Figure 2B) with WT AAV2 or YH. Surprisingly, the only age-related difference seen in the number of infected cells was a significant decrease in aged compared with young animals in subjects receiving HS (Figure 1H). Interestingly, and in contrast to what we had hypothesized, when we assessed total transgene protein levels, we observed a reduction in total striatal transgene with the more infectious mutant capsids in aged animals (Figure 3J). Similarly, when normalized to area, only the mutant capsids exhibited an age-related decline in expression (Figure 3L), which again may relate to the difference in diffusion properties between WT AAV2 and the tested capsid mutants, with the enhanced diffusion exhibited by the mutants effectively "drawing" infectious particles away from the center of injection. There were no quantitative differences in volumetric spread due to age (Figure 2B), and no difference in spread was seen between the mutants, which both spread throughout a much larger volume than WT AAV2 (Figure 2A). An interesting qualitative observation was the apparent rostro-caudal shift in transduction with aging, which was seen with all capsids tested (Figures 2D and 2E; Video S1). While the underlying reason for this is unknown, there may be age-associated differences in striatal structure and volume, or there may be regional differences in extracellular factors influencing diffusion and infection.

Intracellular transport of AAV is impaired with aging

Interestingly, a pattern emerged when comparing HS and YH in aged subjects. Specifically, when assessing total expression levels (Figure 3J), age had a much more profound effect on HS (58.9% decline) compared with the YH variant (39.6% decline).
Since mutations affecting intracellular transport are the only differences between these capsid variants, our findings suggest that age-related changes in this cellular process are a key factor mediating age-related effects on AAV transduction. We found further support for this idea when assessing expression in striatal projection areas requiring long-range retrograde transport of the virion, such as the cortex (Figures 5E and 5F) and thalamus (Figures 5C and 5D), where the YH variant again yielded much higher transduction/transgene expression than HS. Although improved retrograde transport with Y→F mutations has been described in the context of the AAV-retro capsid,41 this is the first study to establish age as an important factor in the efficacy of AAV retrograde transduction.

Finally, we assessed transgene expression in the SN, a therapeutically relevant target in the treatment of PD. Delivering trophic factors to nigral neurons via a targeted injection into the striatum can confer a therapeutic effect either in a cell non-autonomous fashion (e.g., factors released from the SNr terminals of transduced direct-pathway striatal neurons) or in a cell-autonomous fashion via the direct retrograde transduction of nigrostriatal dopamine neurons in the SNc. Accordingly, our analysis did not segregate terminal from midbrain dopamine neuron transgene expression; our values thus account for both modes of expression. With every capsid we saw a significant reduction in midbrain expression with age (Figure 4B), but again, YH produced significantly higher expression. It is important to note, however, that a qualitative assessment of midbrain expression suggested that YH produced more transgene+ SNc neurons (Figures 4D–4G, 4J, 4M, 4P, and 4S), supporting the notion of enhanced retrograde transduction, but also potentially skewing the data as these cells contain high levels of transgene. This aging effect mirrors what we previously observed with AAV2, 5, and 9, all of which showed approximately 50% less transgene in the striatonigral system following an intrastriatal injection, suggesting a ubiquitous impairment regardless of capsid.10 Nonetheless, the expression in the aged SN with our capsid mutants was no different from that seen in young WT brains (Figure 4A); thus, these capsid variants provide a means to overcome deficiencies seen with AAV2 in the aged brain. Whereas this study focused on elucidating aspects of aging that can impair CNS transduction, we evaluated only a single capsid and its derivatives. Although a comparative analysis with other capsids certainly is warranted, it goes beyond the scope of this work. Nevertheless, it is important to note that other naturally occurring serotypes such as AAV5, AAV8, and AAV9, and engineered variants such as AAV-retro and MNM-104, among others, exhibit strong infectivity and intracellular transport and thus represent alternative capsids for consideration.23,42,43

No change in tropism

We observed no changes in the overall tropism of either capsid variant. This is not surprising given that prior studies of AAV2 with an ablated HSPG binding motif show that the virus remains largely neurotropic.27,31,44 Indeed, with either mutant we observed a small number of transduced astrocytes and oligodendrocytes, with no transgene-positive microglia observed.
Translational impact

It is clear from our analyses that different capsids behave differently following direct intraparenchymal delivery into the young and aged brain, and an understanding of the underlying reasons is important as one considers translational aspects of AAV CNS gene therapy. For example, a variant such as YH provides robust transduction throughout a number of structures due to retrograde infection of neurons projecting directly to the injection area, making it a strong therapeutic candidate for diseases such as HD, where the transduced areas in direct contact with the injected area all exhibit dysfunction and degeneration and represent important anatomical targets. Similarly, the significant improvement in diffusion with these mutant capsids renders them ideal when large structures, such as the striatum in PD and HD, need to be transduced. On the other hand, in this study we chose to target the striatum as it is largely a structure encapsulated by the corpus callosum, allowing us to address biological questions such as those herein (e.g., assessing diffusion versus virion transport). However, our results highlight the large degree of diffusibility when the HSPG null mutation is incorporated into the AAV2 capsid, and, as was shown by Naidoo and colleagues, by simply altering the trajectory and target of injection one can achieve close to brain-wide distribution of this vector.31 Conversely, although WT AAV2 was much less efficient in most of our metrics, its almost complete lack of diffusion (e.g., Figure 1) provides for a vector with much greater precision, making this capsid well suited for targeted injections of small populations of neurons. Moreover, although we have not noted any overt toxicity with these mutant capsids, the changes in vector biology warrant caution, and the safety of these mutants needs to be further assessed. However, one key benefit of the improved efficacy is the ability to reduce the overall titer and still achieve the same effective MOI as the WT virus. Finally, this work deals with targeted injections into the CNS, but depending on the disease, alternate routes of delivery, such as intrathecal or intravenous administration, could also be considered.

It is important to note that herein we are considering a single risk factor in disease: age. However, other risk factors may also be important considerations in gene therapy for neurological conditions. For example, sex is a risk factor in certain disorders,45 and sex can also influence CNS transduction,46 both in terms of infectivity and tropism. This phenomenon is, at least in part, mediated by cycling hormones47 and their role in receptor expression, and should be considered in clinical translation. Moreover, we did not assess the efficacy of these vectors in a disease environment. For example, ongoing inflammation may have a significant impact on the behavior of AAVs, and future studies should include the assessment of these capsids in neurodegenerative disease models.

A key factor in age-related impairments is reduced intracellular transport

Herein, we have taken a first step toward improving our understanding of AAV transduction efficiency by incorporating a clinically meaningful variable, advancing age, with the overall goal of better understanding the biological factors that underlie the impairment of AAV transduction associated with aging.
In our a priori hypothesis, we postulated that a key aspect was the changing extracellular environment, which with age would act to further sequester the "sticky" AAV2 capsid14,48 and thus impair transduction. This seemingly was not the case, since there was no difference in spread with our HS mutant with age (Figure 2B), only a reduction in transgene+ striatal neurons (Figure 1H). However, when incorporating mutations aimed at improving net intracellular trafficking to the nucleus, we did see a significant differential with aging, suggesting that intracellular transport of AAV is impaired in aged cells, either through a general reduction in transport efficacy or through increased endosomal escape. Thus, a key aspect of improving gene therapy in the aged brain is to incorporate such Y→F (or analogous) mutations into the AAV capsid to facilitate an increase in the effective MOI (particles making it into the nucleus) and thus higher expression both locally and distally. One case in which this did not hold true was the SN, where transgene expression was impaired in all aged groups (Figure 4B). The reason for this is likely impaired anterograde transport of the transgene product, a common feature of aging neurons,49 since we still observed a higher degree of retrograde transduction with the YH mutant. This agrees with our previous studies in which we performed precision dissections of the SNr only and saw reduced striatonigral expression.

Conclusions

In conclusion, we show that whereas the HS and YH AAV2 variants exhibit improved infectivity and diffusion over WT AAV2 in both the young and aged brain, these mutants, in some cases, also exhibit impaired efficacy in the aged brain. We observed that one key contributing factor to the reduced infectivity of AAV is likely a change in the way that the virion traverses the cell in association with aging; by reducing endosomal escape using the YH transport mutant, this deficiency in effective infectivity can be partially overcome.

Materials and methods

Animals

Young adult (3-month-old) or aged (20-month-old) male F344 rats from the National Institute on Aging Aged Rodent Colonies were utilized, and all procedures were approved by the Michigan State University Institutional Animal Care and Use Committee (IACUC). Rats were housed two per cage and maintained on a controlled 12-h light cycle at a controlled temperature (22°C). Food and water were available ad libitum.

Vector production

Vectors were produced as previously described.50 Briefly, a self-complementary viral genome plasmid encoding mCherry under the control of a truncated hybrid chicken β-actin/cytomegalovirus (CBA/CAG) promoter was transfected into 293T cells together with plasmids encoding helper functions (pXX6) as well as the AAV rep and cap genes. Cap genes were WT AAV2, HS AAV2 (R585S, R588T, R487G), or YH AAV2 (HS mutations plus Y444F, Y500F, Y730F).27 Cells were harvested 72 h later, and viral particles were purified using a discontinuous iodixanol gradient followed by column chromatography. Viral titers were assessed using a qPCR assay and normalized to 1.0 × 10¹² vector genomes (vg)/mL using balanced salt solution (Alcon Laboratories).

Surgery

Given the lack of striatal interhemispheric connections, each hemisphere was considered an "n" of 1, and animals were randomly assigned vectors. Surgery was performed under 1.7%–2.0% isoflurane anesthesia with rats placed in a stereotaxic frame.
Surgical coordinates were anterior/posterior ±0.0 mm, medial/lateral ±3.0 mm, and dorsal/ventral (from dura) −4.0 mm. The injection apparatus consisted of a Hamilton syringe fitted with a glass capillary needle coated in SigmaCote51 (Hamilton Gas Tight syringe 80,000, 26 s/2″ needle [Hamilton]). The capillary was lowered to the injection site, and 2 μL of vector was injected at a rate of 0.5 μL/min. Following vector delivery, the needle remained in place for 1 min, after which it was raised 1 mm and held in place for an additional 4 min before being fully retracted.

Euthanasia

Four weeks following vector delivery, animals were euthanized in accordance with the recommendations of the American Veterinary Medical Association and as approved by the Michigan State University IACUC. Rats were deeply anesthetized with an intraperitoneal injection of 60 mg/kg pentobarbital and then transcardially perfused with Tyrode's solution followed by ice-cold 4% paraformaldehyde. Brains were thereafter placed in 4% paraformaldehyde for 24 h, after which they were transferred to 30% sucrose (in 0.1 M PO4). Once equilibrated in sucrose, brains were cut coronally into 40-μm-thick sections using a freezing-stage sliding microtome. Sections for each brain were serially distributed into six groups, arranged in plates filled with cryoprotectant, and stored at −20°C until used.

Immunostaining of brain sections

All immunostaining procedures (i.e., immunohistochemistry [IHC], immunofluorescence [IF], and LI-COR near-infrared [nIR] imaging) were performed on free-floating sections as previously described.52 Sections for brightfield imaging were quenched with 0.3% hydrogen peroxide, followed by blocking in the appropriate serum. Sections were incubated in primary antibody overnight at room temperature (RT) and thereafter in the appropriate secondary antibodies for 2 h at RT. Brightfield sections (treated with biotinylated secondary antibodies) were incubated with an avidin-biotin complex per the manufacturer's instructions (Vector ABC kit) to amplify the signal and thereafter developed using 3,3′-diaminobenzidine and 0.03% hydrogen peroxide in Tris buffer. IF and nIR sections were mounted immediately following incubation with secondary antibody. Sections were mounted on subbed slides, dehydrated using increasing concentrations of ethanol followed by xylene, and coverslipped using Cytoseal (Fisher Scientific) for IHC and DPX (Sigma Cat# 06522) for IF and nIR. Antibody details are listed in Table 1.
Table 1. Antibodies for immunohistochemistry

| Antigen | Primary host/class | Primary Cat# | Application | Primary dilution | Secondary antibody | Secondary Cat# | Secondary dilution |
| --- | --- | --- | --- | --- | --- | --- | --- |
| RFP | Rabbit polyclonal | Rockland #600-401-379 | IHC | 1:4000 | Goat anti-rabbit IgG, biotin conjugated | Millipore #AP132B | 1:500 |
| mCherry | Goat polyclonal | LS Bio #LS-C204207 | IF | 1:1000 | Donkey anti-goat IgG, Alexa Fluor 594 conjugated | Thermo-Fisher Scientific #A11058 | 1:500 |
| mCherry | Goat polyclonal | LS Bio #LS-C204207 | nIR | 1:1000 | Donkey anti-goat IgG, IRDye 800CW conjugated | LI-COR #926-32214 | 1:5000 |
| TH | Rabbit polyclonal | Millipore #AB152 | nIR | 1:4000 | Donkey anti-rabbit IgG, IRDye 680RD conjugated | LI-COR #926-68073 | 1:5000 |
| Olig2 | Rabbit monoclonal | Invitrogen #MA5-42372 | IF | 1:1000 | Goat anti-rabbit, Alexa Fluor 488 conjugated | Thermo-Fisher Scientific #A11008 | 1:500 |
| GFAP | Mouse monoclonal | Sigma-Aldrich #G6171 | IF | 1:1000 | Goat anti-mouse, Alexa Fluor 488 conjugated | Thermo-Fisher Scientific #A11001 | 1:500 |

RNAscope in situ hybridization of AAV genomes

In order to assess the subcellular localization of vector genomes, we performed in situ hybridization (ISH) using a probe against a non-transcribed portion of the genome paired with a Nissl (thionin) stain. ISH was carried out using the Advanced Cell Diagnostics RNAscope 2.5 HD Detection Kit-Brown (ACD, Cat#322310) and the probe CMV-Enh-CBA promoter-O4-C1 (ACD, Cat#1211301-C1). The protocol was carried out per the manufacturer's instructions with a few modifications for 40-μm free-floating frozen sections.53 First, the sections were washed with TBS (6 × 5 min) to remove cryoprotectant. Sections were then treated with peroxide for 45 min at RT and washed 3 × 5 min. Sections were mounted onto Superfrost Plus slides and air-dried for approximately 20 min, then washed three times in H2O to remove salts and air-dried overnight at RT. The following day, sections were incubated in target retrieval solution pre-heated to 99°C–100°C for 10 min, followed immediately by four washes in H2O and eight washes in 100% ethanol, and allowed to air-dry before a hydrophobic barrier was drawn. Next, protease (Protease III) treatment, probe hybridization, signal amplification, and signal detection steps were carried out exactly as described in the manufacturer's protocol. After the final rinse, sections were allowed to air-dry overnight at RT. The following day, sections were counterstained with thionin. Briefly, slides were rehydrated in H2O for 2 min, stained with thionin working solution for 8 min, rinsed twice in H2O for 2 min each, then serially dehydrated in ethanol (70%, 95%, 100%; 1 min each), cleared in xylene for 2 min, and coverslipped using Cytoseal mounting media.

Imaging

Slides for AI-based quantitation of mCherry+ cells were imaged on a ZEISS Axioscan (ZEISS Group; Oberkochen, Germany). Each scan consisted of a series of image tiles acquired across the X-Y plane in the tissue area with positive mCherry signal. In addition, each tile consisted of a stack of images acquired with the ×20 objective across 13 μm on the z axis with a 0.8-μm step size. Finally, acquired photomicrographs were processed and stitched together by the ZEN advanced software, producing a single high-resolution digital image per scan. Other brightfield images were acquired on a Nikon Eclipse Ni microscope.
Sections stained with LI-COR nIR secondary antibodies were scanned on the LI-COR Odyssey CLx at 21-μm resolution for the SN and 84-μm resolution for the rest of the brain regions. Fluorescent images were taken on a Nikon A1R HD25 confocal imaging system using a ×60 objective; z stacks were acquired at 0.2-μm steps across the thickness of the section. Representative images were selected from subjects that displayed mean values in terms of transduced cells or transgene expression.

AI-based enumeration of transduced cells and volumetric and stereological conversion

Although we observed transduction in areas beyond the striatum, we focused quantitation on the striatum. Sections were uploaded to an image analysis platform developed by AIforia (AIforia Technologies, Helsinki, Finland), which uses individualized deep CNN learning to facilitate quantitative histological image analysis.40 Briefly, high-resolution brightfield photomicrographs acquired using the ZEISS Axioscan were imported into the AIforia software. Striatal regions of interest, limited to the areas of transduction and guided using a rat brain atlas,36 were outlined using a contour tool in AIforia. A custom-developed deep learning process was used to train a CNN to enumerate the total number of transduced cells and to calculate the region area of transduction. After automated counting, stereological mathematical principles were applied to derive a per-hemisphere total count of vector-positive neurons and transduction spread (region area). To derive transduced cell counts, the final number from the automated analysis was summed across sections and multiplied by the reciprocal of the serial section interval (×6) (Equation 1):

$$C_H = \frac{1}{1/6} \sum_{s=i_s}^{n_s} c_{v+,s} \qquad \text{(Equation 1)}$$

where $C_H$ is the total count ($C$) for a hemisphere ($H$), derived by summing ($\Sigma$) the vector-positive profile counts ($c_{v+}$) for each section ($s$), ranging from the first section ($i_s$) to the total number of sections ($n_s$), and then multiplying the sum by the reciprocal of the section interval (constant of 1/6). Inherent in this workflow is the stereological principle of systematic random sampling, which is preserved while further subsampling within each section. Thus, the subsampling of one in six sections as would be accomplished using a stereology program is obviated by use of the AI software. Total region volume was determined by summing the immunolabeled spread within the contour to derive a total transduction area for the section series. As with neuron counts, this number was multiplied by the reciprocal of the serial section interval (×6) and then by the section cut thickness (×0.04 mm) (Equation 2):

$$V_H = \frac{1}{1/6} \left( \sum_{s=i_s}^{n_s} a_{v+,s} \right) \times 0.04\ \text{mm} \qquad \text{(Equation 2)}$$

where $V_H$ is the total regional volume ($V$) for a hemisphere ($H$), derived by summing ($\Sigma$) the vector-positive regional area ($a_{v+}$) for each section ($s$), ranging from the first section ($i_s$) to the total number of sections ($n_s$), and then multiplying the sum by the reciprocal of the section interval (constant of 1/6) and by the section cut thickness (constant of 0.04 mm).
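For readers wishing to apply the same conversion to their own per-section AI output, a minimal sketch of Equations 1 and 2 in Python is shown below. The sampling parameters (every sixth section analyzed, 0.04 mm cut thickness) follow the text above, while the per-section counts and areas in the example are hypothetical placeholders rather than data from this study, and the function names are illustrative only.

```python
# Sketch of Equations 1 and 2: converting per-section AI output into a
# per-hemisphere cell count and transduced volume. Sampling parameters follow
# the text (every 6th section analyzed, 0.04 mm section thickness); the
# per-section counts and areas below are hypothetical placeholders.

SECTION_INTERVAL = 6          # reciprocal of the 1-in-6 sampling fraction
SECTION_THICKNESS_MM = 0.04   # section cut thickness in mm

def total_cells_per_hemisphere(per_section_counts):
    """Equation 1: C_H = section interval x sum of vector-positive counts per section."""
    return SECTION_INTERVAL * sum(per_section_counts)

def transduced_volume_mm3(per_section_areas_mm2):
    """Equation 2: V_H = section interval x sum of transduced areas x section thickness."""
    return SECTION_INTERVAL * sum(per_section_areas_mm2) * SECTION_THICKNESS_MM

if __name__ == "__main__":
    counts = [120, 540, 980, 1100, 760, 310]  # hypothetical AI counts, one per analyzed section
    areas = [0.8, 1.6, 2.1, 2.3, 1.7, 0.9]    # hypothetical transduced areas (mm^2) per section
    print(total_cells_per_hemisphere(counts))  # 6 * 3810 = 22,860 cells
    print(transduced_volume_mm3(areas))        # 6 * 9.4 mm^2 * 0.04 mm ≈ 2.26 mm^3
```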
The methodology was also validated using traditional stereology per our established methods.54 Immunostained tissue sections from one to two animals per group (seven in total) were utilized. The region of interest was outlined at low magnification (4×) based on the area containing RFP+ cells throughout the rostro-caudal extent of the striatum. The Paxinos and Watson rat brain atlas was used as a reference to precisely identify the borders of this region.36 Every sixth section was sampled using the optical fractionator method with a counting frame size of 175 × 175 μm and a grid size of 600 × 600 μm in Stereo Investigator software (Version 2020 1.1; Microbrightfield, Inc., Williston, VT, USA). Counting of cells was performed using a ×40 objective on a Nikon Eclipse Ni-E microscope equipped with a motorized XY stage (Ludl Electronic Products, Hawthorne, NY, USA). The coefficient of error for each estimate was calculated and found to be less than 0.1 (Gundersen, m = 1). The results for each striatum (stereology or AI enumeration) were assessed using a regression analysis (Figure S1). An aid to conversion from AI enumeration to total population estimates and volume has been established at (please cite the current manuscript).

LI-COR-based quantitation of transgene expression

Due to its broad dynamic range, near-infrared imaging provides a robust alternative to biochemical methods (e.g., PCR) for quantifying transduction in fixed tissue. First, brain anatomical regions of interest were outlined in the IR680 channel, defined by morphological landmarks for the hippocampus, cortex, and thalamus, and by the presence of TH immunoreactivity for the striatum and SN. Next, the IR800 channel (i.e., mCherry expression) was displayed as a heatmap (Figures 3C–3H), and regions of interest were drawn around mCherry+ signal, defined as >3 AU on the heatmap scale (Figure 3B). A total of 15 sections were included for analysis of the striatum, seven sections for the hippocampus, thalamus, and cortex, and eight sections for the SN.

Statistics

Power analyses were performed to determine the sample sizes required to detect a statistical difference at p < 0.05 with a power of 0.8. Our only exclusion criterion (complete absence of transduction) was established a priori. All data were collected by experimenters blinded to the experimental conditions. Two-way analysis of variance (ANOVA) tests were used to detect statistical significance between all groups, with age (young, aged) and vector capsid (WT, HS, YH) as the independent variables. To further define any relationship across, within, and between variables, the Tukey HSD post hoc test was used where a significant main or interaction effect was found. To compare young adult vs. aged rats injected in the same structure with the identical vector construct, independent t tests were used to assess the effect of age on each capsid.10 For all analyses, a p value of less than or equal to 0.05 was considered statistically significant. All statistics were conducted in Prism (GraphPad) or R (4.1.0, build "Camp Pontanezen") using base libraries.
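To make the design of these analyses concrete, a minimal sketch of an equivalent workflow in Python (statsmodels/SciPy) is shown below. The published analyses were run in Prism or R, so this is not the authors' code; the data frame, file name, and column names are hypothetical placeholders chosen only to illustrate the 2 (age) × 3 (vector) design, the Tukey HSD post hoc tests, and the per-capsid t tests described above.

```python
# Sketch of the statistical design: 2 (age) x 3 (vector) ANOVA with Tukey HSD
# post hoc tests, plus per-capsid independent t tests for the effect of age.
# Hypothetical data frame with columns: cells (count), age (young/aged),
# vector (WT/HS/YH); the file name is a placeholder.
import pandas as pd
from scipy import stats
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("striatal_counts.csv")  # hypothetical file, not provided with the paper

# Two-way ANOVA with age and vector as independent variables (including interaction)
model = ols("cells ~ C(age) * C(vector)", data=df).fit()
print(anova_lm(model, typ=2))

# Tukey HSD post hoc comparisons across the six age x vector groups
groups = df["age"] + "_" + df["vector"]
print(pairwise_tukeyhsd(df["cells"], groups))

# Secondary analysis: independent t test of young vs. aged within each capsid
for capsid in ["WT", "HS", "YH"]:
    sub = df[df["vector"] == capsid]
    t, p = stats.ttest_ind(sub.loc[sub["age"] == "young", "cells"],
                           sub.loc[sub["age"] == "aged", "cells"])
    print(capsid, round(t, 3), round(p, 4))
```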
Data and code availability

The data necessary to interpret, verify, and extend the findings of this study will be available upon request.

Acknowledgments

We would like to thank Nathan Kuhn and Angela Velazquez for technical assistance. This work was supported by the Barrow Neurological Foundation and NIH R56 AG052328-01 (F.P.M.).

Author contributions

F.P.M., I.M.S.: Conceived the project, designed experiments, performed data analysis, and wrote the first draft of the manuscript. C.M.K.: Wrote the code for the processing of histological data and assisted with data analysis. S.E.B., S.L.B., S.M.C.: Assisted with mutant design and vector production. K.S.-C., D.J.M., M.D., L.D.B.-C., S.M.C.: Assisted with the execution of the study. All authors critically read the manuscript.

Declaration of interests

I.M.S.: Co-founder of nVector. Has received financial support from Aspen Neurosciences. F.P.M.: Co-founder of nVector Therapeutics, CavGene Therapeutics, and Neuralina Therapeutics. Has received financial support from Regenex Bio, Aspen Neurosciences, and Seelos Therapeutics. K.S.-C.: Co-founder of CavGene Therapeutics, Inc., which holds intellectual property in CaV1.3 gene silencing, and has received financial support from Regenex Bio. S.E.B.: Co-founder of Atsena Therapeutics. S.L.B.: Co-founder of Atsena Therapeutics. M.D.: Co-founder of rAAVEN. D.J.M.: Has received financial support from FujiFilm Cellular Dynamics Inc and Aspen Neurosciences. Currently an employee of Biogen. I.M.S., F.P.M., S.E.B., S.L.B., and M.D. hold patents related to AAV technology.

Supplemental information

Supplemental information can be found online. Document S1: Figures S1–S4. Document S2: Article plus supplemental information.

References

1.Al-Zaidy S., Pickard A.S., Kotha K., Alfano L.N., Lowes L., Paul G., Church K., Lehman K., Sproule D.M., Dabbous O., et al. Health outcomes in spinal muscular atrophy type 1 following AVXS-101 gene replacement therapy. Pediatr. Pulmonol. 2019;54:179–185. doi: 10.1002/ppul.24203. [DOI] [PMC free article] [PubMed] [Google Scholar] 2.Bartus R.T., Herzog C.D., Chu Y., Wilson A., Brown L., Siffert J., Johnson E.M., Jr., Olanow C.W., Mufson E.J., Kordower J.H. Bioactivity of AAV2-neurturin gene therapy (CERE-120): differences between Parkinson's disease and nonhuman primate brains. Mov. Disord. 2011;26:27–36. doi: 10.1002/mds.23442. [DOI] [PMC free article] [PubMed] [Google Scholar] 3.Chu Y., Bartus R.T., Manfredsson F.P., Olanow C.W., Kordower J.H. Long-term post-mortem studies following neurturin gene therapy in patients with advanced Parkinson's disease. Brain. 2020;143:960–975. doi: 10.1093/brain/awaa020. [DOI] [PMC free article] [PubMed] [Google Scholar] 4.Marks W.J., Jr., Bartus R.T., Siffert J., Davis C.S., Lozano A., Boulis N., Vitek J., Stacy M., Turner D., Verhagen L., et al. Gene delivery of AAV2-neurturin for Parkinson's disease: a double-blind, randomised, controlled trial. Lancet Neurol. 2010;9:1164–1172. doi: 10.1016/S1474-4422(10)70254-4. [DOI] [PubMed] [Google Scholar] 5.Fischer D.L., Gombash S.E., Kemp C.J., Manfredsson F.P., Polinski N.K., Duffy M.F., Sortwell C.E. Viral Vector-Based Modeling of Neurodegenerative Disorders: Parkinson's Disease. Methods Mol. Biol. 2016;1382:367–382. doi: 10.1007/978-1-4939-3271-9_26. [DOI] [PubMed] [Google Scholar] 6.Manfredsson F.P., Polinski N.K., Subramanian T., Boulis N., Wakeman D.R., Mandel R.J. The Future of GDNF in Parkinson's Disease. Front. Aging Neurosci. 2020;12:593572. doi: 10.3389/fnagi.2020.593572. [DOI] [PMC free article] [PubMed] [Google Scholar] 7.Manfredsson F.P., Bloom D.C., Mandel R.J. Regulated protein expression for in vivo gene therapy for neurological disorders: progress, strategies, and issues. Neurobiol. Dis. 2012;48:212–221. doi: 10.1016/j.nbd.2012.03.001. [DOI] [PubMed] [Google Scholar] 8.Collier T.J., Kanaan N.M., Kordower J.H. Ageing as a primary risk factor for Parkinson's disease: evidence from studies of non-human primates. Nat. Rev. Neurosci. 2011;12:359–366. doi: 10.1038/nrn3039. [DOI] [PMC free article] [PubMed] [Google Scholar] 9.Polinski N.K., Gombash S.E., Manfredsson F.P., Lipton J.W., Kemp C.J., Cole-Strauss A., Kanaan N.M., Steece-Collier K., Kuhn N.C., Wohlgenant S.L., Sortwell C.E.
367
Tarkovsky’s Solaris and Cyberneticist Snaut’s speech in the library. What a beautiful film. : r/TrueFilm
===============
r/TrueFilm: An in-depth discussion of film
Posted by stavis23, 1 yr. ago

At about the 2 hour mark, Snaut is celebrating his birthday and Sartorius proposes a toast.
Sartorius: "To science and to Snaut."
Snaut: "Science? Nonsense. In this situation mediocrity and genius are equally useless. We have no interest in conquering any cosmos. We want to extend the Earth to the borders of the cosmos. We don’t know what to do with other worlds. We need a mirror. We struggle for contact but we’ll never find it. We’re in the foolish human predicament of striving for a goal that he fears, that he has no need for. Man needs man."

What do you all think about this? I think it’s the heart of the picture. Those beginning scenes of nature: a leaf floating on water, plants swaying in the current, the rain on the picnic table with cherries and apples and fine china tea cups. The little girl observing the young man before bowing and saying hello. The horses and their whinny, etc. Tarkovsky is always elaborating on this theme, the meditative and contemplative, the mystical viewpoint. Thoughts?
368
Identifying Open Reading Frames in DNA Sequences - CliffsNotes
===============
Lab 3: Activity 3.1. Identifying ORFs / Understanding Reading Frames (In-Lab Assignment; take time to read through this)

Introduction
In this lesson, you will perform a paper exercise designed to reinforce your understanding of the complementary nature of DNA and how that complementarity leads to six potential protein reading frames in any given DNA sequence. You will also gain familiarity with the circular-format codon table.

Learning Objectives
At the end of this lesson, you will know that:
• Each DNA molecule is composed of two complementary strands arranged anti-parallel to one another.
• There are three potential reading frames on each strand of DNA and a total of six potential reading frames for protein translation in any given region of the DNA molecule (three on each strand).
At the end of this lesson, students will be able to:
• Identify the best open reading frame among the six possible reading frames for their protein of interest.
• Use the circular-format codon table to translate a region of DNA/RNA.

Key Concepts
DNA sequences can be read in any of six possible reading frames, but only one is usually translated by the cell into a protein. This is called the open reading frame. DNA sequences can be translated by hand using a codon table. Still, bioinformatics tools like ORF Finder make the process much faster and easier for scientists to identify the proper reading frame.

Background: Open Reading Frames
Much of the genome of eukaryotic organisms does not appear to code for proteins. Only 2-5% of the 3 billion base pairs in the human genome are thought to code for protein (or approximately 25,000-30,000 genes). The function of the rest of this DNA is a subject of much debate among scientists. However, when scanning a genome for genes that may encode proteins, scientists use bioinformatics programs like ORF Finder to look for start codons, stop codons, and stretches of DNA between the two that code for proteins at least 50 to 300 amino acids long. These open reading frames can then be analyzed further using bioinformatics tools like BLAST searches and phylogenetic analyses to determine whether these areas are similar to other known genes from different organisms, which may warrant further study in the lab. The strand of DNA that encodes a gene is often called the coding strand or the sense strand. We often refer to the protein-coding strand as the sense strand and the non-coding strand as the anti-sense strand, but by just looking at the DNA sequence, you are unlikely to know which is which. The non-coding strand serves as the template strand, as this is the DNA strand used as a template to make the messenger RNA (mRNA). Look at the following example about "What are Reading Frames?": how do we know how to read the "gene" sequence?
The open reading frame is the gene's portion that could encode a protein because it contains a start codon, a stop codon, and codons to make amino acids in between. The "open" reading frame is the "correct" reading frame. The reading frame is said to be "open" because it is not interrupted by stop codons.

Reading frame: A reading frame is a contiguous and non-overlapping sequence of three-nucleotide codons in DNA or RNA. There are three possible reading frames in an mRNA strand and six in a double-stranded DNA molecule (three reading frames from each of the two DNA strands).

Open reading frame: A reading frame that contains a start codon and a stop codon, with multiple three-nucleotide codons in between. The open reading frame in a particular region of DNA is the correct reading frame from which to translate the DNA into protein. The longest open reading frame is the one that's most likely to be right.

Analyzing a DNA Sequence
Here are the rules for finding an ORF in a piece of bacterial DNA:
1) It must start with ATG. In this exercise, the first ATG is the Start codon. In a real gene search, you would not have this information. In bacteria, an ORF on an mRNA piece almost always begins with AUG, which corresponds to ATG in the DNA segment that codes for the mRNA.
2) It must end with TAA, TAG, or TGA. According to the standard genetic code, there are three Stop codons on mRNA: UAA, UAG, and UGA, corresponding to TAA, TAG, and TGA in the parent DNA segment.
3) It must be at least 300 nucleotides long (coding for 100 amino acids).
4) The ATG Start codon and the Stop codon must be in frame. This means that the total number of bases in the sequence from the Start to the Stop codon must be evenly divisible by 3.

Find an open reading frame (ORF) in this segment of DNA and answer the questions.

TACGCAATGCGTATCATTCTGCTGGGCGCTCCGGGCGCAGGTAAAGGTACTCAGGCTCAATTCATCATGGAGAAA
TACGGCATTCCGCAAATCTCTACTGGTGACATGTTGCGCGCCGCTGTAAAAGCAGGTTCTGAGTTAGGTCTGAAAG
CAAAAGAAATTATGGATGCGGGCAAGTTGGTGACTGATGAGTTAGTTATCGCATTAGTCAAAGAACGTATCACAC
AGGAAGATTGCCGCGATGGTTTTCTGTTAGACGGGTTCCCGCGTACCATTCCTCAGGCAGATGCCATGAAAGAAG
CCGGTATCAAAGTTGATTATGTGCTGGAGTTTGATGTTCCAGACGAGCTGATTGTTGAGCGCATTGTCGGCCGTCG
GGTACATGCTGCTTCAGGCCGTGTTTATCACGTTAAATTCAACCCACCTAAAGTTGAAGATAAAGATGATGTTACC
GGTGAAGAGCTGACTATTCGTAAAGATGATCAGGAAGCGACTGTCCGTAAGCGTCTTATCGAATATCATCAACAA
ACTGCACCATTGGTTTCTTACTATCATAAAGAAGCGGATGCAGGTAATACGCAATATTTTAAACTGGACGGAACCC
GTAATGTAGCAGAAGTCAGTGCTGAACTGGCGACTATTCTCGGTTAATTCTGGATGGCCTTATAGCTAAGGCGGTT
TAAGGCCGCCTTAGCTATTTCAAGTAAGAAGGGCGTAGTACCTACAAAAGGAGATTTGGCATGATGCAAAGCAAA
CCCGGCGTATTAATGGTTAATTTGGGGACACCAGATGCTCCAACGTCGAAAGCTATCAAGCGTTATTTAGCTGAGT
TTTTGAGTGACCGCCGGGTAGTTGATACTTCCCCATTGCTATGGTGGCCATTGCTGCATGGTGTTATTTTACCGCTT
CGGTCACCACGTGTAGCAAAACTTTATCAATCCGTTTGGATGGAAGAGGGCTCTCCTTTATTGGTTTATAGCCGCC
GCCAGCAGAAAGCACTGGCAGCAAGAATGCCTGATATTCCTGTAGAATTAGGCATGAGCTATGGTTCAC

In this problem, you will use a computer to help you identify an open reading frame, determine the protein that it will express, and find the bacterial source for that protein. Admittedly, trying to find an ORF by hand is a tedious approach. Here is an easier one:
1. Highlight the entire DNA sequence from the pre-lab activity and copy it.
Then go to the Translate tool on the ExPASy server:
2. Paste the sequence into the box entitled "Please enter a DNA or RNA sequence in the box below (numbers and blanks are ignored)."
3. Then select "Verbose ('Met,' 'Stop,' spaces between residues)" as the output format and click on "Translate Sequence."

PART 1. Using Bioinformatics Tools to Analyze Protein Sequences [TA-Guided]
4. The "Results of Translation" page that appears contains six different reading frames.

Q.1 Identify the reading frame that contains a protein (more than 100 continuous amino acids with no interruptions by a Stop codon). Paste your results here. Paste a screenshot here. [1 point - screenshot]
Q.3 Note the frame # and orientation of the DNA sequence. [1 point]

Now go back to the Translate tool page, leave the DNA sequence in the sequence box, but select "Compact ('M,' '-,' no spaces)" as the output format.

Q.4 Go to the same reading frame as before and copy the protein sequence (by one-letter abbreviations), [1 point] starting with "M" for methionine (include it) and ending in "-" for the Stop codon (don't include this symbol).

Now you will identify the protein and the bacterial source. Go to the NCBI BLAST page. Remember we are working with an amino acid (aa) sequence. On the BLAST page, select "Protein-protein BLAST." Enter your protein sequence in the "Search" box. Use the default values for the rest of the page and click on the "BLAST!" button.

Q.5 Paste a screenshot of your results here with the top 3 hits. What is the protein, and what is the source? [1 point; 0.5 for screenshot, 0.25 for protein name, 0.25 for source]
Answer:

Part 1: Exercise 2
A top-rated tool for translating DNA into protein is ORF Finder (Open Reading Frame Finder), available through the NCBI. Once you have translated your protein in silico (or "in the computer"), you will select the correct reading frame to use for the rest of your analyses. In silico: an expression used to mean "performed on a computer or via computer."
Aim: Today, as a bioinformatician, you will:
1. Translate your DNA sequence using ORF Finder.
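The handout relies on the ExPASy Translate tool and NCBI ORF Finder, but the six-frame scan those tools perform can be prototyped in a few lines of code. The sketch below is not part of the original lab; it is a minimal illustration, written under the four rules listed above (start at ATG, stop at TAA/TAG/TGA, start and stop in the same frame, at least 300 nucleotides). The `dna` placeholder and the `min_len` parameter are assumptions you would replace with the exercise sequence and cutoff.

```python
# Minimal six-frame ORF scan following the handout's four rules:
# start at ATG, stop at TAA/TAG/TGA, start and stop in the same frame,
# minimum length 300 nucleotides (100 codons). Illustrative sketch only.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA string (A<->T, G<->C)."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def orfs_in_strand(seq: str, min_len: int = 300):
    """Yield (frame, start, end, orf_sequence) for ORFs on one strand."""
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i                      # open a candidate ORF at the first ATG
            elif start is not None and codon in STOP_CODONS:
                end = i + 3                    # include the stop codon
                if end - start >= min_len:
                    yield frame, start, end, seq[start:end]
                start = None                   # look for the next ATG

def find_orfs(seq: str, min_len: int = 300):
    """Scan all six reading frames (three per strand)."""
    seq = "".join(seq.split()).upper()         # drop whitespace and line wraps
    results = [("+", *orf) for orf in orfs_in_strand(seq, min_len)]
    results += [("-", *orf) for orf in orfs_in_strand(reverse_complement(seq), min_len)]
    return results

if __name__ == "__main__":
    dna = "TACGCAATGCGT..."                    # placeholder: paste the exercise sequence here
    for strand, frame, start, end, orf in find_orfs(dna):
        print(f"strand {strand}, frame {frame + 1}: {start}-{end} ({len(orf)} nt)")
```

Even with such a script, the exercise's final steps still apply: translate the candidate frame and confirm its identity with a protein BLAST search.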
369
Published Time: 2009-06-12T01:50:37Z
Vieta jumping - Wikipedia
===============
From Wikipedia, the free encyclopedia

In number theory, Vieta jumping, also known as root flipping, is a proof technique. It is most often used for problems in which a relation between two integers is given, along with a statement to prove about its solutions. In particular, it can be used to produce new solutions of a quadratic Diophantine equation from known ones. There exist multiple variations of Vieta jumping, all of which involve the common theme of infinite descent by finding new solutions to an equation using Vieta's formulas.

History
Vieta jumping is a classical method in the theory of quadratic Diophantine equations and binary quadratic forms. For example, it was used in the analysis of the Markov equation back in 1879 and in the 1953 paper of Mills. In 1988, the method came to the attention of the mathematical olympiad community in the light of the first olympiad problem to use it in a solution, which was proposed for the International Mathematical Olympiad and assumed to be the most difficult problem on the contest:

Let a and b be positive integers such that ab + 1 divides a² + b². Show that (a² + b²)/(ab + 1) is the square of an integer.

Arthur Engel wrote the following about the problem's difficulty: "Nobody of the six members of the Australian problem committee could solve it. Two of the members were husband and wife George and Esther Szekeres, both famous problem solvers and problem creators. Since it was a number theoretic problem it was sent to the four most renowned Australian number theorists. They were asked to work on it for six hours. None of them could solve it in this time. The problem committee submitted it to the jury of the XXIX IMO marked with a double asterisk, which meant a superhard problem, possibly too hard to pose. After a long discussion, the jury finally had the courage to choose it as the last problem of the competition. Eleven students gave perfect solutions."

Among the eleven students receiving the maximum score for solving this problem were Ngô Bảo Châu, Ravi Vakil, Zvezdelina Stankova, and Nicușor Dan.
Emanouil Atanassov (from Bulgaria) solved the problem in a paragraph and received a special prize.

Standard Vieta jumping
The concept of standard Vieta jumping is a proof by contradiction, and consists of the following four steps:
1. Assume toward a contradiction that some solution (a₁, a₂, ...) exists that violates the given requirements.
2. Take the minimal such solution according to some definition of minimality.
3. Replace some aᵢ by a variable x in the formulas, and obtain an equation for which aᵢ is a solution.
4. Using Vieta's formulas, show that this implies the existence of a smaller solution, hence a contradiction.

Example
Problem #6 at IMO 1988: Let a and b be positive integers such that ab + 1 divides a² + b². Prove that (a² + b²)/(ab + 1) is a perfect square.
Fix some value k that is a non-square positive integer. Assume there exist positive integers (a, b) for which k = (a² + b²)/(ab + 1).
Let (A, B) be positive integers for which k = (A² + B²)/(AB + 1) and such that A + B is minimized, and without loss of generality assume A ≥ B.
Fixing B, replace A with the variable x to yield x² − (kB)x + (B² − k) = 0. We know that one root of this equation is x₁ = A. By standard properties of quadratic equations, we know that the other root satisfies x₂ = kB − A and x₂ = (B² − k)/A.
The first expression for x₂ shows that x₂ is an integer, while the second expression implies that x₂ ≠ 0 since k is not a perfect square. From (x₂² + B²)/(x₂B + 1) = k > 0 it further follows that x₂B > −1, and hence x₂ is a positive integer. Finally, A ≥ B implies that x₂ = (B² − k)/A < B²/A ≤ A, hence x₂ < A, and thus x₂ + B < A + B, which contradicts the minimality of A + B.

Constant descent Vieta jumping
The method of constant descent Vieta jumping is used when we wish to prove a statement regarding a constant k having something to do with the relation between a and b. Unlike standard Vieta jumping, constant descent is not a proof by contradiction, and it consists of the following four steps:
1. The equality case is proven so that it may be assumed that a > b.
2. b and k are fixed and the expression relating a, b, and k is rearranged to form a quadratic with coefficients in terms of b and k, one of whose roots is a. The other root, x₂, is determined using Vieta's formulas.
3. For all (a, b) above a certain base case, show that 0 < x₂ < b < a and that x₂ is an integer. Thus, while maintaining the same k, we may replace (a, b) with (b, x₂) and repeat this process until we arrive at the base case.
4. Prove the statement for the base case, and as k has remained constant through this process, this is sufficient to prove the statement for all ordered pairs.

Example
Let a and b be positive integers such that ab divides a² + b² + 1. Prove that 3ab = a² + b² + 1.
If a = b, then a² dividing 2a² + 1 implies that a² divides 1, and hence the positive integers a = b = 1, and 3(1)(1) = 1² + 1² + 1. So, without loss of generality, assume that a > b.
For any (a, b) satisfying the given condition, let k = (a² + b² + 1)/(ab) and rearrange and substitute to get x² − (kb)x + (b² + 1) = 0. One root of this quadratic is a, so by Vieta's formulas the other root may be written as follows: x₂ = kb − a = (b² + 1)/a.
The first equation shows that x₂ is an integer and the second that it is positive. Because a > b and they are both integers, a ≥ b + 1, and hence ab ≥ b² + b; as long as b > 1, we always have ab > b² + 1, and therefore x₂ = (b² + 1)/a < b.
Thus, while maintaining the same k, we may replace (a, b) with (b, x₂) and repeat this process until we arrive at the base case.
The base case we arrive at is the case where b = 1. For (a, 1) to satisfy the given condition, a must divide a² + 2, which implies that a divides 2, making a either 1 or 2. The first case is eliminated because a = b. In the second case, k = (a² + b² + 1)/(ab) = 6/2 = 3. As k has remained constant throughout this process of Vieta jumping, this is sufficient to show that for any (a, b) satisfying the given condition, k will always equal 3.

Geometric interpretation
Vieta jumping can be described in terms of lattice points on hyperbolas in the first quadrant. The same process of finding smaller roots is used instead to find lower lattice points on a hyperbola while remaining in the first quadrant. The procedure is as follows:
1. From the given condition we obtain the equation of a family of hyperbolas that are unchanged by switching x and y, so that they are symmetric about the line y = x.
2. Prove the desired statement for the intersections of the hyperbolas and the line y = x.
3. Assume there is some lattice point (x, y) on some hyperbola and without loss of generality x < y. Then by Vieta's formulas, there is a corresponding lattice point with the same x-coordinate on the other branch of the hyperbola, and by reflection through y = x a new point on the original branch of the hyperbola is obtained.
4. It is shown that this process produces lower points on the same branch and can be repeated until some condition (such as x = 0) is achieved. Then by substitution of this condition into the equation of the hyperbola, the desired conclusion will be proven.

Example
This method can be applied to problem #6 at IMO 1988: Let a and b be positive integers such that ab + 1 divides a² + b². Prove that (a² + b²)/(ab + 1) is a perfect square.
Let (a² + b²)/(ab + 1) = q and fix the value of q. If q = 1, q is a perfect square as desired. If q = 2, then (a − b)² = 2 and there is no integral solution a, b. When q > 2, the equation x² + y² − qxy − q = 0 defines a hyperbola H, and (a, b) represents an integral lattice point on H.
If (x, x) is an integral lattice point on H with x > 0, then (since q is integral) one can see that x = 1. This proposition's statement is then true for the point (x, x).
Now let P = (x, y) be a lattice point on a branch of H with x, y > 0 and x ≠ y (as the previous remark covers the case x = y). By symmetry, we can assume that x < y and that P is on the higher branch of H. By applying Vieta's formulas, (x, qx − y) is a lattice point on the lower branch of H. Let y′ = qx − y. From the equation for H, one sees that 1 + xy′ > 0. Since x > 0, it follows that y′ ≥ 0. Hence the point (x, y′) is in the first quadrant. By reflection, the point (y′, x) is also a point in the first quadrant on H. Moreover, from Vieta's formulas, yy′ = x² − q, and y′ = (x² − q)/y. Combining this equation with x < y, one can show that y′ < x. The newly constructed point Q = (y′, x) is then in the first quadrant, on the higher branch of H, and with smaller x, y-coordinates than the point P we started with.
The process in the previous step can be repeated whenever the point Q has a positive x-coordinate. However, since the x-coordinates of these points will form a decreasing sequence of non-negative integers, the process can only be repeated finitely many times before it produces a point Q = (0, y) on the upper branch of H; by substitution, q = y² is a square as required.
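The descent in the standard example can also be run numerically. The sketch below is not from the article; it is a minimal illustration, assuming a starting pair such as (30, 8) that satisfies the divisibility hypothesis, of how repeatedly replacing the larger coordinate by the other root kB − A drives one coordinate to 0 and exposes k as a perfect square.

```python
# Illustrative sketch: walk the Vieta-jumping descent from the IMO 1988 example.
# Given positive integers (a, b) with ab + 1 dividing a^2 + b^2, repeatedly
# replace the larger coordinate A by the other root kB - A of
# x^2 - (kB)x + (B^2 - k) = 0 until one coordinate reaches 0.

def vieta_descent(a: int, b: int):
    """Return (k, path of visited pairs) for a pair satisfying the hypothesis."""
    assert a > 0 and b > 0 and (a * a + b * b) % (a * b + 1) == 0, \
        "hypothesis ab + 1 | a^2 + b^2 fails"
    k = (a * a + b * b) // (a * b + 1)
    A, B = max(a, b), min(a, b)
    path = [(A, B)]
    while B > 0:
        A, B = B, k * B - A          # Vieta jump: keep B, swap A for the other root
        path.append((A, B))
    # When B reaches 0, k = (A^2 + 0) / (0 + 1) = A^2, a perfect square.
    return k, path

if __name__ == "__main__":
    k, path = vieta_descent(30, 8)
    print(k, path)                   # 4 [(30, 8), (8, 2), (2, 0)]  ->  k = 2^2
```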
See also
Vieta's formulas
Proof by contradiction
Infinite descent
Markov number
Apollonian gasket

Notes
1. Mills 1953.
2. Arthur Engel (1998). Problem Solving Strategies. Problem Books in Mathematics. Springer. p. 127. doi:10.1007/b97682. ISBN 978-0-387-98219-9.
3. "The Return of the Legend of Question Six". Numberphile. August 16, 2016. Archived from the original on 2021-12-20 – via YouTube.
4. "International Mathematical Olympiad". www.imo-official.org. Retrieved 29 September 2020.
5. "Results of the 1988 International Mathematical Olympiad". Imo-official.org. Retrieved 2013-03-03.
6. "Individual ranking of Emanouil Atanassov". International Mathematical Olympiad.
7. Yimin Ge (2007). "The Method of Vieta Jumping" (PDF). Mathematical Reflections. 5.
8. "AoPS Forum – One of my favourites problems, yeah!". Artofproblemsolving.com. Retrieved 2023-01-08.
9. K. S. Brown. "N = (x^2 + y^2)/(1+xy) is a Square". MathPages.com. Retrieved 2016-09-26.
10. "AoPS Forum — Lemur Numbers". Artofproblemsolving.com. Retrieved 2023-01-08.
11. "AoPS Forum – xy | x^2+y^2+1". Artofproblemsolving.com. 2005-06-07. Retrieved 2023-01-08.

External links
Vieta Root Jumping at Brilliant.org
Mills, W. H. (1953). "A system of quadratic Diophantine equations". Pacific J. Math. 3 (1): 209–220.
370
Published Time: 2004-09-06T09:00:43Z
Peter Sloterdijk - Wikipedia
===============
From Wikipedia, the free encyclopedia
German philosopher (born 1947)

Peter Sloterdijk (photo: Sloterdijk in 2009)
Born: 26 June 1947 (age 78), Karlsruhe, Württemberg-Baden, Germany
Alma mater: University of Munich, University of Hamburg
Era: 21st-century philosophy
Region: Western philosophy
School: Phenomenology, philosophical anthropology, posthumanism
Notable ideas: Spherology (Sphärologie), Human Park (Menschenpark), Gifts instead of Taxes

Peter Sloterdijk (/ˈsloʊtərdaɪk/; German: [ˈsloːtɐˌdaɪk]; born 26 June 1947) is a German philosopher and cultural theorist. He was a professor of philosophy and media theory at the University of Art and Design Karlsruhe, where he served as Rector from 2001 to 2015. He co-hosted the German television show Das Philosophische Quartett from 2002 until 2012.

Biography
Sloterdijk's father was Dutch, his mother German. He studied philosophy, German studies and history at the University of Munich and the University of Hamburg from 1968 to 1974. In 1975, he received his PhD from the University of Hamburg. In the 1980s, he worked as a freelance writer, and published his Critique of Cynical Reason (German: Kritik der zynischen Vernunft) in 1983. Sloterdijk has since published a number of philosophical works acclaimed in Germany. In 2001, he was named chancellor of the University of Art and Design Karlsruhe, part of the Center for Art and Media Karlsruhe. His best-known Karlsruhe student and former assistant is Marc Jongen, a member of the Bundestag. In 2002, Sloterdijk began to co-host Das Philosophische Quartett ('The Philosophical Quartet'), a show on the German ZDF television channel devoted to discussing key contemporary issues in-depth.
Philosophical stance [edit] Sloterdijk rejects the existence of dualisms—body and soul, subject and object, culture and nature, etc.—since their interactions, "spaces of coexistence", and common technological advancement create hybrid realities. Sloterdijk's ideas are sometimes referred to as posthumanism, and seek to integrate different components that have been, in his opinion, erroneously considered detached from each other. Consequently, he proposes the creation of an "ontological constitution" that would incorporate all beings—humans, animals, plants, and machines. Philosophical style [edit] In the style of Nietzsche[citation needed], Sloterdijk remains convinced that contemporary philosophers have to think dangerously and let themselves be "kidnapped" by contemporary "hyper-complexities": they must forsake our present humanist and nationalist world for a wider horizon at once ecological and global. Sloterdijk's philosophical style strikes a balance between the firm academicism of a scholarly professor and a certain sense of anti-academicism (witness his ongoing interest in the ideas of Osho, of whom he became a disciple in the late seventies). Taking a sociological stance, Andreas Dorschel sees Sloterdijk's timely innovation at the beginning of the 21st century in having introduced the principles of celebrity into philosophy. Sloterdijk himself, viewing exaggeration as necessary to catch attention, describes the way he presents his ideas as "hyperbolic" (hyperbolisch). Major works [edit] Critique of Cynical Reason [edit] The Kritik der zynischen Vernunft, published by Suhrkamp in 1983 (and in English as Critique of Cynical Reason, 1987), became the best-selling work on philosophy in the German language since the Second World War and launched Sloterdijk's career as an author. Spheres [edit] The trilogy Spheres is the philosopher's magnum opus. The first volume was published in 1998, the second in 1999, and the last in 2004. Spheres deals with "spaces of coexistence", spaces commonly overlooked or taken for granted which conceal information crucial to developing an understanding of humanity. The exploration of these spheres begins with the basic difference between mammals and other animals: the biological and utopian comfort of the mother's womb, which humans try to recreate through science, ideology, and religion. From these microspheres (ontological relations such as fetus-placenta) to macrospheres (macro-uteri such as states), Sloterdijk analyzes spheres where humans try but fail to dwell and traces a connection between vital crises (e.g., emptiness and narcissistic detachment) and crises created when a sphere shatters. Sloterdijk has said that the first paragraphs of Spheres are "the book that Heidegger should have written", a companion volume to Being and Time, namely, "Being and Space".[citation needed] He was referring to his initial exploration of the idea of Dasein, which is then taken further as Sloterdijk distances himself from Heidegger's positions. Nietzsche Apostle [edit] On 25 August 2000, in Weimar, Sloterdijk gave a speech on Nietzsche; the occasion was the centennial of the latter philosopher's death. The speech was later printed as a short book and translated into English. Sloterdijk presented the idea that language is fundamentally narcissistic: individuals, states and religions use language to promote and validate themselves. 
Historically however, Christianity and norms in Western culture have prevented orators and authors from directly praising themselves, so that for example they would instead venerate God or praise the dead in eulogies, to demonstrate their own skill by proxy. In Sloterdijk's account, Nietzsche broke with this norm by regularly praising himself in his own work. For examples of classical Western "proxy-narcissism", Sloterdijk cites Otfrid of Weissenburg, Thomas Jefferson and Leo Tolstoy, each of whom prepared edited versions of the four Gospels: the Evangelienbuch, the Jefferson Bible and the Gospel in Brief, respectively. For Sloterdijk, each work can be regarded as "a fifth gospel" in which the editor validates his own culture by editing tradition to conform to his own historical situation. With this background, Sloterdijk explains that Nietzsche also presented his work Thus Spoke Zarathustra as a kind of fifth gospel. In Sloterdijk's account, Nietzsche engages in narcissism to an embarrassing degree, particularly in Ecce Homo, promoting a form of individualism and presenting himself and his philosophy as a brand. However, just as the Christian Gospels were appropriated by the above editors, so too was Nietzsche's thought appropriated and misinterpreted by the Nazis. Sloterdijk concludes the work by comparing Nietzsche's individualism with that of Ralph Waldo Emerson, as in Self-Reliance. Globalization [edit] Sloterdijk also argues that the current concept of globalization lacks historical perspective. In his view it is merely the third wave in a process of overcoming distances (the first wave being the metaphysical globalization of the Greekcosmology and the second the nautical globalization of the 15th and 16th centuries). The difference for Sloterdijk is that, while the second wave created cosmopolitanism, the third is creating a global provincialism. Sloterdijk's sketch of a philosophical history of globalization can be found in Im Weltinnenraum des Kapitals (2005; translated as In the World Interior of Capital), subtitled "Die letzte Kugel" ("The final sphere"). In an interview with Noema Magazine, Sloterdijk expanded upon the idea of “planetary co-immunism”, referring to the need to "share the means of protection even with the most distant members of the family of man/woman" when faced with shared threats such as pandemics. Rage and Time [edit] Main article: Rage and Time In his Zorn und Zeit (translated as Rage and Time), Sloterdijk characterizes the emotion of rage as a psychopolitical force throughout human history. The political aspects are especially pronounced in the Western tradition, beginning with the opening words of Homer's Iliad, "Of the rage of Achilles, son of Peleus, sing, O Goddess...". Sloterdijk acknowledges the contributions of psychoanalysis for our understanding of strong emotional attitudes: "In conformity with its basic erotodynamic approach, psychoanalysis brought much hatred to light, the other side of life." (Rage and Time, p.14) Importantly, for Sloterdijk, Judeo-Christian conceptions of God ultimately "piggyback" on the feelings of rage and resentment, creating "metaphysical revenge banks". For Sloterdijk, "God thus becomes the location of a transcendent repository of suspended human rage-savings and frozen plans of revenge." 
Reprogenetics dispute
Shortly after Sloterdijk conducted a symposium on philosophy and Heidegger, he stirred up controversy with his essay "Regeln für den Menschenpark" ("Rules for the Human Park", 1999). In this text, Sloterdijk regards cultures and civilizations as "anthropogenic hothouses," installations for the cultivation of human beings; just as we have established wildlife preserves to protect certain animal species, so too ought we to adopt more deliberate policies to ensure the survival of Aristotle's zoon politikon. "The taming of man has failed", Sloterdijk laments. "Civilisation's potential for barbarism is growing; the everyday bestialisation of man is on the increase." Because of the eugenic policies of the Nazis in Germany's recent history, such discussions are seen in Germany as carrying a sinister load. Breaking a German taboo on the discussion of genetic manipulation, Sloterdijk's essay suggests that the advent of new genetic technologies requires more forthright discussion and regulation of "bio-cultural" reproduction. In the eyes of Habermas, this made Sloterdijk a "fascist". Sloterdijk replied that this was, itself, resorting to "fascist" tactics to discredit him. The core of the controversy was not only Sloterdijk's ideas but also his use of the German words Züchtung ("breeding", "cultivation") and Selektion ("selection"). Sloterdijk rejected the accusation of Nazism, which he considered alien to his historical context. Still, the paper started a controversy in which Sloterdijk was strongly criticized, both for his alleged usage of a fascist rhetoric to promote Plato's vision of a government with absolute control over the population, and for committing a non-normative, simplistic reduction of the bioethical issue itself. This second criticism was based on the vagueness of Sloterdijk's position on how exactly society would be affected by developments in genetic science. After the controversy multiplied positions both for and against him, Die Zeit published an open letter from Sloterdijk to Habermas in which he vehemently accused Habermas of "criticizing behind his back" and espousing a view of humanism that Sloterdijk had declared dead.

Welfare state dispute
Another dispute emerged after Sloterdijk's article "Die Revolution der gebenden Hand" (13 June 2009; transl. "The revolution of the giving hand") in the Frankfurter Allgemeine, one of Germany's most widely read newspapers. There Sloterdijk claimed that the national welfare state is a "fiscal kleptocracy" that had transformed the country into a "swamp of resentment" and degraded its citizens into "mystified subjects of tax law". Sloterdijk opened the text with the famous quote of leftist critics of capitalism (made famous in the 19th century by Proudhon in his "What Is Property?") "Property is theft", stating, however, that it is nowadays the modern state that is the biggest taker. "We are living in a fiscal grabbing semi-socialism – and nobody calls for a fiscal civil war." He repeated his statements and stirred up the debate in his articles titled "Kleptokratie des Staates" (transl.
"Kleptocracy of the state") and "Aufbruch der Leistungsträger" (transl. "Uprising of the performers") in the German monthly Cicero – Magazin für politische Kultur. According to Sloterdijk, the institutions of the welfare state lend themselves to a system that privileges the marginalized, but relies, unsustainably, on the class of citizens who are materially successful. Sloterdijk's provocative recommendation was that income taxes should be deeply reduced, the difference being made up by donations from the rich in a system that would reward higher givers with social status. Achievers would be praised for their generosity, rather than being made to feel guilty for their success, or resentful of society's dependence on them. In January 2010, an English translation was published, titled "A Grasping Hand – The modern democratic state pillages its productive citizens", in Forbes and in the Winter 2010 issue of City Journal. Sloterdijk's 2010 book, Die nehmende Hand und die gebende Seite, contains the texts that triggered the 2009–2010 welfare state dispute. Honours and awards [edit] 1993: Ernst Robert Curtius Prize for Essay Writing 2000: Friedrich Märker Prize for Essay Writing 2001: Christian Kellerer Prize for the future of philosophical thought 2005: Business Book Award for the Financial Times Deutschland 2005: Sigmund Freud Prize for Scientific Prose 2005: Austrian Decoration for Science and Art 2006: Commander of the Ordre des Arts et des Lettres 2008: Lessing Prize for Criticism[de] 2008: Cicero Prize[de] 2008: Internationaler Mendelssohn-Preis zu Leipzig[de] (category Social Responsibility) 2009: BDA award for architectural criticism 2013: Ludwig Börne Prize 2021: European Prize for Political Culture of the Hans Ringier Foundation (50,000 Franc) Honorary doctorates 2011: Honorary doctorate from the University of Nijmegen, Netherlands 2023: Honorary doctorate from the West University of Timișoara, Romania Film appearances [edit] Marx Reloaded, Arte, April 2011 List of works [edit] Works in English translation [edit] Critique of Cynical Reason, translation by Michael Eldred; foreword by Andreas Huyssen, Minneapolis, University of Minnesota Press, 1988. ISBN0-8166-1586-1 Thinker on Stage: Nietzsche's Materialism, translation by Jamie Owen Daniel; foreword by Jochen Schulte-Sasse, Minneapolis, University of Minnesota Press, 1989. ISBN0-8166-1765-1 Theory of the Post-War Periods: Observations on Franco-German relations since 1945, translation by Robert Payne; foreword by Klaus-Dieter Müller, Springer, 2008. ISBN3-211-79913-3 Terror from the Air, translation by Amy Patton, Los Angeles, Semiotext(e), 2009. ISBN1-58435-072-5 God's Zeal: The Battle of the Three Monotheisms, Polity Pr., 2009. ISBN978-0-7456-4507-0 Derrida, an Egyptian, Polity Pr., 2009. ISBN0-7456-4639-5 Rage and Time, translation by Mario Wenning, New York, Columbia University Press, 2010. ISBN978-0-231-14522-0 Neither Sun nor Death, translation by Steven Corcoran, Semiotext(e), 2011. ISBN978-1-58435-091-0 – Sloterdijk answers questions posed by German writer Hans-Jürgen Heinrichs, commenting on such issues as technological mutation, development media, communication technologies, and his own intellectual itinerary. Bubbles: Spheres Volume I: Microspherology, translation by Wieland Hoban, Los Angeles, Semiotext(e), 2011. ISBN1-58435-104-7 The Art of Philosophy: Wisdom as a Practice, translation by Karen Margolis, New York, Columbia University Press, 2012. 
You Must Change Your Life, translation by Wieland Hoban, Cambridge, Polity Press, 2013. ISBN 978-0-7456-4921-4
In the World Interior of Capital: Towards a Philosophical Theory of Globalization, translation by Wieland Hoban, Cambridge, Polity Press, 2013. ISBN 978-0-7456-4769-2
Nietzsche Apostle (Semiotext(e) Intervention Series), translation by Steve Corcoran, Los Angeles, Semiotext(e), 2013. ISBN 978-1-58435-099-6
Globes: Spheres Volume II: Macrospherology, translation by Wieland Hoban, Los Angeles, Semiotext(e), 2014. ISBN 1-58435-160-8
Foams: Spheres Volume III: Plural Spherology, translation by Wieland Hoban, Los Angeles, Semiotext(e), 2016. ISBN 1-58435-187-X
Not Saved: Essays after Heidegger, translation by Ian Alexander Moore and Christopher Turner, Cambridge, Polity Press, 2016.
"The Domestication of Human Beings and the Expansion of Solidarities", in J. Koltan (ed.), Solidarity and the Crisis of Trust, translated by Jeremy Gaines, Gdansk: European Solidarity Centre, 2016, pp. 79–93
What Happened in the 20th Century?, translation by Christopher Turner, Cambridge, Polity Press, 2018.
After God, translation by Ian Alexander Moore, Polity Press, 2020.
Infinite Mobilization, translation by Sandra Berjan, Polity Press, 2020.
Making the Heavens Speak: Religion as Poetry, translation by Robert Hughes, Polity Press, 2022.
Prometheus's Remorse: From the Gift of Fire to Global Arson, translated by Hunter Bolin, Semiotext(e), 2024.
The Terrible Children of Modernity: An Antigenealogical Experiment, translation by Oliver Berghof, Columbia University Press, 2025.

Works in Spanish translation

Estrés y Libertad, traducción de Paula Kuffer, Buenos Aires, Ediciones Godot, 2017. ISBN 978-987-4086-20-4
Crítica de la razón cínica, Ediciones Siruela, 2019 edition. ISBN 978-841-7996-07-9
Esferas I, Ediciones Siruela, 2003 edition. ISBN 978-847-8446-54-4
Esferas II, Ediciones Siruela, 2014 edition. ISBN 978-847-8447-54-1
Esferas III, Ediciones Siruela, 2014 edition. ISBN 978-847-8449-51-4

Original German titles

Kritik der zynischen Vernunft, 1983.
Der Zauberbaum. Die Entstehung der Psychoanalyse im Jahr 1785, 1985.
Der Denker auf der Bühne. Nietzsches Materialismus, 1986. (Thinker on Stage: Nietzsche's Materialism)
Kopernikanische Mobilmachung und ptolmäische Abrüstung, 1986.
Zur Welt kommen – Zur Sprache kommen. Frankfurter Vorlesungen, 1988.
Eurotaoismus. Zur Kritik der politischen Kinetik, 1989.
Versprechen auf Deutsch. Rede über das eigene Land, 1990.
Weltfremdheit, 1993.
Falls Europa erwacht. Gedanken zum Programm einer Weltmacht am Ende des Zeitalters seiner politischen Absence, 1994.
Scheintod im Denken – Von Philosophie und Wissenschaft als Übung, Frankfurt am Main (Suhrkamp), 1995.
Im selben Boot – Versuch über die Hyperpolitik, Frankfurt am Main (Suhrkamp), 1995.
Selbstversuch, Ein Gespräch mit Carlos Oliveira, 1996.
Der starke Grund zusammen zu sein. Erinnerungen an die Erfindung des Volkes, 1998.
Sphären I – Blasen, Mikrosphärologie, 1998. (Spheres I)
Sphären II – Globen, Makrosphärologie, 1999. (Spheres II)
Regeln für den Menschenpark. Ein Antwortschreiben zu Heideggers Brief über den Humanismus, 1999.
Die Verachtung der Massen. Versuch über Kulturkämpfe in der modernen Gesellschaft, 2000.
Über die Verbesserung der guten Nachricht. Nietzsches fünftes Evangelium. Rede zum 100. Todestag von Friedrich Nietzsche, 2000.
Nicht gerettet. Versuche nach Heidegger, 2001.
Die Sonne und der Tod, Dialogische Untersuchungen mit Hans-Jürgen Heinrichs, 2001.
Tau von den Bermudas. Über einige Regime der Phantasie, 2001.
Luftbeben. An den Wurzeln des Terrors, 2002.
Sphären III – Schäume, Plurale Sphärologie, 2004. (Spheres III)
Im Weltinnenraum des Kapitals, 2005.
Was zählt, kehrt wieder. Philosophische Dialoge, with Alain Finkielkraut (from French), 2005.
Zorn und Zeit. Politisch-psychologischer Versuch, 2006. ISBN 3-518-41840-8
Der ästhetische Imperativ, 2007.
Derrida, ein Ägypter, 2007.
Gottes Eifer. Vom Kampf der drei Monotheismen, Frankfurt am Main (Insel), 2007.
Theorie der Nachkriegszeiten, (Suhrkamp), 2008.
Du mußt dein Leben ändern, Frankfurt am Main (Suhrkamp), 2009.
Philosophische Temperamente. Von Platon bis Foucault, München (Diederichs), 2009. ISBN 978-3-424-35016-6
Die nehmende Hand und die gebende Seite, (Suhrkamp), 2010.
Die schrecklichen Kinder der Neuzeit, (Suhrkamp), 2014.
Was geschah im 20. Jahrhundert? Unterwegs zu einer Kritik der extremistischen Vernunft, (Suhrkamp), 2016.
Das Schelling-Projekt. Ein Bericht. Suhrkamp, Berlin 2016, ISBN 978-3-518-42524-4.
Nach Gott: Glaubens- und Unglaubensversuche. Suhrkamp, Berlin 2017, ISBN 978-3-518-42632-6 / ISBN 3-518-42632-X.
Neue Zeilen und Tage. Notizen 2011–2013. Suhrkamp, Berlin 2018, ISBN 978-3-518-42844-3.
Polyloquien. Ein Brevier. Hrsg. v. Raimund Fellinger, Suhrkamp, Berlin 2018, ISBN 978-3-518-42775-0.
Den Himmel zum Sprechen bringen. Über Theopoesie. Suhrkamp, Berlin 2020, ISBN 978-3-518-42933-4.
Der Staat streift seine Samthandschuhe ab. Ausgewählte Gespräche und Beiträge 2020–2021. Suhrkamp, Berlin 2021, ISBN 978-3-518-47222-4.
Wer noch kein Grau gedacht hat. Eine Farbenlehre. Suhrkamp, Berlin 2022, ISBN 978-3-518-43068-2.
Die Reue des Prometheus. Von der Gabe des Feuers zur globalen Brandstiftung. Suhrkamp, Berlin 2023, ISBN 978-3-518-02985-5.
Zeilen und Tage III. Notizen 2013–2016. Suhrkamp, Berlin 2023, ISBN 978-3-518-43147-4.
Der Kontinent ohne Eigenschaften. Lesezeichen im Buch Europa. Suhrkamp, Berlin 2024, ISBN 978-3-518-43214-3.
External links

Media related to Peter Sloterdijk at Wikimedia Commons
Quotations related to Peter Sloterdijk at Wikiquote
Peter Sloterdijk website
Peter Sloterdijk at IMDb
The Grasping Hand, by Peter Sloterdijk, City Journal, Winter 2010
Stefan Lorenz Sorgner, "In Search of Lost Cheekiness, An Introduction to Peter Sloterdijk's Critique of Cynical Reason", Tabula Rasa, 20 (2003)
The Operable Man, a Sloterdijk essay on the Ethical State of Gene Technology
Review of Bubbles, Los Angeles Review of Books
Braşoveanu, Narcisa (April 2009). "The Narcissistic and the Cynical Attitudes – Two Identitary Masks: Gilles Lipovetsky, L'ère du vide. Essais sur l'individualisme contemporain and Peter Sloterdijk, Kritik der zynischen Vernunft", Europe's Times and Unknown Waters, Cluj-Napoca
Topics: Spheres (Feb/Mar 2005) (interview)
Barthélémy on Sloterdijk & Stiegler
Michel Weber, "The Art of Philosophy: Critical review", Cosmos and History: The Journal of Natural and Social Philosophy, vol. 10, no. 2, 2014, pp. 327–333.
Derek R. Ford, "The Pneumatic Common: Learning in, with, and from the air", Educational Philosophy and Theory, vol. 47, no. 13–14, pp. 1405–1418.
Coefficients of Lagrange polynomials - Mathematics Stack Exchange

Question (asked Sep 24, 2014 by Hippalectryon):

Let $n \in \mathbb{N}^*$, let $A = (a_1, \dots, a_n) \in K[X]^n$ be all different numbers and $B = (b_1, \dots, b_n) \in K[X]^n$ be all different numbers. Let $L_{A,B}$ be the polynomial of degree $n-1$ verifying $\forall i \in [\![1,n]\!],\ L_{A,B}(a_i) = b_i$. (Here $[\![1,n]\!] = \{1, 2, \dots, n\}$.) We know that this is a Lagrange interpolation polynomial and can be written
$$L_{A,B}(X) = \sum_{i=1}^{n} b_i \prod_{\substack{k=1 \\ k \neq i}}^{n} \frac{X - a_k}{a_i - a_k}.$$
However, that gives us a pretty 'abstract' definition of the polynomial. What is a good formula for the coefficient $C_k$ of $X^k$ in $L_{A,B}(X)$?

Tags: polynomials

Comment (Arthur, Sep 24, 2014): Solve the set of equations in the unknown coefficients given by all the $f(a_i) = b_i$. While theoretically possible, it is probably not worth it to write down general forms if $n$ is much bigger than $4$ or so. Also, you probably mean that $(a_1, \dots, a_n) \in K^n$, while $L_{A,B} \in K[X]$.

Comment (Hippalectryon, Sep 24, 2014): @Arthur I know the end result is likely to be slightly ugly, but I am still interested in knowing it.

2 Answers
Answer (guest, Jun 6, 2019; score 4):

You can get a closed-form expression for Lagrange coefficients if you use a different representation. "Beginner's guide to mapping simplexes affinely", section "Lagrange interpolation", describes a determinant form of the Lagrange polynomial that interpolates $(a_0; b_0)$, ..., $(a_n; b_n)$:
$$P(x) = -\,\frac{\det\begin{pmatrix} 0 & b_0 & b_1 & \cdots & b_n \\ x^n & a_0^n & a_1^n & \cdots & a_n^n \\ x^{n-1} & a_0^{n-1} & a_1^{n-1} & \cdots & a_n^{n-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & 1 & 1 & \cdots & 1 \end{pmatrix}}{\det\begin{pmatrix} a_0^n & a_1^n & \cdots & a_n^n \\ a_0^{n-1} & a_1^{n-1} & \cdots & a_n^{n-1} \\ \vdots & \vdots & & \vdots \\ 1 & 1 & \cdots & 1 \end{pmatrix}}.$$
Using Laplace expansion along the first column of the numerator you can get expressions for the coefficients of $x^i$. The result should look as follows:
$$c_i = (-1)^{n-i}\,\frac{\det\begin{pmatrix} b_0 & b_1 & \cdots & b_n \\ a_0^n & a_1^n & \cdots & a_n^n \\ \vdots & \vdots & & \vdots \\ a_0^{i+1} & a_1^{i+1} & \cdots & a_n^{i+1} \\ a_0^{i-1} & a_1^{i-1} & \cdots & a_n^{i-1} \\ \vdots & \vdots & & \vdots \\ 1 & 1 & \cdots & 1 \end{pmatrix}}{\det\begin{pmatrix} a_0^n & a_1^n & \cdots & a_n^n \\ a_0^{n-1} & a_1^{n-1} & \cdots & a_n^{n-1} \\ \vdots & \vdots & & \vdots \\ 1 & 1 & \cdots & 1 \end{pmatrix}},$$
where $c_i$ is the coefficient of $x^i$ in the polynomial. For a practical example you may want to check "Workbook on mapping simplexes affinely", section "Lagrange interpolation".

Answer (Lucas Morin, Sep 29, 2014; score 2):

Take a polynomial $P$ of degree $n$. It can be written
$$P(x) = c_n x^n + c_{n-1} x^{n-1} + \dots + c_0 \quad\text{or}\quad P(x) = c_n (x - r_1)(x - r_2)\cdots(x - r_n).$$
You can then define the elementary symmetric polynomials
$$\sigma_1(r_1, \dots, r_n) = \sum_{i=1}^{n} r_i = r_1 + \dots + r_n,$$
$$\sigma_2(r_1, \dots, r_n) = \sum_{1 \le i < j \le n} r_i r_j = r_1 r_2 + \dots + r_{n-1} r_n,$$
$$\sigma_k(r_1, \dots, r_n) = \sum_{1 \le i_1 < \dots < i_k \le n} r_{i_1} r_{i_2} \cdots r_{i_k},$$
$$\sigma_n(r_1, \dots, r_n) = r_1 r_2 \cdots r_n.$$
In other words, $\sigma_k$ is the sum of products of $k$ roots. Then you have the relationship
$$\sigma_k = (-1)^k \cdot \frac{c_{n-k}}{c_n}.$$
Keeping in mind that the Lagrange basis polynomials are of degree $n-1$, the coefficient of $X^k$ in
$$L_i = b_i \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{X - a_j}{a_i - a_j}$$
(writing $j$ for the product index to avoid clashing with the coefficient index $k$) is given by
$$c_{n-1} = b_i \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{1}{a_i - a_j},$$
$$c_k = c_{n-1}\,(-1)^{n-1-k}\,\sigma_{n-1-k}(a_1, \dots, a_{i-1}, a_{i+1}, \dots, a_n) = b_i \Bigl(\prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{1}{a_i - a_j}\Bigr)(-1)^{n-1-k}\,\sigma_{n-1-k}(a_1, \dots, a_{i-1}, a_{i+1}, \dots, a_n).$$
Then you can sum over $i$:
$$c_{n-1} = \sum_{i=1}^{n} b_i \prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{1}{a_i - a_j},$$
$$c_k = \sum_{i=1}^{n} b_i \Bigl(\prod_{\substack{j=1 \\ j \neq i}}^{n} \frac{1}{a_i - a_j}\Bigr)(-1)^{n-1-k}\,\sigma_{n-1-k}(a_1, \dots, a_{i-1}, a_{i+1}, \dots, a_n).$$
Note: I can't think of a situation where this would be handy.
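The following is a small numerical sketch (our own, not from the thread; it assumes NumPy is installed, and the function name is made up for illustration). It computes the coefficients by expanding the Lagrange form factor by factor, essentially what the second answer does term by term, and cross-checks the result against a direct Vandermonde solve, which is the linear system suggested in Arthur's comment.

import numpy as np

def lagrange_coefficients(a, b):
    """Coefficients c_0, ..., c_{n-1} (lowest degree first) of the degree-(n-1)
    interpolant with L(a_i) = b_i, obtained by expanding
    sum_i b_i * prod_{j != i} (X - a_j) / (a_i - a_j)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(a)
    coeffs = np.zeros(n)
    for i in range(n):
        # Expand prod_{j != i} (X - a_j) one linear factor at a time.
        poly = np.array([1.0])                          # the constant polynomial 1
        for j in range(n):
            if j != i:
                poly = np.convolve(poly, [1.0, -a[j]])  # multiply by (X - a_j)
        denom = np.prod(a[i] - np.delete(a, i))         # prod_{j != i} (a_i - a_j)
        coeffs += b[i] * poly[::-1] / denom             # store lowest degree first
    return coeffs

a = [0.0, 1.0, 2.0, 4.0]
b = [1.0, 3.0, 2.0, 5.0]
c = lagrange_coefficients(a, b)

# Cross-check: solving the Vandermonde system V c = b gives the same coefficients.
V = np.vander(a, increasing=True)
print(np.allclose(c, np.linalg.solve(V, b)))   # True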
Lecture 28, Tues May 2: Stabilizer Formalism

Today we'll see a beautiful formalism that was originally invented to describe quantum-error correcting codes, but now plays many different roles in quantum computation. First, some definitions:

Stabilizer Gates are the gates CNOT, Hadamard, and the phase gate
P = ( 1  0 )
    ( 0  i )
Stabilizer Circuits are quantum circuits made entirely of stabilizer gates.
Stabilizer States are states that a stabilizer circuit can generate, starting from |00…0⟩.

We briefly met stabilizer gates earlier in the course, when we discussed universal quantum gate sets, and needed to include a warning that the set S = {CNOT, Hadamard, P} is not universal. At first, this failure of universality might be surprising. After all, the set S seems to have everything: the Hadamard gate can create superpositions, CNOT acts on two qubits and (in concert with Hadamard) can create complicated entangled states, and P can even add complex phases. What's more, many of the weird quantum effects and protocols that we saw in this course can be demonstrated entirely using stabilizer gates. Examples include superdense coding, quantum teleportation, BB84 quantum key distribution, Wiesner's quantum money, the Deutsch-Jozsa and Bernstein-Vazirani algorithms, and the Shor 9-qubit code.

So then what prevents S from being universal? Well, if you try playing around with the CNOT, Hadamard, and Phase gates, you'll notice that you tend to reach certain discrete states, but never anything between them. You'll also notice that, whenever you can create an n-qubit superposition that assigns nonzero amplitudes to the strings in some set A ⊆ {0,1}^n, it's always an equal superposition over A (possibly with +1, -1, +i, -i phases), and furthermore A is always an affine subspace of F_2^n (so in particular, |A| is always a power of 2).

With only 1 qubit, the H and P gates can only get us to 6 states in total (ignoring global phases), as one can check by drawing the reachability diagram. These 6 states -- |0⟩, |1⟩, |+⟩, |-⟩, |i⟩, |-i⟩ -- are the 1-qubit stabilizer states.
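To make the six-state claim concrete, here is a minimal sketch (our own illustration, not from the lecture; it assumes NumPy, and the helper names are made up). It enumerates every 1-qubit state reachable from |0⟩ using only H and P, identifying states that differ by a global phase:

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
P = np.array([[1, 0], [0, 1j]])                # phase gate

def canonical(state):
    # Key for a 1-qubit state that ignores global phase: rotate the state so
    # its first nonzero amplitude is real and positive, then round.
    idx = int(np.argmax(np.abs(state) > 1e-9))
    phase = state[idx] / abs(state[idx])
    return tuple(np.round(state / phase, 6))

start = np.array([1, 0], dtype=complex)        # |0>
seen = {canonical(start)}
frontier = [start]
while frontier:
    new_states = []
    for s in frontier:
        for gate in (H, P):
            t = gate @ s
            key = canonical(t)
            if key not in seen:
                seen.add(key)
                new_states.append(t)
    frontier = new_states

print(len(seen))   # 6: the states |0>, |1>, |+>, |->, |i>, |-i>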
What about with two qubits? Now you can reach some more interesting states, like (|00⟩ + i|11⟩)/√2 or (|01⟩ - i|10⟩)/√2. But these always follow certain patterns, as mentioned above. For example, they're always equal superpositions over power-of-2 numbers of strings, and a measurement of a given qubit in the {|0⟩,|1⟩} basis always produces either (1) always |0⟩, (2) always |1⟩, or (3) |0⟩ and |1⟩ with equal probabilities.

So what gives? To answer that question, it will help to define a few concepts. We say that a unitary U stabilizes a pure state |Ψ⟩ if U|Ψ⟩ = |Ψ⟩; in other words, if |Ψ⟩ is an eigenstate of U with eigenvalue +1. Crucially, global phase matters here! If U|Ψ⟩ = -|Ψ⟩, then U does not stabilize |Ψ⟩. Notice that if U and V both stabilize |Ψ⟩, then any product of them, like UV or VU, also stabilizes |Ψ⟩, as do their inverses U⁻¹ and V⁻¹. Also, the identity matrix, I, stabilizes everything. This means that the set of unitaries that stabilize |Ψ⟩ forms a group under multiplication. (We already know that unitaries have inverses and that matrix multiplication is associative.)

The next ingredient we need is the Pauli matrices. These four matrices come up a lot in quantum physics. They are:

I = ( 1  0 )    X = ( 0  1 )    Y = ( 0  -i )    Z = ( 1   0 )
    ( 0  1 )        ( 1  0 )        ( i   0 )        ( 0  -1 )

Notice that these matrices match up with the errors we need to worry about in quantum error-correction:

No error:    I|1⟩ = |1⟩
Bit flip:    X|1⟩ = |0⟩
Phase flip:  Z|1⟩ = -|1⟩
Both:        Y|1⟩ = -i|0⟩

That's not a coincidence! The Pauli matrices satisfy several beautiful identities:

X² = Y² = Z² = I
XY = iZ    YX = -iZ
YZ = iX    ZY = -iX
ZX = iY    XZ = -iY

If you've seen the quaternions, you might recall that they're defined using the same kinds of relations. This is also not a coincidence! Nothing is a coincidence in math! Also, all four Pauli matrices are both unitary and Hermitian.

So what does each Pauli matrix stabilize?

I stabilizes everything    -I stabilizes nothing (remember: global phase matters, so -I|Ψ⟩ ≠ |Ψ⟩)
X stabilizes |+⟩           -X stabilizes |-⟩
Z stabilizes |0⟩           -Z stabilizes |1⟩
Y stabilizes |i⟩           -Y stabilizes |-i⟩

So each of the six 1-qubit stabilizer states corresponds to a Pauli matrix that stabilizes it.

Next, given an n-qubit pure state |Ψ⟩, we define |Ψ⟩'s stabilizer group as: the group of all tensor products of Pauli matrices that stabilize |Ψ⟩. We know this is a group, since tensor products of Pauli matrices (with a ±1 or ±i factor in front) are closed under multiplication, and so is stabilizing |Ψ⟩. As you can check, stabilizer groups have the additional interesting property of being abelian.

For example, the stabilizer group of |0⟩ is { I, Z } (closed because Z² = I), while the stabilizer group of |+⟩ is { I, X }. The stabilizer group of |0⟩⊗|+⟩ consists of all tensor products of an element of the first group with an element of the second: { I⊗I, I⊗X, Z⊗I, Z⊗X }. As a convention, from now on we omit the ⊗'s, so that for example the above is just { II, IX, ZI, ZX }.

For a slightly more interesting example, what's the stabilizer group of a Bell pair? We know XX is in it because

XX · (|00⟩ + |11⟩)/√2 = (X|0⟩⊗X|0⟩ + X|1⟩⊗X|1⟩)/√2 = (|11⟩ + |00⟩)/√2 = (|00⟩ + |11⟩)/√2.

A similar argument can be made for -YY. We can get another element by doing component-wise multiplication: XX · (-YY) = -(iZ)(iZ) = ZZ. So the stabilizer group of (|00⟩ + |11⟩)/√2 contains { II, XX, -YY, ZZ }, and you can check that it doesn't contain anything else. You can similarly compute the stabilizer group of (|00⟩ - |11⟩)/√2 to be { II, -XX, YY, ZZ }.
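Since the lecture says "you can check" that nothing else is in the Bell pair's stabilizer group, here is one way to do that check by brute force (our own sketch, assuming NumPy). It tries every signed two-qubit Pauli product and keeps the ones that fix (|00⟩ + |11⟩)/√2:

import numpy as np
from itertools import product

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
paulis = {'I': I, 'X': X, 'Y': Y, 'Z': Z}

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

stabilizers = []
for sign, p, q in product([+1, -1], 'IXYZ', 'IXYZ'):
    M = sign * np.kron(paulis[p], paulis[q])   # signed two-qubit Pauli product
    if np.allclose(M @ bell, bell):            # does it fix the Bell pair exactly?
        stabilizers.append(('+' if sign == 1 else '-') + p + q)

print(stabilizers)   # ['+II', '+XX', '+ZZ', '-YY']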
Now, here's an amazing fact, which we won't give a proof of: the n-qubit stabilizer states are exactly the n-qubit states that have a stabilizer group of size 2^n. So the 1-qubit stabilizer states are those states with a 2-element stabilizer group, the 2-qubit stabilizer states are those states with a 4-element stabilizer group, and so on. This is a completely different characterization of stabilizer states, a structural one. It makes no mention of stabilizer circuits, but tells us something about the invariant that stabilizer circuits are preserving.

OK, so suppose we have an n-qubit stabilizer state, which (by the above) has a 2^n-element stabilizer group G. Then here's the next thing we might want to know: how can we succinctly specify G? Does G always have a small generating set--that is, a few elements from which we can get all the others by multiplication? While we again won't prove it, the answer turns out to be yes. Given any n-qubit stabilizer state, its stabilizer group is always generated by only n elements (i.e., ± tensor products of Pauli matrices). So, to specify a stabilizer group (and hence, a stabilizer state), you only need to specify n such generators.

Let's see an example. To specify the Bell pair, which has stabilizer group { II, XX, -YY, ZZ }, it's enough to give the following generating set:

( X X )
( Z Z )

Or we could also give a different generating set, like

 ( X X )
-( Y Y )

Now we come to a crucial point: how many bits does it take to store such a generating set in your computer? Well, there are n generators, and each one takes 2n+1 bits to specify: 2 bits for each of the n Pauli matrices, plus 1 additional bit for the ± sign. So the total number of bits is n(2n+1) = 2n² + n = O(n²). Naïvely writing out the entire amplitude vector, or the entire stabilizer group, would have taken ~2^n bits, so we've gotten an exponential savings.

We're already starting to see the power of the stabilizer formalism. But that power turns out to go much further. Around 1998, Daniel Gottesman and Manny Knill proved the...

Gottesman-Knill Theorem: there's a polynomial-time classical algorithm to simulate any stabilizer circuit that acts on a stabilizer initial state like |00…0⟩.

Here, "simulate" means pretty much anything you could ask for: you can compute the probability of any possible sequence of measurement outcomes, or you can simulate the measurement outcomes if given access to a random bit source. A more negative interpretation is: stabilizer states and gates, by themselves, are useless for producing superpolynomial quantum speedups.

So, how does the classical simulation work? Just by keeping track, at each point in time, of a list of generators for the current state's stabilizer group! And updating the list whenever a CNOT, Hadamard, Phase, or measurement gate is applied.

Almost the only time that Professor Aaronson (being a theorist) ever wrote code that other people actually used was when he did a project in grad school for a Computer Architecture course. He wrote a fast simulator for stabilizer circuits called CHP, which could handle thousands of qubits on a normal laptop (limited only by the available RAM). He was only trying to pass the class, but the challenge of actually implementing the Gottesman-Knill algorithm in an optimized way led to the discovery of an even faster classical algorithm for simulating stabilizer circuits, so Aaronson ended up publishing a paper with Gottesman about this. Truth be told, this project had very little to do with Computer Architecture. He's still not sure why the professor accepted it.

So how does the Gottesman-Knill algorithm work? For simplicity, let's assume the initial state is |00…0⟩. Then the first step is to find a stabilizer representation (that is, a list of generators) for |00…0⟩. We know the stabilizer group contains II…I, but we won't put that into the generating set: it's implied. Since |0⟩ is a +1 eigenstate of Z, you can check that the following generating set works:

Z I I I … I
I Z I I … I
I I Z I … I
:
I I I I … Z

For purposes of the algorithm, it's useful to write these lists of generators in a slightly different way:

Tableau Representation. Here we'll keep track of two n×n matrices of 1's and 0's (as well as n signs). The two matrices can be combined entrywise to produce an {I,X,Y,Z} matrix like the one above.
We call them the X matrix and the Z matrix: an entry of the X matrix is 1 if the corresponding Pauli is X or Y (and 0 otherwise), and an entry of the Z matrix is 1 if the corresponding Pauli is Z or Y (and 0 otherwise). Each row represents one generator of the stabilizer group. For the state |0000⟩, the tableau is:

+ ( 0 0 0 0 | 1 0 0 0 )
+ ( 0 0 0 0 | 0 1 0 0 )
+ ( 0 0 0 0 | 0 0 1 0 )
+ ( 0 0 0 0 | 0 0 0 1 )

Thus, the first row of the above tableau represents the generator +ZIII, the second +IZII, the third +IIZI, and the fourth +IIIZ. So this is just another way to represent the generating set {ZIII, IZII, IIZI, IIIZ} for the state |0000⟩.

We're now going to provide rules for updating this tableau representation whenever a CNOT, Hadamard, or phase gate is applied. We won't prove that the rules are correct, but you should examine them one by one and see if you can convince yourself. We're also going to cheat a little. Keeping track of the +'s and -'s is tricky and not particularly illuminating, so we'll just ignore them. What do we lose by ignoring them? Well, whenever measuring a qubit has a definite outcome (either |0⟩ or |1⟩), we need the +'s and -'s to figure out which of the two it is. On the other hand, if we only want to know whether measuring a qubit will give a definite outcome or a random outcome (and not which definite outcome, in the former case), then we can ignore the signs.

So what are the rules? The gates available to us are CNOT, H, and P, so we need to figure out how to update the tableau for each.

● To apply H to the i-th qubit:
○ Swap the i-th column of the X matrix with the i-th column of the Z matrix.
This should be pretty intuitive: the whole point of the Hadamard gate is to "swap the X and Z bases."
● To apply P to the i-th qubit:
○ Bitwise XOR the i-th column of the X matrix into the i-th column of the Z matrix.
Note that P has no effect on the tableau representation of |00…0⟩. Coincidence? I think not.
● To apply CNOT from the i-th qubit to the j-th qubit:
○ Bitwise XOR the i-th column of the X matrix into the j-th column of the X matrix.
That seems reasonable enough, but... remember how a CNOT from i to j is equivalent, when viewed in the Hadamard basis, to a CNOT from j to i? That means we also have to...
○ Bitwise XOR the j-th column of the Z matrix into the i-th column of the Z matrix.

Finally, whenever the i-th qubit is measured in the {|0⟩,|1⟩} basis: the measurement will have a determinate outcome if and only if the i-th column of the X matrix is all 0's. (Can you figure out why?) There are also rules for updating the tableau in the case that the measurement outcome is not determinate, but we won't cover them here.

Here's another cool fact: in our state, the number of basis states that have nonzero amplitudes is just 2^k, where k is the rank of the X matrix (over F_2). In the above example, rank(X) = 0, corresponding to the fact that our "superposition" only contains a single basis state, namely |0000⟩.
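Here is a minimal sketch of this tableau bookkeeping in code (our own illustration, not Aaronson's CHP simulator; it assumes NumPy and ignores the sign bits exactly as the lecture does, so it can only report whether a measurement outcome is determinate, not which outcome it is):

import numpy as np

class SignlessTableau:
    """Stabilizer tableau without sign bits: X and Z are n-by-n 0/1 matrices.
    Row i encodes the i-th generator, column j encodes qubit j."""

    def __init__(self, n):
        # |00...0> is stabilized by Z on each qubit: X = 0, Z = identity.
        self.n = n
        self.X = np.zeros((n, n), dtype=np.uint8)
        self.Z = np.eye(n, dtype=np.uint8)

    def h(self, i):
        # Hadamard on qubit i: swap column i of X with column i of Z.
        self.X[:, i], self.Z[:, i] = self.Z[:, i].copy(), self.X[:, i].copy()

    def p(self, i):
        # Phase gate on qubit i: XOR column i of X into column i of Z.
        self.Z[:, i] ^= self.X[:, i]

    def cnot(self, i, j):
        # CNOT from qubit i to qubit j: XOR X column i into X column j,
        # and XOR Z column j into Z column i.
        self.X[:, j] ^= self.X[:, i]
        self.Z[:, i] ^= self.Z[:, j]

    def measurement_is_determinate(self, i):
        # A {|0>,|1>} measurement of qubit i has a definite outcome
        # iff column i of the X matrix is all zeros.
        return not self.X[:, i].any()

# H on qubit 0 (the lecture's first qubit), then CNOT(0 -> 1), then P on qubit 0.
t = SignlessTableau(2)
t.h(0); t.cnot(0, 1); t.p(0)
print(t.X, t.Z, sep="\n")                 # generators YX and ZZ, up to sign
print(t.measurement_is_determinate(0))    # False: measuring either qubit is random

The two-qubit example at the end of the sketch is the same circuit that we now trace through by hand.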
Let's test this out by keeping track of the tableau for a small circuit: a Hadamard on the first qubit, then a CNOT from the first qubit to the second, then a phase gate on the first qubit. We start with

( 0 0 | 1 0 )
( 0 0 | 0 1 )

After the Hadamard we get (swap 1st columns of X and Z):

( 1 0 | 0 0 )
( 0 0 | 0 1 )

You could convert this back into Pauli notation by saying that the current state is the one generated by +XI and +IZ. That makes sense, since those do indeed generate the stabilizer group for |0⟩⊗|+⟩.

After the CNOT (in X: XOR 1st column into 2nd; in Z: XOR 2nd column into 1st):

( 1 1 | 0 0 )
( 0 0 | 1 1 )

This is

( X X )
( Z Z )

the stabilizer generators for a Bell pair.

After the phase gate (XOR 1st column of X into 1st column of Z):

( 1 1 | 1 0 )
( 0 0 | 1 1 )

This corresponds to the state (|00⟩ + i|11⟩)/√2.

A stabilizer code is a quantum error-correcting code in which the encoding and decoding (at least if there are no errors!) can be done entirely by stabilizer circuits. In particular, this means that all the code states are stabilizer states. In quantum computing research, most of the error-correcting codes that have been seriously considered are stabilizer codes. The reason for this is similar to why linear codes play such a central role in classical error correction: namely, (1) it makes everything much easier to calculate and reason about, and (2) by insisting on it, we don't seem to give up any of the error-correcting properties we want. As a result, the stabilizer formalism is the lingua franca of quantum error-correction; it's completely indispensable there.

To take an example: with Shor's 9-qubit code, we were dealing with states of the form ((|000⟩ ± |111⟩)/√2)⊗3. We claim that a generating set for the above state's stabilizer group is as follows:

{ Z Z I I I I I I I,
  I Z Z I I I I I I,
  I I I Z Z I I I I,
  I I I I Z Z I I I,
  I I I I I I Z Z I,
  I I I I I I I Z Z,
  X X X X X X I I I,
  I I I X X X X X X,
  ± X X X X X X X X X }

The last line can have either a + or a -, encoding the logical |0⟩ or the logical |1⟩ respectively. Why are the above elements in the stabilizer group? Well, phase-flips applied to any pair of qubits in the same block cancel each other out. Bit-flips applied to whole blocks also take us back to where we started, except possibly for a global -1 phase, which is exactly what the ± sign on the last generator accounts for. You then just need to check that these 9 elements are independent of each other, meaning that there aren't any more to be found.

Now that we know the stabilizer formalism, we're finally ready to see an "optimal" (5-qubit) code for detecting and correcting an error in any one qubit. The codeword states would be a mess if we wrote them out explicitly--superpositions over 32 different 5-bit strings! But everything is much more compact if we use the stabilizer formalism. Here's the code:

{ XZZXI, IXZZX, XIXZZ, ZXIXZ, ± XXXXX }

Once again, the sign on the last generator is + if we want the logical |0⟩ state, or - if we want the logical |1⟩ state. One can check (we won't prove it here) that this code can indeed correct either a bit-flip or a phase-flip error on any one of the five qubits.

To conclude this lecture, let's say a tiny bit about doing actual quantum computations on qubits that are encoded using stabilizer codes. Thus, suppose we have n logical qubits, each encoded with a stabilizer code, and we want to apply a gate to one or two of the logical qubits. The "obvious" way to do this would be:
1. Decode the qubits.
2. Apply the desired gate to the "bare," unencoded qubits.
3. Re-encode the result.
But doing all that is expensive, and creates lots of new opportunities for error! (E.g., while the qubits are unencoded, there's "nothing to protect them" from decoherence.) So it would be awesome if we had a code where applying gates to encoded qubits was hardly more complicated than applying them to unencoded qubits. This motivates the following definition:

The gate G is transversal for the code C if, in order to apply G to qubits encoded using C, all you need to do is:
● Apply G to the first qubits of the codewords
● Apply G to the second qubits of the codewords
● etc.
So for example, the Hadamard gate is transversal if you can Hadamard a logical qubit by just separately Hadamarding each physical qubit in the codeword. You should check that the Hadamard gate is transversal for Shor's 9-qubit code.

It turns out that there are quantum error-correcting codes for which the CNOT, Hadamard, and Phase gates are all transversal. Thus, if you use one of these codes, then applying any stabilizer circuit to the encoded qubits is extremely cheap and easy. Unfortunately, we already saw that the stabilizer gates are non-universal---and there's a theorem that says that non-stabilizer gates can't all be transversal. This means that, if we want a universal quantum computer, we're going to need non-stabilizer gates like Toffoli or R_π/8 that can't be implemented transversally, but only via sequences of gates that are much more expensive. So the quantum computer engineers tend to adopt a worldview wherein stabilizer gates are "free"---they're so cheap to implement that you might as well not even count them---and the "complexity" of a quantum circuit equals the number of non-stabilizer gates. The non-stabilizer gates are so much more expensive that they completely dominate the running time.

In practice, a lot of quantum computer engineering has boiled down to designing improved methods for getting non-stabilizer gates into a circuit. There are various tricks, a famous example being Magic State Distillation. The idea there is that, if you can just produce certain non-stabilizer states like cos(π/8)|0⟩ + sin(π/8)|1⟩ -- those are the "magic states" -- then applying stabilizer operations to those states, together with measurements (and adapting based on the outcome of the measurements), is enough to simulate the effect of non-stabilizer gates. In other words, with help from magic states, stabilizer operations can break out of the Gottesman-Knill prison and get all the way up to universal quantum computation. On the other hand, actually realizing this idea seems to require building a quantum computer where the overwhelming majority of the work would happen in "magic state factories," with the actual quantum computation on the magic states almost an afterthought.

There's a different way to understand the importance of non-stabilizer states for quantum computation. The paper by Aaronson and Gottesman from 2004, mentioned earlier, also proved the following result: suppose we have a quantum circuit on n qubits, which contains mostly stabilizer gates---say, n^O(1) of them---but also a small number T of non-stabilizer gates. Then there's a classical algorithm to simulate the circuit in time that's polynomial in n and exponential in T. This tells us that, if we want an exponential quantum speedup, then not only do we need non-stabilizer gates in our circuit, we need more than a logarithmic number of them: otherwise the simulation above would still run in polynomial time.
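To close the loop on the codes above, here is a small consistency check (our own sketch in plain Python, with no external dependencies): the generators of any stabilizer code must pairwise commute, and two Pauli strings commute exactly when they anticommute on an even number of positions, so the check needs no matrices at all.

from itertools import combinations

def commutes(p, q):
    """Two n-qubit Pauli strings commute iff the number of positions where
    they differ and neither is the identity is even."""
    anticommuting_sites = sum(
        1 for a, b in zip(p, q) if a != 'I' and b != 'I' and a != b
    )
    return anticommuting_sites % 2 == 0

five_qubit_code = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ", "XXXXX"]
shor_code = ["ZZIIIIIII", "IZZIIIIII", "IIIZZIIII", "IIIIZZIII",
             "IIIIIIZZI", "IIIIIIIZZ", "XXXXXXIII", "IIIXXXXXX", "XXXXXXXXX"]

for name, gens in [("5-qubit code", five_qubit_code), ("Shor 9-qubit code", shor_code)]:
    ok = all(commutes(p, q) for p, q in combinations(gens, 2))
    print(name, "generators pairwise commute:", ok)   # True for both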
math — Mathematical functions (Python 3.13 documentation)

This module provides access to common mathematical functions and constants, including those defined by the C standard.

These functions cannot be used with complex numbers; use the functions of the same name from the cmath module if you require support for complex numbers. The distinction between functions which support complex numbers and those which don't is made since most users do not want to learn quite as much mathematics as required to understand complex numbers. Receiving an exception instead of a complex result allows earlier detection of the unexpected complex number used as a parameter, so that the programmer can determine how and why it was generated in the first place.

The following functions are provided by this module. Except when explicitly noted otherwise, all return values are floats.

Number-theoretic functions
comb(n, k): Number of ways to choose k items from n items without repetition and without order
factorial(n): n factorial
gcd(*integers): Greatest common divisor of the integer arguments
isqrt(n): Integer square root of a nonnegative integer n
lcm(*integers): Least common multiple of the integer arguments
perm(n, k): Number of ways to choose k items from n items without repetition and with order

Floating point arithmetic
ceil(x): Ceiling of x, the smallest integer greater than or equal to x
fabs(x): Absolute value of x
floor(x): Floor of x, the largest integer less than or equal to x
fma(x, y, z): Fused multiply-add operation: (x * y) + z
fmod(x, y): Remainder of division x / y
modf(x): Fractional and integer parts of x
remainder(x, y): Remainder of x with respect to y
trunc(x): Integer part of x

Floating point manipulation functions
copysign(x, y): Magnitude (absolute value) of x with the sign of y
frexp(x): Mantissa and exponent of x
isclose(a, b, rel_tol, abs_tol): Check if the values a and b are close to each other
isfinite(x): Check if x is neither an infinity nor a NaN
isinf(x): Check if x is a positive or negative infinity
isnan(x): Check if x is a NaN (not a number)
ldexp(x, i): x * (2**i), inverse of function frexp()
nextafter(x, y, steps): Floating-point value steps steps after x towards y
ulp(x): Value of the least significant bit of x

Power, exponential and logarithmic functions
cbrt(x): Cube root of x
exp(x): e raised to the power x
exp2(x): 2 raised to the power x
expm1(x): e raised to the power x, minus 1
log(x, base): Logarithm of x to the given base (e by default)
log1p(x): Natural logarithm of 1+x (base e)
log2(x): Base-2 logarithm of x
log10(x): Base-10 logarithm of x
pow(x, y): x raised to the power y
sqrt(x): Square root of x

Summation and product functions
dist(p, q): Euclidean distance between two points p and q given as an iterable of coordinates
fsum(iterable): Sum of values in the input iterable
hypot(*coordinates): Euclidean norm of an iterable of coordinates
prod(iterable, start): Product of elements in the input iterable with a start value
sumprod(p, q): Sum of products from two iterables p and q

Angular conversion

degrees(x): Convert angle x from radians to degrees
radians(x): Convert angle x from degrees to radians

Trigonometric functions

acos(x): Arc cosine of x
asin(x): Arc sine of x
atan(x): Arc tangent of x
atan2(y, x): atan(y / x)
cos(x): Cosine of x
sin(x): Sine of x
tan(x): Tangent of x

Hyperbolic functions

acosh(x): Inverse hyperbolic cosine of x
asinh(x): Inverse hyperbolic sine of x
atanh(x): Inverse hyperbolic tangent of x
cosh(x): Hyperbolic cosine of x
sinh(x): Hyperbolic sine of x
tanh(x): Hyperbolic tangent of x

Special functions

erf(x): Error function at x
erfc(x): Complementary error function at x
gamma(x): Gamma function at x
lgamma(x): Natural logarithm of the absolute value of the Gamma function at x

Constants

pi: π = 3.141592…
e: e = 2.718281…
tau: τ = 2π = 6.283185…
inf: Positive infinity
nan: “Not a number” (NaN)

Number-theoretic functions¶

math.comb(n, k)¶ Return the number of ways to choose k items from n items without repetition and without order. Evaluates to n! / (k! * (n - k)!) when k <= n and evaluates to zero when k > n. Also called the binomial coefficient because it is equivalent to the coefficient of the k-th term in the polynomial expansion of (1 + x)ⁿ. Raises TypeError if either of the arguments is not an integer. Raises ValueError if either of the arguments is negative. Added in version 3.8.

math.factorial(n)¶ Return the factorial of the nonnegative integer n. Changed in version 3.10: Floats with integral values (like 5.0) are no longer accepted.

math.gcd(*integers)¶ Return the greatest common divisor of the specified integer arguments. If any of the arguments is nonzero, then the returned value is the largest positive integer that is a divisor of all arguments. If all arguments are zero, then the returned value is 0. gcd() without arguments returns 0. Added in version 3.5. Changed in version 3.9: Added support for an arbitrary number of arguments. Formerly, only two arguments were supported.

math.isqrt(n)¶ Return the integer square root of the nonnegative integer n. This is the floor of the exact square root of n, or equivalently the greatest integer a such that a² ≤ n. For some applications, it may be more convenient to have the least integer a such that n ≤ a², or in other words the ceiling of the exact square root of n. For positive n, this can be computed using a = 1 + isqrt(n - 1). Added in version 3.8.

math.lcm(*integers)¶ Return the least common multiple of the specified integer arguments. If all arguments are nonzero, then the returned value is the smallest positive integer that is a multiple of all arguments. If any of the arguments is zero, then the returned value is 0. lcm() without arguments returns 1. Added in version 3.9.

math.perm(n, k=None)¶ Return the number of ways to choose k items from n items without repetition and with order. Evaluates to n! / (n - k)! when k <= n and evaluates to zero when k > n. If k is not specified or is None, then k defaults to n and the function returns n!. Raises TypeError if either of the arguments is not an integer. Raises ValueError if either of the arguments is negative. Added in version 3.8.

Floating point arithmetic¶

math.ceil(x)¶ Return the ceiling of x, the smallest integer greater than or equal to x. If x is not a float, delegates to x.__ceil__, which should return an Integral value.

math.fabs(x)¶ Return the absolute value of x.
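The floating-point helpers continue below; first, to make the number-theoretic functions above concrete, here is a short illustrative session (the inputs are arbitrary examples of mine, not taken from the documentation):

>>> import math
>>> math.comb(5, 2)        # 5! / (2! * 3!) ways to pick 2 of 5, order ignored
10
>>> math.perm(5, 2)        # 5! / 3! ordered arrangements of 2 of 5
20
>>> math.gcd(12, 18, 30)   # arbitrary number of arguments since 3.9
6
>>> math.lcm(4, 6)
12
>>> math.isqrt(17)         # floor of the exact square root
4
>>> 1 + math.isqrt(17 - 1) # ceiling of the exact square root, as described above
5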
math.floor(x)¶ Return the floor of x, the largest integer less than or equal to x. If x is not a float, delegates to x.__floor__, which should return an Integral value.

math.fma(x, y, z)¶ Fused multiply-add operation. Return (x * y) + z, computed as though with infinite precision and range followed by a single round to the float format. This operation often provides better accuracy than the direct expression (x * y) + z. This function follows the specification of the fusedMultiplyAdd operation described in the IEEE 754 standard. The standard leaves one case implementation-defined, namely the result of fma(0, inf, nan) and fma(inf, 0, nan). In these cases, math.fma returns a NaN, and does not raise any exception. Added in version 3.13.

math.fmod(x, y)¶ Return the floating-point remainder of x / y, as defined by the platform C library function fmod(x, y). Note that the Python expression x % y may not return the same result. The intent of the C standard is that fmod(x, y) be exactly (mathematically; to infinite precision) equal to x - n*y for some integer n such that the result has the same sign as x and magnitude less than abs(y). Python’s x % y returns a result with the sign of y instead, and may not be exactly computable for float arguments. For example, fmod(-1e-100, 1e100) is -1e-100, but the result of Python’s -1e-100 % 1e100 is 1e100-1e-100, which cannot be represented exactly as a float, and rounds to the surprising 1e100. For this reason, function fmod() is generally preferred when working with floats, while Python’s x % y is preferred when working with integers.

math.modf(x)¶ Return the fractional and integer parts of x. Both results carry the sign of x and are floats. Note that modf() has a different call/return pattern than its C equivalents: it takes a single argument and returns a pair of values, rather than returning its second return value through an ‘output parameter’ (there is no such thing in Python).

math.remainder(x, y)¶ Return the IEEE 754-style remainder of x with respect to y. For finite x and finite nonzero y, this is the difference x - n*y, where n is the closest integer to the exact value of the quotient x / y. If x / y is exactly halfway between two consecutive integers, the nearest even integer is used for n. The remainder r = remainder(x, y) thus always satisfies abs(r) <= 0.5 * abs(y). Special cases follow IEEE 754: in particular, remainder(x, math.inf) is x for any finite x, and remainder(x, 0) and remainder(math.inf, x) raise ValueError for any non-NaN x. If the result of the remainder operation is zero, that zero will have the same sign as x. On platforms using IEEE 754 binary floating point, the result of this operation is always exactly representable: no rounding error is introduced. Added in version 3.7.

math.trunc(x)¶ Return x with the fractional part removed, leaving the integer part. This rounds toward 0: trunc() is equivalent to floor() for positive x, and equivalent to ceil() for negative x. If x is not a float, delegates to x.__trunc__, which should return an Integral value.

For the ceil(), floor(), and modf() functions, note that all floating-point numbers of sufficiently large magnitude are exact integers. Python floats typically carry no more than 53 bits of precision (the same as the platform C double type), in which case any float x with abs(x) >= 2**52 necessarily has no fractional bits.

Floating point manipulation functions¶

math.copysign(x, y)¶ Return a float with the magnitude (absolute value) of x but the sign of y.
On platforms that support signed zeros, copysign(1.0, -0.0) returns -1.0.

math.frexp(x)¶ Return the mantissa and exponent of x as the pair (m, e). m is a float and e is an integer such that x == m * 2**e exactly. If x is zero, returns (0.0, 0), otherwise 0.5 <= abs(m) < 1. This is used to “pick apart” the internal representation of a float in a portable way. Note that frexp() has a different call/return pattern than its C equivalents: it takes a single argument and returns a pair of values, rather than returning its second return value through an ‘output parameter’ (there is no such thing in Python).

math.isclose(a, b, *, rel_tol=1e-09, abs_tol=0.0)¶ Return True if the values a and b are close to each other and False otherwise. Whether or not two values are considered close is determined according to given absolute and relative tolerances. If no errors occur, the result will be: abs(a-b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol). rel_tol is the relative tolerance – it is the maximum allowed difference between a and b, relative to the larger absolute value of a or b. For example, to set a tolerance of 5%, pass rel_tol=0.05. The default tolerance is 1e-09, which assures that the two values are the same within about 9 decimal digits. rel_tol must be nonnegative and less than 1.0. abs_tol is the absolute tolerance; it defaults to 0.0 and it must be nonnegative. When comparing x to 0.0, isclose(x, 0) is computed as abs(x) <= rel_tol * abs(x), which is False for any nonzero x and rel_tol less than 1.0. So add an appropriate positive abs_tol argument to the call. The IEEE 754 special values of NaN, inf, and -inf will be handled according to IEEE rules. Specifically, NaN is not considered close to any other value, including NaN. inf and -inf are only considered close to themselves. Added in version 3.5. See also PEP 485 – A function for testing approximate equality.

math.isfinite(x)¶ Return True if x is neither an infinity nor a NaN, and False otherwise. (Note that 0.0 is considered finite.) Added in version 3.2.

math.isinf(x)¶ Return True if x is a positive or negative infinity, and False otherwise.

math.isnan(x)¶ Return True if x is a NaN (not a number), and False otherwise.

math.ldexp(x, i)¶ Return x * (2**i). This is essentially the inverse of function frexp().

math.nextafter(x, y, steps=1)¶ Return the floating-point value steps steps after x towards y. If x is equal to y, return y, unless steps is zero. Examples: math.nextafter(x, math.inf) goes up: towards positive infinity. math.nextafter(x, -math.inf) goes down: towards minus infinity. math.nextafter(x, 0.0) goes towards zero. math.nextafter(x, math.copysign(math.inf, x)) goes away from zero. See also math.ulp(). Added in version 3.9. Changed in version 3.12: Added the steps argument.

math.ulp(x)¶ Return the value of the least significant bit of the float x: If x is a NaN (not a number), return x. If x is negative, return ulp(-x). If x is a positive infinity, return x. If x is equal to zero, return the smallest positive denormalized representable float (smaller than the minimum positive normalized float, sys.float_info.min). If x is equal to the largest positive representable float, return the value of the least significant bit of x, such that the first float smaller than x is x - ulp(x). Otherwise (x is a positive finite number), return the value of the least significant bit of x, such that the first float bigger than x is x + ulp(x). ULP stands for “Unit in the Last Place”. See also math.nextafter() and sys.float_info.epsilon. Added in version 3.9.
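To illustrate the floating-point manipulation functions above, here is a short session with made-up values (my examples, not from the documentation):

>>> import math
>>> math.isclose(1.0, 1.0 + 1e-10)            # within the default rel_tol of 1e-09
True
>>> math.isclose(1e-10, 0.0)                  # comparing against 0.0 needs abs_tol
False
>>> math.isclose(1e-10, 0.0, abs_tol=1e-09)
True
>>> math.nextafter(1.0, math.inf) == 1.0 + math.ulp(1.0)   # next float above 1.0
True
>>> math.copysign(3.0, -0.0)
-3.0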
Power, exponential and logarithmic functions¶

math.cbrt(x)¶ Return the cube root of x. Added in version 3.11.

math.exp(x)¶ Return e raised to the power x, where e = 2.718281… is the base of natural logarithms. This is usually more accurate than math.e ** x or pow(math.e, x).

math.exp2(x)¶ Return 2 raised to the power x. Added in version 3.11.

math.expm1(x)¶ Return e raised to the power x, minus 1. Here e is the base of natural logarithms. For small floats x, the subtraction in exp(x) - 1 can result in a significant loss of precision; the expm1() function provides a way to compute this quantity to full precision:

>>> from math import exp, expm1
>>> exp(1e-5) - 1  # gives result accurate to 11 places
1.0000050000069649e-05
>>> expm1(1e-5)    # result accurate to full precision
1.0000050000166668e-05

Added in version 3.2.

math.log(x[, base])¶ With one argument, return the natural logarithm of x (to base e). With two arguments, return the logarithm of x to the given base, calculated as log(x)/log(base).

math.log1p(x)¶ Return the natural logarithm of 1+x (base e). The result is calculated in a way which is accurate for x near zero.

math.log2(x)¶ Return the base-2 logarithm of x. This is usually more accurate than log(x, 2). Added in version 3.3. See also: int.bit_length() returns the number of bits necessary to represent an integer in binary, excluding the sign and leading zeros.

math.log10(x)¶ Return the base-10 logarithm of x. This is usually more accurate than log(x, 10).

math.pow(x, y)¶ Return x raised to the power y. Exceptional cases follow the IEEE 754 standard as far as possible. In particular, pow(1.0, x) and pow(x, 0.0) always return 1.0, even when x is a zero or a NaN. If both x and y are finite, x is negative, and y is not an integer then pow(x, y) is undefined, and raises ValueError. Unlike the built-in ** operator, math.pow() converts both its arguments to type float. Use ** or the built-in pow() function for computing exact integer powers. Changed in version 3.11: The special cases pow(0.0, -inf) and pow(-0.0, -inf) were changed to return inf instead of raising ValueError, for consistency with IEEE 754.

math.sqrt(x)¶ Return the square root of x.

Summation and product functions¶

math.dist(p, q)¶ Return the Euclidean distance between two points p and q, each given as a sequence (or iterable) of coordinates. The two points must have the same dimension. Roughly equivalent to:

sqrt(sum((px - qx) ** 2.0 for px, qx in zip(p, q)))

Added in version 3.8.

math.fsum(iterable)¶ Return an accurate floating-point sum of values in the iterable. Avoids loss of precision by tracking multiple intermediate partial sums. The algorithm’s accuracy depends on IEEE-754 arithmetic guarantees and the typical case where the rounding mode is half-even. On some non-Windows builds, the underlying C library uses extended precision addition and may occasionally double-round an intermediate sum causing it to be off in its least significant bit. For further discussion and two alternative approaches, see the ASPN cookbook recipes for accurate floating-point summation.

math.hypot(*coordinates)¶ Return the Euclidean norm, sqrt(sum(x**2 for x in coordinates)). This is the length of the vector from the origin to the point given by the coordinates. For a two dimensional point (x, y), this is equivalent to computing the hypotenuse of a right triangle using the Pythagorean theorem, sqrt(x*x + y*y). Changed in version 3.8: Added support for n-dimensional points. Formerly, only the two dimensional case was supported.
Changed in version 3.10: Improved the algorithm’s accuracy so that the maximum error is under 1 ulp (unit in the last place). More typically, the result is almost always correctly rounded to within 1/2 ulp.

math.prod(iterable, *, start=1)¶ Calculate the product of all the elements in the input iterable. The default start value for the product is 1. When the iterable is empty, return the start value. This function is intended specifically for use with numeric values and may reject non-numeric types. Added in version 3.8.

math.sumprod(p, q)¶ Return the sum of products of values from two iterables p and q. Raises ValueError if the inputs do not have the same length. Roughly equivalent to:

sum(itertools.starmap(operator.mul, zip(p, q, strict=True)))

For float and mixed int/float inputs, the intermediate products and sums are computed with extended precision. Added in version 3.12.

Angular conversion¶

math.degrees(x)¶ Convert angle x from radians to degrees.

math.radians(x)¶ Convert angle x from degrees to radians.

Trigonometric functions¶

math.acos(x)¶ Return the arc cosine of x, in radians. The result is between 0 and pi.

math.asin(x)¶ Return the arc sine of x, in radians. The result is between -pi/2 and pi/2.

math.atan(x)¶ Return the arc tangent of x, in radians. The result is between -pi/2 and pi/2.

math.atan2(y, x)¶ Return atan(y / x), in radians. The result is between -pi and pi. The vector in the plane from the origin to point (x, y) makes this angle with the positive X axis. The point of atan2() is that the signs of both inputs are known to it, so it can compute the correct quadrant for the angle. For example, atan(1) and atan2(1, 1) are both pi/4, but atan2(-1, -1) is -3pi/4.

math.cos(x)¶ Return the cosine of x radians.

math.sin(x)¶ Return the sine of x radians.

math.tan(x)¶ Return the tangent of x radians.

Hyperbolic functions¶

Hyperbolic functions are analogs of trigonometric functions that are based on hyperbolas instead of circles.

math.acosh(x)¶ Return the inverse hyperbolic cosine of x.

math.asinh(x)¶ Return the inverse hyperbolic sine of x.

math.atanh(x)¶ Return the inverse hyperbolic tangent of x.

math.cosh(x)¶ Return the hyperbolic cosine of x.

math.sinh(x)¶ Return the hyperbolic sine of x.

math.tanh(x)¶ Return the hyperbolic tangent of x.

Special functions¶

math.erf(x)¶ Return the error function at x. The erf() function can be used to compute traditional statistical functions such as the cumulative standard normal distribution:

def phi(x):
    'Cumulative distribution function for the standard normal distribution'
    return (1.0 + erf(x / sqrt(2.0))) / 2.0

Added in version 3.2.

math.erfc(x)¶ Return the complementary error function at x. The complementary error function is defined as 1.0 - erf(x). It is used for large values of x where a subtraction from one would cause a loss of significance. Added in version 3.2.

math.gamma(x)¶ Return the Gamma function at x. Added in version 3.2.

math.lgamma(x)¶ Return the natural logarithm of the absolute value of the Gamma function at x. Added in version 3.2.

Constants¶

math.pi¶ The mathematical constant π = 3.141592…, to available precision.

math.e¶ The mathematical constant e = 2.718281…, to available precision.

math.tau¶ The mathematical constant τ = 6.283185…, to available precision. Tau is a circle constant equal to 2π, the ratio of a circle’s circumference to its radius. To learn more about Tau, check out Vi Hart’s video Pi is (still) Wrong, and start celebrating Tau day by eating twice as much pie! Added in version 3.6.
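As a brief illustration of the trigonometric functions, angular conversions, and constants described above (my own examples, not part of the documentation):

>>> import math
>>> math.cos(math.pi)          # exactly representable result
-1.0
>>> math.sin(math.pi)          # tiny residue, because math.pi is itself rounded
1.2246467991473532e-16
>>> math.isclose(math.radians(180.0), math.pi)
True
>>> math.isclose(math.atan2(-1.0, -1.0), -3 * math.pi / 4)   # third quadrant
True
>>> math.isclose(math.tau, 2 * math.pi)
True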
math.inf¶ A floating-point positive infinity. (For negative infinity, use -math.inf.) Equivalent to the output of float('inf'). Added in version 3.5.

math.nan¶ A floating-point “not a number” (NaN) value. Equivalent to the output of float('nan'). Due to the requirements of the IEEE-754 standard, math.nan and float('nan') are not considered equal to any other numeric value, including themselves. To check whether a number is a NaN, use the isnan() function to test for NaNs instead of is or ==. Example:

>>> import math
>>> math.nan == math.nan
False
>>> float('nan') == float('nan')
False
>>> math.isnan(math.nan)
True
>>> math.isnan(float('nan'))
True

Added in version 3.5. Changed in version 3.11: It is now always available.

CPython implementation detail: The math module consists mostly of thin wrappers around the platform C math library functions. Behavior in exceptional cases follows Annex F of the C99 standard where appropriate. The current implementation will raise ValueError for invalid operations like sqrt(-1.0) or log(0.0) (where C99 Annex F recommends signaling invalid operation or divide-by-zero), and OverflowError for results that overflow (for example, exp(1000.0)). A NaN will not be returned from any of the functions above unless one or more of the input arguments was a NaN; in that case, most functions will return a NaN, but (again following C99 Annex F) there are some exceptions to this rule, for example pow(float('nan'), 0.0) or hypot(float('nan'), float('inf')). Note that Python makes no effort to distinguish signaling NaNs from quiet NaNs, and behavior for signaling NaNs remains unspecified. Typical behavior is to treat all NaNs as though they were quiet.

See also Module cmath: Complex number versions of many of these functions.
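To illustrate the implementation note above (math raises ValueError for invalid operations and OverflowError for overflow, while cmath handles complex results), here is a short illustrative session of my own:

>>> import math, cmath
>>> math.sqrt(-1.0)
Traceback (most recent call last):
  ...
ValueError: math domain error
>>> cmath.sqrt(-1.0)
1j
>>> math.exp(1000.0)
Traceback (most recent call last):
  ...
OverflowError: math range error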
CS 170 Efficient Algorithms and Intractable Problems Nika Haghtalab and John Wright EECS, UC Berkeley Lecture 1: Logistics, Introduction, Arithmetic (Some slides and material are inspired by courses taught by Nelson, Raghavendra, Tal, Vazirani, Haghtalab, Wright (UC Berkeley) and Wootters (Stanford) Today’s Plan Introductions • Who are we? • Who are you? • Why are we here? Course Overview • Course Goals and overview • Logistics Arithmetic! • Can we add and multiply? • Can we do them fast? • Applied Math • Architecture • Bioengineering • Business Administration • Chemical Biology • Computer Science • Data Science • EECS • Economics • Environmental Sciences • IEOR • Cognitive Science • Genetics & Plant Biology • Mathematics • Mechanical Engineering • Music • Philosophy • Physics • Statistics • … Studying Mostly junior (~35%), senior (~26%), and sophomore (~23%) Who are you? Who are we? Instructors Prof. John Wright Prof. Nika Haghtalab Jonny Carolyn David W Eric Meghal Andrew Xavier Bill Diana Jessica L Ryan Aaryan Ajay Alex Anushka David Y Divya George Jeffery Jessica L Richik Shu Thomas Will Y Yamuna Why are we all here? You need an upper div credit ... Algorithms are fundamental and useful. Algorithms are fun! Course Goals Design and analyze algorithms In this course you will learn: • Design: Acquire an algorithmic toolkit • Analysis: Learn to think analytically about algorithms • Understand limitations: Understand algorithmic limitations • Communication: Learn to formalize your thoughts and communicate clearly about algorithms Fundamental Questions about Algorithms Does it work? Is it fast? Can we do better? Bigger Picture Precise definitions Rigorous Proofs Corner cases Very detailed Big picture Intuitive understanding Broader connections Sometimes handwavy Detail-oriented Course Logistics Course website: • • Hosts lecture slides and notes, class calendar, assigned reading from textbook. Lectures • No livestream! COME TO LECTURES! • Video recordings: available on bCourses • Textbook readings linked on course website Homework • Weekly HWs (released on Sundays, due Saturdays) • HW Parties on Fridays Course Logistics Discussion Sections • Schedule TBD: Check the “Discussions” tab under the website • Discussions don’t replace lectures: We assume you have already attended the lecture and reviewed the material before coming to the discussion. • LOST Section: There will be a section with slower pace, more interactions, reinforces concepts Contact us or each others • Ed: Announcements and forum • Email: [email protected] for admins and logistics. More Course Logistics Office hours (See the schedule under the calendar tab) • Nika’s OH: After lecture on Tuesdays or by appointment → meet at the class entrance • John’s OH: TBD Exams: 2 midterms and 1 final. No alternate exams offered. Midterm 1 on Feb 25, Midterm 2 on April 3. Both 7pm-9pm Final: May 12, 11:30am-2:30pm Other resources and forms Course Policies: • Course policies and etiquettes will be listed on the website. • Academic Honesty code strictly enforced, ... • Read them and adhere to them. Feedback! • Help us improve the class! • Send us suggestions on Ed or in person. • We will set up a midsemester anonymous feedback form. A good way to learn in this course Lectures: • GO TO LECTURES! Ask questions and remain engaged in class. • Attendance is not mandatory, but highly encouraged. Help us record your attendance! • Review the slides and questions after the lecture, before attempting the homework. 
Assigned reading: • Read before or soon after class. Don’t leave it until exam time. Discussion section: • Attempt the discussion problems before the session. • GO TO SECTIONS!

Algorithms! “Algorithms”: Muḥammad ibn Mūsā al-Khwārizmī, or al-Khwarizmi, was a Persian polymath from Khwarazm (today’s Uzbekistan and Turkmenistan). He was a scholar in Baghdad who contributed to mathematics, astronomy, and geography. In Latin, al-Khwarizmi’s name gave rise to “algorithm”. His books spread the Hindu-Arabic numeral system to Europe.

Hindu-Arabic Numeral System: Roman numerals are not in any natural base, which makes arithmetic hard. XXIX × XXXVII = ? (How big is a 29 ft × 37 ft plot of land?)

Let’s go back to elementary school. How do we add integers? How fast is the grade-school addition algorithm? More formally: how many one-digit operations does it take? Adding 12345 + 78910 takes about n one-digit operations for n-digit numbers (e.g., 1234567891010987654321 + 1098765432112345678910). Well … there are also at most n carries, but that still makes it something like 2n, maybe 3n ….

Big-Oh Notation: Recall O(·) notation from 61B (Lecture 13)! • Ignore constants and focus on the largest dependence on n. We say that addition of two numbers with n digits “runs in time O(n)”. Still don’t remember O(·) notation well? • We’ll dig deeper, more formally, next time. Also GO TO SECTIONS!

What about multiplication? How fast is grade-school integer multiplication (e.g., 1234567891010987654321 × 1098765432112345678910)? It runs in time O(n²)! Well … there are at most n² one-digit multiplications, at most n² carries to be added, and then we have to add n numbers, each with at most 2n digits ….

Can we do better? Easier question: can we do better than O(n)? • No! It takes at least n steps just to read the numbers.

One other fun algorithm for multiplication (Egyptian multiplication / Russian Peasant Algorithm), e.g. 27 × 19: 1. Repeat: halve the first number (floor) and double the second number, until we get 0 in the first column. 2. Remove any rows where the first column is even. 3. Add all remaining rows. At home, prove why this algorithm is correct and work out its runtime; a sketch in code follows below.

There is a way to do better than O(n²)! • Karatsuba (1960): O(n^1.6)! • Toom-3/Toom-Cook (1963): O(n^1.465) • Schönhage–Strassen (1971): runs in time O(n log n log log n) • Fürer (2007): runs in time n log n · 2^O(log* n) • Harvey and van der Hoeven (2019): runs in time O(n log n). We’ll see Karatsuba’s algorithm! (Schönhage–Strassen uses the same technical tool as Karatsuba’s.)

Divide and Conquer: breaking up a big problem into smaller subproblems, recursively (a big problem splits into smaller problems, which split into yet smaller problems, and so on).

Divide and Conquer for Multiplication: break up the multiplication of two integers with n digits into multiplications of integers with n/2 digits. For example, 1234 × 5678, where 1234 = 12×100 + 34.

The algorithm (simplify: assume n is even). Write an n-digit number as x₁x₂⋯xₙ = [x₁, x₂, ⋯, x_{n/2}] × 10^(n/2) + [x_{n/2+1} x_{n/2+2} ⋯ xₙ], so that x = a × 10^(n/2) + b and y = c × 10^(n/2) + d. Then one n-digit multiplication becomes four n/2-digit multiplications:

x × y = (a × 10^(n/2) + b)(c × 10^(n/2) + d) = (a × c)10ⁿ + (a × d + c × b)10^(n/2) + (b × d), with the four products labeled P1 = a×c, P2 = a×d, P3 = c×b, P4 = b×d.

Multiply two 4-digit numbers: we broke 1 multiplication of 4-digit numbers into 4 multiplications of 2-digit numbers.
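Referring back to the Egyptian/Russian Peasant slide above, here is a minimal illustrative Python sketch of that algorithm (my own code, not the lecture's; the correctness proof and runtime are still left as the at-home exercise):

def russian_peasant_multiply(a: int, b: int) -> int:
    """Multiply two nonnegative integers by repeated halving and doubling."""
    total = 0
    while a > 0:
        if a % 2 == 1:        # keep rows whose first column is odd
            total += b
        a //= 2               # halve (floor) the first number
        b *= 2                # double the second number
    return total

assert russian_peasant_multiply(27, 19) == 27 * 19   # the slide's example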
1234 × 5678: we wanted to count one-digit operations, so what should we do now? Recurse! Break up each of the 2-digit multiplication problems into 4 multiplications of 1-digit numbers. (Write the pseudo-code, handling corner cases and odd n too.)

Recursion tree for 4-digit numbers: a 4-digit problem splits into four 2-digit problems, and each 2-digit problem splits into four 1-digit problems.

What is the running time of this algorithm? We saw that multiplying two 4-digit numbers resulted in 16 one-digit multiplications. • How many one-digit multiplications for multiplying two 8-digit numbers? • What about multiplying n-digit numbers?

Running time of the algorithm. Claim: the runtime of the algorithm is O(n²). Claim: we are creating O(n²) one-digit operations. So, was there a point to Divide and Conquer?

Karatsuba’s Clever Trick: Divide and Conquer indeed can lead to a faster algorithm! With x × y = (a × 10^(n/2) + b)(c × 10^(n/2) + d) = (a × c)10ⁿ + (a × d + c × b)10^(n/2) + (b × d), the issue is that we are creating 4 subproblems (P1 = a×c, P2 = a×d, P3 = c×b, P4 = b×d). What if we could create fewer subproblems? Main idea: could we write P2 + P3 using what we compute in P1 and P4, and at most one other n/2-digit multiplication?

Karatsuba’s Clever Trick: let us only compute 3 things: • Q1: a × c • Q2: b × d • Q3: (a + b)(c + d). Expressing P2 + P3 differently: a × d + c × b = (a + b)(c + d) − ac − bd = Q3 − Q1 − Q2. So x × y = Q1·10ⁿ + (Q3 − Q1 − Q2)·10^(n/2) + Q2, using only three subproblems.

What is the running time of Karatsuba’s algorithm? Count problems layer by layer: layer 0 has 1 problem with n digits, layer 1 has 3 problems with n/2 digits, and so on.

Technically • We only counted the number of 1-digit problems. • There are other things we do: adding, subtracting, … • Shouldn’t we account for all of that? Absolutely! • We should be more formal, and we will be a bit more formal next time. • In this case, additions/subtractions end up in lower-order terms • They don’t affect the O(·).

Details we are skipping: we used base 10 so far → we counted the number of 1-digit operations, assuming adding/multiplying single digits is easy (we memorized our multiplication table!). What if we use base 2? → We would want to count the number of 1-bit operations. How do we alter Karatsuba’s algorithm for binary numbers?

What about binary representation? It is easy to compute 10^k in base 10; in base 2, it is easy to compute 2^k. For n-bit integer multiplication, write b₁b₂⋯bₙ = [b₁, b₂, ⋯, b_{n/2}] × 2^(n/2) + [b_{n/2+1} b_{n/2+2} ⋯ bₙ], splitting each number into a left half and a right half, so that a × b = (a_L × b_L)2ⁿ + (a_L × b_R + a_R × b_L)2^(n/2) + (a_R × b_R) = ⋯ Practice: complete this equation Karatsuba’s way and re-derive the runtime for multiplying two n-bit numbers.

Other Algorithms • Karatsuba (1960): O(n^1.6) (saw this!) • Toom-3/Toom-Cook (1963): O(n^1.465) (divide and conquer too! Instead of breaking into three n/2-sized problems, break into five n/3-sized problems. Advanced hint: start with 9 subproblems and reduce to 5.) • Schönhage–Strassen (1971): runs in time O(n log n log log n) • Fürer (2007): runs in time n log n · 2^O(log* n) • Harvey and van der Hoeven (2019): runs in time O(n log n)

Next time • Big-Oh and asymptotic notations more formally • Divide and Conquer some more • Matrix multiplications!

Wrap up. Divide and conquer: • A useful and fundamental algorithmic tool. Fun too! Karatsuba Integer Multiplication: • You can do better than grade-school multiplication! • An example of divide-and-conquer in action.
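A minimal illustrative Python sketch of Karatsuba's trick as summarized above, assuming base 10 and nonnegative integers (my own code, not the course's reference implementation):

def karatsuba(x: int, y: int) -> int:
    """Multiply nonnegative integers with three recursive half-size products."""
    if x < 10 or y < 10:                       # one-digit base case
        return x * y
    n = max(len(str(x)), len(str(y)))
    half = n // 2
    a, b = divmod(x, 10 ** half)               # x = a * 10**half + b
    c, d = divmod(y, 10 ** half)               # y = c * 10**half + d
    q1 = karatsuba(a, c)                       # Q1 = a*c
    q2 = karatsuba(b, d)                       # Q2 = b*d
    q3 = karatsuba(a + b, c + d)               # Q3 = (a+b)*(c+d)
    return q1 * 10 ** (2 * half) + (q3 - q1 - q2) * 10 ** half + q2

assert karatsuba(1234, 5678) == 1234 * 5678    # the slide's running example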
• Boltzmann Distribution • Partition Functions • Molecular Energies • The Canonical Ensemble • Internal energy and entropy • Derived functions Chapter 15. Statistical Thermodynamics Major Concepts Review Discrete Energy levels Particle in a box Rigid rotor Harmonic Oscillator Math Probability Lagrange Multipliers Properties of ln Microscopic Properties Quantum Mechanics Spectroscopy Vibrational frequencies Bond dissociations Macroscopic Properties Thermodynamics Heat capacity Coefficient of expansion Statistical Mechanics Statistical Thermodynamics Statistical thermodynamics provides the link between the microscopic (i.e., molecular) properties of matter and its macroscopic (i.e., bulk) properties. It provides a means of calculating thermodynamic properties from the statistical relationship between temperature and energy. Based on the concept that all macroscopic systems consist of a large number of states of differing energies, and that the numbers of atoms or molecules that populate each of these states are a function of the thermodynamic temperature of the system. One of the first applications of this concept was the development of the kinetic theory of gases and the resulting Maxwell-Boltzmann distribution of molecular velocities, which was first developed by Maxwell in 1860 on purely heuristic grounds and was based on the assumption that gas molecules in a system at thermal equilibrium had a range of velocities and, hence, energies. Boltzmann performed a detailed analysis of this distribution in the 1870’s and put it on a firm statistical foundation. He eventually extended the concept of a statistical basis for all thermodynamic properties to all macroscopic systems.  2 2 3/2 3/2 v Mv 2 2 2 2 v 4 v 4 v 2 2 m kT RT m M f e e kT RT                     Maxwell-Boltzmann Distribution: Statistical Thermodynamics Statistics and Entropy Macroscopic state :- state of a system is established by specifying its T, E ,S ... Microscopic state :- state of a system is established by specifying x, p, ε... of ind. constituents More than one microstate can lead to the same macrostate. Example: 2 particles with total E = 2 Can be achieved by microstates 1, 1 or 2, 0 or 0, 2 Configuration:- The equivalent ways to achieve a state W (weight):- The # of configurations comprising a state Probability of a state:- # configuration in state / total # of configurations (Assumes that the five molecules are distinguishable.) Weight of a Configuration 0 1 2 3 0 1 2 3 0 1 2 3 ! ! ! ! ! ! ln ln ! ! ! ! ln ! ln ! ! ! ! ln ! ln ! i i N W N N N N N W N N N N N N N N N N N       Using Sterling’s Approximation: ln N! 
~ N ln N − N, which is valid for N » 1, ln ln ln i i i W N N N N   • Global maximum in f when df = 0 df f x    y dx f y       x dy • Seek a maximum in f(x,y) subject to a constraint defined by g(x,y) = 0 • Since g(x,y) is constant dg = 0 and: dg g x    y dx g y       x dy 0 • This defines dx  g y  g x       dy and dy  g x  g y       dx • Eliminating dx or dy from the equation for df: df f y f x g y g x       dy 0 or df f x f y g x g y       dx 0 • Defines undetermined multiplier  f y g y or  f x g x f x g x    0 or  x f g  0 f y g y      0  y f g  0 or Same as getting unconstrained maximum of K f g   Undetermined Multipliers (Chemist’s Toolkit 15A.1) Example of Undetermined Multipliers A rectangular area is to be enclosed by a fence having a total length of 16 meters, where one side of the rectangle does not need fence because it is adjacent to a river. What are the dimensions of the fence that will enclose the largest possible area? river y x x Thus, the principal (area) function is: F(x,y) = xy (1) and the constraint (16 meters) is: f(x,y) = 2x + y − 16 = 0 (2) If F(x,y) were not constrained, i.e., if x and y were independent, then the derivative (slope) of F would be zero: (3) and (4) However, this provides only two equations to be solved for the variables x and y, whereas three equations must be satisfied, viz., Eqs. (4) and the constraint equation f(x,y) = 0. dF x y F x dx F y dy ( , )        0 0 and 0 F F x y                  Example of Undetermined Multipliers The method of undetermined multipliers involves multiplying the constraint equation by another quantity, λ, whose value can be chosen to make x and y appear to be independent. This results in a third variable being introduced into the three-equation problem. Because f(x,y) = 0, maximizing the new function F’ F’(x,y) ≡ F(x,y) + λ f(x,y) (5) is equivalent to the original problem, except that now there are three variables, x, y, and λ, to satisfy three equations: (6) Thus Eq. 5 becomes F’(x,y) = xy + λ (2x + y -16) (7) Applying Eqs. (6), yielding λ = −4, which results in x = 4 and y = 8. Hence, the maximum area possible is A = 32 m2. ' ' 0 0 and ( , ) 0 F F f x y x y                   ' 2 0 2 F y y x               ' 0 F x x y                2 16 2 2 16 0 x y         Find a maximum of f x,y  ex2 y2   Subject to the constraint g x,y  x 4y17 0 From slope formula df f x       y dx f y       x dy 0 df 2xe x2 y2   dx 2ye x2 y2   dy 0 Global maximum: x = y = 0 Need to find constrained maximum • Find undetermined multiplier K(x,y) f (x,y)g(x, y) e x 2y2  x 4y 17   • An unconstrained maximum in K must K x 2xe x 2y2  0 K y 2ye x 2y2  40 2xf y 2 f • This implies 2x y 2 4x y • Original condition g = 0 g x,y  x 4 4x  17 0 • Constrained maximum : x = 1; y = 4 Example of Undetermined Multipliers Most Probable Distribution 𝐸= ෍ 𝑖 𝑁𝑖∈𝑖 𝑊= 𝑁! 𝑁1! 𝑁2! 𝑁3! … Configurations: (permutations) Total energy: Maximum probability (and, hence, maximum entropy) occurs when each particle is in a different energy level. But minimum energy occurs when all particles are in the lowest energy level. 
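As a quick numerical sanity check of the fence example worked above (maximize A = x·y subject to 2x + y = 16 m), here is a tiny illustrative brute-force search of my own; the most-probable-distribution argument resumes immediately below.

# Brute-force check of the constrained maximum: area A = x*y subject to 2*x + y = 16.
candidates = [x / 100 for x in range(0, 801)]            # x from 0 to 8 m in 1 cm steps
areas = [(x * (16 - 2 * x), x, 16 - 2 * x) for x in candidates]
print(max(areas))   # expected (32.0, 4.0, 8.0): x = 4 m, y = 8 m, A = 32 m**2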
Thus, must find the maximum probability that is possible, consistent with a given total energy, E, and a given total number of particles, N. This is an example of a classic problem, in which one must determine the extrema (i.e., maxima and/or minima) of a function, e.g. entropy, that are consistent with constraints that may be imposed because of other functions, e.g., energy and number of particles. This problem is typically solved by using the so-called LaGrange Method of Undetermined Multipliers. Example: N = 20,000; E = 10,000; three energy levels 𝜖1 = 0, 𝜖2 = 1, 𝜖3 = 2. Constant E requires that N2 + 2N3 = 10,000; constant N requires that N1 + N2 + N3 = 20,000 0 < N3 < 3333; W is maximum when N3 ~1300. 𝑁= ෍ 𝑖 𝑁𝑖 Total number of particles: The most probable distribution is the one with greatest weight, W. Thus, must maximize lnW. Because there are two constraints (constant E and constant N), must use two undetermined multipliers: g(xi) = 0 and h(xi) = 0 so K = (f + ag +bh) • Use this to approach to find most probable population: K ln W a N  Nj j       b E  Njj j        (constant N) (constant E) Want constrained maximum of lnW (equivalent to unconstrained maximum of K) • Use Stirling's Approximation for lnW: K N ln N  Nj ln Nj j  a N  Nj j       b E  Njj j        • Can solve for any single population Ni (all others 0): N Ni 0 Ni Ni 1 Nj Ni 0 K Ni      lnNi Ni 1 Ni      a 1  b i 0 K Ni      lnNi 1a bi 0 ln Ni 1a  bi Most Probable Distribution N  Nj j  A ebj j  A  N ebj j  Ni Nebi ebj j  b 1 kbT pi Ni N ei kbT e j kbT j  A exp 1a     Ni Aebi If , then • A (a) can be eliminated by introducing N: Boltzmann Temperature (will prove later) Boltzmann Distribution Boltzmann Distribution ln Ni 1a  bi Τ 𝑁𝑖𝑁 𝑗= e−𝛽𝜀𝑖−𝜀𝑗= e− Τ 𝜀𝑖−𝜀𝑗 𝑘𝑇 For relative populations: Gives populations of states, not levels. If more than one state at same energy, must account for degeneracy of state, gi. Τ 𝑁𝑖𝑁 𝑗= Τ 𝑔𝑖𝑔𝑗e−𝛽𝜀𝑖−𝜀𝑗 Most Probable Distribution In summary, the populations in the configuration of greatest weight, subject to the constraints of fixed E and N, depend on the energy of the state, according to the Boltzmann Distribution: i i kT i kT i N e N e       The denominator of this expression is denoted by q and is called the partition function, a concept that is absolutely central to the statistical interpretation of thermodynamic properties which is being developed here. As can be seen in the above equation, because k is a constant (Boltzmann’s Constant), the thermodynamic temperature, T, is the unique factor that determines the most probable populations of the states of a system that is at thermal equilibrium. Most Probable Distribution If comparing the relative populations of only two states, εi and εj, for example, i i j j kT i kT j kT N e e N e           The Boltzmann distribution gives the relative populations of states, not energy levels. More than one state might have the same energy, and the population of each state is given by the Boltzmann distribution. If the relative populations of energy levels, rather than states, is to be determined, then this energy degeneracy must be taken into account. 
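As an illustration of the Boltzmann distribution just stated, the following sketch of mine computes fractional level populations for a hypothetical set of energies and degeneracies (the inputs are made up, not taken from the slides):

from math import exp

K_B = 1.380649e-23          # Boltzmann constant, J/K

def boltzmann_populations(energies_j, degeneracies, temperature_k):
    """Return fractional populations p_i = g_i * exp(-e_i/kT) / q for each level."""
    beta = 1.0 / (K_B * temperature_k)
    terms = [g * exp(-beta * e) for e, g in zip(energies_j, degeneracies)]
    q = sum(terms)                      # the (level-form) partition function
    return [t / q for t in terms]

# Hypothetical two-level system: ground state and one level 2e-21 J higher, at 298 K.
print(boltzmann_populations([0.0, 2.0e-21], [1, 1], 298.0))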
For example, if the level of energy εi is gi-fold degenerate (i.e., gi states have that energy), and the level of energy εj is gj-fold degenerate, then the relative total populations of these two levels is given by: i i j j kT i i i kT j j kT j N g e g e N g g e           Example Partition Function: Uniform Ladder 𝓆= 1 + e−𝛽𝜀+ e−2𝛽𝜀+e−3𝛽𝜀+ ⋯ 𝓆= 1 + e−𝛽𝜀+ e−𝛽𝜀2 + e−𝛽𝜀3 + ⋯ 1 1 −𝑥= 1 + 𝑥+ 𝑥2 +𝑥3 + ⋯ 𝓆= 1 1 −e−𝛽𝜀 Example Partition Function: Uniform Ladder Because the partition function for the uniform ladder of energy levels is given by: then the Boltzmann distribution for the populations in this system is: Fig. 15B.4 shows schematically how pi varies with temperature. At very low T, where q ≈ 1, only the lowest state is significantly populated. As T increases, higher states become more highly populated. Thus, the numerical value of the partition function gives an indication of the range of populated states at a given T. 1 1 1 1 kT q e e  b       (1 ) (1 ) i i i i kT kT i N e p e e e e N q   b b b            Two-Level System For a two-level system, the partition function and corresponding population distribution are given by: and 1 1 kT q e e  b     1 1 i i i kT i kT e e e p q e e  b b  b           Two-Level System In this case, because there are only two levels and, hence, only two populations, p0 and p1, and because ε0 = 0 and ε1 = 1, then and At T = 0 K, q = 1, indicating that only one state is occupied. With increasing temperature, q approaches 0.5, at which point both states are equally populated. Thus, it can be generalized that as T → ∞, all available states become equally populated. 0 1 1 1 1 kT p e e  b       1 1 1 kT kT e e p e e  b  b         Generalizations Regarding the Partition Function Conclusions regarding the partition function: • Indicates the number of thermally accessible states in a system. • As T → 0, the parameter β = 1/kT → ∞, and the number of populated states → 1, the lowest (ground) state, i.e., , where g0 is the degeneracy of the lowest state. • As T → ∞, each of the terms β = ε/kT in the partition function sum → 0, so each term = 1. Thus, , since the number of available states is, in general, infinite. • In summary, the molecular partition function q corresponds to the number of states that are thermally accessible to a molecule at the temperature of the system. 0 0 lim T q g   i e b   lim T q   Contributions to Partition Function ● Total energy of a molecule is the sum of the contributions from its different modes of motion (translational, rotational, vibrational), plus its electronic energy: ● Thus, the partition function for the molecule consists of the product of the components from each of the four individual types of energy: g: degeneracy of the corresponding energy level Translational Partition Function ●Translational energy levels are very closely spaced, thus, at normal temperatures, large numbers of them are typically accessible. ● Assume that gas is confined in a three-dimensional volume. 
● Quantum states can be modeled by a particle in a 3D box with side lengths a, b, and c: ●The translational partition function for a single molecule is 2 2 2 2 2 2 2 2 2 ( , , ) 8 8 8 y x z x y z n h n h n h E n n n ma mb mc    2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 8 1 1 1 8 8 8 1 1 1 y x z x y z y x z x y z n n n h mkT a b c T n n n n h n n h h mkT b mkT mkT a c n n n q e e e e                                                         ● For a system having macroscopic dimensions, the summations can be replaced by integrals: ● The above definite integrals evaluate to: Thus, ● If the thermal de Broglie wavelength, Λ, is defined as , then where Λ has units of length, and qT is dimensionless. Translational Partition Function 2 2 2 2 2 2 2 2 2 8 8 8 0 0 0 y x z x y z n n n h h h T mkT mkT mkT a b c x y z n n n q e dn e dn e dn             3 T V q  Example: Calculate the translational partition function for an O2 molecule in a 1 L vessel at 25oC. Thus, under these conditions, an O2 molecule would have ~1029 quantum states thermally accessible to it. The thermal wavelength of the O2 molecule (Λ = h/(2πmkT)1/2) is ~18 pm, which is ~eight orders of magnitude smaller than the size of the containing vessel. In order for the above equation for qT to be valid, the average separation of the particles must be much greater than their thermal wavelength. Assuming that O2 molecules behave as a perfect gas at 298K and 1 bar, for example, the average separation between molecules is ~3 nm, which is ~168 times larger than the thermal wavelength. 34 1/2 27 23 1/2 11 3 3 29 3 11 3 3 6.626 10 J×s (2 ) (2 32 amu 1.67 10 kg/amu 1.38 10 J/K 298K) 1.78 10 m =17.8 pm 1 10 m 1.77 10 (1.78 10 ) m T h mkT V q                            Translational Partition Function Translational Partition Function As seen by its definition: the three-dimensional translational partition function increases with the mass of the particle, as m3/2, and with the volume, V, of the container. For a given particle mass and container volume, qT also increases with temperature, as T3/2 because an infinite number of states becomes accessible as the temperature increases: qT → ∞ as T → ∞   3/2 3 3 2 T mkT V q V h      ● The rotational energy of a rigid rotor is where J is the rotational quantum number (0, 1, 2,...) and I is the moment of inertia. ● The rotational partition function for a linear molecule is thus ● where the rotational constant is given by: ෨ 𝐵= Τ ℏ4π𝑐𝐼 𝑞𝑅= ෍ 𝐽=0 ∞ 𝑔𝐽𝑒−𝐸𝑟𝑜𝑡 𝑘𝑇= ෍ 𝐽=0 ∞ 2𝐽+ 1 𝑒−𝐽𝐽+1 ℎ2 8𝜋2𝐼𝑘𝑇= ෍ 𝐽=0 ∞ 2𝐽+ 1 𝑒−𝛽ℎ𝑐෨ 𝐵𝐽𝐽+1 Rotational Partition Function for Diatomic Molecules B ● For molecules with large moments of inertia or at sufficiently high temperature, the above sum approximates to ● In general, where σ is the symmetry number: – σ = 1 for heteronuclear diatomic molecules – σ = 2 for homonuclear diatomic molecules • The temperature above which the approximation shown above for qR is valid is termed the characteristic rotational temperature, θR, which is given by: . 
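Here is a quick numerical check of the O2 worked example above (thermal wavelength and translational partition function for a 1 L vessel at 298 K); the constants are standard values I am supplying, and small differences from the slide's rounded numbers are expected. The rotational partition function discussion resumes below.

from math import pi, sqrt

H = 6.62607015e-34        # Planck constant, J s
K_B = 1.380649e-23        # Boltzmann constant, J/K
AMU = 1.66053907e-27      # atomic mass unit, kg

m = 32.0 * AMU            # mass of O2
T = 298.0                 # temperature, K
V = 1.0e-3                # 1 L in m^3

thermal_wavelength = H / sqrt(2 * pi * m * K_B * T)    # Lambda = h / (2*pi*m*k*T)**0.5
q_translational = V / thermal_wavelength ** 3          # qT = V / Lambda**3

print(thermal_wavelength)   # ~1.79e-11 m (about 18 pm)
print(q_translational)      # ~1.75e29 states (the slide quotes ~1.77e29 with rounded constants)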
• At sufficiently high temperatures (T » θR), the rotational partition function for linear molecules is: Rotational Partition Function for Diatomic Molecules / R hcB k   R R T q   2 2 ( 1) 2 ( 1) 8 2 0 8 1 (2 1) (2 1) J J h R hcBJ J IkT IkT q J e dJ J e dJ h hcB b   b             2 2 8 1 R IkT q h hcB   b   Rotational Partition Function for Diatomic Molecules Rotational Partition Function for Diatomic Molecules ● The rotational energy of linear polyatomic molecules is the same as for diatomics, with s = 1 for nonsymmetric linear molecules (HCN) and 2 for symmetrical molecules (CO2). ● General polyatomic molecules may have 3 different values of I (moments of inertia), and so have 3 different rotational temperatures. – If symmetries exist, some of the moments of inertia may be equal. Θ𝐴= ℎ2 8𝜋2𝐼𝐴𝑘 𝑞𝑟𝑜𝑡= 𝜋ൗ 1 2 𝜎 𝑇3 Θ𝐴Θ𝐵Θ𝐶 ൗ 1 2 Rotational Partition Function for Polyatomic Molecules • The symmetry number, s, is the distinct number of proper rotational operations, plus the identity operator, i.e., the number is the number of indistinguishable positions in space that can be reached by rigid rotations. Origin of Symmetry Numbers Quantum mechanical in origin, viz., the Pauli principle forbids occupation of certain states. e.g. H2 occupies only even J-states if the nuclear spins are paired (para-H2) and only odd J-states if the spins are parallel (ortho-H2). Get about the same value as if each J term contributed only half its normal value to the sum. Thus, must divide by =2. Similar arguments exist for other symmetries, e.g. CO2: Vibrational Partition Function In the harmonic oscillator approximation, the vibrational energy levels in a diatomic molecule form a uniform ladder separated by ℏ𝜔(= hc ǁ 𝜈). ℏ𝜔= ℎ𝜈 ℏ= ൗ ℎ2𝜋 𝜔= 2𝜋𝜈 ǁ 𝜈= Τ 𝜈𝑐 Thus, using the partition function developed previously for a uniform ladder (Example 15B.1): At sufficiently high temperatures, such that T » θV, 1 1 1 1 1 1 1 1 V V hc hc kT T q e e e e  b b               where θV is the characteristic vibrational temperature, given by V hc k    V V kT T q hc    Vibrational Partition Function In molecules having sufficiently strong bonds, e.g., C-H bonds (~1000 – 2000 cm-1), the vibrational wavenumbers are typically large enough that . In such cases, the exponential term in the denominator of qV approaches zero, resulting in qV values very close to 1, indicating that only the zero-point energy level is significantly populated. 1 hc b  By contrast, when molecular bonds are sufficiently weak that , qV may be approximated by expanding the exponential (ex = 1 + x + …): Thus, for weak bonds at sufficiently high temperatures: 1 hc b  1 1 1 1 1 (1 ...) 1 1 V hc q hc e hc kT b   b                  V kT q hc  PHYSICAL CHEMISTRY: QUANTA, MATTER, AND CHANGE 2E| PETER ATKINS| JULIO DE PAULA | RONALD FRIEDMAN ©2014 W. H. FREEMAN D COMPANY Vibrational Partition Function Electronic Partition Function ● Except for hydrogen atoms, there are no simple formulas for electronic energy levels from quantum mechanics. ● The partition function for electronic states is: 𝑞𝐸= ෍ 𝑙𝑒𝑣𝑒𝑙𝑠 𝑔𝑖𝑒−𝛽𝜀𝑖= 𝑔0𝑒−𝛽𝜀0 + 𝑔1𝑒−𝛽𝜀1 + ⋯ ● Because the first excited electronic state is typically well above the ground state, i.e., 𝜀1 −𝜀0 ≫kT, only the ground state is populated. ● Exceptions are molecules with low lying electronic states, such as NO, NO2 and O2. Electronic, Vibrational and Rotational energy levels for the hydrogen molecule. 
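To illustrate the high-temperature rotational and harmonic vibrational partition functions above, here is a small sketch with hypothetical molecular constants of my choosing (B = 2 cm^-1, nu = 2000 cm^-1, sigma = 1); it is an illustration, not data from the slides:

from math import exp

H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e10       # speed of light in cm/s, so wavenumbers in cm^-1 work directly
K_B = 1.380649e-23      # Boltzmann constant, J/K

def q_rotational_high_t(b_wavenumber, sigma, temperature):
    """High-temperature (T >> theta_R) rotational partition function, q_R = kT / (sigma*h*c*B)."""
    return K_B * temperature / (sigma * H * C * b_wavenumber)

def q_vibrational(nu_wavenumber, temperature):
    """Harmonic-oscillator vibrational partition function, q_V = 1 / (1 - exp(-h*c*nu/kT))."""
    x = H * C * nu_wavenumber / (K_B * temperature)
    return 1.0 / (1.0 - exp(-x))

# Hypothetical heteronuclear diatomic (sigma = 1) at 298 K.
print(q_rotational_high_t(2.0, 1, 298.0))   # on the order of 10**2 accessible rotational states
print(q_vibrational(2000.0, 298.0))         # close to 1: only the vibrational ground state matters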
PHYSICAL CHEMISTRY: QUANTA, MATTER, AND CHANGE 2E| PETER ATKINS| JULIO DE PAULA | RONALD FRIEDMAN ©2014 W. H. FREEMAN D COMPANY Example of Low Lying Excited Electronic State Mean Molecular Energy Because, as shown previously, the overwhelmingly most probable population in a system at temperature T is given by the Boltzmann distribution, ( ), then 𝜀= 1 𝑞σ𝑖ε𝑖𝑒−𝛽ℇ𝑖, where β = 1/kT. 1 i i i E N N N      For a system of non-interacting molecules, the mean energy of a molecule , relative to its ground state, is just the total energy of the system, E, divided by the total number of molecules in the system, N: / / i i N N e q b   The latter relationship can be manipulated to express in terms only of q by recognizing that Hence, where the partial derivatives recognize that q may depend on variables (e.g., V) other than only T. Because the above expression gives the mean energy of a molecule relative to its ground state, the complete expression for is: This result confirms the very important conclusion that the mean energy of a molecule can be calculated knowing only the partition function (as a function of temperature).     ( ) i i i e e b b  b      1 ( ) 1 1 i i i i V V V e q e q q q b b  b b b                                  1 ln gs gs V V q q q    b b                     Mean Molecular Energy Comparison of the fraction of populations vs. the total energy of a two-level system. Translational Energy Each of the three modes of motion (translational, rotational, and vibrational), as well as the potential energies represented by the electronic state and electron spins, contributes to the overall mean energy of a system. Translational Contribution: As developed previously, for a three-dimensional container of volume V, the translational partition function is given by where Λ3 is essentially a constant multiplied by β3/2. Thus, In one dimension,   3/2 3/2 3 3 3 2 2 T V V m V q mkT h h   b           3 3/2 3/2 3/2 3/2 1 d 1 d d 1 3 d 2 2 3 T T V T V T q V q V C V V k C b b b b b b b  b b                                           1 2 T kT    Rotational Energy As developed previously, the rotational partition function for a linear molecule is given by: 2 2 ( 1) ( 1) 8 (2 1) (2 1) J J h R hcBJ J IkT i i q J e J e b           At sufficiently low temperatures, such that , the term by term sum for a non-symmetrical molecule gives / R T hcB k    2 6 1 3 5 ... R hcB hcB q e e b b      Taking the derivative of qR with respect to β gives 2 6 d (6 30 ...) d R hcB hcB q hcB e e b b b      Hence, 2 6 2 6 1 d (6 30 ...) d 1 3 5 ... 
R hcB hcB R R hcB hcB q hcB e e q e e b b b b  b             At high temperatures (T >> θR), Thus, 1 R R T q hcB  b   1 d d 1 d d d 1 d 1 R R R q hcB q h T B k c  b b b b b b b b             Rotational Energy Vibrational Energy At high temperatures, T >> θV = ℎ𝑐ǁ 𝜈/𝑘, so 𝜀𝑉= ℎ𝑐ǁ 𝜈 e𝛽ℎ𝑐෥ 𝜈−1 = ℎ𝑐ǁ 𝜈 1 + 𝛽ℎ𝑐ǁ 𝜈+ ⋯ −1 ≈1 𝛽= 𝑘𝑇 As developed previously, the vibrational partition function for the harmonic oscillator approximation is: 1 1 V hc q e b    Because qV is independent of volume it can be differentiated with respect to β: 2 d d 1 d d 1 (1 ) V hc hc hc q hc e e e b  b  b   b b              and since the mean energy is given by   2 1 d (1 ) d (1 ) 1 V hc hc V hc V hc hc q hc e hc e e q e e b  b  b  b  b     b             then 1 V hc hc eb       However, because most values of θV are very high, (> 1000K) this condition is seldom satisfied. Equipartition of Energy ● Degrees of freedom receive equal amounts of energy, each of ½ kT. ● In diatomic molecules at sufficiently high temperature: – 3 translational degrees of freedom = 3/2 kT – 2 rotational degrees of freedom = kT – vibrational potential and kinetic energy = kT ● At sufficiently low temperatures, only the ground state is significantly populated. This causes degrees of freedom to freeze out and not contribute to the heat capacity. – This can be seen in the treatment of the vibrational partition function, as well as in the two-level system discussed previously. – Note that the treatment of the rotational partition function on the previous slides cannot predict the freezing out of the rotational degrees of freedom, because the energy levels were approximated as a continuum using the integral. Electronic & Electron Spin Energies Because statistical energies are measured relative to the ground state, and only the ground electronic state is usually occupied, then and 𝜀S = Τ 2𝜇Bℬ e2𝛽𝜇Bℬ+ 1 An electron spin in a magnetic field B can have two possible energy states (𝜀−1/2 = 0 and 𝜀+1/2 = 2𝜇Bℬ) and energy given by where ms is the magnetic quantum number, and μB is the Bohr magneton (eћ/2me = 9.274 x 10-24 J/T). 𝐸𝑚𝑠= 2𝜇Bℬ𝑚𝑠 𝑞S = ෍ 𝑚𝑠 𝑒−𝛽ℇ𝑚𝑠= 1 + 𝑒−2𝛽𝜇Bℬ 0 E    1 E q  Electronic Energies Electron Spin Energies The spin partition function is therefore and the mean energy of the spin is Internal Energy and the Partition Function As described previously, the mean energy of a system of independent non-interacting molecules is given by: where β = 1/kT. For a system containing N molecules, the total energy is thus , so the internal energy U(T) is: N   ln ( ) (0) (0) (0) V V N q q U T U N U U N q  b b                       If the system consists of interacting molecules (e.g., a non-ideal gas), then the canonical partition function Q must be used: 1 V q q  b          ln ( ) (0) V Q U T U b          𝐶𝑣= 𝜕𝑈 𝜕𝑇 𝑉 𝜀𝑉= ℎ𝑐෤ 𝜈 e𝛽ℎ𝑐෥ 𝜈−1 = 𝑘𝜃𝑣 e 𝜃𝑣 𝑇−1 Recall that the constant-volume heat capacity is: As shown previously, the mean vibrational energy of a collection of harmonic oscillators is given by where is the characteristic vibrational temperature. 
Thus, the vibrational contribution to the molar heat capacity at constant volume is / V hc k      2 / , 2 / / d d 1 d d 1 1 V V V V V T V V A v m T T N e C R R T T T e e                    or, expressed as a function of temperature: 2 2 /2 , / ( ), where ( ) 1 V V V T V v m T e C Rf T Rf T T e                     Heat Capacity and the Partition Function 𝐶𝑣= −𝑘𝛽2 𝜕𝑈 𝜕𝛽 𝑉 = −N𝑘𝛽2 𝜕𝜀 𝜕𝛽 𝑉 = 𝑁𝑘𝛽2 𝜕2𝑙𝑛𝑞 𝜕𝛽2 𝑉 If the derivative with respect to T is converted into a derivative with respect to β, then Cv can be expressed as If T >> θM, where θM is the characteristic temperature of each mode ( ), then the equipartition theorem can be applied. In this case, each of the three translational modes contributes ½ R. If the rotational modes are represented by νR, then νR = 2 for linear molecules and 3 for non-linear molecules, so the total rotational contribution is ½ νRR. If the temperature is sufficiently high for νV vibrational modes to be active, then the vibrational contribution is νVR. Thus, the total molar heat capacity is: In most cases, νV = 0. / and / R V hcB k hc k      CV,m = ½ (3 + R + 2V)R Heat Capacity and the Partition Function Entropy and the Partition Function Boltzmann equation: S = k lnW where S is the statistical entropy, and W is the weight of the most probable configuration of the system. The Boltzmann Equation is one of the most important relationships in statistical thermodynamics, and the statistical entropy is identical to the thermodynamic entropy, behaving exactly the same in all respects. • As the temperature decreases, for example, S decreases because fewer configurations are consistent with the constant total energy of the system. • As T → 0, W → 1, so ln W = 0, since only one configuration (viz., the one in which every molecule is in the lowest level) is consistent with E = 0. • As S → 0, T → 0, which is consistent with the Third Law of thermodynamics, i.e., that the entropies of all perfect crystals approach zero as T → 0. Entropy and the Partition Function Relationship of Boltzmann Equation to the partition function • For a system of non-interacting and distinguishable molecules, • For a system of non-interacting and indistinguishable molecules (e.g., a gas of identical molecules), • For a system of interacting molecules, use the canonical partition function, ( ) (0) ( ) ln U T U S T Nk q T    ( ) (0) ( ) ln U T U q S T Nk T N    ( ) (0) ( ) ln U T U S T Nk Q T    Entropy and the Partition Function As shown previously, the total energy of a molecule can be closely approximated by the sum of the independent contributions from translational (T), rotational (R), vibrational (V), and electronic (E) energies. The total entropy can be similarly treated as a sum of individual contributions. • For a system of distinguishable, non-interacting molecules, each contribution has the form of that for S(T) above: (for M = R, V, or E) • For M = T, the molecules are indistinguishable, so   ( ) (0)] ( ) ln M M U T U S T Nk q T      ( ) (0)] ( ) ln T T U T U q S T Nk T N    Translational Entropy: The Sackur-Tetrode Equation For a system consisting of a perfect monatomic gas, only translation contributes to the total energy and molar entropy, which is described by the Sackur-Tetrode Equation: where Λ is the thermal wavelength (h/(2πmkT)1/2 described previously, Vm is the molar volume, NA is Avogadro’s Number, and R/NA = k. 
Since for a perfect gas V_m = RT/p, S_m can also be calculated from

  S_m = R ln[RT e^{5/2}/(p N_A Λ³)] = R ln[kT e^{5/2}/(p Λ³)].

Re-writing the Sackur-Tetrode equation in the form

  S = nR ln(aV),  where a = e^{5/2}/(n N_A Λ³),

shows that when a perfect monatomic gas expands isothermally from V_i to V_f, ΔS is given by

  ΔS = nR ln(aV_f) − nR ln(aV_i) = nR ln(V_f/V_i),

which is identical to the expression obtained from the thermodynamic definition of entropy.

Entropy: Rotational Contribution

At sufficiently high temperatures, T >> θ_R (= hcB̃/k), which is usually the case, q_R ≈ kT/(hcB̃) = T/θ_R, and the equipartition theorem predicts the rotational contribution to the molar internal energy to be RT. Therefore,

  S_m^R = [U^R(T) − U^R(0)]/T + R ln q_R = R[1 + ln(kT/(hcB̃))] = R[1 + ln(T/θ_R)].

Hence, this relationship indicates that
• The rotational contribution to the entropy increases with increasing T because more rotational states become accessible.
• The rotational contribution is large when the rotational constant B̃ is small, because then the rotational levels are more closely spaced.

Entropy: Vibrational Contribution

The vibrational contribution to the molar entropy, S_m^V, can be obtained by combining q_V = 1/(1 − e^{−βhcν̃}) with the mean vibrational energy ⟨ε_V⟩ = hcν̃/(e^{βhcν̃} − 1):

  S_m^V = [U^V(T) − U^V(0)]/T + R ln q_V = R[ βhcν̃/(e^{βhcν̃} − 1) − ln(1 − e^{−βhcν̃}) ],

where the final equality uses the expression for ⟨ε_V⟩ above.
• Both terms in the final expression approach 0 as T → 0, so S = 0 at T = 0.
• S increases as T increases because more vibrational states become thermally accessible.
• At a given T, S is larger for higher-molecular-weight molecules than for lower ones because their energy levels are more closely spaced, and thus more of them are thermally accessible.

Derived Functions: Enthalpy and Gibbs Energy

The partition function can also be used to calculate the pressure (p = kT(∂ ln Q/∂V)_T), the enthalpy, and the Gibbs energy, and various thermodynamic relationships then give other quantities. In all of the equations below, the canonical partition function Q is related to the molecular partition function q by Q = q^N for distinguishable molecules and Q = q^N/N! for indistinguishable molecules (e.g., a gas). As shown previously, the internal energy and entropy are related to the partition function as follows:

  U(T) = U(0) − (∂ ln Q/∂β)_V,
  S(T) = [U(T) − U(0)]/T + k ln Q.

Then

  H = U + pV:   H(T) = H(0) − (∂ ln Q/∂β)_V + kTV(∂ ln Q/∂V)_T,
  G = H − TS = A + pV:   G(T) = G(0) − kT ln Q + kTV(∂ ln Q/∂V)_T.

For a perfect gas of indistinguishable molecules,

  G(T) = G(0) − nRT ln(q/N)

(from Q = q^N/N!, ln Q = N ln q − ln N!, and ln N! ≈ N ln N − N), which in terms of the molar partition function q_m = q/n becomes

  G(T) = G(0) − nRT ln(q_m/N_A).

Equilibrium Constants

As shown previously, the equilibrium constant K of a reaction is related to the standard Gibbs energy of reaction, Δ_rG° (p° = 1 bar), by Δ_rG° = −RT ln K. From statistical thermodynamics, the Gibbs energy is related to the molar partition function q_m = q/n by G(T) = G(0) − nRT ln(q_m/N_A). In order to calculate a value for K, these equations must be combined: to develop an expression for K, the standard molar Gibbs energy, G°/n, must be determined for each reactant and product in the reaction.
For the gas-phase reaction aA + bB ⇌ cC + dD, it can be shown that the equilibrium constant is given by

  K = [ (q°_{C,m}/N_A)^c (q°_{D,m}/N_A)^d ] / [ (q°_{A,m}/N_A)^a (q°_{B,m}/N_A)^b ] · e^{−Δ_rE_0/RT},

where Δ_rE_0 is the difference in molar energies of the ground states of the products and reactants, calculated from the bond dissociation energies of the various reaction species, i.e., D_0(products) − D_0(reactants). Using the (signed) stoichiometric numbers ν_J introduced previously, K is given by

  K = [ ∏_J (q°_{J,m}/N_A)^{ν_J} ] · e^{−Δ_rE_0/RT}.

Contributions to Equilibrium Constants

For the reaction R ⇌ P,

  K = N_P/N_R = (q_P/q_R) e^{−Δ_rE_0/RT}.

Assume that R has only a single accessible level, so that q_R = 1, and that P has a large number of closely spaced levels, so that q_P = kT/ε. The equilibrium constant is then

  K = (kT/ε) e^{−Δ_rE_0/RT}.

• When Δ_rE_0 is very large, the exponential term dominates and K << 1, indicating that very little P is present at equilibrium.
• When Δ_rE_0 is small but positive, K can exceed 1 because the factor kT/ε may be large enough to offset the low value of the exponential term. The size of K then results from the large amount of P at equilibrium, a consequence of its high density of states.
• At low temperatures, K << 1 and R predominates at equilibrium.
• At high temperatures, the exponential function approaches 1 and P becomes dominant.
• For this endothermic reaction, a temperature increase favors P because its states become increasingly accessible as the temperature increases.
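The temperature behavior described in these bullet points can be made concrete with a short sketch that evaluates K = (kT/ε)e^{−Δ_rE_0/RT} for the model above. This is our own illustration, not part of the original notes, and the values of ε and Δ_rE_0 are arbitrary choices.

```python
import numpy as np

R = 8.314462618            # J K^-1 mol^-1
N_A = 6.02214076e23
k_B = R / N_A

def K_model(T, eps, dE0):
    """K = (kT/eps) * exp(-dE0/(R*T)) for the model with q_R = 1 and q_P = kT/eps.

    eps : spacing of the P levels per molecule (J); dE0 : Delta_r E_0 per mole (J/mol)."""
    return (k_B * T / eps) * np.exp(-dE0 / (R * T))

eps = 1.0e-22              # J, closely spaced levels of P (illustrative)
dE0 = 10.0e3               # J/mol, small positive reaction energy (illustrative)

for T in (50.0, 300.0, 1000.0, 5000.0):
    print(f"T = {T:6.0f} K   K = {K_model(T, eps, dE0):.3e}")
# K << 1 at low T (R dominates); at high T the exponential tends to 1 and the
# density-of-states factor kT/eps makes K >> 1, so P dominates.
```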
Effective graviton mass in de Sitter space

D. I. Sadekov ∗1,2

1 Moscow Institute of Physics and Technology, 141701, Institutskiy pereulok 9, Dolgoprudny, Russia
2 NRC "Kurchatov Institute", 123182, Moscow, Russia

November 21, 2023

Abstract

We calculate the effective mass of gravitational perturbations induced by the interaction of the classical gravitational field with quantum matter in the background of the Poincaré patch of de Sitter space. Using the Schwinger-Keldysh diagrammatic technique, the one-loop effective action is calculated and it is shown that the graviton does not acquire mass for the most symmetric Bunch-Davies state. However, we show that even in this case there is a nontrivial modification of the theory at one loop in the scalar sector of gravity.

∗ [email protected]

arXiv:2311.11053v1 [hep-th] 18 Nov 2023

Contents
1 Introduction 3
2 Preliminaries and definitions 6
 2.1 The quantization of the scalar field 6
 2.2 Effective equation of motion 8
 2.3 Implications of de Sitter isometries 10
3 Effective action 12
4 Effective mass of the tensor mode 14
5 Discussion on the scalar sector of gravity 17
 5.1 General remarks 17
 5.2 Two-dimensional space-time 19
6 Conclusion 23
A Bubble diagram 24
B Gauge invariance of the effective action 26
C Integral relation for Green functions 27
D Relations for the stress-energy tensor 27
E The mass of photon in AdS4 28

1 Introduction

Quantum field theory in curved space-time aims to shed light on the problems of the cosmological constant and the evolution of the early Universe. De Sitter space is the simplest example for investigating these questions, but there are still many subtleties that have not been studied in sufficient detail, such as IR divergences in loop corrections [1–3], vacuum instabilities [1, 4–6], and the behavior of light fields. One way to explore the behavior of the system and the response of quantum matter to external conditions is to find the effective action for small perturbations of the external field.

This paper's main objective is to study the effective mass term for the graviton, which it may acquire in the one-loop effective action in de Sitter (dS) and anti-de Sitter (AdS) space-times. The motivation for this question comes from the natural Gibbons-Hawking temperature in dS [7, 8], which suggests that the photon and graviton can acquire a non-zero mass, as happens in the physics of plasma. Our work is inspired by the paper where it is shown that, despite the fact that an observer would detect some sort of thermal equilibrium with the canonical temperature T_dS = H/2π (H is the Hubble constant here), there is no effective Debye mass for the photon for the most symmetric Bunch-Davies state of the matter. We extend this discussion to the case of the gravitational mass. The graviton field itself is considered at the classical level as a perturbation of the dS metric, and we take free scalar field theory as the quantum matter. It is worth noting that a perturbation of the metric in an external field has several physical modes that can exhibit different behavior in the effective theory.
Consideration of these cosmological perturbations is important for understanding the propagation of gravitational waves and density fluctuations of matter in the early Universe . There are two well-studied types of massive terms [10, 11] that can be added to the gravity action: Smass = ˆ dDxp|g| ϵgh h2 + m2 g hμν hμν − h2 , (1.1) which break diffeomorphism invariance. The second one is called Fierz-Pauli massive term and it is known to bring no ghost-like degrees of freedom to the linearized gravity, while the first one is associated to the so called “scalar ghost” and leads to Ostrogradsky’s instability . In our work, we attribute the emergence of mass to the appearance of terms like (1.1), if any, in the long-wave expansion of the effective action, which in turn does respect gauge invariance. For instance, given the Minkowski background, we have for Ricci scalar R in the linear and second orders: R(1) ∼ k2  gμν − kμkν k2  hμν , p |g|R (2) ∼ k2 hμν hμν − h2 + 2 kμkν gαβ hμν hαβ − hμα hνβ  . (1.2) Therefore, small and slowly changing perturbations of metric acquire a mass if the effective action 3contains such covariant contributions as: ∆Γ eff = ˆ dDxp|g|  ϵgh R 1 □2 R + m2 g 1 □R  . (1.3) In general, the situation is much more intricate due to ultraviolet effects, renormalizations of cosmological constant, conformal anomalies, and other factors. For example, one of the primary contributions to induced gravity in two-dimensional space is the Mabuchi action [12, 13], which originates from the integral of the Green function for the covariant Laplacian taken at coincident points. We have not been able to solve all the puzzles that arise in this way up to this point. However, we define and analyze the quantity of effective mass as a measure of the backreaction of quantum matter immersed in the strong gravitational background for the simplest case of Bunch-Davies state in Poincaré patch of dS. As long as we treat the gravitational sector at the classical level, the notion of induced mass should not be referred to as some mass of the particle graviton but must be considered as a characteristic of matter’s behavior in the given state. For example, in the case of large positive masses, gravitational interaction is screened, and a negative squared mass corresponds to the decay of the initial external background. In particular, in the presence of classical stress-energy tensor, a negative squared mass for thermal state of matter leads to the well-known Jeans’ instability [9, 14, 15]. At the same time, taking into account such loop effects as secularly growing corrections [1, 16–20], the stability of dS is a separate interesting question with many unresolved problems, because these contributions can drastically affect the tree-level situation for different types of quantum fields and various initial states. This is why we believe that more approaches are needed to treat this issue. Another curious aspect of the appearance of the gauge field’s mass is the connection with the Higgs mechanism. We expect that the field of spin-s swallows the Goldstone boson of spin-(s − 1) to acquire a mass. In Minkowski space with λϕ 4 potential, there are two diagrams, local and non-local, which combine into a transverse structure and shift the pole of the gauge field’s propagator: Πμν = + ∝  gμν − kμkν k2  Mph , Figure 1: Higgs mechanism in Standard Model where crosses represent the vacuum expectation of the Higgs field, and the pole at k2 = 0 indicates the exchange of the massless boson. 
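As an aside, the transverse structure quoted in (1.2), and again in the self-energy of fig. 1, can be checked symbolically: the projector η^{μν} − k^μ k^ν/k² annihilates a pure-gauge perturbation h_{μν} = k_μ ξ_ν + k_ν ξ_μ. The sketch below is our own minimal flat-space verification (not taken from the paper); all variable names are local choices.

```python
import sympy as sp

eta = sp.diag(1, -1, -1, -1)              # Minkowski metric, signature (+,-,-,-)
kc = sp.Matrix(sp.symbols('k0:4'))        # contravariant momentum k^mu
xc = sp.Matrix(sp.symbols('xi0:4'))       # contravariant gauge parameter xi^mu

kd, xd = eta * kc, eta * xc               # lowered indices k_mu, xi_mu
k2 = (kc.T * kd)[0]                       # k^mu k_mu

h = kd * xd.T + xd * kd.T                 # pure-gauge perturbation h_{mu nu}
P = eta - (kc * kc.T) / k2                # projector with upper indices (numerically eta^{mu nu} = eta_{mu nu})

print(sp.simplify(sp.trace(P * h)))       # P^{mu nu} h_{mu nu}  ->  0
```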
If there is no potential with spontaneous symmetry breaking 4mechanism and our photon or graviton interacts with free field theory, we have bubble and tadpole diagrams instead of the contributions of fig.1. For example, in the case of the graviton, the analogue of the Higgs mechanism will occur if a pole corresponding to the Goldstone vector appears in the non-local part of the graviton’s self-energy. This is the case when the state produced by the matter stress-energy tensor Tμν (x)|0⟩ has a non-zero overlap with the state of Goldstone vector, which can be easily seen if one inserts the sum over all states into the non-local part of the self-energy Σμν |αβ ∼ P state ⟨0|Tμν |state ⟩⟨ state |Tαβ |0⟩. As long as the stress-energy tensor is quadratic in fields, the Higgs mechanism requires the appearance of the Goldstone vector in the tensor product of states in the matter spectrum, which is a sum of infinite-dimensional positive-weight unitary irreducible representations of the isometry group of the space under consideration [21–23]. Although it is difficult to imagine that the stress-energy tensor of a free field theory can create a Goldstone vector as a bound state, this actually happens under certain conditions in AdS: e.g. the presence of the Goldstone vector in the bubble diagram is shown in [23, 24] using the expansion of propagators at large distances. In dS, the spectrum of states is different , and we do not expect the same phenomena to occur. Moreover, the analysis in dS should be more careful as it is a non-stationary background, so one has to adopt the Schwinger-Keldysh diagrammatic technique. In particular, it was shown in that Debye and magnetic masses of the photon in dS are zero in the maximally symmetric and analytic Bunch-Davies state. In this paper, we show that the effective mass of the tensor mode of the graviton is also zero up to a subtraction of UV divergent contact terms, which are present in Minkowsky space as well. Specifically, in Section 2, we describe the particular model in question and provide Schwinger-Keldysh diagrammatic technique for it. In Section 3, we derive the expression for gravity’s induced action in terms of loop integrals and then use it in Section 4 to calculate the mass mg of the tensor mode of the graviton. We give a definition to this quantity in a manner of non-equilibrium condensed matter physics . Finally, in Section 5, we discuss some features that arise in the scalar sector of gravity in the effective action. First, for space-time dimension D > 2, the loop integral for the mass of the scalar mode diverges, requiring a more careful regularization procedure that preserves the symmetries of the problem for further analysis. Second, in dS space, this mass already has a nonzero value at the classical level, with both the massive and kinetic terms entering the action with the wrong sign. This is not a problem in classical theory since scalar modes do not propagate in it. Third, we claim that divergent terms appear in a non-stationary gravitational background that are absent in the flat case. This feature should be related to a deficiency in defining effective mass as a term in the expansion of the effective action into a series and should be eliminated after resummation, so more detailed analysis needs to be carried out in subsequent studies. 
We separately considered the case of two-dimensional spacetime, where integrals converge and found that in dS, the effective mass of the scalar mode differs significantly from its formal value in flat space for light matter fields, indicating a significantly different response to external background in these two situations. Additionally, in 5Appendix E, we show the presence of Goldstone scalar and mass of a photon in one-loop photon’s self-energy in AdS 4 following the spirit of work to establish differences between field theories in AdS and dS. 2 Preliminaries and definitions Consider the action for gravity coupled to the real massive scalar field in D = d + 1 dimensions: S[gμν , ϕ ] = − 116 πG ˆ dDxp|g| [R + 2Λ] + 12 ˆ dDxp|g| gμν ∂μϕ∂ ν ϕ − M 2ϕ2 , (2.1) where the Λ-term is defined by the Hubble constant H as Λ = (D−1)( D−2) 2 H2, G is a Newton’s constant and below we use the dimensionless mass parameter m = MH . We will split the metric into the background in the Poincaré patch of dS and small perturbation, hμν , over it: gμν = ˆ gμν + hμν , ˆgμν = 1 H2η2 diag (1 , −1, . . . , −1) , (2.2) where η is the conformal time, which is related to the inertial observer time coordinate as η = 1 H e−Ht . Below we will also use the perturbation with raised indices hμν def = ˆ gμα ˆgνβ hαβ and the rescaled field hμν = H2η2hμν , such that gμν = 1 H2η2 [γμν + hμν ] , γ μν = diag (1 , −1, . . . , −1) . The field hμν is a more appropriate variable for the problem in question, e.g. the equations of motion for the naive linearized massive gravity take the form of the usual Klein-Gordon equation for the fields, obtained from the components of hμν by means of linear operations . We consider gravity as classical and quantize only the scalar field. 2.1 The quantization of the scalar field We quantize the scalar field in the standard way using the creation and annihilation operators with the canonical commutation relations: ϕ(η, x) = ˆ dD−1p (2 π)D−1 bapfp(η)eipx + ba† p f ∗ p (η)e−ipx  , bap, ba† q = (2 π)D−1δ(p − q),fp(η) = H D−22 η D−12 hν (pη ) , p ≡ | p|. (2.3) Here hν (pη ) can be expressed in terms of the Hankel function of the first kind H(1) ν (z) for comple-mentary m < D−12  and principle m > D−12  series as follows: hν (pη ) = √π 2 e− π 2ν H(1) iν (pη ) , ν = r m2 − (D − 1) 2 4 (principal series) ,hν (pη ) = √π 2 H(1) ν (pη ) , ν = r(D − 1) 2 4 − m2 (complementary series) , (2.4) 6so that the mode functions fp(η) obey the classical equation of motion ∇η∂ηfp(η) +  p2 + m2 η2  fp(η) = 0 , ∇η ≡ ∂η − D − 2 η . (2.5) Note, that by choosing the harmonics in the form (2.4) and by the condition bap |BD ⟩ = 0 we specify the Bunch-Davies state of the scalar field theory in the Poincaré patch of dS D – we will stick to this initial state throughout this paper as it preserves the highest number of symmetries in loop calculations , while the effects of various nontrivial initial states will be considered elsewhere. Next, in order to construct the Schwinger-Keldysh diagrammatic technique, it is appropriate to introduce the fields after the Keldysh rotation: ϕcl = ϕ+ + ϕ− 2 , ϕ q = ϕ+ − ϕ−; hμν cl = hμν + hμν − 2 , h μν q = hμν + − hμν − . 
(2.6) Here “ +”- and “ −”-parts are attributed to the upper and lower branches of the Keldysh contour C on t–plane: t η ∞ 0 C η = 1 H e−Ht Figure 2: Keldysh contour on t− and η−plane The corresponding propagators of the scalar field in these notations have the form ( TC is the or-dering operator along the contour C on the fig.2): G(x, x ′) = ⟨T C φ(x)φ(x′)⟩ = F (x, x ′) − i 2sign C (η − η′)ρ(x, x ′), ⟨ϕcl (x)ϕcl (y)⟩ = F (x, y ), ⟨ϕq(x)ϕcl (y)⟩ = iθ (η − η′)ρ(x, y ), ⟨ϕcl (x)ϕq(y)⟩ = −iθ (η′ − η)ρ(x, y ), ⟨ϕq(x)ϕq(y)⟩ = 0 , (2.7) where the sign function sign C is implemented along the contour C, F (x, y ) = 12 ⟨{ ϕ(x), ϕ (y)}⟩ and ρ(x, y ) = i ⟨[ϕ(x), ϕ (y)] ⟩ are the Keldysh and spectral functions respectively . In the following 7discussion we will use the spatially Fouriér-transformed propagators: F (k|η, η ′) = ˆ ddxF (η, η ′, |x − y|)e−ik(x−y) = Re fp(η)f ∗ p (η′) ,ρ(k|η, η ′) = ˆ ddxρ(η, η ′, |x − y|)e−ik(x−y) = −2Im fp(η)f ∗ p (η′) , (2.8) because the state that we consider is spatially homogeneous. Also it is worth noting here that the commutation relation [ϕ(x), π (y)] = iδ (d)(x − y) with the canonical momentum π(x, t ) = p|g|g00 (t)∂tϕ(x, t ) implies the following property of the spectral function: ∂ηρ(k|η, η ′) η=η′ = −HD−2ηD−2, ∂η′ ρ(k|η, η ′) η=η′ = HD−2ηD−2, (2.9) while the causality requires ρ(k|η, η ) = ∂η∂η′ ρ(k|η, η ′) η=η′ = 0 . 2.2 Effective equation of motion The propagators (2.7) allow us to find perturbatively the Keldysh effective action Γeff [hcl , h q], which is a powerful tool to study dynamics of non-equilibrium systems [29–32]. To accomplish this, we extend the integration in (2.1) onto the contour C, change the fields according to (2.6), expand the functional integral over the matter fields in powers of hμν and calculate loop integrals using the propagators (2.7). The contributions we are interested in are as follows: Γeff = Scl + ++ . Figure 3: Effective action As we will see, all the diagrams on the fig.3 are important for the effective action to be gauge invariant in the order under consideration. Also there can be some additional counterterms δren Γ needed to cure the UV divergences in these loops – we will discuss them in the next section and show, that the first diagram on the fig.3 can be subtracted by the term δΛΓeff , which renormalizes the cosmological constant. The graviton equation of motion (EOM) follows from the effective action Γeff [hcl , h q] as δδh q Γeff [hcl , h q] hq =0 = 0 . (2.10) 8To derive these equations we need the following interacting parts in the action (2.1): ∆S = − ˆ dDxp|ˆg|hμν cl T cl −qμν − 12 ˆ dDxp|ˆg|hμν q T cl −cl μν + ˆ dDxp|ˆg|hμν cl Γμν |αβ hαβ q , (2.11) where T cl −qμν = −12 ˆgμν ˆgλω ∂λϕcl ∂ωϕq − m2ϕcl ϕq + ∂μϕcl ∂ν ϕq, (2.12) Γμν |αβ = 18 (ˆ gαβ ˆgμν − 2ˆ gμα ˆgνβ ) ˆgλω ∂λϕcl ∂ωϕcl − m2ϕcl 2 − 12 ˆgαβ ∂μϕcl ∂ν ϕcl + 12 ˆgμα ∂β ϕcl ∂ν ϕcl (2.13) and the same for T cl −cl μν replacing q → cl in (2.12). Note that the bare correlation funcions (2.7) contain theta-functions, while the bubble diagram on the fig.3 has derivatives over time in the vertices as they appear in the stress-energy tensor (2.12). 
Hence, there can be delta-functions in the bubble diagram 1, so we will collect these local contributions ∆Π loc μν |αβ to the total polarization operator along with the tadpole diagram into the one expression Πloc μν |αβ in what follows, while we will denote by Πbub μν |αβ the non-local contributions, where all the derivatives in vertices act only on the Keldysh and spectral functions of the propagators (2.7) in this diagram. Then the effective EOM in the momentum space over the d space-coordinates has the form: 116 πG \EOM μν |αβ hμν (k, η )−−12 ˆ ∞ η dη ′ HDη′D Πbub μν |αβ (k|η′, η ) hμν (k, η ′) + Π loc μν |αβ (η) hμν (k, η ) = 12 T cl −cl αβ , (2.14) where the “source”-term on the RHS corresponds to the first tadpole diagram on the fig.3 and Πloc μν |αβ (η) = −12∆Π loc μν |αβ (η) + Π tad μν |αβ (η) , (2.15) Πbub μν |αβ ∆Π loc μν |αβ = T cl −qμν + T cl −qνμ 2 T cl −cl μν + . (2.16) The operator \EOM μν |αβ in the equation (2.14) appears due to the Einstein-Hilbert part of the action (2.1). Namely, following [9,15], we split the metric perturbation onto the spiral components: h00 = 2Φ , h0k = ik kZ + ZTk , hkl = −2Ψ δkl − 2kkklE + i(kkW Tj + klW Ti ) + hT T kl , (2.17) 1 Note, that in the operator formalism time derivatives do not commute with the time-ordering operator: D T ∂t ˆA(t) ˆB(t′) E̸ = ∂t D T ˆA(t) ˆB(t′) E . However, if time derivatives appear in vertices, there additional non-covariant terms emerge in the interaction Hamiltonian, which restore the accordance with the functional-integral approach, where one can carry the time derivatives through the functional integral . 9where kkZTk = kkW Tk = kkhT T kl = 0 and hT T kk = 0 . We will work in the gauge h0k = 0 . In this gauge the linearized Einsein’s tensor G(1) μν = R(1) μν − 12 ˆgμν R(1) + ( D − 1) H2hμν in arbitrary dimension has the form: G(1) 00 = −(D − 1)( D − 2) 2η2 h00 + D − 22η ∂ηhkk + 12∂2 l hkk − 12∂l∂khkl ,G(1) 0i = −D − 22η ∂ih00 − 12∂η∂khki + 12∂η∂ihkk ,G(1) ij = −12∂i∂j h00 + 12∂i∂j hkk − 12∂2 η hij + D − 22η ∂ηhij + 12∂2 k hij −− 12 (∂i∂khkj + ∂j ∂khki ) + δij (D − 1)( D − 2) 2η2 h00 − D − 22η ∂ηh00 + 12∂2 k h00 ++ 12∂2 η hkk − D − 22η ∂ηhkk − 12∂2 l hkk + 12∂l∂khkl  , (2.18) which defines the action of the operator \EOM μν |αβ in the first line of (2.14). We will use the equations (2.14)–(2.18) to properly define the notion of the induced mass in the following sections. 2.3 Implications of de Sitter isometries In Bunch-Davies state, after the subtraction of the Λ-renormalization counterterm δΛΓeff from Γeff ,we are left with the equation of motion of the form (2.14), but with no “source”-term on the RHS and with renormalized local part of the polarizatrion operator. This equation is invariant under the gauge transformation in the zeroth order in the metric perturbations δξhμν = −ˆgμλ ˆ∇λξν −ˆgνλ ˆ∇λξμ.In order to obtain a general form of the linear equation bDμν |αβ hαβ = 0 , which respects both the dS isometry group SO (1 , D ) and gauge invariance, we will use the Lichnerowicz operator ∆L. It acts on the tensor, vector and scalar fields in the following way: ∆Lhμν = − ˆ□hμν − 2 ˆRμανβ hαβ + ˆRαμ hνα + ˆRαν hμα , ∆LVμ =  − ˆ□ − 2Λ D − 2  Vμ, ∆Lφ = − ˆ□φ, (2.19) where ˆRμανβ , ˆRμν and ˆ□ are Riemann tensor, Ricci tensor and the covariant Laplacian on the dS background correspondingly. The action of ∆L in dS commutes with the covariant derivatives, as explained e.g. in [23, 34, 35]. 
Then we can seek for the operator bDμν |αβ in the explicitly dS invariant form: bDμν |αβ hαβ = A(∆ L)hμν + 12B(∆ L) h ˆ∇μ ˆ∇λhνλ + ˆ∇ν ˆ∇λhμλ i ++C(∆ L) ˆ∇μ ˆ∇ν ˆ∇α ˆ∇β hαβ + D(∆ L) ˆ∇μ ˆ∇ν hαα + E(∆ L)ˆ gμν hαα + F (∆ L)ˆ gμν ˆ∇α ˆ∇β hαβ , (2.20) 10 where A, B, C, D, E, F are integro-differential operators, which can be expressed in terms of ∆L and its Green functions. Also we set D = F immediately due to the required symmetry under the switching of the pairs of indices (μν ) ↔ (αβ ).Below we show that the invariant one-loop corrected effective equation of motion can include only two independent operators, which we denote as bPtt μν |αβ , bPsμν |αβ . They are associated with the projectors onto the transverse traceless part of the graviton and onto the scalar mode of the graviton correspondingly. Namely, one can verify, using the explicit expressions given below in (2.24) and (2.25), that the operator \EOM μν |αβ from (2.14) can be written as \EOM μν |αβ ==  ∆L + 2 ( D − 1) H2  "bPtt μν |αβ − D − 2 D − 1(∆ L + ( D − 1) H2)2 (∆ L + DH 2) (∆ L + 2 ( D − 1) H2) bPsμν |αβ , (2.21) so that the effective linear EOM of the form bDμν |αβ hαβ = 0 is as follows: \EOM μν |αβ hαβ + A (∆ L) bPtt μν |αβ hαβ + E (∆ L) bPsμν |αβ hαβ = 0 . (2.22) Indeed, although we have 5 independent coefficients in (2.20), the requirement of gauge invari-ance implies three more constraints:  2A − ∆L + 4Λ D−2  B = 0 ,B + 2 D − 2C ∆L + 2Λ D−2  = 0 ,E − D ∆L + 2Λ D−2  = 0 , (2.23) so we are left with 2 independent coefficients and, therefore, two independent dS invariant and gauge invariant tensor structures, which act on the hμν . The first structure for the projection onto the transverse traceless part of the graviton htt μν = bPtt μν |αβ hαβ can be fixed by the two additional conditions A = 1 and ˆgμν htt μν = ˆ gμν bPtt μν |αβ hαβ = 0 . These additional constraints lead to the following set of the coefficients for this projector (we express the cosmological constant through the Hubble parameter): B = 2∆L + 2 ( D − 1) H2 , C = D−2 D−1 (∆ L + DH 2) (∆ L + 2 ( D − 1) H2),F = − 1 D − 11∆L + DH 2 , E = − 1 D − 1∆L + ( D − 1) H2 ∆L + DH 2 . (2.24) 11 The second independent operator bPsμν |αβ can be written in the following simple form: bPsμν |αβ = ˆgμν − ˆ∇μ ˆ∇ν −∆L − (D − 1) H2 ! ˆgαβ − ˆ∇α ˆ∇β −∆L − (D − 1) H2 ! . (2.25) A few comments must be given about the equation (2.22). First, we stress that A (∆ L) and E (∆ L) are actually integro-differential operators. Second, with the use of an intuition of flat space where ∆L ∼ k2 we can observe that the IR behaviour of A (∆ L) and E (∆ L) provides us with the coefficients mg and ϵgh in (1.1). Hence, when all the symmetries are respected during our operations, it suffices to calculate the effective mass, e.g., only for the sector of gravitational perturbations hT T μν to restore the whole “Fierz-Pauli term” in the induced gravity at large distances. Finally, the statements of this subsection are strictly working well exclusively for Bunch-Davies initial state of the matter and for Poincaré patch of dS, because in global dS the isometries are broken at the loop-level [1, 36]. 3 Effective action In this section we find the expression for the effective action Γeff . The analysis of the non-local part Πbub μν |αβ is given in the Appendix A. 
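Throughout this section the loop integrands are built out of the Keldysh and spectral functions of the Bunch-Davies modes (2.4). As an independent sanity check of the normalization used below, the following numerical sketch (our own illustration, with arbitrary test values of D, m, H, p and η) verifies the equal-time property (2.9), ∂_η ρ(p|η, η′)|_{η=η′} = −H^{D−2}η^{D−2}, for a complementary-series mode.

```python
import numpy as np
from scipy.special import hankel1

# Arbitrary test parameters (complementary series requires m < (D-1)/2).
D, m, H, p, eta = 4, 0.7, 1.0, 2.3, 0.9
nu = np.sqrt((D - 1)**2 / 4 - m**2)

def f(e):
    """Bunch-Davies mode function f_p(eta) for the complementary series, eq. (2.3)-(2.4)."""
    return H**((D - 2) / 2) * e**((D - 1) / 2) * np.sqrt(np.pi) / 2 * hankel1(nu, p * e)

def rho(e1, e2):
    """Spectral function rho(p|eta, eta') = -2 Im[ f_p(eta) f_p*(eta') ]."""
    return -2.0 * np.imag(f(e1) * np.conj(f(e2)))

d = 1e-6                                           # finite-difference step in eta
lhs = (rho(eta + d, eta) - rho(eta - d, eta)) / (2 * d)
rhs = -H**(D - 2) * eta**(D - 2)
print(lhs, rhs)                                    # the two numbers should agree closely
```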
In order to obtain the expression for the tadpole diagram we average the second-order term (2.13) over the rotationally invariant state and get Πtad 00 |00 = 1 H2η2 12 ˆ p ∂η∂η′ F η′=η − 18H4η4 ⟨L(η)⟩ =: 1 H2η2 π1(η), Πtad 00 |0k = Π tad 0i|kl = 0 , Πtad 00 |kl = − 18H4η4 ⟨L(η)⟩ δkl + 14H2η2 ˆ p " ∂η∂η′ F η′=η − 1 D − 1p2F δkl =: 1 H2η2 π2(η)δkl , Πtad 0i|0l = − 1 H2η2 π2(η)δil , Πtad ij |kl = 18  1 H4η4 ⟨L(η)⟩ + 1 H2η2 4 D − 1 ˆ p p2F  × (δij δkl − δik δjl − δil δjk ) = =: 1 H2η2 π3(η) ( δij δkl − δik δjl − δil δjk ) , (3.1) where ´ p = ´ dD−1p (2 π)D−1 , the Keldysh function F (p|η, η ′) (for brevity we drop the arguments in the expressions above under the integrals) is taken at coincident points η = η′ and we have introduced 12 the averaged Lagrangian ⟨L(η)⟩ = H2η2 ˆ p " ∂η∂η′ F η′=η −  p2 + m2 η2  F (p|η, η ) . (3.2) Now, considering the terms, which arise from (2.16) when time derivatives act on the theta-functions of the propagators (2.7), we obtain: ∆Π loc 00 |00 (η) = 121 H2η2 ˆ p ∂η∂η′ F η′=η , ∆Π loc 00 |0k (η) = ∆Π loc 0i|kl (η) = 0 , ∆Π loc 0i|0k (η) = 1 H2η2 12( D − 1) δik ˆ p p2F (p|η, η ), ∆Π loc 00 |kl (η) = 121 H2η2 δkl ˆ p ∂η∂η′ F η′=η , ∆Π loc ij |kl (η) = 121 H2η2 δij δkl ˆ p ∂η∂η′ F η′=η . (3.3) Finally, summing up all the local contributions (3.1) and (3.3) we find from (2.15): Πloc 00 |00 (η) = 1 H2η2 14 T cl −cl 00 , Πloc ij |00 (η) = − 1 H2η2 14 T cl −cl ij , Πloc 0i|0l (η) = − 1 H2η2 14 T cl −cl 00 δil , Πloc ij |kl (η) = 1 H2η2 ( π3(η) ( δij δkl − δik δjl − δil δjk ) − 14 ˆ p ∂η∂η′ F η′=η δij δkl ) . (3.4) Having the explicit expressions for all important parts of the effective action, we can write it down as follows (we omit the Einstein-Hilbert part): Γeff = −12 ˆ dDxHDηD hαβ q T cl −cl αβ − 12 ˆ ∞ 0 dη HDηD ˆ ∞ η dη ′ HDη′D ˆ k hμν cl (η′, k)Π bub μν |αβ (−k|η′, η )hαβ q (η, −k)+ + ˆ ∞ 0 dη HDηD ˆ k hμν cl (η, k)Π loc μν |αβ (η)hαβ q (η, −k). (3.5) Let us emphasize at this point that all the local contributions (2.15) and averaged lagrangian (3.2) contain the Keldysh function at coincident points, which is the UV-divergent quantity and requires an accurate regularization procedure, which must preserve the symmetries of the theory. Nevertheless, these terms are indispensable for gauge invariance. Indeed, one can check the gauge symmetry of this action, using the transformation in the zeroth and first orders in perturbation: δξhμν = −ˆgμλ ˆ∇λξν − hνβ ˆgαβ ˆgμλ ˆ∇λξα − ˆgμλ Γ(1) νλω ξω + μ ↔ ν , (3.6) where Γ(1) νλω are the first order corrections to the exact Christoffel symbols in the metric (2.2). The invariance in the order O(ξ) is guaranteed by the covariant conservation of the stress-energy tensor. In Appendix B we show how to make sure of gauge invariance in the order O (|| hq · ξ|| ) 13 with the expressions for the polarization operators given in this section. In the case of BD-state we must have T cl −cl μν = δλ ˆgμν 2, so that the first “source”–term in (3.5) is attributed to the renormalization of the cosmological constant: Λren = Λ + 8 πGδλ . More accurately, let us subtract the following Λ-renormalization counterterm from the one-loop answer (3.5), which we also write in terms of the fields (2.6) after Keldysh rotation: δΛΓeff = − ˆ dDxpg(x)δλ = − ˆ dDxpˆgδλ − 12 ˆ dDxpˆg(x)ˆ gμν hμν q × δλ −− 14 ˆ dDxpˆg (ˆ gμν ˆgαβ − ˆgμα ˆgνβ − ˆgμβ ˆgνα ) hμν cl hqαβ × δλ. 
(3.7) As we see, this renormalization affects only the local contributions from the loops and, if we set δλ = −12 ⟨L⟩ − H2η2 1 D − 1 ˆ p p2F (p|η, η ), (3.8) it eliminates the “source”-term and the most of the local parts (3.4): eΠloc 00 |00 (η) = 0 , eΠloc ij |00 (η) = 0 , eΠloc 0i|0l (η) = 0 , eΠloc ij |kl (η) = 1 H2η2 (eπ3(η) ( δij δkl − δik δjl − δil δjk ) − 14 ˆ p ∂η∂η′ F η′=η δij δkl ) , (3.9) where eπ3 = 141 D − 1 ˆ p p2F (p|η, η ). (3.10) 4 Effective mass of the tensor mode Having the expressions for the quantum corrections to the induced gravity action in terms of specific integrals, we can investigate the effective equation of motion in detail. In the case of the tensor sector h00 = h0i = 0 , hij = hT T ij , the only non-vanishing component of the eq. (2.14) reads (see Appendix A for the notations in the non-local part): ∇η∂ηhT T ij + k2hT T ij − 32 πG ˆ ∞ η dη ′ HD−2η′D−2 e5(k|η, η ′)hT T ij (k, η ′) + 64 πG eπ3(η)hT T ij (k, η ) = 0 . (4.1) We see that (4.1) is an integro-differential equation, so the notion of mass requires accuracy. Following the approach of , where the effective mass of photon in the systems out of the thermal equilibrium was introduced, we expand the integral-part of the eq. (4.1) in derivatives of 2 This statement is not trivial and can be seen explicitly only in the regularization schemes which preserves dS isometries, such as dimensional regularization or point-splitting method [37–40]. It is a separate interesting topic, that even in the thermal state the situation is much more subtle for the space-times with horizons . 14 hT T ij in time. Namely, if we denote Γbub (η, η ′) = −32 πG ˆ ∞ η′ dη ′′ HD−2η′′ D−2 e5(k|η, η ′′ ),∂η′ Γbub (η, η ′) = 32 πG 1 HD−2η′D−2 e5(k|η, η ′) (4.2) and then integrate (4.1) by parts, we arrive at  ∇η∂η + k2 − 32 πG ˆ ∞ η dη ′ HD−2η′D−2 e5(k|η, η ′) + 64 πG eπ3(η)  hT T ij (k, η )− + ˆ ∞ η dη ′Γbub (η, η ′) ∂η′ hT T ij (k, η ′) = 0 . (4.3) One can continue this procedure and expand the non-local part of the effective action through multiple time derivatives of hT T ij (k, η ). Then, for slowly varying field hT T ij (k, η ) one has the Klein-Gordon equation of type (2.5): ∇η∂ηhT T ij +  k2 + m2 T T (k, η ) η2  hT T ij ≃ 0. (4.4) The last equation allows us to define an effective mass for graviton as one does for non-equilibrium systems : m2 T T = −32 πG lim k→0 lim η→0 η2 × ˆ ∞ η dη ′ HD−2η′D−2 e5 (k|η, η ′) − 2eπ3 (η)  . (4.5) In general situation, the order of the limits in (4.5) is very important. In particular, in flat space another order leads to the immediate zero value for the Debye mass [7, 42]. We define the limits in the way they are commonly taken in condensed matter physics [26, 43], where this order is also physically approved. However, it can be easily seen that in our case the quantity m2 T T (k, η ) actually depends on the dS invariant variable kη , so that the only limit we need to take is the zero limit for physical momentum kη → 0.At first glance it may seem that the integration over time region from η to ∞ may bring some infra–red effects to the mass m2 T T and the local correction eπ3 just removes some ultraviolet singularities. However, the quantum mechanical perturbation theory (see and Appendix C) provides us with the formula ∂p2 F (p|η, η ) = −2 ˆ ∞ η dη ′ HD−2η′D−2 F (p|η, η ′)ρ(p|η, η ′), (4.6) which reduces the first term on the RHS of (4.5) to the similar local contribution as the second 15 one. 
Now we use that ∂p2 F (p|η, η ) = 12p ∂pF and directly find in the limit k → 0: ˆ ∞ η dη ′ HD−2η′D−2 e5 (k|η, η ′) = = 2 D(D − 2) ˆ ∞ η dη ′ HD−2η′D−2 ˆ p  p2 − (kp )2 k2 2 F (p|η, η ′)ρ(k − p|η, η ′) = k→0 = − 1 D(D − 2) ˆ p  p2 − (kp )2 k2 2 ∂p2 F (p|η, η ) = = − 1 D(D − 2) Ωd (2 π)d ˆ ∞ 0 dpp D+2  1 − 2 D − 1 + 3 D2 − 1  12p∂pF == 12Ωd (2 π)d(D − 1) ˆ ∞ 0 dpp DF (p|η, η ) = 121 D − 1 ˆ p p2F (p|η, η ), (4.7) where in the third line we have integrated over the angles and then by parts over the absolute value of the momentum. Eventually, we take the renormalized value eπ3 (3.10) and find from the definition (4.5) that the mass of the spin-2 metric perturbation vanishes: m2 T T = 0 . (4.8) As it was noted in the Introduction, we believe that there is a special reason why we have no mass generation for photon and graviton in dS, while it was proved that there can be mass of the spin-2 graviton [23, 24] in AdS 43. Namely, the mass of the gauge fields generates if there is a pole in the non-local part of the self-energy appears, which corresponds to the Goldstone boson as on the fig.1. The necessary condition for this is the presence of this Goldstone boson in the tensor product of the from D(E1, 0) ⊗ D(E2, 0) , where D(E, 0) is an infinite-dimensional, irreducible, positive-weight representation of the isometry group of the embedding space (UIR), which corresponds to the physical states of the scalar field theory. Here E and s correspond to the minimal energy and angular momentum (spin) of the given representation, such that other states in it are obtained by the action of the appropriate creation operators . In the case of AdS 4 the isometry group is SO (2 , 3) and we have the relations [22, 23]: D(E, s ) → D(s + 1 , s ) ⊕ D(s + 2 , s − 1) as E → s + 1 , (4.9) D(E1, 0) ⊗ D(E2, 0) = ∞ X l=0 ∞ X n=0 D(E1 + E2 + l + 2 n, l ). (4.10) The first line (4.9) shows that the field of spin-s in the massless limit E → s+1 decomposes onto the massless field from D(s+1 , s ) of the same spin and the field of spin-(s−1) . This decomposition tells 3Strictly speaking, quantum field theory in global AdS is ill defined and suffers from unusual ultraviolet phenom-ena [44, 45]. However, it serves us with a useful playing background to investigate properties of QFT in different space-times with high number of symmetries. 16 us that the field becomes massive after swallowing the boson from the representation D(s+2 , s −1) in AdS 4. In the cases of photon ( s = 1 ) and graviton ( s = 2 ) the corresponding Goldstone bosons are from D(3 , 0) and D(4 , 1) . The second relation (4.10) shows that the states of these gauge bosons may appear in the non-local contribution to the self-energy of either photon or graviton in the case of conformally coupled scalar, which corresponds to the choice E = 1 or E = 2 . In the works [23,24] it was shown that the Goldstone vector is indeed present in the graviton’s self energy for certain boundary conditions, hence the mass of the graviton generates in AdS 4. In order to complete the considerations of the mass of the photon in from this point of view, we investigate the scalar QED in AdS 4 with curvature radius L in Appendix E. In contrast to AdS, there are no such Goldstone bosons in the tensor product of UIRs for dS isometry group, hence the absence of the photon’s and graviton’s mass is expected. However, dS is neither stationary nor stable background, so it would be rather naive to proceed this way and one’d better adopt the non-equilibrium approach, that we use in this paper. 
Moreover, it is argued in many works [5, 20, 28, 46, 47] that there are a lot of IR-peculiarities in loop corrections in dS, which may affect the result significantly. 5 Discussion on the scalar sector of gravity 5.1 General remarks Treating the problem in transverse-traceless gauge, we apparently can get some information only about the coefficient A (∆ L) in (2.22). In order to say something about E (∆ L) one has to include into consideration different modes of metric’s perturbation (2.17). For instance, if we naively set gμν = e2σ ˆgμν we obtain from (2.22): Γeff ∝ ˆ " σ ∆L + DH 2 σ + 32 πG D − 1 D − 2 × σ ·  ∆L + DH 2 ∆L + ( D − 1) H2 2 E (∆ L) · σ . (5.1) Let us make a few observations. First, we see that E (∆ L) determines a shift to “mass” DH 2 which conformal parameter σ already has. Second, it is crucial that both the kinetic and mass terms enter the effective action with the ghost-like sign. On the classical level in the presence of classical matter this leads to Jeans’ instability [9, 15] – it is not surprising, however, that the classical equations of motion don’t have non-trivial solutions without matter (the other constraints of (2.18) are not satisfied): in this case the field σ is non-propagating. On the other hand, nobody exactly knows what happens in loop-modified gravity, because, e.g. in the naive massive gravity , the scalar ghost-like degrees of freedom become dynamical. To answer these questions, more thorough investigation of Γeff is necessary. To find the terms which contribute to E (∆ L), it is convenient to express the bubble diagram 17 in terms of commutator of stress-energy tensors in the operator formalism: Πbub μν |αβ (k|η, η ′) = −18 BD Tμν (k, η ) , T αβ (−k, η ′) BD . (5.2) Then, using the relations derived in Appendix D, we can write for the correction to σ’s mass-like term, which is given by the trace of self-energy over μ = ν and α = β indices (before taking the limit kη → 0): δm 2 σ ∝ ˆ ∞ η dη ′ HD−2η′D−2 D (D − 2) Π 00 |00 (k|η, η ′) + Π ∆T |∆T (k|η, η ′) − 2Π loc ii |kk , (5.3) where Π∆T |∆T denotes the commutator of the form (5.2) for ∆T defined in Appendix D.1. The quantity (5.3) is UV-divergent even in flat space, not mentioning the problems with proper renor-malization in dS 4 [48–51]. This is not the end of the story. After the accurate subtractions of required counterterms, we still may have some “spurious” divergences left in the quantity δm 2 σ as a consequence of definition of the induced mass as a coefficient in the expansion of the effective action in time derivatives of hμν . This is similar to the Taylor expansion of the function e−x2 = 1 −x2 +. . . ,where the whole function is convergent in the limit x → ∞ , while each term in the expansion is divergent. The appearance of such peculiarities can be seen if one considers closely the first term in (5.3) in the limit k → 0 in x-space: D (D − 2) ˆ ∞ η dη ′ HD−2η′D−2 Π00 |00 (0|η, η ′) ∝∝ ˆ ∞ η dη ′ η′D−2 BD  T00 (0, η ) , ˆ x T00 (x, η ′)  BD . (5.4) Normally, the integral of T00 is the conserved charge (total energy), which commutes with any operator. For example, the covariant conservation of the electrical current ˆ∇μJμ = ∂ηJ0 + Dη J0 − ∂iJi = 0 implies that the integral of J0 over the position space is J0 = 1 ηD × const. Hence, in any commutator of the form (5.4) with electrical charge the dependence of the charge on time factors out and the result is exactly zero, which leads to the vanishing Debye mass in dS . 
In contrast, the covariant conservation condition for the stress-energy tensor includes additional term ∆T (D.2), which makes the analogy with the electric charge inapplicable. This conclusion establishes the fact that the energy (at least defined as the integral of T00 ) isn’t conserved in non-stationary background and non-trivially commutes with other operators. Therefore, although in 4Actually, being a short-distance local phenomenon, UV renormalization must be the same for any gravitational background at least at the leading order. Nevertheless, in order to obtain correct values for IR quantities, one should preserve symmetries of the theory at each step of the calculation, which, for example, forbids the naive UV cut-off regularization scheme in de Sitter. 18 flat space one has identically zero contribution from Π00 |00 to the δm 2 σ , in dS we obtain: δm 2 σ ∼ ˆ ∞ 1 dτ τ ˆ dD−1ξ (2 π)D−1 Im {t00 (ξ) t00 (ξτ )} + . . . , t00 (ξ) ≡ D − 12 hν (ξ) + ξh ′ ν (ξ) 2 ξ2 + m2 h2 ν (ξ),τ = η′ η , ξ = ηp. (5.5) Taking the mode functions (2.4), one can verify that in D > 2 the are divergences in the UV region. Therefore, in view of the fact that these divergences are not universal, we believe that they contribute to some well-defined parts of Γeff after the resummation, but in this case we should adopt more accurate approaches, such as Källén-Lehmann decomposition for the correlators in dS . We leave the treatment of the issues discussed in this subsection for future work, and below we investigate the simplest case of two-dimensional space-time. 5.2 Two-dimensional space-time It is well-known that in 2D the Einstein-Hilbert action is topological and the only independent component of metric’s perturbation is the Weyl parameter σ. Moreover, the kinetic term comes form the loops and also has ghost like sign in the effective action, if we take into account only the matter field with positive central charge c [53, 54] – below we insert the mass term to the action with ghost-like sign as well, such as it appears in the KG equation of motion for σ as a standard mass, assuming c > 0. In addition, it can be seen that the tadpole diagram in 2D contributes only to the cosmological constant’s renormalization, and the loop diagram is given by the commutator of the form (5.2) of two stress-energy tensor’s traces. Hence, because the only covariant quantities which constitute to the effective action is the covariant laplacian □ and Ricci scalar R in two dimensions, when we immerse the massive scalar field in the curved background, we expect Γeff to have the following form (in the second order in σ): Γeff = ˆ  δm 2 σ R 1 □2 R + c 96 π R 1 □R + . . .  , (5.6) where the ellipsis stand, first, for further expansion of the effective action in the powers of laplacian and, second, for less trivial terms such as Mabuchi action [12,13,55,56], which arises as a modifica-tion of the Liouville action for non-conformal matter interacting with the metric on a Riemannian manifold 5. This action satisfies cocycle condition, is bounded from below, and affects the calcula-tion of correlators in modified two-dimensional quantum gravity . Furthermore, it has a natural 5Although Mabuchi action is well-defined on Riemann surfaces of fixed area with boundary, appearance of its parts in the induced gravity seems to be universal. 19 generalization to higher dimensions, making the study of this contribution a separate interesting task. 
We write δm 2 σ in (5.6) instead of m2 σ , because such terms as non-perturbative Mabuchi action certainly lead to contributions to the quadratic part of Γeff being formally expanded in σ [13, 55]: SMabuchi [gμν , ˆgμν ] = ˆ p|ˆg|  1 πA σe 2σ + . . .  ,Sgrav [gμν , ˆgμν ] ≡ 12 log det − □ + M 2 det − ˆ□ + M 2 == c 96 π ˆ R 1 □R + M 2A 4 SMabuchi [gμν , ˆgμν ] + O M 4 , (5.7) where A is the area of the Riemann surface (here we assume the euclidean signature and zero genus), on which this action is defined. Indeed, for the definition of m2 σ as in the previous section, we find in flat space for the plane-wave harmonics fp(t) = 1 √24 √p2+M 2 e−i √p2+M 2t (we measure mass in the units of H here): m2 σ = 1 H2 4M 4 π ˆ t −∞ dt ′ ˆ ∞ 0 dp Im fp(t′)fp(t′)f ∗ p (t)f ∗ p (t) = m2 2π . (5.8) For the dS background we have (here h(ξ) denotes the harmonic function (2.4) eather for comple-mentary or for principal series): m2 σ = −4m4 π ˆ ∞ 1 dτ τ ˆ ∞ 0 dξ Im h2 (ξ) h∗2 (ξτ ) . (5.9) This integral is convergent and can be evaluated numerically (see fig.4). It is interesting, that the result in dS considerably deviates for small masses of scalar field, while the answer for the static Riemann manifold (5.7) is supposed to be the leading contribution in this region. This makes us believe that the IR behaviour of light scalar field in dS leads to amplification of δm 2 σ in the long-wave expansion of the effective action (5.6). Meanwhile, we see on the fig.4 that for large mass of the scalar field the value of m2 σ approaches the flat-space value, which is not surprising: very heavy fields decouple and don’t feel the effects of the background. Let us confirm this analytically for fermionic matter. Fermionic fields in 2D Consider the standard kinetic term for Dirac fermions in two dimensions (we will follow the article ): Sferm = ˆ d2xp|g| i 2 ¯ψγ μ∇μψ − ∇ μ ¯ψγ μψ − m ¯ψψ , (5.10) where γμ are gamma-matrices in curved space, and the action of covariant derivatives are deter-mined by spin-connection, see for details. In flat space the gamma matrices are chosen in the 20 m2 m2 σ Figure 4: The dependence of m2 σ on the squared mass of the scalar field. The orange line depicts the value in flat space-time. All masses are measured in the units of Hubble parameter. form γ0 = 0 11 0 ! ; γ1 = 0 1 −1 0 ! . (5.11) Then we quantize the fermionic field with canonical anticommutation relation conditions: ψ(t, x) = ˆ dp 2π eipx hbbpψ(+) p (t) + bd†−pψ(−) p (t) i , nbbp,bb† q o = n bdp, bd† q o = 2 πδ (p − q), (5.12) where we denote by ψ(+) p (t), ψ (−) p (t) the positive- and negative-frequency solutions of Dirac equa-tion, determined by (5.10). Then, using the Schwinger-Keldysh technique for fermions [32, 59], we obtain for the effective mass of σ: m2 σ = 8 m2 ˆ ∞−∞ dt ′p|g(t′)| ˆ ∞−∞ dp 2π Im  ¯ψ(+) (p|t) ψ(−) (p|t) ¯ψ(−) (p|t′) ψ(+) (p|t′) . (5.13) In the case of flat space ψ(+) (p|t) = e−i √p2+M 2t 01 ! , ψ (−) (p|t) = ei √p2+M 2t 10 ! we encounter logarithmic divergence: m2 σ = 4m2 π ˆ ∞ 0 dp pp2 + M 2 , (5.14) which can be connected to the additional non-local terms, which appear in deformation of Mabuchi action for fermions . Nevertheless, as we are interested in the difference δm 2 σ ≡ m2 σ dS − m2 σ Mink , let us proceed naively and just subtract the UV region in (5.13). As the UV behaviour of the harmonics is the same independently of the curvature, one will have the same logarithmic 21 divergence. 
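The two flat-space statements quoted above, (5.8) for the scalar and the logarithmic divergence (5.14) for the fermion, can be illustrated with a short symbolic sketch. The reduction of (5.8) to the momentum integral below (after performing the time integral with the usual iε prescription) is our own intermediate step, not spelled out in the text.

```python
import sympy as sp

p, M, Lam = sp.symbols('p M Lambda', positive=True)

# Scalar case, eq. (5.8): the remaining momentum integral gives m^2/(2*pi).
scalar = sp.integrate(M**4 / (2 * sp.pi) * (p**2 + M**2)**sp.Rational(-3, 2), (p, 0, sp.oo))
print(scalar)                    # -> M**2/(2*pi)

# Fermionic case, eq. (5.14): the corresponding integral grows logarithmically
# with the cutoff Lambda, illustrating the divergence discussed above.
fermion = sp.integrate(1 / sp.sqrt(p**2 + M**2), (p, 0, Lam))
print(sp.simplify(fermion))      # asinh(Lambda/M), i.e. ~ log(2*Lambda/M) for large Lambda
```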
In dS the mode functions are as follows: ψ(+) (k|η) = s H 2|k| −imW − 12 ,im (2 i|k|η) ,W12 ,im (2 i|k|η)  , k > 0  W12 ,im (2 i|k|η) , −imW − 12 ,im (2 i|k|η)  , k < 0 ψ(−) (k|η) = s H 2|k|  W ∗ 12,im (2 i|k|η) , −imW ∗− 12 ,im (2 i|k|η)  , k > 0 −imW ∗− 12 ,im (2 i|k|η) , −W ∗ 12,im (2 i|k|η)  , k < 0 (5.15) where Wκ,μ (z) is a Whittaker function. Hence, after the subtraction of the UV region in (5.13) the only contributing range of integration over physical momentum is 12τ < |k|η < 12 , τ = η′ η , so that with the use of asymptotics of Whittaker function we can estimate: δm 2 σ ≃≃ 32 m2 π ˆ ∞ 1 dτ τ ˆ 1212τ Im  Re 2 Γ(2 im )Γ( im )1(2 iξ )im  m2Re 2  Γ(2 im ) im Γ( im )1(2 iξ )im  e−2iξτ  . (5.16) The integral (5.16) can be calculated and we find δm 2 σ ≃ 4  1 − 2Si (1) π  m2 4m sinh ( πm )sinh (2 πm ), (5.17) where Si (x) is the sine integral function. The qualitative dependence of δm 2 σ on the mass of the matter field is the same as for the scalar field, including the sign of this quantity (see fig.5), which confirms our discussion above. Note also that this answer is determined by the IR behaviour of the mode functions. As we already noted, we define the mass term for σ in the effective action with the wrong sign, so that the answers depicted on figures 4 and 5 describe the positive mass in the Klein-Gordon equation, because the central charge for the scalar and fermion matter is 1 and 12 correspondingly: □σ + 96 πc δm 2 σ σ + . . . = 0 . (5.18) Let us stress here again that despite the expansion over small momenta, the ellipsis here account for non-perturbative contributions such as the Mabuchi action (5.7), whose significance at different energy scales is yet to be understood in more detail. Our conjecture is that δm 2 σ = m2 σ dS −m2 σ Mink 22 (a) (b) m2 δm 2 σ δm 2 σ m2 Figure 5: The dependence of δm 2 σon the squared mass of the matter fields. The first plot (a) depicts the numerical value for scalar field and the second plot (b) is analytical approximation for fermions. describes the response of quantum matter to the curved background at low energy scale. Further, if we treat gravity at quantum level in two dimensions, we should change c → c − 26 in (5.18), including into account the central charge of ghosts . In this case the kinetic term in the effective action can have the correct sign, and the δm 2 σ corresponds to the negative squared mass in the KG equation (5.18), effectively describing a distortion of the initial background. However, the more accurate understanding of the evolution of metric perturbations requires further study of the effective action. 6 Conclusion The paper discusses the concept of massive terms in the effective gravitational action, which are induced by quantum fields of matter in dS background. Although it is known that the initial state of quantum field theory in dS must decay due to particle production [6, 42]: Out In ̸ = 1 ,the comprehensive understanding of the physical consequences of this phenomena requires the consideration of the response of specific systems to external influences. Furthermore, the evolution of metric perturbations in the presence of classical stress-energy tensor can tell us a lot about the physics in the early Universe [9, 15], while it is also well understood in physical community that loop corrections in dS can lead to drastic modifications of tree-level results [2, 3, 17, 18, 20]. 
It turns out that spin-2 part of metric perturbations, associated with the graviton, does not acquire a mass in one-loop effective action. Although the notion of mass in an expanding space-time enables us to work in non-equilibrium framework and include some counterterms to the effective action, we argue that this result is regularisation scheme-independent. This is due to the auxiliary argument, that there is no “exchange” of the Goldstone vector, which gives mass to the graviton, because it cannot be produced by the stress-energy tensor of free field theory out 23 of the initial Bunch-Davies vacuum. It would be very interesting to generalize our observations to self-interacting theories, where we must take into account loop corrections . Additionally, on physical grounds, we must investigate a much larger scope of various initial states. Indeed, Bunch-Davies vacuum preserves the highest number of symmetries of the problem, while in any real situation, most symmetries are broken. For instance, in the context of the early Universe, the most natural initial state is thermal state with non-canonical temperature [41,60,61]. In this case, the meanvalue of stress-energy tensor isn’t proportional to the metric : Tμν : ̸ = δλg μν [41, 60], hence one should expect additional contributions from tadpole diagrams on the fig.3. Furthermore, in global dS the isometry group is broken due to divergent IR behaviour of the propagators at past infinity [1, 36], making one to introduce a Cauchy surface at finite time. Our analysis in sections 2–4 can be extended to more general geometries as FLRW and global dS, so we also leave such questions for further investigations. Finally, in Section 5 we have outlined the problems we encounter when considering the scalar sector of gravitational perturbations at the one-loop level. The question of “scalar ghost” in gravity remains unresolved [11,62,63], but the consideration of loop modified gravity at classical level will allow us to improve predictions about the behavior of matter in the Universe at early stages incorporating the evolution of scalar metric perturbations. In the simplest case of two dimensions, we observe that light matter fields exhibit a considerable response to an expanding background, because these fields also have significantly different infrared behavior. Therefore, we believe that further study of this issue using various approaches will provide answers about the stability and behavior of matter in expanding gravitational backgrounds. Acknowledgments We would like to acknowledge discussions with D. V. Diakonov, K. V. Bazarov and D. A. Trunin. We are grateful to Emil Akhmedov for valuable discussions, careful reading of the paper, correcting the text and support. Especially we thank Fedor K. Popov for initiating this work, fruitful ideas and careful reading of the paper. This work was supported by the grant from the Foundation for the Advancement of Theoretical Physics and Mathematics “BASIS”. A Bubble diagram In this subsection we analyze the expression for the non-local part Πbub μν |αβ of the bubble diagram. 
First of all, let us implement the O(D − 1) rotational symmetry of dS, rotational symmetry of the chosen quantum state and the property Πbub μν |αβ (k|η, η ′) = −Πbub αβ |μν (−k|η′, η ) to write the general 24 expression for the polarization operator: Πbub 00 |00 = a, Πbub 00 |0k = ikk k2 b, Πbub 00 |kl = f ′ 1 δ⊥ kl f ′ 2 kkkl k4 = f1δ⊥ kl f2 kkkl k4 + f  δkl − (D − 1) kkkl k2  , Πbub 0i|0k = c1δ⊥ ik c2 kikk k4 , Πbub 0i|kl = i ki k2 d′ 1 δ⊥ kl id ′ 2 kikkkl k6 + id 3  δ⊥ il kk k2 + δ⊥ ik kl k2  ≡≡ i ki k2 d1δ⊥ kl id 2 kikkkl k6 + id  δkl − (D − 1) kkkl k2  id 3  δ⊥ il kk k2 + δ⊥ ik kl k2  , Πbub ij |kl = e1δ⊥ ij δ⊥ kl +  −e2 kikj k4 δ⊥ kl e2 kkkl k4 δ⊥ ij  e3 kikj kkkl k8 ++ e4  δ⊥ ik kj kl k4 + δ⊥ jk kikl k4 + δ⊥ il kj kk k4 + δ⊥ jl kikk k4  e5 δ⊥ ik δ⊥ jl δ⊥ il δ⊥ jk , (A.1) where all the coefficient functions a, b, f ’s , c ’s , d ’s , e ’s depend on η, η ′, k and we have also used line to denote switching variables, e.g., a(k|η, η ′) def = a(k|η′, η ); δ⊥ kl def = δkl − kk kl k2 . In (A.1) we encounter 14 coefficients, but they are not independent because of the Ward identities ˆ∇αΠbub μν |αβ = 0 , which can be written explicitly as follows: ∇η′ Πbub μν |00 − (−ik k)Π bub μν |0k 1 η′ Πbub μν |00 − Πbub μν |kk  = 0 , ∇η′ Πbub μν |0k − (−ik l)Π bub μν |kl = 0 . (A.2) The solution of (A.2) can be chosen in the following form: a = a1 − a2 + m2 η′2 a3, b = ∇η′ a − D − 2 η′  a1 + a2 − m2 η′2 a3  2 η′ m2 η′2 a3,c2 = −∇ η′ b + D − 2 η′ b + D − 1 η′ b1 + D − 2 η′ b2,f1 = a1 + a2 − m2 η′2 a3, f2 = k2  a1 − a2 − m2 η′2 a3  ,f = 1 k2 1 D − 2  f2 + 1 k2 ∇η′ b  (hence f ′ 1 = f1 + f, f ′ 2 = −∇ η′ b) ,d1 = b + b1 + b2, d2 = k2 b + b1  , d 3 = ∇η′ c1,d = 1 k2 1 D − 2 (d2 − ∇ η′ c2) (hence d′ 1 = d1 + d, d ′ 2 = ∇η′ c2) ,e2 = −∇ η′ d1, e3 = −∇ η′ d2, e4 = −∇ η′ d3,e1 = − 2 D − 2e5 − 1 k2 1 D − 2e2 − 1 D − 2f ′ 1 − η′ ∇η′ f ′ 1 d′ 1  . (A.3) 25 Then we are left with 7 independent coefficient functions a1, a 2, a 3, b 1, b 2, c 1, e 5. The direct calcu-lation of the stress-energy correlators (2.16) gives the following expressions: a1 = 12 ˆ p  ∂η∂η′ F ∂ η∂η′ ρ −  kp − p2 − m2 η2  ∂η′ F ∂ η′ ρ  ,a2 = 12 ˆ p kp − p2  ∂ηF ∂ ηρ −  kp − p2 − m2 η2  F ρ  ,a3 = 12 ˆ p  ∂ηF ∂ ηρ −  kp − p2 − m2 η2  F ρ  ,b1 = −2m2 η′2 × 12 ˆ p  (k2 − kp )∂ηF ρ + kp F ∂ ηρ  ,b2 = 2 × 12 ˆ p kp − p2  (k2 − kp )∂ηF ρ + kp F ∂ ηρ  ,c1 = 1 D − 2 × 12 ˆ p  p2 − (kp )2 k2   ∂η∂η′ F ρ + F ∂ η∂η′ ρ − ∂η′ F ∂ ηρ − ∂ηF ∂ η′ ρ  ,e5 = 4 D(D − 2) × 12 ˆ p  p2 − (kp )2 k2 2 F ρ, (A.4) where for brevity we’ve denoted F = F (p|η, η ′), ρ = ρ(k − p|η, η ′) and ´ p = ´ dD−1p (2 π)D−1 . Thus, equations (A.1), (A.3), (A.4) give the whole contribution of the non-local part to the bubble diagram through several loop integrals. B Gauge invariance of the effective action Let us pick up only the contributions to δξΓeff , which contain h00 q in the order O (|| hq · ξ|| ). For the “source”-term in (3.5) we take the contributions with hαβ in (3.6) and get: −12δξ hαβ q T cl −cl αβ  == h00 q ∂ηξ0 T cl −cl 00 − 2 η h00 q ξ0 T cl −cl 00 12∂ηh00 q ξ0 T cl −cl 00 12∂ih00 q ξi T cl −cl 00 , (B.1) where we also use that in the rotationally invariant state one has T cl −cl 0i = 0 , ∂ iF (x, x ) = 0 , etc. The variation of the local parts of (3.5) in the required order is δξ hμν cl (x)Π loc μν |αβ (η)hαβ q (x) = −12  ∂ηξ0 − 1 η ξ0  T cl −cl 00 +  ∂iξj − 1 η δji ξ0  T cl −cl ij  h00 q . 
(B.2) Finally, we must take into account terms with the derivatives of the theta-functions in (3.5) after the gauge variation and integration by parts, while the derivatives of the polarization operator in this term is vanishing due to the Ward identities (A.2). Again, we use the commutation relations (2.9) 26 to find the bubble contribution at coincident points and obtain in the order under consideration: −12δξ ˆ hqΠbub hcl = 12 ˆ dDxHDηD h00 q  ∂iξi∂η∂η′ F (x, x ) + ∂iξj ∂xi ∂yj F (x, x )  , (B.3) (where the derivatives over η′ and yj are referred to the second argument of the Keldysh function). Therefore, the whole variation in this order vanishes: δξΓeff = −12 ˆ dDxHDηD h00 q (x)ξ0(x) ˆ∇μ T cl −cl μ0 = 0 . (B.4) In the similar way one can check that all other components of δξΓeff vanish. C Integral relation for Green functions Consider the bare action for the scalar field (2.1) and the interaction term in the form: δp2 Sint = − ˆ p ˆ dη HD−2ηD−2 δp 2ϕcl (p, η )ϕq(−p, η ) = −δp 2 ˆ dDxpˆgϕ cl (x)ϕq(x). (C.1) Then the first-order correction to the exact Keldysh propagator is given by δp2 F (p|η, η ) = −iδp 2 ˆ dD−1xdD−1z dη ′ HD−2η′D−2 ⟨ϕcl (x, η )ϕcl (z)ϕq(z)ϕcl (y, η )⟩ e−ip(x−y) == −δp 22 ˆ ∞ η dη ′ HD−2η′D−2 ρ(p|η, η ′)F (p|η, η ′). (C.2) On the other hand, we have δp2 F (p|η, η ) = δp 2∂p2 F (p|η, η ) by construction and the relation (4.6) follows immediately. D Relations for the stress-energy tensor From the very definition (2.12) we have the relation Tii = − (D − 1) T00 + ( D − 1) ∂ηϕ∂ ηϕ + ∂iϕ∂ iϕ ≡ − (D − 1) T00 + ∆ T. (D.1) In addition, the covariant conservation condition ˆ∇μTμν = 0 reads ˆ∇μTμν = ∂ηT00 + 2 η T00 + ∂iTi0 − 1 η ∆T = 0 . (D.2) 27 In momentum space in the limit k → 0 the last equation gives ∆T (k = 0, η ) = [ η∂ η + 2] T00 (0|η) . (D.3) In x-space language the limit k → 0 is equivalent to the integral over x. Then first of all we can derive the equality (below we write the stress-energy tensor in position space and use translational symmetry of the correlator): ˆ ∞ η dη ′ η′D−2 BD  T00 (0, η ) , ˆ x ∆T (x, η ′)  BD == ˆ ∞ η dη ′ η′D−2 (η′∂η′ + 2) BD  T00 (0, η ) , ˆ x T00 (x, η ′)  BD == ( D − 1) ˆ ∞ η dη ′ η′D−2 BD  T00 (0, η ) , ˆ x T00 (x, η ′)  BD == ( D − 1) ˆ ∞ η dη ′ η′D−2 Π00 |00 (0|η, η ′) , (D.4) where in the last line we have integrated by parts. We also can use the invariance of the correlation functions in x-space under transformations x → ax, η → aη (indeed, the correlators depend on the dS-invariant variable Z = η2+η′2−(x−y)2 2ηη ′ [6, 64]) and introduce new variables y = ax, ω = η2 η′ to obtain: ˆ ∞ η dη ′ η′D−2 BD ˆ x ∆T (x, η ) , T 00 (0, η ′)  BD == ˆ ∞ η dη ′ η′D−2 a4 aD−1 BD ˆ y ∆T (y, aη ) , T 00 (0, aη ′)  BD == 1 ηD ˆ η 0 ωdω BD ˆ y ∆T (y, ω ) , T 00 (0, η )  BD == 1 ηD ˆ η 0 ωdω (ω∂ ω + 2) BD ˆ y T00 (y, ω ) , T 00 (0, η )  BD ≡ 0, (D.5) where we again integrate by parts in the last line and in the second lime we take a = ηη′ . With the use of the derived equations we find ˆ ∞ η dη ′ η′D−2 Π00 |ii (0|η, η ′) = 0 , ˆ ∞ η dη ′ η′D−2 Πii |00 (0|η, η ′) = − (D − 1) ˆ ∞ η dη ′ η′D−2 Π00 |00 . (D.6) E The mass of photon in AdS 4 It is convenient for us to treat AdS 4 from the beginning as a hyperboloid, embedded into the five-dimensional pseudo-Euclidean space with coordinates XA: ηAB XAXB = L2, where ηAB = diag (1 , 1, −1, −1, −1) . 
Following the approach developed in the papers [45, 65], we first write the 28 bare action in the form: S[BA, ϕ ] = ˆ dμ X  −14FAB F AB + ∂0 A ieB A  ϕ 2 − m2 |ϕ|2  , (E.1) where dμ X = 2 Lδ (X2 −L2)d5X is the AdS-invariant measure, ∂0 A = ∂A − 1 X2 XAXI ∂I is the tangent derivative, FAB = ( ∂0 A − XA) BB − (∂0 B − XB ) BA and BA is a vector potential, which is considered to be tangent to the hyperboloid: XABA = 0 , X A ∈ AdS 4. The vector potential in the AdS 4 are obtained by a pull–back of BA. Let us impose additional transversal condition ∂ABAt = 0 , so that the free equation of motion for the vector–potential simplifies to the ordinary wave–equation: X2∂2 − (X · ∂)2 − 3 ( X · ∂) − 2 BAt = 0 . (E.2) This is the wave-equation for the fields in the massless representation D(2 , 1) [21, 22]. The corre-sponding gauge variation δφBA = X2∂Aφ − XAXI ∂I φ is determined by the scalar φ ∈ D(3 , 0) .We can construct the projector onto the transversal vector-potential BAt by the following gauge transformation: BAt = ˆPAB t BB = BA − X2∂Aφ − XAXI ∂I φ ,φ = 1 X2∂2 − (X · ∂)2 − 3 ( X · ∂)∂C BC . (E.3) Connecting the ordinary mass-term in (E.1) with the parameter E in four-dimensional case m2L2 = E(3 − E) we write the equation for the Wightman function: 1 − Z2 ∂2 Z − 4Z∂ Z + E(3 − E) WE (Z) = 0 , (E.4) where Z(X, Y ) = XM YM L2 is invariant variable. The solution for E̸ = 1 , 2 is [23, 64]: WE (Z) = 14π2L2 Γ( E)Γ( E − 1) Γ(2 E − 2) 1 ZE 1F2  E, E − 1; 2 E − 2; 1 Z  . (E.5) For conformally coupled scalar E = 1 , 2 the solution looks like Wc(Z) = 14π2L2  α 1 Z2 − 1 + β ZZ2 − 1  , (E.6) where the choice of α, β corresponds to a different boundary conditions. The Feynman propagator can be obtained by the introduction of iϵ -prescription: GF (X, Y ) = W (Z + iϵ ). The kernel of the inverse operator in (E.3) multiplied by L2 is equal to iW 3(Z + iϵ ).As long as the AdS 4 background is stationary and stable we might use as well the ordinary 29 Feynman diagrammatic technique in this case. Note that in the IR region we have W3(Z) = 112 π2L2 1 Z3 + O  1 Z4  , Z → ∞ . (E.7) Therefore, if there is actually a non-zero mass of the photon, we will find in its self-energy the term, proportional to the projector (E.3), and the following contribution in the effective action: δΓeff = m2 ph 2 ˆ dμ X BA ˆPAB t BB = m2 ph 2 ˆ dμ X  BABA + X2∂ABA 1 X2∂2 − (X · ∂)2 − 3 ( X · ∂)∂C BC  ∼∼ m2 ph 2 ˆ dμ X dμ Y ∂ABA(X) i 12 π2L2 1 Z3 ∂C BC (Y ), (E.8) where in the last line we kept only the non-local part of its expression in the IR region. Now we straightforwardly integrate out the scalar fields with conformal mass in (E.1) and omit the terms, proportional to BABA as they don’t lead to the structure of the form (E.8): δΓeff = −ie2 2 ˆ dμ X dμ Y BA(X)BC (Y )YAXC L4 h 2W ′ c W ∗′ c − WcW ∗′′ c − W ′′ c W ∗ c i == −ie2 2 ˆ dμ X dμ Y ∂ABA(X)∂C BC (Y )f (Z), (E.9) where we have used that YA = L2∂AZ, X C = L2∂C Z and introduced the function f (Z), which is the solution of f ′′ = 2 W ′ c W ∗′ c − WcW ∗′′ c − W ′′ c W ∗ c , decaying at infinity. The solution of this differential equation indeed contains the term, proportional to 1 Z3 : f (Z) ≃ − 116 π4L4 23Re (α∗β) 1 Z3 + . . . . (E.10) Substituting this result into the effective action (E.9) and comparing the coefficients with (E.8) we find m2 ph = e2 2π2L2 Re (α∗β) , (E.11) which actually doesn’t vanish when α and β both are non-zero. 
This result in AdS space at the simple example of the scalar QED confirms the discussions of the papers [23, 24], that such peculiarities of AdS as discrete spectrum of levels can lead to a presence of the Goldstone bosons as a bound states created by the electric current or stress-energy tensor even for free field theory. Indeed, in the case of QED we see the propagator of the boson from D(3 , 0) in the equations (E.8), (E.9), which is the analogy of the pole at k2 = 0 in the Higgs mechanism in the Standard Model with the exchange of the massless field on fig.1. 30 References Dmitry Krotov and Alexander M. Polyakov. Infrared Sensitivity of Unstable Vacua. Nucl. Phys. B , 849:410–432, 2011. E. T. Akhmedov, U. Moschella, K. E. Pavlenko, and F. K. Popov. Infrared dynamics of massive scalars from the complementary series in de Sitter space. Phys. Rev. D , 96(2):025002, 2017. E. T. Akhmedov, U. Moschella, and F. K. Popov. Characters of different secular effects in various patches of de Sitter space. Phys. Rev. D , 99(8):086009, 2019. Paul R. Anderson and Emil Mottola. Instability of global de Sitter space to particle creation. Phys. Rev. D , 89:104038, 2014. Paul R. Anderson and Emil Mottola. Quantum vacuum instability of “eternal” de Sitter space. Phys. Rev. D , 89:104039, 2014. E. T. Akhmedov, K. V. Bazarov, D. V. Diakonov, U. Moschella, F. K. Popov, and C. Schubert. Propagators and Gaussian effective actions in various patches of de Sitter space. Phys. Rev. D, 100(10):105011, 2019. Fedor K. Popov. Debye mass in de Sitter space. JHEP , 06:033, 2018. N. D. Birrell and P. C. W. Davies. Quantum Fields in Curved Space . Cambridge Monographs on Mathematical Physics. Cambridge Univ. Press, Cambridge, UK, 2 1984. Dmitry S Gorbunov and Valery A Rubakov. Introduction to the theory of the early universe: cosmological perturbations and inflationary theory . World Scientific, Singapore, 2011. M. Fierz and W. Pauli. On relativistic wave equations for particles of arbitrary spin in an electromagnetic field. Proc. Roy. Soc. Lond. A , 173:211–232, 1939. Claudia de Rham. Massive Gravity. Living Rev. Rel. , 17:7, 2014. Frank Ferrari, Semyon Klevtsov, and Steve Zelditch. Gravitational Actions in Two Dimensions and the Mabuchi Functional. Nucl. Phys. B , 859:341–369, 2012. Corinne de Lacroix de Lavalette. Two-dimensional quantum gravity coupled to non-conformal matter . Theses, Université Pierre et Marie Curie - Paris VI, September 2017. J. H. Jeans. The Stability of a Spherical Nebula. Philosophical Transactions of the Royal Society of London Series A , 199:1–53, January 1902. 31 D.S. Gorbunov and V.A. Rubakov. Introduction to the Theory of the Early Universe: Hot Big Bang Theory . G - Reference,Information and Interdisciplinary Subjects Series. World Scientific, 2011. A. M. Polyakov. Infrared instability of the de Sitter space. 9 2012. E. T. Akhmedov and Ph. Burda. Solution of the Dyson–Schwinger equation on de Sitter background in IR limit. Phys. Rev. D , 86:044031, 2012. E. T. Akhmedov and F. K. Popov. A few more comments on secularly growing loop corrections in strong electric fields. JHEP , 09:085, 2015. E. T. Akhmedov, F. K. Popov, and V. M. Slepukhin. Infrared dynamics of the massive ϕ4theory on de Sitter space. Phys. Rev. D , 88:024021, 2013. E.T. Akhmedov. Lecture notes on interacting quantum fields in de Sitter space. Int. J. Mod. Phys. D , 23:1430001, 2014. H. Nicolai. REPRESENTATIONS OF SUPERSYMMETRY IN ANTI-DE SITTER SPACE. In Spring School on Supergravity and Supersymmetry , 4 1984. 
Christian Fronsdal. Singletons and Massless, Integral Spin Fields on de Sitter Space (Elemen-tary Particles in a Curved Space. 7. Phys. Rev. D , 20:848–856, 1979. M. Porrati. Higgs phenomenon for 4-D gravity in anti-de Sitter space. JHEP , 04:058, 2002. M. Porrati. Higgs phenomenon for the graviton in ADS space. Mod. Phys. Lett. A , 18:1793– 1802, 2003. Joao Penedones, Kamran Salehi Vaziri, and Zimo Sun. Hilbert space of Quantum Field Theory in de Sitter spacetime. 1 2023. D. Boyanovsky, H. J. de Vega, and M. Simionato. Nonequilibrium quantum plasmas in scalar QED: Photon production, magnetic and Debye masses and conductivity. Phys. Rev. D, 61:085007, 2000. Maud Jaccard, Michele Maggiore, and Ermis Mitsou. Bardeen variables and hidden gauge symmetries in linearized massive gravity. Phys. Rev. D , 87(4):044017, 2013. Julien Serreau and Renaud Parentani. Nonperturbative resummation of de Sitter infrared logarithms in the large-N limit. Phys. Rev. D , 87:085012, 2013. Juergen Berges. Introduction to nonequilibrium quantum field theory. AIP Conf. Proc. ,739(1):3–62, 2004. 32 Zi-Liang Wang and Wen-Yuan Ai. Dissipation of oscillating scalar backgrounds in an FLRW universe. JHEP , 11:075, 2022. Wen-Yuan Ai, Marco Drewes, Dražen Glavan, and Jan Hajer. Oscillating scalar dissipating in a medium. JHEP , 11:160, 2021. Alex Kamenev. Field Theory of Non-Equilibrium Systems . Cambridge University Press, 2011. C. Itzykson and J.B. Zuber. Quantum Field Theory . Dover Books on Physics. Dover Publi-cations, 2012. André Lichnerowicz. Republication of: Propagators, commutators and anti-commutators in general relativity. General Relativity and Gravitation , 50:1–44, 2018. G.W. Gibbons and M.J. Perry. Quantizing gravitational instantons. Nuclear Physics B ,146(1):90–108, 1978. E. T. Akhmedov. Physical meaning and consequences of the loop infrared divergences in global de Sitter space. Phys. Rev. D , 87:044049, 2013. T. Prokopec, O. Tornkvist, and R. P. Woodard. One loop vacuum polarization in a locally de Sitter background. Annals Phys. , 303:251–274, 2003. P. C. W. Davies and S. A. Fulling. Quantum vacuum energy in two dimensional space-times. Proceedings of the Royal Society of London Series A , 354(1676):59–77, April 1977. S. M. Christensen. Vacuum expectation value of the stress tensor in an arbitrary curved background: The covariant point-separation method. Phys. Rev. D , 14:2490–2501, Nov 1976. T. S. Bunch and P. C. W. Davies. Quantum Field Theory in de Sitter Space: Renormalization by Point Splitting. Proc. Roy. Soc. Lond. A , 360:117–134, 1978. K. V. Bazarov. Notes on peculiarities of quantum fields in space–times with horizons. Class. Quant. Grav. , 39(21):217001, 2022. Alexander M. Polyakov and Fedor K. Popov. Kronecker anomalies and gravitational striction. arXiv: High Energy Physics - Theory , 3 2022. Alexei M. Tsvelik. Quantum Field Theory in Condensed Matter Physics . Cambridge University Press, 2 edition, 2003. Emil T. Akhmedov, Ugo Moschella, and Fedor K. Popov. Ultraviolet phenomena in AdS self-interacting quantum field theory. JHEP , 03:183, 2018. 33 E. T. Akhmedov, A. A. Artemev, and I. V. Kochergin. Interacting quantum fields in various charts of anti–de Sitter spacetime. Phys. Rev. D , 103(4):045009, 2021. E. T. Akhmedov. IR divergences and kinetic equation in de Sitter space. Poincare patch: Principal series. JHEP , 01:066, 2012. Tomislav Prokopec. Symmetry breaking and the Goldstone theorem in de Sitter space. JCAP ,12:023, 2012. Gerard ’t Hooft and M. J. G. Veltman. 
One loop divergencies in the theory of gravitation. Ann. Inst. H. Poincare Phys. Theor. A , 20:69–94, 1974. Markus B. Fröb. Fully renormalized stress tensor correlator in flat space. Phys. Rev. D ,88:045011, 2013. Sohyun Park and R. P. Woodard. Scalar Contribution to the Graviton Self-Energy during Inflation. Phys. Rev. D , 83:084049, 2011. L. H. Ford and R. P. Woodard. Stress tensor correlators in the Schwinger-Keldysh formalism. Class. Quant. Grav. , 22:1637–1647, 2005. Manuel Loparco, Joao Penedones, Kamran Salehi Vaziri, and Zimo Sun. The Källén-Lehmann representation in de Sitter spacetime. 5 2023. Alexander M. Polyakov. Quantum Geometry of Bosonic Strings. Phys. Lett. B , 103:207–210, 1981. Harold Erbin. Notes on 2d quantum gravity and liouville theory. 2015. Adel Bilal and Corinne de Lacroix. 2D gravitational Mabuchi action on Riemann surfaces with boundaries. JHEP , 11:154, 2017. Adel Bilal, Corinne de Lacroix, and Harold Erbin. Effective gravitational action for 2D massive fermions. JHEP , 11:165, 2021. Adel Bilal, Frank Ferrari, and Semyon Klevtsov. 2D Quantum Gravity at One Loop with Liouville and Mabuchi Actions. Nucl. Phys. B , 880:203–224, 2014. Clément Stahl, Eckhard Strobel, and She-Sheng Xue. Fermionic current and Schwinger effect in de Sitter spacetime. Phys. Rev. D , 93(2):025004, 2016. E. T. Akhmedov, E. N. Lanina, and D. A. Trunin. Quantization in background scalar fields. Phys. Rev. D , 101(2):025005, 2020. D. V. Diakonov and K. V. Bazarov. Thermal loops in the accelerating frame. 1 2023. 34 Elba Alonso-Monsalve and David I. Kaiser. Debye Screening of Non-Abelian Plasmas in Curved Spacetimes. 9 2023. V. I. Zakharov. Linearized gravitation theory and the graviton mass. JETP Lett. , 12:312, 1970. Kurt Hinterbichler. Theoretical Aspects of Massive Gravity. Rev. Mod. Phys. , 84:671–710, 2012. Bruce Allen and Theodore Jacobson. Vector Two Point Functions in Maximally Symmetric Spaces. Commun. Math. Phys. , 103:669, 1986. H. Janssen and C. Dullemond. Propagators for Massive Vector Fields in Anti-de Sitter Space-time Using Stueckelberg’s Lagrangian. J. Math. Phys. , 28:1023, 1987. 35
CHAPTER 2. ATMOSPHERIC PRESSURE

2.1 MEASURING ATMOSPHERIC PRESSURE

The atmospheric pressure is the weight exerted by the overhead atmosphere on a unit area of surface. It can be measured with a mercury barometer, consisting of a long glass tube full of mercury inverted over a pool of mercury:

Figure 2-1 Mercury barometer

When the tube is inverted over the pool, mercury flows out of the tube, creating a vacuum in the head space, and stabilizes at an equilibrium height h over the surface of the pool. This equilibrium requires that the pressure exerted on the mercury at two points on the horizontal surface of the pool, A (inside the tube) and B (outside the tube), be equal. The pressure PA at point A is that of the mercury column overhead, while the pressure PB at point B is that of the atmosphere overhead. We obtain PA from measurement of h:

PA = ρHg g h    (2.1)

where ρHg = 13.6 g cm^-3 is the density of mercury and g = 9.8 m s^-2 is the acceleration of gravity. The mean value of h measured at sea level is 76.0 cm, and the corresponding atmospheric pressure is 1.013x10^5 kg m^-1 s^-2 in SI units. The SI pressure unit is called the Pascal (Pa); 1 Pa = 1 kg m^-1 s^-2. Customary pressure units are the atmosphere (atm) (1 atm = 1.013x10^5 Pa), the bar (b) (1 b = 1x10^5 Pa), the millibar (mb) (1 mb = 100 Pa), and the torr (1 torr = 1 mm Hg = 134 Pa). The use of millibars is slowly giving way to the equivalent SI unit of hectoPascals (hPa). The mean atmospheric pressure at sea level is given equivalently as P = 1.013x10^5 Pa = 1013 hPa = 1013 mb = 1 atm = 760 torr.

2.2 MASS OF THE ATMOSPHERE

The global mean pressure at the surface of the Earth is PS = 984 hPa, slightly less than the mean sea-level pressure because of the elevation of land. We deduce the total mass of the atmosphere ma:

ma = 4πR^2 PS / g = 5.2x10^18 kg    (2.2)

where R = 6400 km is the radius of the Earth. The total number of moles of air in the atmosphere is Na = ma/Ma = 1.8x10^20 moles.

Exercise 2-1. Atmospheric CO2 concentrations have increased from 280 ppmv in preindustrial times to 365 ppmv today. What is the corresponding increase in the mass of atmospheric carbon? Assume CO2 to be well mixed in the atmosphere.

Answer. We need to relate the mixing ratio of CO2 to the corresponding mass of carbon in the atmosphere. We use the definition of the mixing ratio from equation (1.3),

CCO2 = nCO2/na = NC/Na = (Ma/MC)(mC/ma)

where NC and Na are the total number of moles of carbon (as CO2) and air in the atmosphere, and mC and ma are the corresponding total atmospheric masses. The second equality reflects the assumption that the CO2 mixing ratio is uniform throughout the atmosphere, and the third equality reflects the relationship N = m/M. The change ∆mC in the mass of carbon in the atmosphere since preindustrial times can then be related to the change ∆CCO2 in the mixing ratio of CO2. Again, always use SI units when doing numerical calculations (this is your last reminder!):

∆mC = ma (MC/Ma) ∆CCO2 = 5.2x10^18 x (12x10^-3 / 29x10^-3) x (365x10^-6 - 280x10^-6) = 1.8x10^14 kg = 180 billion tons!
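The arithmetic in Sections 2.1–2.2 and in Exercise 2-1 is easy to reproduce. The short Python sketch below is not part of the original chapter: it simply re-evaluates equations (2.1) and (2.2) and the Exercise 2-1 carbon estimate using the rounded constants quoted above, with variable names of our own choosing.

```python
# Quick numerical check of Sections 2.1-2.2 and Exercise 2-1.
# All constants are the rounded textbook values quoted above;
# variable names are ours, chosen for readability.
import math

rho_hg = 13.6e3      # density of mercury, kg m^-3
g = 9.8              # acceleration of gravity, m s^-2
h = 0.760            # barometer column height at sea level, m

P_sea_level = rho_hg * g * h          # eq. (2.1), Pa
print(f"Sea-level pressure: {P_sea_level:.3e} Pa")    # ~1.013e5 Pa

P_surface = 984e2    # global mean surface pressure, Pa (984 hPa)
R_earth = 6.4e6      # Earth radius, m

m_air = 4 * math.pi * R_earth**2 * P_surface / g      # eq. (2.2), kg
print(f"Mass of atmosphere: {m_air:.2e} kg")          # ~5.2e18 kg

# Exercise 2-1: increase in atmospheric carbon mass from the CO2 rise.
M_air, M_C = 29e-3, 12e-3        # molar masses, kg mol^-1
dC_co2 = 365e-6 - 280e-6         # change in CO2 mixing ratio, mol/mol
dm_C = m_air * (M_C / M_air) * dC_co2
print(f"Added carbon: {dm_C:.2e} kg")                 # ~1.8e14 kg
```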
2.3 VERTICAL PROFILES OF PRESSURE AND TEMPERATURE

Figure 2-2 shows typical vertical profiles of pressure and temperature observed in the atmosphere. Pressure decreases exponentially with altitude. The fraction of total atmospheric weight located above altitude z is P(z)/P(0). At 80 km altitude the atmospheric pressure is down to 0.01 hPa, meaning that 99.999% of the atmosphere is below that altitude. You see that the atmosphere is of relatively thin vertical extent. Astronomer Fred Hoyle once said, "Outer space is not far at all; it's only one hour away by car if your car could go straight up!"

Figure 2-2 Mean pressure and temperature vs. altitude at 30°N, March

Atmospheric scientists partition the atmosphere vertically into domains separated by reversals of the temperature gradient, as shown in Figure 2-2. The troposphere extends from the surface to 8-18 km altitude depending on latitude and season. It is characterized by a decrease of temperature with altitude which can be explained simply though not quite correctly by solar heating of the surface (we will come back to this issue in chapters 4 and 7). The stratosphere extends from the top of the troposphere (the tropopause) to about 50 km altitude (the stratopause) and is characterized by an increase of temperature with altitude due to absorption of solar radiation by the ozone layer (problem 1.3). In the mesosphere, above the ozone layer, the temperature decreases again with altitude. The mesosphere extends up to 80 km (mesopause) above which lies the thermosphere where temperatures increase again with altitude due to absorption of strong UV solar radiation by N2 and O2. The troposphere and stratosphere account together for 99.9% of total atmospheric mass and are the domains of main interest from an environmental perspective.

Exercise 2-2 What fraction of total atmospheric mass at 30°N is in the troposphere? in the stratosphere? Use the data from Figure 2-2.

Answer. The troposphere contains all of atmospheric mass except for the fraction P(tropopause)/P(surface) that lies above the tropopause. From Figure 2-2 we read P(tropopause) = 100 hPa, P(surface) = 1000 hPa. The fraction Ftrop of total atmospheric mass in the troposphere is thus

Ftrop = 1 - P(tropopause)/P(0) = 0.90

The troposphere accounts for 90% of total atmospheric mass at 30°N (85% globally). The fraction Fstrat of total atmospheric mass in the stratosphere is given by the fraction above the tropopause, P(tropopause)/P(surface), minus the fraction above the stratopause, P(stratopause)/P(surface). From Figure 2-2 we read P(stratopause) = 0.9 hPa, so that

Fstrat = [P(tropopause) - P(stratopause)]/P(surface) = 0.099

The stratosphere thus contains almost all the atmospheric mass above the troposphere. The mesosphere contains only about 0.1% of total atmospheric mass.

2.4 BAROMETRIC LAW

We will examine the factors controlling the vertical profile of atmospheric temperature in chapters 4 and 7. We focus here on explaining the vertical profile of pressure. Consider an elementary slab of atmosphere (thickness dz, horizontal area A) at altitude z:

Figure 2-3 Vertical forces acting on an elementary slab of atmosphere

The atmosphere exerts an upward pressure force P(z)A on the bottom of the slab and a downward pressure force P(z+dz)A on the top of the slab; the net force, (P(z)-P(z+dz))A, is called the pressure-gradient force. Since P(z) > P(z+dz), the pressure-gradient force is directed upwards.
For the slab to be in equilibrium, its weight must balance the pressure-gradient force:

ρa g A dz = [P(z) - P(z+dz)] A    (2.3)

Rearranging yields

[P(z+dz) - P(z)]/dz = -ρa g    (2.4)

The left hand side is dP/dz by definition. Therefore

dP/dz = -ρa g    (2.5)

Now, from the ideal gas law,

ρa = P Ma/(R T)    (2.6)

where Ma is the molecular weight of air and T is the temperature. Substituting (2.6) into (2.5) yields:

dP/P = -(Ma g/(R T)) dz    (2.7)

We now make the simplifying assumption that T is constant with altitude; as shown in Figure 2-2, T varies by only 20% below 80 km. We then integrate (2.7) to obtain

ln P(z) - ln P(0) = -(Ma g/(R T)) z    (2.8)

which is equivalent to

P(z) = P(0) exp[-(Ma g/(R T)) z]    (2.9)

Equation (2.9) is called the barometric law. It is convenient to define a scale height H for the atmosphere:

H = R T/(Ma g)    (2.10)

leading to a compact form of the Barometric Law:

P(z) = P(0) e^(-z/H)    (2.11)

For a mean atmospheric temperature T = 250 K the scale height is H = 7.4 km. The barometric law explains the observed exponential dependence of P on z in Figure 2-2; from equation (2.11), a plot of z vs. ln P yields a straight line with slope -H (check out that the slope in Figure 2-2 is indeed close to -7.4 km). The small fluctuations in slope in Figure 2-2 are caused by variations of temperature with altitude which we neglected in our derivation.

The vertical dependence of the air density can be similarly formulated. From (2.6), ρa and P are linearly related if T is assumed constant, so that

ρa(z) = ρa(0) e^(-z/H)    (2.12)

A similar equation applies to the air number density na. For every H rise in altitude, the pressure and density of air drop by a factor e = 2.7; thus H provides a convenient measure of the thickness of the atmosphere.

In calculating the scale height from (2.10) we assumed that air behaves as a homogeneous gas of molecular weight Ma = 29 g mol^-1. Dalton's law stipulates that each component of the air mixture must behave as if it were alone in the atmosphere. One might then expect different components to have different scale heights determined by their molecular weight. In particular, considering the difference in molecular weight between N2 and O2, one might expect the O2 mixing ratio to decrease with altitude. However, gravitational separation of the air mixture takes place by molecular diffusion, which is considerably slower than turbulent vertical mixing of air for altitudes below 100 km (problem 4.9). Turbulent mixing thus maintains a homogeneous lower atmosphere. Only above 100 km does significant gravitational separation of gases begin to take place, with lighter gases being enriched at higher altitudes. During the debate over the harmful effects of chlorofluorocarbons (CFCs) on stratospheric ozone, some not-so-reputable scientists claimed that CFCs could not possibly reach the stratosphere because of their high molecular weights and hence low scale heights. In reality, turbulent mixing of air ensures that CFC mixing ratios in air entering the stratosphere are essentially the same as those in surface air.

Exercise 2-3 The cruising altitudes of subsonic and supersonic aircraft are 12 km and 20 km respectively. What is the relative difference in air density between these two altitudes?

Answer.
Apply (2.12) with z1 = 12 km, z2 = 20 km, H = 7.4 km:

ρ(z2)/ρ(z1) = e^(-z2/H) / e^(-z1/H) = e^(-(z2 - z1)/H) = 0.34

The air density at 20 km is only a third of that at 12 km. The high speed of supersonic aircraft is made possible by the reduced air resistance at 20 km.

2.5 THE SEA-BREEZE CIRCULATION

An illustration of the Barometric Law is the sea-breeze circulation commonly observed at the beach on summer days (Figure 2-4). Consider a coastline with initially the same atmospheric temperatures and pressures over land (L) and over sea (S). Assume that there is initially no wind. In summer during the day the land surface is heated to a higher temperature than the sea. This difference is due in part to the larger heat capacity of the sea, and in part to the consumption of heat by evaporation of water.

Figure 2-4 The sea-breeze circulation. (a) Initial state: equal T, P over land and sea (TL = TS, PL = PS). (b) Sunny day: land heats more than sea (TL > TS) ⇒ HL > HS. (c) High-altitude flow from land to sea ⇒ PL(0) < PS(0) while PL(z) > PS(z) aloft ⇒ reverse surface flow from sea to land, forming a circulation cell.

As long as there is no flow of air between land and sea, the total air columns over each region remain the same so that at the surface PL(0) = PS(0). However, the higher temperature over land results in a larger atmospheric scale height over land (HL > HS), so that above the surface PL(z) > PS(z) (Figure 2-4). This pressure difference causes the air to flow from land to sea, decreasing the mass of the air column over the land; consequently, at the surface, PL(0) < PS(0) and the wind blows from sea to land (the familiar "sea breeze"). Compensating vertical motions result in the circulation cell shown in Figure 2-4. This cell typically extends ~10 km horizontally across the coastline and ~1 km vertically. At night a reverse circulation is frequently observed (the land breeze) as the land cools faster than the sea.
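For readers who want to check the barometric-law numbers above, here is a short Python sketch. It is not part of the original chapter; it evaluates the scale height of equation (2.10) and the Exercise 2-3 density ratio, using the rounded constants quoted in the text (function and variable names are ours).

```python
# Minimal sketch of the barometric-law results quoted above
# (scale height, Exercise 2-3); variable and function names are ours.
import math

R = 8.31         # gas constant, J mol^-1 K^-1
M_air = 29e-3    # molecular weight of air, kg mol^-1
g = 9.8          # acceleration of gravity, m s^-2

def scale_height(T):
    """Scale height H = R*T/(Ma*g), eq. (2.10), in metres."""
    return R * T / (M_air * g)

print(f"H(250 K) = {scale_height(250.0)/1e3:.1f} km")
# ~7.3 km with these rounded constants; the text quotes 7.4 km.

# Exercise 2-3: density ratio between the 20 km and 12 km cruise altitudes.
# With T constant, density follows the same exponential as pressure (eq. 2.12),
# so rho(z2)/rho(z1) = exp(-(z2 - z1)/H). Use the text's H = 7.4 km.
H = 7.4e3
ratio = math.exp(-(20e3 - 12e3) / H)
print(f"rho(20 km)/rho(12 km) = {ratio:.2f}")   # ~0.34
```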
The proof of the $l^2$ Decoupling Conjecture | Annals of Mathematics
===============

The proof of the $l^2$ Decoupling Conjecture
Pages 351-389 from Volume 182 (2015), Issue 1, by Jean Bourgain and Ciprian Demeter

Abstract

We prove the $l^2$ Decoupling Conjecture for compact hypersurfaces with positive definite second fundamental form and also for the cone. This has a wide range of important consequences. One of them is the validity of the Discrete Restriction Conjecture, which implies the full range of expected L^p_{x,t} Strichartz estimates for both the rational and (up to N^ε losses) the irrational torus. Another one is an improvement in the range for the discrete restriction theorem for lattice points on the sphere. Various applications to Additive Combinatorics, Incidence Geometry and Number Theory are also discussed. Our argument relies on the interplay between linear and multilinear restriction theory.

Keywords: Strichartz estimates, additive energy, discrete restriction estimates

Mathematical Subject Classification — Primary 2000: 11L03; Secondary: 42A16, 42A25, 52C35

MR 3374964; zbMATH 06456013

Milestones: Received 16 April 2014; Revised 19 August 2014; Accepted 3 October 2014

Authors: Jean Bourgain, School of Mathematics, Institute for Advanced Study, Einstein Drive, Princeton, NJ 08540; Ciprian Demeter, Department of Mathematics, Indiana University, 831 East Third St., Bloomington, IN 47405
What is Rake in Poker? And Should Rake Impact Your Poker Strategy?

What is Rake?

Rake refers to the scaled commission taken by a casino to pay for its operating costs. Rake is collected in tournaments as part of the buy-in cost and in cash games by the dealer dropping a percentage of each pot. If you've ever played cash game poker in a casino, you've probably seen the dealer taking chips out of the pot during most hands — those chips are the casino's rake. Just to state the obvious: more rake is NOT better.

How is Rake Collected?

There are 3 common methods used by cardrooms to collect rake (ordered from most to least common):

1. Pot Rake
The most common type of collection, pot rake is generally 2.5% to 10% of the pot in each hand, usually up to a predetermined maximum amount. Some card rooms, however, take a set amount of rake from the pot regardless of pot size. Some cardrooms do not take any rake until the flop is dealt, so if you raise preflop and take down the blinds, you win the whole pot. This is called "no flop, no drop".

2. Time Collection
Time collection (also "timed rake" or "table charge") is a set fee collected (typically) every half-hour during the game. This form of rake is collected in one of two ways:
Player time: A set amount is collected from each player.
Time pot: A set amount is collected from the first pot over a certain amount.
Time rakes are generally reserved for higher limit games ($10–$20 and above).

3. Dead Drop
A set amount of rake is placed on the dealer button each hand by the player in that position, which the dealer collects before any cards are dealt.

Note: The rest of this article is a fairly advanced explanation of how to adjust to rake. If you'd like to read up on more fundamental aspects of poker strategy, check out 10 Quick Poker Tips That Will Help Your Game.

How Should Rake Influence Your Decisions?

Simply put, higher rake lowers your EV (Expected Value). How does this affect your decisions? In a nutshell, higher rake forces you to play a tighter style of poker. This means that all those marginal postflop calls become folds. Moreover, this has an effect on preflop ranges, since those marginal open-raises, calls, and 3-bets depended on those marginal value bets, bluffs and calls to be profitable. Consequently, you have to play tighter preflop and postflop.

What Situations Are Most Impacted by Poker Rake?

There are two common spots that require particularly big adjustments in high-rake games: calling from the big blind and 3-betting. We will focus on just these two spots moving forward.

Big Blind Defending Ranges

In this section, we will go through the process of building a solid big blind defending range against any raise size.
You can then copy this process in order to build your own ranges based on the rake structure in your game.

Let's run through an example hand assuming a $5 capped rake that is taken when the flop is dealt. (This is a typical rake structure for live card rooms, unfortunately.)

Local Casino $1/$2. 9-Handed. Effective Stacks $200.
Hero is in the BB with two cards.
5 folds. CO raises to $8. 2 folds. Hero…?

Let's figure out what Hero's defending range should look like. The process starts with the pot odds that he is getting (learn how to calculate pot odds here). Hero would need to call $6 to play for a pot of $12 (CO's $8 raise + the dead blinds $3 + Hero's $6 call – $5 rake):

Pot odds = $6 / $12 = 0.5 = 50% raw equity needed

Since sometimes Hero will be forced to fold before the river or fold what is the best hand, he won't actually get to realize all of this raw equity. This is important to keep in mind when defending — depending on the specific hand, you may need more or a bit less than 50% equity.

A Quick Word on Equity Realization

Every hand realizes equity differently. Hands that are strong, connected, and/or suited tend to realize the most equity — think AA, AQo, or JTs. These types of hands oftentimes over-realize their equity — in the example above, they could call profitably with less than 50% equity. Disconnected and/or offsuit hands tend to realize the least equity — think A2o or Q7o. These types of hands almost always under-realize their equity — in the example above, they'd need more than 50% equity to call profitably.

Position and player skill also matter when it comes to equity realization. In position players realize more equity than out of position players. A veteran pro will realize more equity than an inexperienced player because the pro will play better postflop. So, if your hand is a borderline call in terms of raw equity, you should probably:
Call if you have a strong hand with good playability or if you feel that you have a significant edge postflop.
Fold if you have a hand with poor playability or are unsure about your edge against your opponent.
If neither option feels right with your particular hand, go for a 3-bet. You'll gain experience faster that way.

With that in mind, let's see how all the possible hands in Hero's range fare against the CO's estimated raising range. (The numbers under each hand represent that hand's equity versus the other range: on the left is CO's opening range, and on the right you can see how much equity each hand has against CO's opening range.)

Before I show you which hands Hero should defend, let's figure out how much equity Hero would need to defend if there were no rake:

Pot odds = $6 / ($8 (CO's bet) + $9 (our call + the dead blinds))
Pot odds = $6 / $17 = 0.35 -> 35% raw equity needed

The difference is staggering! 50% compared to 35%. Let's take a look at how different Hero's defending range is with and without rake (I cut out the borderline and unplayable hands that likely won't realize enough equity). The numbers under each hand represent that hand's equity versus the cutoff's raising range.

You can do this exercise on your own and change the different variables, such as the opener's raise size and range, to familiarize yourself with proper big blind defense ranges.

3-Bet Defending Ranges

In this section, we will go through the process of building optimal 3-bet calling ranges based on your opponent's range and his raise size.
As in the previous section, you can copy this process to build your own strategies based on the different variables that you encounter (opponent's range, raise size, poker rake amount, etc.). The sample hand that we will use takes place at the same casino, and we will take a look at both the raked and rakeless situations.

Local Casino $1/$2. 9-Handed. Effective Stacks $200.
Hero is in the CO with two cards.
5 folds. Hero raises to $6. BU 3-bets to $18. 2 folds. Hero…?

Here is the Upswing Lab-recommended open-raising range for live games from the cutoff: Default cutoff raising range from the Upswing Lab (Red = Raise, Pink = Optional Raise, Blue = Fold). In this example, we will assume the cutoff raises with all of the optional hands.

Let's assume that the button uses the 3-betting range recommended by the Lab for live games: Default button vs cutoff range from the Upswing Lab (Red = 3-Bet, Pink = Optional 3-Bet, Orange = 3-Bet or Call, Green = Call, Blue = Fold). We'll assume the button 3-bets only the red and orange hands.

Note: Want our full library of preflop range charts? Get 259 preflop charts for online cash games, live cash games, and tournaments when you join the Upswing Lab.

The process is very similar to the one we used in the previous section. We will first calculate Hero's pot odds:

Pot odds = $12 (how much we need to call) / ($12 (our call) + $6 (our raise) + $18 (our opponent's 3-bet) + $3 (dead blinds) – $5 (rake))
Pot odds = $12 / $34 = 0.35 = 35% raw equity needed

If there was no rake, the calculation would look like this:

Pot odds = $12 / ($18 + $18 + $3)
Pot odds = $12 / $39 = 0.30 = 30% raw equity needed

You can see that the equity difference is not as drastic here. This is because the rake is the same amount ($5), but the pot is larger. In the previous section $5 was taken from a $17 pot (29.4% of the pot), but $5 is being taken from $37 (13.5% of the pot) in this case. Let's see how our range should look in each case (again, I cut out the borderline and unplayable hands that likely won't realize enough equity).
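If you want to redo these pot-odds numbers for your own stakes and rake structure, here is a minimal Python sketch. It is not from the article; the function name and the break-even formulation are simply our own phrasing of the two calculations above, and the raw-equity threshold it prints still has to be adjusted for equity realization as discussed earlier.

```python
# Raw-equity threshold for calling, with and without rake.
# Sketch only: names are ours; the numbers below are the article's examples.

def required_equity(to_call, pot_before_call, rake=0.0):
    """Break-even raw equity = call / (pot after our call - rake)."""
    final_pot = pot_before_call + to_call - rake
    return to_call / final_pot

# Big blind defense example: CO raises to $8, blinds $1 + $2, we call $6.
bb_with_rake = required_equity(to_call=6, pot_before_call=8 + 3, rake=5)
bb_no_rake   = required_equity(to_call=6, pot_before_call=8 + 3)
print(f"BB defense: {bb_with_rake:.0%} with rake vs {bb_no_rake:.0%} without")
# -> 50% with rake vs 35% without

# 3-bet defense example: we open $6, button 3-bets to $18, we call $12 more.
tb_with_rake = required_equity(to_call=12, pot_before_call=6 + 18 + 3, rake=5)
tb_no_rake   = required_equity(to_call=12, pot_before_call=6 + 18 + 3)
print(f"3-bet pot: {tb_with_rake:.0%} with rake vs {tb_no_rake:.0%} without")
# -> 35% with rake vs 31% without (the article rounds to 30%)
```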
Conclusion

Poker rake has a big impact on how you should play. You need to take it into account and adjust your strategy if you want to crush as much as possible.

If you play online, you can repeat the process above to figure out how much rake should impact ranges in your games. Look up the rake structure of the site you play on and plug in the numbers. The resulting ranges won't be as tight as the ones above, but I bet you'll still be surprised when you see how much rake makes a difference online.

If you're a live player, now you understand how much poker rake should impact your preflop ranges. Are the rake-adjusted ranges in this article tighter or looser than you expected? Let me know in the comments below.

Good luck, grinders!

Read more from Upswing Poker:
The Smart Approach to Losing Poker Sessions
How to Crush Nits (10 Tactics That Win Against Tight Players)
How to Analyze Your Poker Hands Effectively in 5 Minutes

About the Author

Dan B. - Lead Strategy Author - Online high-stakes cash game pro with a passion for poker theory and teaching. I'm available for quick strategy questions and hourly coaching -- reach out to me at [email protected].
10 Famous Ancient Egyptian Pharaohs
Harry Atkins, 01 Sep 2021

The remarkable sophistication of the Ancient Egyptian empire is still hard to reconcile with how far back in time it existed. But the stories of the Ancient Egyptian pharaohs undoubtedly bring us closer to a fascinating civilization that spanned over 3,000 years and 170 pharaohs.

The Ancient Egyptian pharaoh's role was both political and religious. Interpretations varied from ruler to ruler, of course, but the pharaohs were generally thought to be imbued with divinity and were effectively regarded as intermediaries between the gods and people. Yet, despite the spiritual reverence with which they were regarded, the pharaohs were also responsible for the more earthly concerns of leadership, and each Egyptian pharaoh had a unique legacy; some were architectural innovators or revered military leaders while others were brilliant diplomats. Here are 10 of the most famous.

1. Djoser (reign 2686 BC – 2649 BC)

Djoser is perhaps the most famous Third Dynasty Egyptian pharaoh, but little is known about his life. What is known, however, is that he oversaw the construction of the famous step pyramid at Saqqara, a hugely significant milestone in ancient Egyptian architecture. This pyramid, in which Djoser was buried, was the first structure to realise the iconic step design.

2. Khufu (reign 2589 – 2566 BC)

Head of Khufu in ivory displayed in the Altes Museum. Image Credit: ArchaiOptix, CC BY-SA 4.0, via Wikimedia Commons

A Fourth Dynasty pharaoh, Khufu's greatest legacy is undoubtedly the Great Pyramid of Giza, one of the Seven Wonders of the World. The monumental structure is a testament to the bewildering sophistication of Egyptian architecture and, remarkably, remained the tallest man-made structure in the world for the best part of 4,000 years. It was conceived by Khufu as his stairway to heaven and the means of its construction remains something of a mystery to this day.

3. Hatshepsut (reign 1478–1458 BC)

Only the second woman to assume the role of pharaoh, Hatshepsut was the wife of Thutmose II and reigned in the Eighteenth Dynasty. Her step-son Thutmose III was just two years old when his father died in 1479 BC and so Hatshepsut soon took on the role of pharaoh (though Thutmose III also technically ruled as co-regent). Hatshepsut shored up her legitimacy as pharaoh by claiming that her mother was visited by the deity Amon-Ra while pregnant with her, thus signalling her divinity.
She took to the role of pharaoh and proved an accomplished ruler, re-establishing important trade routes and overseeing extended periods of peace. 4. Thutmose III (reign 1458–1425 BC) Thutmose III dedicated himself to military training while his step-mother was pharaoh, only taking over the role of main ruler when Hatshepsut died in 1458. The pharaoh’s military training paid off and he earned a reputation as something of a military genius; indeed, Egyptologists sometimes refer to him as the Napoleon of Egypt. Thutmose III never lost a battle and his military exploits won him the respect of his subjects and, for many, a status as the greatest ever pharaoh. 5. Amenhotep III (reign 1388–1351 BC) During Amenhotep III’s 38-year reign, he largely presided over a peaceful and prosperous Egypt. Indeed, Amenhotep III’s accomplishments as pharaoh were more cultural and diplomatic than military; few Ancient Egyptian pharaohs can match his architectural and artistic legacy. 6. Akhenaten (reign 1351–1334 BC) The son of Amenhotep III, Akhenaten was named Amenhotep IV at birth but changed his name in accordance with his radical monotheistic beliefs. The meaning of his new name, “He who is of service to the Aten”, honoured what he believed to be the one true god: Aten, the Sun God. Akhenaten’s religious conviction was such that he moved the Egyptian capital from Thebes to Amarna and named it Akhetaten, “Horizon of Aten”. Amarna wasn’t a previously recognised place before the rule of Akhenaten. At the same time he changed his name, he ordered a new capital city to be built. He chose the site as it was uninhabited – it was not the property of anyone else, but Aten’s. Akhenaten’s wife, Nefertiti, was a strong presence during his reign and played a significant part in his religious revolution. As well as being the wife of an Ancient Egyptian Pharaoh, Nefertiti was made famous by her limestone bust. It is one of the most copied works of Ancient Egyptian art and can be found in the Neues Museum. After Akhenaten’s death, Egypt rapidly returned to polytheism and the traditional gods he had disavowed. 7. Tutankhamun (reign 1332–1323 BC) Tutankhamun’s golden mask Image Credit: Roland Unger, CC BY-SA 3.0 , via Wikimedia Commons The youngest pharaoh in Egyptian history when he ascended to the throne at just 9 or 10 years old, Tutankhamun became the most famous Egyptian pharaoh of all. But the young pharaoh’s fame isn’t the result of extraordinary achievements but instead derives almost entirely from the discovery of his tomb in 1922 – one of the great archaeological finds of the 20th century. “King Tut”, as the pharaoh became known after the discovery of his spectacular burial site, only reigned for 10 years, and died aged just 20. The cause of his death remains a mystery to Egyptologists. 8. Ramses II (reign 1279–1213 BC) Ramses II’s reign was undoubtedly the greatest of the 19th Dynasty and, even by pharaoh standards, unabashedly ostentatious. The son of Seti I, with whom he had a period of co-regency, Ramses II went on to declare himself a god, while earning a reputation as a great warrior, fathering 96 children and ruling for 67 years. Make no mistake, Ramses the Great was not a modest pharaoh. The extensive architectural legacy of his reign is testament to this – as is the fact that his excesses are thought to have left the throne close to bankruptcy at the time of his death. 9. 
Xerxes I (reign 486 – 465 BC)

Xerxes I reigned in the 27th Dynasty, during which time Egypt was part of the Persian Empire, having been conquered in 525 BC. Persian Achaemenid Kings were acknowledged as pharaohs and so Xerxes the Great, as he was known, earns a place on our list by virtue of fame, if not popularity. He is often portrayed as a tyrant and it's likely that, as a Persian king, his disregard for local traditions did not endear him to the Egyptians. Xerxes I was very much a pharaoh in absentia and his failed attempts to invade Greece ensured that his portrayal by Greek historians (and by extension the film 300) is not kind.

10. Cleopatra VII (reign 51 – 30 BC)

The last active ruler of the Ptolemaic Kingdom of Egypt, Cleopatra presided over the dying days of the Egyptian empire, yet her fame has lived on through folklore, Shakespeare and Hollywood. It's hard to disentangle the real Cleopatra from the legend, but scholars suggest that her portrayal as a stunningly beautiful seductress undersells her brilliance as a leader. Cleopatra was an astute, politically savvy ruler who succeeded in bringing peace and relative prosperity to an ailing empire. The story of her love affairs with Julius Caesar and Mark Antony is well documented; without space to explore the complexities of a familiar tale, we might at least say that its tragic conclusion, Cleopatra's suicide on 12 August 30 BC, brought an end to the Egyptian empire.
Published Time: 2023-10-25 Interpretation of near-threshold peaks using the method of independent S-matrix poles | Phys. Rev. C
===============

Open Access (Creative Commons Attribution 4.0 International license)

Interpretation of near-threshold peaks using the method of independent S-matrix poles
Leonarc Michelle Santos and Denny Lane B. Sombillo
National Institute of Physics, University of the Philippines Diliman, Quezon City 1101, Philippines
[email protected]
Phys. Rev. C 108, 045204 – Published 25 October, 2023

Abstract

We propose a model-independent analysis of near-threshold enhancements using independent S-matrix poles. In this formulation, we constructed a Jost function with controllable zeros to ensure that no poles are generated on the physical Riemann sheet. We show that there is a possibility of misinterpreting the observed near-threshold signals if one utilized a limited parametrization and restricted the analysis to only one element of the S matrix. Specifically, there is a possibility of the emergence of an ambiguous pair of poles which are singularities of the full S matrix but may not manifest in one of its elements. For a more concrete discussion, we focused on an effective two-channel scattering where the full S matrix is a 2 × 2 matrix. We apply our method to the coupled two-channel analysis of the Pc(4312)+ and found that the compact pentaquark interpretation cannot be ruled out yet.

Physics Subject Headings (PhySH): Scattering amplitudes; Exotic baryons; Hadrons; Optical, coupled-channel & distorted wave models; Real & complex analysis

Article Text

I. INTRODUCTION

One of the active areas of investigation in hadron spectroscopy is the interpretation of near-threshold phenomena [1–6]. In 2019, an updated analysis of the decays based on Runs 1 and 2 of the LHCb collaboration was presented in Ref.. They observed the narrow pentaquark state [then called the ] together with the two-peak structure of the resonance which was not present in their initial analysis in Ref.. These newly observed resonances have narrow decay widths and are below the or , a typical signature of molecules. Reference had concluded that is a virtual state with the threshold being within its extent.
A similar parametrization study and a deep learning approach had the same conclusion for the signal. Other interpretations are possible and were done in different studies. For example, in Ref. the resonance is favored to have a molecular structure using the quantum chromodynamics (QCD) sum rules formalism. It was found to be the bound state with . The molecular picture with the same quantum number is favored as well in Ref. where they studied the mass and decay properties of using isospin breaking effects and rearrangement decay properties. Reference used a coupled-channel formalism and ends up with the same conclusion. In Ref., it was argued that if range corrections can be neglected, virtual states are molecular in nature and hence the studies cited above pose no contradiction. As can be observed, the molecular picture is favored by most studies. However, we still cannot dismiss the compact pentaquark picture since information about quantum numbers and decay properties is lacking experimentally. In fact, a model based approach in Ref., the resonance was studied under the compact diquark model as a hidden-charm diquark-diquark-antiquark baryon with . Until we settle the quantum numbers and decay properties of the resonances of the , we cannot completely rule out the compact nature of the . A good way to investigate this resonance with some of its properties still being ambiguous is by using a minimally biased bottom-up approach study . The matrix is a good tool to use in a bottom-up approach study as it can be constructed without any details of the interacting potential. We only need to impose analyticity, unitarity, and hermiticity below the lowest threshold. Given these three mathematical restrictions, we can reproduce the scattering amplitude from experiment by identifying the optimal placement and number of poles in the scattering process. Finally, we can look for a theoretical model that can reproduce the same analytic properties of the constructed matrix. In this paper, we show that some arrangements of poles may not be accommodated by the usual amplitude parametrizations such as the effective range expansion. Specifically, there are combinations of poles which will not manifest in the line shape of the elastic scattering amplitude. This, in turn, opens up the possibility of having a pole structure that caters the compact nature of . The difficulty of capturing these subtle pole configurations may arise due to the contamination of coupled channel effects. For example, weakly interacting final state hadrons (higher mass channel) may have a virtual state pole that can be displaced away from the real energy axis due to coupling with the lower mass channel. If there is a resonance that is strongly coupled to the lower mass channel, then the displaced virtual state pole and the shadow pole of the resonance may have a cancellation effect in the elastic transition amplitude. One can invoke the pole-counting method to interpret a near-threshold pole with an accompanying shadow pole as nonmolecular [16–18]. In this work, we propose to use the independent -matrix poles to accommodate all possible interpretations of the observed near-threshold enhancements. The content of this paper is organized as follows. In Sec.II, we review the formalism of the matrix and show how one can construct an matrix using independent poles via the uniformization scheme introduced in Refs.[19–23]. In Sec.III, we discuss how identical line shapes arise. 
We show that the ambiguity can be resolved by adding the contribution of the off diagonal channel. As an application, we investigate the signal and show that its line shape can take a one-pole configuration or three-pole configuration. In Sec.IV, we give our conclusion and outlook for future works. II. FORMALISM The matrix is an operator that describes the interaction of a scattering process. In momentum space, one could decompose the matrix in terms of the noninteracting terms and interacting terms as (1) where is the scattering amplitude from momentum to and the factors of the interacting term may vary depending on the literature. The scattering amplitude is related to the cross section via (2) In principle, we could use Eqs.(1) and (2) to construct a parametrized matrix to fit it in the measured cross section data from experiments. The peaks from such data are characterized by the pole singularities of the matrix and it is indicative of the nature of the intermediate particles. In practice, the usual treatment of peaks in the scattering cross section is to utilize the Breit-Wigner parametrization to extract the mass and width of the resonance. This approach works very well if the peaks are far from any threshold and the widths are narrow. However, most of the recently observed peaks occur very close to some two-hadron threshold, where the peaks are no longer a reliable information to quote the mass of the observed state. Moreover, coupled-channel effects can no longer be ignored if the peaks are close to the thresholds. Unlike the complex energy plane of a single-channel system, one has to probe deeper into the multiple Riemann sheets of the energy complex plane of a multichannel scattering. Specifically, the peaks observed may correspond to different pole arrangements in different Riemann sheets. In a one-channel scattering, we are interested in singularities of the amplitude in the complex momentum plane. The poles on its positive imaginary axis correspond to bound states and the poles below its real axis may correspond to resonances. With the relativistic energy-momentum relation or the nonrelativistic relation , the complex momentum plane transforms into two Riemann sheets of the complex energy . The first Riemann sheet (top sheet) of , which we call the physical sheet, corresponds to the upper half-plane of . The importance of physical sheet is that the scattering region lies on this complex energy plane. The scattering region corresponds to the energy axis used in plotting scattering observables. The second Riemann sheet (bottom sheet) of , which we call the unphysical sheet, corresponds to the lower half-plane of . Due to causality, no other singularities should be present in the physical energy sheet aside from bound state poles and a branch point at the threshold [25,26]. Accordingly, in a two-channel scattering, we get four Riemann sheets (see for an in-depth discussion). Only poles closest to the scattering region are relevant in the description of scattering data. In this paper, we used the notation of Pearce and Gibson in in labeling our Riemann sheets. We label the sheets as where the string can be or to denote a top sheet or bottom sheet and the order of character denotes the channel. For example, the sheet corresponds to the bottom sheet of the first channel and top sheet of the second channel. The correspondence of Pearce and Gibson's notation with the more commonly used notation of Frazer and Hendry is listed in Table I. TABLE I. Riemann sheet notation. 
In this work, we will follow Pearce and Gibson's notation. The index correspond to the channel number and denotes top(bottom) sheet. | Frazer | Pearce | Topology in | | --- | --- | --- | | and Hendry | and Gibson | complex | | I | | ; | | II | | ; | | III | | ; | | IV | | ; | We show in Fig.1(a) an illustration of the four Riemann sheets in a two-channel scattering. The scattering region is represented by the red ray with two dots (branch points) lying on the physical sheet . It is directly connected with the lower halves of the and sheets. Crossing the branch cut between the first energy threshold and second energy threshold will send us to the sheet. On the other hand, if we cross the branch cut above , we end up in the sheet. Energy poles found on these two sheets are quoted with negative imaginary parts. Consequently, these poles fit the description of unstable quantum states since their negative imaginary parts can reproduce the expected exponential decay of unstable states. On the other hand, energy poles on the sheet are quoted with positive imaginary parts since only the upper half-plane of the sheet affect the scattering region. This gives rise to an exponential increase in time which does not correspond to any quantum state. FIG. 1. The relevant regions of the four Riemann sheets in a two-channel scattering. (a)The energy complex plane. (b)The uniformized variable plane mapped from the complex energy plane using uniformization. A. Analytic structure of a two-channel matrix Without loss of generality, we can focus on an effective two-channel scattering. The full two-channel transition is described by a matrix whose elements are given by (3) and (4) where is the two-channel Jost function [30–33]. The subscripts correspond to the channel index with 1 representing the lower mass channel, and 2 for the higher mass. Causality requires that the matrix be analytic up to the branch points and poles [25,26]. These singularities are related to the details of scattering. Branch cuts are dictated by kinematics while the poles are dynamical in nature. Depending on the location of the pole on the Riemann sheet, it may in general correspond to a bound state, virtual state, or resonance . The zeros of the Jost function correspond to the poles of the matrix. Analyticity is imposed on Eq.(3) by requiring for [19,31]. The Schwarz reflection principle and the hermiticity of the matrix below the lowest threshold ensures that for every momentum pole , there is another pole given by . All of these must be considered in the construction of the Jost function. From Eq.(3), we see that the matrix is a ratio of two Jost functions. One of the most straightforward ways to construct an analytic, unitary, and symmetric matrix is by using a Jost function, of the form , where . The extra factor is needed to ensure that as we get . The polynomial part, when written in factored form, allows us to form a set of independent zeros of . B. Independent poles via uniformization scheme We propose the use of independent poles in the analysis of near-threshold signals for two reasons. First, this is to ensure that the treatment is model-independent. In other parametrizations, such as the Flatté or effective range expansion, fixing one of the poles will necessarily alter the position of the other pole. These parametrizations may be constructed without reference to any models but one can always find an effective coupled-channel potential that can reproduce such pole trajectory [28,29,35,36]. 
Second, a parametrization that allows independence of poles can cover a wider model space without violating the expected properties of the matrix unlike in other parametrization. This limitation is observed in where a specific coupled-channel effective range approximation can only produce poles in either or sheet but not in . An matrix with independent poles can cover a wider model space without compromising any of the expected properties of matrix. It is also important that the amplitude model to be used gives the correct threshold behavior associated with branch point singularity of the two-hadron scattering. The uniformization method introduced in [19,20] and utilized in [21,22] is an appropriate scheme for our present objective. The channel's momentum in the two-hadron center of mass frame is given by (5) where is the threshold energy and is the reduced mass of the system. The invariant Mandelstam variable can be written as (6) where we introduced the new momentum variable for convenience of scaling. Instead of constructing the matrix using the momenta and , we introduce the uniformized variable defined by the transformation (7) The linear dependence of the with removes the issue of branch point singularity associated with the threshold. In other words, uniformization reduces the number of complex planes from four energy planes to only one plane. Figure 1(b) shows the plane and a detailed description of such mapping can be seen in [20,23]. All the relevant halves of the complex energy planes in Fig.1(a) are on the first and fourth quadrants of the plane in Fig.1(b). Referring to Eqs.(3) and (4), we can use a rational Jost function such that the pole of the matrix is easily determined by its zeros. The simplest Jost function takes the form (8) where we had introduced several factors. The negative conjugate terms are a consequence of the hermiticity of the matrix below the lowest threshold [21,24]. The extra pole , called the regulator, is added to ensure that the diagonal elements of the matrix behave as as [19,20]. In the short-ranged potential scattering theory, we expect the phase shift to vanish at large energies to be consistent with the Levinson's theorem. This expectation can be met when we impose . This means that we have another pole which depend on . To ensure that is the only relevant pole that can affect the interpretation of enhancement, we set the regulator to be far from the scattering region. Referring to the uniformized plane in Fig.1(b), one can minimize the influence of the regulator by placing it either on the or the sheet. Note that a regulator pole on the sheet will result into a structure between the two thresholds, which significantly affects the interpretation of the line shape, hence it is in our best interest to avoid putting a regulator in this region. The simplest regulator we could use following these requirements is , where the phase factor ensures that the regulator falls on either the or sheet below the lowest threshold. We reiterate the importance of regulator not affecting the physics of the matrix. In , the uniformized truncated Mittag-Leffler parametrization neglected the regulator. Such formulation assumes that the conjugate pole is much closer to the scattering region than the . The absence of other background poles, especially , makes the contribution of relevant in the interpretation of the , which resulted into a broad line shape in the mass distribution. 
This is the reason why the authors of concluded that the signal requires only one pole on the second Riemann sheet, in contrast with the current consensus of a two-pole structure interpretation [37–40]. Care must be taken in the construction of amplitudes. Removing the contribution of , or of any possible background poles, as emphasized in , might lead to a misinterpretation of line shapes. In other parametrizations, such as the K-matrix model (see, for example, ), the equations for the poles are typically quartic in the channel momenta. One may set the Riemann sheet of the desired relevant pole by adjusting the coupling parameters, but there is always a tendency for a shadow pole to appear on the physical sheet [42,43]. The linear dependence of the uniformized variable on the channel momenta in Eq.(7) guarantees that no shadow pole is produced using the Jost function in Eq.(8). The regulator pole, which can be controlled in our formulation, ensures that the S matrix will not violate causality. In a situation where there is an actual shadow pole, an independent pole can be added to the Jost function with no direct relation to the main pole. That is, one can freely place the pole and its shadow in any position without being restricted by some coupling parameter. With all of these considerations, the full Jost function with different combinations of independent poles takes the form (9) Using the independent-pole form of the Jost function in Eq.(9), the two-channel S-matrix elements satisfying unitarity, analyticity, and hermiticity below the lowest threshold take the form (10) The scattering amplitude can be obtained from the relation where is the Kronecker delta.
III. AMBIGUOUS LINE SHAPES
The independent poles of the full S matrix are determined by the zeros of the Jost function. It is possible that a zero of the denominator cancels a zero of the numerator for one of the S-matrix elements, say . This means that such a pole will not manifest in the line shape, as if it did not exist at all. This subtle feature of the S matrix is important, because we can potentially miss other possible pole configurations if we probe only one element of the full S matrix. In this section, we first present how this ambiguity arises. We emphasize that this ambiguity depends on the parametrization of the S matrix. In particular, the ambiguity of the formalism we use arises from the parametrization of the regulator. As an application, we discuss the implication of the ambiguous line shapes in the context of the signal. We propose that such an ambiguity can be removed by probing the off-diagonal term of the S matrix. We close the section by discussing the importance of probing the off-diagonal S-matrix terms and the limitation of the effective range expansion.
A. Emergence of ambiguity
Here, we point out that there exists a pole configuration that has no effect on one of the elements of the full S matrix. We start by looking at a pole configuration for which one of the S-matrix elements is equal to unity. We focus on the ambiguity of the channel. Recall that we construct the element as (11) where the notation in the last line is introduced for convenience. The terms reg. and c.c. denote the regulator and the complex conjugates of the zeros (or poles) of the preceding terms, respectively. With some foresight, we consider the pair of poles and . From Eq.(11), the matrix element reads as (12) The pair of poles and entails a unitary matrix element.
We will call such a combination of poles that lead to a unitary -matrix element as “ambiguous pair poles.” In terms of momentum, recalling Eq.(7), the pair poles and translates into (13) where denotes the momentum of the channel of the pole. It follows from Eq.(13) that for the equation to hold; the takeaway is that the imaginary part of the pair must be opposite. This implies that the pair poles should either be found separately on the and sheet on the and sheets. The former instance is prohibited since it will violate analyticity and causality. Hence, we consider two poles in which one is located at the sheet and the other at the sheet an ambiguous pair pole if . Moreover, their regulators are given by and , located on the and sheets, respectively. The condition for the ambiguous pair poles states no specific magnitude for it to occur, only that they must depend on each other as . Despite these ambiguous pair poles having no effective contribution in the line shape, they cannot be ignored especially when they are near the threshold. If we probe either the or channel, we will observe a difference between the two configurations. We then assert that one must be rigorous in using certain approximations. For example, without proper justification, the prescription might potentially miss out pole configurations with ambiguous pair poles. Through a similar argument, we could assert that the same conclusion holds for the channel. The ambiguous pair for the channel should lie on the and sheet and satisfies . However, this will only be possible if we reassign the regulator of the pole at the sheet as , i.e., the regulator should be placed on the sheet instead of the sheet. We consider for a moment this pole on the sheet and its regulator. Recalling Eqs.(9) and (10), we have (14) where the second fraction are the regulator terms and its complex conjugates. Since is located at the imaginary axis, the denominator of the second fraction is equal to , which makes the singularity a double pole. We have to recall that state correspondence is always a simple pole [19,24] and hence, our regulator assignment on the sheet does not invalidate causality. Moreover, this second order pole is faraway from the threshold, and hence its presence is not relevant in the interpretation of amplitude line shape. B. Implication of the ambiguous line shapes To demonstrate the ambiguous line shape and its consequences, we look into the LHCb signal. We reconstruct its scattering amplitude using the data and best fit found in . To facilitate the discussion of this section, we will be using the pole counting method by Morgan to analyze the resulting scattering amplitudes. The idea hinges on the fact that nonpotential resonances occurring very close to an -wave threshold are associated with poles on the and sheets of the energy plane [17,18]. Simply put, the pole counting method states that a single pole on the sheet indicates a predominantly molecular bound state whereas poles appearing on the and sheets both close to the threshold indicates a state dominated by its compact component. First, we consider a one-pole configuration with a background pole. We placed the main pole on the with real part MeV and width MeV. The background pole is added on the sheet, below the first channel threshold 3184.9 MeV with width MeV. Figure 2 shows the -matrix elements of this one-pole configuration. FIG. 2. Elements of the matrix with a pole on the sheet, signifying a molecular nature based on pole counting method. 
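To make the sheet bookkeeping concrete, the following is a minimal numerical sketch of the uniformization idea of Sec. II B. It assumes the standard Kato-type mapping q1 = (Δ/2)(ω + 1/ω), q2 = (Δ/2)(ω − 1/ω) (equivalently ω = (q1 + q2)/Δ for scaled momenta obeying q1² − q2² = Δ²); the threshold values, the sample energies, and all names below are invented for illustration and are not taken from the paper or its fits.

```python
# A minimal numerical sketch of the uniformization idea of Sec. II B (illustration only:
# thresholds, sample energies and names are invented, not the paper's values).
# Scaled channel momenta obey q1^2 - q2^2 = Delta^2, and the single variable
# w = (q1 + q2)/Delta, i.e. q1 = (Delta/2)(w + 1/w), q2 = (Delta/2)(w - 1/w),
# replaces the four Riemann sheets of the energy plane by regions of one w plane.
import numpy as np

eps1, eps2 = 0.0, 1.0                 # illustrative channel thresholds (arbitrary units)
Delta = np.sqrt(eps2 - eps1)

def q_channel(E, eps, sheet_char):
    """Scaled channel momentum on the top ('t', Im q >= 0) or bottom ('b') sheet."""
    q = np.sqrt(np.asarray(E, dtype=complex) - eps)
    q = np.where(q.imag < 0, -q, q)   # force the top-sheet branch Im q >= 0
    return q if sheet_char == "t" else -q

def to_w(E, sheet="tt"):
    """Map energy E on sheet 'xy' (channel 1, channel 2) to the uniformized variable w."""
    return (q_channel(E, eps1, sheet[0]) + q_channel(E, eps2, sheet[1])) / Delta

def from_w(w):
    """Inverse map: recover the energy and the sheet label from w."""
    q1 = 0.5 * Delta * (w + 1.0 / w)
    q2 = 0.5 * Delta * (w - 1.0 / w)
    E = eps1 + q1**2                  # equals eps2 + q2**2 by construction
    return E, ("t" if q1.imag >= 0 else "b") + ("t" if q2.imag >= 0 else "b")

# The four sheets occupy the four regions of the single w plane:
#   tt: upper half, |w| > 1    bt: upper half, |w| < 1
#   bb: lower half, |w| > 1    tb: lower half, |w| < 1
E0 = 0.9 + 0.05j                      # one energy between the two thresholds
for sheet in ("tt", "bt", "bb", "tb"):
    w = to_w(E0, sheet)
    print(sheet, complex(np.round(w, 3)), "|w| =", round(float(abs(w)), 3))

# A point just inside the unit circle in the first quadrant lies on the bt sheet,
# i.e. adjacent to the scattering region between the two thresholds.
print(from_w(0.95 + 0.10j))
```

With this mapping, a single complex number ω fixes both the pole position and its Riemann sheet at once, which is what makes the independent-pole construction above possible.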
By looking only at the matrix element, its sharp peak near the threshold hints a signature of a molecular state. We can back this up by appealing to the pole counting method which, for an isolated pole in sheet, suggests a molecular nature of the observed signal. On the other hand, both the bottom-top approach done in and which utilized the form (15) where and are smooth functions, is the three-body phase space, and being the matrix element had concluded that the signal is likely due to a virtual state. As mentioned earlier, near-threshold virtual states are molecular in nature as long as range corrections can be neglected , and hence the agreement with the pole counting method and the conclusions in and . However, it is worth mentioning that the recent investigation of the same signal using deep learning in favors the compact interpretation. We move on to the next configuration. We use the same isolated pole in sheet from the previous configuration plus an additional ambiguous pair poles on the and sheets. The values of the added poles are set to produce an ambiguous pair of poles as described in the previous subsection. Specifically, we set the pair poles such that their real parts are 4317.73 MeV and 12.5 MeV for its widths. The three-pole configuration and its -matrix elements are shown in Fig.3. FIG. 3. Elements of the matrix with an isolated pole on the , and sheets. This configuration can be interpreted having a non-molecular nature based on pole counting method. The interpretation of three-pole configuration is now outside the direct applicability of the pole counting argument. We are confronted with different possible scenarios for the occurrences of such pole structures. One naive interpretation is that the sheet pole is a molecular state of the channel with the added and poles as some nonmolecular state that is strongly coupled to channel. However, such interpretation is unlikely since the real parts of the and poles are exactly placed at the second threshold. This means that there is no available phase space for the unstable state to decay into the second channel. A more plausible interpretation is that the sheet pole is a virtual state of the channel and the remaining and poles correspond to a compact state that is strongly coupled to the channel. Unlike the first interpretation, there is enough phase space for such a compact resonance to decay into channel. Interestingly, this scenario matches the hybrid model proposed in [45,46]. Comparing the line shape in Fig.2, which admits a purely molecular interpretation, and in Fig.3, one can interpret that a compact state combined with a virtual state can lead to a purely molecular-like interpretation. That is, the compact state enhances the attraction of the hadrons in the higher mass channel. Certainly, there is an obvious ambiguity in the interpretation of line shape with the one-pole and the three-pole structures. Their distinction will only become evident when we probe either the off-diagonal element or the elastic amplitude. Considering that the available data came from the decay , it is appropriate to include the contribution of transition when computing for the invariant mass distribution of to determine the presence of ambiguous pair of poles. C. On the necessity of probing the amplitude The condition for the existence of ambiguous pair poles, i.e. poles in different Riemann sheets but with exactly the same position, is highly fine tuned. 
More realistic situation may have poles in different Riemann sheets but not necessarily with the same position due to possible contamination of coupled channel effects. That is, there maybe a slight difference in the line shape of for different pole structures. However, when the error bars of the experimental data are large then any distinction among the with different pole structures will no longer be useful. The inclusion of might help since the one-pole structure has slightly larger in comparison with the three-pole structure. To further demonstrate the importance of , we consider the in the invariant mass spectrum of . We construct an matrix similar to that of the three-pole configuration example above. Afterwards, we displace either the real or the imaginary part of one of the pair poles while fixing in place the other. In this way, the ambiguous pair poles are less fine tuned. The respective amplitude line shapes are then plotted in Fig.4. Notice that the tail of line shape is sensitive to the presence of ambiguous poles. However, the peak structure for all cases are almost similar despite their differences in their tail. This might entail a problem if the error bars obscured the distinction of ambiguous line shapes near the threshold. It is also possible that the line shape of the slightly displaced ambiguous pair poles might be absorbed by the background parametrizations if the amplitude ansatz is limited to produce one isolated pole near the threshold. It was shown in that the effective range expansion cannot produce a pole on the sheet due to the impossibility of making the real and imaginary parts of the amplitude's denominator simultaneously equal to zero. Such restriction limits the model space of a given line shape to molecular-like bound or virtual states and immediately rules out compact state interpretation. FIG. 4. Displacing one of the pair poles and fixing the other. The darker color represents the displacement below the original () and the lighter hues represent the displacement above the original () values. Quantitatively, there is little to no difference near the threshold. IV. CONCLUSION AND OUTLOOK The method of independent -matrix poles is a useful tool in analyzing near-threshold phenomena in a model-independent way. We have shown that in our formulation, it is possible that some of the commonly used analyses missed out important physics in probing the nature of near-threshold enhancements. The parametrized background may absorb the relevant physics if the parametrization used can only cover a limited model space. Other elements of the full matrix can be useful in providing more rigorous interpretations of the observed signals. We used the independent -matrix poles formulation to study the in the invariant mass spectrum of . It turned out that, by focusing only on the contribution of in the overall line shape of the distribution, one cannot rule out yet the compact pentaquark interpretation. Our result gives credence to the deep learning analysis made in and the improved data in together with its corresponding analysis in . It is also worth noting that a three-channel analysis upholding the principles of matrix in shows evidence of poles near the scattering region, emphasizing the importance of strong coupling with higher channels. Indeed, sophisticated methods catering all possibilities must be considered to obtain a definitive interpretation of the pentaquark signals. 
Moving forward, we plan to use the present formalism to improve the deep learning extraction of pole configurations started in [42,50,51]. The independence of the poles used in generating the line shape ensures that no specific trajectory is preferred to reach a particular pole configuration. Together with the vast parameters that a deep neural network can provide and its ability to generalize beyond the training dataset, one can then extract a nonbiased interpretation of near-threshold enhancements. ### ACKNOWLEDGMENT This work was funded by the UP System Enhanced Creative Work and Research Grant (ECWRG-2021-2-12R). References (51) S.L. Olsen, T. Skwarnicki, and D. Zieminska, Rev. Mod. Phys.90, 015003 (2018). F.-K. Guo, C. Hanhart, Ulf-G. Meißner, Q. Wang, Q. Zhao, and B.-S. Zou, Rev. Mod. Phys.90, 015004 (2018). J.A. Oller, Prog. Part. Nucl. Phys.110, 103728 (2020). F.-K. Guo, X.-H. Liu, and S. Sakai, Prog. Part. Nucl. Phys.112, 103757 (2020). M. Mai, U.-G. Meißner, and C. Urbach, Phys. Rep.1001, 1 (2023). M. Albaladejo et al., Prog. Part. Nucl. Phys.127, 103981 (2022). R. Aaij et al. (LHCb Collaboration), Phys. Rev. Lett.122, 222001 (2019). R. Aaij et al. (LHCb Collaboration), Phys. Rev. Lett.115, 072001 (2015). C. Fernández-Ramírez, A. Pilloni, M. Albaladejo, A. Jackura, V. Mathieu, M. Mikhasenko, J.A. Silva-Castro, and A.P. Szczepaniak (JPAC Collaboration), Phys. Rev. Lett.123, 092001 (2019). L. Ng, L. Bibrzycki, J. Nys, C. Fernández-Ramírez, A. Pilloni, V. Mathieu, A.J. Rasmusson, and A.P. Szczepaniak (Joint Physics Analysis Center Collaboration), Phys. Rev. D105, L091501 (2022). H.-X. Chen, W. Chen, and S.-L. Zhu, Phys. Rev. D100, 051501(R) (2019). J.-B. Cheng and Y.-R. Liu, Phys. Rev. D100, 054002 (2019). M.-L. Du, V. Baru, F.-K. Guo, C. Hanhart, Ulf-G. Meißner, J.A. Oller, and Q. Wang, Phys. Rev. Lett.124, 072001 (2020). I. Matuschek, V. Baru, F.-K. Guo, and C. Hanhart, Eur. Phys. J. A57, 101 (2021). A. Ali and A.Y. Parkhomenko, Phys. Lett. B793, 365 (2019). D. Morgan, Nucl. Phys. A543, 632 (1992). D. Morgan and M. Pennington, Phys. Lett. B258, 444 (1991). D. Morgan and M.R. Pennington, Phys. Rev. D48, 5422 (1993). R.G. Newton, Scattering Theory of Waves and Particles, Theoretical and Mathematical Physics (Springer-Verlag, Berlin/Heidelberg, 1982). M. Kato, Ann. Phys.31, 130 (1965). W. Yamada and O. Morimatsu, Phys. Rev. C102, 055201 (2020). W.A. Yamada and O. Morimatsu, Phys. Rev. C103, 045201 (2021). W.A. Yamada, O. Morimatsu, T. Sato, and K. Yazaki, Phys. Rev. D105, 014034 (2022). J. Taylor, Scattering Theory: Quantum Theory on Nonrelativistic Collisions (Wiley, New York, 1972). N.G. van Kampen, Phys. Rev.91, 1267 (1953). N.G. van Kampen, Phys. Rev.89, 1072 (1953). S.A. Rakityansky, in Jost Functions in Quantum Mechanics: A Unified Approach to Scattering, Bound, and Resonant State Problems (Springer International Publishing, Cham, 2022), pp. 407–423. B.C. Pearce and B.F. Gibson, Phys. Rev. C40, 902 (1989). W.R. Frazer and A.W. Hendry, Phys. Rev.134, B1307 (1964). K.J.L. Couteur, Proc. R. Soc. London A256, 115 (1960). R.G. Newton, J. Math. Phys.2, 188 (1961). R.G. Newton, J. Math. Phys.3, 75 (1962). M.L. Kharakhan and Y.M. Shirokov, Theor. Math. Phys.3, 374 (1971). A.M. Badalyan, L.P. Kok, M.I. Polikarpov, and Y.A. Simonov, Phys. Rep.82, 31 (1982). C. Hanhart, J.R. Pelaez, and G. Rios, Phys. Lett. B739, 375 (2014). T. Hyodo, Phys. Rev. C90, 055208 (2014). R.L. Workman et al. (Particle Data Group), Prog. Theor. Exp. Phys.2022, 083C01 (2022). Z.-Y. Wang, H.A. Ahmed, and C.W. 
Xiao, Eur. Phys. J. C81, 833 (2021). U.-G. Meißner, Symmetry12, 981 (2020). T. Hyodo and D. Jido, Prog. Part. Nucl. Phys.67, 55 (2012). S.-Q. Kuang, L.-Yun Dai, X.-W. Kang and D.-L. Yao, Eur. Phys. J. C80, 433 (2020). D. L. B. Sombillo, Y. Ikeda, T. Sato, and A. Hosaka, Phys. Rev. D104, 036001 (2021). R.J. Eden and J.R. Taylor, Phys. Rev.133, B1575 (1964). Z. Zhang, J. Liu, J. Hu, Q. Wang, and U.-G. Meißner, Sci. Bull.68, 981 (2023). Y. Yamaguchi, A. Giachino, A. Hosaka, E. Santopinto, S. Takeuchi, and M. Takizawa, Phys. Rev. D96, 114031 (2017). Y. Yamaguchi, A. Hosaka, S. Takeuchi, and M. Takizawa, J. Phys. G: Nucl. Part. Phys.47, 053001 (2020). S. Adhikari et al. (GlueX Collaboration), Phys. Rev. C108, 025201 (2023). I. Strakovsky, W.J. Briscoe, E. Chudakov, I. Larin, L. Pentchev, A. Schmidt, and R.L. Workman, Phys. Rev. C108, 015202 (2023). D. Winney et al. (Joint Physics Analysis Center Collaboration), Phys. Rev. D108, 054018 (2023). D. L. B. Sombillo, Y. Ikeda, T. Sato, and A. Hosaka, Phys. Rev. D102, 016024 (2020). D.L.B. Sombillo, Y. Ikeda, T. Sato, and A. Hosaka, Few-Body Syst.62, 52 (2021).
Efficient Algorithm for Partitioning a Directed Acyclic Graph into Short Paths - Theoretical Computer Science Stack Exchange
===============
Efficient Algorithm for Partitioning a Directed Acyclic Graph into Short Paths
Asked 1 year, 11 months ago Modified 1 year, 6 months ago Viewed 295 times
I am working on a problem involving partitioning a directed acyclic graph into multiple distinct paths, each with a maximum length constraint. The goal is to minimize the number of paths (this should be below a given number; I do not care that much about getting the absolute minimum) while ensuring that the length of each path is below a given threshold (the maximum length constraint). This problem resembles the "Minimum Path Partitioning" problem, which is known to be NP-hard. I'm seeking advice on efficient algorithms or approximation strategies to tackle this problem. I've considered using a depth-first search (DFS) approach to explore the graph, but I'm open to more advanced techniques. Could anyone recommend heuristic algorithms, approximation approaches, or point me to relevant research in this area? Any insights or guidance would be greatly appreciated.
graph-theory graph-algorithms approximation-algorithms
edited Aug 18, 2023 at 21:41, asked Aug 18, 2023 at 21:22 by user69908
Should the paths be edge disjoint? How big is your "maximum length of a path" (I'll call that k)? – user3508551, Aug 19, 2023 at 3:19
Sorry for the delayed reply. k would be <20, typically 10-15.
The resulting paths would be node-disjoint (and therefore path-disjoint): no two paths should share nodes. – user69908, Aug 21, 2023 at 18:17
How big are the problem instances you want to solve? – Neal Young, Aug 22, 2023 at 13:17
I'm trying to solve a similar problem and asked a question here: cstheory.stackexchange.com/questions/53375/…. The domain I'm trying to solve is close-ended, unlike yours. Code is a graph; in code, I'm trying to find ways to reduce it down to its grammar, visualise data flow and control flow, and ultimately reduce it to a custom grammar and perform code refactoring. – Vetrivel, Sep 30, 2023 at 17:47
1 Answer
Claim: There exists an almost linear time approximation algorithm that returns m paths such that m ≤ (2 − 1/k)·OPT, respecting the length conditions. Since you say k < 20, this should do fairly well in practice.
Let P*_1, …, P*_d be the optimal decomposition into vertex-disjoint paths, each with length at most k.
Observation 1: It must be that d ≥ n/k.
Proof: Each path P*_i has at most k nodes, and in total they contain n vertices. So it must be that d ≥ n/k.
Observation 2: It is possible to get a vertex-disjoint decomposition P_1, …, P_c such that c ≤ d (the paths need not respect the length constraint). The algorithm runs in almost linear time.
Proof: This is known as the minimum path cover in directed acyclic graphs. You can read how to do this in the Wikipedia article, but the gist of the argument is that you build a bipartite graph B(V ∪ V, E′) and connect uv ∈ E′ if and only if uv ∈ E. Then B has a matching of size ℓ if and only if G has a vertex-disjoint path cover with n − ℓ paths. You can then construct the actual paths from the matching. Since you can solve maximum matching in Õ(m^(1+o(1))) time with max flow, the result follows (in practice, you want to use a specialized maximum matching algorithm on bipartite graphs).
Observation 3: There exists an almost linear time approximation algorithm that returns a decomposition of m paths respecting the length constraint with m ≤ (2 − 1/k)·d.
Proof: Let P_1, …, P_c be the path decomposition from Observation 2. For each path P_i with more than k nodes, we break it into ⌈|P_i|/k⌉ paths in the most obvious way (cake cutting). This gives us a set of vertex-disjoint paths with length at most k. However, how many paths do we have? We have
∑_{i=1}^{c} ⌈|P_i|/k⌉ ≤ ∑_{i=1}^{c} (|P_i| + k − 1)/k = (∑_{i=1}^{c} |P_i|)/k + c − c/k = n/k + (1 − 1/k)·c ≤ d + (1 − 1/k)·d = (2 − 1/k)·d.
The result follows.
edited Aug 22, 2023 at 15:59, answered Aug 22, 2023 at 15:53 by user3508551
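For experimentation, here is a compact sketch of the two-step procedure described in the answer (minimum path cover via bipartite matching, then cutting each path into pieces of at most k nodes). Nothing in it comes from the original post: the adjacency-dict interface, the function names, and the use of simple Kuhn-style augmenting paths (rather than Hopcroft-Karp or max flow) are assumptions made purely for illustration.

```python
# Sketch of the (2 - 1/k)-approximation: step 1 builds a minimum path cover of the DAG
# via maximum bipartite matching (Kuhn's augmenting paths; fine for moderate graphs,
# use Hopcroft-Karp for large ones); step 2 cuts each path into chunks of <= k nodes.
# Assumed input: adjacency dict {node: [successors]} with every node present as a key.
def partition_dag(adj, k):
    nodes = list(adj)
    match_right = {}                       # right copy v -> left copy u (edge u->v is used)

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_right or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    for u in nodes:                        # max matching size = n - (#paths in the cover)
        try_augment(u, set())

    nxt = {u: v for v, u in match_right.items()}   # successor of u on its path
    has_pred = set(match_right)                    # nodes with a matched predecessor

    paths = []
    for u in nodes:                        # walk each path from its unmatched start node
        if u in has_pred:
            continue
        path = [u]
        while path[-1] in nxt:
            path.append(nxt[path[-1]])
        paths.append(path)

    # Step 2: split every long path into chunks of at most k nodes ("cake cutting").
    return [p[i:i + k] for p in paths for i in range(0, len(p), k)]

# Example: a small DAG with max path length k = 2.
dag = {"a": ["b"], "b": ["c"], "c": ["d"], "d": [], "e": ["d"]}
print(partition_dag(dag, 2))   # e.g. [['a', 'b'], ['c', 'd'], ['e']]
```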
Congjun Wu's homepage
===============
Physics 217 -- Phase transitions and RG
Course Syllabus: Syllabus
Lecture notes
For Lectures 1 to 4, read Herbut book Chapter 1, and Goldenfeld book Chapters 2 and 3
Lecture 1 One-dimensional Ising model
Lecture 2 Two-dimensional Ising model (I)
Lecture 3 Two-dimensional Ising model (II)
Lecture 4 Two-dimensional Ising model and 1D quantum Ising model (III)
Supplemental material: Onsager solution
Lecture 5 Ginzburg-Landau mean-field theory
For Lectures 5 to 10, read Goldenfeld book, Chapters 5, 6, 7, 9
Lecture 6 Gaussian model and Ginzburg criterion
Lecture 7 Scaling hypothesis
Lecture 8 Dimension and anomalous dimension
Lectures 9 and 10 Real-space renormalization group I and II
Lecture 11 4-epsilon (I): Gaussian model and scaling
Lecture 12 4-epsilon (II): phi-4 theory and RG equations
Lecture 13 4-epsilon (III): calculation of critical exponents
Lecture 14 4-epsilon (IV): integration of RG equations, crossover
Lecture 15 Non-linear sigma model, asymptotic freedom
Lectures 16 and 17 K-T transition of the XY model
More is different -- the famous paper by P. W. Anderson.
Homework assignments
HW1: Goldenfeld book "Lectures on Phase Transitions and RG", Chapter 3, Exercises 3.1, 3.2, 3.3, due April 18, in class. Solutions posted April 22.
HW2: Goldenfeld book "Lectures on Phase Transitions and RG", Chapter 5, Exercises 5.1, 6.3; Chapter 7, Exercise 7.1, due May 14, in class. Solutions posted on May 22.
HW3: Goldenfeld book "Lectures on Phase Transitions and RG", Chapter 9, Exercises 9.1, 9.3; Chapter 12, Exercise 12-3, due May 28, in class. Solutions posted on Jun. 11.
Final projects (more to be added).
Exact solution to the 2D Ising model. Ref: Schultz, Lieb, and Mattis, Rev. Mod. Phys. 36, 856 (1964).
Properties of the quantum Ising model. Chapter 4 of the book "Quantum Phase Transitions" by S. Sachdev.
Fluctuation-induced first-order phase transition (Weinberg-Coleman mechanism) (I). Ref: B. I. Halperin, T. C. Lubensky and Shang-keng Ma, PRL 47, 1469 (1974).
Fluctuation-induced first-order phase transition (Weinberg-Coleman mechanism) (II). Ref: "Radiative Corrections as the Origin of Spontaneous Symmetry Breaking", Coleman and Weinberg, Phys. Rev. D 7, 1888 (1973), or Peskin book p. 469.
Quantum critical behavior of the Heisenberg model in 2D. Ref: S. Chakravarty, B. I. Halperin, and D. R. Nelson, PRB 39, 2344 (1989).
Mermin-Wagner theorem and related topics. Ref: Auerbach's book "Interacting Electrons and Quantum Magnetism", Chapter 6.
Application of the non-linear sigma model to quantum spin chains. Ref: Auerbach's book "Interacting Electrons and Quantum Magnetism", Chapters 12 and 14.
RG for dynamic systems. Ref: Goldenfeld's textbook.
RG in the field theory method: the Callan-Symanzik equation. Ref: Peskin's textbook, Chapters 12 and 13.
Quantum phase transitions of itinerant electrons, Hertz-Millis. Ref: Ben Simons' textbook "Condensed Matter Field Theory", Chapter 8, problem 8.8.2.
The density-matrix renormalization group: Rev. Mod. Phys. 77, 259 (2005), U. Schollwock.
An introduction to lattice gauge theory and spin systems: Rev. Mod. Phys. 51, 659 (1979), John B. Kogut.
Quantum phase transitions: Rev. Mod. Phys. 69, 315 (1997), S. L. Sondhi, S. M. Girvin, J. P. Carini, and D. Shahar.
Criticality on fractals: arXiv:1404.6311, "Quantum criticality from Ising model on fractal lattices", Beni Yoshida, Aleksander Kubica.
Back to home
Last modified: Jan 7, 2010.
Published Time: 2017-08-11T09:20:17-07:00 Generating a board from a list of moves - Chess Forums - Chess.com
===============
Generating a board from a list of moves
joepezzula Aug 11, 2017 #1 I'm playing chess with my uncle over email. We have a list of about 20 moves thus far, but each time I get a new email I have to go to a site and move the pieces digitally, all 20 moves, to get the board up to date. I know, I know I can do it physically with a real board, but is there any site / program where I can copy and paste the list of moves, and a board of that status is generated automatically? Probably makes me lazy.
BeepBeepImA747 Aug 11, 2017 #2 Chess.com daily chess
joepezzula Aug 11, 2017 #3 ? My uncle is not going to sign up for that. We're doing an old-school play by listing the moves, but I'm more of a visual person so wanted to see if something online existed to type in the moves.
Sqod Aug 11, 2017 #4 If you learn FEN you can write down the board position within about one minute, then paste that into chess.com's Analysis board to get the corresponding board displayed.
Martin_Stahl Aug 11, 2017 #5 Or use a chess DB program to keep track of your games.
Sqod Aug 11, 2017 #6 Or if your computer has Windows it should have Chess Titans, where you can play the moves on that board (in Human vs Human mode) and save the game as you exit the program.
xman720 Aug 11, 2017 #7 Or you can keep track of the game in pgn format and paste that into an analysis board. 1: e4 e5 2: Nf3 Nc6 3: Bb5 a6 is valid pgn format as long as the asterisk is at the end.
02rup Dec 23, 2017 #8 1+
02rup Dec 23, 2017 #9 Sorry. Tried and failed to post pgn 😕
wistiti3000 Dec 23, 2017 #10 I have created a tool which does exactly that from a FEN notation. It's free and fun :
ganapathysubramanian Dec 23, 2017 #11 Is there a software/website that provides printable (black ink only) boards readily?
SeniorPatzer Dec 23, 2017 #12 Sqod wrote: If you learn FEN you can write down the board position within about one minute, then paste that into chess.com's Analysis board to get the corresponding board displayed. Where can I learn FEN?
Sqod Dec 23, 2017 #13 SeniorPatzer wrote: Where can I learn FEN? Start at the same place you would for anything else: Wikipedia. That's what I did.
MARattigan Dec 23, 2017 #14 SeniorPatzer wrote: Sqod wrote: If you learn FEN you can write down the board position within about one minute, then paste that into chess.com's Analysis board to get the corresponding board displayed. Where can I learn FEN? You don't really need to. If you have a chess program, save the starting position as a PGN, then copy your moves to the end (replacing anything after the stuff in square brackets) with a text editor. It should reload into your chess program with the game so far. Click on the right hand barred arrow or the last move or whatever to get the final position. You don't need move numbers or anything else just so long as the moves are in the right order. Make sure if you have pawn promotions to write e.g. c8=Q rather than c8Q. If you don't have a chess program download Tarrasch from here
ActuallySleepy Dec 23, 2017 #15 Why not just get a real board to keep up with it?
thNaksbsbs Oct 21, 2023 #16 [Event "Vs. Computer"] [Site "Chess.com"] [Date "2023-10-21"] [White "Francis e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. Ba4 d6 5.
Bxc6+ bxc6 6. d4 Bg4 7. dxe5 Bxf3 8. Qxf3 dxe5 9. O-O f6 10. Rd1 Bd6 11. Nc3 Ne7 12. a4 O-O 13. Be3 Rb8 14. b3 Ng6 Qe2 Nf4 16. Qxa6 Qd7 17. Bxf4 exf4 18. Qc4+ Kh8 19. Qe2 Rfe8 20. Rd3 f5 21. Qf3 fxe4 22. Nxe4 Qe7 23. a5 Qxe4 24. Qxe4 Rxe4 25. a6 Ra8 26. Kf1 Be5 27. c3 Kg8 28. Ra5 Kf7 29. f3 Re3 30. Rxe3 fxe3 31. Rxe5 Rxa6 32. Rxe3 Ra3 33. b4 Ra1+ Kf2 Ra2+ 35. Re2 Rxe2+ 36. Kxe2 Ke6 37. c4 Kd6 38. Ke3 Ke5 39. h4 Kd6 40. Kd4 Ke6 41. Kc5 Kd7 42. g4 Kc8 43. Kxc6 Kd8 44. f4 1-0
MARattigan Oct 22, 2023 #17 That one won't do much good - it's got 4 "["s and only 3 "]"s and an odd number of unescaped double quotes. Just copy your moves into a text file on as many lines as required, using not more than 254 characters per line and not splitting any individual moves across lines. E.g.: e4 e5 Nf3 Nc6 Bb5 Then select "Learn"->"Analysis" from the menu on this site and drag the file into the area that says, "Paste one or more PGNs, or drag and drop your PGN file here" and click "Add Game(s)". Your moves are then listed. Click on the last move and your position is displayed. E.g. with the moves shown: then add your chosen move and your opponent's response to your text file. Bit inconvenient that it gives you an analysis, but just don't look.
MARattigan Oct 22, 2023 #18 xman720 wrote: Or you can keep track of the game in pgn format and paste that into an analysis board. 1: e4 e5 2: Nf3 Nc6 3: Bb5 a6 is valid pgn format as long as the asterisk is at the end. To be valid, the asterisk at the end of an incomplete game is required for both import and export pgns as you say, but most programs, including chess.com, don't require it in an import pgn. I think the relevant specs are the 1994 version and the 2010 version. In both versions the colons following the move numbers are invalid, but they are accepted in import format by some programs (e.g. Arena) - unfortunately not chess.com. For an import pgn, the specifications require an arbitrary sequence of white space characters followed by an arbitrary number of periods (which I believe is American for full stops). So valid (and accepted by chess.com Analyse) would be e.g. 1..e4 1 .........e5 Nf3 2Nc6 Bb5 3 a6 But the move numbers are not required for an import pgn by either the specs or chess.com, so e4 e5 Nf3 Nc6 Bb5 a6 is simplest (and also accepted by chess.com Analyse, though strictly invalid without the final asterisk).
li-kirb7 Mar 13, 2025 #19 d4 e5 2. Nf3 Bb4+ 3. Nc3 exd4 4. Nxd4 d5 5. a3 Bc5 6. b4 Bxd4 7. Qxd4 Bf5 8. Qxd5 Bd7 9. Qxb7 Nc6 10. b5 Na5 11. Qa6 Nc6 12. bxc6 Qc8 13. Qb7 Qb8 14. Qxb8+ Rxb8 15. cxd7+ Kxd7 16. Nd5 Kc8 17. Bf4 Nf6 18. Nxc7 Ne8 19. Na6 Rb7 20. e4 Rb6 Be3 Rb2 22. Rc1 Ra2 23. Bc5 Nc7 24. Bc4 Rb2 25. O-O Re8 26. Bxa7 Rb7 27. Bxf7 Rxe4 28. Nc5 Rxa7 29. Nxe4 Rxa3 30. Ra1 Ra8 31. Nd6+ Kd7 32. Rfd1 Kd8 33. Bh5 g6 34. Bf3 Rb8 35. h4 Rb6 36. Nf7+ Kc8 37. Rd8# 1-0
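If you'd rather script the replay than paste moves into a website, the same thing can be done in a few lines. A minimal sketch, assuming the third-party python-chess package (pip install python-chess); the move list is just the short example quoted earlier in the thread:

```python
# Minimal sketch using the third-party python-chess package (pip install python-chess):
# replay a pasted move list (SAN, move numbers stripped) and print the board and FEN.
import chess

moves = "e4 e5 Nf3 Nc6 Bb5 a6".split()   # the emailed move list

board = chess.Board()
for san in moves:
    board.push_san(san)                  # raises ValueError if a move is illegal/ambiguous

print(board)        # ASCII diagram of the current position
print(board.fen())  # FEN string you can paste into any analysis board
```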
Generating functions involving the binomial coefficients $\binom{4n}{2n}$, their squares and reciprocals, and closed forms for the associated hypergeometric expressions

Narendra Bhandari, Bajura district, Nepal
[email protected]
27 February 2021

Abstract. In this paper we consider infinite series pertaining to the binomial coefficients $\binom{4n}{2n}$. We study several classes of generating functions containing the coefficients $\binom{4n}{2n}$, their squares $\binom{4n}{2n}^{2}$ and their reciprocals $\binom{4n}{2n}^{-1}$, by utilizing the generating function and an integral representation of the central binomial coefficients. We also discuss generating functions for variant forms of the main results and attempt to give closed forms for the respective hypergeometric expressions.

Key words: central binomial coefficients, generating function, dilogarithm function, hypergeometric function.

1 Introduction

Central binomial coefficients are the positive integers that appear exactly in the middle of the even-numbered rows of Pascal's triangle; we define them by
$$\binom{2n}{n} = \frac{(2n)!}{(n!)^{2}}, \qquad \forall\, n \ge 0.$$
Interestingly, from the binomial series expansion of the function $(1-4x)^{-1/2}$ for $|x| < 1/4$, these coefficients show up, providing us with the generating function
$$\frac{1}{\sqrt{1-4x}} = \sum_{n=0}^{\infty}\binom{2n}{n}x^{n} = 1 + 2x + 6x^{2} + 20x^{3} + 70x^{4} + \cdots$$
A proof of this result can easily be found in classical combinatorics books; a classical proof is discussed in the cited reference (see page 2, Lemma 1.1). The study of central binomial coefficients has long been pursued, resulting in intriguing identities and theorems in number theory, combinatorics and calculus, in the study of infinite series and integrals. Many intriguing results and power series were studied and discussed by Lehmer.

This paper studies the coefficients of this central type that appear at the even positions, namely $1, 6, 70, 924, 12870, \ldots$, i.e. the coefficients defined by
$$\binom{4n}{2n} = \frac{(4n)!}{((2n)!)^{2}}, \qquad \forall\, n \ge 0.$$
Since we have the generating function of the central binomial coefficients, it is easy to deduce, with its aid, the generating function for the "evened" central binomial coefficients $\binom{4n}{2n}$. In other words,
$$\sum_{n=0}^{\infty}\binom{4n}{2n}x^{n} = \frac{1}{\sqrt{2}}\sqrt{\frac{1+\sqrt{1-16x}}{1-16x}}, \qquad |x| < \tfrac{1}{16}. \tag{1}$$
A proof is discussed in the article (see page 4, Lemma 2.1).

2 Generating function

We now consider five sequences pertaining to the even central binomial coefficients, together with their reciprocal and squared versions, for which we shall derive generating functions. We define
$$A_n = \binom{4n}{2n}, \quad B_n = \frac{1}{n}\binom{4n}{2n}, \quad C_n = \frac{1}{n^{2}}\binom{4n}{2n}, \quad D_n = \binom{4n}{2n}^{2}, \quad E_n = \binom{4n}{2n}^{-1}.$$
Since we already know that, for $|x| < 1/16$,
$$\sum_{n=0}^{\infty}A_n x^{n} = \frac{1}{\sqrt{2}}\sqrt{\frac{1+\sqrt{1-16x}}{1-16x}},$$
dividing by $x$ and integrating from $0$ to $z$, for $|z| \le 1/16$, gives
$$\sum_{n=1}^{\infty}B_n z^{n} = 4\ln 2 - \ln\left(1+\sqrt{1-16z}\right) - 2\ln\left(\sqrt{2}+\sqrt{1+\sqrt{1-16z}}\right), \tag{2}$$
which is the strategy used in my earlier article (page 6); however, we shall give a different approach to (2). The central idea of the paper for the generating functions is based heavily on differentiation and integration of the resulting power series. Since the infinite sums are expressed in terms of hypergeometric functions, from which closed forms in elementary functions are cumbersome to obtain directly, an auxiliary focus of the paper is to give possible elementary results for the respective hypergeometric expressions.

3 Theorems and Proofs

Theorem 3.1 (First main result). If $B_n = \frac{1}{n}\binom{4n}{2n}$, $n \ge 1$, then for $|z| \le \frac{1}{16}$ the following equality holds:
$$\sum_{n=1}^{\infty}B_n z^{n} = 4\ln 2 - \ln\left(1+\sqrt{1-16z}\right) - 2\ln\left(\sqrt{2}+\sqrt{1+\sqrt{1-16z}}\right). \tag{3}$$
Before we construct the proof of the theorem we need the following lemma.

Lemma 3.1.1. For all $a, b > 0$,
$$\int_{0}^{\pi/2}\ln\left(a^{2}\cos^{2}x + b^{2}\sin^{2}x\right)dx = \pi\ln\frac{a+b}{2}.$$
Proof: The proof of the lemma, based on logarithmic series manipulation, is given in [1].

Proof of Theorem 3.1. We make use of the Wallis integrals, namely
$$W_{2n} = \int_{0}^{\pi/2}\sin^{2n}x\,dx = \frac{\pi}{2}\binom{2n}{n}\frac{1}{4^{n}}, \qquad W_{4n} = \int_{0}^{\pi/2}\sin^{4n}x\,dx = \frac{\pi}{2}\binom{4n}{2n}\frac{1}{16^{n}}.$$
Rearranging the latter identity gives $\binom{4n}{2n} = \frac{2}{\pi}\int_{0}^{\pi/2}16^{n}\sin^{4n}x\,dx$, and plugging this value of $\binom{4n}{2n}$ into (3) we get
$$\sum_{n=1}^{\infty}B_n z^{n} = \frac{2}{\pi}\int_{0}^{\pi/2}\left(\sum_{n=1}^{\infty}\frac{(16z\sin^{4}x)^{n}}{n}\right)dx = -\frac{2}{\pi}\int_{0}^{\pi/2}\ln\left(1-16z\sin^{4}x\right)dx.$$
By factoring we observe that $1-16z\sin^{4}x = (1+4\sqrt{z}\sin^{2}x)(1-4\sqrt{z}\sin^{2}x)$, so
$$\sum_{n=1}^{\infty}B_n z^{n} = -\frac{2}{\pi}\int_{0}^{\pi/2}\ln(1+4\sqrt{z}\sin^{2}x)\,dx - \frac{2}{\pi}\int_{0}^{\pi/2}\ln(1-4\sqrt{z}\sin^{2}x)\,dx.$$
We can write $1 \pm 4\sqrt{z}\sin^{2}x = \cos^{2}x + \left(\sqrt{1\pm4\sqrt{z}}\right)^{2}\sin^{2}x$, and by Lemma 3.1.1 it follows that
$$\sum_{n=1}^{\infty}B_n z^{n} = 4\ln 2 - \underbrace{2\left[\ln\left(1+\sqrt{1+4\sqrt{z}}\right) + \ln\left(1+\sqrt{1-4\sqrt{z}}\right)\right]}_{Q}. \tag{4}$$
We simplify $Q$ by elementary algebra. Writing $a = \sqrt{1+4\sqrt{z}}$ and $b = \sqrt{1-4\sqrt{z}}$, we have $ab = \sqrt{1-16z}$ and $(a+b)^{2} = 2 + 2\sqrt{1-16z}$, so
$$Q = 2\ln\left[(1+a)(1+b)\right] = 2\ln\left[1+\sqrt{1-16z} + \sqrt{2}\sqrt{1+\sqrt{1-16z}}\right],$$
and, factoring $\sqrt{1+\sqrt{1-16z}}$ out of the bracket, by the properties of logarithms it follows that
$$Q = \ln\left(1+\sqrt{1-16z}\right) + 2\ln\left(\sqrt{2}+\sqrt{1+\sqrt{1-16z}}\right).$$
Hence combining $4\ln 2 - Q$ gives us the desired equality (3), and this completes the proof.

Corollaries: If $y = \operatorname{sgn}(z)$ is the signum function, then for real $|z| \le 1/16$,
$$\sum_{n=1}^{\infty}y^{n}B_n z^{n} = 4\ln 2 - \ln\left(1+\sqrt{1-16yz}\right) - 2\ln\left(\sqrt{2}+\sqrt{1+\sqrt{1-16yz}}\right).$$
Proof: The proof is trivial, as it amounts to replacing $z$ by $-z$ (and vice versa) in (3); at $z = 0$ we have $0 = 3\ln 2 - 2\ln\sqrt{8} = 0$, which is true.

Similarly, using (1), for $|x| < 1$ it is straightforward to deduce the following power series equalities:
$$\sum_{n=0}^{\infty}\frac{y^{n}}{16^{n}}\binom{4n}{2n}\frac{x^{n}}{n+1} = \frac{8 - 2^{3/2}\left(1+\sqrt{1-xy}\right)^{3/2}}{3xy}, \tag{5}$$
$$\sum_{n=0}^{\infty}\frac{y^{n}}{16^{n}}\binom{4n}{2n}\frac{x^{n}}{2n+1} = \frac{\sqrt{2\left(1-\sqrt{1-xy}\right)}}{\sqrt{xy}} = \frac{\sqrt{2}}{\sqrt{1+\sqrt{1-xy}}}. \tag{6}$$
Proof: Integrating (1) gives (5), and replacing $x$ by $x^{2}$ in (1) followed by integration yields the required equality (6). Also, subtracting (5) from twice (6) gives rise to the generating function
$$\sum_{n=0}^{\infty}\frac{y^{n}\binom{4n}{2n}}{16^{n}}\frac{x^{n}}{(2n+1)(n+1)} = \frac{2\sqrt{2}}{\sqrt{1+\sqrt{1-xy}}} - \frac{8 - 2^{3/2}\left(1+\sqrt{1-xy}\right)^{3/2}}{3xy}.$$
We now evaluate (3) at some values of $z$. When $z = \pm\frac{1}{16\sqrt{5}}$ we observe the appearance of the golden ratio $\phi$ and its reciprocal in the final closed form, with alternation of sign, i.e.
$$\sum_{n=1}^{\infty}\frac{1}{n}\binom{4n}{2n}\left(\frac{\pm 1}{16\sqrt{5}}\right)^{n} = 4\ln 2 - \ln\left(1+\sqrt{\frac{2\phi^{\mp 1}}{\sqrt{5}}}\right) - 2\ln\left(\sqrt{2}+\sqrt{1+\sqrt{\frac{2\phi^{\mp 1}}{\sqrt{5}}}}\right).$$
Also, for $z = -\frac{1}{64}$ in (3) we get an identity in terms of the golden ratio:
$$\sum_{n=1}^{\infty}\frac{(-1)^{n}}{n}\binom{4n}{2n}\frac{1}{64^{n}} = 6\ln 2 - \ln\left(1+2\phi\right) - 2\ln\left(2+\sqrt{1+2\phi}\right).$$
Since
$$\sum_{n=1}^{\infty}B_n z^{n} = 6z\,{}_{4}F_{3}\!\left(1,1,\tfrac{5}{4},\tfrac{7}{4};\tfrac{3}{2},2,2;16z\right),$$
whose corresponding simpler form equals (3), the result reduces a complicated-looking hypergeometric expression to a simple one. In the next section we investigate the power series for $\frac{1}{n^{2}}\binom{4n}{2n}$; the work for the desired power series is based entirely on (3). Here $\operatorname{Li}_{2}(x)$ denotes the dilogarithm function, which we will encounter in the course of the work.
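Both (1) and (3) are easy to sanity-check numerically. The sketch below is my addition rather than part of the paper; it compares truncated partial sums against the stated closed forms at a sample point inside the region of convergence.

```python
from math import comb, log, sqrt

def lhs_eq1(x, terms=200):
    # Partial sum of  sum_{n>=0} C(4n,2n) x^n
    return sum(comb(4 * n, 2 * n) * x**n for n in range(terms))

def rhs_eq1(x):
    # (1/sqrt(2)) * sqrt((1 + sqrt(1-16x)) / (1-16x))
    return sqrt((1 + sqrt(1 - 16 * x)) / (1 - 16 * x)) / sqrt(2)

def lhs_eq3(z, terms=200):
    # Partial sum of  sum_{n>=1} C(4n,2n) z^n / n
    return sum(comb(4 * n, 2 * n) * z**n / n for n in range(1, terms))

def rhs_eq3(z):
    w = sqrt(1 - 16 * z)
    return 4 * log(2) - log(1 + w) - 2 * log(sqrt(2) + sqrt(1 + w))

x = 0.03  # inside |x| < 1/16
print(lhs_eq1(x), rhs_eq1(x))   # both ≈ 1.2865
print(lhs_eq3(x), rhs_eq3(x))   # both ≈ 0.2239
```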
Theorem 3.2.(Second main result) If Cn =  1 n2 4n 2n  n≥1 and for |v| ≤ 1 16 δ(v) = p 1 + √1 −16z, then the following equality holds for ∞ X n=1 Cnvn = M+2Li2 1 2 −δ(v) 2 √ 2  +Li2  1 −δ2(v) 2  +4Li2  −δ(v) √ 2  −ln2 √ 2 + δ(v)  + 4 ln 2 ln |v|−ln2 (δ2(v)) 2 −3 ln 2 ln  | √ 2 −δ(v)|  −ln 2 ln |2 −δ2(v)|  −ln 2 ln δ2(v) where M is constant which is 2ζ(2) + 45 4 ln2(2). Proof of theorem 3.2: The proof can be proceed in the same way like that of theorem 3.1 however, we encountered integral 2 π Z π 2 0 Li2(16v sin4 x)dx which become more complicated to solve and to develop the proof of it in an easy way we make use of the result (3). Now dividing the power series obtained in (3) by z and on integrating from 0 to v gives us I(v) ∞ X n=1 Cnvn = Z v 0 4 ln 2 −ln 1 + √1 −16z  −2 ln √ 2 + p 1 + √1 −16z  z dz 6 It is easy to see that the integral has primitive in terms of dilogarithm and logarithmic functions and by applying the linearity of integral we see that primitive of it blows up at it’s lower limit so we treat I(v) an an improper integral ie; lim ϵ→0+ Z v ϵ f(z)dz = F(v) −lim ϵ→0+ F(ϵ) = F(v) −M where f(z) is the integrand and F(v) is an antiderivative. Now by linearity we see that I(v) = 4 ln 2 ln z − Z ln(1 + √1 −16z) z dz | {z } J1 −2 Z ln( √ 2 + p 1 + √1 −16z) z dz | {z } J2 We evaluate J1 by making substitution u = √1 −16z and by partial fractions decomposition we have then J1 = 2 Z u ln(1 + u) u2 −1 du PFD = ln2(1 + u) 2 + Z ln(1 + u) u −1 du Now we set u −1 = w giving us Z ln(2 + w) w dw = Z ln 2dw w + Z ln 1 + w 2  w dw = ln 2 ln w −Li2  −w 2  and hence combining the last two obtained primitive and making undo of each substitution we get J1 equal to Li2 1 −√1 −16z 2  −ln2 1 + √1 −16z  2 −ln(2) ln 1 − √ 1 −16z  + C1 In similar fashion, to evaluate J2 we substitute q 1 + √ 1 −16z = s and by making partial fraction of obtained result after substitution it yields J2 Z 4(s2 −1) ln √ 2 + s  s(s2 −2) ds PFD = Z ln s + √ 2  s2 −2 ds + 2 Z ln s + √ 2  s ds as last three integrals are trivial with primitives Z ln √ 2 + s  s + √ 2 ds = ln2 s + √ 2  2 and with further substitution of −s + √ 2 = t and undoing the substitution 7 we have Z ln 2 √ 2 −t  t dt = −3 2 ln 2 ln √ 2 −s  + Li2 2 − √ 2s 4 ! and 2 Z ln s + √ 2  s ds = ln 2 ln(s) −2Li2  −s √ 2  . Undoing the each sub-stitution made gives us J2 equal to −ln 2 ln δ2(v)  +2Li2 2 − √ 2 p 1 + √1 −16z 4 ! +4Li2 − p 1 + √1 −16z √ 2 ! −ln2 √ 2 + q 1 + √ 1 −16z  −3 ln 2 ln √ 2 − q 1 + √ 1 −16z  + C2 Thus on combining the results 4 ln 2 ln(z) −J1 −J2 and by fundamental theorem of calculus I(v) = F(v) −lim ϵ→0+  4Li2 (−1) −ln2  2 √ 2  −3 ln2(2) 2 −F(ϵ)  where F(ϵ) = 4 ln 2 ln ϵ−3 ln 2 ln √ 2 − q 1 + √ 1 −16ϵ  −ln 2 ln 1 − √ 1 −16ϵ  Now it enough to show that lim ϵ→0+ F(ϵ) = −lim ϵ→0+ ln    √ 2 − p 1 + √1 −16ϵ 3 1 −√1 + 16ϵ  ϵ4   ln 2 and with rationalization of the numerator and simplification gives us −lim ϵ→0+ ln    164  1 + p 1 + √ 16ϵ 4 √ 2 + p 1 + √1 + 16ϵ    ln 2 = −15 2 ln2 2 So I(v) = F(v) + 2ζ(2) + 9 4 ln2 2 + 1 2 ln2 2 + 15 2 ln2 2 = F(v) + 2ζ(2) + 45 4 ln2 2 | {z } M 8 where F(v) is Li2 1 2 − √1 −16v 2  −1 2 ln2 1 + √ 1 −16v  −ln 2 ln 1 − √ 1 −16v ! +4 ln 2 ln |v| + 2Li2 1 2 − p 1 + √1 −16v 2 √ 2 ! −ln2 √ 2 + q 1 + √ 1 −16v  +4Li2 − p 1 + √1 −16v √ 2 ! −3 ln 2 ln √ 2 − q 1 + √ 1 −16v ! and for convenience we write δ(v) = p 1 + √1 −16v for |v| ≤1/16 and com-bining F(v) and M we get the desired result with the completion of proof of the theorem. 
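By the same Wallis-integral representation used in the proof of Theorem 3.1, the series $\sum_{n\ge1} \binom{4n}{2n} v^{n}/n^{2}$ equals $\frac{2}{\pi}\int_{0}^{\pi/2}\operatorname{Li}_{2}(16v\sin^{4}x)\,dx$, the integral mentioned at the start of the proof above. The sketch below is my addition (not the author's); it cross-checks the series against that integral numerically at $v = 1/32$, using the third-party mpmath library.

```python
from math import comb
import mpmath as mp

mp.mp.dps = 30  # working precision (decimal digits)

def series(v, terms=400):
    # Partial sum of  sum_{n>=1} C(4n,2n) v^n / n^2
    return mp.fsum(mp.mpf(comb(4 * n, 2 * n)) * v**n / n**2
                   for n in range(1, terms))

def integral(v):
    # (2/pi) * integral_0^{pi/2} Li_2(16 v sin(x)^4) dx
    f = lambda x: mp.polylog(2, 16 * v * mp.sin(x)**4)
    return 2 / mp.pi * mp.quad(f, [0, mp.pi / 2])

v = mp.mpf(1) / 32
print(series(v))    # ≈ 0.20882
print(integral(v))  # should agree to many digits
```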
For the alternating version we replace v by −v and the sum due to Wolfram alpha generates hypergeometric expression 65F4  1, 1, 1, 5 4, 7 4; 3 2, 2, 2, 2; 16v  v which doesn’t seems easy to be reducing to the second main result. How-ever,the strategy above works effectively to provide elementary answer to it. Corollaries: Due to the theorem 3.2, we get some crazy results which are quite straight-forward to show that following identities holds. ∞ X n=1 1 n2 4n 2n  1 16n = 7π2 12 −25 4 ln2 2−4Li2  1 √ 2  +2Li2 2 − √ 2 4 ! −ln2  1 + √ 2  +3 ln 2 ln  1 + √ 2  ≈0.5081222068 · · · (7) ∞ X n=1 (−1)n n2 4n 2n  1 16n = Li2 1 − √ 2 2 ! + 4Li2  − s 1 + √ 2 2  + π2 3 −19 4 ln2 2 +2Li2 √ 2 − p 1 + √ 2 2 √ 2 ! −ln2(1 + √ 2) 2 −ln2 q 1 + √ 2 + √ 2  −3 ln 2 ln q 1 + √ 2 − √ 2  ≈−0.32379214 · · · . (8) 9 As the main result boils down to the integral (see highlighted aforementioned result in red) and last two corollaries too provide the closed form for the integral in particular for v = 1/16 and v = −1/16 ie; ∞ X n=1 1 n2 4n 2n   ± 1 16 n = 2 π Z π 2 0 Li2 ± sin4 x  dx = ( (7) if + sign (8) if −sign And similarly, for any |v| ≤1/16 we can now easily deduce the closed from for hypergeometric forms as well as for the aforementioned integral in red. Since we discussed above on the generating function for 1 n2 4n 2n  and following next section highlights it’s light on the power series for squared of coefficients 4n 2n  or 4n 2n 2 in which work is accompanied by Elliptical integrals of first and second kind with their usual notations K(x) and E(x) respectively along with some intriguing identities. Theorem 3.3. ( third main result) If Dn = (4n 2n 2) n≥0 and for all |w| < 1/256, then the following equality holds. ∞ X n=0 Dn  w2 256 n = √ 2 π Z π 2 0 s 1 + p 1 −w sin4 y 1 −w sin4 y dy = K (√w) −K (−√w) π where K(x) is complete elliptical integral of the first kind. Proof of theorem 3.3: The proof is constructed in a such way where we avoid the evaluation the integral appearing in the main result. To do so we now exploit the power series of An and integral representation of 4n 2n  to obtained the desired integral. ∞ X n=0 4n 2n 2  w 256 n = 2 π Z π 2 0 ∞ X n=0 1 16n 4n 2n  (w sin4 y)ndy and due to result (1) it follows that ∞ X n=0 4n 2n 2  w 256 n = √ 2 π Z π 2 0 s 1 + p 1 −w sin4 y 1 −w sin4 y dy (9) 10 And by the definition of complete elliptical integral of the first kind K(w) = Z π 2 0 dθ p 1 −w2 sin2 θ = π 2 ∞ X n=0 1 16n 2n n 2 w2n, (10) Replacing w by √w and w →−√w. Adding series (9) at √w and −√w gives ∞ X n=0 4n 2n 2  w2 256 n = K(√w) + K(−√w) π (11) and from (9) and (11) we get the required result. The last relation (11) corresponds to the hypergeometric expression X n≥0 Dn  w2 256 n = 4F3 1 4, 1 4, 3 4, 3 4; 1 2, 1 2, 1; w  = K(√w) + K(−√w) π reducing the unpleasant and complex look of hypergeometric expression into simpler form of Elliptical integrals. Equation (11) now for any |w| < 1/256 makes it possible to find the closed form in terms of elliptical form which are not nicer in look. So, we give some beautiful identities by the utility of (11). Some intriguing identities and integral representation In this section, we mention some intriguing identities on series and their corresponding integral representations involving the squared even central bi-nomial coefficients 4n 2n 2 by the explicit use of the theorem 3.3 and generating functions (5) and (6) respectively. 
∞ X n=0 4n 2n 2 256n(2n + 1) = Z 1 0 K(√w) + K(−√x) π dw = 2 π −2 √ 2π Γ2 1 4  + Γ2 1 4  2π √ 2π (12) Proof : As we have already established the relation in (11) and we merely do integrate (11) within the interval of w ∈[0, 1] yielding. ∞ X n=0 4n 2n 2 256n(2n + 1) = Z 1 0 K(√w) + K(−√w) π dw (13) Equation (11) breaks down to the integral appearing in (14) and (15) respec-tively and by definition of complete elliptical integrals of first kind Z 1 0 dθ π Z π 2 0 1 p 1 −w sin2 θ + 1 √ 1 + w sin2 θ ! dw = 2 π+ 1 π Z π 2 0 dθ 1 + √ 1 + sin2 θ 11 For the fun purpose we handle (13) by the series manipulation, so by (10) and integrating we obtained the following series π 2 ∞ X n=0 (±1)n 16n(n + 1) 2n n 2 = Z π 2 0 " ∞ X n=0 (±1)n 4n 2n n sin2n θ n + 1 # dθ (14) Using the generating function of central binomial coefficients (14) is easily deducible to Z π 2 0 " ∞ X n=0 (±1)n 4n 2n n  sin2n θ (n + 1) # dθ = Z π 2 0 dθ sin2 θ Z sin2 θ 0 dj √1 ± j (15) Further with respective sign in (15),on integration it got reduced to two different integrals and adding them; Z π 2 0 2 −2 cos θ sin2 θ −2 Z π 2 0 1 − √ 1 + sin2 w sin2 w dθ = 2 + Z π 2 0 2dθ 1 + √ 1 + sin2 θ (16) Due integration by part it is easy to see Z π 2 0 2dθ 1 + √ 1 + sin2 θ = −2 Z π 2 0 1 − √ 1 + sin2 θ sin2 θ θ PFD = 2 Z π 2 0 1 −sin2 θ √ 1 + sin2 θ dθ = Z π 2 0 4dθ √ 1 + sin2 θ −2 Z π 2 0 p 1 + sin2 θdθ = 4K(−1) −2E(−1) with E(m) being complete Elliptical integral of second kind. By standard definition of Elliptical integrals 4K(−1) −2E(−1) equates to Z 1 0 2 −2u2 √ 1 −u4du = 1 2B 1 4, 1 2  −1 2B 3 4, 1 2  = Γ2 1 4  2 √ 2π −(2π)3/2 Γ2 1 4  Dividing (16) by π and combining with last gives the required result. From the above conclusion we also draw the following integral equality Z √ 2 1 udu (1 + u) p (2 −u2)(u2 −1) = π 2 Γ4 1 4  −8π2 (2π)3/2Γ2 1 4  ! = Gπ 2 −1 Gπ for the integral in (16) and we express the last identity in terms of constant called Gauss Constant,G. 12 Following the techniques used for (13) it is quite trivial to deduce the follow-ing elegant identities. ∞ X n=0 4n 2n 2 256n(n + 1) = 20 9π + Γ2 1 4  9π √ 2π + 4 √ 2π 3Γ2 1 4  (17) ∞ X n=0 4n 2n 2 256n(2n + 1)(n + 1) = 16 9π −16 √ 2π 3Γ2 1 4  + 4 √ 2Γ2 1 4  9π3/2 (18) Relation (18) directly follows from (13) and (17). Interestingly, author noted the dazzling identities which the author mentioned as ∞ X n=0 4n 2n 2 28n+3(2n + 1)2 = 1 2 ∞ X n=0 (1 + (−1)n) 16n(n + 1)2 2n n 2 = 1 π − √ 2π Γ2 1 4  (19) ∞ X n=0 1 16n+1 4n 2n 2 16n(n + 1)2 − 2n 2n 2 2−1(n + 2)2 ! = 2 √ 2π 3Γ2 1 4  −2 9 + √ 2Γ2 1 4  27π3/2 (20) The validity and accuracy of the closed form has been confirmed by use of computer algebra system which are expressed in hypergeometric form. Now employing (5) and (6) we give the some bewitching integrals form for (17) and (12) respectively. ∞ X n=0 4n 2n 2 256n(2n + 1) = 2 π Z π 2 0 ∞ X n=0 4n 2n  sin4n θ 16n(2n + 1)dθ (6) = 2 √ 2 π Z π 2 0 dθ q 1 + p 1 −sin4 θ ∞ X n=0 4n 2n 2 256n(n + 1) (5) = 4 3π Z π 2 0 √ 2  1 + cos θ √ 1 + sin2 θ  q 1 + p 1 −sin4 x −4 cos2 θ(1 + sin2 θ) −1 dθ These integrals are in agreement with (12) and (17) indeed via computer check. Also for (18) we conclude an offbeat integral form 2 π Z π 2 0 6 √ 2 sin4 θ −  8 −23/2  1 + p 1 −sin4 θ 3/2 p 1 + cos θ √ 1 + sin2 θ 3 p 1 + cos θ √ 1 + sin2 θ dθ 13 which is merely the combination of last two integrals or generating function of sum in (18) with an output of 16 9π −16 √ 2π 3Γ2 1 4  + 4 √ 2πΓ2 1 4  9π3/2 . 
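Identity (12) above survives the text extraction a little battered; as I read the display, its closed form is $\frac{2}{\pi} - \frac{2\sqrt{2\pi}}{\Gamma^{2}(1/4)} + \frac{\Gamma^{2}(1/4)}{2\pi\sqrt{2\pi}} \approx 1.0899$. The sketch below is my addition; it checks the series against that value. The terms decay only like $1/(4\pi n^{2})$, so many terms are summed, with an iteratively updated ratio to avoid huge intermediate integers.

```python
from math import gamma, pi, sqrt

# Closed form of (12), as read from the garbled display
g = gamma(0.25) ** 2
closed = 2 / pi - 2 * sqrt(2 * pi) / g + g / (2 * pi * sqrt(2 * pi))

# Series  sum_{n>=0} C(4n,2n)^2 / (256^n (2n+1))
c = 1.0          # c_n = C(4n,2n) / 16^n, updated term by term
total = 1.0      # n = 0 term
for n in range(1, 200000):
    c *= (4*n - 3) * (4*n - 2) * (4*n - 1) * (4*n) / (16 * (2*n - 1)**2 * (2*n)**2)
    total += c * c / (2*n + 1)

print(closed)  # ≈ 1.08989
print(total)   # slowly converges to the same value
```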
With the generating function of binomial coefficients 4n 2n 2 our last focus will be on the generating function for the coefficient in reciprocal form which is now we make shed light on it in next section. Theorem 3.4. (fourth main result) If En = 4n 2n −1 and for all |u| < 2 the following equality holds ∞ X n=0 Enu4n = 16 16 −u4 + 2u arcsin u 2  (4 −u2)3/2 −arcsinh u 2  (4 + u2)3/2 ! Proof of theorem 3.4: The notion of proof for the theorem has been provided in but no closed form mentioned so we now be proving the result in general with an alternative way in which the idea of beta integral form of binomial coefficients is exploited. By beta integral form of binomial coefficients we have n k −1 = (n + 1) Z 1 0 yn(1 −y)ndy ⇒ 4n 2n −1 = (4n + 1) Z 1 0 y2n(1 −x)2ndy we multiply both sides by u4n(4n + 1) and followed by summation ∞ X n=0 4n 2n −1u4n 4n + 1 = ∞ X n=0 Z 1 0 u4ny2n(1 −y)2ndy = Z 1 0 ∞ X n=0 u4ny2n(1 −y)2n ! dy we observed the elementary geometric series for all |y| < 1 in the latter result and hence by partial fraction decomposition (PFD) it follows Z 1 0 dy 1 −u4y2(1 −y)2 PFD = 1 2 Z 1 0  dy 1 + u2y(1 −y) + dy 1 −u2y(1 −y)  Last two integrals are standard arctangent intergrals which are trivial to show Z 1 0 dy 1 −u2y2(1 −y2) = 2 tan−1  u √ 4−u2  u √ 4 −u2 + 2 tanh−1  u √ 4+u2  u √ u2 + 4 14 and since tan−1 y = sin−1  y √ 1−y2  and with x 7→ix we get sinh−1 y = tanh−1  y √ 1+y2  where i = √−1 that implies ∞ X n=0 4n 2n −1 u4n 4n + 1 = 2 sin−1 u 2  u √ 4 −u2 + sinh−1 u 2  u √ 4 + u2 ! (21) We multiple by u and on differentaiting with respect to u gives us ∞ X n=0 4n 2n −1 u4n = 2 4 −u2 + 2u sin−1 u 2  (4 −u2)3/2 + 2 4 + u2 −2u sinh−1 u 2  (4 + u2)3/2 adding the results we obtained the desired equality and hence completes the proof. The right hand expression of the theorem also suggests that the alternating series possess bizzare appearance of final result due to the involvement of complex numbers where the extraction of real part is complex to do however, for non alternating case it is pretty straightforward. Also we can observe that the hypergeometric form of the summation X n≥0 Enu4n = 3F2 1 2, 1, 1; 1 4, 3 4; u4 16  , |u| < 2 which is equal to the expression of right hand side of the theorem. Corollaries: For u = 1 the following equality holds ∞ X n=0 4n 2n −1 = 16 15 + π 9 √ 3 −2 log φ 5 √ 5 For the case of alternating series we perform x 7→√x and then x = √−1 which gives ugly result in terms of complex numbers however, the outstanding simplified result is mentioned in which agrees with the actual answer of the sum. Also we can obtained a squared power series of inverse sine and inverse hyperbolic sine. 4 ∞ X n=0 16nu4n+2 4n 2n  (4n + 1)(4n + 2) = arcsin2 u + arcsinh2u (22) 15 The proof of the identity in (22) is easy to sketch as we merely need to transpose u to the left hand side of (21), hence on integrating and simplifying leads us the desired result. Similarly, integrating (22) we further obtained the following taylor series for 4 ∞ X n=0 16nu4n+3 4n 2n  (4n + 1)(4n + 2)(4n + 3) = 2 √ 1 −u2 arcsin u − √ 1 + u2arcsinhu  + u arcsin2 u + arcsinh2u  (23) u = 1 the sum attains the closed form 4 X n≥0 16n 4n 2n  (4n + 1)(4n + 2)(4n + 3) = π2 4 + ln2  1 + √ 2  −2 √ 2 ln(1 + √ 2) which also leads to have simple closed form the complex hypergeometric 2 3 4F3 1 2, 1 2, 1, 1; 5 4, 3 2, 7 4, 1  ≈0.7513 which is numerically equals to the last result we obtained. 4 Theorems and proofs for some special cases We present some exciting three identities deduced from (22) and (23). 
Theorems: The following two identities hold. 4 ∞ X n=0 16n 4n 2n  (4n + 1)2(4n + 2) = 4G + 2Li2  −1 √ 2  −2Li2  −1 − √ 2  −π2 12 +ln2(2) 4 −ln2  1 + √ 2  −2 ln  1 + √ 2  ln  2 + √ 2  where G is Catalan constant. 4 ∞ X n=0 16n 4n 2n  (4n + 1)(4n + 2)2 = π2 4 ln 2 −3 8ζ(3) −ln2  1 + √ 2  ln  2 + 2 √ 2  16 +2 3 ln3  1 + √ 2  + ln 2 ln2  1 + √ 2  + Li2 3 + 2 √ 2  ln 3 + 2 √ 2  2 −Li3 3 + 2 √ 2  2 + iπ ln2 3 + 2 √ 2  4 + ln2 3 + 2 √ 2  ln 2 + 2 √ 2  4 Proofs of theorems: For the first identity we make use of (22) in which we divide both sides by u2 and then integrating from 0 to 1 gives us 4 ∞ X n=0 16n 4n 2n  (4n + 1)2(4n + 2) = Z 1 0 arcsin2 u + arcsinh2u u2 du Applying the integration by parts in the latter expression we get −π2 4 −ln2  1 + √ 2  + 2 Z 1 0 arcsin u u √ 1 −u2du + 2 Z 1 0 arcsinhu u √ 1 + u2du Subbing u by sin y and sinh u in former and latter intergral respectively. 2 Z π 2 0 y sin ydy + 2 Z ln(1+ √ 2) 0 y sinh ydy = 4G + 4 Z ln(1+ √ 2) 0 yey e2y −1dy as the highlighted integral in red is well known result of Catalan constant,G and by substituting y = log t in the last integral then by partial fraction decomposition we obtained 4 Z 1+ √ 2 0 ln t t2 −1dt PFD = 2 " −Li2(1 −t) −Li2(−t) −ln t ln(t −1) #1+ √ 2 1 Definite integral = 2Li2  −1 √ 2  −2Li2  −1 − √ 2  + π2 12 −ln(1 + √ 2) ln(2 + √ 2) where we employ the dilogarithm identity Li2(−z) + Li2(−z−1) = −ζ(2) − ln2(z) 2 for z = − √ 2. Combining the obtained results gives the desired closed form. For the second theorem we again explicitly make use of (22) where we divide both sides by u and carrying the integration from 0 to 1 yields Z 1 0 arcsin2 u + arcsinh2u u du IBP = −2 Z 1 0 arcsin u ln u √ 1 −u2 du−2 Z 1 0 arcsinhu ln u √ 1 + u2 du 17 further by making substitution of u as sin y and sinh y in the aforementioned two integrals respectively gives rise to −2 Z π 2 0 y ln(sin y)dy −2 Z ln(1+ √ 2) 0 y ln (sinh y) dy = π2 4 ln 2 −7 8ζ(3) + I The red integral is straightforward by Fourier series of ln(sin y) and I being the last integral which is our main focus of evaluation. −2 Z ln(1+ √ 2) 0 y ln e2y −1  −y ln(2ey)  dy = −2 Z ln(1+ √ 2) 0 y ln e2y −1  dy+V where V = 2 3arcsinh3(1) + ln 2arcsinh2(1) and for blue integral we perform IBP giving us −arcsinh2(1) ln  2 + 2 √ 2  +2 Z ln(1+ √ 2) 0 y2e2y −1 + e2y dy = P + 1 4 Z 3+2 √ 2 1 ln2 t t −1dt in fact the latter integral is easy to deduce by the magic of IBP. 1 4 " ln(1 −t) ln2 t + 2Li2(t) ln t −2Li3(t) #3+2 √ 2 1 = ζ(3) 2 −Li3 3 + 2 √ 2  2 +ln2 3 + 2 √ 2  ln −2 −2 √ 2  4 −Li2 3 + 2 √ 2  ln 3 + 2 √ 2  2 since ln(−2 −2 √ 2) = iπ + ln(2 + 2 √ 2) and collecting the values of P and V for I we acquire the required closed form. Theorem: The following equality holds. 4 ∞ X n=0 16n 4n 2n  (4n + 1)(4n + 2)(4n + 3)2 = 4G−arcsinh(1)  4 √ 2 + log  3 −2 √ 2  where G being Catalan constant and this result is acquired from (23). The integrals to be evaluated are trivial and in the final closed answer we come across a dilogarithm expression Li2(1 − √ 2) −Li2(−1 + √ 2) which is equal to −π2 8 + 1 2ln2( √ 2 −1), evaluated by author himself in MSE (here). 18 The first identity on performaning partial fraction can be written into two series whose equivalent hypergeometric expression turns out to be 44F3 1 4, 1 2, 1, 1; 3 4, 5 4, 5 4; 1  −24F3 1 2, 1 2, 1, 1; 3 4, 5 4, 3 2; 1  ≈2.11022 which attains the closed form derived for the aforementioned first identity. 
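The dilogarithm evaluation quoted at the end of the last proof, $\operatorname{Li}_{2}(1-\sqrt{2}) - \operatorname{Li}_{2}(\sqrt{2}-1) = -\frac{\pi^{2}}{8} + \frac{1}{2}\ln^{2}(\sqrt{2}-1)$, is easy to confirm numerically. A short check (my addition, using the third-party mpmath library):

```python
import mpmath as mp

lhs = mp.polylog(2, 1 - mp.sqrt(2)) - mp.polylog(2, mp.sqrt(2) - 1)
rhs = -mp.pi**2 / 8 + mp.log(mp.sqrt(2) - 1)**2 / 2
print(lhs, rhs)  # both ≈ -0.8453
```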
Similarly, the second and third identities show the heavy weight of the hypergeometric expressions whose closed forms we have deduced easily by means of generating functions:
$${}_{5}F_{4}\!\left(\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2},1,1;\tfrac{3}{4},\tfrac{5}{4},\tfrac{3}{2},\tfrac{3}{2};1\right) \approx 1.09551,$$
$$\tfrac{2}{3}\,{}_{4}F_{3}\!\left(\tfrac{1}{2},\tfrac{1}{2},1,1;\tfrac{5}{4},\tfrac{3}{2},\tfrac{7}{4};1\right) - \tfrac{4}{9}\,{}_{4}F_{3}\!\left(\tfrac{1}{2},\tfrac{3}{4},1,1;\tfrac{5}{4},\tfrac{7}{4},\tfrac{7}{4};1\right) \approx 0.2317.$$
For the justification of the results we make use of a CAS, and the outputs are found to be correct.

5 Conclusion

From the above study we give possible closed forms for heavy-weight hypergeometric expressions by introducing the powerful notion of generating functions, producing the corresponding interesting and useful results.

6 References

N. Bhandari. Evaluating logarithmic integrals by the logarithmic series manipulation. Romanian Mathematical Magazine, An Interactive Journal.
D. H. Lehmer. Interesting series involving the central binomial coefficients. American Mathematical Monthly, 449-457, 1985.
R. Sprugnoli. Sum of reciprocals of the central binomial coefficients. Electronic Journal of Combinatorial Number Theory, 6, 2016.
E. W. Weisstein. Wallis Cosine Formula. MathWorld, mathworld.wolfram.com/WallisCosineFormula.html.
Published Time: Thu, 14 Aug 2025 05:11:35 GMT
Strepsipterans | Encyclopedia.com

Strepsiptera (The Gale Encyclopedia of Science, updated May 18 2018)

Also known as twisted-winged parasites, strepsipterans are small insects which are internal parasites of other insects. Measuring 0.02-0.16 in (0.5-4 mm) long, the males and females lead totally different lives. Males are free, winged insects—resembling some forms of beetles—and females are wingless, shapeless insects living as parasites. Strepsipterans live all over the world, except Antarctica.

Belonging to the largest class of animals in the world—the class Insecta—the superclass Hexapoda contains over 750,000 species. There are two subclasses within Hexapoda: (1) Apterygota (insects without wings), which contains two orders, and (2) Pterygota (insects with wings, accounting for 99.9% of all insect species), which contains twenty-eight orders. Further classification of strepsipterans is continuously being revised. Sometimes they are considered to be a suborder of the order Coleoptera, an order containing beetles; however, often they are given their own order—the order Strepsiptera. Currently, there are seven families within the order Strepsiptera, containing about 300 species of insects, 60 of which live in North America.

As mentioned before, the appearance and behavior of the male and female strepsipterans differ markedly. The female resembles a grub, having no wings, legs, eyes, or mouth. Also, her nervous system is very diminished. She generally attaches herself to another insect as a host. For instance, the female of the Stylopidae species attaches herself to the stomach of a wasp or bee. She burrows almost her entire body into her host, sticking out slightly. While she does not usually kill her host, her presence sterilizes it, causing it to have both male and female sexual organs, and alters its appearance otherwise.

The male strepsipteran is independent of any host, having antennae, wings, and large eyes. He darts about during his ten-hour lifetime, continuously looking for a female to mate. She lures him with a special odor, since she is almost invisible within her host. He injects his sperm through a small hole located between her thorax and abdomen. The male dies soon after mating; when he dies, his forewings dry out and twist up like a corkscrew, giving these insects their common name.

After a few days, the female hatches about 1,500 tiny larvae that are born with eyes, mouths, and three pairs of legs ending in suckers. Leaving their mother through an opening in her back, they continue to exist on the host bee (or other insect) until it lands on a flower. At this point, the offspring climb onto the flower and await another bee. Being consumed by a new bee, the young strepsipterans ride back to the hive. The bee regurgitates them when it stocks its nest with nectar, and the young strepsipterans are free to bore into the bee larvae and molt to their legless, inactive forms. Thus, the cycle begins again.

The Gale Encyclopedia of Science
Strepsiptera (A Dictionary of Zoology, updated May 11 2018)

Strepsiptera (stylops, twisted-wing parasite; phylum Arthropoda, class Insecta) An order of very small (adult males are 1-1.75 mm long), beetle-like insects, almost all of which are parasites whose larval stages develop inside other insects.
Adult males have a single pair of wings, comparable to the hind wings of other insects, the fore wings being reduced to structures resembling the halteres of Diptera. Females have no wings, and in most species the female lives its entire life as an internal parasite of another insect, all but a small part of its body concealed inside its own last larval covering. Infestation causes changes in the body of the host (called 'stylopization') by which it can be distinguished from an uninfested individual, and causes the host to become infertile. There are nine families.

Michael Allaby, A Dictionary of Zoology
FEN (Forsyth-Edwards Notation) - Chess Terms - Chess.com

Forsyth-Edwards Notation (FEN)

If you need to describe a position reached during a game of chess, how would you do it? The Forsyth-Edwards Notation (FEN for short) is one of the easiest ways. Here is what you need to know about FEN: What Is FEN? Why Is FEN Important? How Does FEN Work? (Piece Placement, Active Color, Castling Rights, Possible En Passant Targets, Halfmove Clock, Fullmove Number), How To Use FEN In Chess.com, and a Conclusion.

What Is FEN?

FEN is the abbreviation of Forsyth-Edwards Notation, and it is the standard notation to describe positions of a chess game. Steven J. Edwards, a computer programmer, created this notation system based on another system designed by the journalist David Forsyth. Edwards modified the older notation system so that chess software could use it. A single line of text can describe any position. FEN differs from the Portable Game Notation (PGN) because it denotes only a single position instead of the moves that lead to it.

Why Is FEN Important?

FEN is important because it makes it easy to translate any chess position into a single line of text. It facilitates the process of recreating positions using computers and allows players to share them and restart games from any point they desire. For this reason, FEN is especially helpful to chess teachers, coaches, trainers, and students. It replaces the need for chess mentors to send large PGN files to their students and speeds up the process of sharing positions, even when people are far apart.

How Does FEN Work?

FEN sequences are composed exclusively of ASCII characters so computers can recognize them. These strings have six different fields, each describing one aspect of a position and separated by a space character.

Piece Placement

The first field represents the placement of pieces. It starts describing the content of each square, beginning from the eighth rank and ending with the first. For each rank, squares begin from the first file and go to the eighth. Lowercase letters describe the black pieces. Just like in PGN, "p" stands for pawn, "r" for rook, "n" for knight, "b" for bishop, "q" for queen, and "k" for king. The same letters are used for the white pieces, but they appear in uppercase. Empty squares are denoted by numbers from one to eight, depending on how many empty squares are between two pieces.

The sequence "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR" describes the piece placement field of the starting position of a game of chess. As another example, the FEN piece placement field for the position shown in the article's diagram is "r1bk3r/p2pBpNp/n4n2/1p1NP2P/6P1/3P4/P1P1K3/q5b1".

Active Color

The second field indicates who moves next. This field always appears in lowercase, and "w" specifies that it is White's turn to move, while "b" indicates that Black plays next.

FEN: 8/8/8/4p1K1/2k1P3/8/8/8 b - - 0 1. Notice the "b" in the second field: it is Black's turn to move.

Castling Rights

The next field tells if the players can castle and to what side. Uppercase letters come first to indicate White's castling availability, followed by lowercase letters for Black's. The letter "k" indicates that kingside castling is available, while "q" means that a player may castle queenside. The symbol "-" designates that neither side may castle.
FEN: 4k2r/6r1/8/8/8/8/3R4/R3K3 w Qk - 0 1. The "Qk" in the third field indicates that White may castle queenside and Black may castle kingside.

Possible En Passant Targets

If a pawn has moved two squares immediately before a position is reached and is thus a possible target for an en passant capture, the FEN string adds the square behind the pawn in algebraic notation in its fourth field. If no en passant targets are available, the "-" symbol is used.

FEN: rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e3 0 1. Even though no pawns may capture the e4-pawn, the FEN string would still contain the en passant target square e3 in its fourth field. Notice that the mere fact that a pawn has moved two squares implies that this indicator of a possible en passant capture would be present. The absence of enemy pawns threatening that capture does not influence this notation.

Halfmove Clock

The next field of the FEN code informs how many moves both players have made since the last pawn advance or piece capture—known by chess programmers as the number of halfmoves. This field is useful to enforce the 50-move draw rule. When this counter reaches 100 (allowing each player to make 50 moves), the game ends in a draw.

FEN: 8/5k2/3p4/1p1Pp2p/pP2Pp1P/P4P1K/8/8 b - - 99 50. The fifth field of this string tells us that this game ends in a draw on the next move.

Fullmove Number

The sixth and last field of the FEN code shows the number of completed turns in the game. This number is incremented by one every time Black moves. Chess programmers call this a fullmove.

How To Use FEN In Chess.com

You just learned how the FEN system works and how to create one yourself, but that does not mean that you need to do it by hand. Chess.com can do all the hard work for you and provide you with the FEN code for any position automatically. If you want to share a specific position with others, all you have to do is click the share button that you can find in any of your games or in the Analysis Board. Next, when you select the PGN option, the FEN string appears at the top of the window.

If you already have a FEN code and you want to translate it to a position, you can also do that very quickly on Chess.com. Go to the Analysis Board and select the Load FEN option. Next, you should paste the FEN code in the field that pops up and click Load.

Conclusion

You now know what FEN is and its importance. You can also use it to share and check any position you would like. Now head over to our Analysis Board to test out the FEN feature!
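To make the six-field structure concrete, here is a small illustrative sketch; it is not part of the Chess.com article, and the function names are my own. It splits a FEN string into its fields and expands the piece-placement field into an 8x8 diagram.

```python
def parse_fen(fen: str) -> dict:
    """Split a FEN string into its six fields (see the sections above)."""
    placement, active, castling, en_passant, halfmove, fullmove = fen.split()
    return {
        "placement": placement,
        "active_color": active,        # "w" or "b"
        "castling": castling,          # e.g. "KQkq" or "-"
        "en_passant": en_passant,      # target square or "-"
        "halfmove_clock": int(halfmove),
        "fullmove_number": int(fullmove),
    }

def board_from_placement(placement: str) -> list[list[str]]:
    """Expand the first FEN field into 8 ranks of 8 one-character squares."""
    board = []
    for rank in placement.split("/"):        # ranks are given from 8 down to 1
        row = []
        for ch in rank:
            if ch.isdigit():
                row.extend(["."] * int(ch))  # digits encode runs of empty squares
            else:
                row.append(ch)               # piece letter (case gives the color)
        board.append(row)
    return board

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
fields = parse_fen(start)
for row in board_from_placement(fields["placement"]):
    print(" ".join(row))
```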
Published Time: Thu, 22 Jun 2023 20:32:39 GMT Number Theory Naoki Sato [email protected] 0 Preface This set of notes on number theory was originally written in 1995 for students at the IMO level. It covers the basic background material that an IMO student should be familiar with. This text is meant to be a reference, and not a replacement but rather a supplement to a number theory textbook; several are given at the back. Proofs are given when appropriate, or when they illustrate some insight or important idea. The problems are culled from various sources, many from actual contests and olympiads, and in general are very difficult. The author welcomes any corrections or suggestions. 1 Divisibility For integers a and b, we say that a divides b, or that a is a divisor (or factor ) of b, or that b is a multiple of a, if there exists an integer c such that b = ca , and we denote this by a | b. Otherwise, a does not divide b, and we denote this by a - b. A positive integer p is a prime if the only divisors of p are 1 and p. If pk | a and pk+1 - a where p is a prime, i.e. pk is the highest power of p dividing a, then we denote this by pk‖a.Useful Facts • If a, b > 0, and a | b, then a ≤ b. • If a | b1, a | b2, . . . , a | bn, then for any integers c1, c2, . . . , cn, a | n ∑ i=1 bici. Theorem 1.1 . The Division Algorithm . For any positive integer a and integer b, there exist unique integers q and r such that b = qa + r and 0 ≤ r < a , with r = 0 iff a | b.1Theorem 1.2 . The Fundamental Theorem of Arithmetic . Every integer greater than 1 can be written uniquely in the form pe1 1 pe2 2 · · · pek k , where the pi are distinct primes and the ei are positive integers. Theorem 1.3 . (Euclid) There exist an infinite number of primes. Proof . Suppose that there are a finite number of primes, say p1, p2, . . . , pn. Let N = p1p2 · · · pn + 1. By the fundamental theorem of arithmetic, N is divisible by some prime p. This prime p must be among the pi, since by assumption these are all the primes, but N is seen not to be divisible by any of the pi, contradiction. Example 1.1 . Let x and y be integers. Prove that 2 x + 3 y is divisible by 17 iff 9 x + 5 y is divisible by 17. Solution . 17 | (2 x + 3 y) ⇒ 17 | [13(2 x + 3 y)], or 17 | (26 x + 39 y) ⇒ 17 | (9 x + 5 y), and conversely, 17 | (9 x + 5 y) ⇒ 17 | [4(9 x + 5 y)], or 17 | (36 x + 20 y) ⇒ 17 | (2 x + 3 y). Example 1.2 . Find all positive integers d such that d divides both n2 +1 and ( n + 1) 2 + 1 for some integer n. Solution . Let d | (n2 + 1) and d | [( n + 1) 2 + 1], or d | (n2 + 2 n + 2). Then d | [( n2 + 2 n + 2) − (n2 + 1)], or d | (2 n + 1) ⇒ d | (4 n2 + 4 n + 1), so d | [4( n2+2 n+2) −(4 n2+4 n+1)], or d | (4 n+7). Then d | [(4 n+7) −2(2 n+1)], or d | 5, so d can only be 1 or 5. Taking n = 2 shows that both of these values are achieved. Example 1.3 . Suppose that a1, a2, . . . , a2n are distinct integers such that the equation (x − a1)( x − a2) · · · (x − a2n) − (−1) n(n!) 2 = 0 has an integer solution r. Show that r = a1 + a2 + · · · + a2n 2n . (1984 IMO Short List) Solution . Clearly, r 6 = ai for all i, and the r − ai are 2 n distinct integers, so |(r − a1)( r − a2) · · · (r − a2n)| ≥ | (1)(2) · · · (n)( −1)( −2) · · · (−n)| = ( n!) 2, 2with equality iff {r − a1, r − a2, . . . , r − a2n} = {1, 2, . . . , n, −1, −2, . . . , −n}. Therefore, this must be the case, so (r − a1) + ( r − a2) + · · · + ( r − a2n)= 2 nr − (a1 + a2 + · · · + a2n)= 1 + 2 + · · · + n + ( −1) + ( −2) + · · · + ( −n) = 0 ⇒ r = a1 + a2 + · · · + a2n 2n . Example 1.4 . 
Let 0 < a 1 < a 2 < · · · < a mn +1 be mn + 1 integers. Prove that you can select either m + 1 of them no one of which divides any other, or n + 1 of them each dividing the following one. (1966 Putnam Mathematical Competition) Solution . For each i, 1 ≤ i ≤ mn + 1, let ni be the length of the longest sequence starting with ai and each dividing the following one, among the integers ai, ai+1 , . . . , amn +1 . If some ni is greater than n then the problem is solved. Otherwise, by the pigeonhole principle, there are at least m + 1 values of ni that are equal. Then, the integers ai corresponding to these ni cannot divide each other. Useful Facts • Bertrand’s Postulate . For every positive integer n, there exists a prime p such that n ≤ p ≤ 2n. • Gauss’s Lemma . If a polynomial with integer coefficients factors into two polynomials with rational coefficients, then it factors into two poly-nomials with integer coefficients. Problems 1. Let a and b be positive integers such that a | b2, b2 | a3, a3 | b4, b4 | a5,. . . . Prove that a = b.2. Let a, b, and c denote three distinct integers, and let P denote a poly-nomial having all integral coefficients. Show that it is impossible that P (a) = b, P (b) = c, and P (c) = a.(1974 USAMO) 33. Show that if a and b are positive integers, then ( a + 12 )n + ( b + 12 )n is an integer for only finitely many positive integers n.(A Problem Seminar , D.J. Newman) 4. For a positive integer n, let r(n) denote the sum of the remainders when n is divided by 1, 2, . . . , n respectively. Prove that r(k) = r(k − 1) for infinitely many positive integers k.(1981 K¨ ursch´ ak Competition) 5. Prove that for all positive integers n,0 < n ∑ k=1 g(k) k − 2n 3 < 23, where g(k) denotes the greatest odd divisor of k.(1973 Austrian Mathematics Olympiad) 6. Let d be a positive integer, and let S be the set of all positive integers of the form x2 + dy 2, where x and y are non-negative integers. (a) Prove that if a ∈ S and b ∈ S, then ab ∈ S.(b) Prove that if a ∈ S and p ∈ S, such that p is a prime and p | a,then a/p ∈ S.(c) Assume that the equation x2 + dy 2 = p has a solution in non-negative integers x and y, where p is a given prime. Show that if d ≥ 2, then the solution is unique, and if d = 1, then there are exactly two solutions. 2 GCD and LCM The greatest common divisor of two positive integers a and b is the great-est positive integer that divides both a and b, which we denote by gcd( a, b ), and similarly, the lowest common multiple of a and b is the least positive 4integer that is a multiple of both a and b, which we denote by lcm( a, b ). We say that a and b are relatively prime if gcd( a, b ) = 1. For integers a1, a2,. . . , an, gcd( a1, a 2, . . . , a n) is the greatest positive integer that divides all of a1, a2, . . . , an, and lcm( a1, a 2, . . . , a n) is defined similarly. Useful Facts • For all a, b, gcd( a, b ) · lcm( a, b ) = ab . • For all a, b, and m, gcd( ma, mb ) = m gcd( a, b ) and lcm( ma, mb ) = mlcm( a, b ). • If d | gcd( a, b ), then gcd (ad , bd ) = gcd( a, b ) d . In particular, if d = gcd( a, b ), then gcd( a/d, b/d ) = 1; that is, a/d and b/d are relatively prime. • If a | bc and gcd( a, c ) = 1, then a | b. • For positive integers a and b, if d is a positive integer such that d | a, d | b, and for any d′, d′ | a and d′ | b implies that d′ | d, then d =gcd( a, b ). This is merely the assertion that any common divisor of a and b divides gcd( a, b ). 
• If a1a2 · · · an is a perfect kth power and the ai are pairwise relatively prime, then each ai is a perfect kth power. • Any two consecutive integers are relatively prime. Example 2.1 . Show that for any positive integer N , there exists a multiple of N that consists only of 1s and 0s. Furthermore, show that if N is relatively prime to 10, then there exists a multiple that consists only of 1s. Solution . Consider the N + 1 integers 1, 11, 111, . . . , 111...1 ( N + 1 1s). When divided by N , they leave N + 1 remainders. By the pigeonhole princi-ple, two of these remainders are equal, so the difference in the corresponding integers, an integer of the form 111...000, is divisible by N . If N is relatively prime to 10, then we may divide out all powers of 10, to obtain an integer of the form 111...1 that remains divisible by N .5Theorem 2.1 . For any positive integers a and b, there exist integers x and y such that ax + by = gcd( a, b ). Furthermore, as x and y vary over all integers, ax + by attains all multiples and only multiples of gcd( a, b ). Proof . Let S be the set of all integers of the form ax +by , and let d be the least positive element of S. By the division algorithm, there exist integers q and r such that a = qd + r, 0 ≤ r < d . Then r = a − qd = a − q(ax + by ) = (1 − qx )a − (qy )b, so r is also in S. But r < d , so r = 0 ⇒ d | a, and similarly, d | b, so d | gcd( a, b ). However, gcd( a, b ) divides all elements of S,so in particular gcd( a, b ) | d ⇒ d = gcd( a, b ). The second part of the theorem follows. Corollary 2.2 . The positive integers a and b are relatively prime iff there exist integers x and y such that ax + by = 1. Corollary 2.3 . For any positive integers a1, a2, . . . , an, there exist integers x1, x2, . . . , xn, such that a1x1+a2x2+· · · +anxn = gcd( a1, a 2, . . . , a n). Corollary 2.4 . Let a and b be positive integers, and let n be an integer. Then the equation ax + by = n has a solution in integers x and y iff gcd( a, b ) | n. If this is the case, then all solutions are of the form (x, y ) = ( x0 + t · bd , y 0 − t · ad ) , where d = gcd( a, b ), ( x0, y 0) is a specific solution of ax + by = n, and t is an integer. Proof . The first part follows from Theorem 2.1. For the second part, as stated, let d = gcd( a, b ), and let ( x0, y 0) be a specific solution of ax + by = n,so that ax 0 + by 0 = n. If ax + by = n, then ax + by − ax 0 − by 0 = a(x − x0) + b(y − y0) = 0, or a(x − x0) = b(y0 − y), and hence (x − x0) · ad = ( y0 − y) · bd. Since a/d and b/d are relatively prime, b/d must divide x − x0, and a/d must divide y0 − y. Let x − x0 = tb/d and y0 − y = ta/d . This gives the solutions described above. 6Example 2.2 . Prove that the fraction 21 n + 4 14 n + 3 is irreducible for every positive integer n. (1959 IMO) Solution . For all n, 3(14 n + 3) − 2(21 n + 4) = 1, so the numerator and denominator are relatively prime. Example 2.3 . For all positive integers n, let Tn = 2 2n Show that if m 6 = n, then Tm and Tn are relatively prime. Solution . We have that Tn − 2 = 2 2n − 1 = 2 2n−1·2 − 1= ( Tn−1 − 1) 2 − 1 = T 2 n−1 − 2Tn−1 = Tn−1(Tn−1 − 2) = Tn−1Tn−2(Tn−2 − 2) = · · · = Tn−1Tn−2 · · · T1T0(T0 − 2) = Tn−1Tn−2 · · · T1T0, for all n. Therefore, any common divisor of Tm and Tn must divide 2. But each Tn is odd, so Tm and Tn are relatively prime. Remark . It immediately follows from this result that there are an infinite number of primes. The Euclidean Algorithm . 
By recursive use of the division algorithm, we may find the gcd of two positive integers a and b without factoring either, and the x and y in Theorem 2.1 (and so, a specific solution in Corollary 2.4). For example, for a = 329 and b = 182, we compute 329 = 1 · 182 + 147 , 182 = 1 · 147 + 35 , 147 = 4 · 35 + 7 , 35 = 5 · 7, and stop when there is no remainder. The last dividend is the gcd, so in our example, gcd(329,182) = 7. Now, working through the above equations 7backwards, 7 = 147 − 4 · 35 = 147 − 4 · (182 − 1 · 147) = 5 · 147 − 4 · 182 = 5 · (329 − 182) − 4 · 182 = 5 · 329 − 9 · 182 . Remark . The Euclidean algorithm also works for polynomials. Example 2.4 . Let n be a positive integer, and let S be a subset of n + 1 elements of the set {1, 2, . . . , 2n}. Show that (a) There exist two elements of S that are relatively prime, and (b) There exist two elements of S, one of which divides the other. Solution . (a) There must be two elements of S that are consecutive, and thus, relatively prime. (b) Consider the greatest odd factor of each of the n + 1 elements in S. Each is among the n odd integers 1, 3, . . . , 2 n − 1. By the pigeon-hole principle, two must have the same greatest odd factor, so they differ (multiplication-wise) by a power of 2, and so one divides the other. Example 2.5 . The positive integers a1, a2, . . . , an are such that each is less than 1000, and lcm( ai, a j ) > 1000 for all i, j, i 6 = j. Show that n ∑ i=1 1 ai < 2. (1951 Russian Mathematics Olympiad) Solution . If 1000 m+1 < a ≤ 1000 m , then the m multiples a, 2 a, . . . , ma do not exceed 1000. Let k1 the number of ai in the interval ( 1000 2 , 1000], k2 in (1000 3 , 1000 2 ], etc. Then there are k1 + 2 k2 + 3 k3 + · · · integers, no greater than 1000, that are multiples of at least one of the ai. But the multiples are distinct, so k1 + 2 k2 + 3 k3 + · · · < 1000 ⇒ 2k1 + 3 k2 + 4 k3 + · · · = ( k1 + 2 k2 + 3 k3 + · · · ) + ( k1 + k2 + k3 + · · · ) < 1000 + n< 2000 . 8Therefore, n ∑ i=1 1 ai ≤ k1 21000 + k2 31000 + k3 41000 + · · · = 2k1 + 3 k2 + 4 k3 + · · · 1000 < 2. Note: It can be shown that n ≤ 500 as follows: Consider the greatest odd divisor of a1, a2, . . . , a1000 . Each must be distinct; otherwise, two differ, multiplication-wise, by a power of 2, which means one divides the other, contradiction. Also, there are only 500 odd numbers between 1 and 1000, from which the result follows. It also then follows that n ∑ i=1 1 ai < 32. Useful Facts • Dirichlet’s Theorem . If a and b are relatively prime positive integers, then the arithmetic sequence a, a + b, a + 2 b, . . . , contains infinitely many primes. Problems 1. The symbols ( a, b, . . . , g ) and [ a, b, . . . , g ] denote the greatest common divisor and lowest common multiple, respectively of the positive inte-gers a, b, . . . , g. Prove that [a, b, c ]2 [a, b ][ a, c ][ b, c ] = (a, b, c )2 (a, b )( a, c )( b, c ). (1972 USAMO) 2. Show that gcd( am − 1, a n − 1) = agcd( m,n ) − 1 for all positive integers a > 1, m, n.93. Let a, b, and c be positive integers. Show that lcm( a, b, c ) = abc · gcd( a, b, c )gcd( a, b ) · gcd( a, c ) · gcd( b, c ). Express gcd( a, b, c ) in terms of abc , lcm( a, b, c ), lcm( a, b ), lcm( a, c ), and lcm( b, c ). Generalize. 4. Let a, b be odd positive integers. Define the sequence ( fn) by putting f1 = a, f2 = b, and by letting fn for n ≥ 3 be the greatest odd divisor of fn−1 + fn−2. Show that fn is constant for n sufficiently large and determine the eventual value as a function of a and b.(1993 USAMO) 5. 
Let n ≥ a1 > a 2 > · · · > a k be positive integers such that lcm( ai, a j ) ≤ n for all i, j. Prove that ia i ≤ n for i = 1, 2, . . . , k. 3 Arithmetic Functions There are several important arithmetic functions, of which three are pre-sented here. If the prime factorization of n > 1 is pe1 1 pe2 2 · · · pek k , then the number of positive integers less than n, relatively prime to n, is φ(n) = ( 1 − 1 p1 ) ( 1 − 1 p2 ) · · · ( 1 − 1 pk ) n = pe1−11 pe2−12 · · · pek −1 k (p1 − 1)( p2 − 1) · · · (pk − 1) , the number of divisors of n is τ (n) = ( e1 + 1)( e2 + 1) · · · (ek + 1) , and the sum of the divisors of n is σ(n) = ( pe1 1 pe1−11 + · · · + 1)( pe2 2 pe2−12 + · · · + 1) · · · (pek k pek −1 k · · · + 1) = (pe1+1 1 − 1 p1 − 1 ) ( pe2+1 2 − 1 p2 − 1 ) · · · (pek +1 k − 1 pk − 1 ) . Also, φ(1), τ (1), and σ(1) are defined to be 1. We say that a function f is multiplicative if f (mn ) = f (m)f (n) for all relatively prime positive 10 integers m and n, and f (1) = 1 (otherwise, f (1) = 0, which implies that f (n) = 0 for all n). Theorem 3.1 . The functions φ, τ , and σ are multiplicative. Hence, by taking the prime factorization and evaluating at each prime power, the formula above are found easily. Example 3.1 . Find the number of solutions in ordered pairs of positive integers ( x, y ) of the equation 1 x + 1 y = 1 n, where n is a positive integer. Solution . From the given, 1 x + 1 y = 1 n ⇔ xy = nx + ny ⇔ (x − n)( y − n) = n2. If n = 1, then we immediately deduce the unique solution (2,2). For n ≥ 2, let n = pe1 1 pe2 2 · · · pek k be the prime factorization of n. Since x, y > n ,there is a 1-1 correspondence between the solutions in ( x, y ) and the factors of n2, so the number of solutions is τ (n2) = (2 e1 + 1)(2 e2 + 1) · · · (2 ek + 1) . Example 3.2 . Let n be a positive integer. Prove that ∑ d|n φ(d) = n. Solution . For a divisor d of n, let Sd be the set of all a, 1 ≤ a ≤ n, such that gcd( a, n ) = n/d . Then Sd consists of all elements of the form b · n/d ,where 0 ≤ b ≤ d, and gcd( b, d ) = 1, so Sd contains φ(d) elements. Also, it is clear that each integer between 1 and n belongs to a unique Sd. The result then follows from summing over all divisors d of n.Problems 1. Let n be a positive integer. Prove that n ∑ k=1 τ (k) = n ∑ k=1 ⌊nk ⌋ . 11 2. Let n be a positive integer. Prove that ∑ d|n τ 3(d) = ∑ d|n τ (d)  2 . Prove that if σ(N ) = 2 N + 1, then N is the square of an odd integer. (1976 Putnam Mathematical Competition) 4 Modular Arithmetic For a positive integer m and integers a and b, we say that a is congruent to b modulo m if m | (a − b), and we denote this by a ≡ b modulo m, or more commonly a ≡ b (mod m). Otherwise, a is not congruent to b modulo m,and we denote this by a 6 ≡ b (mod m) (although this notation is not used often). In the above notation, m is called the modulus , and we consider the integers modulo m. Theorem 4.1 . If a ≡ b and c ≡ d (mod m), then a + c ≡ b + d (mod m)and ac ≡ bd (mod m). Proof . If a ≡ b and c ≡ d (mod m), then there exist integers k and l such that a = b + km and c = d + lm . Hence, a + c = b + d + ( k + l)m, so a + c ≡ b + d (mod m). Also, ac = bd + dkm + blm + klm 2 = bd + ( dk + bl + klm )m, so ac ≡ bd (mod m). Useful Facts • For all integers n, n2 ≡ { 01 } (mod 4) { if n is even , if n is odd . • For all integers n, n2 ≡  041  (mod 8)  if n ≡ 0 (mod 4) , if n ≡ 2 (mod 4) , if n ≡ 1 (mod 2) . 12 • If f is a polynomial with integer coefficients and a ≡ b (mod m), then f (a) ≡ f (b) (mod m). 
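The last two facts lend themselves to a brute-force check. The following minimal Python sketch (illustrative only; the polynomial and moduli are generated at random) confirms the residues of n^2 modulo 4 and 8 and the statement that a ≡ b (mod m) implies f(a) ≡ f(b) (mod m) for integer-coefficient polynomials.

```python
import random

# n^2 mod 4 and mod 8 depend only on the residue of n, as stated above.
for n in range(-1000, 1000):
    assert n * n % 4 == (0 if n % 2 == 0 else 1)
    if n % 4 == 0:
        assert n * n % 8 == 0
    elif n % 2 == 0:              # n ≡ 2 (mod 4)
        assert n * n % 8 == 4
    else:                         # n odd
        assert n * n % 8 == 1

# a ≡ b (mod m) implies f(a) ≡ f(b) (mod m) for integer-coefficient f.
random.seed(1)
for _ in range(500):
    m = random.randint(2, 50)
    coeffs = [random.randint(-9, 9) for _ in range(6)]      # random degree <= 5 polynomial
    f = lambda x: sum(c * x**k for k, c in enumerate(coeffs))
    a = random.randint(-100, 100)
    b = a + m * random.randint(-10, 10)                      # so a ≡ b (mod m)
    assert (f(a) - f(b)) % m == 0
```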
• If f is a polynomial with integer coefficients of degree n, not identically zero, and p is a prime, then the congruence f (x) ≡ 0 (mod p)has at most n solutions modulo p, counting multiplicity. Example 4.1 . Prove that the only solution in rational numbers of the equation x3 + 3 y3 + 9 z3 − 9xyz = 0 is x = y = z = 0. (1983 K¨ ursch´ ak Competition) Solution . Suppose that the equation has a solution in rationals, with at least one non-zero variable. Since the equation is homogeneous, we may obtain a solution in integers ( x0, y 0, z 0) by multiplying the equation by the cube of the lowest common multiple of the denominators. Taking the equa-tion modulo 3, we obtain x30 ≡ 0 (mod 3). Therefore, x0 must be divisible by 3, say x0 = 3 x1. Substituting, 27 x31 + 3 y30 + 9 z30 − 27 x1y0z0 = 0 ⇒ y30 + 3 z30 + 9 x31 − 9x1y0z0 = 0 . Therefore, another solution is ( y0, z 0, x 1). We may then apply this reduction recursively, to obtain y0 = 3 y1, z0 = 3 z1, and another solution ( x1, y 1, z 1). Hence, we may divide powers of 3 out of our integer solution an arbitrary number of times, contradiction. Example 4.2 . Does one of the first 10 8 + 1 Fibonacci numbers terminate with 4 zeroes? Solution . The answer is yes. Consider the sequence of pairs ( Fk, F k+1 )modulo 10 4. Since there are only a finite number of different possible pairs (10 8 to be exact), and each pair is dependent only on the previous one, this sequence is eventually periodic. Also, by the Fibonacci relation, one can find the previous pair to a given pair, so this sequence is immediately periodic. But F0 ≡ 0 (mod 10 4), so within 10 8 terms, another Fibonacci number divisible by 10 4 must appear. 13 In fact, a computer check shows that 10 4 | F7500 , and ( Fn) modulo 10 4 has period 15000, which is much smaller than the upper bound of 10 8.If ax ≡ 1 (mod m), then we say that x is the inverse of a modulo m,denoted by a−1, and it is unique modulo m. Theorem 4.2 . The inverse of a modulo m exists and is unique iff a is relatively prime to m. Proof . If ax ≡ 1 (mod m), then ax = 1+ km for some k ⇒ ax −km = 1. By Corollary 2.2, a and m are relatively prime. Now, if gcd( a, m ) = 1, then by Corollary 2.2, there exist integers x and y such that ax + my = 1 ⇒ ax =1 − my ⇒ ax ≡ 1 (mod m). The inverse x is unique modulo m, since if x′ is also an inverse, then ax ≡ ax ′ ≡ 1 ⇒ xax ≡ xax ′ ≡ x ≡ x′. Corollary 4.3 . If p is a prime, then the inverse of a modulo p exists and is unique iff p does not divide a. Corollary 4.4 . If ak ≡ bk (mod m) and k is relatively prime to m, then a ≡ b (mod m). Proof . Multiplying both sides by k−1, which exists by Theorem 4.2, yields the result. We say that a set {a1, a 2, . . . , a m} is a complete residue system modulo m if for all i, 0 ≤ i ≤ m−1, there exists a unique j such that aj ≡ i (mod m). Example 4.3 . Find all positive integers n such that there exist complete residue systems {a1, a 2, . . . , a n} and {b1, b 2, . . . , b n} modulo n for which {a1 + b1, a 2 + b2, . . . , a n + bn} is also a complete residue system. Solution . The answer is all odd n. First we prove necessity. For any complete residue system {a1, a 2, . . . , a n} modulo n, we have that a1 + a2 + · · · + an ≡ n(n + 1) /2 (mod n). So, if all three sets are complete residue systems, then a1 +a2 +· · · +an +b1 +b2 +· · · +bn ≡ n2 +n ≡ 0 (mod n)and a1 + b1 + a2 + b2 + · · · + an + bn ≡ n(n + 1) /2 (mod n), so n(n + 1) /2 ≡ 0(mod n). The quantity n(n + 1) /2 is divisible by n iff ( n + 1) /2 is an integer, which implies that n is odd. 
Now assume that n is odd. Let ai = bi = i for all i. Then ai + bi = 2 i for all i, and n is relatively prime to 2, so by Corollary 4.4, {2, 4, . . . , 2n} is a complete residue system modulo n. Theorem 4.5 . Euler’s Theorem . If a is relatively prime to m, then aφ(m) ≡ 1 (mod m). 14 Proof . Let a1, a2, . . . , aφ(m) be the positive integers less than m that are relatively prime to m. Consider the integers aa 1, aa 2, . . . , aa φ(m). We claim that they are a permutation of the original φ(m) integers ai, modulo m. For each i, aa i is also relatively prime to m, so aa i ≡ ak for some k. Since aa i ≡ aa j ⇔ ai ≡ aj (mod m), each ai gets taken to a different ak under multiplication by a, so indeed they are permuted. Hence, a1a2 · · · aφ(m) ≡ (aa 1)( aa 2) · · · (aa φ(m)) ≡ aφ(m)a1a2 · · · aφ(m) ⇒ 1 ≡ aφ(m) (mod m). Remark . This gives an explicit formula for the inverse of a modulo m: a−1 ≡ aφ(m)−2 (mod m). Alternatively, one can use the Euclidean algorithm to find a−1 ≡ x as in the proof of Theorem 4.2. Corollary 4.6 . Fermat’s Little Theorem (FLT) . If p is a prime, and p does not divide a, then ap−1 ≡ 1 (mod p). Example 4.4 . Show that if a and b are relatively prime positive integers, then there exist integers m and n such that am + bn ≡ 1 (mod ab ). Solution . Let S = am + bn, where m = φ(b) and n = φ(a). Then by Euler’s Theorem, S ≡ bφ(a) ≡ 1 (mod a), or S − 1 ≡ 0 (mod a), and S ≡ aφ(b) ≡ 1 (mod b), or S − 1 ≡ 0 (mod b). Therefore, S − 1 ≡ 0, or S ≡ 1(mod ab ). Example 4.5 . For all positive integers i, let Si be the sum of the products of 1, 2, . . . , p − 1 taken i at a time, where p is an odd prime. Show that S1 ≡ S2 ≡ · · · ≡ Sp−2 ≡ 0 (mod p). Solution . First, observe that (x − 1)( x − 2) · · · (x − (p − 1)) = xp−1 − S1xp−2 + S2xp−3 − · · · − Sp−2x + Sp−1. This polynomial vanishes for x = 1, 2, . . . , p − 1. But by Fermat’s Little Theorem, so does xp−1 − 1 modulo p. Taking the difference of these two polynomials, we obtain another polynomial of degree p − 2 with p − 1 roots modulo p, so it must be the zero polynomial, and the result follows from comparing coefficients. 15 Remark . We immediately have that ( p − 1)! ≡ Sp−1 ≡ − 1 (mod p), which is Wilson’s Theorem. Also, xp − x ≡ 0 (mod p) for all x, yet we cannot compare coefficients here. Why not? Theorem 4.7 . If p is a prime and n is an integer such that p | (4 n2 + 1), then p ≡ 1 (mod 4). Proof . Clearly, p cannot be 2, so we need only show that p 6 ≡ 3 (mod 4). Suppose p = 4 k + 3 for some k. Let y = 2 n, so by Fermat’s Little Theorem, yp−1 ≡ 1 (mod p), since p does not divide n. But, y2 + 1 ≡ 0, so yp−1 ≡ y4k+2 ≡ (y2)2k+1 ≡ (−1) 2k+1 ≡ − 1 (mod p), contradiction. Therefore, p ≡ 1 (mod 4). Remark . The same proof can be used to show that if p is a prime and p | (n2 + 1), then p = 2 or p ≡ 1 (mod 4). Example 4.6 . Show that there are an infinite number of primes of the form 4 k + 1 and of the form 4 k + 3. Solution . Suppose that there are a finite number of primes of the form 4k + 1, say p1, p2, . . . , pn. Let N = 4( p1p2 · · · pn)2 + 1. By Theorem 4.7, N is only divisible by primes of the form 4 k + 1, but clearly N is not divisible by any of these primes, contradiction. Similarly, suppose that there are a finite number of primes of the form 4k + 3, say q1, q2, . . . , qm. Let M = 4 q1q2 · · · qm − 1. Then M ≡ 3 (mod 4), so M must be divisible by a prime of the form 4 k + 3, but M is not divisible by any of these primes, contradiction. Example 4.7 . Show that if n is an integer greater than 1, then n does not divide 2 n − 1. 
Solution . Let p be the least prime divisor of n. Then gcd( n, p − 1) = 1, and by Corollary 2.2, there exist integers x and y such that nx +( p−1) y = 1. If p | (2 n − 1), then 2 ≡ 2nx +( p−1) y ≡ (2 n)x(2 p−1)y ≡ 1 (mod p) by Fermat’s Little Theorem, contradiction. Therefore, p - (2 n − 1) ⇒ n - (2 n − 1). Theorem 4.8 . Wilson’s Theorem . If p is a prime, then ( p − 1)! ≡ − 1(mod p). (See also Example 4.5.) Proof . Consider the congruence x2 ≡ 1 (mod p). Then x2 − 1 ≡ (x − 1)( x + 1) ≡ 0, so the only solutions are x ≡ 1 and −1. Therefore, for each i,2 ≤ i ≤ p − 2, there exists a unique inverse j 6 = i of i, 2 ≤ j ≤ p − 2, modulo 16 p. Hence, when we group in pairs of inverses, (p − 1)! ≡ 1 · 2 · · · (p − 2) · (p − 1) ≡ 1 · 1 · · · 1 · (p − 1) ≡ − 1 (mod p). Example 4.8 . Let {a1, a 2, . . . , a 101 } and {b1, b 2, . . . , b 101 } be complete residue systems modulo 101. Can {a1b1, a 2b2, . . . , a 101 b101 } be a complete residue system modulo 101? Solution . The answer is no. Suppose that {a1b1, a 2b2, . . . , a 101 b101 } is a complete residue system modulo 101. Without loss of generality, assume that a101 ≡ 0 (mod 101). Then b101 ≡ 0 (mod 101), because if any other bj was congruent to 0 modulo 101, then aj bj ≡ a101 b101 ≡ 0 (mod 101), contradiction. By Wilson’s Theorem, a1a2 · · · a100 ≡ b1b2 · · · b100 ≡ 100! ≡−1 (mod 101), so a1b1a2b2 · · · a100 b100 ≡ 1 (mod 101). But a101 b101 ≡ 0(mod 101), so a1b1a2b2 · · · a100 b100 ≡ 100! ≡ − 1 (mod 101), contradiction. Theorem 4.9 . If p is a prime, then the congruence x2 + 1 ≡ 0 (mod p)has a solution iff p = 2 or p ≡ 1 (mod 4). (Compare to Theorem 7.1) Proof . If p = 2, then x = 1 is a solution. If p ≡ 3 (mod 4), then by the remark to Theorem 4.7, no solutions exist. Finally, if p = 4 k + 1, then let x = 1 · 2 · · · (2 k). Then x2 ≡ 1 · 2 · · · (2 k) · (2 k) · · · 2 · 1 ≡ 1 · 2 · · · (2 k) · (−2k) · · · (−2) · (−1) (multiplying by 2 k −1s) ≡ 1 · 2 · · · (2 k) · (p − 2k) · · · (p − 2) · (p − 1) ≡ (p − 1)! ≡ − 1 (mod p). Theorem 4.10 . Let p be a prime such that p ≡ 1 (mod 4). Then there exist positive integers x and y such that p = x2 + y2. Proof . By Theorem 4.9, there exists an integer a such that a2 ≡ − 1(mod p). Consider the set of integers of the form ax − y, where x and y are integers, 0 ≤ x, y < √p. The number of possible pairs ( x, y ) is then (b√pc + 1) 2 > (√p)2 = p, so by pigeonhole principle, there exist integers 0 ≤ x1, x 2, y 1, y 2 < √p, such that ax 1−y1 ≡ ax 2−y2 (mod p). Let x = x1−x2 and y = y1 − y2. At least one of x and y is non-zero, and ax ≡ y ⇒ a2x2 ≡ 17 −x2 ≡ y2 ⇒ x2 + y2 ≡ 0 (mod p). Thus, x2 + y2 is a multiple of p, and 0 < x 2 + y2 < (√p)2 + ( √p)2 = 2 p, so x2 + y2 = p. Theorem 4.11 . Let n be a positive integer. Then there exist integers x and y such that n = x2 + y2 iff each prime factor of n of the form 4 k + 3 appears an even number of times. Theorem 4.12 . The Chinese Remainder Theorem (CRT) . If a1, a2, . . . , ak are integers, and m1, m2, . . . , mk are pairwise relatively prime integers, then the system of congruences x ≡ a1 (mod m1),x ≡ a2 (mod m2), ... x ≡ ak (mod mk)has a unique solution modulo m1m2 · · · mk. Proof . Let m = m1m2 · · · mk, and consider m/m 1. This is relatively prime to m1, so there exists an integer t1 such that t1 · m/m 1 ≡ 1 (mod m1). Accordingly, let s1 = t1 · m/m 1. Then s1 ≡ 1 (mod m1) and s1 ≡ 0(mod mj ), j 6 = 1. Similarly, for all i, there exists an si such that si ≡ 1(mod mi) and si ≡ 0 (mod mj ), j 6 = i. Then, x = a1s1 + a2s2 + · · · + aksk is a solution to the above system. 
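As an aside, the construction just given is completely mechanical and translates directly into code. Here is a minimal sketch under the stated assumption of pairwise relatively prime moduli; the function name is ad hoc, and `pow(n, -1, m)` for modular inverses requires Python 3.8 or later.

```python
from math import prod, gcd

def crt(residues, moduli):
    """Return x with x ≡ residues[i] (mod moduli[i]) for all i, modulo prod(moduli)."""
    assert all(gcd(p, q) == 1 for i, p in enumerate(moduli) for q in moduli[i + 1:])
    m = prod(moduli)
    x = 0
    for a_i, m_i in zip(residues, moduli):
        n_i = m // m_i
        t_i = pow(n_i, -1, m_i)    # inverse of m/m_i modulo m_i (exists since gcd = 1)
        x += a_i * t_i * n_i       # s_i = t_i * m/m_i is 1 mod m_i and 0 mod m_j, j != i
    return x % m

# Example: x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7) gives x = 23.
print(crt([2, 3, 2], [3, 5, 7]))
```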
To see uniqueness, let x′ be another solution. Then x − x′ ≡ 0 (mod mi) for all i ⇒ x − x′ ≡ 0 (mod m1m2 · · · mk). Remark . The proof shows explicitly how to find the solution x. Example 4.9 . For a positive integer n, find the number of solutions of the congruence x2 ≡ 1 (mod n). Solution . Let the prime factorization of n be 2 epe1 1 pe2 2 · · · pek k . By CRT, x2 ≡ 1 (mod n) ⇔ x2 ≡ 1 (mod pei i ) for all i, and x2 ≡ 1 (mod 2 e). We consider these cases separately. We have that x2 ≡ 1 (mod pei i ) ⇔ x2 − 1 = ( x − 1)( x + 1) ≡ 0 (mod pei i ). But pi cannot divide both x − 1 and x + 1, so it divides one of them; that is, x ≡ ± 1 (mod pei i ). Hence, there are two solutions. Now, if ( x − 1)( x + 1) ≡ 0 (mod 2 e), 2 can divide both x − 1 and x + 1, but 4 cannot divide both. For e = 1 and e = 2, it is easily checked that there are 1 and 2 solutions respectively. For e ≥ 3, since there is at most one factor 18 of 2 in one of x − 1 and x + 1, there must be at least e − 1 in the other, for their product to be divisible by 2 e. Hence, the only possibilities are x − 1 or x + 1 ≡ 0, 2 e−1 (mod 2 e), which lead to the four solutions x ≡ 1, 2 e−1 − 1, 2e−1 + 1, and 2 e − 1. Now that we know how many solutions each prime power factor con-tributes, the number of solutions modulo n is simply the product of these, by CRT. The following table gives the answer: e Number of solutions 0, 1 2k 2 2k+1 ≥ 3 2k+2 Theorem 4.11 . Let m be a positive integer, let a and b be integers, and let k = gcd( a, m ). Then the congruence ax ≡ b (mod m) has k solutions or no solutions according as k | b or k - b.Problems 1. Prove that for each positive integer n there exist n consecutive positive integers, none of which is an integral power of a prime. (1989 IMO) 2. For an odd positive integer n > 1, let S be the set of integers x,1 ≤ x ≤ n, such that both x and x + 1 are relatively prime to n. Show that ∏ x∈S x ≡ 1 (mod n). Find all positive integer solutions to 3 x + 4 y = 5 z .(1991 IMO Short List) 4. Let n be a positive integer such that n + 1 is divisible by 24. Prove that the sum of all the divisors of n is divisible by 24. (1969 Putnam Mathematical Competition) 5. (Wolstenholme’s Theorem) Prove that if 1 + 12 + 13 + · · · + 1 p − 119 is expressed as a fraction, where p ≥ 5 is a prime, then p2 divides the numerator. 6. Let a be the greatest positive root of the equation x3 − 3x2 + 1 = 0. Show that ba1788 c and ba1988 c are both divisible by 17. (1988 IMO Short List) 7. Let {a1, a 2, . . . , a n} and {b1, b 2, . . . , b n} be complete residue systems modulo n, such that {a1b1, a 2b2, . . . , a nbn} is also a complete residue system modulo n. Show that n = 1 or 2. 8. Let m, n be positive integers. Show that 4 mn − m − n can never be a square. (1984 IMO Proposal) 5 Binomial Coefficients For non-negative integers n and k, k ≤ n, the binomial coefficient (nk ) is defined as n! k!( n − k)! , and has several important properties. By convention, (nk ) = 0 if k > n .In the following results, for polynomials f and g with integer coefficients, we say that f ≡ g (mod m) if m divides every coefficient in f − g. Theorem 5.1 . If p is a prime, then the number of factors of p in n! is ⌊np ⌋ + ⌊ np2 ⌋ + ⌊ np3 ⌋ · · · . It is also n − sn p − 1 , where sn is the sum of the digits of n when expressed in base p. Theorem 5.2 . If p is a prime, then (pi ) ≡ 0 (mod p)20 for 1 ≤ i ≤ p − 1. Corollary 5.3 . (1 + x)p ≡ 1 + xp (mod p). Lemma 5.4 . For all real numbers x and y, bx + yc ≥ b xc + byc. Proof . 
x ≥ b xc ⇒ x + y ≥ b xc + byc ∈ Z, so bx + yc ≥ b xc + byc. Theorem 5.5 . If p is a prime, then (pk i ) ≡ 0 (mod p)for 1 ≤ i ≤ pk − 1. Proof . By Lemma 5.4, k ∑ j=1 (⌊ ipj ⌋ + ⌊pk − ipj ⌋) ≤ k ∑ j=1 ⌊pk pj ⌋ , where the LHS and RHS are the number of factors of p in i!( pk − i)! and pk! respectively. But, ⌊ ipk ⌋ = ⌊pk −ipk ⌋ = 0 and ⌊pk pk ⌋ = 1, so the inequality is strict, and at least one factor of p divides (pk i ). Corollary 5.6 . (1 + x)pk ≡ 1 + xpk (mod p). Example 5.1 . Let n be a positive integer. Show that the product of n consecutive positive integers is divisible by n!. Solution . If the consecutive integers are m, m + 1, . . . , m + n − 1, then m(m + 1) · · · (m + n − 1) n! = (m + n − 1 n ) . Example 5.2 . Let n be a positive integer. Show that (n + 1) lcm (( n 0 ) , (n 1 ) , . . . , (nn )) = lcm(1 , 2, . . . , n + 1) . (AMM E2686) Solution . Let p be a prime ≤ n + 1 and let α (respectively β) be the highest power of p in the LHS (respectively RHS) of the above equality. Choose r so that pr ≤ n + 1 < p r+1 . Then clearly β = r. We claim that if pr ≤ m < p r+1 , then pr+1 - (mk ) for 0 ≤ k ≤ m. (∗)21 Indeed, the number of factors of p in (mk ) is γ = r ∑ s=1 (⌊ mps ⌋ − ⌊ kps ⌋ − ⌊m − kps ⌋) . Since each summand in this sum is 0 or 1, we have γ ≤ r; that is, () holds. For 0 ≤ k ≤ n, let ak = ( n + 1) (nk ) = ( n − k + 1) (n + 1 k ) = ( k + 1) (n + 1 k + 1 ) . By ( ∗), pr+1 does not divide any of the integers (nk ), (n+1 k ), or (n+1 k+1 ). Thus, pr+1 can divide ak only if p divides each of the integers n + 1, n − k + 1, and k + 1. This implies that p divides ( n + 1) − (n − k + 1) − (k + 1) = −1, contradiction. Therefore, pr+1 - ak. On the other hand, for k = pr − 1, we have that k ≤ n and ak = ( k + 1) (n+1 k+1 ) is divisible by pr. Therefore, β = r = α. Theorem 5.7 . Lucas’s Theorem . Let m and n be non-negative integers, and p a prime. Let m = mkpk + mk−1pk−1 + · · · + m1p + m0, and n = nkpk + nk−1pk−1 + · · · + n1p + n0 be the base p expansions of m and n respectively. Then (mn ) ≡ (mk nk )( mk−1 nk−1 ) · · · (m1 n1 )( m0 n0 ) (mod p). Proof . By Corollary 5.6, (1 + x)m ≡ (1 + x)mk pk +mk−1pk−1+··· +m1p+m0 ≡ (1 + x)pk mk (1 + x)pk−1mk−1 · · · (1 + x)pm 1 (1 + x)m0 ≡ (1 + xpk )mk (1 + xpk−1 )mk−1 · · · (1 + xp)m1 (1 + x)m0 (mod p). By base p expansion, the coefficient of xn on both sides is (mn ) ≡ (mk nk )( mk−1 nk−1 ) · · · (m1 n1 )( m0 n0 ) (mod p). 22 Corollary 5.8 . Let n be a positive integer. Let A(n) denote the number of factors of 2 in n!, and let B(n) denote the number of 1s in the binary expansion of n. Then the number of odd entries in the nth row of Pascal’s Triangle, or equivalently the number of odd coefficients in the expansion of (1 + x)n, is 2 B(n). Furthermore, A(n) + B(n) = n for all n.Useful Facts • For a polynomial f with integer coefficients and prime p,[f (x)] pn ≡ f (xpn ) (mod p). Problems 1. Let a and b be non-negative integers, and p a prime. Show that (pa pb ) ≡ (ab ) (mod p). Let an be the last non-zero digit in the decimal representation of the number n!. Is the sequence a1, a2, a3, . . . eventually periodic? (1991 IMO Short List) 3. Find all positive integers n such that 2 n | (3 n − 1). 4. Find the greatest integer k for which 1991 k divides 1990 1991 1992 1992 1991 1990 . (1991 IMO Short List) 5. For a positive integer n, let a(n) and b(n) denote the number of binomial coefficients in the nth row of Pascal’s Triangle that are congruent to 1 and 2 modulo 3 respectively. Prove that a(n) − b(n) is always a power of 2. 6. 
Let n be a positive integer. Prove that if the number of factors of 2 in n! is n − 1, then n is a power of 2. 23 7. For a positive integer n, let Cn = 1 n + 1 (2nn ) , and Sn = C1 + C2 + · · · + Cn.Prove that Sn ≡ 1 (mod 3) if and only if there exists a 2 in the base 3 expansion of n + 1. 6 Order of an Element We know that if a is relatively prime to m, then there exists a positive integer n such that an ≡ 1 (mod m). Let d be the smallest such n. Then we say that d is the order of a modulo m, denoted by ord m(a), or simply ord( a) if the modulus m is understood. Theorem 6.1 . If a is relatively prime to m, then an ≡ 1 (mod m) iff ord( a) | n. Furthermore, an0 ≡ an1 iff ord( a) | (n0 − n1). Proof . Let d = ord( a). It is clear that d | n ⇒ an ≡ 1 (mod m). On the other hand, if an ≡ 1 (mod m), then by the division algorithm, there exist integers q and r such that n = qd + r, 0 ≤ r < d . Then an ≡ aqd +r ≡ (ad)qar ≡ ar ≡ 1 (mod m). But r < d , so r = 0 ⇒ d | n. The second part of the theorem follows. Remark . In particular, by Euler’s Theorem, ord( a) | φ(m). Example 6.1 . Show that the order of 2 modulo 101 is 100. Solution . Let d = ord(2). Then d | φ(101), or d | 100. If d < 100, then d divides 100/2 or 100/5; that is, d is missing at least one prime factor. However, 250 ≡ 1024 5 ≡ 14 5 ≡ 196 · 196 · 14 ≡ (−6) · (−6) · 14 ≡ − 1 (mod 101) , and 220 ≡ 1024 2 ≡ 14 2 ≡ − 6 (mod 101) , so d = 100. Example 6.2 . Prove that if p is a prime, then every prime divisor of 2p − 1 is greater than p.24 Solution . Let q | (2 p − 1), where q is a prime. Then 2 p ≡ 1 (mod q), so ord(2) | p. But ord(2) 6 = 1, so ord(2) = p. And by Fermat’s Little Theorem, ord(2) | (q − 1) ⇒ p ≤ q − 1 ⇒ q > p .In fact, for p > 2, q must be of the form 2 kp + 1. From the above, ord(2) | (q − 1), or p | (q − 1) ⇒ q = mp + 1. Since q must be odd, m must be even. Example 6.3 . Let p be a prime that is relatively prime to 10, and let n be an integer, 0 < n < p . Let d be the order of 10 modulo p.(a) Show that the length of the period of the decimal expansion of n/p is d.(b) Prove that if d is even, then the period of the decimal expansion of n/p can be divided into two halves, whose sum is 10 d/ 2 − 1. For example, 1/7 = 0 .142857, so d = 6, and 142 + 857 = 999 = 10 3 − 1. Solution . (a) Let m be the length of the period, and let n/p =0.a 1a2 . . . a m. Then 10 mnp = a1a2 . . . a m.a 1a2 . . . a m ⇒ (10 m − 1) np = a1a2 . . . a m, an integer. Since n and p are relatively prime, p must divide 10 m − 1, so d divides m. Conversely, p divides 10 d −1, so (10 d −1) n/p is an integer, with at most d digits. If we divide this integer by 10 d − 1, then we obtain a rational number, whose decimal expansion has period at most d. Therefore, m = d.(b) Let d = 2 k, so n/p = 0 .a 1a2 . . . a kak+1 . . . a 2k. Now p divides 10 d −1 = 10 2k − 1 = (10 k − 1)(10 k + 1). However, p cannot divide 10 k − 1 (since the order of 10 is 2 k), so p divides 10 k + 1. Hence, 10 knp = a1a2 . . . a k.a k+1 . . . a 2k ⇒ (10 k + 1) np = a1a2 . . . a k + 0 .a 1a2 . . . a k + 0 .a k+1 . . . a 2k is an integer. This can occur iff a1a2 . . . a k +ak+1 . . . a 2k is a number consisting only of 9s, and hence, equal to 10 k − 1. Problems 25 1. Prove that for all positive integers a > 1 and n, n | φ(an − 1). 2. Prove that if p is a prime, then pp−1 has a prime factor that is congruent to 1 modulo p.3. For any integer a, set na = 101 a−100 ·2a. 
Show that for 0 ≤ a, b, c, d ≤ 99 , n a + nb ≡ nc + nd (mod 10100) implies {a, b } = {c, d }.(1994 Putnam Mathematical Competition) 4. Show that if 3 ≤ d ≤ 2n+1 , then d - (a2n 1) for all positive integers a. 7 Quadratic Residues Let m be an integer greater than 1, and a an integer relatively prime to m. If x2 ≡ a (mod m) has a solution, then we say that a is a quadratic residue of m. Otherwise, we say that a is a quadratic non-residue . Now, let p be an odd prime. Then the Legendre symbol (ap ) is assigned the value of 1 if a is a quadratic residue of p. Otherwise, it is assigned the value of −1. Theorem 7.1 . Let p be an odd prime, and a and b be integers relatively prime to p. Then (a) (ap ) ≡ a(p−1) /2 (mod p), and (b) (ap )( bp ) = (ab p ) . Proof . If the congruence x2 ≡ a (mod p) has a solution, then a(p−1) /2 ≡ xp−1 ≡ 1 (mod p), by Fermat’s Little Theorem. If the congruence x2 ≡ a (mod p) has no solutions, then for each i, 1 ≤ i ≤ p − 1, there is a unique j 6 = i, 1 ≤ j ≤ p − 1, such that ij ≡ a. Therefore, all the integers from 1 to p − 1 can be arranged into ( p − 1) /2 such pairs. Taking their product, a(p−1) /2 ≡ 1 · 2 · · · (p − 1) ≡ (p − 1)! ≡ − 1 (mod p), 26 by Wilson’s Theorem. Part (b) now follows from part (a). Remark . Part (a) is known as Euler’s criterion. Example 7.1 . Show that if p is an odd prime, then (1 p ) + (2 p ) · · · + (p − 1 p ) = 0 . Solution . Note that 1 2, 2 2, . . . , (( p − 1) /2) 2 are distinct modulo p,and that (( p + 1) /2) 2, . . . , ( p − 1) 2 represent the same residues, simply in reverse. Hence, there are exactly ( p−1) /2 quadratic residues, leaving ( p−1) /2quadratic non-residues. Therefore, the given sum contains ( p − 1) /2 1s and (p − 1) /2 −1s. Theorem 7.2 . Gauss’s Lemma . Let p be an odd prime and let a be relatively prime to p. Consider the least non-negative residues of a, 2 a, . . . , (( p − 1) /2) a modulo p. If n is the number of these residues that are greater than p/ 2, then (ap ) = ( −1) n. Theorem 7.3 . If p is an odd prime, then (−1 p ) = ( −1) (p−1) /2; that is, (−1 p ) = { 1 if p ≡ 1 (mod 4) , −1 if p ≡ 3 (mod 4) . Proof . This follows from Theorem 4.9 (and Theorem 7.1). Theorem 7.4 . If p is an odd prime, then (2 p ) = ( −1) (p2−1) /8; that is, (2 p ) = { 1 if p ≡ 1 or 7 (mod 8) , −1 if p ≡ 3 or 5 (mod 8) . 27 Proof . If p ≡ 1 or 5 (mod 8), then 2(p−1) /2 (p − 12 ) ! ≡ 2 · 4 · 6 · · · (p − 1) ≡ 2 · 4 · 6 · · · (p − 12 ) · ( −p − 32 ) · · · (−5) · (−3) · (−1) ≡ (−1) (p−1) /4 (p − 12 ) ! ⇒ 2(p−1) /2 ≡ (−1) (p−1) /4 (mod p). By Theorem 7.1, (2 p ) = ( −1) (p−1) /4. Hence, (2 p ) = 1 or −1 according as p ≡ 1 or 5 (mod 8). Similarly, if p ≡ 3 or 7 (mod 8), then 2(p−1) /2 (p − 12 ) ! ≡ 2 · 4 · 6 · · · (p − 32 ) · ( −p − 12 ) · · · (−5) · (−3) · (−1) ≡ (−1) (p+1) /4 (p − 12 ) ! ⇒ 2(p−1) /2 ≡ (−1) (p+1) /4 (mod p). Hence, (2 p ) = 1 or −1 according as p ≡ 7 or 3 (mod 8). Example 7.2 . Prove that if n is an odd positive integer, then every prime divisor of 2 n − 1 is of the form 8 k ± 1. (Compare to Example 6.2) Solution . Let p | (2 n − 1), where p is prime. Let n = 2 m + 1. Then 2n ≡ 22m+1 ≡ 2(2 m)2 ≡ 1 (mod p) ⇒ (2 p ) = 1 ⇒ p is of the form 8 k ± 1. Theorem 7.5 . The Law of Quadratic Reciprocity . For distinct odd primes p and q, (pq ) ( qp ) = ( −1) p−12 · q−12 . Example 7.3 . For which primes p > 3 does the congruence x2 ≡ − 3(mod p) have a solution? Solution . We seek p for which (−3 p ) = (−1 p ) ( 3 p ) = 1. By quadratic reciprocity, (3 p ) ( p 3 ) = ( −1) (p−1) /2 = (−1 p ) , 28 by Theorem 7.3. 
Thus, in general, (−3 p ) = (−1 p ) ( 3 p ) = (p 3 ) ( −1 p )2 = (p 3 ) . And, ( p 3 ) = 1 iff p ≡ 1 (mod 3). Since p 6 ≡ 4 (mod 6), we have that x2 ≡ − 3(mod p) has a solution iff p ≡ 1 (mod 6). Example 7.4 . Show that if p = 2 n + 1, n ≥ 2, is prime, then 3 (p−1) /2 + 1 is divisible by p. Solution . We must have that n is even, say 2 k, for otherwise p ≡ 0(mod 3). By Theorem 7.1, (3 p ) ≡ 3(p−1) /2 (mod p). However, p ≡ 1 (mod 4), and p ≡ 4k + 1 ≡ 2 (mod 3) ⇒ (p 3 ) = −1, and by quadratic reciprocity, (3 p ) ( p 3 ) = ( −1) (p−1) /2 = 1 , so (3 p ) = −1 ⇒ 3(p−1) /2 + 1 ≡ 0 (mod p). Useful Facts • (a) If p is a prime and p ≡ 1 or 3 (mod 8), then there exist positive integers x and y such that p = x2 + 2 y2.(b) If p is a prime and p ≡ 1 (mod 6), then there exist positive integers x and y such that p = x2 + 3 y2.Problems 1. Show that if p > 3 is a prime, then the sum of the quadratic residues among the integers 1, 2, . . . , p − 1 is divisible by p.2. Let Fn denote the nth Fibonacci number. Prove that if p > 5 is a prime, then Fp ≡ (p 5 ) (mod p). 29 3. Show that 16 is a perfect 8 th power modulo p for any prime p.4. Let a, b, and c be positive integers that are pairwise relatively prime, and that satisfy a2 − ab + b2 = c2. Show that every prime factor of c is of the form 6 k + 1. 5. Let p be an odd prime and let ζ be a primitive pth root of unity; that is, ζ is a complex number such that ζp = 1 and ζk 6 = 1 for 1 ≤ k ≤ p − 1. Let Ap and Bp denote the set of quadratic residues and non-residues modulo p, respectively. Finally, let α = ∑ k∈Ap ζk and β = ∑ k∈Bp ζk.For example, for p = 7, α = ζ + ζ2 + ζ4 and β = ζ3 + ζ5 + ζ6. Show that α and β are the roots of x2 + x +1 − (−1 p ) p 4 = 0 . 8 Primitive Roots If the order of g modulo m is φ(m), then we say that g is a primitive root modulo m, or simply of m. Example 8.1 . Show that 2 is a primitive root modulo 3 n for all n ≥ 1. Solution . The statement is easily verified for n = 1, so assume the result is true for some n = k; that is, 2 φ(3 k ) ≡ 22·3k−1 ≡ 1 (mod 3 k). Now, let d be the order of 2 modulo 3 k+1 . Then 2 d ≡ 1 (mod 3 k+1 ) ⇒ 2d ≡ 1 (mod 3 k), so 2 · 3k−1 | d. However, d | φ(3 k+1 ), or d | 2 · 3k. We deduce that d is either 2 · 3k−1 or 2 · 3k. Now we require the following lemma: Lemma. 2 2·3n−1 ≡ 1 + 3 n (mod 3 n+1 ), for all n ≥ 1. This is true for n = 1, so assume it is true for some n = k. Then by assumption, 22·3k−1 = 1 + 3 k + 3 k+1 m for some integer m ⇒ 22·3k = 1 + 3 k+1 + 3 k+2 M for some integer M (obtained by cubing) ⇒ 22·3k ≡ 1 + 3 k+1 (mod 3 k+2 ). By induction, the lemma is proved. Therefore, 2 2·3k−1 ≡ 1 + 3 k 6 ≡ 1 (mod 3 k+1 ), so the order of 2 modulo 3 k+1 is 2 · 3k, and again by induction, the result follows. 30 Corollary 8.2 . If 2 n ≡ − 1 (mod 3 k), then 3 k−1 | n. Proof . The given implies 2 2n ≡ 1 (mod 3 k) ⇒ φ(3 k) | 2n, or 3 k−1 | n. Theorem 8.3 . If m has a primitive root, then it has φ(φ(m)) (distinct) primitive roots modulo m. Theorem 8.4 . The positive integer m has a primitive root iff m is one of 2, 4, pk, or 2 pk, where p is an odd prime. Theorem 8.5 . If g is a primitive root of m, then gn ≡ 1 (mod m) iff φ(m) | n. Furthermore, gn0 ≡ gn1 iff φ(m) | (n0 − n1). Proof . This follows directly from Theorem 6.1. Theorem 8.6 . If g is a primitive root of m, then the powers 1, g, g2,. . . , gφ(m)−1 represent each integer relatively prime to m uniquely modulo m.In particular, if m > 2, then gφ(m)/2 ≡ − 1 modulo m. Proof . 
Clearly, each power gi is relatively prime to m, and there are φ(m) integers relatively prime to m. Also, if gi ≡ gj (mod m), then gi−j ≡ 1 ⇒ φ(m) | (i − j) by Theorem 8.6, so each of the powers are distinct modulo m. Hence, each integer relatively prime to m is some power gi modulo m. Furthermore, there is a unique i, 0 ≤ i ≤ φ(m) − 1, such that gi ≡ − 1 ⇒ g2i ≡ 1 ⇒ 2i = φ(m), or i = φ(m)/2. Proposition 8.7 . Let m be a positive integer. Then the only solutions to the congruence x2 ≡ 1 (mod m) are x ≡ ± 1 (mod m) iff m has a primitive root. Proof . This follows from Example 4.9. Example 8.2 . For a positive integer m, let S be the set of positive integers less than m that are relatively prime to m, and let P be the product of the elements in S. Show that P ≡ ± 1 (mod m), with P ≡ − 1 (mod m)iff m has a primitive root. Solution . We use a similar strategy as in the proof of Wilson’s Theorem. The result is clear for m = 2, so assume that m ≥ 3. We partition S as follows: Let A be the elements of S that are solutions to the congruence x2 ≡ 1 (mod m), and let B be the remaining elements. The elements in B can be arranged into pairs, by pairing each with its distinct multiplicative inverse. Hence, the product of the elements in B is 1 modulo m.The elements in A may also be arranged into pairs, by pairing each with 31 its distinct additive inverse, i.e. x and m − x. These must be distinct, because otherwise, x = m/ 2, which is not relatively prime to m. Note that their product is x(m − x) ≡ mx − x2 ≡ − 1 (mod m). Now if m has a primitive root, then by Proposition 8.7, A consists of only the two elements 1 and −1, so P ≡ − 1 (mod m). Otherwise, by Example 4.9, the number of elements of A is a power of two that is at least 4, so the number of such pairs in A is even, and P ≡ 1 (mod m). Remark . For m prime, this simply becomes Wilson’s Theorem. Theorem 8.8 .(1) If g is a primitive root of p, p a prime, then g or g + p is a primitive root of p2, according as gp−1 6 ≡ 1 (mod p2) or gp−1 ≡ 1 (mod p2). (2) If g is a primitive root of pk, where k ≥ 2 and p is prime, then g is a primitive root of pk+1 .By Theorem 8.6, given a primitive root g of m, for each a relatively prime to m, there exists a unique integer i modulo φ(m) such that gi ≡ a (mod m). This i is called the index of a with respect to the base g, denoted by ind g(a)(i is dependent on g, so it must be specified). Indices have striking similarity to logarithms, as seen in the following properties: (1) ind g(1) ≡ 0 (mod φ(m)), ind g(g) ≡ 1 (mod φ(m)), (2) a ≡ b (mod m) ⇒ ind g(a) ≡ ind g(b) (mod φ(m)), (3) ind g(ab ) ≡ ind g(a) + ind g(b) (mod φ(m)), (4) ind g(ak) ≡ k ind g(a) (mod φ(m)). Theorem 8.9 . If p is a prime and a is not divisible by p, then the con-gruence xn ≡ a (mod p) has gcd( n, p − 1) solutions or no solutions according as a(p−1) / gcd( n,p −1) ≡ 1 (mod p) or a(p−1) / gcd( n,p −1) 6 ≡ 1 (mod p). Proof . Let g be a primitive root of p, and let i be the index of a with respect to g. Also, any solution x must be relatively prime to p, so let u be the index of x. Then the congruence xn ≡ a becomes gnu ≡ gi (mod p) ⇔ nu ≡ i (mod p − 1). Let k = gcd( n, p − 1). Since g is a primitive root of p, k | i ⇔ gi(p−1) /k ≡ a(p−1) /k ≡ 1. The result now follows from Theorem 4.11. 32 Remark . Taking p to be an odd prime and n = 2, we deduce Euler’s criterion. Example 8.3 Let n ≥ 2 be an integer and p = 2 n + 1. Show that if 3(p−1) /2 + 1 ≡ 0 (mod p), then p is a prime. (The converse to Example 7.4.) Solution . 
From 3 (p−1) /2 ≡ 32n−1 ≡ − 1 (mod p), we obtain 3 2n ≡ 1(mod p), so the order of 3 is 2 n = p−1, but the order also divides φ(p) ≥ p−1. Therefore, φ(p) = p − 1, and p is a prime. Example 8.4 . Prove that if n = 3 k−1, then 2 n ≡ − 1 (mod 3 k). (A partial converse to Corollary 8.2.) Solution . By Example 8.1, 2 is a primitive root of 3 k. Therefore, 2 has order φ(3 k) = 2 · 3k−1 = 2 n ⇒ 22n ≡ 1 ⇒ (2 n − 1)(2 n + 1) ≡ 0 (mod 3 k). However, 2 n − 1 ≡ (−1) 3k−1 − 1 ≡ 1 6 ≡ 0 (mod 3), so 2 n + 1 ≡ 0 (mod 3 k). Example 8.5 . Find all positive integers n > 1 such that 2n + 1 n2 is an integer. (1990 IMO) Solution . Clearly, n must be odd. Now assume that 3 k‖n; that is, 3 k is the highest power of 3 dividing n. Then 3 2k | n2 | (2 n + 1) ⇒ 2n ≡ − 1(mod 3 2k) ⇒ 32k−1 | n, by Corollary 8.2 ⇒ 2k − 1 ≤ k ⇒ k ≤ 1, showing that n has at most one factor of 3. We observe that n = 3 is a solution. Suppose that n has a prime factor greater than 3; let p be the least such prime. Then p | (2 n +1) ⇒ 2n ≡ − 1 (mod p). Let d be the order of 2 modulo p. Since 2 2n ≡ 1, d | 2n. If d is odd, then d | n ⇒ 2n ≡ 1, contradiction, so d is even, say d = 2 d1. Then 2 d1 | 2n ⇒ d1 | n. Also, d | (p − 1), or 2d1 | (p − 1) ⇒ d1 ≤ (p − 1) /2 < p . But d1 | n, so d1 = 1 or d1 = 3. If d1 = 1, then d = 2, and 2 2 ≡ 1 (mod p), contradiction. If d1 = 3, then d = 6, and 26 ≡ 1 (mod p), or p | 63 ⇒ p = 7. However, the order of 2 modulo 7 is 3, which is odd, again contradiction. Therefore, no such p can exist, and the only solution is n = 3. Useful Facts • All prime divisors of the Fermat number 2 2n 1, n > 1, are of the form 2n+2 k + 1. 33 Problems 1. Let p be an odd prime. Prove that 1i + 2 i + · · · + ( p − 1) i ≡ 0 (mod p)for all i, 0 ≤ i ≤ p − 2. 2. Show that if p is an odd prime, then the congruence x4 ≡ − 1 (mod p)has a solution iff p ≡ 1 (mod 8). 3. Show that if a and n are positive integers with a odd, then a2n ≡ 1(mod 2 n+2 ). 4. The number 142857 has the remarkable property that multiplying it by 1, 2, 3, 4, 5, and 6 cyclically permutes the digits. What are other numbers that have this property? Hint: Compute 142857 × 7. 9 Dirichlet Series Despite the intimidating name, Dirichlet series are easy to work with, and can provide quick proofs to certain number-theoretic identities, such as Example 3.2. Let α be a function taking the positive integers to the integers. Then we say that f (s) = ∞ ∑ n=1 α(n) ns = α(1) + α(2) 2s + α(3) 3s + · · · is the Dirichlet series generating function (Dsgf) of the function α,which we denote by f (s) ↔ α(n). Like general generating functions, these generating functions are used to provide information about their correspond-ing number-theoretic functions, primarily through manipulation of the gen-erating functions. Let 1 denote the function which is 1 for all positive integers; that is, 1( n) = 1 for all n. Let δ1(n) be the function defined by δ1(n) = { 1 if n = 1 , 0 if n > 1. It is easy to check that 1 and δ1 are multiplicative. 34 Now, let α and β be functions taking the positive integers to the integers. The convolution of α and β, denoted α ∗ β, is defined by (α ∗ β)( n) = ∑ d|n α(d)β(n/d ). Note that convolution is symmetric; that is, α ∗ β = β ∗ α. Theorem 9.1 . Let f (s) ↔ α(n) and g(s) ↔ β(n). Then ( f · g)( s) ↔ (α ∗ β)( n). We now do three examples. The Dsgf of 1( n) is the well-known Riemann Zeta function ζ(s): ζ(s) = ∞ ∑ n=1 1 ns = 1 + 12s + 13s + · · · , so ζ(s) ↔ 1( n). This function will play a prominent role in this theory. 
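Because the convolution involves only finite divisor sums, these definitions are easy to experiment with numerically. The brute-force sketch below (helper names are ad hoc) checks Example 3.2 in convolution form, namely φ ∗ 1 = id, and the symmetry of ∗.

```python
from math import gcd

def convolve(alpha, beta, n):
    """(alpha * beta)(n) = sum over d | n of alpha(d) * beta(n // d)."""
    return sum(alpha(d) * beta(n // d) for d in range(1, n + 1) if n % d == 0)

one = lambda n: 1
phi = lambda n: sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)   # Euler phi, brute force
tau = lambda n: sum(1 for d in range(1, n + 1) if n % d == 0)

# Example 3.2 in convolution form: (phi * 1)(n) = n.
assert all(convolve(phi, one, n) == n for n in range(1, 200))

# Convolution is symmetric: alpha * beta = beta * alpha.
assert all(convolve(phi, tau, n) == convolve(tau, phi, n) for n in range(1, 100))
```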
What makes this theory nice to work with is that we may work with these functions at a purely formal level; no knowledge of the analytic properties of ζ(s) or indeed of any other Dsgf is required. By Theorem 9.1, the number-theoretic function corresponding to ζ2(s) is ∑ d|n 1( d)1( n/d ) = ∑ d|n 1 = τ (n). Hence, ζ2(s) ↔ τ (n). Finally, it is clear that 1 ↔ δ1(n). If α is a multiplicative function, then we can compute the Dsgf corre-sponding to α using the following theorem. Theorem 9.2 . Let α be a multiplicative function. Then ∞ ∑ n=1 α(n) ns = ∏ p ∞ ∑ k=0 α(pk) pks = ∏ p [1 + α(p)p−s + α(p2)p−2s + α(p3)p−3s + · · · ], where the product on the right is taken over all prime numbers. 35 As before, if we take α = 1, then we obtain ζ(s) = ∏ p (1 + p−s + p−2s + p−3s + · · · )= ∏ p ( 11 − p−s ) = 1 ∏ p (1 − p−s), an identity that will be useful. We say that a positive integer n > 1 is square-free if n contains no repeated prime factors; that is, p2 - n for all primes p. With this in mind, we define the M¨ obius function μ as follows: μ(n) =  1 if n = 1 , 0 if n is not square-free, and (−1) k if n is square-free and has k prime factors . It is easy to check that μ is multiplicative. By Theorem 9.2, the corresponding Dsgf is given by ∏ p (1 − p−s) = 1 ζ(s). Hence, 1 /ζ (s) ↔ μ(n), and this property makes the the seemingly mysterious function μ very important, as seen in the following theorem. Theorem 9.3 . (M¨ obius Inversion Formula) Let α and β be functions such that β(n) = ∑ d|n α(d). Then α(n) = ∑ d|n μ(n/d )β(d). Proof . Let f (s) ↔ α(n) and g(s) ↔ β(n). The condition is equivalent to β = α ∗ 1, or g(s) = f (s)ζ(s), and the conclusion is equivalent to α = β ∗ μ,or f (s) = g(s)/ζ (s). Theorem 9.4 . Let f (s) ↔ α(n). Then for any integer k, f (s − k) ↔ nkα(n). 36 For more on Dirichlet series, and generating functions in general, see H. Wilf, Generatingfunctionology .Problems 1. Let α, β, and γ be functions taking the positive integers to the integers. (a) Prove that α ∗ δ1 = α.(b) Prove that ( α ∗ β) ∗ γ = α ∗ (β ∗ γ). (c) Prove that if α and β are multiplicative, then so is α ∗ β.2. Prove that the following relations hold: ζ(s − 1) ζ(s) ↔ φ(n),ζ(s)ζ(s − 1) ↔ σ(n),ζ(s) ζ(2 s) ↔ | μ(n)|. Let the prime factorization of a positive integer n > 1 be pe1 1 pe2 2 · · · pek k .Define the functions λ and θ by λ(n) = ( −1) e1+e2+··· +ek and θ(n) = 2 k.Set λ(1) = θ(1) = 1. Show that λ and θ are multiplicative, and that ζ(2 s) ζ(s) ↔ λ(n) and ζ2(s) ζ(2 s) ↔ θ(n). For all positive integers n, let f (n) = n ∑ m=1 n gcd( m, n ). (a) Show that f (n) = ∑ d|n dφ (d). (b) Let n = pe1 1 pe2 2 · · · pek k 1 be the prime factorization of n. Show that f (n) = (p2e1+1 1 + 1 p1 + 1 ) ( p2e2+1 2 + 1 p2 + 1 ) · · · (p2ek +1 1 + 1 pk + 1 ) . Verify Example 3.2 in one calculation. 37 6. Let id denote the identity function; that is, id( n) = n for all n. Verify each of the following identities in one calculation: (a) φ ∗ τ = σ.(b) μ ∗ 1 = δ1.(c) μ ∗ id = φ.(d) φ ∗ σ = id · τ .(e) σ ∗ id = 1 ∗ (id · τ ). 7. Let a1, a2, . . . , be the sequence of positive integers satisfying ∑ d|n ad = 2 n for all n. Hence, a1 = 2, a2 = 2 2 − 2 = 2, a3 = 2 3 − 2 = 6, a4 =24 − 2 − 2 = 12, and so on. Show that for all n, n | an.Hint: Don’t use the Dsgf of ( an)∞ 1 ; use the M¨ obius Inversion Formula. Bigger Hint: Consider the function f : [0 , 1] → [0 , 1] defined by f (x) = {2x}, where {x} = x − b xc is the fractional part of x. Find how the formula in the problem relates to the function f (n) = f ◦ f ◦ · · · ◦ f ︸ ︷︷ ︸ n .8. 
For all non-negative integers k, let σk be the function defined by σk(n) = ∑ d|n dk. Thus, σ0 = τ and σ1 = σ. Prove that ζ(s)ζ(s − k) ↔ σk(n). 10 Miscellaneous Topics 10.1 Pell’s Equations Pell’s equations (or Fermat’s equations, as they are rightly called) are diophantine equations of the form x2 − dy 2 = N , where d is a positive non-square integer. There always exist an infinite number of solutions when N = 1, which we characterize. 38 Theorem 10.1.1 . If ( a, b ) is the lowest positive integer solution of x2 − dy 2 = 1, then all positive integer solutions are of the form (xn, y n) = ( (a + b√d)n + ( a − b√d)n 2 , (a + b√d)n − (a − b√d)n 2√d ) . We will not give a proof here, but we will verify that every pair indicated by the formula is a solution. The pair ( xn, y n) satisfy the equations xn + yn √d = ( a + b√d)n, and xn − yn √d = ( a − b√d)n. Therefore, x2 n − dy 2 n = ( xn + yn √d)( xn − yn √d)= ( a + b√d)n(a − b√d)n = ( a2 − db 2)n = 1 , since ( a, b ) is a solution. Remark . The sequences ( xn), ( yn) satisfy the recurrence relations xn =2ax n−1 − xn−2, yn = 2 ay n−1 − yn−2.For x2 − dy 2 = −1, the situation is similar. If ( a, b ) is the least positive solution, then the ( xn, y n) as above for n odd are the solutions of x2 − dy 2 = −1, and the ( xn, y n) for n even are the solutions of x2 − dy 2 = 1. Example 10.1.1 Find all solutions in pairs of positive integers ( x, y ) to the equation x2 − 2y2 = 1. Solution . We find that the lowest positive integer solution is (3,2), so all positive integer solutions are given by (xn, y n) = ( (3 + 2 √2) n + (3 − 2√2) n 2 , (3 + 2 √2) n − (3 − 2√2) n 2√2 ) . The first few solutions are (3,2), (17,12), and (99,70). 39 Example 10.1.2 . Prove that the equation x2 −dy 2 = −1 has no solution in integers if d ≡ 3 (mod 4). Solution . It is apparent that d must have a prime factor of the form 4 k +3, say q. Then x2 ≡ − 1 (mod q), which by Theorem 4.9 is a contradiction. Problems 1. In the sequence 12, 53, 11 8 , 27 19 , . . . , the denominator of the nth term ( n > 1) is the sum of the numerator and the denominator of the ( n − 1) th term. The numerator of the nth term is the sum of the denominators of the nth and ( n−1) th term. Find the limit of this sequence. (1979 Atlantic Region Mathematics League) 2. Let x0 = 0, x1 = 1, xn+1 = 4 xn − xn−1, and y0 = 1, y1 = 2, yn+1 =4yn − yn−1. Show for all n ≥ 0 that y2 n = 3 x2 n (1988 Canadian Mathematical Olympiad) 3. The polynomials P , Q are such that deg P = n, deg Q = m, have the same leading coefficient, and P 2(x) = ( x2 − 1) Q2(x) + 1. Show that P ′(x) = nQ (x). (1978 Swedish Mathematical Olympiad, Final Round) 10.2 Farey Sequences The nth Farey sequence is the sequence of all reduced rationals in [0,1], with both numerator and denominator no greater than n, in increasing order. Thus, the first 5 Farey sequences are: 0/1, 1/1, 0/1, 1/2, 1/1, 0/1, 1/3, 1/2, 2/3, 1/1, 0/1, 1/4, 1/3, 1/2, 2/3, 3/4, 1/1, 0/1, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1/1. Properties of Farey sequences include the following: 40 (1) If a/b and c/d are consecutive fractions in the same sequence, in that order, then ad − bc = 1. (2) If a/b , c/d , and e/f are consecutive fractions in the same sequence, in that order, then a + eb + f = cd. (3) If a/b and c/d are consecutive fractions in the same sequence, then among all fractions between the two, ( a + c)/(b + d) (reduced) is the unique fraction with the smallest denominator. 
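These properties are easy to observe computationally. The sketch below (the function name is just for illustration) builds the nth Farey sequence directly from the definition and spot-checks properties (1) and (2), in the form |ad − bc| = 1 for consecutive fractions and the mediant identity for consecutive triples.

```python
from fractions import Fraction

def farey(n):
    """All reduced fractions a/b in [0, 1] with b <= n, in increasing order."""
    terms = {Fraction(a, b) for b in range(1, n + 1) for a in range(0, b + 1)}
    return sorted(terms)

F5 = farey(5)
print([f"{f.numerator}/{f.denominator}" for f in F5])
# ['0/1', '1/5', '1/4', '1/3', '2/5', '1/2', '3/5', '2/3', '3/4', '4/5', '1/1']

# Property (1): consecutive a/b, c/d satisfy |ad - bc| = 1.
for x, y in zip(F5, F5[1:]):
    assert abs(x.numerator * y.denominator - y.numerator * x.denominator) == 1

# Property (2): for consecutive a/b, c/d, e/f, the mediant (a+e)/(b+f) reduces to c/d.
for x, y, z in zip(F5, F5[1:], F5[2:]):
    assert Fraction(x.numerator + z.numerator, x.denominator + z.denominator) == y
```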
For proofs of these and other interesting properties, see Ross Honsberger, “Farey Sequences”, Ingenuity in Mathematics .Problems 1. Let a1, a2, . . . , am be the denominators of the fractions in the nth Farey sequence, in that order. Prove that 1 a1a2 1 a2a3 · · · + 1 am−1am = 1 . 10.3 Continued Fractions Let a0, a1, . . . , an be real numbers, all positive, except possibly a0. Then let 〈a0, a 1, . . . , a n〉 denote the continued fraction a0 + 1 a1 + · · · + 1 an−1 + 1 an . If each ai is an integer, then we say that the continued fraction is simple .Define sequences ( pk) and ( qk) as follows: p−1 = 0 , p0 = a0, and pk = akpk−1 + pk−2,q−1 = 0 , q0 = 1 , and qk = akpk−1 + qk−2, for k ≥ 1. Theorem 10.3.1 . For all x > 0 and k ≥ 1, 〈a0, a 1, . . . , a k−1, x 〉 = xp k−1 + pk−2 xq k−1 + qk−2 . 41 In particular, 〈a0, a 1, . . . , a k〉 = pk qk . Theorem 10.3.2 . For all k ≥ 0, (1) pkqk−1 − pk−1qk = ( −1) k−1,(2) pkqk−2 − pk−2qk = ( −1) kak.Define ck to be the kth convergence 〈a0, a 1, . . . , a k〉 = pk/q k. Theorem 10.3.3 . c0 < c 2 < c 4 < · · · < c 5 < c 3 < c 1.For a nice connection between continued fractions, linear diophantine equations, and Pell’s equations, see Andy Liu, “Continued Fractions and Diophantine Equations”, Volume 3, Issue 2, Mathematical Mayhem .Problems 1. Let a = 〈1, 2, . . . , 99 〉 and b = 〈1, 2, . . . , 99 , 100 〉. Prove that |a − b| < 199!100! . (1990 Tournament of Towns) 2. Evaluate 8 √√√√√2207 − 12207 − 12207 − · · · . Express your answer in the form a+b√cd , where a, b, c, d are integers. (1995 Putnam) 10.4 The Postage Stamp Problem Let a and b be relatively prime positive integers greater than 1. Consider the set of integers of the form ax + by , where x and y are non-negative integers. The following are true: (1) The greatest integer that cannot be written in the given form is ( a − 1)( b − 1) − 1 = ab − a − b.42 (2) There are 12 (a − 1)( b − 1) positive integers that cannot be written in the given form. (3) For all integers t, 0 ≤ t ≤ ab − a − b, t can be written in the given form iff ab − a − b − t cannot be. (If you have not seen or attempted this enticing problem, it is strongly suggested you have a try before reading the full solution.) Before presenting the solution, it will be instructive to look at an example. Take a = 12 and b = 5. The first few non-negative integers, in rows of 12, with integers that cannot be written in the given form in bold, are shown: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 With this arrangement, one observation should become immediately ap-parent, namely that bold numbers in each column end when they reach a multiple of 5. It should be clear that when reading down a column, once one hits an integer that can be written in the given form, then all successive integers can be as well, since we are adding 12 for each row we go down. It will turn out that this one observation is the key to the solution. Proof . Define a grapefruit to be an integer that may be written in the given form. For each i, 0 ≤ i ≤ a − 1, let mi be the least non-negative integer such that b | (i + am i). It is obvious that for k ≥ mi, i + ak is a grapefruit. We claim that for 0 ≤ k ≤ mi − 1, i + ak is not a grapefruit. It is sufficient to show that i + a(mi − 1) is not a grapefruit, if mi ≥ 1. Let i + am i = bn i, ni ≥ 0. Since i + a(mi − b) = b(ni − a), mi must be strictly less than b; otherwise, we can find a smaller mi. 
Then i + a(mi − b) ≤ a−1−a = −1, so ni < a , or ni ≤ a−1. Suppose that ax +by = i+a(mi −1) = bn i − a, for some non-negative integers x and y. Then a(x + 1) = b(ni − y), so ni − y is positive. Since a and b are relatively prime, a divides ni − y.However, ni ≤ a − 1 ⇒ ni − y ≤ a − 1, contradiction. Therefore, the greatest non-grapefruit is of the form bn i − a, ni ≤ a − 1. The above argument also shows that all positive integers of this form are also non-grapefruits. Hence, the greatest non-grapefruit is b(a−1) −a = ab −a−b,proving (1). 43 Now, note that there are mi non-grapefruits in column i. The above tells us the first grapefruit appearing in column i is nib. Since 0, b, 2 b, . . . , ( a−1) b appear in different columns (because a and b are relatively prime), and there are a columns, we conclude that as i varies from 0 to a − 1, ni takes on 0, 1, . . . , a − 1, each exactly once. Therefore, summing over i, 0 ≤ i ≤ a − 1, ∑ i (i + am i) = ∑ i i + ∑ i am i = a(a − 1) 2 + a ∑ i mi = ∑ i bn i = a(a − 1) b 2 ⇒ a ∑ i mi = a(a − 1)( b − 1) 2 ⇒ ∑ i mi = (a − 1)( b − 1) 2 , proving (2). Finally, suppose that ax 1 + by 1 = t, and ax 2 + by 2 = ab − a − b − t, for some non-negative integers x1, x2, y1, and y2. Then a(x1 + x2) + b(y1 + y2) = ab − a − b, contradicting (1). So, if we consider the pairs ( t, ab − a − b − t), 0 ≤ t ≤ (a − 1)( b − 1) /2 − 1, at most one element in each pair can be written in the given form. However, we have shown that exactly ( a − 1)( b − 1) /2 integers cannot be written in the given form, which is the number of pairs. Therefore, exactly one element of each pair can be written in the given form, proving (3). Remark . There is a much shorter proof using Corollary 2.4. Can you find it? For me, this type of problem epitomizes problem solving in number theory, and generally mathematics, in many ways. If I merely presented the proof by itself, it would look artificial and unmotivated. However, by looking at a specific example, and finding a pattern, we were able to use that pattern as a springboard and extend it into a full proof. The algebra in the proof is really nothing more than a translation of observed patterns into formal notation. (Mathematics could be described as simply the study of pattern.) Note also that we used nothing more than very elementary results, showing how powerful basic concepts can be. It may have been messy, but one should never be afraid to get one’s hands dirty; indeed, the deeper you go, the 44 more you will understand the importance of these concepts and the subtle relationships between them. By trying to see an idea through to the end, one can sometimes feel the proof almost working out by itself. The moral of the story is: A simple idea can go a long way. For more insights on the postage stamp problem, see Ross Honsberger, “A Putnam Paper Problem”, Mathematical Gems II .Problems 1. Let a, b, and c be positive integers, no two of which have a common divisor greater than 1. Show that 2 abc − ab − bc − ca is the largest integer that cannot be expressed in the form xab + yca + zab , where x, y, and z are non-negative integers. (1983 IMO) References A. Adler & J. Coury, The Theory of Numbers , Jones and Bartlett I. Niven & H. Zuckerman, An Introduction to the Theory of Numbers ,John Wiley & Sons c© First Version October 1995 c© Second Version January 1996 c© Third Version April 1999 c© Fourth Version May 2000 Thanks to Ather Gattami for an improvement to the proof of the Postage Stamp Problem. 
This document was typeset under LaTeX, and may be freely distributed provided the contents are unaltered and this copyright notice is not removed. Any comments or corrections are always welcomed. It may not be sold for profit or incorporated in commercial documents without the express permission of the copyright holder. So there.
Matthew Schwartz Statistical Mechanics, Spring 2019 Lecture 7: Ensembles 1 Introduction In statistical mechanics, we study the possible microstates of a system. We never know exactly which microstate the system is in. Nor do we care. We are interested only in the behavior of a system based on the possible microstates it could be, that share some macroscopic proporty (like volume V ; energy E, or number of particles N). The possible microstates a system could be in are known as the ensemble of states for a system. There are different kinds of ensembles. So far, we have been counting microstates with a fixed number of particles N and a fixed total energy E. We defined as the total number microstates for a system. That is (E; V ; N) = X microstatesk withsameN;V ;E 1 (1) Then S = kBln is the entropy, and all other thermodynamic quantities follow from S. For an isolated system with N fixed and E fixed the ensemble is known as the microcanonical ensemble. In the microcanonical ensemble, the temperature is a derived quantity, with 1 T = @S @E. So far, we have only been using the microcanonical ensemble. For example, a gas of identical monatomic particles has (E; V ; N) ∼ 1 N!V NE 3 2N. From this we computed the entropy S = kBln which at large N reduces to the Sackur-Tetrode formula. The temperature is 1 T = @S @E = 3 2 NkB E so that E = 3 2NkBT. Also in the microcanonical ensemble we observed that the number of states for which the energy of one degree of freedom is fixed to "i is (E ¡ "i). Thus the probability of such a state is Pi= (E ¡ "i) (E) ∼e¡"i/kBT. This is the Boltzmann distribution. Within the context of the microcanonical ensemble, we also derived the Boltzmann distribution using the principle of maximum entropy. This approach is very general. It uses nothing about the system other than that the total number of degrees of freedom N is large and the total energy is E. To use the maximum entropy principle we counted the number of ways that N particles could be allocated into groups of size ni with energies "i, so that P ni = N and P ni"i = E. We found that in the most probable allocation of particles to groups, the probability of finding a particle with energy "i was Pi = 1 Ze¡ "i (2) where Z = P i e¡ "i and = 1 kBT. Sometimes we don't know the total energy, but we know the temperature. This situation is in fact much more common than knowing the energy. For example, in the room you are in, what is the energy of the air molecules? I bet you don't have a clue. But I bet you have a good idea of what the temperature is. When we fix temperature instead of energy, we have to allow the energy to fluctuate. For example, think of two systems in thermal contact. The thermal contact allows energy to flow in and out of each system, so energy of each system is not fixed. We call a set of microstates with N ; V and T fixed but variable E the canonical ensemble. In the canonical ensemble, the primary object is not the number of state or the entropy S but rather the partition function Z( ) = X microstatesk withsameN ;V e¡ Ek (3) In the partition function, energies of the microstates summed over can vary. Thus the left-hand side, Z( ), cannot depend on energy. Instead, it depends on temperature. Once Z is known, it is straightforward to compute the average energy hEi and other thermodynamic quantities, as we will see. 1 In both the microcanonical and canonical ensembles, we fix the volume. We could instead let the volume vary and sum over possible volumes. 
In both the microcanonical and canonical ensembles, we fix the volume. We could instead let the volume vary and sum over possible volumes. Allowing the volume to vary gives the Gibbs ensemble. In the Gibbs ensemble, the partition function depends on pressure rather than volume, just as the canonical ensemble depended on temperature rather than energy.

In the microcanonical, canonical, and Gibbs ensembles, the number of particles $N$ in the system is fixed. In some situations, we want the number of particles to vary. For example, chemical reactions change the number of each molecule type, so in chemistry we can't fix $N$. Instead we fix something called the chemical potential, $\mu$. The chemical potential is like a pressure for particle number. It is a very important concept, but very difficult to grasp, so we will spend a lot of time understanding it in this lecture and beyond. When $N$ can vary we use the grand canonical ensemble. The main object of interest in the grand canonical ensemble is the grand partition function
$$\mathcal{Z} = \sum_{\text{microstates } k \text{ with same } V} e^{-\beta E_k}\, e^{\beta\mu N_k} \qquad (4)$$
The grand canonical ensemble is used in chemistry, quantum statistical mechanics, and much of condensed matter physics.

2 Canonical ensemble

In the microcanonical ensemble, we calculated properties of a system by counting the number of microstates at fixed energy. Then, for example, temperature is a derived quantity, $\frac{1}{k_B T} = \frac{\partial\ln\Omega}{\partial E}$. In the canonical ensemble, we fix the temperature $T$, and the (average) energy becomes the derived quantity.

In order to fix the temperature, it is a useful conceptual trick to imagine our system of interest in thermal contact with a heat reservoir. This means the system and heat reservoir can exchange energy through heat, but no work can be done by the system on the reservoir or vice versa. The point of the reservoir is to make concrete the idea of fixing the temperature and letting the energy fluctuate.

Figure 1. When a system is in thermal contact with a heat reservoir, its temperature is fixed. Its energy fluctuates around its average value.

We do not allow particles to go from the system to the reservoir, only energy. The number of particles in the system can be small – we can have a single atom even – it won't matter. This is important because the canonical ensemble will allow us to discuss systems with a limited number of quantum states, in contrast to the microcanonical ensemble, where we really did need to expand at large $N$ to make progress. Although the system can be small, the reservoir does need to be large, so that it has much, much more energy than the system. But this is not a constraint, just a conceptual trick, since the reservoir does not actually need to exist.

We would like to know: what is the probability of finding the system in a fixed microstate $k$ with energy $E_k$? To be clear: every momentum and position of every particle in $k$ is fixed. Since the system + reservoir is a closed system, the total energy of the system + reservoir is fixed at $E_{\rm tot}$. Since we have fixed the microstate $k$ of the system, the total number of states is determined only by properties of the reservoir. More precisely, the probability of finding the system in microstate $k$ is proportional to the number of ways of configuring the system + reservoir with the system in microstate $k$. Since the total energy is fixed, this number is the same as the number of ways of configuring the reservoir with energy $E_{\rm res} = E_{\rm tot} - E_k$:
$$P_k = P_{\rm res}(E_{\rm res}) = C\times\Omega_{\rm res}(E_{\rm res}) = C\times\Omega_{\rm res}(E_{\rm tot} - E_k) \qquad (5)$$
for some constant $C$. Here $\Omega_{\rm res}(E_{\rm res})$ is the number of microstates of the reservoir with energy $E_{\rm res}$.
Now let us use the fact that $E_k \ll E_{\rm res} \approx E_{\rm tot}$, which comes from our assumption of a heat reservoir. We can then expand the logarithm of the number of reservoir states around $E_k = 0$:
$$\ln\Omega_{\rm res}(E_{\rm tot} - E_k) = \ln\Omega_{\rm res}(E_{\rm tot}) - E_k\left.\frac{\partial\ln\Omega_{\rm res}(E)}{\partial E}\right|_{E = E_{\rm tot}} + \cdots \qquad (6)$$
Next we can use that $\frac{\partial\ln\Omega_{\rm res}(E)}{\partial E} = \beta$ in equilibrium (this requires taking the thermodynamic limit, i.e. large $N$, for the reservoir; we do not have to take large $N$ for the system since we are purposefully avoiding ever using $\Omega_{\rm sys}$), so
$$\ln\Omega_{\rm res}(E_{\rm tot} - E_k) = \ln\Omega_{\rm res}(E_{\rm tot}) - \beta E_k \qquad (7)$$
Exponentiating both sides gives
$$\Omega_{\rm res}(E_{\rm tot} - E_k) = \Omega_{\rm res}(E_{\rm tot})\, e^{-\beta E_k} \qquad (8)$$
Then by Eq. (5) we have
$$P_k = \frac{1}{Z}\, e^{-\beta E_k} \qquad (9)$$
for some constant $Z = \frac{1}{C\,\Omega_{\rm res}(E_{\rm tot})}$. In the canonical ensemble, we will compute $Z$ not using $\Omega_{\rm res}$ but by the shortcut that the probabilities sum to 1, $\sum P_k = 1$.

The formula for $P_k$ we found in Eq. (9) is the Boltzmann distribution. Note how much quicker the derivation of the Boltzmann distribution is in the canonical ensemble than in the microcanonical ensemble. In the microcanonical ensemble, we had to count all the states, take the logarithm, expand at large $N$, express $E$ in terms of $T$, expand for small $\varepsilon$ and simplify. Alternatively, we could use the maximum entropy principle, which still required us to split $N$ particles into $m$ groups, work out the combinatoric factors, take $N$ large, insert Lagrange multipliers, then maximize entropy. In the canonical ensemble, we just hook the system up to a reservoir then "bam!" out pops Boltzmann.

The constant $Z$ is called the partition function. Using $\sum P_k = 1$, we find
$$Z = \sum_{\text{microstates } k} e^{-\beta E_k} = \sum_{\text{energies } i} g_i\, e^{-\beta E_i} \qquad (10)$$
In the first sum, we sum over all microstates $k$; in the second sum, we sum over all possible energies $E_i$ and weight the sum by the degeneracy $g_i$, i.e. the number of microstates with energy $E_i$. If the energies are continuous, we write
$$Z = \int g(E)\, dE\, e^{-\beta E} \qquad (11)$$
where $g(E)$ is called the density of states: $g(E)\,dE$ gives the number of states with energies between $E$ and $E + dE$. The set of energies of a system along with the density of states is called the spectrum of a theory.

The partition function is an amazingly powerful object. If we know it exactly, we can calculate any thermodynamic property of the system. For example,
$$\langle E\rangle = \sum_k E_k P_k = \frac{1}{Z}\sum_k E_k e^{-\beta E_k} = \frac{1}{Z}\left(-\frac{\partial}{\partial\beta}\sum_k e^{-\beta E_k}\right) = -\frac{1}{Z}\frac{\partial Z}{\partial\beta} \qquad (12)$$
So
$$\langle E\rangle = -\frac{\partial\ln Z}{\partial\beta} \qquad (13)$$
Thus, knowing the partition function, we can get the expected value for the energy of the system by simply differentiating.

An important point about the canonical ensemble is that we derived a result about the system only. The partition function is a sum over microstates of the system. $P_k$ is the probability of finding the system in microstate $k$ when it is in equilibrium at a temperature $T$, no matter what it is in contact with. We need it to be in contact with something to exchange energy and keep it at finite temperature, but the details of that something are totally irrelevant (except for its temperature).

Note that we write $\langle E\rangle$ for the expected value of energy, rather than $E$, since $\langle E\rangle$ is calculated rather than fixed from the beginning. The thing we compute, $\langle E\rangle$, is a function of $\beta$, so $\langle E\rangle$ is a derived quantity rather than one we fix from the start, as in the microcanonical ensemble. In a real system hooked to a thermal bath, the total energy $E$ would fluctuate around $\langle E\rangle$. If the system is isolated, then we can simply set $\langle E\rangle(\beta) = E$ and solve for a relation between $E$ and $T$. In fact, this is mostly how we will use the canonical ensemble: to compute equilibrium properties of an isolated system. In such cases, we use $\langle E\rangle$ and $E$ interchangeably. Note that the relation between $E$ and $T$ can be derived from the microcanonical ensemble or from the canonical ensemble. It will be the same relation (as we will check when we can).
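To make Eqs. (9), (10) and (13) concrete, here is a minimal numerical sketch (my own illustration, not part of the lecture) for a system with a handful of discrete energy levels. It builds $Z$ by direct summation, forms the Boltzmann probabilities, and checks that $\langle E\rangle = \sum_k E_k P_k$ agrees with $-\partial\ln Z/\partial\beta$ evaluated by finite difference. The three-level spectrum and the temperature are arbitrary choices.

```python
# Minimal check of Eqs. (9), (10), (13) for a toy three-level system.
# Energies and temperature are arbitrary illustrative values.
import numpy as np

E = np.array([0.0, 1.0, 2.5])      # microstate energies (arbitrary units)
kB = 1.0                           # work in units where kB = 1
T = 1.3
beta = 1.0 / (kB * T)

def lnZ(b):
    return np.log(np.sum(np.exp(-b * E)))    # Eq. (10)

P = np.exp(-beta * E) / np.exp(lnZ(beta))    # Boltzmann probabilities, Eq. (9)
E_avg_direct = np.sum(E * P)                 # <E> = sum_k E_k P_k

# <E> = -d lnZ / d beta, Eq. (13), via a central finite difference
h = 1e-6
E_avg_deriv = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)

print(P, P.sum())                  # probabilities, summing to 1
print(E_avg_direct, E_avg_deriv)   # the two estimates of <E> agree
```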
3 Example 1: monatomic ideal gas

For an ideal monatomic gas with positions $q$ and momenta $p$, the energy depends only on the momenta, $E = \sum_j \frac{\vec p_j^{\,2}}{2m}$. So
$$Z \approx \int \frac{d^{3N}q\, d^{3N}p}{(\Delta q)^{3N}(\Delta p)^{3N}}\exp\!\left[-\beta\sum_j \frac{\vec p_j^{\,2}}{2m}\right] \qquad (14)$$
Here $\Delta p$ and $\Delta q$ are the sizes of the phase-space regions that we consider minimal. Classical mechanics gives no indication of what we should take for $\Delta q$ and $\Delta p$, and no results that we derive will depend on our choices. As mentioned before, in quantum mechanics we know to set $\Delta q\,\Delta p = h$ (see Lecture 10), so let's take this value. Also, recall that for the entropy to be extensive, we have to count any state in which the same positions and momenta are occupied as the same state. Thus we need to divide the integration by $N!$ for identical particles. This gives
$$Z = \frac{1}{N!}\int \frac{d^{3N}q\, d^{3N}p}{h^{3N}}\exp\!\left[-\beta\sum_j \frac{\vec p_j^{\,2}}{2m}\right] \qquad (15)$$
The $q$ integrals trivially give a factor of $V^N$. The $p$ integrals are the product of $3N$ Gaussian integrals. Each one gives
$$\int_{-\infty}^{\infty} dp\, e^{-\beta\frac{p^2}{2m}} = \sqrt{\frac{2\pi m}{\beta}} \qquad (16)$$
So that
$$Z = \frac{1}{N!}\left(\frac{V}{h^3}\right)^N\left(\frac{2\pi m}{\beta}\right)^{\frac{3}{2}N} \qquad (17)$$
Mostly we are interested in this at large $N$, where $N! \to e^{-N}N^N$ gives
$$Z_{\rm monatomic\ gas} = e^N\left(\frac{V}{N h^3}\right)^N\left(\frac{2\pi m}{\beta}\right)^{\frac{3}{2}N} \qquad (18)$$
Once we have $Z$ it is easy to compute the (average) energy:
$$E = \langle E\rangle = -\frac{\partial\ln Z}{\partial\beta} = -\frac{3}{2}N\frac{\partial}{\partial\beta}\ln\!\left(\frac{2\pi m}{\beta}\right) = \frac{3}{2}N k_B T \qquad (19)$$
This is in agreement with the result from the equipartition theorem (the 3 kinetic degrees of freedom each get $\frac{1}{2}k_B T$ of energy per molecule). Note that this analysis of the ideal gas in the canonical ensemble was a much easier way to compute the average energy than in the microcanonical ensemble, where we had to look at the surface area of a $3N$-dimensional sphere.

3.1 Heat capacity

Recall that the heat capacity $C_V$ is the amount of heat required to change the temperature at constant volume: $C_V = \left(\frac{Q}{\Delta T}\right)_V = \left(\frac{\partial E}{\partial T}\right)_V$. Recalling that $\beta = \frac{1}{k_B T}$, we have
$$C_V = \frac{\partial\langle E\rangle}{\partial T} = \frac{\partial\langle E\rangle}{\partial\beta}\frac{\partial\beta}{\partial T} = -\frac{1}{k_B T^2}\frac{\partial}{\partial\beta}\left(-\frac{\partial\ln Z}{\partial\beta}\right) = \frac{1}{k_B T^2}\frac{\partial^2\ln Z}{\partial\beta^2} \qquad (20)$$
This equation lets us compute the heat capacity directly from the partition function. Let's check for the monatomic ideal gas. Using Eq. (18) we find that
$$C_V = \frac{1}{k_B T^2}\frac{\partial^2}{\partial\beta^2}\left(-\frac{3}{2}N\ln\beta\right) = \frac{3}{2}N\frac{1}{\beta^2 k_B T^2} = \frac{3}{2}N k_B \qquad (21)$$
in agreement with our previous results (see footnote 2).

3.2 Entropy

How do we extract the entropy from the partition function? The easiest way is using the Gibbs entropy:
$$S = -k_B\sum_k P_k\ln P_k = -k_B\sum_k \frac{e^{-\beta E_k}}{Z}\ln\frac{e^{-\beta E_k}}{Z} = k_B\sum_k \frac{e^{-\beta E_k}}{Z}\left(\beta E_k + \ln Z\right) \qquad (24)$$
The first term on the right is the average energy times $k_B\beta = \frac{1}{T}$. For the second term we just use $\sum \frac{1}{Z}e^{-\beta E_k} = \sum P_k = 1$. Therefore
$$S = \frac{\langle E\rangle}{T} + k_B\ln Z \qquad (25)$$
For example, using the partition function for the monatomic ideal gas, Eq. (18), we get
$$S = \frac{3}{2}N k_B + k_B\ln\!\left[e^N\left(\frac{V}{N h^3}\right)^N\left(\frac{2\pi m}{\beta}\right)^{\frac{3}{2}N}\right] \qquad (26)$$
which reduces to
$$S = N k_B\left(\ln\frac{V}{N h^3} + \frac{3}{2}\ln[2\pi m k_B T] + \frac{5}{2}\right) \qquad (27)$$
Substituting $T = \frac{2E}{3N k_B}$, this gives back the Sackur-Tetrode equation that we computed with the microcanonical ensemble.

Footnote 2: It's interesting to write the heat-capacity calculation another way. Note that
$$\frac{\partial}{\partial\beta}\left(-\frac{\partial\ln Z}{\partial\beta}\right) = \frac{\partial}{\partial\beta}\left(\frac{1}{Z}\sum E\, e^{-\beta E}\right) = -\frac{1}{Z^2}\frac{\partial Z}{\partial\beta}\sum E\, e^{-\beta E} - \frac{1}{Z}\sum E^2 e^{-\beta E} \qquad (22)$$
Using that $-\frac{1}{Z}\frac{\partial Z}{\partial\beta} = \langle E\rangle$, we see that this is $\langle E\rangle^2 - \langle E^2\rangle$. Thus,
$$C_V = -\frac{1}{k_B T^2}\frac{\partial}{\partial\beta}\left(-\frac{\partial\ln Z}{\partial\beta}\right) = \frac{\langle E^2\rangle - \langle E\rangle^2}{k_B T^2} \qquad (23)$$
In other words, the heat capacity is given by the RMS energy fluctuations. This tells us that how a system changes when heated can be determined from properties of the system in equilibrium (the RMS energy fluctuations). In other words, to measure the heat capacity, we do not ever have to actually heat up the system.
Instead, we can let the system heat up itself through thermal fluctuations away from the mean. This is a special case of a very general and powerful result in statistical physics known as the fluctuation-dissipation theorem. Another example was our computation of how the drag coefficient in a viscous fluid is related to the fluctuations determined by random walks (Brownian motion): if you drag something, the energy dissipates in the same way that statistical fluctuations dissipate.

4 Example 2: vibrational modes

Let's work out the canonical ensemble for another system: the vibrational modes of a diatomic molecule. For a diatomic molecule, motion along the axis of the molecule is governed by a potential $V(x)$. The equilibrium position $x_0$ is where the force vanishes: $F = -V'(x_0) = 0$. Expanding the potential near its minimum (the equilibrium position), $V(x) = V(x_0) + \frac{1}{2}(x - x_0)^2 V''(x_0) + \cdots$, we see that for small deviations from equilibrium the potential is quadratic. Thus for small displacements it is going to be well modelled by a simple harmonic oscillator with spring constant $k = V''(x_0)$. The oscillation frequency is $\omega = \sqrt{k/m}$.

I assume you studied the quantum mechanics of a simple harmonic oscillator in your QM course. The oscillator has Hamiltonian
$$H = \frac{p^2}{2m} + \frac{1}{2}m\omega^2 x^2 \qquad (28)$$
The energy eigenstates are
$$\psi_n(x) = \frac{1}{\sqrt{2^n n!}}\left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\frac{m\omega x^2}{2\hbar}}\, H_n\!\left(\sqrt{\frac{m\omega}{\hbar}}\,x\right) \qquad (29)$$
where $H_n(z) = (-1)^n e^{z^2}\frac{d^n}{dz^n}\big(e^{-z^2}\big)$ are the Hermite polynomials. You can check that
$$\left(-\frac{\hbar^2}{2m}\partial_x^2 + \frac{1}{2}m\omega^2 x^2\right)\psi_n = E_n\,\psi_n \qquad (30)$$
where
$$E_n = \hbar\omega\left(n + \frac{1}{2}\right), \qquad n = 0, 1, 2, \cdots \qquad (31)$$
So a harmonic oscillator at rest has energy $E_0 = \frac{1}{2}\hbar\omega$. Each successive mode has $\hbar\omega$ more energy than the previous mode.

Note that for the simple harmonic oscillator, there is only one degree of freedom, so $N = 1$. If we fix the energy $E$, then we know exactly the state of the system, $\Omega = 1$, and there is no ensemble to work with. Thus the microcanonical ensemble is not much use: it doesn't let us answer any questions we would like to ask. For example, we want to know: what is the typical energy in a vibrational mode at fixed temperature? If we fix the energy ahead of time, we obviously can't answer this question. So let us work in the canonical ensemble and compute the partition function for the system. We need to evaluate
$$Z = \sum_n e^{-\beta E_n} = e^{-\frac{\beta\hbar\omega}{2}}\sum_{n=0}^{\infty} e^{-n\beta\hbar\omega} \qquad (32)$$
To evaluate this sum, we define $x = e^{-\beta\hbar\omega}$ so that $e^{-n\beta\hbar\omega} = x^n$ and $e^{-\frac{\beta\hbar\omega}{2}} = \sqrt{x}$. Then the sum is a geometric series, which is easy to sum:
$$\sum_{n=0}^{\infty} x^n = \frac{1}{1 - x} \qquad (33)$$
So,
$$Z = \sqrt{x}\sum_{n=0}^{\infty} x^n = \frac{\sqrt{x}}{1 - x} = \frac{1}{\frac{1}{\sqrt{x}} - \sqrt{x}} = \frac{1}{e^{\frac{\beta\hbar\omega}{2}} - e^{-\frac{\beta\hbar\omega}{2}}} = \frac{1}{2\sinh\!\left(\frac{\beta\hbar\omega}{2}\right)} \qquad (34)$$
Here, the answer is expressed in terms of the hyperbolic sine function $\sinh(x) = \frac{1}{2}(e^x - e^{-x})$.

With the exact partition function known, we can start computing things. The energy is
$$\langle E\rangle = -\frac{\partial\ln Z}{\partial\beta} = \frac{\hbar\omega}{2}\coth\!\left(\frac{\beta\hbar\omega}{2}\right) = \hbar\omega\left(\frac{1}{e^{\beta\hbar\omega} - 1} + \frac{1}{2}\right) \qquad (35)$$
(Feel free to use Mathematica to take derivatives, or do it yourself by writing the expression in terms of exponentials and using the chain rule.) Comparing to Eq. (31), we see that the average excitation number is
$$\langle n\rangle = \frac{1}{e^{\frac{\hbar\omega}{k_B T}} - 1} \qquad (36)$$
For $k_B T \lesssim \hbar\omega$, $\langle n\rangle \approx 0$: only the ground state is occupied and, from Eq. (35), the energy flatlines at its zero-point value $E_0 = \frac{1}{2}\hbar\omega$. At higher temperatures, $\langle E\rangle$ and $\langle n\rangle$ grow linearly with the temperature. The heat capacity is
$$C_V = \frac{\partial E}{\partial T} = k_B\left(\frac{\hbar\omega}{k_B T}\right)^2\frac{e^{-\frac{\hbar\omega}{k_B T}}}{\left(1 - e^{-\frac{\hbar\omega}{k_B T}}\right)^2} \qquad (37)$$
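As a sanity check on Eqs. (34)-(37), here is a small sketch of my own (with the arbitrary choice $\hbar\omega/k_B = 1000$ K). It compares the closed-form $Z$ with a direct sum over levels, and computes $C_V$ both from a temperature derivative of $\langle E\rangle$ and from the energy fluctuations of Eq. (23).

```python
# Numerical check of the harmonic-oscillator results, Eqs. (34)-(37).
# hbar*omega/kB = 1000 K is an arbitrary illustrative choice.
import numpy as np

hw_over_kB = 1000.0                      # hbar*omega / kB, in kelvin
n = np.arange(2000)                      # enough levels to converge the sums

def stats(T):
    """Return Z, <E>, <E^2> by direct summation over levels (kB = 1, energies in K)."""
    E_n = hw_over_kB * (n + 0.5)         # Eq. (31)
    w = np.exp(-E_n / T)
    Z = w.sum()
    return Z, (E_n * w).sum() / Z, (E_n**2 * w).sum() / Z

T = 600.0
Z, E_avg, E2_avg = stats(T)

# Closed forms, Eqs. (34) and (35):
Z_closed = 1.0 / (2.0 * np.sinh(hw_over_kB / (2 * T)))
E_closed = hw_over_kB * (1.0 / np.expm1(hw_over_kB / T) + 0.5)

# C_V / kB from a finite-difference dE/dT and from fluctuations, Eq. (23):
dT = 0.01
CV_deriv = (stats(T + dT)[1] - stats(T - dT)[1]) / (2 * dT)
CV_fluct = (E2_avg - E_avg**2) / T**2

print(Z, Z_closed)          # direct sum vs 1/(2 sinh(beta hw/2))
print(E_avg, E_closed)      # direct sum vs Eq. (35)
print(CV_deriv, CV_fluct)   # both match Eq. (37)
```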
Note that the heat capacity is very small until the first energy state can be excited; then it grows linearly. For H$_2$, the vibrational mode has $\tilde\nu_{\rm vib} = 4342\ {\rm cm}^{-1}$, corresponding to $T_{\rm vib} = \frac{c h\tilde\nu_{\rm vib}}{k_B} = \frac{\hbar\omega}{k_B} = 6300\,$K. So at low temperatures the vibrational mode cannot be excited, which is why the heat capacity for hydrogen is $C_V = \frac{5}{2}N k_B$ rather than $\frac{7}{2}N k_B$. We discussed this in Lecture 4, but now we have explained it and can make more precise quantitative predictions of how the heat capacity changes with temperature. Including the kinetic contribution, and a factor of $N$ for the $N$ molecules that can be excited in the vibrational mode, we see
$$C_V = \frac{5}{2}N k_B + N k_B\left(\frac{T_{\rm vib}}{T}\,\frac{e^{-\frac{T_{\rm vib}}{2T}}}{1 - e^{-\frac{T_{\rm vib}}{T}}}\right)^2 \qquad (38)$$
This shows how the heat capacity goes up as the vibrational mode starts to be excitable. Note that although the temperature for the vibrational mode is 6300 K, the vibrational mode starts to be excited well below that temperature. In the plot comparing this prediction to the measured heat capacity of hydrogen, the dots are data. We see good agreement! Can you figure out why the heat capacity dies off at low temperature? What do you think explains the small offset of the data from the theory prediction in the plot? We'll eventually produce a calculation in even better agreement with the data, but we need to incorporate quantum indistinguishability to get it right, as we will learn starting in Lecture 10.

5 Gibbs ensemble

In the microcanonical ensemble, we computed the number of states at a given energy, $\Omega(V, N, E)$, and used it to derive the entropy $S(V, N, E) = k_B\ln\Omega(V, N, E)$. In the canonical ensemble, we computed the partition function by summing over Boltzmann factors, $Z(N, V, \beta) = \sum_k e^{-\beta E_k}$. In both cases we have been holding $V$ and $N$ fixed. Now we want to try varying $V$.

First, let's quickly recall why the temperature is the same in any two systems in thermal equilibrium. The quickest way to see this is to recall that entropy is extensive, so a system with energy $E_1 = E$ and another with energy $E_2 = E_{\rm tot} - E$ has total entropy
$$S_{12}(E_{\rm tot}; E) = S_1(E) + S_2(E_{\rm tot} - E) \qquad (39)$$
Then the state with maximum entropy is the one where
$$0 = \frac{\partial S_{12}(E_{\rm tot}; E)}{\partial E} = \left.\frac{\partial S_1(E)}{\partial E}\right|_{E = E_1} - \left.\frac{\partial S_2(E)}{\partial E}\right|_{E = E_2} \qquad (40)$$
where $\left.\frac{\partial S_1(E)}{\partial E}\right|_{E = E_1}$ means: evaluate the partial derivative at $E = E_1$. In the second term we set $E = E_2 = E_{\rm tot} - E$. Thus we see that $\frac{1}{T} = \frac{\partial S}{\partial E}$ is the same in the two systems.

Now let's consider an ensemble that lets $V$ vary. This is sometimes called the Gibbs ensemble. In the Gibbs ensemble you have two systems in equilibrium that can exchange energy and volume. Exchanging volume just means we have a moveable partition in between them, so the total volume is conserved.

Figure 2. An ensemble where volume is allowed to vary.

Now we just apply the same formal argument as in Eqs. (39) and (40): the entropy is the sum of the entropy of the two sides, and the total volume is fixed, $V_1 + V_2 = V_{\rm tot}$. This implies that
$$S_{12}(E_{\rm tot}, V_{\rm tot}; E, V) = S_1(E, V) + S_2(E_{\rm tot} - E, V_{\rm tot} - V) \qquad (41)$$
Maximizing entropy by demanding that both the partial derivative with respect to $E$ and the one with respect to $V$ vanish gives that the temperature $\frac{1}{T} = \frac{\partial S}{\partial E}$ is the same on both sides (from the $E$ derivative) and that
$$0 = \frac{\partial S_{12}(E_{\rm tot}, V_{\rm tot}; E, V)}{\partial V} = \left.\frac{\partial S_1(E, V)}{\partial V}\right|_{V = V_1} - \left.\frac{\partial S_2(E, V)}{\partial V}\right|_{V = V_2} \qquad (42)$$
Thus $\frac{\partial S}{\partial V}$ is another quantity that is the same for any system in equilibrium. It is related to pressure, but is $\frac{\partial S}{\partial V} = P$, or $\frac{1}{P}$, or $\frac{2\pi P}{T}$? We can figure out the $T$ factor by dimensional analysis, and to fix the constant we can check by matching onto the ideal gas law.
To figure out what $\frac{\partial S}{\partial V}$ is, all we have to do is compute it in some sample system, such as a monatomic ideal gas. Using the entropy of a monatomic ideal gas, Eq. (27), we find
$$\left(\frac{\partial S}{\partial V}\right)_E = \frac{\partial}{\partial V}\,N k_B\left(\ln\frac{V}{N} + \frac{3}{2}\ln\!\left(\frac{4\pi m E}{3 N h^2}\right) + \frac{5}{2}\right) = \frac{N k_B}{V} = \frac{P}{T} \qquad (43)$$
We have established that the left-hand side of this equation, $\frac{\partial S}{\partial V}$, is the same for any two systems in equilibrium. We also already know that $\frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{N,V}$ is the same for any two systems in equilibrium. We conclude that the quantity $T\frac{\partial S}{\partial V}$ is the same for any two systems in equilibrium, and give this quantity the name pressure:
$$P \equiv T\left(\frac{\partial S}{\partial V}\right)_E \qquad (44)$$
There is a unique value for pressure among systems in equilibrium. This is of course consistent with the familiar observation that two gases will equilibrate with equal pressure, since a pressure difference would cause the partition to move. But deriving it this way, we never had to talk about gas molecules or forces. Using Eq. (44) you can compute the pressure for a solid or a photon gas or a Bose-Einstein condensate or whatever. Note that Eq. (44) works for $S$ computed any way you like, as $S = k_B\ln\Omega$ in the microcanonical ensemble or $S = \frac{\langle E\rangle}{T} + k_B\ln Z$ in the canonical ensemble. The entropy is the entropy, however you compute it, and the pressure is the pressure.

Now let us consider the total derivative of $S(E, V)$:
$$dS = \left(\frac{\partial S}{\partial E}\right) dE + \left(\frac{\partial S}{\partial V}\right) dV = \frac{1}{T}\,dE + \frac{P}{T}\,dV \qquad (45)$$
or
$$dE = T\,dS - P\,dV \qquad (46)$$
This equation is none other than
$$\Delta E = Q - W \qquad (47)$$
The change in energy is the heat $Q = T\,dS$ absorbed minus the work done by changing volume.

You might still be asking: how do we know that the quantity "$P$" really is "pressure"? Well, we showed that it is for a monatomic ideal gas. And everything in equilibrium has the same "pressure" ($P = \frac{F}{A}$, so if the pressures aren't equal there's a net force, things change, and it's not equilibrium) and the same "$P$". Thus, by the law of syllogism, it must be that "$P$" = "pressure" for any system. It's the same argument as how we know that $T$ is "temperature".

The Gibbs ensemble is usually just considered a variation on the canonical ensemble. You could in principle try to define a partition function for this ensemble by summing $Z_{\rm GE} = \sum e^{-\beta E_k - \beta P V_k}$, but then you'd have to be able to compute the volume $V_k$ for a microstate. I don't know of any examples where this is done. The point of the Gibbs ensemble is that thinking of volume varying between systems gives a nice general way to think about pressure, as conjugate to volume and analogous to temperature. The formula $P = T\left(\frac{\partial S}{\partial V}\right)_E$ holds no matter how $S$ is computed. The Gibbs ensemble will also give us, through analogy, some intuition for what a chemical potential is.
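Before moving on, here is a quick symbolic check of Eqs. (43) and (44). It is a sketch of my own (not part of the notes) that differentiates the Sackur-Tetrode entropy with sympy and recovers both $1/T = 3Nk_B/(2E)$ and the ideal gas law.

```python
# Symbolic check of Eqs. (43)-(44): differentiate the Sackur-Tetrode
# entropy and recover the ideal gas law.  Illustrative sketch only.
import sympy as sp

E, V, N, m, h, kB = sp.symbols("E V N m h k_B", positive=True)

# Sackur-Tetrode entropy, in the form used in Eq. (43)
S = N * kB * (sp.log(V / N)
              + sp.Rational(3, 2) * sp.log(4 * sp.pi * m * E / (3 * N * h**2))
              + sp.Rational(5, 2))

one_over_T = sp.diff(S, E)            # 1/T = dS/dE = 3 N kB / (2 E)
T = 1 / one_over_T
P = sp.simplify(T * sp.diff(S, V))    # Eq. (44): P = T dS/dV

print(sp.simplify(one_over_T))        # 3*N*k_B/(2*E)
print(P)                              # 2*E/(3*V), i.e. N*k_B*T/V since E = (3/2) N kB T
```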
6 Grand canonical ensemble

Now let's consider systems where the number of particles is not fixed. The basic example for this is chemistry: in a chemical reaction, the number of each molecule species is not conserved. For example, when iron rusts the reaction is
$$3\,\mathrm{Fe} + 4\,\mathrm{H_2O} \to \mathrm{Fe_3O_4} + 4\,\mathrm{H_2} \qquad (48)$$
Although the number of each type of atom is conserved, the number of each type of molecule is not. Other examples with particle non-conservation are the photons that come off of a hot object (blackbody radiation) or radioactive decay, such as $n \to p^+ + e^-$. An ensemble where $N$ changes in a system but the total number of particles is conserved among systems is called the grand canonical ensemble.

Just like in the discussion in the previous section about pressure, maximizing the entropy implies that
$$\frac{\partial S_1(E, V, N)}{\partial N} = \frac{\partial S_2(E, V, N)}{\partial N} \qquad (49)$$
for any two systems in equilibrium that can share "$N$". As with pressure, we multiply this derivative by $T$ and give it a name:
$$\mu \equiv -T\left(\frac{\partial S}{\partial N}\right)_{E,V} \qquad (50)$$
This quantity is called the chemical potential. In equilibrium, the chemical potential of any two systems is the same. The minus sign is a convention. It makes the chemical potential negative in most circumstances.

A useful way to think about chemical potential is as a pressure for number density. For example, suppose you have an atom that has two states, a ground state 0 and an excited state 1. In equilibrium, there will be some concentrations $\langle n_0\rangle$ and $\langle n_1\rangle$ of the two states, and the two chemical potentials $\mu_0$ and $\mu_1$ will be equal. Since the excited states have more energy, we expect fewer of them, so $\langle n_1\rangle < \langle n_0\rangle$. Say we add to the system some more atoms in the ground state. This would push more atoms into the excited state to restore equilibrium. This pushing is due to the "number density pressure" of the chemical potential. Adding to $n_0$ pushes up $\mu_0$, so $\mu_0 \neq \mu_1$ anymore; the number densities then change until equilibrium is restored.

While there is only one kind of temperature and pressure, there are lots of chemical potentials: one for every type of conserved $N$. If there are 4 different types of particles involved, there are 4 chemical potentials. The more general formula is
$$\frac{\partial S_1(E, V, N_1, N_2, \cdots)}{\partial N_1} = -\frac{\mu_1}{T}, \qquad \frac{\partial S_1(E, V, N_1, N_2, \cdots)}{\partial N_2} = -\frac{\mu_2}{T}, \qquad \cdots \qquad (51)$$

You should not think of the chemical potential as being connected to the grand canonical ensemble in any essential way. The chemical potential is a property of the system, like pressure or temperature, relevant no matter what statistical system we use to perform the calculation. To see how the chemical potential is embedded in the microcanonical ensemble, recall our microcanonical maximum entropy calculation, where we imposed $\sum n_i = N$ and $\sum n_i\varepsilon_i = E$ as constraints. Then we maximized entropy by maximizing
$$\frac{S}{k_B} = \ln\Omega = -N\sum_{i=1}^{m} f_i\ln f_i - \alpha\left(\sum n_i - N\right) - \beta\left(\sum n_i\varepsilon_i - E\right) \qquad (52)$$
Since $\frac{\partial\ln\Omega}{\partial E} = \beta$, we identified this Lagrange multiplier with the usual $\beta = \frac{1}{k_B T}$. Since $\frac{\partial\ln\Omega}{\partial N} = \alpha$, we can now identify $\mu = -\alpha k_B T$ as the chemical potential. Thus, given $\Omega$ in the microcanonical ensemble, we compute the chemical potential as
$$\mu = -\alpha k_B T = -k_B T\left(\frac{\partial\ln\Omega(E, V, N)}{\partial N}\right) = -T\left(\frac{\partial S}{\partial N}\right)_{E,V} \qquad (53)$$
in agreement with Eq. (50).

As in Eq. (46), we can now consider the total derivative, letting $E$, $V$ and $N$ all vary:
$$dS = \left(\frac{\partial S}{\partial E}\right) dE + \left(\frac{\partial S}{\partial V}\right) dV + \left(\frac{\partial S}{\partial N}\right) dN = \frac{1}{T}\,dE + \frac{P}{T}\,dV - \frac{\mu}{T}\,dN \qquad (54)$$
That is,
$$dE = T\,dS - P\,dV + \mu\,dN \qquad (55)$$
This implies that
$$\left(\frac{\partial E}{\partial N}\right)_{S,V} = \mu \qquad (56)$$
So the chemical potential represents the change in energy when a particle is added at constant $V$ and $S$. This is almost intuitive. Unfortunately, the constant-$S$ constraint makes Eq. (56) hard to interpret. Don't worry though, we'll come up with better ways to understand $\mu$ in Section 7.

6.1 Grand partition function

As in Section 2, let us now hook a small system up to a reservoir to derive the Boltzmann factor. This time the reservoir should have large energy and large particle number, and both energy and particle number can flow between the system and reservoir. As before, think about picking one microstate $k$ of the system with energy $E_k$ and $N_k$ particles. Once $E_k$ and $N_k$ are fixed, the total number of microstates is determined only by the states in the reservoir.
Eq. (7) becomes
$$\ln\Omega_{\rm res}(E_{\rm tot} - E_k,\, N_{\rm tot} - N_k) = \ln\Omega_{\rm res}(E_{\rm tot}, N_{\rm tot}) - \beta E_k + \beta\mu N_k \qquad (57)$$
where Eq. (53) was used. This leads to a Boltzmann factor
$$P_k = \frac{1}{\mathcal{Z}}\, e^{-\beta E_k + \beta\mu N_k} \qquad (58)$$
where
$$\mathcal{Z}(V, \beta, \mu) = \sum_k e^{-\beta E_k + \beta\mu N_k} \qquad (59)$$
is called the grand partition function. The grand partition function lets us calculate the expected number of particles:
$$\langle N\rangle = \sum_k N_k P_k = \frac{1}{\mathcal{Z}}\sum_k N_k e^{-\beta E_k + \beta\mu N_k} = \frac{1}{\beta}\frac{1}{\mathcal{Z}}\frac{\partial\mathcal{Z}}{\partial\mu} = \frac{1}{\beta}\frac{\partial\ln\mathcal{Z}}{\partial\mu} \qquad (60)$$
We can also calculate the usual things the partition function lets us calculate, such as the average energy:
$$\langle E\rangle = \sum_k E_k P_k = \frac{1}{\mathcal{Z}}\sum_k E_k e^{-\beta E_k + \beta\mu N_k} = -\frac{1}{\mathcal{Z}}\frac{\partial\mathcal{Z}}{\partial\beta} + \frac{1}{\mathcal{Z}}\sum_k \mu N_k e^{-\beta E_k + \beta\mu N_k} \qquad (61)$$
$$= -\frac{\partial\ln\mathcal{Z}}{\partial\beta} + \mu\langle N\rangle \qquad (62)$$
Particle number and chemical potential are conjugate, like pressure and volume. If you know $N$ for a system then you can calculate $\mu$ from $\frac{\partial E}{\partial N}$. This is like how, if you know the energy for a system, you can calculate the temperature from $\frac{1}{T} = \frac{\partial S}{\partial E}$. If you know the chemical potential instead of $N$, then you can compute the average number from $\langle N\rangle = \frac{1}{\beta}\frac{\partial\ln\mathcal{Z}}{\partial\mu}$. This is like how, if you know the temperature and not the energy, you can compute the average energy from $\langle E\rangle = -\frac{\partial\ln Z}{\partial\beta}$.

Finally, let's compute the entropy, in analogy to Eq. (25). We start with Eq. (24), which goes through to the grand canonical ensemble with $Z \to \mathcal{Z}$ and $E \to (E - \mu N)$:
$$S = k_B\sum_k \frac{e^{-\beta E_k + \beta\mu N_k}}{\mathcal{Z}}\left[\beta(E_k - \mu N_k) + \ln\mathcal{Z}\right] \qquad (63)$$
$$= \frac{\langle E\rangle}{T} - \frac{\mu\langle N\rangle}{T} + k_B\ln\mathcal{Z} \qquad (64)$$
Thus,
$$-k_B T\ln\mathcal{Z} = \langle E\rangle - TS - \mu\langle N\rangle \qquad (65)$$
This will be a useful relation.

7 Chemical potential

To get a feel for $\mu$, let's do some examples. We'll work in the microcanonical ensemble for now, since $\mu$ is independent of the way we calculate it, and trying to understand $\mathcal{Z}$ and $\mu$ at the same time is unnecessarily challenging. We'll come back to $\mathcal{Z}$ in the next lecture and use it a lot in quantum statistical mechanics.

7.1 Ideal gas

For a monatomic gas, using the Sackur-Tetrode equation, we find
$$\mu = -T\left(\frac{\partial S}{\partial N}\right)_{E,V} = -T\frac{\partial}{\partial N}\left[N k_B\left(\ln\frac{V}{N} + \frac{3}{2}\ln\!\left(\frac{4\pi m E}{3 N h^2}\right) + \frac{5}{2}\right)\right] \qquad (66)$$
$$= -k_B T\left[\ln\frac{V}{N} + \frac{3}{2}\ln\!\left(\frac{4\pi m E}{3 N h^2}\right)\right] \qquad (67)$$
Note that the $\frac{5}{2}$ has dropped out. Using $E = \frac{3}{2}N k_B T$ for this gas, we can write this relation in an abbreviated form
$$\mu = k_B T\ln\!\left[n\left(\frac{h^2}{2\pi m k_B T}\right)^{3/2}\right] = k_B T\ln\big(n\lambda^3\big) \qquad (68)$$
where $n = \frac{N}{V}$ is the number density and
$$\lambda = \frac{h}{\sqrt{2\pi m k_B T}} = \sqrt{\frac{2\pi\hbar^2}{m k_B T}} \qquad (69)$$
is called the thermal de Broglie wavelength. Recall that the de Broglie wavelength is $\lambda = \frac{h}{p}$, with $p$ the momentum. The thermal de Broglie wavelength is therefore the de Broglie wavelength of a particle with momentum $p = \sqrt{2\pi m k_B T} = \sqrt{\frac{2\pi}{3}}\,p_{\rm rms}$, with $p_{\rm rms}$ the RMS momentum of a gas at temperature $T$. Thus the thermal wavelength is a measure of the length scale at which quantum effects become important in a gas at temperature $T$.

To get a feel for typical numbers, consider air at room temperature. The molar mass of air is 29.0 g/mol and its density is $\rho = 1.27\ {\rm kg/m^3}$. So $\lambda = \frac{h}{\sqrt{2\pi m_{\rm N_2} k_B T}} = 1.87\times10^{-11}\,$m, while $n = (3.35\times10^{-9}\,{\rm m})^{-3}$. In other words, the typical distance between molecules in air is $d = 3.3\,$nm and the thermal de Broglie wavelength is much smaller, $\lambda = 0.02\,$nm. So in air $n\lambda^3 = 1.7\times10^{-7} \ll 1$. This means that $\mu = k_B T\ln(n\lambda^3) = -0.39\,$eV. So we see that the chemical potential of air is negative. This is typical of gases. For the chemical potential to be positive, we would need $n\lambda^3 \approx 1$, which means the de Broglie wavelength is of the same order as the intermolecular spacing. In such situations, quantum mechanics is relevant and we must use quantum statistical mechanics (see Lecture 10).

Solving Eq. (68) for $n$ gives
$$n = \frac{1}{\lambda^3}\exp\!\left(\frac{\mu}{k_B T}\right) \qquad (70)$$
This says that the number density is related exponentially to the chemical potential. Thus if we double the number of particles, $n \to 2n$, the chemical potential goes up by $\mu \to \mu + k_B T\ln 2$.
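The air numbers quoted above are easy to reproduce. Here is a short sketch of my own check, using standard constants and the same inputs as the text (molar mass 29.0 g/mol, $\rho = 1.27\ {\rm kg/m^3}$, $T \approx 293$ K).

```python
# Reproduce the air numbers quoted after Eq. (69):
# lambda ~ 0.02 nm, n^(-1/3) ~ 3.3 nm, n*lambda^3 ~ 2e-7, mu ~ -0.4 eV.
# Inputs follow the text: molar mass 29.0 g/mol, rho = 1.27 kg/m^3, T ~ 293 K.
import numpy as np

h  = 6.626e-34      # Planck constant, J s
kB = 1.381e-23      # Boltzmann constant, J/K
NA = 6.022e23       # Avogadro's number, 1/mol
eV = 1.602e-19      # J per eV

T   = 293.0                     # room temperature, K
m   = 29.0e-3 / NA              # mass of an "air" molecule, kg
rho = 1.27                      # mass density, kg/m^3

lam = h / np.sqrt(2 * np.pi * m * kB * T)    # thermal de Broglie wavelength, Eq. (69)
n   = rho / m                                # number density, 1/m^3
mu  = kB * T * np.log(n * lam**3)            # chemical potential, Eq. (68)

print(lam, n**(-1/3))       # ~1.9e-11 m and ~3.4e-9 m
print(n * lam**3)           # ~1.7e-7  (classical regime, n*lambda^3 << 1)
print(mu / eV)              # ~ -0.39 eV
```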
As the system gets denser and denser, the chemical potential rises towards 0.

Returning to the analogy between chemical potential and pressure, suppose you have a system with a concentration gradient. Then the part with more particles will be at higher chemical potential and the part with fewer particles at lower chemical potential, according to Eq. (70). This is why it is called a potential – it is like potential energy, but for particle number.

Figure 3. Chemical potential is higher in more dense regions. It is like potential energy for particle number. Particles move from high to low $\mu$, until the $\mu$'s are all equal.

7.2 Ground-state energy

Next, suppose we give an overall shift to the energy of our gas. For example, say some chemical reaction occurred and energy was released or absorbed. Or maybe we are interested in different configurations of a molecule that have different potential energy, or maybe we care about molecules in an electronic excited state, as if the state were filled with a laser, or if the rest mass is important. With an energy offset, there is a contribution to the total energy of the form $E_{\rm offset} = N\varepsilon$, with $\varepsilon$ the ground-state energy. The functional form of the entropy is the same as without the offset, but the energy it depends on is now the remaining energy available for kinetic excitations: $E_{\rm kin} = E - N\varepsilon$. Thus, in the microcanonical ensemble we have simply
$$S = N k_B\left(\ln\frac{V}{N} + \frac{3}{2}\ln\!\left(\frac{4\pi m (E - N\varepsilon)}{3 N h^2}\right) + \frac{5}{2}\right) \qquad (71)$$
One can also derive this from the canonical ensemble. Say the partition function without the energy shift is $Z_0$. Then with the energy offset, $Z = Z_0\, e^{-\beta N\varepsilon}$. This gives $\langle E\rangle = -\frac{\partial\ln Z}{\partial\beta} = \langle E_0\rangle + N\varepsilon$. Then
$$S = \frac{\langle E\rangle}{T} + k_B\ln Z = \frac{\langle E_0\rangle}{T} + \frac{N\varepsilon}{T} + k_B\ln Z_0 - k_B\beta N\varepsilon = \frac{\langle E_0\rangle}{T} + k_B\ln Z_0 = S_0(E_0) = S_0(E - N\varepsilon) \qquad (72)$$
Thus, using the canonical ensemble we find that the entropy $S$ with the shift has the same functional form as $S_0$ without the shift; it is only the energy at which we evaluate $S$ that changes. This is in agreement with Eq. (71). From the entropy, we can compute the chemical potential
$$\mu = -T\left(\frac{\partial S}{\partial N}\right)_{E,V} = k_B T\ln\big(n\lambda^3\big) + \varepsilon \qquad (73)$$
with $\lambda$ as in Eq. (69), so that for an ideal gas with ground-state energy $\varepsilon$
$$n = \frac{1}{\lambda^3}\exp\!\left(\frac{\mu - \varepsilon}{k_B T}\right) \qquad (74)$$
Thus, when the energies shift, the chemical potential can shift to compensate. Differences in chemical potential are independent of an overall energy shift. This is consistent with our interpretation of chemical potential as a potential: it is like a potential energy, and shifts with energy as potential energy does.

With the energy offset, we can refine our observation about the chemical potential being negative for an ideal gas. Now we see that a more precise statement is that the chemical potential is less than the ground-state energy $\varepsilon$ for a classical gas. In general, the chemical potential gets two contributions: one from the density and one from the energy. The density contribution is of entropic origin and depends on how many molecules are in the system. The energetic contribution is due to the internal structure of the molecule and is independent of whatever else is going on. Equilibrium, where chemical potentials are equal, comes from a balance between these two contributions. This will be clearer with some examples.

7.3 Chemical reactions

Chemical potentials are useful in situations where particles turn into other types of particles.
When there is more than one type of particle in the system (as there typically is when we consider problems involving chemical potential), we need a different $\mu$ for each particle type. So Eq. (55) becomes
$$dE = T\,dS - P\,dV + \sum_j \mu_j\, dN_j \qquad (75)$$
As a concrete example, consider the Haber process for the production of ammonia:
$$3\,\mathrm{H_2} + \mathrm{N_2} \rightleftharpoons 2\,\mathrm{NH_3} \qquad (76)$$
Note that the number of each individual molecule is not conserved, but because the number of hydrogen atoms and nitrogen atoms is conserved, the relative coefficients (3, 1 and 2) in Eq. (76) are fixed. In chemistry, the concentrations, or molar number densities, of molecule $j$ are denoted $[j] = \frac{n_j}{N_A}$, with $n_j = \frac{N_j}{V}$ and $N_A = 6\times10^{23}\ {\rm mol^{-1}}$ Avogadro's number. In equilibrium, there will be some relationship among the concentrations $[\mathrm{H_2}]$ of hydrogen, $[\mathrm{N_2}]$ of nitrogen and $[\mathrm{NH_3}]$ of ammonia that we can compute using chemical potentials.

As the concentrations change, at fixed volume and fixed total energy, the entropy changes as
$$dS = \frac{\partial S}{\partial[\mathrm{H_2}]}\,d[\mathrm{H_2}] + \frac{\partial S}{\partial[\mathrm{N_2}]}\,d[\mathrm{N_2}] + \frac{\partial S}{\partial[\mathrm{NH_3}]}\,d[\mathrm{NH_3}] \qquad (77)$$
The changes in concentrations are related by Eq. (76): for every mole of N$_2$ consumed, exactly three moles of H$_2$ are consumed and 2 moles of NH$_3$ are produced. Thus $d[\mathrm{H_2}] = 3\,d[\mathrm{N_2}] = -\frac{3}{2}\,d[\mathrm{NH_3}]$. So, using Eq. (75) with $dV = dE = 0$, or equivalently Eq. (50), we have
$$0 = 3\mu_{\mathrm{H_2}} + \mu_{\mathrm{N_2}} - 2\mu_{\mathrm{NH_3}} \qquad (78)$$
This constraint among the chemical potentials is a generalization of $\mu_1 = \mu_2$ in equilibrium for two systems that can exchange particles. Here there are 3 systems that can exchange particles.

Now, from Eq. (74) we know how to relate the number of particles to the chemical potential for a monatomic ideal gas:
$$[X] = \frac{1}{\lambda_X^3}\exp\!\left(-\frac{\varepsilon_X - \mu_X}{k_B T}\right) \qquad (79)$$
where $\varepsilon_X$ is the ground-state energy of molecule $X$. To get the $\mu$'s to drop out, we can take the ratio of concentrations raised to the appropriate powers:
$$\frac{[\mathrm{H_2}]^3[\mathrm{N_2}]}{[\mathrm{NH_3}]^2} \approx \frac{\lambda_{\mathrm{NH_3}}^6}{\lambda_{\mathrm{H_2}}^9\,\lambda_{\mathrm{N_2}}^3}\times\exp\!\left(-\frac{3\varepsilon_{\mathrm{H_2}} + \varepsilon_{\mathrm{N_2}} - 2\varepsilon_{\mathrm{NH_3}}}{k_B T}\right)\underbrace{\exp\!\left(-\frac{3\mu_{\mathrm{H_2}} + \mu_{\mathrm{N_2}} - 2\mu_{\mathrm{NH_3}}}{k_B T}\right)}_{=\,1} \qquad (80)$$
The second exponential is just 1 because of Eq. (78), which is why we chose the powers of $[\mathrm{H_2}]$ and $[\mathrm{NH_3}]$ that we did on the left-hand side. The $\approx$ means that we are approximating everything as monatomic ideal gases (not a great approximation, but it's a start). The sum of energies is just the net energy change in the reaction, $\Delta\varepsilon$. For the Haber process, which is exothermic, $\Delta\varepsilon = 92.4\ {\rm kJ/mol}$. So
$$\frac{[\mathrm{H_2}]^3[\mathrm{N_2}]}{[\mathrm{NH_3}]^2} \approx \frac{\lambda_{\mathrm{NH_3}}^6}{\lambda_{\mathrm{H_2}}^9\,\lambda_{\mathrm{N_2}}^3}\exp\!\left(-\frac{\Delta\varepsilon}{k_B T}\right) \qquad \text{(assuming monatomic gases)} \qquad (81)$$
This is a special case (for monatomic ideal gases) of the law of mass action.
It says that the relative concentrations of reacting molecules in equilibrium are determined by the Boltzmann factor depending on the change in energy associated with the reaction. This formula arises from a balance between entropic contributions to the chemical potentials on both sides (through their number densities) and energetic contributions (in the exponential factor). We have written explicitly the reminder that this formula assumes that the reactants and products are monatomic gases. This is not a bad assumption in some cases. More generally though, for chemicals reacting, we will need to add corrections to the right-hand side. These corrections will be included in the next lecture, where the law of mass action is derived in full.

7.4 Example: matter-antimatter asymmetry

For another example, consider the process of proton-antiproton annihilation. Antiprotons $p^-$ are the antiparticles of protons. They have the same mass as protons but opposite electric charge. Protons and antiprotons can annihilate into photons:
$$p^+ + p^- \rightleftharpoons \gamma + \gamma \qquad (82)$$
The reverse reaction is photons converting into proton-antiproton pairs. These annihilations and conversions happen constantly when the temperature is well above the threshold energy for pair production,
$$k_B T \gg \varepsilon = 2 m_p c^2 = 2\ {\rm GeV} \qquad (83)$$
We don't care so much about the details of why or how this process occurs, just that it does occur. This threshold temperature is around $2\times10^{13}\,$K. So in most systems of physical interest (stars, your body, etc.) this doesn't happen. It did happen, however, in the early universe, until 0.01 seconds after the big bang.

Note that while the above reaction conserves the number of protons minus the number of antiprotons, it does not conserve the number of photons. Indeed, other reactions can easily change photon number, such as
$$\gamma + e^- \rightleftharpoons e^- + \gamma + \gamma \qquad (84)$$
A more down-to-earth example is the light in your room – photons are constantly being produced, not absorbed. Eq. (84) implies that
$$\mu_\gamma + \mu_{e^-} = \mu_{e^-} + 2\mu_\gamma \qquad (85)$$
In other words,
$$\mu_\gamma = 0 \qquad (86)$$
This is a general property of particles that are not associated with any conserved quantity: their chemical potential vanishes. Then the reaction in Eq. (82) gives
$$\mu_{p^+} + \mu_{p^-} = 0 \qquad (87)$$
The energy change in the reaction $\gamma + \gamma \to p^+ + p^-$ is $\Delta\varepsilon = 2 m_p c^2$. Thus, as in Eq. (81), using $T = 3\,$K (the temperature of outer space), and treating protons and antiprotons as monatomic ideal gases (an excellent approximation in fact):
$$[p^-][p^+] = \frac{1}{\lambda^6}\, e^{-\frac{2 m_p c^2}{k_B T}} = \frac{(2\pi m_p k_B T)^3}{h^6}\, e^{-\frac{2 m_p c^2}{k_B T}} \qquad (88)$$
At 3 K the thermal wavelength is $\lambda = \frac{h}{\sqrt{2\pi m_p k_B (3\,{\rm K})}} \approx 1\,$nm. Thus,
$$[p^-][p^+] = \left(4\times10^{-1683311945335}\ \frac{1}{\rm m^3}\right)^2 \qquad (89)$$
And we can conclude that either $[p^-] \approx 0$ or $[p^+] \approx 0$ (or both can be small). This is consistent with the observational fact that we don't see both protons and antiprotons around – at most one type can survive.

Suppose that all the protons and antiprotons came from processes like $\gamma + \gamma \to p^+ + p^-$ that produce or remove the same number of protons and antiprotons. This would make the proton and antiproton concentrations equal (see footnote 3), and their chemical potentials equal too (and hence $\mu_{p^+} = \mu_{p^-} = 0$ by Eq. (87)). Setting the concentrations equal, we then get
$$[p^-] = [p^+] = \frac{1}{\lambda^3}\, e^{-\frac{\Delta\varepsilon}{2 k_B T}} = 4\times10^{-1683311945335}\ \frac{1}{\rm m^3} \approx 0 \qquad (90)$$
So this first-pass calculation says there shouldn't be any protons or antiprotons around at all! To refine our calculation, it's important to note that we used equilibrium physics, but since the universe is expanding, equilibrium is not always a good approximation.
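The Boltzmann suppression in Eq. (90) is far below floating-point underflow, so a direct exp() returns zero; working with base-10 logarithms recovers the absurd scale. The sketch below is my own check; the exact exponent depends slightly on the temperature and constants used, so it reproduces the order of magnitude rather than the precise digits quoted above.

```python
# Order-of-magnitude check of Eq. (90): [p] = lambda^-3 * exp(-m_p c^2 / kB T)
# at T = 3 K.  exp() underflows, so work with log10 throughout.
# The exact exponent depends on the constants chosen; only the scale matters.
import numpy as np

h   = 6.626e-34        # J s
kB  = 1.381e-23        # J/K
m_p = 1.673e-27        # proton mass, kg
c   = 2.998e8          # m/s
T   = 3.0              # K

lam = h / np.sqrt(2 * np.pi * m_p * kB * T)      # thermal wavelength ~ 1 nm
log10_n = -3 * np.log10(lam) - (m_p * c**2) / (kB * T) / np.log(10)

print(lam)        # ~1e-9 m
print(log10_n)    # ~ -1.6e12, i.e. [p+] = [p-] ~ 10^(-1.6e12) per m^3
```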
At some point as the universe expands and cools, the protons and antiprotons become so dilute that they cannot find each other to annihilate. This is called "freeze-out". The freeze-out temperature is determined by when the rate for $p^+ p^- \to \gamma\gamma$ equals the expansion rate. The rate for $p^+ p^- \to \gamma\gamma$ is determined by the proton cross section $\sigma \sim \frac{1}{m_p^2}$, the number density $[p^+]$, and the velocity, which we can take to be given by the Maxwell-Boltzmann average $\langle\frac{1}{2}m_p\vec v^{\,2}\rangle = \frac{3}{2}k_B T$. So the annihilation rate (events per unit time) is
$$\Gamma_{\rm annihilate} = n\sigma v = \frac{(2\pi m_p k_B T)^{3/2}}{h^3}\, e^{-\frac{m_p c^2}{k_B T}}\cdot\frac{1}{m_p^2}\cdot\sqrt{\frac{3 k_B T}{m_p}} \qquad (91)$$
The expansion rate requires some general relativity. The result is
$$\Gamma_{\rm expansion} = \frac{k_B^2 T^2}{M_{\rm Pl}} \qquad (92)$$
where $M_{\rm Pl} = G_N^{-1/2} = 10^{19}$ GeV is the Planck mass. Setting these equal results in a freeze-out temperature
$$T_f = 2.4\times10^{11}\ {\rm K} \qquad (93)$$
At this time
$$[p^+] = [p^-] \approx \frac{(2\pi m_p k_B T_f)^{3/2}}{h^3}\, e^{-\frac{m_p c^2}{k_B T_f}} = 10^{23}\ \frac{1}{\rm m^3} \qquad (94)$$

Footnote 3: Actually, in any unitary quantum field theory the equilibrium concentrations of particles and antiparticles must be the same. This follows from an unbreakable symmetry known as CPT invariance, which combines switching particles and antiparticles (C), flipping the particles' spins (P), and time-reversal invariance (T).

As the universe continues to expand from $T_f$ down to 3 K, its size scales with temperature, so
$$[p^+] = [p^-] \approx 10^{23}\ \frac{1}{\rm m^3}\left(\frac{3\,{\rm K}}{T_f}\right)^3 = 1.68\times10^{-10}\ \frac{1}{\rm m^3} \qquad (95)$$
This is the honest-to-goodness prediction of cosmology for the density of protons left over from the big bang (i.e. close to the best estimate physicists can make). Eq. (95) is much more reasonable than the equilibrium prediction in Eq. (90), but still in stark disagreement with data: the average number density of protons in the universe is $[p^+] = 0.26\ {\rm m^{-3}}$. This is a problem. In fact, this is one of the great unsolved problems in physics, called the mystery of baryogenesis or the matter-antimatter asymmetry.

One possible solution is to set the initial conditions so that $[p^+] \neq [p^-]$ to start with. Once these are set, if all the processes are symmetric in $p^+$ and $p^-$, then $[p^+] \neq [p^-]$ will persist. Note, however, that the universe is currently $10^{26}\,$m wide, and growing. There are $10^{80}$ more protons than antiprotons in the observable universe today. So it would be a little strange to set this enormous asymmetry at the big bang. When the universe was only $10^{-35}\,$m across, this would correspond to a shocking number density of $10^{185}\ {\rm m^{-3}}$. Moreover, the current cosmological model involves inflation, which produces exponential growth at early times, so whatever initial asymmetry we set would be completely washed away when inflation ends. In other words, it's possible, but would be very unsettling, to solve the baryogenesis problem by tuning the initial conditions.

Another option is to start off symmetric but have processes that are not symmetric between particles and antiparticles. It turns out that in the Standard Model of particle physics, there are none: for every way of producing an electron or proton, there is also a way of producing a positron or antiproton with exactly the same rate. In fact, this equality is guaranteed by symmetries (lepton number and baryon number). Moreover, if you made a modification so that the symmetries were violated, then effectively protons could turn into antiprotons. Thus, since protons and antiprotons have the same mass (and value of $\varepsilon$), their chemical potentials would push them towards the same concentrations, which by Eq. (90) is zero.
The story is again a little more complicated, since there is inflation, and reheating, and the expansion of the universe is not quite quasi-static, and there is actually a super-tiny violation of the symmetry between protons and antiprotons within the Standard Model. But even when you include all these things, it doesn't work: you still get no matter out once the universe cools. So we are stuck. Why is there so much matter in the universe? Why is there more matter than antimatter? Nobody knows.

8 Partition function and the spectrum (optional)

Some of you may find it illuminating to think about the partition function in a big-picture, more abstract sense (as if it's not abstract enough already!). The following discussion is just included because some students may find it illuminating. It is not required reading for the course.

The partition function is computed by summing over energies. As you probably know, the energies of a system contain a tremendous amount of information. Indeed, in classical mechanics, the energy at a point in phase space is given by the Hamiltonian function $H(\vec q_i, \vec p_i, t)$. If you know $H$, every possible behavior of the system can be determined by solving Hamilton's equations of motion. In quantum mechanics, the same is true: if you know the Hamiltonian operator $\hat H(\hat{\vec q}_i, \hat{\vec p}_i)$, you can determine the time evolution of the system completely through the Schrödinger equation. Eigenvalues of the Hamiltonian are the energies of the system.

For example, consider the Hamiltonian for interactions among water molecules. We can approximate the Hamiltonian as depending on the distances $R_{ij} = |\vec q_i - \vec q_j|$ between the centers of mass of the molecules. We should find that if two molecules are far away, the energy becomes independent of $R_{ij}$. If we try to put the molecules on top of each other, it should be impossible, so the energy should blow up. Because of hydrogen bonding, we expect a weak attractive force at intermediate distances, with a shallow potential minimum. That is, we expect something roughly like:

Figure 4. (left) The potential energy between two water molecules is close to a Lennard-Jones potential. (right) The corresponding density of states has singularities at the bound-state energy and at zero energy, when the molecules are far apart.

This is called the Lennard-Jones potential. A pairwise potential model like this can explain many of the properties of water – surface tension, boiling, freezing, heat capacity, etc. For example, the force on water molecule $i$ is given by $\vec F = -\frac{\partial}{\partial\vec q_i}H(\vec q_i, \vec p_i)$. More sophisticated classical models, including 3-body interactions and so on, can explain even more emergent behavior of water. If you know the quantum Hamiltonian $\hat H$ exactly, you would be able to determine everything about water exactly (in principle).

If you really want to determine the complete time evolution of a system, you need the full functional form of the classical Hamiltonian, or, in a quantum system, the energy eigenvalues $\varepsilon_i$ and eigenvectors $\psi_i(x)$ of the quantum Hamiltonian. However, you can get a long way toward understanding the behavior of a system just by knowing the spectrum. The density of states for the Lennard-Jones potential is shown on the right of Fig. 4. Its singularities indicate the bound state at $E = -1.0$ and the continuum at $E = 0$.
The only thing you can't get from the density of states is the bound-state distance $r_0$, since there is no information about position in the density of states (of course, in this case, you could reconstruct the distance from the energy by dimensional analysis). So the spectrum itself has almost all of the information we care about. In quantum mechanics, the spectrum is the set of eigenvalues. Knowing only the spectrum, we don't have the information contained in the eigenvectors. Most of the time we don't actually care about the eigenvectors. One way to see this is that the eigenvectors are just projections from changing basis, $\psi_i(x) = \langle\varepsilon_i|x\rangle$. In an energy basis, the Hamiltonian is diagonal. Thus if we are interested in basis-independent properties of a system (as we almost always are), the spectrum is sufficient.

The point of the above discussion is to motivate why the spectrum of a system is extremely powerful, and contains almost all the physical information we would care to extract about a system. Now observe that the partition function carries the same information as the spectrum, just represented differently. In fact, the partition function is just the Laplace transform of the spectrum. A Laplace transform is a way of constructing a function $F(\beta)$ from a function $f(E)$ by integrating over $E$:
$$Z(\beta) = \mathcal{L}[f(E)] = \int_0^{\infty} dE\, e^{-\beta E} f(E) \qquad (96)$$
So the partition function is the Laplace transform of the spectrum, $Z = \mathcal{L}[g(E)]$, with $g(E)$ the density of states of the energies $E(q_i, p_i)$. Note that a Laplace transform is just a real version of a Fourier transform (take $\beta \to i\beta$). Thus working with the partition function instead of the spectrum is like working in Fourier space, with $\beta$ representing the frequency. The average over configurations is like the average over time taken to produce a frequency spectrum. Equilibrium corresponds to a particular value of $\beta$, which is analogous to a particular frequency component dominating, like the resonance on a flute. The shape in Fourier space of the frequency spectrum around a resonance gives the timbre of a note, explaining why a C on a flute sounds different from a C on a trumpet. Thus, in a way, the partition function, through its derivatives at a fixed $\beta$, gives the timbre of a physical system.
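To make Eq. (96) concrete, here is a small sketch of my own that numerically Laplace-transforms a toy power-law density of states, $g(E) = \sqrt{E}$, and compares the result with the closed form $\int_0^\infty \sqrt{E}\,e^{-\beta E}dE = \sqrt{\pi}/(2\beta^{3/2})$, the same per-particle $\beta^{-3/2}$ behavior that appeared in Eq. (17).

```python
# Numerical illustration of Eq. (96): the partition function as the
# Laplace transform of a density of states.  Here g(E) = sqrt(E), whose
# transform is Gamma(3/2)/beta^(3/2) = sqrt(pi)/(2 beta^(3/2)).
from scipy.integrate import quad
import numpy as np

g = lambda E: np.sqrt(E)                 # toy density of states

def Z(beta):
    val, _ = quad(lambda E: g(E) * np.exp(-beta * E), 0, np.inf)
    return val

for beta in (0.5, 1.0, 2.0):
    print(beta, Z(beta), np.sqrt(np.pi) / (2 * beta**1.5))
```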
Conway's Game of Life

The rules:
1. Start with a square grid of "cells", each of which is "alive" or "dead".
2. At each step, we count the number of each cell's neighbours that are alive, and
   1. any live cell with fewer than 2 live neighbours dies
   2. any live cell with more than 3 live neighbours dies
   3. any live cell with 2 or 3 live neighbours lives on
   4. any dead cell with exactly 3 live neighbours becomes alive, and otherwise stays dead

We start with some initial pattern, and then watch the system evolve. The interesting thing is how much variety such simple rules can create. For instance, some patterns are "still lifes" that don't change once formed: each of the live cells has 2 or 3 live neighbours, and no dead cell has three live neighbours. Others are "oscillators" – they go through a series of steps, get back to the starting pattern, and then repeat.

We can't (easily) see them oscillate on paper, so we will use some software to simulate Life. There are various choices. Today we are using Golly; another good one is Winlife32. Try out the examples above to see that they work. Then play around and create some of your own patterns. Some patterns die out, some settle down to oscillate, and some evolve forever. Some of the more remarkable patterns are the "gliders" and "spaceships" that move along. Try these two:

There are many amazing patterns that can be created, e.g.,
1. the F-pentomino – a simple pattern that creates great complexity;
2. glider guns – a pattern that creates gliders repeatedly;
3. logic gates, for instance to simulate binary logic such as that used in computers;
4. self-replication – much as in real "life", where organisms reproduce.

It's now known that the Game of Life can be used to replicate all the functions of a general computing device called a Turing machine, although it might not be very efficient. So we can do anything with this simple game that you can do on a computer, if you are clever enough. Perhaps the most amazing patterns to date are ones that print out the value of π or the golden ratio φ.

Why study "Life"? Apart from being fun, the Game of Life is a special case of a cellular automaton. Cellular automata illustrate a basic principle: complex, large-scale behaviour (e.g., life) can arise from simple local rules. The behaviour isn't random, but it isn't easy to predict either. These automata have been used to study
1. crystal formation
2. pattern formation, for instance on sea shells
3. insect colonies
4. economic systems
and anything else that can be modelled as a large number of simple "agents" that interact to create complex results. People have even suggested that the fundamental physics of the Universe might be based on a cellular automaton. There are many variations, and a huge literature on the Game of Life and its relatives. More information can be found in many places.
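The update rule above is easy to program. Here is a minimal sketch (my own, assuming a numpy/scipy environment and making the arbitrary choice of wrap-around, i.e. toroidal, edges) that counts live neighbours with a convolution and applies the birth/survival rules from the list above.

```python
# Minimal Game of Life step: count live neighbours with a convolution,
# then apply the rules above.  Wrap-around (toroidal) edges are an
# arbitrary choice for this sketch.
import numpy as np
from scipy.signal import convolve2d

KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def step(grid):
    """Advance a 0/1 array of cells by one generation."""
    neighbours = convolve2d(grid, KERNEL, mode="same", boundary="wrap")
    survive = (grid == 1) & ((neighbours == 2) | (neighbours == 3))
    born = (grid == 0) & (neighbours == 3)
    return (survive | born).astype(int)

# A glider on a 10x10 grid, stepped a few generations:
grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = step(grid)
print(grid.sum())   # still 5 live cells: the glider has simply moved
```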
Paradise Lost Book III Summary & Analysis | SparkNotes
Summary: Book III

Book III opens with a second invocation to his muse, this time addressed to “holy light” (III.1). Milton asks that the heavenly light shine inside him and illuminate his mind with divine knowledge so that he can share this knowledge with his readers.

The scene shifts to Heaven, where God has been watching all of the events in Hell with his Son sitting at his right hand. He sees Satan flying up toward the new Earth and the parents of mankind. At the same time, he sees everything that will happen because of it, perceiving past, present, and future simultaneously. He sees that man will fall, of his own fault, because God gave him free will—yet without that will, man would not be capable of sincere love. Man would merely go through the motions. While it would be just to punish man for his own actions, God determines that he will act primarily out of love and mercy.

The Son, full of compassion, praises God for his kindness toward man, but asks how mercy can be given without destroying justice. God answers that a suitable sacrifice must be made: someone worthy must offer to die to pay for man’s sin. The angelic choirs are silent, but the Son immediately offers himself. He will become mortal so that God can yield to Death and conquer Hell. God is overjoyed, even though he will be giving up his son, because he knows that it is good to sacrifice his son for the salvation of the human race, in order for justice and mercy to be served. Those that have faith in the Son will be redeemed, but those who do not accept grace will still be doomed to Hell. The choirs of angels now break into a song of praise extolling the goodness of both Father and Son, which will turn a sorrowful deed into greater glory for both God and man.

The story returns to Satan, who lands on Earth in what is now China. There are not yet any living things there, or any of the works of man that will eventually distract man’s mind from God. At length, Satan sees a high-reaching structure in the distance, an enormous kingly gate in the sky with stairs leading all the way down to Earth. This gate guards Heaven, which was at that time visible from Earth. Flying over to it, Satan climbs up a few steps to get a better view. He sees the new creation in all its glory, but can only feel jealousy. He does not stay put for long, though: he is drawn by the golden sun, hanging above the green and lush land, and flies toward it. There he sees an angel standing on a hill.
To deceive him, Satan changes to a cherub, or low-ranking angel. Recognizing the other angel as the Archangel Uriel, Satan approaches and addresses him. Satan claims to have just come down from Heaven, full of curiosity about the new world he has been hearing so much about, and curious about its inhabitants. Satan’s transformation and his speech are so flawless that even Uriel cannot see through the subterfuge. The Archangel is pleased that a young angel is showing so much zeal to find out about the world that God brought out of the Chaos from earth, air, wind and fire. He happily points out the way to Paradise, where Adam lives. After giving his due respects, Satan flies off with dark intentions.

Analysis: Book III

As the narrative of Paradise Lost shifts from its sustained focus on Hell and Satan and begins to present glimpses of Heaven and God, we may feel that the story loses some of the intense interest and appeal that it began with. The discussion in Heaven is moving and theologically interesting, but the parts of the poem treating the evil designs of Satan are written with more potency and rhetorical vigor. The characters in Heaven play a relatively passive role, watching the story unfold, while Satan actively and endlessly devises his evil machinations. Moreover, the sinful, evil characters hold our attention more easily than the pure and virtuous ones. Satan appears to be the active hero, struggling for his personal desires, and God may seem rather dull. These observations, however, are beside the point that Milton hopes to prove to his readers: God’s reason and grace rule the universe and control all of those who live there.

The encounter between Satan and Uriel demonstrates Satan’s capacity for deception and fraud, as he subverts Uriel’s role as a guardian by disguising himself as a cherub. Uriel is unable to recognize Satan in part because he does not believe it possible that Satan would be lurking around. As a devout and virtuous angel, Uriel is unable to recognize evil even when it presents itself right in front of him. Through Satan’s deception of Uriel, Milton shows the significance of the sin of fraud, or hypocrisy. Fraud is an especially damaging sin because it is invisible to others, hurting them in ways they are not even aware of. In the Inferno, Dante maintains that fraud is the worst of all man’s sins. Milton goes almost as far in showing that leading innocent people to evil is much worse than leading yourself to evil.

Milton reveals his own personal theological positions in Book III. Through God’s initial speech, for example, Milton discards the orthodox Calvinist position of predestination. Omniscient God, seeing the fall in the future, says that men cannot blame God for their fate, or for acts of evil or bad luck, insisting that man possesses free will, even though God can foresee what they will do. God’s speech here contradicts the Calvinist belief, held by most of Milton’s fellow Puritans, that the fate of every man’s soul is decided before birth. Milton refuses to abandon his belief in free will, insisting that man must have free will in order to prove his sincere love for God. This balance between free will and virtue is a paradox—man is free to choose, but only truly free when he chooses the good.
Milton had to confront certain problems inherent in any attempt to represent beings and events outside of time and human understanding. To have God and the Son appear as separate characters in a work of fiction poses particular problems and risks in terms of logical consistency. There may not be a completely coherent way to represent God and the Son as characters who are both independent and human-like, but at the same time consubstantial, omniscient, omnipresent, and omnipotent. It was extremely ambitious of Milton to risk heresy by putting words in God’s mouth, and he lessens this risk by incorporating numerous biblical allusions into the speeches of God and the Son.

By making God and the Son two different characters, Milton asserts that they are essentially separate but equal entities. Milton did not believe in the Holy Trinity completely, and believed that the Son was created after God, not coeternally. The relationship between God and the Son is not fully revealed. Appearing as separate characters with separate comments, they may still share a mind. Some actions, like God’s plea for a volunteer, and the Son’s subsequent volunteering, argue that they do not share a single mind. God asks for a volunteer, yet he must know ahead of time that his Son will be the only volunteer. The precise nature of the relationship between the two remains mysterious.
393
python - Numpy where syntax from docs - Stack Overflow
===============
Numpy where syntax from docs

Asked 7 years, 11 months ago · Modified 7 years, 11 months ago · Viewed 1k times · Score: 1

Trying to teach myself some python and I am super confused from the docs what the where function does. Can somebody explain the example from the documentation below step by step please?

```python
>>> np.where([[True, False], [True, True]],
...          [[1, 2], [3, 4]],
...          [[9, 8], [7, 6]])
array([[1, 8],
       [3, 4]])
```

Tags: python, numpy

Asked Aug 22, 2017 at 7:11 by Moeiz Riaz

Comments:

- "Not sure if learning Python via the use of numpy is the best path to follow (in a general case)." – Ignacio Vergara Kausel, Aug 22, 2017
- "This actually got me confused as I thought the entire first condition must be met. So if you don't make the conditions in the same structure, automatic broadcasting takes place, some of it somewhat odd." – dia, Jun 2, 2018

3 Answers

Answer (score 2) – Daniel F, Aug 22, 2017:

The basic syntax is np.where(x, a, b). Wherever x is true, take that element of a, and wherever it's false, take an element of b.

Answer (score 1) – Alexander, Aug 22, 2017:

Basically it is used as follows: np.where(condition, value if condition is True, value if condition is False). In this case, the condition is [[True, False], [True, True]], the value if the condition is True is [[1, 2], [3, 4]], and the value if the condition is False is [[9, 8], [7, 6]]. The final result, array([[1, 8], [3, 4]]), is equal to the 'value if condition is True' array, except for the one location in the condition where it is False. In this case, the value of 8 comes from the second value array, [[9, 8], [7, 6]].

Answer (score 1) – MB-F, Aug 22, 2017:

I think it becomes pretty clear when you add linebreaks to arrange the inputs to look like matrices:

```python
np.where(
    # First argument
    [[True, False], [True, True]],
    # Second argument
    [[1, 2], [3, 4]],
    # Third argument
    [[9, 8], [7, 6]])
```

You can see the first argument as a mask that determines from which of the two following inputs elements should be taken. The result

```python
array([[1, 8], [3, 4]])
```

contains elements from the second argument wherever the mask is True and elements from the third argument where it is False.
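Not part of the original thread, but as a minimal runnable sketch of what the answers describe: it reproduces the documentation example and checks the element-wise masking equivalence (take from the second argument where the mask is True, from the third where it is False).

```python
import numpy as np

# The example from the numpy documentation discussed in the question.
cond = np.array([[True, False], [True, True]])   # the mask (first argument)
a = np.array([[1, 2], [3, 4]])                    # values used where cond is True
b = np.array([[9, 8], [7, 6]])                    # values used where cond is False

result = np.where(cond, a, b)
print(result)
# [[1 8]
#  [3 4]]

# Equivalent element-wise selection with boolean masking, as the first
# answer describes: elements of a where cond is True, of b where it is False.
equivalent = a * cond + b * ~cond
assert np.array_equal(result, equivalent)
```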
394
Published Time: 2025-07-30
Mendelian randomization linking metabolites with enzymes reveals known and novel pathway regulation and therapeutic avenues | medRxiv
===============

Mendelian randomization linking metabolites with enzymes reveals known and novel pathway regulation and therapeutic avenues
===========================================================================================================================

Adriaan van der Graaf (1,2), Sadegh Rizi (3), Chiara Auwerx (1,2,4,5,6), Zoltán Kutalik (1,2,6)

1 Department of Computational Biology, University of Lausanne, Lausanne, Switzerland
2 Swiss Institute of Bioinformatics, Lausanne, Switzerland
3 University of Tehran, Tehran, Iran
4 Center for Integrative Genomics, University of Lausanne, Lausanne, Switzerland
5 Center for Genomic Medicine, Massachusetts General Hospital, Boston, MA, USA
6 University Center for Primary Care and Public Health, Lausanne, Switzerland

For correspondence: Zoltan.Kutalik{at}unil.ch

This article is a preprint and has not been peer-reviewed. It reports new medical research that has yet to be evaluated and so should not be used to guide clinical practice.

Abstract

Reactions between metabolites are catalyzed by enzymes. These biochemical reactions form complex metabolic networks, which are only partially characterized in humans and whose regulation remains poorly understood. Here, we assess human biochemical reactions and regulation using Mendelian randomization (MR), a genetic observational causal inference technique, to understand the method's strengths and weaknesses in identifying metabolic reactions and regulation. We combine four metabolite and two protein quantitative trait locus (QTL) studies to determine how well MR recovers 945 curated canonical enzyme-substrate/product relationships. Using genetic variants from an enzyme's transcribed (cis) region as instrumental variables, MR-inferred estimates have high precision (35%-47%) but low recall (3.2%-4.6%) in identifying the substrates and products of an enzyme. Testing reverse causality from metabolites to enzymes using genome-wide instruments yields lower precision (1.8%-8.5%) and recall (1.0%-1.9%) due to the increased multiple testing burden.
Literature review of 106 Bonferroni-significant results identifies 45 links (43%) confirmed by different degrees of evidence, including bidirectional links between linoleate and Cytochrome P450 3A4 (CYP3A4) levels (P = 8.6 × 10^-32). Eleven enzymes in the 106 links involve drug targets, allowing for an interpretation of the link between N-acetyl putrescine and IL1RAP (P = 2.7 × 10^-7), as IL1RAP is the target of the psoriasis drug Spesolimab, and putrescine levels are elevated in psoriatic tissues. This work highlights how MR can be leveraged to explore human metabolic regulation and identify both canonical reactions and previously unknown regulation.

Introduction

Metabolic reactions with unfavorable activation energy are catalyzed by enzymes. These enzymes are tightly regulated to maintain optimal metabolite concentrations under fluctuating environments, preventing both the depletion of essential metabolites and their accumulation to toxic levels. Enzymes can be regulated through processes such as competitive and uncompetitive inhibition, covalent enzyme modifications, phosphorylation and pathway inhibition or activation1. Yet, our knowledge of human metabolism remains incomplete: i) novel human metabolic pathways are still being discovered2, ii) the human genome still contains ‘orphan’ enzymes predicted to have a catalytic function that have yet to be experimentally demonstrated3,4 and iii) the regulation of the activation or repression of metabolic pathways is still an active field of study, for instance through metabolic flux analysis5.

It is possible to model metabolite pathways as a series of causal relationships, where the enzymatic activity of a protein is causal to the concentration(s) of the substrate and the product. This approach allows the identification of new putative metabolic links and regulatory mechanisms. These conclusions are usually derived from techniques such as randomized controlled experiments, metabolic flux analyses or Bayesian causal networks, and have successfully identified metabolic pathways in humans and other organisms5–8.

In this work, we explore a different causal inference technique to infer human pathway regulation: Mendelian randomization (MR). MR is an observational causal inference technique that uses genetic information to establish directional links between an exposure and an outcome10,11. In principle, it is possible to identify pathway regulation between metabolites and proteins using MR (Box 1). Motivated by the increased availability of well-powered metabolite (mQTL) and protein (pQTL) quantitative trait locus studies, as well as the ability of cis MR methodology to meaningfully improve drug target prediction12,13, we expect that MR is valuable in identifying the substrates and products of an enzyme and its regulation. To this end, we explore if and how MR can be used to derive meaningful conclusions about human metabolism and its regulation.

Here, we identify regulation between enzymes and metabolites through a form of observational causal inference called Mendelian randomization (MR). MR allows the identification of causal relationships between a heritable exposure and an outcome, even when these traits are not measured in the same individuals, by using genetic variants that influence an exposure of interest. Should there be a causal relationship between this exposure and an outcome, the genetics of the exposure should also be associated with the outcome in a proportional manner.
MR operates on three main assumptions: i) the relevance assumption: the genetic variant needs to be robustly associated with the exposure of interest; ii) the independence assumption: the genetic variant needs to be independent of any confounder of the exposure-outcome relationship; and iii) the exclusion restriction: the genetic variant needs to affect the outcome only through the exposure, and there should be no other paths. Violations of the exclusion restriction assumption – otherwise known as horizontal pleiotropy – are difficult to account for, especially when there is only a limited number of independent genetic variants available11,14. Recent advances in MR allow for pleiotropy-robust analysis of molecular phenotypes that have a single or only a handful of associated regions15,16.

Our goal in this study is to identify to what extent enzymatic reactions can be re-identified by MR based on available pQTL and mQTL studies. Then, we aim to study enzyme-metabolite pairs that are significantly linked by MR but are not in the gold-standard benchmark set, to estimate whether these reflect true human biology or rather false positives. For this, we use two pathway references to build a metabolic map between metabolites and their enzymes in humans (Figure 1). Then, using pQTLs derived from two independent studies17,18 and mQTLs derived from four independent studies19–22 (Figure 1a), we estimate the causal effects that proteins have on metabolites and vice versa, using the protein levels as a proxy for enzymatic activity, distinguishing between MR results based on instruments in the cis region of the enzyme and those based on meta-analyzing all cis and trans regions together (Figure 1b).

We find that MR methods have limited discriminative ability to determine if a metabolite is catalyzed by a specific protein. Yet, Bonferroni-significant MR estimates are enriched for true reactions. An extensive literature search of the Bonferroni-significant combinations that are not in our metabolic map proposes possible mechanistic explanations for some of the newly identified links and points out multiple drug targets among the involved enzymes, suggesting putative metabolite downstream drug effects.

Figure 1: Overview of the data used in this study. (a) The different metabolite quantitative trait locus (mQTL) studies and protein quantitative trait locus (pQTL) studies, combined with two pathway reaction references, ‘Expasy + KEGG’ and ‘MetaCyc’. Based on the matching of proteins to metabolites we identify 1,742 enzyme-metabolite links. (b) The number of measured enzyme-metabolite links present in each pathway reference.

Methods

Metabolite quantitative trait locus studies

We analyzed four metabolite quantitative trait locus studies, as described in our previous publication16. In short, the summary statistics for the four mQTL studies were downloaded from their respective resources19–22, and the metabolites were harmonized to Human Metabolome Database (HMDB) identifiers. This resulted in 1,109 metabolites to which an HMDB ID was assigned.

Protein quantitative trait locus studies

We analyzed two pQTL studies: the DeCODE study18 and the UK Biobank Pharma Proteomics Project (PPP)17. The DeCODE study comprises genetic associations of 4,907 plasma proteins across 35,559 individuals. The PPP study contains genetic associations for 2,923 plasma proteins across 54,219 individuals.
The summary statistics and metadata were downloaded from their respective resources, and were harmonized using the UniProt identifiers, which were provided by all three studies.

Identification of catalytic enzymes

We used two independent resources to build our metabolic map: KEGG+Expasy and MetaCyc. For KEGG+Expasy, we used the Expasy enzyme resource to retrieve the enzymes that have catalytic effects in humans23, and the KEGG resource to identify metabolites that are involved in the reactions. Out of 8,227 enzymes from the Expasy resource, we identified 4,277 enzyme commission (EC) numbers that match to human enzymes, including 3,592 unique enzymes (some enzymes catalyze multiple reactions)24. We used KEGG to match the compounds to each enzymatic reaction. We identified 229 human KEGG pathways in which the measured compounds are present and downloaded the XML files for each human pathway. From these, we derived the 1,489 reactions for which protein and reactant information was available for at least one pQTL and mQTL study. After matching these reactions to the enzymes that catalyze them, this resulted in 1,236 measured enzyme-reactant combinations.

As a secondary metabolite reference, we used the HumanCyc v24.0 database to identify enzymes and reactants that are present in a pathway25, matching a total of 1,925 unique reactions that can be catalyzed by 2,933 unique proteins. After matching the reactions to the measured proteins and metabolites, we ended up with 231 unique exposure and outcome combinations.

Summary statistics harmonization

We jointly harmonized summary statistics from all pQTL and mQTL studies in the same way. First, if necessary, the files were lifted over to human genome build GRCh37 to match our allele reference using the UCSC liftover tool. Second, variants were matched to the UK10K linkage disequilibrium reference panel26, matching variants on their genomic positions and alleles, while removing palindromic variants and variants that have a minor allele frequency of less than 0.5% in the panel. We then harmonize the effect sizes of the summary statistics to the standard deviation per standard deviation scale as a function of the variant's sample size n and the Z score z of the association. We set n to be the maximum sample size if n was not available for a given variant. The P value and effect size sign of the variant-trait association were converted into a Z score if the Z score was not available16. If the per-variant sample size was available, we removed variants that were missing more than 5% of the maximum sample size in a region.

Identification of associated regions

We identified associated regions by P value clumping genetic variants at a P value threshold smaller than 5 × 10^-8, using the plink (v1.90b7)27 --clump command with a clumping window of 250 kb and an LD threshold of r² = 0.01. Overlapping clumped regions were combined. Non-overlapping regions were analyzed by all four cis MR methods16. If a method requires instrumental variables, these were clumped from the full region using the same P value and LD threshold. To reduce the impact of reverse causality, we removed the protein's cis region if the protein is the outcome in the meta-analysis.

MR methods used in this study

We used four MR methods: MR-IVW11, MR-IVW-LD, MR-PCA28 and MR-link-216. These methods have been chosen because they are appropriate to use on one or multiple associated regions.
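To make the first of the listed methods concrete, below is a minimal sketch of the textbook inverse-variance-weighted (IVW) MR estimator computed from summary statistics of independent instruments. It is an illustration of the standard formula only, under the usual summary-statistics assumptions; the LD-aware (MR-IVW-LD), principal-component (MR-PCA) and MR-link-2 variants used in the preprint are not reproduced here, and the input numbers are hypothetical.

```python
import numpy as np

def mr_ivw(beta_exp: np.ndarray, beta_out: np.ndarray, se_out: np.ndarray):
    """Basic inverse-variance-weighted MR estimate from summary statistics of
    independent instruments: a weighted regression of outcome effects on
    exposure effects through the origin, with weights 1 / se_out**2.

    Textbook fixed-effect formula, given here for illustration only.
    """
    w = 1.0 / se_out**2
    alpha = np.sum(w * beta_exp * beta_out) / np.sum(w * beta_exp**2)
    se_alpha = np.sqrt(1.0 / np.sum(w * beta_exp**2))
    return alpha, se_alpha

# Hypothetical per-variant effects for one enzyme-metabolite pair.
beta_exp = np.array([0.12, 0.08, 0.15])   # variant effects on enzyme abundance
beta_out = np.array([0.06, 0.05, 0.09])   # variant effects on metabolite level
se_out = np.array([0.010, 0.020, 0.015])  # standard errors of the outcome effects

alpha, se_alpha = mr_ivw(beta_exp, beta_out, se_out)
print(f"IVW causal estimate: {alpha:.3f} +/- {se_alpha:.3f}")
```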
Code implementations for all the methods are available online. We performed three ways of interpreting the MR analyses: i) cis MR, ii) cis + trans MR, and iii) bidirectional meta-analyzed MR, which performs region-based MR for every exposure-associated region in both the protein-to-metabolite and metabolite-to-protein directions and meta-analyzes the region-based MR estimates.

In cis MR, we performed regional MR when any clumped variant in the gene region (defined in the section Identification of associated regions) is within 250 kb of the start or end of the gene that encodes the enzyme, according to Ensembl BioMart. In the cis + trans analysis, we analyze all associated regions of an enzyme and treat each individual associated region as an independent MR analysis. Here we only test the direction from the enzyme to the metabolite. In the meta-analyzed MR we perform a bidirectional MR, identifying causality between enzymes and metabolites, and between metabolites and enzymes. This can identify canonical metabolic reactions (from the enzyme onto the metabolite), but also their regulation (both the enzyme onto the metabolite and the metabolite onto the enzyme). We perform an MR for all genetic associations of an enzyme or a metabolite and we meta-analyze these estimates. The enzyme-to-metabolite direction is the only direction that we consider truly causative, but as we perform biological follow-up in both directions, we include both directions in the final approach.

Discriminative ability analysis using AUC

We used the AUC to assess the discriminative ability of different MR methods. True links were defined as enzymes and their reactants, whereas the true non-links are all the other protein-metabolite combinations. The AUC was calculated with the scikit-learn library29.

Manual literature search and classification of evidence

For each MR-link-2-significant link we performed a literature review. Specifically, we searched for both the protein-metabolite combination and the metabolite in combination with the products and substrates of the reaction that the protein catalyzes in PubMed, OMIM and the BRENDA enzyme database30,31. We classified our findings into five categories of evidence:

Canonical reaction: significant combinations where the exposure is the enzyme that directly catalyzes the metabolite outcome.
Reverse canonical evidence: causal relationships where the metabolite acts on its catalyzing enzyme.
Pathway evidence: causal relationships where the metabolite and the enzyme are in the same pathway.
Homologous evidence: causal relationships where a paralog (similar gene in the same organism) or ortholog (similar gene in another organism) of the enzyme is related to the metabolite, indicating a likely effect based on the BRENDA data resource.
Supporting evidence: these combinations are cryptic and may represent novel and unknown regulation; it is, however, also possible that these combinations are false positives.

OpenTargets matching drugs to their consequences

We downloaded the ‘molecule’ and ‘diseases’ tables from the OpenTargets platform (June 2024) and matched 1,550 OpenTargets drug targets to 358 of the unique enzyme measurements. We conducted a literature search of all drug targets that are Bonferroni significant in our original analysis.
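As an illustration of the discriminative-ability analysis described above, here is a minimal sketch (with hypothetical inputs, not the authors' code) that scores enzyme-metabolite pairs against a binary ground truth with scikit-learn; ranking pairs by -log10 of their MR P value is an assumption made for this example.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical inputs for illustration: one entry per enzyme-metabolite pair.
# is_true_link: 1 if the pair is in the curated metabolic map, else 0.
# mr_p_values: the MR P value for that pair from a given method.
is_true_link = np.array([1, 0, 0, 1, 0, 0, 1, 0])
mr_p_values = np.array([1e-9, 0.2, 0.04, 3e-4, 0.6, 0.9, 0.01, 0.05])

# Smaller P values should rank higher, so use -log10(P) as the score.
scores = -np.log10(mr_p_values)
auc = roc_auc_score(is_true_link, scores)
print(f"AUC: {auc:.3f}")
```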
Results

Matching pathway references with protein and metabolite QTLs

To build a ground truth dataset of enzymatic reactions that can be matched to pQTL and mQTL studies, we first use two human pathway reference datasets, KEGG+Expasy and MetaCyc, to identify enzymes and their reactants and build a metabolic map25,32,33 (Methods). KEGG+Expasy contains 1,489 reactions, while MetaCyc contains 1,925 reactions (Figure 1a). Each reaction consists of one enzyme, one or more substrates, and one or more products (Supplementary Table 1) (Methods).

We matched enzymes from our metabolic map to two protein QTL studies: the DeCODE study comprising 4,907 proteins across 35,559 individuals18 and the UK Biobank Pharma Proteomics Project (UKB-PPP) comprising 2,923 proteins across 54,219 individuals (Figure 1a). After matching proteins to our metabolic map, 2,696 and 2,180 protein measurements had catalytic activity according to KEGG+Expasy and MetaCyc, respectively (Figure 1a). Across pQTL studies, 1,624 unique proteins had a catalytic function according to at least one pathway reference (Supplementary Table 2). We matched the metabolites from our metabolic map to those in four mQTL studies: Shin et al., Lotta et al., Chen et al., and Borgess et al.19–22. This resulted in 324 unique enzymes across 424 measurements in reactions with 121 unique metabolites across 214 measurements. Overall, our metabolic map encompasses 1,310 protein-metabolite pairs, including 540 and 770 pairs in which the metabolite acts as substrate and product, respectively (Figure 1a) (Figure 1b) (Supplementary Table 3).

When considering these ground truth reactions from the metabolic map, the links from enzyme abundance to reactant concentration are the true positives, as we expect that enzyme abundance causally influences the concentrations of substrates and products. We make a distinction between the enzyme-onto-metabolite direction and the metabolite-onto-enzyme direction, as the enzyme-onto-metabolite direction can be understood as directly causative: an enzyme converts a metabolite in a well-understood physical process. As a reaction occurs, we expect substrate depletion and an increase in product concentration. Under the assumption that enzymes catalyze a reaction in a single direction, we expect to see negative causal estimates from the enzyme to the substrate and positive causal estimates from the enzyme to the product. In contrast, we do not consider the metabolite-onto-enzyme direction directly causative, as there is no obvious physical process through which the reactant influences the abundance of the catalyzing enzyme. Instead, we consider such influences as indirect regulatory effects, such as for instance the regulation between cholesterol concentrations and the degradation of HMG-CoA reductase34, as well as between xenobiotics and cytochrome P450 3A4 abundances35.

Mendelian randomization analyses

We perform MR using four cis MR methods (MR-IVW, MR-PCA, MR-IVW-LD and MR-link-2) to test how well MR can identify the causal relationships between an enzyme and its substrates and products (Methods). We consider three forms of MR analysis: i) cis MR, where only the genetic variants proximal to the gene encoding the enzyme of interest are used as instrumental variables in the MR; ii) cis+trans MR, which performs MR independently for each associated locus; and iii) bidirectional cis+trans meta-MR, which meta-analyzes region-based MR estimates (Methods).
First, for the cis MR analysis, we identify enzyme-to-metabolite effects using only the cis region (where the enzyme is transcribed) as a source of MR instruments. Secondly, in cis+trans MR, we perform MR in each enzyme-associated region separately, yielding multiple (region-specific) causal effect estimates. These MR effect estimates are tested against the gold standard set independently. Thirdly, for meta-analyzed MR, we combine the individual region-based MR estimates from the cis+trans MR for each enzyme-to-metabolite MR, as well as for each metabolite-to-enzyme MR, using inverse variance weighting meta-analysis (Methods).

We only test the enzyme-to-metabolite direction when performing cis and cis+trans MR, as we are testing the merit of these methods for identifying enzymatic reactions and a cis region is not available for a metabolite. This contrasts with the meta-analyzed MR, where we also test the metabolite-to-protein direction, as here we broaden the research question and assess any potential indirect regulation through a literature search. In all these estimates, we consider the protein-to-metabolite direction to be the only truly causative direction, and we perform literature follow-up on significant findings only for the most conservative, meta-analysis MR approach.

Cis Mendelian randomization to identify reactants and their enzymes

We applied MR to cis regions that associate (SNP P ≤ 5 × 10^-8) with 159 enzymes (197 measurements) that are in the metabolic map. These enzymes catalyze reactions involving 88 metabolites (139 measurements), leading to 499 protein-metabolite pairs that are the positive controls in this study (Supplementary Table 4). All other 25,420 possible pairwise comparisons between enzymes and metabolites are considered negative controls. We provide an example cis MR result in Figure 2: arginase 1 converts arginine into ornithine and urea (EC 3.5.3.1) (Figure 2a), and the corresponding MR-link-2 causal estimates for the substrate and products are shown in Figure 2b-d.

Figure 2: Example of MR effect estimation. (a) Considering the reaction catalyzed by arginase 1 (ARG1), with arginine as the substrate, and ornithine and urea as the products, one can estimate a causal relationship between the enzyme and the reactants, provided they are measured in each study. (b) The causal estimates between ARG1 and the arginine substrate; as arginine is a substrate, a negative causal effect is expected. (c) The causal estimates between ARG1 and the reaction product ornithine. (d) The causal estimates between ARG1 and the reaction product urea. It is not possible to estimate a causal effect if either the enzyme or the metabolite is unmeasured in a single study.

As cis regions are only available for enzymes, this methodology will only be able to identify causality from enzyme concentrations onto metabolite levels. When testing the directionality of MR methods on substrates and products, MR-link-2 is the only method that identifies a significantly larger causal effect for products than for substrates (Kruskal-Wallis P = 0.011; median product causal effect estimate α̂ = 0.008, median substrate α̂ = -0.013) across 219 substrates and 320 products from our metabolic map (Supplementary Figure 1).
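For context on the meta-analyzed MR described above, the following is a minimal sketch of a standard fixed-effect inverse-variance-weighted meta-analysis of region-based causal estimates; the per-region effects are hypothetical and the formula is the textbook one, assumed here for illustration rather than taken from the preprint's code.

```python
import numpy as np

def ivw_meta(betas: np.ndarray, ses: np.ndarray):
    """Fixed-effect inverse-variance-weighted meta-analysis of per-region
    MR estimates: weights are 1/se**2, the combined effect is the weighted
    mean, and its standard error is 1/sqrt(sum of weights)."""
    weights = 1.0 / ses**2
    beta_meta = np.sum(weights * betas) / np.sum(weights)
    se_meta = np.sqrt(1.0 / np.sum(weights))
    return beta_meta, se_meta

# Hypothetical region-based estimates for one enzyme-metabolite pair
# (e.g., a cis region and two trans regions); values are illustrative only.
betas = np.array([0.65, 0.70, 0.55])
ses = np.array([0.10, 0.15, 0.20])

beta, se = ivw_meta(betas, ses)
print(f"meta-analyzed effect: {beta:.3f} +/- {se:.3f}")
```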
Using the positive control enzyme-metabolite combinations, it is possible to determine the discriminative ability of the different MR methods, which is limited, with the area under the receiver operating characteristic curve (AUC) ranging between 0.543 and 0.554 (Supplementary Table 5) (Supplementary Figure 2). We found that precision (the proportion of true positives among all detected links) for all MR methods was highest in the low P value range; we show precision and recall (the proportion of true positives recovered among all true links) on a log scale (Supplementary Figure 3a) (Supplementary Figure 3b). Analyzing causal estimates below the cis-specific Bonferroni threshold, all methods had low recall (3.2%-4.6%) combined with high precision (35%-47%) (Supplementary Figure 3). Each MR method had similar precision and recall at any specific P value rank, with MR-IVW-LD having the highest recall (with lower precision) and MR-link-2 the highest precision (with lower recall), in line with our previous work16. Considering only unique enzyme-to-metabolite combinations, MR-PCA identified the largest number (n = 28) of unique enzyme-to-metabolite links, including 12 positive controls (Supplementary Figure 3c) and 16 likely false positives (Figure 3d). In contrast, MR-link-2 identified eight positive controls (Figure 3c) and five likely false positives (Figure 3d) (Supplementary Figure 3).

Figure 3: Precision and recall curves of MR methods when meta-analyzing results together in a bidirectional manner between enzymes and metabolites. (a) Recall and (b) precision depending on the chosen P value threshold ordering. (c) Set memberships for the true positives at each method's Bonferroni threshold. (d) Set memberships of false positives at each method's Bonferroni threshold. Abbreviations: (TP) True positive, (FP) False positive.

To extend the true positive set, we added further links that were reported in the BRENDA database31 (Methods). We determined for each enzyme-metabolite combination whether the metabolite had been found as a substrate, product or inhibitor of the enzyme, capturing links otherwise missed by our original true positive dataset. Two out of five unconfirmed links found by MR-link-2 were an inhibitor or a substrate according to BRENDA. In the case of MR-PCA, 4 out of 16 links represented a substrate, product or inhibitor relationship (Supplementary Table 6) (Supplementary Figure 4) (Methods)31.

cis + trans MR to extend causality

Given that the cis MR results had high precision with relatively low recall, we reasoned that including trans-associated loci could increase the number of causal links that may be biologically informative. We set out to investigate the ability of cis MR methods to identify further protein-to-metabolite causality using all associated regions (cis + trans). We analyzed a total of 451,502 combinations, of which 4,987 are considered true positives. At a Bonferroni significance threshold, recall ranged between 0.4% and 1% depending on the MR method (Supplementary Figure 5a), while precision ranged between 1.8% and 11.6% (Supplementary Figure 5b) (Supplementary Table 7). Even though we expected to increase recall in the trans analysis over the cis analysis, at the cost of precision, the inclusion of separate trans region-based MR estimates identified fewer unique true positives for MR-link-2 and the other methods compared to the cis MR analysis.
cis + trans MR to extend causality

Considering that the cis MR results had high precision but relatively low recall, we reasoned that including trans-associated loci could increase the number of causal links that may be biologically informative. We therefore investigated the ability of the same MR methods to identify further protein-to-metabolite causality using all associated regions (cis + trans), analyzing a total of 451,502 combinations, of which 4,987 are considered true positives. At a Bonferroni significance threshold, recall ranged between 0.4% and 1% depending on the MR method (Supplementary Figure 5a), while precision ranged between 1.8% and 11.6% (Supplementary Figure 5b) (Supplementary Table 7). Even though we expected the trans analysis to increase recall over the cis analysis at the cost of precision, the inclusion of separate trans region-based MR estimates identified fewer unique true positives for MR-link-2 compared to the cis MR analysis. This is likely due to the increased multiple testing burden and to an increase in the set of positives, as not all positives can be tested in the cis analysis. For illustration, MR-link-2 identified eight unique true positives in the cis analysis, compared to seven in the cis+trans analysis (Supplementary Figure 3c) (Supplementary Figure 5c-d) (Supplementary Table 7). Nonetheless, including trans regions in an analysis is not without merit: some trans regions were individually significant at the stringent multiple testing threshold. For instance, ARG1 abundances were regulated in trans on chromosome 19, providing a concordant, Bonferroni-significant true positive causal estimate on ornithine concentrations (α̂ = 0.67, P = 4.0 × 10⁻⁸) (Supplementary Table 7).

Meta-analyzing MR between enzymes and metabolites

Based on the observation that cis as well as trans regions can be informative, we combined all associated regions into a single causal estimate per enzyme-to-metabolite pair, as well as in the reverse direction, additionally testing the effect of metabolites on enzyme abundances, using standard inverse variance weighting meta-analysis (Methods). This is the first analysis in which we include the reverse, metabolite-to-enzyme regulation direction. We still only consider the enzyme-to-metabolite direction causative and attribute any significant results in the metabolite-to-enzyme direction to indirect regulation, which we later follow up through literature search. Combining all enzyme-metabolite pairs resulted in 215,556 combinations (506 enzymes × 213 metabolites × 2 directions), of which 146,062 could be instrumented by MR, setting our meta-analysis Bonferroni-corrected threshold for significance at P ≤ 3.42 × 10⁻⁷ (Methods). Of these, 945 enzyme-to-metabolite combinations represent ground truth positive controls in our metabolic map (Supplementary Table 8). We find that all tested MR methods have modest discriminative ability in the meta-analysis (AUC: 0.512-0.531) (Supplementary Table 9). When considering Bonferroni-significant results, there is strong enrichment for true metabolic reactions across all methods (odds ratio: 2.9-13.2, P: 3.0 × 10⁻⁴ to 7.4 × 10⁻⁸) (Supplementary Table 10). The differences in precision and recall between MR methods were more pronounced than in the cis MR analysis (Figure 3a) (Figure 3b); MR-link-2, MR-PCA, MR-IVW and MR-IVW-LD identified 106, 660, 870 and 912 Bonferroni-significant effects, respectively. Methods differ strongly in their precision (Figure 3a) (Supplementary Table 8), with the best precision obtained by MR-link-2 (8.5%) compared to at most 2.3% for the other methods. This comes at the cost of lower recall: nine out of 945 true comparisons pass Bonferroni significance with MR-link-2, compared to 19 for MR-IVW-LD (Figure 3b) (Supplementary Table 8). These precision and recall curves can also be used to compare MR methods: generally, all methods have similar precision and recall, but there are deviations, especially in the most significant parts of the curves (Figure 3a) (Figure 3b) (Supplementary Table 8).
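The meta-analysis and enrichment steps can likewise be sketched in a few lines: a simple fixed-effect inverse-variance combination of the region-specific estimates for one pair, and a Fisher exact test contrasting Bonferroni-significant calls between positive and negative controls. This is a simplified illustration of the procedure summarized above, not the exact implementation described in Methods.

```python
import numpy as np
from scipy import stats

def meta_analyze(alphas, ses):
    """Combine region-specific causal estimates for one enzyme-metabolite
    pair into a single inverse-variance-weighted estimate with a P value."""
    alphas, ses = np.asarray(alphas, float), np.asarray(ses, float)
    w = 1.0 / ses ** 2
    alpha = np.sum(w * alphas) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return alpha, se, 2.0 * stats.norm.sf(abs(alpha / se))

def reaction_enrichment(is_significant, is_positive):
    """Odds ratio and Fisher exact P value for true reactions among
    Bonferroni-significant calls versus all remaining pairs."""
    sig, pos = np.asarray(is_significant, bool), np.asarray(is_positive, bool)
    table = [[np.sum(sig & pos), np.sum(sig & ~pos)],
             [np.sum(~sig & pos), np.sum(~sig & ~pos)]]
    return stats.fisher_exact(table)

# the Bonferroni threshold quoted above: 0.05 over 146,062 instrumentable pairs
print(f"{0.05 / 146_062:.2e}")   # ~3.42e-07
```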
Significant meta-analyzed MR-link-2 results show different types of evidence for pathway regulation

Since our ground truth dataset of reactions is not representative of the full biology linking metabolites and enzymes, we performed a literature review to annotate an MR-derived positive set of candidate reactions that were not reported in our metabolic map. Because we expected the meta-analysis of region-based MR estimates to be the most robust analysis, and because MR-link-2 yielded the highest precision, we manually annotated all 106 MR-link-2 Bonferroni-significant meta-analyzed results (across 84 unique combinations) (Figure 4c,d) (Figure 5). We classified the identified links into five classes: i) "evidence of a canonical reaction" (n = 12, 11%), ii) "reverse canonical evidence" (i.e., the metabolite influences the enzyme; n = 8, 7.5%), iii) "pathway evidence" (i.e., the metabolite and enzyme share a pathway; n = 6, 5.6%), iv) "homologous reaction evidence" (i.e., the enzyme was shown to interact with the metabolite based on homologous enzymes; n = 4, 3.8%) and v) "supporting evidence" (i.e., biological evidence that the enzyme and metabolite are related in model organisms or shared biological pathways; n = 15, 14%). The manual annotations of the Bonferroni-significant results, excluding "supporting evidence", are shown in Table 1, and all annotations are visualized as a graph in Figure 4. The manual annotations of the significant MR-link-2 meta-analyzed results that include "supporting evidence" are listed in Supplementary Table 11.

Figure 4: Bonferroni-significant MR-link-2 results with their biological interpretation. The pie chart depicts the numbers found for each class of interpretation and serves as a legend. Metabolites are depicted as diamonds, proteins as circles. A blunted arrow indicates a negative causal relationship, while a pointed arrow indicates a positive causal relationship.

Table 1: Bonferroni-significant MR-link-2 results with supporting evidence. Causal relationships are classified according to whether there is evidence of a canonical reaction, reverse canonical evidence (the metabolite influences the enzyme), pathway evidence (the metabolite and enzyme share a pathway), or homologous reaction evidence (the enzyme has been shown to interact with the metabolite based on evidence from homologous enzymes). Enzymes or metabolites might be measured in multiple studies, leading to multiple estimates (indicated by a '2x' or '3x' in the "Causal relationship" column). We only report the most significant result and the number of Bonferroni-significant (P < 3.4 × 10⁻⁷) combinations. Full results are given in Supplementary Table 11.

Canonical reactions

Twelve Bonferroni-significant protein-to-metabolite effects represent canonical reactions in which the enzyme is estimated to causally influence the substrate or the product of the reaction (Table 1; Figure 5). Three of these direct reactions were missed by our metabolic map due to differences in the harmonization of compounds and reactions and were retrieved through manual curation: TYMP (thymidine phosphorylase) causal to 2'-deoxyuridine levels (α̂ = -1.2, P = 3.1 × 10⁻¹⁸), ACY1 (aminoacylase 1) causal to N-acetyl-glutamate levels (α̂ = -0.13, P = 3.9 × 10⁻⁸) and ACY1 causal to glycine levels (α̂ = -0.13, P = 3.9 × 10⁻⁸).

Reverse canonical reactions

MR-link-2 identifies eight metabolites that influence their metabolizing enzyme (Table 1) (Figure 5) (Supplementary Table 11). These relationships can be further divided according to whether the enzyme also causally influences the metabolite (i.e., bidirectional effects; n = 4; arginine ➔ ARG1 (2x), N-acetyl-neuraminate ➔ NPL and 2'-deoxyuridine ➔ TYMP (Table 1)) or not (i.e., unidirectional effects; n = 4; N-acetyl-aspartyl-glutamate (NAAG) ➔ FOLH1 (2x), aspartate ➔ DARS and alanine ➔ ACY1).
In the latter scenario, all tested enzymes had an instrumental variable, yet the MR estimate from enzyme to metabolite was non-significant (Supplementary Table 8). This suggests the existence of feedback mechanisms wherein metabolites sense and regulate enzyme levels. For example, NAAG has a negative effect on its catalyzing enzyme, folate hydrolase (FOLH1, also known as glutamate carboxypeptidase 2; measured in two studies: α̂ = -0.10, P = 6.4 × 10⁻⁴⁰ and α̂ = -0.07, P = 1.6 × 10⁻²⁵). The enzyme converts NAAG into N-acetylaspartate and glutamate, suggesting a potential feedback mechanism that would prevent accumulation of glutamate, which has neurotoxic properties at high concentrations44,45 (Table 1).

Pathway regulation

We identify five enzyme-metabolite pairs that are closely related within the same pathway but not connected through a canonical reaction (Table 1) (Figure 5). For instance, TYMP ➔ uridine (uridine was measured twice, giving two causal estimates: α̂ = -0.57, P = 3.2 × 10⁻¹¹ and α̂ = -0.55, P = 5.6 × 10⁻⁸): uridine is two reactions away from 2'-deoxyuridine, a TYMP substrate (uridine ➔ uracil ➔ 2'-deoxyuridine) (KEGG: hsa01232)32,33. We also found that creatine kinase B (CKB) concentration causally affects glycine levels (α̂ = 0.16, P = 8.2 × 10⁻⁸). Creatine is a substrate of CKB and is two reactions away from glycine (glycine ➔ guanidinoacetate ➔ creatine)38. Interestingly, no MR method identified a causal relationship between CKB and creatine in either direction (P > 0.09) (Supplementary Table 8), suggesting that the causal link between CKB and glycine may represent a regulatory mechanism.

Homologous evidence

Sometimes a homologous enzyme has activity that can be related to the metabolite that MR-link-2 implicates, either through activity in another organism (an orthologous enzyme) or through activity of a copy of the gene (a paralogous enzyme). We find that intestinal alkaline phosphatase (ALPI) has a causal effect on three metabolites: glucose (α̂ = -0.03, P = 2.6 × 10⁻⁹), pyruvate (α̂ = -0.03, P = 2.0 × 10⁻⁷) and histidine (α̂ = 0.04, P = 2.9 × 10⁻⁹). ALPI is a wide-acting phosphatase, and studies of the yeast ortholog have shown that pyruvate and glucose can be reactants of the enzyme39,40 (Table 1). In line with this, ALPI levels have been reported as risk factors for diabetes46,47. We also estimated that CKB causally lowers valine concentration (α̂ = -0.187, P = 1.9 × 10⁻⁷), which can be related to energy pathways: CKB stores adenosine triphosphate (ATP) in creatine in neuronal cells and muscle, whereas valine can be converted into ATP in neuronal cells43,48. In mice, valine has been shown to inhibit CKB, providing orthologous evidence for a causal relationship42.

Regulation that has supporting evidence

Fifteen other enzyme-metabolite pairs have additional emerging evidence, either from animal studies or from presence in related pathways, or there is a plausible rationale for why the causal relationship may be valid, but no definitive human experimental result (Supplementary Table 11). Three combinations (PLA2G7 onto linoleate (18:2n6), linoleate (18:2n6) onto PLA2G7 and glycine onto ornithine transcarbamylase (OTC)) were recurrently identified in two independent studies, representing independent lines of evidence for the causality of these relationships. Furthermore, we identified multiple bidirectional causal relationships between lipoprotein-associated phospholipase A2 (PLA2G7) and the essential fatty acid linoleate, as well as with cholesterol.
MR-link-2 identified two linoleate-to-PLA2G7 causal relationships (PLA2G7 measured twice, two causal estimates: α̂ = 0.19, P = 9.7 × 10⁻¹⁴ and α̂ = 0.36, P = 1.3 × 10⁻⁴⁷) and two causal relationships from the enzyme to the metabolite (PLA2G7 measured twice, two causal estimates: α̂ = 0.05, P = 4.7 × 10⁻⁸ and α̂ = 0.24, P = 2.2 × 10⁻⁶¹). There is emerging evidence linking PLA2G7 and linoleate: PLA2G7 releases linoleate from dorsal root ganglia49 and PLA2G7 protects against the peroxidation of conjugated linoleic acids50. In line with this, we identify a causal relationship between cholesterol levels and PLA2G7, which has been associated with low-density lipoprotein particles, the primary cholesterol transporter in blood (Tellis and Tselepis, 2009).

Regulation without supporting evidence

For 62 pairs, our literature search did not find any supporting evidence for the causal relationship. Some patterns suggest that these may be false positives, while others could reflect novel biology that currently has limited evidence (Supplementary Table 11). One example involves eight links, derived from two pQTL studies and two mQTL studies, that connect bilirubin and biliverdin to BCHE (range of α̂: -0.1 to -0.15; range of P: 2 × 10⁻⁷ to 9 × 10⁻¹⁵). These might represent confounding by liver damage, which causes increases in both BCHE and bilirubin51. Alternatively, there is weak evidence from in vitro experiments and model organisms that bilirubin and biliverdin are related to cholinesterase activity52,53.

Integrating drugs and their effects into biological understanding

Using the OpenTargets resource, we matched drugs to their target proteins and assessed the downstream consequences of these proteins on metabolites54. Such an analysis provides insight into whether a drug has the intended effect on molecular phenotypes and can help identify whether perturbation of the gene that the drug targets has unintended side effects on metabolite levels12. Of the 50 unique proteins that MR-link-2 identifies as having at least one causal effect, 11 are targets of compounds that are either registered as medicines (n = 20) or are under investigation or have failed trials (n = 19) (Supplementary Table 12). One notable example is that MR-link-2 identifies a causal relationship between N-acetyl-putrescine and IL1RAP (α̂ = 0.10, P = 2.7 × 10⁻⁷), which is targeted by the drug spesolimab, used to treat psoriasis. Putrescine and other polyamines are present in psoriatic tissue, supporting the biological relevance of the estimated causal relationship55,56. It is worth noting that putrescine itself was not measured in the mQTL studies included in this work, and N-acetyl-putrescine is one reaction away from putrescine (KEGG R01154).
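The drug-matching step can be approximated with a simple table join. The sketch below assumes a locally downloaded OpenTargets known-drugs export and an MR results table; the file names and column names (protein, targetSymbol, drugName, clinicalPhase) are hypothetical placeholders for illustration and do not reflect the actual OpenTargets schema or the pipeline used here.

```python
import pandas as pd

# Hypothetical file and column names, for illustration only.
mr_hits = pd.read_csv("mr_link2_bonferroni_hits.csv")        # protein, metabolite, alpha, p
known_drugs = pd.read_csv("opentargets_known_drugs.csv")     # targetSymbol, drugName, clinicalPhase

matched = mr_hits.merge(known_drugs, how="inner",
                        left_on="protein", right_on="targetSymbol")

# one row per protein-drug combination, listing the metabolites it affects
summary = (matched.groupby(["protein", "drugName", "clinicalPhase"])["metabolite"]
                  .apply(list)
                  .reset_index())
print(summary.head())
```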
Discussion

In this study, we performed bidirectional MR between 213 metabolites and 506 enzymes to assess whether MR is suitable for identifying known biochemical reactions and for discovering putative novel enzymatic regulation of metabolites. In cis MR, all MR methods perform similarly and have high precision (35%-47%) and low recall (3.2%-4.6%), resulting in limited discriminative ability (AUC < 0.554). Extending our analyses to include genome-wide instrumental variables, we performed bidirectional MR between enzymes and metabolites. Discriminative ability remains low (AUC < 0.531), but we found a strong enrichment for direct pathway regulation among Bonferroni-significant meta-analyzed MR results.

We further focused on the metabolite-protein links identified by MR-link-2, a method that we developed, which showed high precision among the Bonferroni-significant results and, in previous work, a lower type 1 error rate and a better recall/precision trade-off than other methods16. We justify this focus because our aim was to understand how well MR can recover bona fide metabolite-enzyme relationships, rather than to compare methods. Manual review of 106 Bonferroni-significant enzyme-metabolite pairs identified 12 canonical reactions, eight reverse canonical reactions, four combinations with homologous evidence, six with pathway evidence and 23 with supporting evidence. For the remaining combinations we could not identify literature support. Finally, when matching 11 enzymes to therapeutics using OpenTargets, we identified a causal relationship between N-acetyl-putrescine and IL1RAP that aligns with the therapeutic effect of spesolimab, a drug used to treat psoriasis.

One important benefit of MR analysis is that the exposure and the outcome do not need to be measured in the same individuals11. This allows a phenotype measured in one cohort to be tested for causality against a phenotype measured in another cohort, which opens up a much broader set of potential causal relationships, such as bidirectional regulation between metabolites and enzymes or the downstream metabolic consequences of drugs. Still, we urge caution in the interpretation of these analyses: MR results still suffer from large false positive rates, and robust validation of regulatory pathway mechanisms would require targeted experimentation and case-control trials.

Indeed, there are substantial challenges to overcome before MR can be widely applied to study enzyme-metabolite links. An inherent limitation of large-scale metabolomics and proteomics studies coupled to the genotype information of the participants is that not all molecular species are measured, and even among those that are, not all are instrumentable in MR. A promising avenue is to use protein or metabolite ratios to capture intermediate pathway phenotypes57,58. These composite phenotypes are still understudied but could provide indications of unmeasured phenotypes or mechanisms in known pathways, even though analyzing ratio phenotypes comes with its own challenges59.

A second major challenge is tissue specificity. The measurements underlying this study are derived from blood, yet most of human metabolism is tissue-specific, with certain tissues being more important for certain pathways60. It is still unclear how well our results will translate to other tissues, although the limited sensitivity of our analysis already suggests that we will not be able to recover most elements of pathway regulation present in gold standard datasets.

There are also remaining unknowns about how to conduct an appropriate MR analysis; for instance, it is unclear how much improvement in MR performance can be expected from further increases in the sample sizes used to derive molecular QTLs. Power analyses for MR studies usually do not consider the locus selection mechanism and are hard to extend to pleiotropy-robust methods61,62, making it difficult to distinguish true negatives from links where MR is simply underpowered.
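As an aside on the sample-size question, a commonly used closed-form approximation for two-sample MR power with standardized continuous traits (in the spirit of refs. 61,62, not the power analysis performed here) is sketched below; it deliberately ignores instrument selection and pleiotropy, which is precisely the limitation discussed above.

```python
from scipy.stats import norm

def approx_mr_power(beta, n_outcome, r2_instruments, alpha=3.42e-7):
    """Approximate power to detect a standardized causal effect `beta` when
    the instruments explain `r2_instruments` of the exposure variance and
    the outcome GWAS has `n_outcome` samples (IVW, continuous traits)."""
    se = 1.0 / (n_outcome * r2_instruments) ** 0.5   # approximate SE of the IVW estimate
    z_crit = norm.isf(alpha / 2.0)                   # two-sided critical value
    ncp = abs(beta) / se                             # non-centrality parameter
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

# e.g., a modest effect, a strong cis instrument, and the study-wide threshold
print(round(approx_mr_power(beta=0.1, n_outcome=30_000, r2_instruments=0.02), 3))
```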
As study sample sizes increase, it is likely that more causal links from other tissues can be proxied by blood, albeit with weaker effects. Here, it is important to consider that more subtle QTLs need to be analyzed with pleiotropy-robust methods, as the mixture of different tissue effects can violate the exclusion restriction in MR. This study also highlights that the selection of suitable genetic regions for MR can be improved, as illustrated by the cis MR analysis, which had the highest precision and comparatively high recall, with the caveat that not all phenotype combinations can be tested. When extending to the cis+trans analysis, the recall (in terms of unique true positive findings) does not necessarily increase, even though some loci do provide correct inference. This indicates that placing biological priors on certain genomic regions may be the correct course of action to reduce the type 1 error rate.

Taken together, our study indicates that there is strong potential for the analysis of molecular pathways using observational causal inference techniques, as illustrated by our biologically informed findings. Such approaches are not only cost-effective, but their top-ranking metabolite-protein links are many-fold enriched for true associations, warranting experimental follow-up.

Declaration of Interest

The authors declare no competing interest.

Data and code availability

The summary statistics used in this study are available from their respective publications (Methods). The genotype information underlying the LD matrices for the UK10K data resource was downloaded from the EGA under accession IDs EGAD00001000740 and EGAD00001000741. As this is individual-level genotype data, a data access agreement is required for access. Conditions for this data access agreement can be found at

Author contributions

A.vd.G. and Z.K. conceptualized the study. A.vd.G. and S.R. performed data curation. A.vd.G. and Z.K. performed formal analysis, including the statistical and computational techniques used to synthesize and analyze the data. Z.K. acquired funding. A.vd.G., S.R., C.A. and Z.K. developed and designed the methodology. Z.K. provided computational resources. A.vd.G. and S.R. performed software development. A.vd.G. created the visualizations and wrote the original manuscript draft. A.vd.G., C.A., S.R. and Z.K. reviewed and edited the manuscript.

Supplementary Figures

Supplementary Figure 1: Causal estimates of enzymes onto products (left boxplot) and substrates (right boxplot) for different MR methods. P values are based on a Kruskal-Wallis test. The middle line of each boxplot is the median, boxes contain 75% of the datapoints, and whiskers contain 95% of the datapoints; outliers beyond these values are not shown.

Supplementary Figure 2: Discriminative ability of different MR methods from cis enzymes to their metabolites (489 positives, compared to 25,883 negatives).
Supplementary Figure 3: (a) Recall and (b) precision curves of the cis MR methods tested in this study, using only genetic instruments in the cis region of the focal protein (the region of the gene encoding the enzyme in our metabolic map); every valid comparison across studies is considered a single datapoint. (c) Set memberships for the true positives at each method's Bonferroni threshold. (d) Set memberships of false positives at each method's Bonferroni threshold. Abbreviations: (TP) true positive, (FP) false positive.

Supplementary Figure 4: Graphs of Bonferroni-significant cis MR results between enzymes and their reactants. Panel (a) depicts the results for MR-link-2 and panel (b) the results for MR-PCA. True positives from KEGG+Expasy and MetaCyc are shown in green; false positives are shown in gray.

Supplementary Figure 5: Precision and recall curves of MR methods when analyzing cis and non-cis regional results together in a bidirectional manner between enzymes and metabolites. (a) Recall and (b) precision depending on the chosen P value threshold ordering. (c) Set memberships for the true positives at each method's Bonferroni threshold. (d) Set memberships of false positives at each method's Bonferroni threshold.

Supplementary Table legends

Supplementary Table 1: All retrieved reactions from both the KEGG+Expasy and MetaCyc databases (database), with their respective reaction identifier (reaction_id), enzyme commission number (enzymes) and enzyme definition (definition).

Supplementary Table 2: All combined catalyzing proteins (gene name, Ensembl ID, UniProt ID) measured in the two protein quantitative trait locus studies (study, accession) harmonized in this study, as well as the origin of each enzyme.

Supplementary Table 3: All enzyme (exposure accession, exposure name) and metabolite (outcome accession, outcome name) combinations that we consider true positives in this study after harmonizing the ground truth datasets with the protein quantitative trait locus (QTL) and metabolite QTL studies, including the enzyme's reaction reference and dataset and whether the metabolite is a product or substrate.

Supplementary Table 4: Cis Mendelian randomization estimates per method (MR method), exposure (exposure accession, exposure name) and outcome (outcome accession) combination, with their respective MR estimate (alpha, se, p(alpha)), meta-data on the origin of the exposure and outcome (pQTL dataset, UniProt ID, metabolite dataset, metabolite HMDB ID), and whether the combination is considered causal based on our ground truth dataset (causality).

Supplementary Table 5: Area under the receiver operator characteristic curve (AUC) for cis Mendelian randomization estimates per MR method, combined with the number of positives and negatives in the dataset.
Supplementary Table 6: BRENDA annotations of Bonferroni-significant (P < 1.93 × 10⁻⁶) cis MR results per MR method (exposure, outcome, alpha, p(alpha)), their annotation in the ground truth dataset (causality), and their metabolite and protein names (Metabolite name, Protein name).

Supplementary Table 7: All cis+trans regional Mendelian randomization results per method (MR method), associated region, exposure (exposure accession, Protein name) and outcome (outcome accession, Metabolite name) combination, with their respective MR estimate (alpha, se, p(alpha)), meta-data on the origin of the exposure and outcome (protein dataset, UniProt ID, metabolite dataset, metabolite HMDB ID, Ensembl ID), and whether the combination is considered causal based on our ground truth dataset (causality).

Supplementary Table 8: All meta-analyzed Mendelian randomization results per method (MR method), exposure (exposure accession, Protein name) and outcome (outcome accession, Metabolite name) combination, with the number of meta-analyzed regions (n associated regions), their respective meta-analyzed MR estimate (weighted alpha, weighted se, weighted p(alpha)), meta-data on the origin of the exposure and outcome (protein dataset, UniProt ID, metabolite dataset, metabolite HMDB ID, Ensembl ID), and whether the combination is considered causal based on our ground truth dataset (causality).

Supplementary Table 9: Area under the receiver operator characteristic curve (AUC) for meta-analyzed Mendelian randomization (MR) estimates per MR method, combined with the number of positives and negatives in the dataset.

Supplementary Table 10: Enrichment results comparing Bonferroni-significant meta-analyzed results to all other results, listing per method (MR method) the baseline probability of identifying a metabolic reaction (baseline probability), the probability at Bonferroni significance (probability in Bonferroni significant), the resulting odds ratio and the Fisher exact test P value.

Supplementary Table 11: Bonferroni-significant MR-link-2 results with all supporting evidence. Causal relationships are classified according to whether there is evidence of a canonical reaction, reverse canonical evidence (the metabolite influences the enzyme), pathway evidence (the metabolite and enzyme share a pathway), homologous reaction evidence (the enzyme has been shown to interact with the metabolite based on evidence from homologous enzymes) or supporting evidence, when there is some evidence of a relationship but it is not as strong as the other types. Enzymes or metabolites might be measured in multiple studies, leading to multiple estimates (indicated by a '2x' or '3x' in the "Causal relationship" column).

Acknowledgements

We would like to thank all the study participants for their altruistic donations of their biological materials. Z.K. was funded by the Swiss National Science Foundation (SNSF 315230-219587).

References

1. Blanco, A. & Blanco, G. Chapter 8 - Enzymes. in Medical Biochemistry (Second Edition) (eds. Blanco, A. & Blanco, G.) 165-190 (Academic Press, 2022). doi:10.1016/B978-0-323-91599-1.00029-8.
2. McLelland, G.-L. et al. Identification of an alternative triglyceride biosynthesis pathway. Nature 621, 171-178 (2023).
3. Sorokina, M., Stam, M., Médigue, C., Lespinet, O. & Vallenet, D.
Profiling the orphan enzymes. Biology Direct 9, 10 (2014).
4. Zhou, M., Li, J., Xu, J., Zheng, L. & Xu, S. Exploring human CYP4 enzymes: Physiological roles, function in diseases and focus on inhibitors. Drug Discovery Today 28, 103560 (2023).
5. Falco, B. de, Giannino, F., Carteni, F., Mazzoleni, S. & Kim, D.-H. Metabolic flux analysis: a comprehensive review on sample preparation, analytical techniques, data analysis, computational modelling, and main application areas. RSC Adv. 12, 25528-25548 (2022).
6. Eales, J. M. et al. Uncovering genetic mechanisms of hypertension through multi-omic analysis of the kidney. Nature Genetics 53, 630-637 (2021).
7. Reed, J. L., Senger, R. S., Antoniewicz, M. R. & Young, J. D. Computational Approaches in Metabolic Engineering. BioMed Research International 2010, 207414 (2010).
8. Watson, E., Yilmaz, L. S. & Walhout, A. J. M. Understanding Metabolic Regulation at a Systems Level: Metabolite Sensing, Mathematical Predictions, and Model Organisms. Annual Review of Genetics 49, 553-575 (2015).
9. Berzuini, C., Guo, H., Burgess, S. & Bernardinelli, L. A Bayesian approach to Mendelian randomization with multiple pleiotropic variants. Biostatistics, doi:10.1093/biostatistics/kxy027.
10. Katan, M. Apolipoprotein E isoforms, serum cholesterol, and cancer. The Lancet 327, 507-508 (1986).
11. Burgess, S. & Thompson, S. G. Mendelian Randomization: Methods for Using Genetic Variants in Causal Estimation. (CRC Press, 2015).
12. Schmidt, A. F. et al. Genetic drug target validation using Mendelian randomisation. Nat Commun 11, 3255 (2020).
13. Sadler, M. C., Auwerx, C., Deelen, P. & Kutalik, Z. Multi-layered genetic approaches to identify approved drug targets. Cell Genomics 3, 100341 (2023).
14. Mackay, T. F. C. & Anholt, R. R. H. Pleiotropy, epistasis and the genetic architecture of quantitative traits. Nat Rev Genet 1-19 (2024). doi:10.1038/s41576-024-00711-3.
15. van der Graaf, A. et al. Mendelian randomization while jointly modeling cis genetics identifies causal relationships between gene expression and lipids. Nat Commun 11, 4930 (2020).
16. van der Graaf, A. et al. MR-link-2: pleiotropy robust cis Mendelian randomization validated in three independent reference datasets of causality. Nat Commun 16, 6112 (2025).
17. Sun, B. B. et al. Plasma proteomic associations with genetics and health in the UK Biobank. Nature 622, 329-338 (2023).
18. Ferkingstad, E. et al. Large-scale integration of the plasma proteome with genetics and disease. Nat Genet 53, 1712-1721 (2021).
19. Chen, Y. et al. Genomic atlas of the plasma metabolome prioritizes metabolites implicated in human diseases. Nat Genet 55, 44-53 (2023).
20. Borges, M. C. et al. Role of circulating polyunsaturated fatty acids on cardiovascular diseases risk: analysis using Mendelian randomization and fatty acid genetic association data from over 114,000 UK Biobank participants. BMC Medicine 20, 210 (2022).
21. Shin, S.-Y. et al. An atlas of genetic influences on human blood metabolites. Nature Genetics 46, 543-550 (2014).
22. Lotta, L. A. et al. A cross-platform approach identifies genetic regulators of human metabolism and health. Nat Genet 53, 54-64 (2021).
23. Bairoch, A. The ENZYME database in 2000. Nucleic Acids Research 28, 304-305 (2000).
24. International Union of Biochemistry and Molecular Biology, Nomenclature Committee. Enzyme Nomenclature 1992: Recommendations of the Nomenclature Committee of the International Union of Biochemistry and Molecular Biology on the Nomenclature and Classification of Enzymes. (Published for the International Union of Biochemistry and Molecular Biology by Academic Press, San Diego, 1992).
25. Caspi, R. et al. The MetaCyc database of metabolic pathways and enzymes - a 2019 update. Nucleic Acids Res 48, D445-D453 (2020).
26. Walter, K. et al. The UK10K project identifies rare variants in health and disease. Nature 526, 82-90 (2015).
27. Chang, C. C. et al. Second-generation PLINK: rising to the challenge of larger and richer datasets. Gigascience 4, 7 (2015).
28. Burgess, S., Zuber, V., Valdes-Marquez, E., Sun, B. B. & Hopewell, J. C. Mendelian randomization with fine-mapped genetic data: Choosing from large numbers of correlated instrumental variables. Genetic Epidemiology 41, 714-725 (2017).
29. Pedregosa, F. et al. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12, 2825-2830 (2011).
30. Amberger, J. S., Bocchini, C. A., Scott, A. F. & Hamosh, A. OMIM.org: leveraging knowledge across phenotype-gene relationships. Nucleic Acids Research 47, D1038-D1043 (2019).
31. Chang, A. et al. BRENDA, the ELIXIR core data resource in 2021: new developments and updates. Nucleic Acids Research 49, D498-D508 (2021).
32. Kanehisa, M. & Goto, S. KEGG: Kyoto Encyclopedia of Genes and Genomes. Nucleic Acids Research 28, 27-30 (2000).
33. Kanehisa, M., Furumichi, M., Sato, Y., Kawashima, M. & Ishiguro-Watanabe, M. KEGG for taxonomy-based analysis of pathways and genomes. Nucleic Acids Res 51, D587-D592 (2023).
34. Shimano, H. & Sato, R. SREBP-regulated lipid metabolism: convergent physiology - divergent pathophysiology. Nat Rev Endocrinol 13, 710-730 (2017).
35. Istrate, M. A., Nussler, A. K., Eichelbaum, M. & Burk, O. Regulation of CYP3A4 by pregnane X receptor: The role of nuclear receptors competing for response element binding. Biochemical and Biophysical Research Communications 393, 688-693 (2010).
36. Bylund, J., Kunz, T., Valmsen, K. & Oliw, E. H. Cytochromes P450 with bisallylic hydroxylation activity on arachidonic and linoleic acids studied with human recombinant enzymes and with human and rat liver microsomes. J Pharmacol Exp Ther 284, 51-60 (1998).
37. Cikes, D. et al. PCYT2-regulated lipid biosynthesis is critical to muscle health and ageing. Nat Metab 5, 495-515 (2023).
38. da Silva, R. P., Nissim, I., Brosnan, M. E. & Brosnan, J. T. Creatine synthesis: hepatic metabolism of guanidinoacetate and creatine in the rat in vitro and in vivo.
Am J Physiol Endocrinol Metab 296, E256-E261 (2009).
39. Galabova, D., Tuleva, B., Vasileva-Tonkova, E. & Christova, N. Purification and Properties of Alkaline Phosphatase with Protein Phosphatase Activity from Saccharomyces cerevisiae. Zeitschrift für Naturforschung C 55, 588-593 (2000).
40. Ali, A. T. et al. The relationship between alkaline phosphatase activity and intracellular lipid accumulation in murine 3T3-L1 cells and human preadipocytes. Analytical Biochemistry 354, 247-254 (2006).
41. Adler, L. Properties of alkaline phosphatase of the halotolerant yeast Debaryomyces hansenii. Biochimica et Biophysica Acta (BBA) - Enzymology 522, 113-121 (1978).
42. Pilla, C. et al. Kinetic studies on the inhibition of creatine kinase activity by branched-chain α-amino acids in the brain cortex of rats. International Journal of Developmental Neuroscience 21, 145-151 (2003).
43. Kazak, L. & Cohen, P. Creatine metabolism: energy homeostasis, immunity and cancer biology. Nat Rev Endocrinol 16, 421-436 (2020).
44. Zink, C. F. et al. Association of Missense Mutation in FOLH1 With Decreased NAAG Levels and Impaired Working Memory Circuitry and Cognition. Am J Psychiatry 177, 1129-1139 (2020).
45. Zhou, J., Neale, J. H., Pomper, M. G. & Kozikowski, A. P. NAAG peptidase inhibitors and their potential for diagnosis and therapy. Nat Rev Drug Discov 4, 1015-1026 (2005).
46. Chen, S. C.-C. et al. Liver Fat, Hepatic Enzymes, Alkaline Phosphatase and the Risk of Incident Type 2 Diabetes: A Prospective Study of 132,377 Adults. Sci Rep 7, 4649 (2017).
47. Fawley, J. & Gourlay, D. M. Intestinal alkaline phosphatase: a summary of its role in clinical disease. Journal of Surgical Research 202, 225-234 (2016).
48. Berg, J. M., Stryer, L., Tymoczko, J. & Gatto, G. Biochemistry. (WH Freeman, New York, 2019).
49. Boyd, J. T. et al. Elevated dietary ω-6 polyunsaturated fatty acids induce reversible peripheral nerve dysfunction that exacerbates comorbid pain conditions. Nat Metab 3, 762-773 (2021).
50. Vermonden, P. et al. Phospholipase PLA2G7 is complementary to GPX4 in mitigating punicic-acid-induced ferroptosis in prostate cancer cells. iScience 27, 109774 (2024).
51. Santarpia, L., Grandone, I., Contaldo, F. & Pasanisi, F. Butyrylcholinesterase as a prognostic marker: a review of the literature. J Cachexia Sarcopenia Muscle 4, 31-39 (2013).
52. Janampalli, M. et al. Choline supplementation mitigates effects of bilirubin in cerebellar granule neurons in vitro. Pediatr Res 1-7 (2024). doi:10.1038/s41390-023-02968-6.
53. Waddell, J., Rickman, N. C., He, M., Tang, N. & Bearer, C. F. Choline supplementation prevents the effects of bilirubin on cerebellar-mediated behavior in choline-restricted Gunn rat pups. Pediatr Res 89, 1414-1419 (2021).
54. Ghoussaini, M. et al. Open Targets Genetics: systematic identification of trait-associated genes using large-scale genetics and functional genomics. Nucleic Acids Research 49, D1311-D1320 (2021).
55. Ruseva, S., Popova, I., Lozanov, V. & Mitev, V.
Insight into the Metabolite Pattern of Psoriasis: Correlation among Homocysteine, Methionine, and Polyamines. Symmetry 13, 606 (2021).
56. Lowe, N. J., Breeding, J. & Russell, D. Cutaneous polyamines in psoriasis. British Journal of Dermatology 107, 21-26 (1982).
57. Suhre, K. Genetic associations with ratios between protein levels detect new pQTLs and reveal protein-protein interactions. Cell Genomics 4, 100506 (2024).
58. Petersen, A.-K. et al. On the hypothesis-free testing of metabolite ratios in genome-wide and metabolome-wide association studies. BMC Bioinformatics 13, 120 (2012).
59. McCaw, Z. R. et al. Pitfalls in performing genome-wide association studies on ratio traits. Preprint, doi:10.1101/2023.10.27.564385 (2024).
60. Robinson, J. L. et al. An atlas of human metabolism. Science Signaling 13, eaaz1482 (2020).
61. Deng, L., Zhang, H. & Yu, K. Power calculation for the general two-sample Mendelian randomization analysis. Genet Epidemiol 44, 290-299 (2020).
62. Brion, M.-J. A., Shakhbazov, K. & Visscher, P. M. Calculating statistical power in Mendelian randomization studies. Int J Epidemiol 42, 1497-1501 (2013).

Posted July 30, 2025. Mendelian randomization linking metabolites with enzymes reveals known and novel pathway regulation and therapeutic avenues. Adriaan van der Graaf, Sadegh Rizi, Chiara Auwerx, Zoltán Kutalik. medRxiv 2025.07.29.25332349. This article is a preprint and has not been peer-reviewed; it reports new medical research that has yet to be evaluated and so should not be used to guide clinical practice.
Therapies of Nonsense-Associated Diseases
===============

Madame Curie Bioscience Database [Internet]. Austin (TX): Landes Bioscience; 2000-2013. NCBI Bookshelf, a service of the National Library of Medicine, National Institutes of Health.

Kim M. Keeling, Ming Du, and David M. Bedwell. Corresponding Author: Department of Microbiology, BBRB 432/Box 8, 1530 3rd Avenue, South, The University of Alabama at Birmingham, Birmingham, Alabama 35294-2170, U.S.A. Email: [email protected]

A large number of diseases are caused by premature stop mutations that often lead to a complete loss of protein function and a severe reduction in mRNA levels due to nonsense-mediated mRNA decay (NMD). Two main approaches to develop treatments for diseases caused by premature stop mutations have been investigated. The first is to reduce the efficiency of translation termination through the use of pharmacological agents or by the expression of suppressor tRNAs. The second approach is to replace the premature stop mutation with wild-type sequence using gene repair techniques. Although each of these approaches has been demonstrated using in vitro studies or mouse models, currently only strategies using pharmacological agents to suppress stop mutations have reached preliminary clinical trials. The future of suppression therapy will require finding ways to increase the efficacy of current compounds to suppress premature stop mutations without side effects, or to design or discover safe new compounds that suppress premature stop mutations with increased efficiency. In addition, combined therapies that simultaneously suppress a premature stop mutation and inhibit NMD of the nonsense-containing mRNA may be the most effective way to increase the efficiency of suppression therapy.

Introduction

A large number of diseases including cystic fibrosis, Duchenne muscular dystrophy, β-thalassemia, and many types of cancers are caused by the presence of premature stop mutations in mRNAs. Premature stop mutations can arise as a result of mutations within germline or somatic DNA, inaccurate or inefficient pre-mRNA splicing, or improper RNA editing.
According to the Human Gene Mutation Database, 12% of all reported mutations are single-point mutations that result in a premature stop codon.1 If mutations that alter the translational reading frame, such as deletions, insertions, and splicing mutations, are also considered, premature stop mutations may be responsible for as many as one-third of all inherited genetic disorders or cancers.2 Furthermore, the disease phenotypes caused by premature stop mutations are frequently more severe than those that result from missense mutations, since premature stop mutations often result in a complete loss of protein function.

Suppression of Premature Stop Mutations

One approach to treat diseases that result from in-frame premature stop mutations is to reduce the efficiency of translation termination so that production of some full-length, functional protein is restored. Translation termination in eukaryotic cells occurs when one of the three stop codons, UAA (ochre), UAG (amber), or UGA (opal), enters the ribosomal A site. Stop codon recognition is not carried out by codon-anticodon interactions, since no tRNA anticodons are complementary to any of the stop codons. Rather, stop codon recognition is mediated by a protein known as eukaryotic release factor 1 (eRF1). eRF1 recognizes each of the three stop codons in the ribosomal A site,3,4 and RNA cross-linking studies suggest that this interaction is direct.5 Upon recognition of a stop codon, eRF1 transmits a signal to the ribosomal peptidyl transferase center that leads to release of the nascent polypeptide from the peptidyl-tRNA located in the ribosomal P site.6 Another eukaryotic release factor, eRF3, is a GTPase that binds eRF17 and facilitates the efficiency and accuracy of stop codon recognition.8

Under normal conditions, translation termination is a very efficient process with an estimated error rate of approximately 0.1%.9-12 However, eRF1 and near-cognate aminoacyl-tRNAs (aminoacyl-tRNAs with an anticodon complementary to two of the three nucleotides of the stop codon) normally compete for A site binding. Under certain conditions, the rate at which near-cognate aminoacyl-tRNAs successfully compete with eRF1 at a stop codon can be increased, resulting in incorporation of an amino acid carried by a near-cognate aminoacyl-tRNA into the nascent polypeptide.13 This process is termed "termination suppression" or "readthrough". In the case of a premature stop mutation, readthrough normally results in the continued elongation of the polypeptide chain in the correct reading frame and the production of full-length protein.

Obvious perturbations to the efficiency of translation termination include mutations in components of the translational machinery such as ribosomal proteins,14-19 ribosomal RNAs (rRNAs),20-23 termination factors,24-31 and aminoacyl-tRNAs.32-35 Interestingly, the identity of the stop codon itself also affects termination efficiency: generally, termination is most efficient at UAA stop codons, followed by the UAG and UGA stop codons. In addition, the sequence context both upstream and downstream of the stop codon influences termination efficiency.12,36-42 In particular, the first nucleotide downstream of the stop codon plays an important role in determining the efficiency of translation termination, and sequence analysis of natural stop codons has revealed a strong bias at that position in many species (including humans).
This observation led to the proposal that eRF1 may normally recognize a tetranucleotide termination signal.42 RNA cross-linking studies have confirmed that eRF1 contacts the first nucleotide following the stop codon.43

Aminoglycoside-Mediated Nonsense Suppression

The efficiency of translation termination can also be reduced through the action of a large class of structurally related antibiotics called aminoglycosides. Aminoglycosides bind to a region of the small subunit rRNA known as the decoding site44 that normally monitors proper codon-anticodon interactions. Several nucleotides in the decoding site act to probe the conformation of the codon-anticodon helix to ensure that tRNA selection is correct. When aminoglycosides bind to the decoding site, they induce a conformational change that reduces the ability of the rRNA to discriminate between cognate and near-cognate aminoacyl-tRNAs.45-50 This reduction in the accuracy of codon recognition increases the probability that translational misreading will occur, including the readthrough of stop codons.

Remarkably, the rRNA sequences that make up the prokaryotic and eukaryotic decoding sites are very similar (Fig. 1). One of the main differences lies within the major groove of the decoding site where aminoglycosides bind. A key residue for aminoglycoside binding to the prokaryotic decoding site, the A1408 nucleotide, is a G nucleotide in the corresponding position of the eukaryotic decoding site.51 Introduction of the A1408G mutation into the bacterial decoding site has been shown to significantly reduce the affinity for aminoglycoside binding.52 This suggests that this key nucleotide difference at least partially accounts for the specificity of aminoglycosides for the bacterial ribosome, resulting in their utility as antibiotics in humans.

Figure 1: Comparison of the decoding sites in E. coli 16S rRNA and human 18S rRNA. Residues in the E. coli decoding site that are protected from dimethyl sulfate (DMS) modification by paromomycin are circled. Residues that are protected from DMS modification …

However, several studies have shown that some aminoglycosides can also stimulate low levels of misreading that leads to termination suppression in eukaryotic systems.53-63 A recent yeast study revealed that aminoglycosides induced little or no misreading at sense codons, while the suppression of nonsense mutations was generally robust.64 This result suggests that stop codons are generally much more susceptible to aminoglycoside-induced misreading than sense codons in eukaryotes. This selectivity could be due to inherent differences in the fidelity of the elongation and termination processes.

The observations that aminoglycosides suppress premature stop codons in eukaryotic systems have led to a number of investigations to determine whether aminoglycosides could provide sufficient readthrough of premature stop mutations to suppress the phenotypes associated with human diseases. In most cases this question remains to be answered. However, it has been shown in mammalian cells that aminoglycosides can induce the suppression of nonsense mutations that cause many diseases, resulting in the restoration of low levels of functional protein.
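To make the competition between eRF1 and near-cognate aminoacyl-tRNAs described above more concrete, the toy calculation below treats readthrough as a simple kinetic competition at the A site and models an aminoglycoside merely as a factor that raises the near-cognate acceptance rate. The rate constants are invented for illustration; real termination kinetics involve many additional steps and the context effects discussed above.

```python
def readthrough_fraction(k_near_cognate, k_release, misreading_boost=1.0):
    """Toy competition model: the fraction of ribosomes that read through a
    stop codon is the near-cognate acceptance rate divided by the sum of that
    rate and the eRF1-mediated release rate. `misreading_boost` mimics an
    aminoglycoside lowering decoding fidelity. Illustrative only."""
    k_near = k_near_cognate * misreading_boost
    return k_near / (k_near + k_release)

basal = readthrough_fraction(k_near_cognate=1.0, k_release=999.0)     # ~0.1% basal readthrough
drugged = readthrough_fraction(k_near_cognate=1.0, k_release=999.0,
                               misreading_boost=20.0)
print(f"basal ~{basal:.1%}, with aminoglycoside ~{drugged:.1%}")
```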
Aminoglycosides have been shown to induce readthrough of nonsense mutations that cause cystic fibrosis,65-67 Duchenne muscular dystrophy,68,69 Hurler syndrome,70,71 infantile neuronal ceroid lipofuscinosis,72 cystinosis,73 X-linked nephrogenic diabetes insipidus,74 recessive spinal muscular atrophy,75 and polycystic kidney disease.76 They have also been shown to suppress nonsense mutations in the p5360 and ATM77 tumor suppressor genes.

Particularly promising results have been obtained using mouse models for both Duchenne muscular dystrophy (DMD) and cystic fibrosis (CF). A mouse model for Duchenne muscular dystrophy known as the mdx mouse carries a naturally occurring UAA nonsense mutation in the dystrophin gene. It was shown that the administration of gentamicin via subcutaneous injections resulted in the partial restoration of dystrophin protein in muscle tissue of the mdx mouse.69 In addition, muscle contraction assays demonstrated that gentamicin treatment restored enough functional dystrophin protein to significantly reduce the muscle contractile injury that is the hallmark of DMD. In another study, a transgenic CF mouse model that carried a UGA nonsense mutation in the cystic fibrosis transmembrane conductance regulator (CFTR) gene was used to examine whether subcutaneous injections of gentamicin could restore CFTR protein expression. Treated mice showed a partial restoration of CFTR protein expression by immunofluorescence (Fig. 2) and a partial restoration of cAMP-activated chloride channel activity.78 These findings indicate that the administration of gentamicin stimulated readthrough of the CFTR premature stop mutation, resulting in the production of functional CFTR protein.

Figure 2: Immunofluorescence staining of submucosal glands in the duodenum shows the appearance of cystic fibrosis transmembrane conductance regulator (CFTR) protein following gentamicin treatment. Samples from homozygous Cftr-/- hCFTR-G542X transgenic mice (harvested …)

Several small clinical trials have also administered aminoglycosides to CF or DMD patients who carry premature stop mutations to determine whether any restoration of protein function occurred. To date, three such trials with CF patients have been reported. In two of the trials, the administration of gentamicin via nasal droplets restored some CFTR activity in the nasal epithelia of CF patients with a CFTR nonsense mutation.79,80 In a third study, a partial restoration of CFTR function was detected in the nasal epithelium of CF patients who carried a CFTR nonsense mutation when gentamicin was administered intravenously.81 Decidedly more mixed results were obtained in two clinical trials in which DMD patients with nonsense mutations were administered intravenous gentamicin. In one trial with four DMD patients, no increase in dystrophin levels or physical improvement could be ascertained after aminoglycoside treatment.82,83 However, the results of another clinical trial were more promising, since three of the four treated patients showed a partial restoration of dystrophin protein expression.83

The chemical structure of aminoglycosides determines their ability to suppress nonsense mutations (Fig. 3). Only a subset of these compounds is effective at suppressing nonsense mutations in eukaryotes, and only three aminoglycosides from this group (gentamicin, amikacin, and tobramycin) are approved for internal human use.
In addition, the susceptibility of premature stop codons to aminoglycoside-mediated suppression depends on the identity of the stop codon and its surrounding mRNA sequence context.59,60 This codon and context dependence of aminoglycoside suppression is consistent with the results of a DMD clinical trial in which three DMD patients that carried UGA premature stop mutations showed a partial restoration of dystrophin expression after gentamicin treatment. However, no dystrophin could be detected in a patient with a UAA stop mutation after receiving the same gentamicin regimen.83 These results suggest that aminoglycoside suppression of a UGA stop codon may result in significantly more readthrough in patients than aminoglycoside suppression of a UAA codon, as previously concluded from in vitro studies.59,68,84 Thus, it may be necessary to alter the efficiency or stop codon specificity of nonsense suppression if a greater level of readthrough is to be obtained for diseases caused by certain stop mutations.

Figure 3. Structures of aminoglycosides commonly used in a clinical setting. The 2-deoxystreptamine ring is labeled ring II in each structure.

A major hurdle to the long-term use of aminoglycosides as a therapy to treat diseases caused by nonsense mutations is their toxicity, which can lead to kidney damage and hearing loss. However, the majority of these side effects do not appear to be due to their ability to induce translational misreading, but rather to other consequences related to their charged nature. Aminoglycosides are taken into cells via megalin, a multi-ligand endocytic receptor that is particularly abundant in the proximal tubules of the kidney and the hair cells of the inner ear.85 Upon entering kidney cells, the positively charged nature of aminoglycoside molecules promotes their binding to acidic phospholipids in the lysosomal membrane,85,86 which alters the activity of a number of enzymes. In addition, aminoglycosides have been shown to promote the generation of free radical species that lead to tissue damage. Approaches that may reduce aminoglycoside toxicity include: altering the route and duration of their administration;87,88 coadministering compounds such as antioxidants to circumvent free radical damage;89-92 and coadministering polyanions such as poly-L-aspartate93,94 or daptomycin95,96 to sequester aminoglycosides away from the lysosomal membrane.

Another drawback associated with the suppression of premature stop mutations using aminoglycosides is the potential suppression of native stop codons. If this occurred on a global basis, it would lead to the production of many proteins with an extended C-terminus that could result in misfolding and loss of protein function. However, a previous study found that human cells cultured in the presence of high concentrations of aminoglycosides exhibited only a small increase in the level of the Hsp70 molecular chaperone, suggesting that little protein misfolding occurred during aminoglycoside treatment.70 The apparent lack of global readthrough at normal stop codons could be explained in several ways. Evidence of an evolutionary bias toward natural stop codons and surrounding sequence context(s) that may represent the most efficient termination signals has been observed at the end of genes in many species.42 No such selection for resistance to readthrough would occur at premature stop mutations.
Thus, while the efficacy of aminoglycosides may be limited by the identity of the premature stop mutations and surrounding sequence context, these differences may also prevent readthrough at normal termination signals. In addition, multiple in-frame stop codons are frequently found at the end of mRNAs.97-100 The presence of multiple stop codons should dramatically reduce the ability of aminoglycosides to induce readthrough of normal termination signals. Furthermore, the termination complex formed at premature stop codons appears to differ from the complex formed at native stop codons at the end of an mRNA.101,102 This intriguing finding suggests that the ribosome may terminate translation less efficiently at premature stop codons than native stop codons, possibly because the interactions between the termination complex at a premature stop codon and other factors bound in the 3' untranslated region of an mRNA cannot occur in a normal manner (see chapter by Amrani and Jacobson for further details).

Other Pharmacological Compounds That Suppress Nonsense Mutations

Since many of the toxic side effects caused by aminoglycosides are not directly associated with their ability to suppress stop mutations, another way to avoid these problems is to identify new, safer classes of compounds that suppress stop mutations. One such compound that has been investigated is negamycin. Negamycin is a dipeptide antibiotic that interacts with the ribosomal decoding site, much like aminoglycosides, even though it is structurally unrelated to aminoglycosides. Negamycin was shown to suppress the dystrophin premature stop mutation in the mdx mouse model, and was reported to be less toxic than aminoglycosides.103 Another drug, PTC124, is a novel compound discovered by PTC Therapeutics, Inc. that has been shown to suppress nonsense mutations in cell culture and in animal models.104 These results suggest that developing new pharmaceutical agents that suppress premature stop mutations without inducing the toxic side effects associated with aminoglycosides may have great potential for future therapeutic use.

Suppression of Nonsense Mutations Using Suppressor tRNAs

Another means of suppressing nonsense mutations involves expressing suppressor tRNAs. In this approach, DNA encoding a tRNA with an anticodon complementary to a stop codon is introduced into cells. This type of mutant tRNA, referred to as a suppressor tRNA, can compete with the termination factor eRF1 much more effectively than a near-cognate tRNA, resulting in a significant increase in stop codon suppression. This approach has been shown to suppress premature stop mutations that cause β-thalassemia105 and Duchenne muscular dystrophy106 in mammalian cells. It has also been shown that the injection of DNA encoding a suppressor tRNA into the skeletal and heart muscles of a transgenic mouse expressing a reporter gene with a premature stop codon resulted in the suppression of the stop codon in vivo.107 Besides the potential suppression of natural stop codons that was discussed in a previous section, there are other significant drawbacks to this therapeutic approach. Although tRNA genes have strong RNA polymerase III promoters and the encoded tRNA molecules are generally stable after they are transcribed and undergo maturation,108 the efficient introduction and maintenance of tRNA genes in cells by gene therapy methods remains a challenge.
In addition, the termination signal to be suppressed and its surrounding context also affect the efficiency of suppressor tRNA-mediated suppression.108,109

Mutation Repair

Another novel approach that could be used to treat diseases caused by premature stop mutations is the repair of the mutation directly in the genome using an oligonucleotide-directed nucleotide exchange approach. Unlike nonsense suppression therapy, which can only act on premature stop mutations within the open reading frame, the use of antisense oligonucleotides also has the potential to repair various point mutations, small deletions, insertions, or splicing defects. In this method, a mutant DNA sequence is replaced by the wild-type sequence. Although a number of approaches have been designed to target DNA for mutation repair, including the use of ribozymes, group II introns, and triplex-forming oligonucleotides, single-stranded DNA currently appears to generate the most robust and reproducible gene repair. This exchange is directed by a double-stranded DNA chimeric oligonucleotide that is typically 70-80 nucleotides long. This chimera is synthesized as a single-stranded molecule that is designed with sequence complementarity such that it folds into a double-hairpin structure. The double-hairpin structure protects the molecule from nuclease digestion as well as concatenation.110 When these molecules are introduced into cells, the chimeric oligonucleotide hybridizes to its complementary sequence in the target gene except for the region of mismatch where the mutation lies. This mismatch is recognized by the cellular mismatch-repair system, which then catalyzes the replacement of the mutant nucleotides with the wild-type nucleotides, thus repairing the mutation in the DNA.

Gene repair of mutations has been demonstrated in yeast111,112 and in mammalian cell culture, where correction of mutations that caused sickle cell anemia,113 thalassemia,114 alkaline phosphatase deficiency,115 apolipoprotein A2-linked atherosclerosis,116 and epidermolysis bullosa simplex117 was accomplished. In addition, animal models of tyrosinemia,118 muscular dystrophy,119-122 hemophilia,123 and renal tubular acidosis124 have been used to demonstrate a partial correction of mutations and a restoration of some protein expression. The amount of normal protein restored in the various disease models varied from 0.5% to 20% of wild-type levels.110 This level of functional protein could improve the phenotype of patients with these (and many other) genetic diseases.

Although this approach to correct mutations and restore wild-type protein production in mammalian cells is promising, its development for clinical application is still a work in progress. There are several problems that currently prevent the implementation of this approach.125 First, the design of stable oligonucleotides capable of replacing the mutation efficiently can be difficult. Second, as with many gene therapy approaches, the introduction of oligonucleotides into the appropriate cell type is a major obstacle, and the efficiency of repair is cell-cycle dependent. Third, the percentage of corrected cells decreases with time. Finally, apoptosis has been observed in some cells that have undergone this form of targeted sequence alteration. Thus, further studies are required to determine the ultimate utility of this approach.
Suppression of NMD

Many strategies aimed at suppressing premature stop mutations could be compromised by the fact that mRNAs that contain premature stop mutations are often unstable, resulting in a severe reduction in their steady-state level. The reduced abundance of mRNAs that carry a stop mutation is due to the NMD pathway (see chapter by Maquat). Therefore, approaches that stabilize nonsense-containing mRNAs that are normally degraded by NMD should increase the steady-state amount of mRNA available for translation. This, in turn, could greatly enhance the level of protein produced by suppression therapy.

The NMD pathway and the NMD factors Upf1, Upf2, and Upf3 are conserved in eukaryotes ranging from yeast to humans (see chapters by Baker and Parker, Singh and Lykke-Andersen, Anderson, and Behm-Ansmant and Izaurralde). Several factors involved in mammalian NMD bind to mRNAs in the nucleus during transcription and the subsequent stages of mRNA processing. In particular, some NMD components assemble with the exon-junction complex (EJC), located approximately 20-24 nucleotides upstream of exon-exon junctions as a consequence of pre-mRNA splicing, and remain bound to the mRNA as it is exported to the cytoplasm126 (see chapter by Maquat). In human cells, Upf3 binds to nuclear mRNA-protein (mRNP) complexes as a component of the EJC. Upf2 binds to these complexes as they leave the nucleus, while Upf1 is thought to associate with the complex in the cytoplasm.127,128 According to current models, once the mRNA reaches the cytoplasm these bound nuclear factors are removed as the ribosome translates the mRNA during the initial or "pioneer round" of translation129,130 (see chapter by Maquat). If translation proceeds to the normal stop codon and all of the nuclear proteins are removed from the coding sequence during the pioneer round of translation, the transcript is remodeled to become a steady-state mRNP and NMD can no longer occur. However, if a premature stop codon is present in the mRNA, the ribosome will not remove any nuclear proteins distal to the premature stop codon during this initial round of translation. This causes the mRNA to be identified as faulty and results in its rapid degradation by the NMD pathway. Generally, NMD occurs if an mRNA molecule carries a premature stop codon that is ≥50 nucleotides upstream of the 3'-most exon-exon junction131 (a minimal sketch of this positional rule is given below).

The destabilization of a nonsense-containing mRNA by NMD requires active translation of that mRNA. This conclusion is supported by the observations that several proteins that function in the NMD pathway associate with polysomes,132-135 and that NMD can be inhibited by translation inhibitors such as cycloheximide and puromycin. In yeast, the NMD factors have been shown to associate with the translation termination factors eRF1 and eRF3136 (see chapters by Baker and Parker, and Amrani and Jacobson). Of the Upf factors, at least Upf1 associates with eRF1 and eRF3 in mammals (see chapters by Maquat, and Singh and Lykke-Andersen), also indicating that NMD and the process of translation termination are likely to be tightly linked.

It has also been shown that suppression of a premature stop codon can stabilize the mRNA. Overexpression of a suppressor tRNA can inhibit the degradation of nonsense-containing mRNAs in yeast and in mammalian cells,137,138 and suppression of premature stop codons by aminoglycosides has also been shown to stabilize nonsense-containing mRNAs in some cases.
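As a rough illustration of the positional rule described above, the following sketch (Python; the function name, threshold parameter, and example coordinates are hypothetical and not taken from the text) flags a premature termination codon as a likely NMD substrate when it lies 50 or more nucleotides upstream of the 3'-most exon-exon junction.

```python
def likely_nmd_substrate(stop_codon_start, exon_junctions, threshold=50):
    """Apply the positional ('50-nucleotide') rule described above.

    stop_codon_start : mRNA coordinate of the first base of the stop codon.
    exon_junctions   : mRNA coordinates of exon-exon junctions left by splicing.
    threshold        : minimum distance (nt) upstream of the 3'-most junction.

    Returns True if the stop codon lies >= `threshold` nt upstream of the
    3'-most exon-exon junction, i.e., the transcript is predicted to be
    degraded by NMD.
    """
    if not exon_junctions:        # intronless transcript: no downstream junction
        return False
    last_junction = max(exon_junctions)
    return (last_junction - stop_codon_start) >= threshold


# Hypothetical transcript with junctions at positions 300, 650, and 980.
print(likely_nmd_substrate(stop_codon_start=400, exon_junctions=[300, 650, 980]))  # True
print(likely_nmd_substrate(stop_codon_start=960, exon_junctions=[300, 650, 980]))  # False
```

This encodes only the positional heuristic stated in the text; actual NMD susceptibility depends on additional factors discussed in the chapters cited above.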
As an example of such stabilization, a 5-fold increase in the abundance of CFTR mRNA containing a premature stop codon was detected in a human bronchial epithelial cell line derived from a CF patient that carried the CFTR-W1282X premature stop mutation after incubation with the aminoglycoside G418 for 24 hours67 (Fig. 4). A 2-fold increase in mRNA was observed in human fibroblasts from a patient with Smith-Lemli-Opitz syndrome that carried a premature termination codon in the 7-dehydrocholesterol-delta 7-reductase (DHCR7) gene after G418 treatment.139 Finally, a 2-fold increase in mRNA was observed in human fibroblasts derived from a patient with Hurler syndrome that carried nonsense mutations in the α-L-iduronidase (IDUA) gene after gentamicin treatment (K.M. Keeling and D.M. Bedwell, unpublished data). These results are surprising since aminoglycosides only induced readthrough of these premature stop mutations at a low frequency of 1-20%. While it is possible that premature stop mutations are more susceptible to aminoglycoside-mediated suppression during the pioneer round of translation than in subsequent rounds of translation, the mechanism by which such a modest level of readthrough can inhibit NMD is currently unknown.

Figure 4. CFTR-W1282X mRNA abundance is increased in the IB3-1 bronchial epithelial cell line following G418 treatment. IB3-1 cells carry the W1282X premature stop mutation on one CFTR allele and the ΔF508 mutation on the other CFTR allele.

Certain factors in the NMD pathway have also been identified as potential targets for pharmacological inhibition of mRNA degradation by the NMD pathway. One such target is Upf1, which is essential for the NMD process and is conserved in all eukaryotes. Upf1 is a phosphoprotein, and changes in its phosphorylation state regulate its function in the NMD pathway.140,141 Several modifiers of the Upf1 phosphorylation state have been identified. For example, SMG1 is a kinase that phosphorylates at least two serine residues in Upf1 and stimulates the NMD process142-144 (see chapter by Yamashita et al). SMG1 is a member of the phosphatidylinositol 3-kinase-related protein kinase (PIKK) family, whose members are frequently inhibited by caffeine and wortmannin. Accordingly, these compounds were found to abrogate the degradation of the collagen VI α2 subunit mRNA by NMD in human fibroblasts, presumably by inhibiting SMG1 activity.145 The phosphorylation status of Upf1 is also influenced by SMG5, SMG6, and SMG7, which participate in the dephosphorylation of Upf1 and thus complete its phosphorylation-dephosphorylation cycle.146-149 These Upf1 modifiers also represent potential targets for pharmacological inhibition of NMD. Although the yeast Upf1 protein is also a phosphoprotein (K.M. Keeling and D.M. Bedwell, unpublished data), the kinase and phosphatase involved in its regulation have yet to be identified. Interestingly, a compound called diazaborine has been shown to stabilize aberrant mRNAs that are normally degraded by the NMD pathway in yeast.150 While the target of this compound is not yet known, it is possible that this compound could also target a component of the Upf1 phosphorylation cycle or some other step of yeast NMD.

It is important to note that the NMD pathway has additional functions other than degrading mRNAs that carry premature stop mutations (see chapters by Abraham and Oliveira, Azzalin et al, Kaygun and Marzluff, and Kim and Maquat).
Mutations in Upf1, Upf2, and Upf3 influence the abundance of many normal mRNA species in yeast (see chapter by He and Jacobson) and mammals (see chapters by Sharifi and Dietz, and Soergel et al).151,152 Factors in the NMD pathway may also act as checkpoints for RNA processing and nuclear export.153,154 Therefore, therapeutic approaches aimed at inhibition of the NMD pathway must be monitored carefully and optimized such that other vital cellular pathways are not adversely affected.

Future Development of Therapies for Nonsense-Associated Diseases

Even though the results of several clinical trials indicate that the suppression of nonsense mutations can partially restore protein function, none of these studies was designed in a manner that allowed the investigators to determine whether enough protein expression was restored to confer a therapeutic improvement in the disease phenotype. The threshold level of functional protein needed to alleviate a disease phenotype is unknown for most diseases. This is a complex issue, since the minimal level of functional protein needed to improve a particular disease phenotype will vary widely depending upon the structure and function of each protein, and the key tissue(s) in which each protein is required. For example, estimates of the amount of CFTR protein required to alleviate the CF disease phenotype range from 5%155 to 30%156 of wild-type CFTR levels. In contrast, as little as 1% of α-L-iduronidase can reduce the severity of the Hurler syndrome disease phenotype.157,158 Therefore, suppression therapy may hold more potential for treating some disorders than others.

The development of new compounds or approaches that can suppress nonsense mutations without the side effects currently associated with aminoglycosides shows great potential for the future of nonsense suppression therapy. New pharmacological targets could include regions of rRNA, ribosomal proteins, translation termination factors, and components of the NMD machinery. Future therapies for the treatment of human diseases caused by premature stop mutations may include treatments designed to correct defects occurring at multiple levels simultaneously. For example, combining the suppression of premature stop mutations with the inhibition of NMD could restore a significantly higher level of functional protein than with either approach alone, thus providing a much greater opportunity to alleviate a disease phenotype. Currently, more research on the basic mechanisms of translation termination and NMD is needed to provide a better understanding of potential targets for therapies to treat nonsense-associated diseases.

References

1. Krawczak M, Ball EV, Fenton I. et al. Human gene mutation database-a biomedical information and research resource. Hum Mutat. 2000;15(1):45–51. [PubMed: 10612821] 2. Frischmeyer PA, Dietz HC. Nonsense-mediated mRNA decay in health and disease. Hum Mol Genet. 1999;8(10):1893–1900. [PubMed: 10469842] 3. Kisselev L, Ehrenberg M, Frolova L. Termination of translation: Interplay of mRNA, rRNAs and release factors? EMBO J. 2003;22(2):175–182. [PMC free article: PMC140092] [PubMed: 12514123] 4. Bertram G, Innes S, Minella O. et al. Endless possibilities: Translation termination and stop codon recognition. Microbiology. 2001;147(Pt 2):255–269. [PubMed: 11158343] 5. Chavatte L, Frolova L, Kisselev L. et al. The polypeptide chain release factor eRF1 specifically contacts the s(4)UGA stop codon located in the A site of eukaryotic ribosomes. Eur J Biochem. 2001;268(10):2896–2904.
[PubMed: 11358506] 6. Frolova L, Le Goff X, Rasmussen HH. et al. A highly conserved eukaryotic protein family possessing properties of polypeptide chain release factor. Nature. 1994;372(6507):701–703. [PubMed: 7990965] 7. Frolova L, Le Goff X, Zhouravleva G. et al. Eukaryotic polypeptide chain release factor eRF3 is an eRF1- and ribosome-dependent guanosine triphosphatase. RNA. 1996;2(4):334–341. [PMC free article: PMC1369376] [PubMed: 8634914] 8. Salas-Marco J, Bedwell DM. GTP Hydrolysis by eRF3 facilitates stop codon decoding during eukaryotic translation termination. Mol Cell Biol. 2004;24(17):7769–7778. [PMC free article: PMC506980] [PubMed: 15314182] 9. Stansfield I, Jones KM, Herbert P. et al. Missense translation errors in Saccharomyces cerevisiae. J Mol Biol. 1998;282(1):13–24. [PubMed: 9733638] 10. Mori N, Funatsu Y, Hiruta K. et al. Analysis of translational fidelity of ribosomes with protamine messenger RNA as a template. Biochemistry. 1985;24(5):1231–1239. [PubMed: 4096904] 11. Loftfield RB, Vanderjagt D. The frequency of errors in protein biosynthesis. Biochem J. 1972;128(5):1353–1356. [PMC free article: PMC1174024] [PubMed: 4643706] 12. Bonetti B, Fu L, Moon J. et al. The efficiency of translation termination is determined by a synergistic interplay between upstream and downstream sequences in Saccharomyces cerevisiae. J Mol Biol. 1995;251(3):334–345. [PubMed: 7650736] 13. Fearon K, McClendon V, Bonetti B. et al. Premature translation termination mutations are efficiently suppressed in a highly conserved region of yeast Ste6p, a member of the ATP-binding cassette (ABC) transporter family. J Biol Chem. 1994;269(27):17802–17808. [PubMed: 7517933] 14. Zhang S, Ryden-Aulin M, Kirsebom LA. et al. Genetic implication for an interaction between release factor one and ribosomal protein L7/L12 in vivo. J Mol Biol. 1994;242(5):614–618. [PubMed: 7932718] 15. Van Dyke N, Xu W, Murgola EJ. Limitation of ribosomal protein L11 availability in vivo affects translation termination. J Mol Biol. 2002;319(2):329–339. [PubMed: 12051910] 16. Torres M, Condon C, Balada JM. et al. Ribosomal protein S4 is a transcription factor with properties remarkably similar to NusA, a protein involved in both nonribosomal and ribosomal RNA antitermination. EMBO J. 2001;20(14):3811–3820. [PMC free article: PMC125540] [PubMed: 11447122] 17. Tate WP, Dognin MJ, Noah M. et al. The NH2-terminal domain of Escherichia coli ribosomal protein L11. Its three-dimensional location and its role in the binding of release factors 1 and 2. J Biol Chem. 1984;259(11):7317–7324. [PubMed: 6373771] 18. Dahlgren A, Ryden-Aulin M. A novel mutation in ribosomal protein S4 that affects the function of a mutated RF1. Biochimie. 2000;82(8):683–691. [PubMed: 11018284] 19. Brot N, Tate WP, Caskey CT. et al. The requirement for ribosomal proteins L7 and L12 in peptide-chain termination. Proc Natl Acad Sci USA. 1974;71(1):89–92. [PMC free article: PMC387938] [PubMed: 4589896] 20. Velichutina IV, Hong JY, Mesecar AD. et al. Genetic interaction between yeast Saccharomyces cerevisiae release factors and the decoding region of 18 S rRNA. J Mol Biol. 2001;305(4):715–727. [PubMed: 11162087] 21. Velichutina IV, Dresios J, Hong JY. et al. Mutations in helix 27 of the yeast Saccharomyces cerevisiae 18S rRNA affect the function of the decoding center of the ribosome. RNA. 2000;6(8):1174–1184. [PMC free article: PMC1369991] [PubMed: 10943896] 22. Liu R, Liebman SW. 
A translational fidelity mutation in the universally conserved sarcin/ricin domain of 25S yeast ribosomal RNA. RNA. 1996;2(3):254–263. [PMC free article: PMC1369368] [PubMed: 8608449] 23. Chernoff YO, Vincent A, Liebman SW. Mutations in eukaryotic 18S ribosomal RNA affect translational fidelity and resistance to aminoglycoside antibiotics. EMBO J. 1994;13(4):906–913. [PMC free article: PMC394890] [PubMed: 8112304] 24. Zadorskii SP, Borkhsenius AS, Sopova Iu V. et al. Suppression of nonsense and frameshift mutations obtained by different methods for inactivating the translation termination factor eRF3 in yeast Saccharomyces cerevisiae. Genetika. 2003;39(4):489–494. [PubMed: 12760248] 25. Wakem LP, Sherman F. Isolation and characterization of omnipotent suppressors in the yeast Saccharomyces cerevisiae. Genetics. 1990;124(3):515–522. [PMC free article: PMC1203945] [PubMed: 2179051] 26. Vincent A, Newnam G, Liebman SW. The yeast translational allosuppressor, SAL6: A new member of the PP1-like phosphatase family with a long serine-rich N-terminal extension. Genetics. 1994;138(3):597–608. [PMC free article: PMC1206211] [PubMed: 7851758] 27. Vincent A, Liebman SW. The yeast omnipotent suppressor SUP46 encodes a ribosomal protein which is a functional and structural homolog of the Escherichia coli S4 ram protein. Genetics. 1992;132(2):375–386. [PMC free article: PMC1205143] [PubMed: 1427034] 28. Ono B, Tanaka M, Awano I. et al. Two new loci that give rise to dominant omnipotent suppressors in Saccharomyces cerevisiae. Curr Genet. 1989;16(5-6):323–330. [PubMed: 2692850] 29. Liebman SW, Cavenagh M. An antisuppressor that acts on omnipotent suppressors in yeast. Genetics. 1980;95(1):49–61. [PMC free article: PMC1214221] [PubMed: 7000618] 30. Kulikov VN, Tikhodeev ON, Forafonov FS. et al. Suppression of frameshift mutation as a result of partial inactivation of translation termination factors in Saccharomyces cerevisiae yeast. Genetika. 2001;37(5):602–609. [PubMed: 11436550] 31. Carr-Schmid A, Valente L, Loik VI. et al. Mutations in elongation factor 1beta, a guanine nucleotide exchange factor, enhance translational fidelity. Mol Cell Biol. 1999;19(8):5257–5266. [PMC free article: PMC84369] [PubMed: 10409717] 32. Valle RP, Morch MD, Haenni AL. Novel amber suppressor tRNAs of mammalian origin. EMBO J. 1987;6(10):3049–3055. [PMC free article: PMC553742] [PubMed: 3691479] 33. Vacher J, Grosjean H, de Henau S. et al. Construction of a UGA suppressor tRNA by modification in vitro of yeast tRNACys. Eur J Biochem. 1984;138(1):77–81. [PubMed: 6363071] 34. Kuchino Y, Beier H, Akita N. et al. Natural UAG suppressor glutamine tRNA is elevated in mouse cells infected with Moloney murine leukemia virus. Proc Natl Acad Sci USA. 1987;84(9):2668–2672. [PMC free article: PMC304719] [PubMed: 3472229] 35. Beier H, Grimm M. Misreading of termination codons in eukaryotes by natural nonsense suppressor tRNAs. Nucleic Acids Res. 2001;29(23):4767–4782. [PMC free article: PMC96686] [PubMed: 11726686] 36. Tork S, Hatin I, Rousset JP. et al. The major 5' determinant in stop codon read-through involves two adjacent adenines. Nucleic Acids Res. 2004;32(2):415–421. [PMC free article: PMC373328] [PubMed: 14736996] 37. Namy O, Hatin I, Rousset JP. Impact of the six nucleotides downstream of the stop codon on translation termination. EMBO Rep. 2001;2(9):787–793. [PMC free article: PMC1084031] [PubMed: 11520858] 38. Mottagui-Tabar S, Tuite MF, Isaksson LA. 
The influence of 5' codon context on translation termination in Saccharomyces cerevisiae. Eur J Biochem. 1998;257(1):249–254. [PubMed: 9799126] 39. McCaughan KK, Brown CM, Dalphin ME. et al. Translational termination efficiency in mammals is influenced by the base following the stop codon. Proc Natl Acad Sci USA. 1995;92(12):5431–5435. [PMC free article: PMC41708] [PubMed: 7777525] 40. Harrell L, Melcher U, Atkins JF. Predominance of six different hexanucleotide recoding signals 3' of read-through stop codons. Nucleic Acids Res. 2002;30(9):2011–2017. [PMC free article: PMC113845] [PubMed: 11972340] 41. Cassan M, Rousset JP. UAG readthrough in mammalian cells: Effect of upstream and downstream stop codon contexts reveal different signals. BMC Mol Biol. 2001;2(1):3. [PMC free article: PMC29092] [PubMed: 11242562] 42. Brown CM, Stockwell PA, Trotman CN. et al. Sequence analysis suggests that tetra-nucleotides signal the termination of protein synthesis in eukaryotes. Nucleic Acids Res. 1990;18(21):6339–6345. [PMC free article: PMC332501] [PubMed: 2123028] 43. Bulygin KN, Repkova MN, Ven'yaminova AG. et al. Positioning of the mRNA stop signal with respect to polypeptide chain release factors and ribosomal proteins in 80S ribosomes. FEBS Lett. 2002;514(1):96–101. [PubMed: 11904189] 44. Moazed D, Noller HF. Transfer RNA shields specific nucleotides in 16S ribosomal RNA from attack by chemical probes. Cell. 1986;47(6):985–994. [PubMed: 2430725] 45. Yoshizawa S, Fourmy D, Puglisi JD. Structural origins of gentamicin antibiotic action. EMBO J. 1998;17(22):6437–6448. [PMC free article: PMC1170992] [PubMed: 9822590] 46. Vicens Q, Westhof E. Crystal structure of paromomycin docked into the eubacterial ribosomal decoding A site. Structure (Camb). 2001;9(8):647–658. [PubMed: 11587639] 47. Recht MI, Fourmy D, Blanchard SC. et al. RNA sequence determinants for aminoglycoside binding to an A-site rRNA model oligonucleotide. J Mol Biol. 1996;262(4):421–436. [PubMed: 8893854] 48. Fourmy D, Yoshizawa S, Puglisi JD. Paromomycin binding induces a local conformational change in the A-site of 16 S rRNA. J Mol Biol. 1998;277(2):333–345. [PubMed: 9514734] 49. Fourmy D, Recht MI, Puglisi JD. Binding of neomycin-class aminoglycoside antibiotics to the A-site of 16 S rRNA. J Mol Biol. 1998;277(2):347–362. [PubMed: 9514735] 50. Fourmy D, Recht MI, Blanchard SC. et al. Structure of the A site of Escherichia coli 16S ribosomal RNA complexed with an aminoglycoside antibiotic. Science. 1996;274(5291):1367–1371. [PubMed: 8910275] 51. Van de Peer Y, Van den Broeck I, De Rijk P. et al. Database on the structure of small ribosomal subunit RNA. Nucleic Acids Res. 1994;22(17):3488–3494. [PMC free article: PMC308309] [PubMed: 7524022] 52. Recht MI, Douthwaite S, Puglisi JD. Basis for prokaryotic specificity of action of aminoglycoside antibiotics. EMBO J. 1999;18(11):3133–3138. [PMC free article: PMC1171394] [PubMed: 10357824] 53. Wilhelm JM, Pettitt SE, Jessop JJ. Aminoglycoside antibiotics and eukaryotic protein synthesis: Structure-function relationships in the stimulation of misreading with a wheat embryo system. Biochemistry. 1978;17(7):1143–1149. [PubMed: 656378] 54. Wilhelm JM, Jessop JJ, Pettitt SE. Aminoglycoside antibiotics and eukaryotic protein synthesis: Stimulation of errors in the translation of natural messengers in extracts of cultured human cells. Biochemistry. 1978;17(7):1149–1153. [PubMed: 656379] 55. Stahl G, Bidou L, Rousset JP. et al. 
Versatile vectors to study recoding: Conservation of rules between yeast and mammalian cells. Nucleic Acids Res. 1995;23(9):1557–1560. [PMC free article: PMC306897] [PubMed: 7784210] 56. Singh A, Ursic D, Davies J. Phenotypic suppression and misreading Saccharomyces cerevisiae. Nature. 1979;277(5692):146–148. [PubMed: 366438] 57. Palmer E, Wilhelm JM, Sherman F. Phenotypic suppression of nonsense mutants in yeast by aminoglycoside antibiotics. Nature. 1979;277(5692):148–150. [PubMed: 366439] 58. Palmer E, Wilhelm JM. Mistranslation in a eucaryotic organism. Cell. 1978;13(2):329–334. [PubMed: 75070] 59. Manuvakhova M, Keeling K, Bedwell DM. Aminoglycoside antibiotics mediate context-dependent suppression of termination codons in a mammalian translation system. RNA. 2000;6(7):1044–1055. [PMC free article: PMC1369979] [PubMed: 10917599] 60. Keeling KM, Bedwell DM. Clinically relevant aminoglycosides can suppress disease-associated premature stop mutations in the IDUA and p53 cDNAs in a mammalian translation system. J Mol Med. 2002;80(6):367–376. [PubMed: 12072912] 61. Grentzmann G, Ingram JA, Kelly PJ. et al. A dual-luciferase reporter system for studying recoding signals. RNA. 1998;4(4):479–486. [PMC free article: PMC1369633] [PubMed: 9630253] 62. Firoozan M, Grant CM, Duarte JA. et al. Quantitation of readthrough of termination codons in yeast using a novel gene fusion assay. Yeast. 1991;7(2):173–183. [PubMed: 1905859] 63. Burke JF, Mogg AE. Suppression of a nonsense mutation in mammalian cells in vivo by the aminoglycoside antibiotics G-418 and paromomycin. Nucleic Acids Res. 1985;13(17):6265–6272. [PMC free article: PMC321951] [PubMed: 2995924] 64. Salas-Marco J, Bedwell DM. Discrimination between defects in elongation fidelity and termination efficiency provides mechanistic insights into translational readthrough. J Mol Biol. 2005;348(4):801–815. [PubMed: 15843014] 65. Zsembery A, Jessner W, Sitter G. et al. Correction of CFTR malfunction and stimulation of Ca-activated Cl channels restore HCO3- secretion in cystic fibrosis bile ductular cells. Hepatology. 2002;35(1):95–104. [PubMed: 11786964] 66. Howard M, Frizzell RA, Bedwell DM. Aminoglycoside antibiotics restore CFTR function by overcoming premature stop mutations. Nat Med. 1996;2(4):467–469. [PubMed: 8597960] 67. Bedwell DM, Kaenjak A, Benos DJ. et al. Suppression of a CFTR premature stop mutation in a bronchial epithelial cell line. Nat Med. 1997;3(11):1280–1284. [PubMed: 9359706] 68. Howard MT, Shirts BH, Petros LM. et al. Sequence specificity of aminoglycoside-induced stop codon readthrough: Potential implications for treatment of Duchenne muscular dystrophy. Ann Neurol. 2000;48(2):164–169. [PubMed: 10939566] 69. Barton-Davis ER, Cordier L, Shoturma DI. et al. Aminoglycoside antibiotics restore dystrophin function to skeletal muscles of mdx mice. J Clin Invest. 1999;104(4):375–381. [PMC free article: PMC481050] [PubMed: 10449429] 70. Keeling KM, Brooks DA, Hopwood JJ. et al. Gentamicin-mediated suppression of Hurler syndrome stop mutations restores a low level of alpha-L-iduronidase activity and reduces lysosomal glycosaminoglycan accumulation. Hum Mol Genet. 2001;10(3):291–299. [PubMed: 11159948] 71. Hein LK, Bawden M, Muller VJ. et al. alpha-L-iduronidase premature stop codons and potential read-through in mucopolysaccharidosis type I patients. J Mol Biol. 2004;338(3):453–462. [PubMed: 15081804] 72. Sleat DE, Sohar I, Gin RM. et al. 
Aminoglycoside-mediated suppression of nonsense mutations in late infantile neuronal ceroid lipofuscinosis. Eur J Paediatr Neurol. 2001;5(Suppl A):57–62. [PubMed: 11589009] 73. Helip-Wooley A, Park MA, Lemons RM. et al. Expression of CTNS alleles: Subcellular localization and aminoglycoside correction in vitro. Mol Genet Metab. 2002;75(2):128–133. [PubMed: 11855931] 74. Schulz A, Sangkuhl K, Lennert T. et al. Aminoglycoside pretreatment partially restores the function of truncated V(2) vasopressin receptors found in patients with nephrogenic diabetes insipidus. J Clin Endocrinol Metab. 2002;87(11):5247–5257. [PubMed: 12414899] 75. Sossi V, Giuli A, Vitali T. et al. Premature termination mutations in exon 3 of the SMN1 gene are associated with exon skipping and a relatively mild SMA phenotype. Eur J Hum Genet. 2001;9(2):113–120. [PubMed: 11313744] 76. Aguiari G, Banzi M, Gessi S. et al. Deficiency of polycystin-2 reduces Ca2+ channel activity and cell proliferation in ADPKD lymphoblastoid cells. FASEB J. 2004;18(7):884–886. [PubMed: 15001556] 77. Lai CH, Chun HH, Nahas SA. et al. Correction of ATM gene function by aminoglycoside-induced read-through of premature termination codons. Proc Natl Acad Sci USA. 2004;101(44):15676–15681. [PMC free article: PMC524838] [PubMed: 15498871] 78. Du M, Jones JR, Lanier J. et al. Aminoglycoside suppression of a premature stop mutation in a Cftr-/- mouse carrying a human CFTR-G542X transgene. J Mol Med. 2002;80(9):595–604. [PubMed: 12226741] 79. Wilschanski M, Yahav Y, Yaacov Y. et al. Gentamicin-induced correction of CFTR function in patients with cystic fibrosis and CFTR stop mutations. N Engl J Med. 2003;349(15):1433–1441. [PubMed: 14534336] 80. Wilschanski M, Famini C, Blau H. et al. A pilot study of the effect of gentamicin on nasal potential difference measurements in cystic fibrosis patients carrying stop mutations. Am J Respir Crit Care Med. 2000;161(3 Pt 1):860–865. [PubMed: 10712334] 81. Clancy JP, Bebok Z, Ruiz F. et al. Evidence that systemic gentamicin suppresses premature stop mutations in patients with cystic fibrosis. Am J Respir Crit Care Med. 2001;163(7):1683–1692. [PubMed: 11401894] 82. Wagner KR, Hamed S, Hadley DW. et al. Gentamicin treatment of Duchenne and Becker muscular dystrophy due to nonsense mutations. Ann Neurol. 2001;49(6):706–711. [PubMed: 11409421] 83. Politano L, Nigro G, Nigro V. et al. Gentamicin administration in Duchenne patients with premature stop codon. Preliminary results. Acta Myol. 2003;22(1):15–21. [PubMed: 12966700] 84. Howard MT, Anderson CB, Fass U. et al. Readthrough of dystrophin stop codon mutations induced by aminoglycosides. Ann Neurol. 2004;55(3):422–426. [PubMed: 14991821] 85. Nagai J, Takano M. Molecular aspects of renal handling of aminoglycosides and strategies for preventing the nephrotoxicity. Drug Metab Pharmacokinet. 2004;19(3):159–170. [PubMed: 15499183] 86. Mingeot-Leclercq MP, Tulkens PM. Aminoglycosides: Nephrotoxicity. Antimicrob Agents Chemother. 1999;43(5):1003–1012. [PMC free article: PMC89104] [PubMed: 10223907] 87. Beauchamp D, Labrecque G. Aminoglycoside nephrotoxicity: Do time and frequency of administration matter? Curr Opin Crit Care. 2001;7(6):401–408. [PubMed: 11805542] 88. Bartal C, Danon A, Schlaeffer F. et al. Pharmacokinetic dosing of aminoglycosides: A controlled trial. Am J Med. 2003;114(3):194–198. [PubMed: 12637133] 89. Sener G, Sehirli AO, Altunbas HZ. et al. Melatonin protects against gentamicin-induced nephrotoxicity in rats. J Pineal Res. 2002;32(4):231–236. 
[PubMed: 11982792] 90. Nakashima T, Teranishi M, Hibi T. et al. Vestibular and cochlear toxicity of aminoglycosides - A review. Acta Otolaryngol. 2000;120(8):904–911. [PubMed: 11200584] 91. Mazzon E, Britti D, De Sarro A. et al. Effect of N-acetylcysteine on gentamicin-mediated nephropathy in rats. Eur J Pharmacol. 2001;424(1):75–83. [PubMed: 11470263] 92. Kawamoto K, Sha SH, Minoda R. et al. Antioxidant gene therapy can protect hearing and hair cells from ototoxicity. Mol Ther. 2004;9(2):173–181. [PubMed: 14759801] 93. Gilbert DN, Wood CA, Kohlhepp SJ. et al. Polyaspartic acid prevents experimental aminoglycoside nephrotoxicity. J Infect Dis. 1989;159(5):945–953. [PubMed: 2651534] 94. Beauchamp D, Laurent G, Maldague P. et al. Protection against gentamicin-induced early renal alterations (phospholipidosis and increased DNA synthesis) by coadministration of poly-L-aspartic acid. J Pharmacol Exp Ther. 1990;255(2):858–866. [PubMed: 2243354] 95. Thibault N, Grenier L, Simard M. et al. Attenuation by daptomycin of gentamicin-induced experimental nephrotoxicity. Antimicrob Agents Chemother. 1994;38(5):1027–1035. [PMC free article: PMC188145] [PubMed: 8067733] 96. Thibault N, Grenier L, Simard M. et al. Protection against gentamicin nephrotoxicity by daptomycin in nephrectomized rats. Life Sci. 1995;56(22):1877–1887. [PubMed: 7746096] 97. Major LL, Edgar TD, Yee Yip P. et al. Tandem termination signals: Myth or reality? FEBS Lett. 2002;514(1):84–89. [PubMed: 11904187] 98. Dalphin ME, Stockwell PA, Tate WP. et al. TransTerm, the translational signal database, extended to include full coding sequences and untranslated regions. Nucleic Acids Res. 1999;27(1):293–294. [PMC free article: PMC148161] [PubMed: 9847206] 99. Dalphin ME, Brown CM, Stockwell PA. et al. The translational signal database, TransTerm, is now a relational database. Nucleic Acids Res. 1998;26(1):335–337. [PMC free article: PMC147192] [PubMed: 9399869] 100. Brown CM, Dalphin ME, Stockwell PA. et al. The translational termination signal database. Nucleic Acids Res. 1993;21(13):3119–3123. [PMC free article: PMC309741] [PubMed: 8332534] 101. Sachs MS, Wang Z, Gaba A. et al. Toeprint analysis of the positioning of translation apparatus components at initiation and termination codons of fungal mRNAs. Methods. 2002;26(2):105–114. [PubMed: 12054887] 102. Amrani N, Ganesan R, Kervestin S. et al. A faux 3'-UTR promotes aberrant termination and triggers nonsense-mediated mRNA decay. Nature. 2004;432(7013):112–118. [PubMed: 15525991] 103. Arakawa M, Shiozuka M, Nakayama Y. et al. Negamycin restores dystrophin expression in skeletal and cardiac muscles of mdx mice. J Biochem (Tokyo). 2003;134(5):751–758. [PubMed: 14688241] 104. Hirawat S, Northcutt VJ, Welch EM. et al. Phase 1 safety and PK study of PTC124 for nonsense-mutation suppression therapy of cystic fibrosis. Pediatric Pulmonology. 2004;38(S27):248. 105. Temple GF, Dozy AM, Roy KL. et al. Construction of a functional human suppressor tRNA gene: An approach to gene therapy for beta-thalassaemia. Nature. 1982;296(5857):537–540. [PubMed: 6803169] 106. Kiselev AV, Ostapenko OV, Rogozhkina EV. et al. Suppression of nonsense mutations in the Dystrophin gene by a suppressor tRNA gene. Mol Biol (Mosk). 2002;36(1):43–47. [PubMed: 11862712] 107. Buvoli M, Buvoli A, Leinwand LA. Suppression of nonsense mutations in cell culture and mice by multimerized suppressor tRNA genes. Mol Cell Biol. 2000;20(9):3116–3124. [PMC free article: PMC85606] [PubMed: 10757796] 108. Atkinson J, Martin R. 
Mutations to nonsense codons in human genetic disease: Implications for gene therapy by nonsense suppressor tRNAs. Nucleic Acids Res. 1994;22(8):1327–1334. [PMC free article: PMC307985] [PubMed: 8190621] 109. Phillips-Jones MK, Watson FJ. et al. The 3' codon context effect on UAG suppressor tRNA is different in Escherichia coli and human cells. J Mol Biol. 1993;233(1):1–6. [PubMed: 8377179] 110. Kmiec EB. Targeted gene repair - In the arena. J Clin Invest. 2003;112(5):632–636. [PMC free article: PMC182220] [PubMed: 12952907] 111. Rice MC, Czymmek K, Kmiec EB. The potential of nucleic acid repair in functional genomics. Nat Biotechnol. 2001;19(4):321–326. [PubMed: 11283588] 112. Liu L, Rice MC, Kmiec EB. In vivo gene repair of point and frameshift mutations directed by chimeric RNA/DNA oligonucleotides and modified single-stranded oligonucleotides. Nucleic Acids Res. 2001;29(20):4238–4250. [PMC free article: PMC60207] [PubMed: 11600713] 113. Cole-Strauss A, Yoon K, Xiang Y. et al. Correction of the mutation responsible for sickle cell anemia by an RNA-DNA oligonucleotide. Science. 1996;273(5280):1386–1389. [PubMed: 8703073] 114. Dominski Z, Kole R. Restoration of correct splicing in thalassemic pre-mRNA by antisense oligonucleotides. Proc Natl Acad Sci USA. 1993;90(18):8673–8677. [PMC free article: PMC47420] [PubMed: 8378346] 115. Kren BT, Cole-Strauss A, Kmiec EB. et al. Targeted nucleotide exchange in the alkaline phosphatase gene of HuH-7 cells mediated by a chimeric RNA/DNA oligonucleotide. Hepatology. 1997;25(6):1462–1468. [PubMed: 9185769] 116. Tagalakis AD, Graham IR, Riddell DR. et al. Gene correction of the apolipoprotein (Apo) E2 phenotype to wild-type ApoE3 by in situ chimeraplasty. J Biol Chem. 2001;276(16):13226–13230. [PubMed: 11278248] 117. D'Alessandro M, Morley SM, Ogden PH. et al. Functional improvement of mutant keratin cells on addition of desmin: An alternative approach to gene therapy for dominant diseases. Gene Ther. 2004;11(16):1290–1295. [PubMed: 15215887] 118. Alexeev V, Yoon K. Stable and inheritable changes in genotype and phenotype of albino melanocytes induced by an RNA-DNA oligonucleotide. Nat Biotechnol. 1998;16(13):1343–1346. [PubMed: 9853616] 119. Rando TA, Disatnik MH, Zhou LZ. Rescue of dystrophin expression in mdx mouse muscle by RNA/DNA oligonucleotides. Proc Natl Acad Sci USA. 2000;97(10):5363–5368. [PMC free article: PMC25834] [PubMed: 10805797] 120. Mann CJ, Honeyman K, Cheng AJ. et al. Antisense-induced exon skipping and synthesis of dystrophin in the mdx mouse. Proc Natl Acad Sci USA. 2001;98(1):42–47. [PMC free article: PMC14541] [PubMed: 11120883] 121. Lu QL, Mann CJ, Lou F. et al. Functional amounts of dystrophin produced by skipping the mutated exon in the mdx dystrophic mouse. Nat Med. 2003;9(8):1009–1014. [PubMed: 12847521] 122. Bertoni C, Rando TA. Dystrophin gene repair in mdx muscle precursor cells in vitro and in vivo mediated by RNA-DNA chimeric oligonucleotides. Hum Gene Ther. 2002;13(6):707–718. [PubMed: 11936970] 123. Kren BT, Bandyopadhyay P, Steer CJ. In vivo site-directed mutagenesis of the factor IX gene by chimeric RNA/DNA oligonucleotides. Nat Med. 1998;4(3):285–290. [PubMed: 9500600] 124. Lai LW, Chan DM, Erickson RP. et al. Correction of renal tubular acidosis in carbonic anhydrase II-deficient mice with gene therapy. J Clin Invest. 1998;101(7):1320–1325. [PMC free article: PMC508709] [PubMed: 9525974] 125. Parekh-Olmedo H, Ferrara L, Brachman E. et al. Gene therapy progress and prospects: Targeted gene repair. Gene Ther. 
2005;12(8):639–646. [PubMed: 15815682] 126. Le Hir H, Moore MJ, Maquat LE. Pre-mRNA splicing alters mRNP composition: Evidence for stable association of proteins at exon-exon junctions. Genes Dev. 2000;14(9):1098–1108. [PMC free article: PMC316578] [PubMed: 10809668] 127. Serin G, Gersappe A, Black JD. et al. Identification and characterization of human orthologues to Saccharomyces cerevisiae Upf2 protein and Upf3 protein (Caenorhabditis elegans SMG-4). Mol Cell Biol. 2001;21(1):209–223. [PMC free article: PMC88795] [PubMed: 11113196] 128. Lykke-Andersen J, Shu MD, Steitz JA. Human Upf proteins target an mRNA for nonsense-mediated decay when bound downstream of a termination codon. Cell. 2000;103(7):1121–1131. [PubMed: 11163187] 129. Lejeune F, Ishigaki Y, Li X. et al. The exon junction complex is detected on CBP80-bound but not eIF4E-bound mRNA in mammalian cells: Dynamics of mRNP remodeling. EMBO J. 2002;21(13):3536–3545. [PMC free article: PMC126094] [PubMed: 12093754] 130. Ishigaki Y, Li X, Serin G. et al. Evidence for a pioneer round of mRNA translation: mRNAs subject to nonsense-mediated decay in mammalian cells are bound by CBP80 and CBP20. Cell. 2001;106(5):607–617. [PubMed: 11551508] 131. Nagy E, Maquat LE. A rule for termination-codon position within intron-containing genes: When nonsense affects RNA abundance. Trends Biochem Sci. 1998;23(6):198–199. [PubMed: 9644970] 132. Zhang J, Maquat LE. Evidence that translation reinitiation abrogates nonsense-mediated mRNA decay in mammalian cells. EMBO J. 1997;16(4):826–833. [PMC free article: PMC1169683] [PubMed: 9049311] 133. Lim SK, Sigmund CD, Gross KW. et al. Nonsense codons in human beta-globin mRNA result in the production of mRNA degradation products. Mol Cell Biol. 1992;12(3):1149–1161. [PMC free article: PMC369546] [PubMed: 1545796] 134. Atkin AL, Schenkman LR, Eastham M. et al. Relationship between yeast polyribosomes and Upf proteins required for nonsense mRNA decay. J Biol Chem. 1997;272(35):22163–22172. [PubMed: 9268361] 135. Atkin AL, Altamura N, Leeds P. et al. The majority of yeast UPF1 colocalizes with polyribosomes in the cytoplasm. Mol Biol Cell. 1995;6(5):611–625. [PMC free article: PMC301219] [PubMed: 7545033] 136. Czaplinski K, Ruiz-Echevarria MJ, Paushkin SV. et al. The surveillance complex interacts with the translation release factors to enhance termination and degrade aberrant mRNAs. Genes Dev. 1998;12(11):1665–1677. [PMC free article: PMC316864] [PubMed: 9620853] 137. Gozalbo D, Hohmann S. Nonsense suppressors partially revert the decrease of the mRNA level of a nonsense mutant allele in yeast. Curr Genet. 1990;17(1):77–79. [PubMed: 2311129] 138. Belgrader P, Cheng J, Maquat LE. Evidence to implicate translation by ribosomes in the mechanism by which nonsense codons reduce the nuclear level of human triosephosphate isomerase mRNA. Proc Natl Acad Sci USA. 1993;90(2):482–486. [PMC free article: PMC45687] [PubMed: 8421679] 139. Correa-Cerro LS, Wassif CA, Waye JS. et al. DHCR7 nonsense mutations and characterisation of mRNA nonsense mediated decay in Smith-Lemli-Opitz syndrome. J Med Genet. 2005;42(4):350–357. [PMC free article: PMC1736027] [PubMed: 15805162] 140. Page MF, Carr B, Anders KR. et al. SMG-2 is a phosphorylated protein required for mRNA surveillance in Caenorhabditis elegans and related to Upf1p of yeast. Mol Cell Biol. 1999;19(9):5943–5951. [PMC free article: PMC84455] [PubMed: 10454541] 141. Bhattacharya A, Czaplinski K, Trifillis P. et al. 
Characterization of the biochemical properties of the human Upf1 gene product that is involved in nonsense-mediated mRNA decay. RNA. 2000;6(9):1226–1235. [PMC free article: PMC1369996] [PubMed: 10999600] 142. Yamashita A, Ohnishi T, Kashima I. et al. Human SMG-1, a novel phosphatidylinositol 3-kinase-related protein kinase, associates with components of the mRNA surveillance complex and is involved in the regulation of nonsense-mediated mRNA decay. Genes Dev. 2001;15(17):2215–2228. [PMC free article: PMC312771] [PubMed: 11544179] 143. Pal M, Ishigaki Y, Nagy E. et al. Evidence that phosphorylation of human Upfl protein varies with intracellular location and is mediated by a wortmannin-sensitive and rapamycin-sensitive PI 3-kinase-related kinase signaling pathway. RNA. 2001;7(1):5–15. [PMC free article: PMC1370068] [PubMed: 11214180] 144. Grimson A, O'Connor S, Newman CL. et al. SMG-1 Is a Phosphatidylinositol kinase-related protein kinase required for nonsense-mediated mRNA decay in Caenorhabditis elegans. Mol Cell Biol. 2004;24(17):7483–7490. [PMC free article: PMC506987] [PubMed: 15314158] 145. Usuki F, Yamashita A, Higuchi I. et al. Inhibition of nonsense-mediated mRNA decay rescues the phenotype in Ullrich's disease. Ann Neurol. 2004;55(5):740–744. [PubMed: 15122717] 146. Ohnishi T, Yamashita A, Kashima I. et al. Phosphorylation of hUPF1 induces formation of mRNA surveillance complexes containing hSMG-5 and hSMG-7. Mol Cell. 2003;12(5):1187–1200. [PubMed: 14636577] 147. Chiu SY, Serin G, Ohara O. et al. Characterization of human Smg5/7a: A protein with similarities to Caenorhabditis elegans SMG5 and SMG7 that functions in the dephosphorylation of Upf1. RNA. 2003;9(1):77–87. [PMC free article: PMC1370372] [PubMed: 12554878] 148. Cali BM, Kuchma SL, Latham J. et al. smg-7 is required for mRNA surveillance in Caenorhabditis elegans. Genetics. 1999;151(2):605–616. [PMC free article: PMC1460488] [PubMed: 9927455] 149. Anders KR, Grimson A, Anderson P. SMG-5, required for C. elegans nonsense-mediated mRNA decay, associates with SMG-2 and protein phosphatase 2A. EMBO J. 2003;22(3):641–650. [PMC free article: PMC140740] [PubMed: 12554664] 150. Jungwirth H, Bergler H, Hogenauer G. Diazaborine treatment of Baker's yeast results in stabilization of aberrant mRNAs. J Biol Chem. 2001;276(39):36419–36424. [PubMed: 11477081] 151. Lelivelt MJ, Culbertson MR. Yeast Upf proteins required for RNA surveillance affect global expression of the yeast transcriptome. Mol Cell Biol. 1999;19(10):6710–6719. [PMC free article: PMC84660] [PubMed: 10490610] 152. He F, Li X, Spatrick P. et al. Genome-wide analysis of mRNAs regulated by the nonsense-mediated and 5' to 3' mRNA decay pathways in yeast. Mol Cell. 2003;12(6):1439–1452. [PubMed: 14690598] 153. Reed R, Hurt E. A conserved mRNA export machinery coupled to pre-mRNA splicing. Cell. 2002;108(4):523–531. [PubMed: 11909523] 154. Galy V, Gadal O, Fromont-Racine M. et al. Nuclear retention of unspliced mRNAs in yeast is mediated by perinuclear Mlp1. Cell. 2004;116(1):63–73. [PubMed: 14718167] 155. Ramalho AS, Beck S, Meyer M. et al. Five percent of normal cystic fibrosis transmembrane conductance regulator mRNA ameliorates the severity of pulmonary disease in cystic fibrosis. Am J Respir Cell Mol Biol. 2002;27(5):619–627. [PubMed: 12397022] 156. Kerem E. Pharmacologic therapy for stop mutations: How much CFTR activity is enough? Curr Opin Pulm Med. 2004;10(6):547–552. [PubMed: 15510065] 157. Ashton LJ, Brooks DA, McCourt PA. et al. 
Immunoquantification and enzyme kinetics of alpha-L-iduronidase in cultured fibroblasts from normal controls and mucopolysaccharidosis type I patients. Am J Hum Genet. 1992;50(4):787–794. [PMC free article: PMC1682646] [PubMed: 1550122] 158. Aronovich EL, Pan D, Whitley CB. Molecular genetic defect underlying alpha-L-iduronidase pseudodeficiency. Am J Hum Genet. 1996;58(1):75–85. [PMC free article: PMC1914939] [PubMed: 8554071] 159. Purohit P, Stern S. Interactions of a small RNA with antibiotic and RNA ligands of the 30S subunit. Nature. 1994;370(6491):659–662. [PubMed: 8065453] 160. Trapnell BC, Chu CS, Paakko PK. et al. Expression of the cystic fibrosis transmembrane conductance regulator gene in the respiratory tract of normal individuals and individuals with cystic fibrosis. Proc Natl Acad Sci USA. 1991;88(15):6565–6569. [PMC free article: PMC52127] [PubMed: 1713683] 161. Hamosh A, Trapnell BC, Zeitlin PL. et al. Severe deficiency of cystic fibrosis transmembrane conductance regulator messenger RNA carrying nonsense mutations R553X and W1316X in respiratory epithelial cells of patients with cystic fibrosis. J Clin Invest. 1991;88(6):1880–1885. [PMC free article: PMC295756] [PubMed: 1721624] 162. Hamosh A, Rosenstein BJ, Cutting GR. CFTR nonsense mutations G542X and W1282X associated with severe reduction of CFTR mRNA in nasal epithelial cells. Hum Mol Genet. 1992;1(7):542–544. [PubMed: 1284888]
Copyright © 2018 by Karl Sigman

1 Continuous-Time Markov Chains

A Markov chain in discrete time, {Xn : n ≥ 0}, remains in any state for exactly one unit of time before making a transition (change of state). We proceed now to relax this restriction by allowing a chain to spend a continuous amount of time in any state, but in such a way as to retain the Markov property. As motivation, suppose we consider the rat in the maze Markov chain. Clearly it is more realistic to be able to keep track of where the rat is at any continuous time t ≥ 0 as opposed to only where the rat is after n "steps".

Assume throughout that our state space is S = Z = {· · · , −2, −1, 0, 1, 2, · · · } (or some subset thereof). Suppose now that whenever a chain enters state i ∈ S, independent of the past, the length of time spent in state i is a continuous, strictly positive (and proper) random variable Hi called the holding time in state i. When the holding time ends, the process then makes a transition into state j according to transition probability Pij, independent of the past, and so on. (Footnote 1: Pii > 0 is allowed, meaning that a transition back into state i from state i can occur. Each time this happens though, a new Hi, independent of the past, determines the new length of time spent in state i. See Section 1.14 for details.) Letting X(t) denote the state at time t, we end up with a continuous-time stochastic process {X(t) : t ≥ 0} with state space S.

Our objective is to place conditions on the holding times to ensure that the continuous-time process satisfies the Markov property: the future, {X(s + t) : t ≥ 0}, given the present state, X(s), is independent of the past, {X(u) : 0 ≤ u < s}. Such a process will be called a continuous-time Markov chain (CTMC), and as we will conclude shortly, the holding times will have to be exponentially distributed. The formal definition is given by

Definition 1.1 A stochastic process {X(t) : t ≥ 0} with discrete state space S is called a continuous-time Markov chain (CTMC) if for all t ≥ 0, s ≥ 0, i ∈ S, j ∈ S,

P(X(s + t) = j | X(s) = i, {X(u) : 0 ≤ u < s}) = P(X(s + t) = j | X(s) = i) = Pij(t).

Pij(t) is the probability that the chain will be in state j, t time units from now, given it is in state i now. For each t ≥ 0 there is a transition matrix P(t) = (Pij(t)), i, j ∈ S, and P(0) = I, the identity matrix.

As for discrete-time Markov chains, we are assuming here that the distribution of the future, given the present state X(s), does not depend on the present time s, but only on the present state X(s) = i, whatever it is, and the amount of time that has elapsed, t, since time s. In particular, Pij(t) = P(X(t) = j | X(0) = i). But unlike the discrete-time case, there is no smallest "next time" until the next transition; there is a continuum of such possible times t. For each fixed i and j, Pij(t), t ≥ 0, defines a function which in principle can be studied by use of calculus and differential equations. Although this makes the analysis of CTMC's more difficult/technical than for discrete-time chains, we will, nonetheless, find that many similarities with discrete-time chains follow, and many useful results can be obtained.

A little thought reveals that the holding times must have the memoryless property and thus are exponentially distributed. To see this, suppose that X(t) = i. Time t lies somewhere in the middle of the holding time Hi for state i. The future after time t tells us, in particular, the remaining holding time in state i, whereas the past before time t tells us, in particular, the age of the holding time (how long the process has been in state i). In order for the future to be independent of the past given X(t) = i, we deduce that the remaining holding time must only depend (in distribution) on i and be independent of its age; the memoryless property follows. Since an exponential distribution is completely determined by its rate, we conclude that for each i ∈ S, there exists a constant (rate) ai > 0, such that the chain, when entering state i, remains there, independent of the past, for an amount of time Hi ∼ exp(ai):

A CTMC makes transitions from state to state, independent of the past, according to a discrete-time Markov chain, but once entering a state remains in that state, independent of the past, for an exponentially distributed amount of time before changing state again.

Thus a CTMC can simply be described by a transition matrix P = (Pij), describing how the chain changes state step-by-step at transition epochs, together with a set of rates {ai : i ∈ S}, the holding time rates. Each time state i is visited, the chain spends, on average, E(Hi) = 1/ai units of time there before moving on.
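Because a CTMC is described by the embedded transition matrix (Pij) together with the holding-time rates {ai}, a sample path can be generated directly from that description: draw an exponential holding time at the current state's rate, then draw the next state from the corresponding row of P. The following is a minimal Python sketch of this recipe; the function name, seed handling, and the small two-state example are illustrative choices, not part of the notes.

```python
import random

def simulate_ctmc(P, a, x0, t_end, seed=0):
    """Simulate a CTMC path on states 0, 1, ..., n-1 up to time t_end.

    P  : embedded one-step transition matrix, P[i][j] = P(X_{n+1} = j | X_n = i)
    a  : holding-time rates; the holding time in state i is ~ exp(a[i])
    x0 : initial state
    Returns a list of (transition epoch, state) pairs, starting with (0.0, x0).
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        t += rng.expovariate(a[x])        # exponential holding time at rate a[x]
        if t >= t_end:
            break
        u, cum = rng.random(), 0.0        # draw the next state from row x of P
        for j, p in enumerate(P[x]):
            cum += p
            if u < cum:
                x = j
                break
        path.append((t, x))
    return path

# Hypothetical two-state illustration: the embedded chain always switches
# state, and the holding rates are a0 = 1 and a1 = 3.
P = [[0.0, 1.0],
     [1.0, 0.0]]
a = [1.0, 3.0]
print(simulate_ctmc(P, a, x0=0, t_end=5.0))
```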
The future after time t tells us, in particular, the remaining holding time in state i, whereas the past before time t, tells us, in particular, the age of the holding time (how long the process has been in state i). In order for the future to be independent of the past given X(t) = i, we deduce that the remaining holding time must only depend (in distribution) on i and be independent of its age; the memoryless property follows. Since an exponential distribution is completely determined by its rate we conclude that for each i ∈S, there exists a constant (rate) ai > 0, such that the chain, when entering state i, remains there, independent of the past, for an amount of time Hi ∼exp(ai): A CTMC makes transitions from state to state, independent of the past, ac-cording to a discrete-time Markov chain, but once entering a state remains in that state, independent of the past, for an exponentially distributed amount of time before changing state again. Thus a CTMC can simply be described by a transition matrix P = (Pij), describing how the chain changes state step-by-step at transition epochs, together with a set of rates {ai : i ∈S}, the holding time rates. Each time state i is visited, the chain spends, on average, E(Hi) = 1/ai units of time there before moving on. 1.1 The embedded discrete-time Markov chain Letting τn denote the time at which the nth change of state (transition) occurs, we see that Xn = X(τn+), the state right after the nth transition, defines the underlying discrete-time Markov chain, called the embedded Markov chain. {Xn} keeps track, consecutively, of the states visited right after each transition, and moves from state to state according to the one-step transition probabilities Pij = P(Xn+1 = j|Xn = i). This transition matrix (Pij), together with the holding-time rates {ai}, completely determines the CTMC. 1.2 Chapman-Kolmogorov equations The Chapman-Kolmogorov equations for discrete-time Markov chains generalizes to 2 Lemma 1.1 (Chapman-Kolmogorov equations for CTMC’s) For all t ≥0, s ≥ 0, P(t + s) = P(s)P(t), that is, for all t ≥0, s ≥0, i ∈S, j ∈S Pij(t + s) = X k∈S Pik(s)Pkj(t). As for discrete-time chains, the (easy) proof involves first conditioning on what state k the chain is in at time s given that X(0) = i, yielding Pik(s), and then using the Markov property to conclude that the probability that the chain, now in state k, would then be in state j after an additional t time units is, independent of the past, Pkj(t). 1.3 Examples of CTMC’s 1. Poisson counting process: Let {N(t) : t ≥0} be the counting process for a Poisson process ψ = {tn} at rate λ. Then {N(t)} forms a CTMC with S = {0, 1, 2, . . .}, Pi,i+1 = 1, ai = λ, i ≥0: If N(t) = i then, by the memoryless property, the next arrival, arrival i + 1, will, independent of the past, occur after an exponentially distributed amount of time at rate λ. The holding time in state i is simply the interarrival time, ti+1 −ti, and τn = tn since N(t) only changes state at an arrival time. Assuming that N(0) = 0 we conclude that Xn = N(tn+) = n, n ≥0; the embedded chain is deterministic. This is a very special kind of CTMC for several reasons. (1) all holding times Hi have the same rate ai = λ, and (2) N(t) is a non-decreasing process; it increases by one at each arrival time, and remains constant otherwise. As t →∞, N(t) ↑∞step by step. 2. 
Consider the rat in the closed maze, in which at each transition, the rat is equally likely to move to one of the neighboring two cells, but where now we assume that the holding time, Hi, in cell i is exponential at rate ai = i, i = 1, 2, 3, 4. Time is in minutes (say). Let X(t) denote the cell that the rat is in at time t. Given the rat is now in cell 2 (say), he will remain there, independent of the past, for an exponential amount of time with mean 1/2, and then move, independent of the past, to either cell 1 or 4 w.p.=1/2. The other transitions are similarly explained. {X(t)} forms a CTMC. Note how cell 4 has the shortest holding time (mean 1/4 minutes), and cell 1 has the longest (mean 1 minute). Of intrinisic interest is to calculate the long-run proportion of time (continuous time now) that the rat spends in each cell; Pi def = lim t→∞ 1 t Z t 0 I{X(s) = i}ds, i = 1, 2, 3, 4. We will learn how to compute these later; they serve as the continuous-time analog to the discrete-time stationary probabilities πi for discrete-time Markov chains. 3 ⃗ P = (P1, P2, P3, P4) is called the limiting (stationary) distribution for the CTMC. The intuitive interpretation: If way out in the future we were to observe the maze, then Pi is the probability that we would find the rat in cell i. 3. FIFO M/M/1 queue: Arrivals to a single-server queue are Poisson at rate λ. There is one line (queue) to wait in, and customers independently (and independent of the Poisson arrival process) have service times {Sn} that are exponentially distributed at rate µ. We assume that customers join the tail of the queue, and hence begin service in the order that they arrive first-in-queue-first-out-of-queue (FIFO). Let X(t) denote the number of customers in the system at time t, where “system” means the line plus the service area. So (for example), X(t) = 2 means that there is one customer in service and one waiting in line. Note that a transition can only occur at customer arrival or departure times, and that departures occur whenever a service completion occurs. At an arrival time X(t) jumps up by the amount 1, whereas at a departure time X(t) jumps down by the amount 1. Determining the rates ai: If X(t) = 0 then only an arrival can occur next, so the holding time is given by H0 ∼exp(λ) the time until the next arrival; a0 = λ, the arrival rate. If X(t) = i ≥1, then the holding time is given by Hi = min{Sr, X} where Sr is the remaining service time of the customer in service, and X is the time until the next arrival. The memoryless property for both service times and interarrival times implies that Sr ∼exp(µ) and X ∼exp(λ) independent of the past. Also, they are independent r.v.s. because the service times are assumed independent of the Poisson arrival process. Thus Hi ∼exp(λ + µ) and ai = λ + µ, i ≥1. The point here is that each of the two independent events “service completion will ocurr”, “new arrival will ocurr” is competing to be the next event so as to end the holding time. The transition probabilities Pij for the embedded discrete-time chain are derived as follows: Xn denotes the number of customers in the system right after the nth transition. Transitions are caused only by arrivals and departures. If Xn = 0, then the system is empty and we are waiting for the next arrival; P(Xn+1 = 1|Xn = 0) = 1. But if Xn = i ≥1, then Xn+1 = i + 1 w.p. P(X < Sr) = λ/(λ + µ), and Xn+1 = i −1 w.p. P(Sr < X) = µ/(λ + µ), depending on whether an arrival or a departure is the first event to occur next. 
So, P0,1 = 1, and for i ≥1, Pi,i+1 = p = λ/(λ+ µ), and Pi,i−1 = 1−p = µ/(λ+ µ). We conclude that The embedded Markov chain for a FIFO M/M/1 queue is a simple ran-dom walk (“up” probability p = λ/(λ + µ), “down” probability 1 −p = µ/(λ + µ)) that is restricted to be non-negative (P0,1 = 1). 4. M/M/c multi-server queue: This is the same as the FIFO M/M/1 queue except there are now c servers working in parallel. As in a USA postoffice, arrivals wait in one FIFO line (queue) and enter service at the first available free server. X(t) 4 denotes the number of customers in the system at time t. For illustration, let’s assume c = 2. Then, for example, X(t) = 4 means that two customers are in service (each with their own server) and two others are waiting in line. When X(t) = i ∈{0, 1}, the holding times are the same as for the M/M/1 model; a0 = λ, a1 = λ + µ. But when X(t) = i ≥2, both remaining service times, denoted by Sr1 and Sr2, compete to determine the next departure. Since they are independent exponentials at rate µ, we deduce that the time until the next departure is given by min{Sr1, Sr2} ∼exp(2µ). The time until the next arrival is given by X ∼exp(λ) and is independent of both remaining service times. We conclude that the holding time in any state i ≥2 is given by Hi = min{X, Sr1, Sr2} ∼exp(λ + 2µ). For the general case of c ≥2, the rates are determined analogously: ai = λ+iµ, 0 ≤ i ≤c, ai = λ + cµ, i > c. For the embedded chain: P0,1 = 1 and for 0 ≤i ≤c−1, Pi,i+1 = λ/(λ + iµ), Pi,i−1 = iµ/(λ + iµ). Then for i ≥c, Pi,i+1 = λ/(λ + cµ), Pi,i−1 = cµ/(λ + cµ). This is an example of a simple random walk with state-dependent “up”, “down” probabilities: at each step, the probabilities for the next increment depend on i, the current state, until i = c at which point the probabilities remain constant. 5. M/M/∞infinite-server queue: Here we have a M/M/c queue with c = ∞; a special case of the M/G/∞queue. Letting X(t) denote the number of customers in the system at time t, we see that ai = λ + iµ, i ≥0 since there is no limit on the number of busy servers. For the embedded chain: P0,1 = 1 and Pi,i+1 = λ/(λ + iµ), Pi,i−1 = iµ/(λ + iµ), i ≥ 1. This simple random walk thus has state-dependent “up”, “down” probabilities that continue to depend on each state i, the current state. Note how, as i in-creases, the down probability, Pi,i−1, increases, and approaches 1 as i →∞: when the system is heavily congested, departures occur rapidly; this model is always stable. 1.4 Birth and Death processes Except for Example 2 (rat in the closed maze) all of the CTMC examples in the previous section were Birth and Death (B&D) processes, CTMC’s that can only change state by increasing by one, or decreasing by one; Pi,i+1 + Pi,i−1 = 1, i ∈S. (In Example 2, P1,3 > 0, for example, so it is not B&D.) Here we study B&D processes more formally, since they tend to be a very useful type of CTMC. Whenever the state increases by one, we say there is a birth, and whenever it decreases by one we say there is a death. We shall focus on the case when S = {0, 1, 2, . . .}, in which case X(t) can be thought of as the population size at time t. For each state i ≥0 we have a birth rate λi and a death rate µi: Whenever X(t) = i, independent of the past, the time until the next birth is a r.v. X ∼exp(λi) and, independently, the time until the next death is a r.v. Y ∼exp(µi). 
Thus the holding 5 time rates are given by ai = λi + µi because the time until the next transition (change of state) in given by the holding time Hi = min{X, Y } ∼exp(λi + µi). The idea here is that at any given time the next birth is competing with the next death to be the next transition. (We always assume here that µ0 = 0 since there can be no deaths without a population.) This means that whenever X(t) = i ≥1, the next transition will be a birth w.p. Pi,i+1 = P(X < Y ) = λi/(λi + µi), and a death w.p. Pi,i−1 = P(Y < X) = µi/(λi + µi). Thus the embedded chain for a B&D process is a simple random walk with state dependent “up”, “down” probabilities. When µi = 0, i ≥0, and λi > 0, i ≥0, we call the process a pure birth process; the population continues to increase by one at each transition. The main example is the Poisson counting process (Example 1 in the previous Section), but this can be generalized by allowing each λi to be different. The reader is encouraged at this point to go back over the B&D Examples in the previous Section. 1.5 Explosive CTMC’s Consider a pure birth process {X(t)}, Pi,i+1 = 1, i ≥0, in which ai = λi = 2i, i ≥0. This process spends, on average, E(Hi) = 1/λi = 2−i units of time in state i and then changes to state i + 1; . Thus it spends less and less time in each state, consequently jumping to the next state faster and faster as time goes on. Since X(t) →∞as t →∞, we now explore how fast this happens. Note that the chain will visit state i at time H0 + H1 + · · · + Hi−1, the sum of the first i holding times. Thus the chain will visit all of the states by time T = ∞ X i=0 Hi. Taking expected value yields E(T) = ∞ X i=0 2−i = 2 < ∞, and we conclude that on average all states i ≥0 have been visited by time t = 2, a finite amount of time! But this implies that w.p.1., all states will be visited in a finite amount of time; P(T < ∞) = 1. Consequently, w.p.1., X(T +t) = ∞, t ≥0. This is an example of an explosive Markov chain: The number of transitions in a finite interval of time is infinite. We shall rule out this kind of behavior in the rest of our study, and assume from now on that all CTMC’s considered are non-explosive, by which we mean that the number of transitions in any finite interval of time is finite. This will always hold for any CTMC with a finite state space, or any CTMC for which there are only a finite number of distinct values for the rates ai, and more generally whenever sup{ai : i ∈S} < ∞. Every Example given in the previous Section was non-explosive. Only the M/M/∞queue needs 6 some clarification since ai = λ + iµ →∞as i →∞. But only arrivals and departures determine transitions, and the arrivals come from the Poisson process at fixed rate λ, so the arrivals can not cause an explosion; N(t) < ∞, t ≥0. Now observe that during any interval of time, (s, t], the number of departures can be no larger than N(t), the total number of arrivals thus far, so they too can not cause an explosion. In short, the number of transitions in any interval (s, t] is bounded from above by 2N(t) < ∞; the non-explosive condition is satisfied. This method of bounding the number of transitions by the underlying Poisson arrival process will hold for essentially any CTMC queueing model. 1.6 Communication classes, irreducibility and recurrence State j is said to be reachable from state i for a CTMC if P(X(s) = j|X(0) = i) = Pij(s) > 0 for some s ≥0. 
As with discrete-time chains, i and j are said to communicate if state j is reachable from state i, and state i is reachable from state j. It is immediate that i and j communicate in continuous time if and only if they do so for the embedded discrete-time chain {Xn}: They communicate in continuous-time if and only if they do so at transition epochs. Thus once again, we can partition the state space up into disjoint communication classes, S = C1 ∪C2 ∪· · · , and an irreducible chain is a chain for which all states communicate (S = C1, one communication class). We state in passing A CTMC is irreducible if and only if its embedded chain is irreducible. Notions of recurrence, transience and positive recurence are similar as for discrete-time chains: Let Ti,i denote the amount of (continuous) time until the chain re-visits state i (at a later transition) given that X(0) = i (defined to be ∞if it never does return); the return time to state i. The chain will make its first transition at time Hi (holding time in state i), so Tii ≥Hi. State i is called recurrent if, w.p.1., the chain will re-visit state i with certainty, that is, if P(Tii < ∞) = 1. The state is called transient otherwise. This (with a little thought) is seen to be the same property as for the embedded chain (because X(t) returns to state i for some t if and only if Xn does so for some n): A state i is recurrent/transient for a CTMC if and only if it is recurrent/transient for the embedded discrete-time chain. Thus communication classes all have the same type of states: all together they are transient or all together they are recurrent. 1.7 Positive recurrence and the existence a limiting distribution ⃗ P = {Pj} State i is called positive recurrent if, in addition to being recurrent, E(Tii) < ∞; the expected amount of time to return is finite. State i is called null recurrent if, in addition 7 to being recurrent, E(Tii) = ∞; the expected amount of time to return is infinite. Unlike recurrence, positive (or null) recurrence is not equivalent to that for the embedded chain: It is possible for a CTMC to be positive recurrent while its embedded chain is null recurrent (and vice versa). But positive and null recurrence are still class properties, so in particular: For an irreducible CTMC, all states together are transient, positive recurrent, or null recurrent. A CTMC is called positive recurrent if it is irreducible and all states are positive recurrent. We define (when they exist, independent of initial condition X(0) = i) the limiting probabilities {Pj} for the CTMC as the long-run proportion of time the chain spends in each state j ∈S: Pj = lim t→∞ 1 t Z t 0 I{X(s) = j|X(0) = i}ds, w.p.1., (1) which after taking expected values yields Pj = lim t→∞ 1 t Z t 0 Pij(s)ds. (2) When each Pj exists and P j Pj = 1, then ⃗ P = {Pj} (as a row vector) is called the limiting (or stationary) distribution for the Markov chain. Letting P∗=    ⃗ P ⃗ P . . .    (3) denote the matrix in which each row is the limiting probability distribution ⃗ P, (2) can be expressed nicely in matrix form as lim t→∞ 1 t Z t 0 P(s)ds = P∗. (4) As for discrete-time Markov chains, positive recurrence implies the existence of lim-iting probabilities by use of the SLLN. The basic idea is that for fixed state j, we can break up the evolution of the CTMC into i.i.d. cycles, where a cycle begins every time the chain makes a transition into state j. This yields an example of what is called a regenerative process because we say it regenerates every time a cycle begins. 
The cycle lengths are i.i.d. distributed as Tjj, and during a cycle, the chain spends an amount of time in state j equal in distribution to the holding time Hj. This leads to 8 Proposition 1.1 If {X(t)} is a positive recurrent CTMC, then the limiting probability distribution ⃗ P = (Pi,j) as defined by Equation (1) exists, is unique, and is given by Pj = E(Hj) E(Tjj) = 1/aj E(Tjj) > 0, j ∈S. In words: “The long-run proportion of time the chain spends in state j equals the expected amount of time spent in state j during a cycle divided by the expected cycle length (between visits to state j)”. Moreover, the stronger mode of convergence (weak convergence) holds: Pj = lim t→∞Pij(t), i, j ∈S. (5) Finally, if the chain is either null recurrent or transient, then Pj = 0, j ∈S; no limiting distribution exists. Proof : Fixing state j, we can break up the evolution of the CTMC into i.i.d. cycles, where a cycle begins every time the chain makes a transition into state j. This follows by the (strong) Markov property, since every time the chain enters state j, the chain starts over again from scratch stochastically, and is independent of the past. Letting τn(j) denote the nth time at which the chain makes a transition into state j, with τ0(j) = 0, the cycle lengths, Tn(j) = τn(j)−τn−1(j), n ≥1, are i.i.d., distributed as the return time Tjj. {τn(j) : n ≥1} forms a renewal point process because of the assumed recurrence, of the chain, and we let Nj(t) denote the number of such points during (0, t]. From the Elementary Renewal Theorem, wp1, lim t→∞ Nj(t) t = 1 E(Tjj). (6) Letting Jn = Z τn(j) τn−1(j) I{X(s) = j}ds, (the amount of time spent in state j during the nth cycle) we conclude that {Jn} forms an i.i.d. sequence of r.v.s. distributed as the holding time Hj; E(J) = E(Hj). Thus Z t 0 I{X(s) = j}ds ≈ Nj(t) X n=1 Jn, from which we obtain 1 t Z t 0 I{X(s) = j}ds ≈Nj(t) t × 1 Nj(t) Nj(t) X n=1 Jn. 9 Letting t →∞yields Pj = E(Hj) E(Tjj), where the denominator is from (6) and the numerator is from the SLLN applied to {Jn}. Pj > 0 if E(Tjj) < ∞(positive recurrence), whereas Pj = 0 if E(Tjj) = ∞(null recurrence). And if transient, then I{X(s) = j|X(0) = i} →0 as s →∞, wp1, yielding Pj = 0 as well from (1). Uniqueness of the Pj follows by the unique representation, Pj = 1/aj E(Tjj). The weak convergence in (5) holds in addition to the already established time-average convergence because the cycle-length distribution (the distribution of Tjj for any fixed j) is non-lattice.2 Tjj has a non-lattice distribution because it is of phase type hence a continuous distribution. In general, a positive recurrent regenerative process with a non-lattice cycle-length distribution converges weakly. The details of this will be dealt with later when we return to a more rigorous study of renewal theory. 1.8 Allowing an arbitrary random initial value for X(0) If ⃗ ν is a probability distribution on S, and if X(0) ∼⃗ ν, then the distribution of X(t) is given by the vector-matrix product ⃗ νP(t); X(t) ∼⃗ νP(t), t ≥0. Recalling the definition of P∗in (3), note that ⃗ νP∗= ⃗ P for any probability distribution ⃗ ν. Thus for a positive recurrent chain it holds more generally from (4) (by multiplying each left side by ⃗ ν) that lim t→∞ 1 t Z t 0 ⃗ νP(s)ds = ⃗ P. (7) This merely means that the initial condition (e.g., the value of X(0)) can be random as opposed to only deterministic (e.g., X(0) = i) without effecting the limit; the same limiting distribution ⃗ P is obtained regardless. 
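As an aside (not part of the original notes), the time-average definition (1) of Pj is easy to approximate by simulation. The Python sketch below does this for the rat-in-the-maze chain of Example 2, under my reading of that example that the four cells form a cycle 1–2–4–3–1 (so each cell has exactly two neighbors) with holding rates ai = i; both the layout and the horizon T are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed maze layout (my reading of Example 2): the four cells form a cycle
# 1-2-4-3-1, so each cell has exactly two neighbors; holding rate in cell i is a_i = i.
neighbors = {1: (2, 3), 2: (1, 4), 3: (1, 4), 4: (2, 3)}
rates = {1: 1.0, 2: 2.0, 3: 3.0, 4: 4.0}

def time_average_proportions(T=200_000.0, start=1):
    """Estimate P_i = long-run proportion of time the rat spends in cell i."""
    t, state = 0.0, start
    time_in = {i: 0.0 for i in neighbors}
    while t < T:
        hold = rng.exponential(1.0 / rates[state])    # H_i ~ exp(a_i)
        hold = min(hold, T - t)                       # truncate at the horizon T
        time_in[state] += hold
        t += hold
        state = neighbors[state][rng.integers(2)]     # embedded chain: each neighbor w.p. 1/2
    return {i: time_in[i] / T for i in neighbors}

# Under this assumed layout the balance equations of Section 1.12 give P_i
# proportional to 1/a_i, i.e. about (0.48, 0.24, 0.16, 0.12).
print(time_average_proportions())
```

Under the assumed layout the estimates should come out close to (0.48, 0.24, 0.16, 0.12), illustrating that the chain spends the most time in the cell with the slowest holding rate.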
1.9 The limiting distribution yields a stationary distribution and hence a stationary version of the Markov chain

As in discrete time, the limiting distribution ⃗P is also called a stationary distribution because it yields a stationary version of the chain if the chain is initially distributed as ⃗P at time t = 0:

Proposition 1.2 For a positive recurrent Markov chain with limiting distribution ⃗P: If X(0) ∼ ⃗P, then X(t) ∼ ⃗P for all t ≥ 0; that is, ⃗P P(t) = ⃗P, t ≥ 0. This means that

Σ_{i∈S} Pi Pi,j(t) = Pj, j ∈ S, t ≥ 0.

²The distribution of a non-negative rv X is said to be non-lattice if there does not exist a d > 0 such that P(X ∈ {nd : n ≥ 0}) = 1. Any continuous distribution, in particular, is non-lattice.

In fact this limiting distribution ⃗P is the only distribution (it is unique) that is stationary, that is, for which ⃗P P(t) = ⃗P, t ≥ 0. Moreover, letting {X∗(t) : t ≥ 0} denote the chain when X(0) ∼ ⃗P, it forms a stationary stochastic process: {X∗(s + t) : t ≥ 0} has the same distribution for all s ≥ 0.

Proof: From the definition of P∗ (each row is ⃗P) we must equivalently show that P∗P(t) = P∗ for any t ≥ 0. (Intuitively we are simply asserting that P(∞)P(t) = P(∞) because ∞ + t = ∞.) Recalling the Chapman-Kolmogorov equations, P(s + t) = P(s)P(t), and using (4), we get

P∗P(t) = ( lim_{u→∞} (1/u) ∫_0^u P(s) ds ) P(t) = lim_{u→∞} (1/u) ∫_0^u P(s)P(t) ds = lim_{u→∞} (1/u) ∫_0^u P(s + t) ds = lim_{u→∞} (1/u) ∫_0^u P(s) ds = P∗.

The second to last equality follows due to the fact that adding the fixed t is asymptotically negligible: ∫_0^u P(s + t) ds = ∫_0^u P(s) ds + ∫_u^{u+t} P(s) ds − ∫_0^t P(s) ds. All elements of P(s) are bounded by 1, and so the last two integrals, when divided by u, tend to 0 as u → ∞.

If a probability distribution ⃗ν satisfies ⃗ν P(t) = ⃗ν, t ≥ 0, then on the one hand, since the chain is assumed positive recurrent, we have Equation (7) and hence

lim_{t→∞} (1/t) ∫_0^t ⃗ν P(s) ds = ⃗ν P∗ = ⃗P. (8)

But on the other hand ⃗ν P(t) = ⃗ν implies that

(1/t) ∫_0^t ⃗ν P(s) ds = (1/t) ∫_0^t ⃗ν ds = ⃗ν,

and we conclude that ⃗ν = ⃗P; the stationary distribution is unique.

By the Markov property, a Markov process is completely determined (in distribution) by its initial state. Thus {X∗(s + t) : t ≥ 0} has the same distribution for all s ≥ 0 because for all s, its initial state has the same distribution, X∗(s) ∼ ⃗P.

1.10 Interpretation of the {ai} as transition rates; the transition rate matrix (infinitesimal generator) Q

Assume here that Pi,i = 0 for all i ∈ S. ai can be interpreted as the transition rate out of state i given that X(t) = i; the intuitive idea being that the exponential holding time will end, independent of the past, in the next dt units of time with probability ai dt. This can be made rigorous. It can be shown that for i ≠ j,

P′_{i,j}(0) = lim_{h↓0} Pi,j(h)/h = ai Pi,j. (9)

ai Pi,j can thus be interpreted as the transition rate from state i to state j given that the chain is currently in state i. When i = j, Pi,i(h) = 1 − P(X(h) ≠ i | X(0) = i) and it can be shown that

P′_{i,i}(0) = lim_{h↓0} (Pi,i(h) − 1)/h = −ai. (10)

Definition 1.2 The matrix Q = P′(0) given explicitly by (9) and (10) is called the transition rate matrix, or infinitesimal generator, of the Markov chain.

For example, if S = {0, 1, 2, 3, 4}, then

Q =
⎡ −a0       a0 P0,1   a0 P0,2   a0 P0,3   a0 P0,4 ⎤
⎢ a1 P1,0   −a1       a1 P1,2   a1 P1,3   a1 P1,4 ⎥
⎢ a2 P2,0   a2 P2,1   −a2       a2 P2,3   a2 P2,4 ⎥
⎢ a3 P3,0   a3 P3,1   a3 P3,2   −a3       a3 P3,4 ⎥
⎣ a4 P4,0   a4 P4,1   a4 P4,2   a4 P4,3   −a4     ⎦

Note in passing that since we assume that Pi,i = 0, i ∈ S, we conclude that each row of Q sums to 0.
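To make Definition 1.2 concrete, here is a small sketch (not from the notes; the three-state rates and embedded transition matrix are made-up toy numbers) that builds Q from the holding rates {ai} and the embedded matrix P via q_ij = ai Pij for i ≠ j and q_ii = −ai, and checks that each row sums to 0.

```python
import numpy as np

def generator(a, P):
    """Transition rate matrix Q: q_ij = a_i * P_ij for i != j, q_ii = -a_i
    (this is diag(a) @ (P - I) when P_ii = 0 for all i)."""
    a = np.asarray(a, dtype=float)
    P = np.asarray(P, dtype=float)
    return np.diag(a) @ (P - np.eye(len(a)))

# Toy three-state example (rates and embedded chain chosen only for illustration).
a = [1.0, 2.0, 5.0]
P = [[0.0, 0.5, 0.5],
     [1.0, 0.0, 0.0],
     [0.3, 0.7, 0.0]]
Q = generator(a, P)
print(Q)
print(Q.sum(axis=1))   # each row of Q sums to 0
```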
1.11 Computing Pij(t): Kolmogorov backward equations We have yet to show how to compute the transition probabilities for a CTMC, Pij(t) = P(X(t) = j|X(0) = i), t ≥0. For discrete-time Markov chains this was not a problem since P (n) ij = P(Xn = j|X0 = i), n ≥1 could be computed by using the fact that the matrix (P (n) ij ) was simply the transition matrix P multiplied together n times, P n. In continuous time however, the problem is a bit more complex; it involves setting up linear differential equations for Pij(t) known as the Kolmogorov backward equations and then solving. We present this derivation now. Proposition 1.3 (Kolmogorov Backward Equations) For a (non-explosive) CTMC with transition rate matrix Q = P ′(0) as in Definition 1.2, the following set of linear dif-ferential equations is satisfied by {P(t)}: P ′(t) = QP(t), t ≥0, P(0) = I, (11) that is, P ′ ij(t) = −aiPij(t) + X k̸=i aiPikPkj(t), i, j ∈S, t ≥0. (12) 12 The unique solution is thus of the exponential form; P(t) = eQt, t ≥0, (13) where for any square matrix M, eM def = ∞ X n=0 M n n! . Proof : The Chapman-Kolmogorov equations, P(t + h) = P(h)P(t), yield P(t + h) −P(t) = (P(h) −I)P(t) (14) = (P(h) −P(0))P(t). (15) dividing by h and letting h →0 then yields P ′(t) = P ′(0)P(t) = QP(t). (Technically, this involves justifying the interchange of a limit and an infinite sum, which indeed can be justified here even when the state space is infinite.) The word backward refers to the fact that in our use of the Chapman-Kolmogorov equations, we chose to place the h on the right-hand side in back, P(t + h) = P(h)P(t) as opposed to in front, P(t + h) = P(t)P(h). The derivation above can be rigorously justified for any non-explosive CTMC. It turns out, however, that the derivation of the analogous forward equations P ′(t) = P(t)Q, t ≥0, P(0) = I, that one would expect to get by using P(t + h) = P(t)P(h) can not be rigorously justified for all non-explosive CTMCs; there are examples (infinite state space) that cause trouble; the interchange of a limit and an infinite sum can not be justified. But it does not matter, since the unique solution P(t) = eQt to the backward equations is the unique solution to the forward equations, and thus both equations are valid. For a (non-explosive) CTMC, the transition probabilities Pi,j(t) are the unique solution to both the Kolmogorov backward and forward equations. Remark 1.1 It is rare that we can explicitly compute the infinite sum in the solution P(t) = eQt = ∞ X n=0 (Qt)n n! . But there are various numerical recipes for estimating eQt to any desired level of accuracy. For example, since eM = limn→∞(1 + M/n)n, for any square matrix M, one can choose n large and use eQt ≈(1 + (Qt)/n)n. 13 1.12 Balance equations, rates, and positive recurrence Consider any deterministic function x(t), t ≥0 with values in S. Clearly, every time x(t) enters a state j, it must first leave that state in order to enter it again. Thus the number of times during the interval (0, t] that it enters state j differs by at most one, from the number of times during the interval (0, t] that it leaves state j. We conclude (by dividing by t and letting t →∞) that the long-run rate at which the function leaves state j equals the long-run rate at which the function enters state j. In words, “the rate out of state j is equal to the rate into state j, for each state j”. We can apply this kind of result to each sample-path of a stochastic process. 
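Before continuing, here is a quick numerical illustration (mine, not part of the notes) of Remark 1.1: P(t) = e^{Qt} computed with SciPy's matrix exponential agrees with the approximation (I + Qt/n)^n for large n, and for large t its rows approach the limiting distribution ⃗P of Section 1.7. The generator is the toy three-state example built in the previous sketch.

```python
import numpy as np
from scipy.linalg import expm

# Toy generator from the previous sketch (rows sum to 0).
Q = np.array([[-1.0,  0.5,  0.5],
              [ 2.0, -2.0,  0.0],
              [ 1.5,  3.5, -5.0]])

t = 0.7
P_t = expm(Q * t)                                    # P(t) = e^{Qt}

# Remark 1.1's first-order scheme: e^{Qt} ~ (I + Qt/n)^n for large n.
n = 100_000
P_approx = np.linalg.matrix_power(np.eye(3) + Q * t / n, n)

print(np.max(np.abs(P_t - P_approx)))   # small (the error shrinks like 1/n)
print(P_t.sum(axis=1))                  # each row of P(t) sums to 1
print(expm(Q * 200.0))                  # for large t, every row is (almost) the limiting distribution
```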
For a positive recurrent CTMC with limiting distribution ⃗ P = {Pj}, the rate out of state j is given by ajPj, while the rate into state j is given by P i̸=j PiaiPij, j ∈S, by interpreting the limiting probability Pj as a proportion of time and recalling Section 1.10 on transition rates. But we can show that fact directly; we do so next. And it leads to a set of equations that allow us to solve for the limiting distribution when it does exist. A direct analysis of transition rates and how they lead to the balance equa-tions Letting N (e) j (t) denote the number of times during (0, t] that {X(t)} entered state j, and N (d) j (t) denote the number of times during (0, t] that {X(t)} departed state j, we have (recalling Equation 6) wp1 that the long-run rate entering state j equals the long-run rate departing state j and is given by wp1 lim t→∞ N (e) j (t) t = lim t→∞ N (d) j (t) t = 1 E(Tjj). The above is just the elementary renewal theorem applied to the renewal process of the times the chain visits state j, with iid interarrival times the lengths of time between visits to state j, having mean E(Tjj). Moreover, from Proposition 1.1, we have Pj = E(Hj) E(Tjj) = 1/aj E(Tjj), which implies that 1 E(Tjj) = ajPj, j ∈S. We conclude that Proposition 1.4 The long-run rate entering state j equals the long-run rate departing state j and is given by the product ajPj, j ∈S. 14 Definition 1.3 The balance equations for a positive recurrent CTMC are given by ajPj = X i̸=j aiPiPij, j ∈S, (16) which in matrix form are given by ⃗ PQ = ⃗ 0, (17) where Q = P ′(0) is the transition rate matrix from Section 1.10. From Proposition 1.4, the balance equations in words are given by “the rate out of state j is equal to the rate into state j, for each state j”. The left hand side of (16) is the rate out of state j (ajPj). To get into state j, the chain has to come from the other states i ̸= j, and so the right hand side sums up (over all other states i ̸= j ) the rate that the chain departs state i (aiPi) and moves next to state j (Pi,j); that yields on the right hand side the (total) rate that the chain enters state j. Theorem 1.1 An irreducible (and non-explosive) CTMC is positive recurrent if and only if there is a (necessarily unique) probability solution ⃗ P to the balance equations ⃗ PQ = ⃗ 0. The solution satisfies Pj > 0, j ∈S and is the limiting (stationary) distribution as defined in Equations (1)-(4). Proof : Suppose that the chain is positive recurrent. Then from Proposition 1.1 and Proposition 1.2, there is a unique limiting probability distribution ⃗ P and it is a stationary distribution; ⃗ PP(t) = ⃗ P, t ≥0. Taking the derivative at t = 0 on both sides of ⃗ PP(t) = ⃗ P yields ⃗ PQ = ⃗ 0, the balance equations. Conversely, suppose that ⃗ P is a probability solution to the balance equations. We will first show that any such solution must also satisfy ⃗ PP(t) = ⃗ P, t ≥0, that is, it is a stationary distribution. We then will show that if an irreducible chain has such a stationary distribution, then the chain must be positive recurrent. To this end: Suppose that ⃗ PQ = ⃗ 0. Multiplying both right sides by P(t) yields ⃗ PQP(t) = ⃗ 0, which due to the Kolmogorov Backward equations, P ′(t) = QP(t), is equivalent to ⃗ PP ′(t) = ⃗ 0 which is equivalent to d(⃗ PP(t)) dt = ⃗ 0. But this implies that ⃗ PP(t) is a constant in t and hence that ⃗ PP(t) = ⃗ PP(0) = ⃗ PI = ⃗ P; ⃗ P is indeed a stationary distribution. Now suppose that the chain is not positive recurrent. 
For an irreducible CTMC, all states together are transient, positive recurrent, or null recurrent, so the chain must be either null recurrent or transient and hence by Proposition 1.1, we have lim t→∞ 1 t Z t 0 P(s)ds = 0. (18) 15 Multiplying both sides on the left by ⃗ P yields lim t→∞ 1 t Z t 0 ⃗ PP(s)ds = 0. (19) But using the already established ⃗ PP(t) = ⃗ P we have 1 t R t 0 ⃗ PP(s)ds = 1 t R t 0 ⃗ Pds = ⃗ P and we end with a contradiction ⃗ P = 0 (⃗ P is a probability distribution by assumption). Finally, from Proposition 1.2 we know that there can only be one stationary distribution for a positive recurrent chain, the limiting distribution as defined in Equations (1)-(4), so we conclude that ⃗ P here is indeed the limiting distribution. As for discrete-time Markov chains, when the state space is finite, we obtain a useful and simple special case: Theorem 1.2 An irreducible CTMC with a finite state space is positive recurrent; there is always a unique probability solution to the balance equations. Proof : Suppose (without loss of generality) the state space is S = {1, 2, . . . , b} for some integer b ≥1. We already know that the chain must be recurrent because the embedded chain is so. We also know that the embedded chain is positive recurrent because for finite state discrete-time chains irreducibility implies positive recurrence. Let τ1,1 denote the discrete return time to state 1, and let T1,1 denote the corresponding continuous return time. We know that E(τ1,1) < ∞. Also, T1,1 is a random sum of τ1,1 holding times, starting with H1. Let a∗= min{a1, . . . , ab}. Then a∗> 0 and every holding time Hi satisfies E(Hi) ≤1/a∗< ∞, i ∈{1, 2, . . . , b}. Letting {Yn} denote iid exponential rvs at rate a∗, independent of τ1,1, we conclude (Wald’s Equation) that E(T1,1) ≤E  τ1,1 X n=1 Yn  = E(τ1,1)(1/a∗) < ∞. 1.13 Examples of setting up and solving balance equations Here we apply Theorems 1.1 and 1.2 to a variety of models. In most cases, solving the resulting balance equations involves recursively expressing all the Pj in terms of one particular one, P0 (say), then solving for P0 by using the fact that P j∈S Pj = 1. In the case when the state space is infinite, the sum is an infinite sum that might diverge unless further restrictions on the system parameters (rates) are enforced. 1. FIFO M/M/1 queue: X(t) denotes the number of customers in the system at time t. Here, irreducibility is immediate since as pointed out earlier, the embedded chain 16 is a simple random walk (hence irreducible), so, from Theorem 1.1, we will have positive recurrence if and only if we can solve the balance equations (16): λP0 = µP1 (λ + µ)P1 = λP0 + µP2 (λ + µ)P2 = λP1 + µP3 . . . (λ + µ)Pj = λPj−1 + µPj+1, j ≥1. These equations can also be derived from scratch as follows: Given X(t) = 0, the rate out of state 0 is the arrival rate a0 = λ, and the only way to enter state 0 is from state i = 1, from which a departure must occur (rate µ). This yields the first equation. Given X(t) = j ≥1, the rate out of state j is aj = λ + µ (either an arrival or a departure can occur), but there are two ways to enter such a state j: either from state i = j −1 (an arrival occurs (rate λ) when X(t) = j −1 causing the transition j −1 →j), or from state i = j +1 (a departure ocurrs (rate µ) when X(t) = j causing the transition j + 1 →j). This yields the other equations. 
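Before solving these balance equations in closed form (done next), they can be checked numerically. The sketch below is mine; the truncation of the state space at N customers is purely a numerical device, not part of the model, and the rates are arbitrary illustrative values.

```python
import numpy as np

def mm1_stationary_truncated(lam, mu, N):
    """Solve pi Q = 0, sum(pi) = 1 for the M/M/1 generator truncated at N customers."""
    Q = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        if j < N:
            Q[j, j + 1] = lam          # arrival:   j -> j + 1 at rate lambda
        if j > 0:
            Q[j, j - 1] = mu           # departure: j -> j - 1 at rate mu
        Q[j, j] = -Q[j].sum()          # diagonal chosen so that each row sums to 0
    # Stack the normalization sum(pi) = 1 onto pi Q = 0 and solve by least squares.
    A = np.vstack([Q.T, np.ones(N + 1)])
    b = np.zeros(N + 2)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = mm1_stationary_truncated(lam=1.0, mu=2.0, N=50)
print(pi[:5])   # compare with the geometric solution P_j = (1 - rho) rho^j derived below
```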
Note that since λP0 = µP1 (first equation), the second equation reduces to λP1 = µP2 which in turn causes the third equation to reduce to λP2 = µP3, and in general the balance equations reduce to λPj = µPj+1, j ≥0, (20) which asserts that for each j, the rate from j to j + 1 equals the rate from j + 1 to j, or Pj+1 = ρPj, j ≥0, from which we recursivly obtain P1 = ρP0, P2 = ρP1 = ρ2P0 and in general Pj = ρjP0. Using the fact that the probabilities must sum to one yields 1 = P0 ∞ X j=0 ρj, from which we conclude that there is a solution if and only if the geometric series converges, that is, if and only if ρ < 1, equivalently λ < µ, “the arrival rate is less than the service rate”, in which case 1 = P0(1 −ρ)−1, or P0 = 1 −ρ. Thus Pj = ρj(1 −ρ), j ≥0 and we obtain a geometric stationary distribution. Summarizing: 17 The FIFO M/M/1 queue is positive recurrent if and only if ρ < 1 in which case its stationary distribution is geometric with paramater ρ; Pj = ρj(1−ρ), j ≥0. (If ρ = 1 it can be shown that the chain is null recurrent, and transient if ρ > 1.) When ρ < 1 we say that the M/M/1 queue is stable, unstable otherwise. Stability intuitively means that the queue length doesn’t grow without bound over time. When the queue is stable, we can take the mean of the stationary distribution to obtain the average number of customers in the system l = lim t→∞ 1 t Z t 0 X(s)ds (21) = ∞ X j=0 jPj (22) = ∞ X j=0 j(1 −ρ)ρj (23) = ρ 1 −ρ. (24) 2. Birth and Death processes: The fact that the balance equations for the FIFO M/M/1 queue reduced to “for each state j, the rate from j to j + 1 equals the rate from j + 1 to j” is not a coincidence, and in fact this reduction holds for any Birth and Death process. For in a Birth and Death process, the balance equations are: λ0P0 = µ1P1 (λ1 + µ1)P1 = λ0P0 + µ2P2 (λ2 + µ2)P2 = λ1P1 + µ3P3 . . . (λj + µj)Pj = λj−1Pj−1 + µj+1Pj+1, j ≥1. Plugging the first equation into the second yields λ1P1 = µ2P2 which in turn can be plugged into the third yielding λ2P2 = µ3P3 and so on. We conclude that for any Birth and Death process, the balance equations reduce to λjPj = µj+1Pj+1, j ≥0, the Birth and Death balance equations. (25) Solving recursively, we see that Pj = P0 λ0 × · · · × λj−1 µ1 × · · · × µj = P0 j Y i=1 λi−1 µi , j ≥1. 18 Using the fact that the probabilities must sum to one then yields: An irreducible Birth and Death process is positive recurrent if and only if ∞ X j=1 j Y i=1 λi−1 µi < ∞, in which case P0 = 1 1 + P∞ j=1 Qj i=1 λi−1 µi , and Pj = Qj i=1 λi−1 µi 1 + P∞ j=1 Qj i=1 λi−1 µi , j ≥1. (26) For example, in the M/M/1 model, 1 + ∞ X j=1 j Y i=1 λi−1 µi = ∞ X j=0 ρj, which agrees with our previous analysis. We note in passing that the statement “for each state j, the rate from j to j + 1 equals the rate from j + 1 to j” holds for any deterministic function x(t), t ≥0, in which changes of state are only of magnitude 1; up by 1 or down by 1. Arguing along the same lines as when we introduced the balance equations, every time this kind of function goes up from j to j + 1, the only way it can do so again is by first going back down from j+1 to j. Thus the number of times during the interval (0, t] that it makes an “up” transition from j to j + 1 differs by at most one, from the number of times during the interval (0, t] that it makes a “down” transition from j + 1 to j. We conclude (by dividing by t and letting t →∞) that the long-run rate at which the function goes from j to j + 1 equals the long-run rate at which the function goes from j + 1 to j. 
Of course, as for the balance equations, being able to write this statement simply as λjPj = µj+1Pj+1 crucially depends on the Markov property 3. M/M/1 loss system: This is the M/M/1 queueing model, except there is no waiting room; any customer arriving when the server is busy is “lost”, that is, departs without being served. In this case S = {0, 1} and X(t) = 1 if the server is busy and X(t) = 0 if the server is free. P01 = 1 = P10; the chain is irreducible. Since the state space is finite we conclude from Theorem 1.2 that the chain is positive recurrent for any λ > 0 and µ > 0. We next solve for P0 and P1. We let ρ = λ/µ. There is only one balance equation, λP0 = µP1. So P1 = ρP0 and since P0+P1 = 1, we conclude that P0 = 1/(1 + ρ), P1 = ρ/(1 + ρ). So the long-run proportion of time that the server is busy is ρ/(1 + ρ) and the long-run proportion of time that the server is free (idle) is 1/(1 + ρ). 19 4. M/M/∞queue: X(t) denotes the number of customers (busy servers) in the system at time t. Being a Birth and Death process we need only consider the Birth and Death balance equations (25) which take the form λPj = (j + 1)µPj+1, j ≥0. Irreducibility follows from the fact that the embedded chain is an irreducible simple random walk, so positive recurrence will follow if we can solve the above equations. As is easily seen by recursion, Pj = ρj/j!P0. Forcing these to sum to one (via using the Taylor’s series expansion for ex), we obtain 1 = eρP0, or P0 = e−ρ. Thus Pj = e−ρρj/j! and we end up with the Poisson distribution with mean ρ: The M/M/∞queue is always positive recurrent for any λ > 0, µ > 0; its stationary distribution is Poisson with mean ρ = λ/µ. The above result should not be surprising, for we already studied (earlier in this course) the more general M/G/∞queue, and obtained the same stationary dis-tribution. But because we now assume exponential service times, we are able to obtain the result using CTMC methods. (For a general service time distribution we could not do so because then X(t) does not form a CTMC; so we had to use other, more general, methods.) 5. M/M/c loss queue: This is the M/M/c model except there is no waiting room; any arrival finding all c servers busy is lost. This is the c−server analog of Example 3. With X(t) denoting the number of busy servers at time t, we have, for any λ > 0 and µ > 0, an irreducible B&D process with a finite state space S = {0, . . . , c}, so positive recurrence follows from Theorem 1.2. The B&D balance equations (25) are λPj = (j + 1)µPj+1, 0 ≤j ≤c −1, or Pj+1 = Pjρ/(j + 1), 0 ≤j ≤c −1; the first c equations for the FIFO M/M/∞ queue. Solving we get Pj = ρj/j!P0, 0 ≤j ≤c, and summing to one yields 1 = P0  1 + c X j=1 ρj j!  = P0  c X j=0 ρj j!  , yielding P0 =  c X j=0 ρj j! −1 . Thus Pj = ρj j!  c X n=0 ρn n! −1 , 0 ≤j ≤c. (27) 20 In particular Pc = ρc c!  c X n=0 ρn n! −1 , (28) the proportion of time that all servers are busy. Later we will see from a result called PASTA, that Pc is also the proportion of lost customers, that is, the proportion of arrivals who find all c servers busy. This turns out to be a very famous/celebrated queueing theory result because the solution in (27), in particular the formula for Pc in (28), turns out to hold even if the service times are not exponential (the M/G/c-loss queue), a result called Erlang’s Loss Formula. 6. 
Population model with family immigration: Here we start with a general B&D process (birth rates λi, death rates µi), but allow another source of population growth, in addition to the births. Suppose that at each of the times from a Poisson process at rate γ, independently, a family of random size B joins the population (immigrates). Let bi = P(B = i), i ≥ 1, denote the corresponding family size probabilities. Letting X(t) denote the population size at time t, we no longer have a B&D process now since the arrival of a family can cause a jump larger than size one. The balance equations ("the rate out of state j equals the rate into state j") are:

(λ0 + γ)P0 = µ1P1
(λ1 + µ1 + γ)P1 = (λ0 + γb1)P0 + µ2P2
(λ2 + µ2 + γ)P2 = γb2P0 + (λ1 + γb1)P1 + µ3P3
. . .
(λj + µj + γ)Pj = λ_{j−1}P_{j−1} + µ_{j+1}P_{j+1} + Σ_{i=0}^{j−1} γ b_{j−i} Pi, j ≥ 1.

The derivation is as follows: When X(t) = j, any one of three events can happen next: a death (rate µj), a birth (rate λj) or a family immigration (rate γ). This yields the rate out of state j. There are j additional ways to enter state j, besides a birth from state j − 1 or a death from state j + 1, namely from each state i < j a family of size j − i could immigrate (rate γ b_{j−i}). This yields the rate into state j.

1.14 Transitions back into the same state; Pi,i > 0

In our study of CTMC's we have inherently been assuming that Pi,i = 0 for each i ∈ S, but this is not necessary, as we illustrate here. Suppose that 0 < Pi,i < 1. Assume X0 = i and let K denote the total number of transitions (visits) to state i before making a transition out to another state. Since X0 = i, we count this initial visit as one such visit. Then P(K = n) = (1 − p)^{n−1} p, n ≥ 1, where p = 1 − Pi,i. Letting Yn denote iid exponential rvs at rate ai (the holding time rate), we can represent the total holding time HT in state i as an independent geometric sum

HT = Σ_{n=1}^{K} Yn.

In particular E(HT) = E(K)/ai = 1/(p ai). In fact HT ∼ exp(p ai), as is easily seen by deriving its Laplace transform:

E(e^{−sHT}) = p ai / (p ai + s), s ≥ 0.

(Condition on K first.) Thus, we can reset ai = p ai, reset Pi,i = 0 and reset Pi,j = Pi,j/p for j ≠ i. This yields the same CTMC {X(t)} (e.g., it has the same distribution), but for which Pi,i = 0. In any case, even if we keep Pi,i > 0, as long as one is consistent (on both sides of the balance equations), then the same balance equations arise in the end. We illustrate with a simple example: a CTMC with two states, 0, 1, and embedded chain transition matrix

P = ⎡ 0.25  0.75 ⎤
    ⎣ 0.20  0.80 ⎦ .

a0 > 0 and a1 > 0 are given non-zero holding-time rates. By definition, ai is the holding time rate when in state i, meaning that after the holding time Hi ∼ exp(ai) is completed, the chain will make a transition according to the transition matrix P = (Pij). If we interpret a transition j → j as both a transition out of and into state j, then the balance equations are

a0P0 = (0.25)a0P0 + (0.20)a1P1
a1P1 = (0.75)a0P0 + (0.80)a1P1.

As the reader can check, these equations reduce to the one equation (0.75)a0P0 = (0.20)a1P1, which is what we get if we were to instead interpret a transition j → j as neither a transition into nor out of state j. Resetting the parameters as explained above means resetting a0 = (0.75)a0, a1 = (0.20)a1 and P to

P = ⎡ 0  1 ⎤
    ⎣ 1  0 ⎦ .

So, it makes no difference as far as {X(t)} is concerned.³ This is how it works out for any CTMC.

³But there might be other associated stochastic processes that will become different by making this change.
For example, in queueing models, allowing Pi,i > 0 might refer to allowing customers to return to the end of the queue for another round after completing service. By resetting Pi,i = 0, we are forcing the customer to re-enter service immediately for the extra round instead of waiting at the end of the queue. This of course would effect quantities of interest such as average waiting time. 22 1.15 Poisson Arrivals See Time Averages (PASTA) For a stable M/M/1 queue, let πa j denote the long-run proportion of arrivals who, upon arrival, find j customers already in the system. If X(t) denotes the number in system at time t, and tn denotes the time of the nth Poisson arrival, then πa j def = lim N→∞ 1 N N X n=1 I{X(tn−) = j}, where X(tn−) denotes the number in system found by the nth arrival. On the one hand, λπa j is the long-run rate (number of times per unit time) that X(t) makes a transition j →j + 1. After all, arrivals occur at rate λ, and such transitions can only happen when arrivals find j customers in the system. On the other hand, from the B&D balance equations (20), λPj is also the same rate in question. Thus λπa j = λPj, or πa j = Pj, j ≥0, which asserts that the proportion of Poisson arrivals who find j customers in the system is equal to the proportion of time there are j customers in the system. This is an example of Poisson Arrivals See Time Averages (PASTA), and it turns out that PASTA holds for any queueing model in which arrivals are Poisson, no matter how complex, as long as a certain (easy to verify) condition, called LAC, holds. (Service times do not need to have an exponential distribution, they can be general, as in the M/G/∞queue.) Moreover, PASTA holds for more general quantities of interest besides number in system. For example, the proportion of Poisson arrivals to a queue who, upon arrival, find a particular server busy serving a customer with a remaining service time exceeding x (time units) is equal to the proportion of time that this server is busy serving a customer with a remaining service time exceeding x. In general, PASTA will not hold if the arrival process is not Poisson. To state PASTA more precisely, let {X(t) : t ≥0} be any stochastic process, and ψ = {tn : n ≥0} a Poisson process. Both processes are assumed on the same probability space. We have in mind that X(t) denote the state of some “queueing” process with which the Poisson arriving “customers” are interacting/participating. The state space S can be general such as multi-dimensional Euclidean space. We assume that the sample-paths of X(t) are right-continuous with left-hand limits. 4 The lack of anticipation condition (LAC) that we will need to place on the Poisson process asserts that for each fixed t > 0, the future increments of the Poisson process 4A function x(t), t ≥0, is right-continuous if for each t ≥0, x(t+) def = limh↓0 X(t + h) = x(t). It has left-hand limits if for each t > 0, x(t−) def = limh↓0 x(t −h) exists (but need not equal x(t)). If x(t−) ̸= x(t+), then the function is said to be discontinuous at t, or have a jump at t. Queueing processes typically have jumps at arrval times and departure times. 23 after time t, {N(t + s) −N(t) : s ≥0}, be independent of the joint past, {(N(u), X(u)) : 0 ≤u ≤t}. This condition is stronger than the independent increments property of the Poisson process, for it requires that any future increment be independent not only of its own past but of the past of the queueing process as well. 
If the Poisson process is completely independent of the queueing process, then LAC holds, but we have in mind the case when the two processes are dependent via the arrivals being part of and participating in the queueing system. Let f(x) be any bounded real-valued function on S, and consider the real-valued process f(X(t)). We are now ready to state PASTA. (The proof, ommitted, is beyond the scope of this course.) Theorem 1.3 (PASTA) If the Poisson process satisfies LAC, then w.p.1., lim N→∞ 1 N N X n=1 f(X(tn−)) = lim t→∞ 1 t Z t 0 f(X(s))ds, in the sense that if either limit exists, then so does the other and they are equal. A standard example when X(t) is the number of customers in a queue, would be to let f denote an indicator function; f(x) = I{x = j}, so that f(X(t)) = I{X(t) = j}, and f(X(tn−)) = I{X(tn−) = j}. This would, for example, yield πa j = Pj for the M/M/1 queue. The reader should now go back to Example 5 in Section 1.13, the M/M/c-loss queue, where we first mentioned PASTA in the context of Erlang’s Loss Formula. 1.16 Multi-dimensional CTMC’s So far we have assumed that a CTMC is a one-dimensional process, but that is not necessary. All of the CTMC theory we have developed in one-dimension applies here as well (except for the Birth and Death theory). We illustrate with some two-dimensional examples, higher dimensions being analogous. 1. Tandem queue: Consider a queueing model with two servers in tandem: Each customer, after waiting in line and completing service at the first single-server facility, immediately waits in line at a second single-server facility. Upon completion of the second service, the customer finally departs. in what follows we assume that the first facility is a FIFO M/M/1, and the second server has exponential service times and also serves under FIFO, in which case this system is denoted by FIFO M/M/1/ − →/M/1. Besides the Poisson arrival rate λ, we now have two service times rates (one for each server), µ1 and µ2. Service times at each server are assumed i.i.d. and independent of each other and of the arrival process. 24 Letting X(t) = (X1(t), X2(t)), where Xi(t) denotes the number of customers in the ith facility, i = 1, 2, it is easily seen that {X(t)} satisfies the Markov property. This is an example of an irreducible two-dimensional CTMC. Balance equations (rate out of a state equals rate into the state) can be set up and used to solve for stationary probabilities. Letting Pn,m denote the long-run proportion of time there are n customers at the first facility and m at the second (a joint probability), λP0,0 = µ2P0,1, because the only way the chain can make a transion into state (0, 0) is from (0, 1) (no one is at the first facility, exactly one customer is at the second facility, and this one customer departs (rate µ2)). Similarly when n ≥1, m ≥1, (λ + µ1 + µ2)Pn,m = λPn−1,m + µ1Pn+1,m−1 + µ2Pn,m+1, because either a customer arrives, a customer completes service at the first facility and thus goes to the second, or a customer completes service at the second facility and leaves the system. The remaining balance equations are also easily derived. Letting ρi = λ/µi, i = 1, 2, it turns out that the solution is Pn,m = (1 −ρ1)ρn 1 × (1 −ρ2)ρm 2 , n ≥0, m ≥0, provided that ρi < 1, i = 1, 2. This means that as t →∞, X1(t) and X2(t) become independent r.v.s. each with a geometric distribution. 
This result is quite surprising because, after all, the two facilities are certainly dependent at any time t, and why should the second facility have a stationary distribution as if it were itself an M/M/1 queue? (For example, why should departures from the first facility be treated as a Poisson process at rate λ?) The proof is merely a “plug in and check” proof using Theorem 1.2: Plug in the given solution (e.g., treat it as a “guess”) into the balance equations and verify that they work. Since they do work, they are the unique probability solution, and the chain is positive recurrent. It turns out that there is a nice way of understanding part of this result. The first facilty is an M/M/1 queue so we know that X1(t) by itself is a CTMC with stationary distribution Pn = (1 −ρ1)ρn 1, n ≥0. If we start offX1(0) with this stationary distribution (P(X1(0) = n) = Pn, n ≥0), then we know that X1(t) will have this same distribution for all t ≥0, that is, {X1(t)} is stationary. It turns out that when stationary, the departure process is itself a Poisson process at rate λ, and so the second facility (in isolation) can be treated itself as an M/M/ 1 queue when {X1(t)} is stationary. This at least explains why X2(t) has the geometric stationary distribution, (1 −ρ2)ρm 2 , m ≥0, but more analysis is required to prove the independence part. 2. Jackson network: 25 Consider two FIFO single-server facilities (indexed by 1 and 2), each with exponen-tial service at rates µ1 and µ2 respectively. For simplicity we refer to each facility as a “node”. Each node has its own queue with its own independent Poisson arrival process at rates λ1 and λ2 respectively. Whenever a customer completes service at node i = 1, 2, they next go to the queue at node j = 1, 2 with probability Qij, independent of the past, or depart the system with probability Qi,0, where the state 0 refers to departing the system, and we require that Q0,0 = 1, an absorbing state. We always assume that states 1 and 2 are transient, and state 0 is absorbing. So typically, a customer gets served a couple of times, back and forth between the two nodes before finally departing. In general, we allow feedback, which means that a customer can return to a given node (perhaps many times) before departing the system. The tandem queue does not have feedback; it is the special case when Q1,2 = 1 and Q2,0 = 1 and λ2 = 0, an example of a feedforward network. In general, Q = (Qij) is called the routing transition matrix, because it represents the transi-tion matrix of a Markov chain. Letting X(t) = (X1(t), X2(t)), where Xi(t) denotes the number of customers in the ith node, i = 1, 2, {X(t)} yields an irreducible CTMC. Like the tandem queue, it turns out that the stationary distribution for the Jackson network is of the product form Pn,m = (1 −ρ1)ρn 1 × (1 −ρ2)ρm 2 , n ≥0, m ≥0, provided that ρi < 1, i = 1, 2. Here ρi = λi µi E(Ni), where E(Ni) is the expected number of times that a customer attends the ith facility. E(Ni) is completely determined by the routing matrix Q: Each customer, independently, is routed according to the discrete-time Markov chain with transition matrix Q, and since 0 is absorbing (and states 1 and 2 transient), the chain will visit each state i = 1, 2 only a finite number of times before getting absorbed. Notice that αi = λiE(Ni) represents the total arrival rate to the ith node. So ρi < 1, i = 1, 2, just means that the total arrival rate must be smaller than the service rate at each node. 
As with the tandem queue, the proof can be carried out by the “plug in and check” method. The αi can be computed equivalently as the solution to the flow equations: αi = λi + X j αjQj,i, i, j ∈{1, 2}. Letting QT = (Qj,i), i, j ∈{1, 2}, denote the 2 × 2 matrix without the absorbing state 0 included, the flow equations in matrix form are ⃗ α = ⃗ λ + ⃗ αQT, 26 with solution ⃗ α = ⃗ λ(I −QT)−1. We recognize that (I −QT)−1 = S = (si,j), where si,j denotes the expected number of times the discrete-time chain visits transient state j given it started in transient state i, i = 1, 2. 27
arXiv:2507.02426v1 [math.AG] 3 Jul 2025

The Classification of the Stable Marked Reduction of Genus 2 Curves in Residue Characteristic 2

Tim Gehrunger
Department of Mathematics, ETH Zürich, 8092 Zürich, Switzerland
[email protected]

July 4, 2025

Abstract

Consider a hyperelliptic curve of genus 2 over a field K of characteristic zero. After extending K we can view it as a marked curve with its 6 Weierstrass points. We classify the structure of the stable reduction of such curves for a valuation of residue characteristic 2 over a finite extension of K. We implement this classification into a computer algebra system and compute it for all curves defined over Q with conductor at most 2^20.

MSC 2020 classification: 14H30 (14H10, 11G20)

Contents
1 Introduction 3
2 Set-up 5
  2.1 Models of curves 6
  2.2 Hyperelliptic curves 6
  2.3 Thickness bound 7
3 A classification for genus 2 curves 8
  3.1 Classification with thickness 8
  3.2 Computing δ 17
4 Examples 18
  4.1 Implementation 18
  4.2 Curves with small conductor 19

1 Introduction

1.1 Motivation and strategy: Let K be a valued field of characteristic 0 and residue characteristic 2. Moreover, let C be a hyperelliptic curve over K, that is, a curve defined by a Weierstrass equation z^2 = F(x) for some polynomial F. After extending K if necessary, the ramification points of the canonical double covering π : C ↠ C̄ ≅ P^1_K are defined over K and there exists a stable marked model C of C with these points marked. The special fiber of this model encodes many arithmetic properties of C. In , Richard Pink and the author of this article provide an algorithm that describes C explicitly. For genus 2, we gave a classification of the combinatorial structure of the special fiber C0 of C. In this article, we enrich the classification by the thicknesses of the double points of C0. Moreover, we reprove part of the classification using elementary arguments, making it independent of the criterion of Liu in , which was previously used. Finally, we implement the classification into SageMath and compute it for all curves defined over Q with conductor at most 2^20.

In the case of residue characteristic ≠ 2, the notion of the cluster picture as described in is equivalent to the combinatorial structure of C0. In that same article, several invariants of C such as the conductor exponent, valuation of the minimal discriminant and the Tamagawa number are expressed in terms of the cluster picture and the action of Gal(L/K) on it, where L/K is the minimal extension over which C admits stable reduction. We expect that the classification presented in this article will eventually lead to similar results. However, the combinatorial complexity is much larger, as there are 54 possibilities for the dual graph of C0 in residue characteristic 2, where for residue characteristic ≠ 2 there are only 7 possibilities.

1.2 Overview: We now explain the content of this article in greater detail. First, we reduce the general case throughout to the case that K is algebraically closed.
Let R denote its valuation ring with maximal ideal m and k = R/m its residue field.

Classification. Let C be a genus 2 curve and write C̄ for the stable marked model of C̄ with the 6 branch points of π marked. We denote the special fiber of C̄ by C̄0. As established in , the reduction type of C depends only on the dual graph of C0 and the thicknesses of its double points, as well as on δ, which is the valuation of a certain expression in the coefficients of F. There are 54 different reduction types, denoted (A1)–(A3), (B1)–(B11), (C1)–(C6), (D1)–(D24), (E1), (F1)–(F3) and (G1)–(G6). Using the thickness bound from [8, Prop. 5.1.1], we compute the thicknesses of C0 and express these in terms of the thicknesses of C̄0, denoted by α, β, γ, ε, as well as δ. Moreover, we use this result to distinguish between the cases (B9) and (B10), (D19) and (D20), and (D23) and (D24), which previously required invoking Liu's criterion. In summary, we prove:

Theorem A (= Theorem 3.1.2). There are 54 cases for the reduction type of C0. The space of parameters (α, β, γ, δ, ε) decomposes into half-open polyhedral regions¹ Pi ⊂ R⁵_{⩾0} associated to these 54 reduction types such that a curve C is of reduction type i if and only if the corresponding parameters (α, β, γ, δ, ε) are contained in Pi. The thicknesses of the double points of C0 only depend on (α, β, γ, δ, ε).

¹ Here we mean regions in Euclidean space described by a finite set of linear equalities and strict linear inequalities.

Computing δ using its definition requires x = 0 and x = ∞ to be Weierstrass points of C. Obtaining such a Weierstrass equation is often impractical and computationally expensive. Let J8 be the eighth Igusa invariant of F as defined by Igusa in , where we require that F is normalised as in Subsection 3.1. Define δ′ := v(J8)/8 + 2.

Theorem B (= Theorem 3.2.1). The same half-open polyhedral regions from Theorem 3.1.2 describe the type of C0 in terms of (α, β, γ, δ′, ε), and the thicknesses of C0 are given by the same formulas, where δ is replaced by δ′.

Computing δ′ instead of δ for a curve given by any Weierstrass equation merely requires using the transformation properties of J8 and avoids having to compute roots of the equation.

Implementation and curves with small conductor. Using this reformulation of the genus 2 classification in terms of δ′, we implement the classification into SageMath. Given a Weierstrass equation of a curve defined over Q2, it returns δ′, the thicknesses of the double points of C̄0 and the type of reduction of C0. For the thicknesses of the double points of C0, we use the existing implementation of the cluster picture package in .

We then use our implementation to compute the classification for all genus 2 curves with conductor at most 2^20 defined over Q. It turns out that 53 out of the 54 cases of the classification are realized for such curves, missing only (D21). The case (D19), whose arithmetic conditions on the thicknesses of C0 and δ are similar to those of (D21), is the rarest of the other 53 cases, and the curves realizing it have by far the largest mean conductor exponent out of all the cases. Hence we expect that there exist curves of case (D21) defined over Q with conductor exceeding 2^20, which leads us to the following conjecture:

Conjecture C (= Conjecture 4.2.1). All 54 reduction types of the genus 2 classification are realised over Q.

Finally, we plot the behaviour of the conductor exponent and the valuation of the minimal discriminant for some one-dimensional families.
1.4 Structure of the paper: Section 2 contains preparatory material: In Subsections 2.1 and 2.2 we review basic facts about semistable and stable marked models of hyperelliptic curves over R. In the final Subsection 2.3 we state the thickness bound that is later used to refine the classification. In Section 3 we turn to the genus 2 classification. In Subsection 3.1, we present the refined classification including the thicknesses of the double points of C0. In Subsection 3.2, we define δ′ and prove that replacing δ by δ′ does not change the classification. In Subsection 4.1, we explain and present the implementation of the classification into SageMath. Finally, in Subsection 4.2, we list the results for the curves with conductor at most 2^20.

1.5 Relation with other work: This article builds on , which was written by Richard Pink and the author of this article and where the classification that is extended in this article was established. Historically, semistable reductions of hyperelliptic curves have mainly been studied and constructed in residue characteristic ≠ 2, see for example the construction of Bosch in . A more recent approach including a classification for genus 2 curves is the article by Dokchitser, Dokchitser, Maistret, and Morgan, which describes the special fiber in their notion of cluster pictures and uses the classification to compute many arithmetic invariants. Moreover, in , Liu gives criteria for the type of the stable reduction of the unmarked curve C in terms of Igusa invariants. This result is first proved in the setting of char(k) ≠ 2 and carries over to the wild case by a moduli argument. In , Dokchitser and Morgan show that the cluster picture controls the local arithmetic for hyperelliptic curves with good ordinary reduction in the case of residue characteristic 2. For genus 2, this corresponds to case (D1), which is the only case in our classification where the cluster picture carries the same information as the reduction type of C0.

Acknowledgments. First and foremost, I want to thank Richard Pink and Robert Nowak for many helpful and interesting conversations regarding the content of this paper. I am very grateful to Raymond van Bommel for his help with the cluster picture package and to Andrew Booker and Andrew Sutherland for granting me access to their data set of genus 2 curves before the release of their own article. Last but not least, I thank Johannes Schmitt for many valuable comments on earlier versions of this paper.

2 Set-up

This article uses the notation and conventions of and , which we will briefly review in this section. Throughout this article let R1 be a complete discrete valuation ring with quotient field K1. At several places we will need to replace R1 by its integral closure in a finite extension of K1. As in , we find it more convenient to work over an algebraic closure instead. So we fix an algebraic closure K of K1 and let R denote the integral closure of R1 in K. Since R1 is complete, the valuation on K1 extends to a unique valuation with values in Q on K, whose associated valuation ring is R. Let v denote a corresponding valuation on K, and let m be the maximal ideal and k := R/m the residue field of R. By [11, Thm 2.1.5] we can work over the single field K and avoid cumbersome changes of notation. From now on we assume that K has characteristic 0 and k has characteristic 2. We normalize the valuation v on K in such a way that v(2) = 1.
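As a small computational aside (mine, not the paper's): restricted to nonzero rational numbers, this normalized valuation is simply the usual 2-adic valuation, which is easy to compute directly.

from fractions import Fraction

def v(q):
    """2-adic valuation of a nonzero rational number, normalized so that v(2) = 1."""
    q = Fraction(q)
    if q == 0:
        raise ValueError("v(0) is undefined (or +infinity by convention)")
    n, num, den = 0, q.numerator, q.denominator
    while num % 2 == 0:
        num, n = num // 2, n + 1
    while den % 2 == 0:
        den, n = den // 2, n - 1
    return n

print(v(12), v(Fraction(3, 8)))   # 2, -3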
For every integer n ⩾ 1 we fix an n-th root 2^{1/n} ∈ K in a compatible way, such that for all n, m ⩾ 1 we have (2^{1/mn})^m = 2^{1/n}. For any rational number α = m/n we then set 2^α := (2^{1/n})^m. This defines a group homomorphism Q → K^×, which by the normalization of v satisfies v(2^α) = α.

2.1 Models of curves

Let Y be a connected smooth proper algebraic curve over K. By a model of Y we mean a flat and finitely presented curve Y over R with generic fiber Y. We call such a model semistable if the special fiber Y0 is smooth except possibly for finitely many ordinary double points. Every double point p ∈ Y0 then possesses an étale neighborhood which is étale over Spec R[x, y]/(xy − a) for some nonzero a ∈ m, such that p corresponds to the point x = y = 0. Here the valuation v(a) depends only on the local ring of Y at p, for instance by Liu [14, §10.3.2 Cor. 3.22]. Following Liu [14, §10.3.1 Def. 3.23] we call v(a) the thickness of p.

Any model is an integral separated scheme. Thus for any two models Y and Y′ over R, the identity morphism on Y extends to at most one morphism Y → Y′. If this morphism exists, we say that Y dominates Y′. This defines a partial order on the collection of all models of Y up to isomorphism. By blowing up one model one can construct many other models that dominate it.

2.2 Hyperelliptic curves

Now let C be a hyperelliptic curve of genus 2 over K. Thus C is a connected smooth proper algebraic curve which comes with a double covering π : C ↠ C̄ of a rational curve C̄ ≅ P¹_K. As K has characteristic 0, the covering π is only tamely ramified, and by the Hurwitz formula it is ramified at precisely 6 closed points, namely at the Weierstrass points of C. Let P1, ..., P6 ∈ C(K) denote these points and P̄1, ..., P̄6 ∈ C̄(K) their images under π. Since 2g + 2 ⩾ 4, both (C, P1, ..., P6) and (C̄, P̄1, ..., P̄6) are stable marked curves. Let (C, P1, ..., P6) and (C̄, P̄1, ..., P̄6) be the stable models of (C, P1, ..., P6) and (C̄, P̄1, ..., P̄6), respectively. By general theory, the quotient Ĉ of C by the hyperelliptic involution is a semistable marked model of (C̄, P̄1, ..., P̄6) which dominates C̄.

In , Richard Pink and the author of this article developed an algorithm to compute the model C for arbitrary genus, which starts with C̄ and also computes Ĉ. When the residue characteristic of K is not 2, the morphism Ĉ ↠ C̄ is an isomorphism and the combinatorial structure of C0 only depends on C̄0, see for example . In the article by Dokchitser, Dokchitser, Maistret, and Morgan, the special fiber C̄0 is described in their notion of cluster pictures and several arithmetic properties of C are computed in terms of it.

For reference we collect the schemes that we have introduced in the following diagram. Recall that we have natural morphisms C ↠ Ĉ ↠ C̄ that are compatible with the given sections. We let (C0, p1, ..., p6), (Ĉ0, p̂1, ..., p̂6) and (C̄0, p̄1, ..., p̄6) denote the special fibers of (C, P1, ..., P6), (Ĉ, P̂1, ..., P̂6) and (C̄, P̄1, ..., P̄6), respectively.

(2.2.1) [Commutative diagram: each of the curves C, Ĉ, C̄ over Spec K is the generic fiber of the corresponding model over Spec R, with special fibers C0, Ĉ0, C̄0 over Spec k; the vertical maps C ↠ Ĉ ↠ C̄ are compatible with the maps of models and of special fibers.]

To describe the relation between the special fibers Ĉ0 and C̄0, we use the terminology concerning the type of an irreducible component of Ĉ0 from [11, Def 2.3.2].
In particular, an irreducible component of Ĉ0 is called
• of type (a) if it maps isomorphically to an irreducible component of C̄0;
• of type (b) if it lies between irreducible components of type (a);
• of type (c) if it is not of type (a) or (b) and is not a leaf;
• of type (d) if it is not of type (a) or (b) and is a leaf.

We also divide the double points of C̄0 into two classes. For this recall that C̄0 is marked with 6 distinct points in the smooth locus. As the complement of a double point consists of two connected components, this divides the 6 marked points into two groups.

Definition 2.2.2. A double point p̄ of C̄0 is called even if each connected component of C̄0 ∖ {p̄} contains an even number of the points p̄1, ..., p̄6. Otherwise, it is called odd.

2.3 Thickness bound

Let X be an irreducible component of C̄0. Moreover, let x be a global coordinate of C̄ along X and consider a hyperelliptic equation z² = F(x) of C with F ∈ R[x^{±1}] and v(F) = 0. In [8, Thm.-Def. 2.4.1], the square defect of X, denoted by w(X), was defined as

w(X) := min{2, sup{v(F − H²) | H ∈ R[x^{±1}]}}.

This depends only on X and is independent of the choice of x and F.

Now consider a closed smooth unmarked point p̄ ∈ X. Suppose that there exists an irreducible component of Ĉ0 above p̄. Fix such a component T̂ of type (d). Let T be its inverse image in C0 and write g(T) for the genus of T. Furthermore, let Y be the semistable model of C̄ obtained from blowing down all components of type (c) and (d) above p̄ other than T̂. Then Y has a unique double point q̄ above p̄ of thickness ε. By [8, Prop. 5.1.1], we have

(2.3.1)   ε ⩽ (2 − w(X)) / (2g(T) + 1).

Moreover, equality holds if and only if T̂ is the only component of type (d) above p̄.

3 A classification for genus 2 curves

3.1 Classification with thickness

In , a classification of the stable marked reduction for genus g = 2 curves was given. Using inequality (2.3.1) from [8, Prop. 5.1.1], we refine this classification by also providing the thicknesses of double points. In all cases but (A2), (B9), (D18) and (D23), this is a straightforward computation using the data already provided in the worksheets of . The detailed computations for these more difficult cases can be found in . Moreover, the verification of the classification there partly relied on the stable reduction criteria for the unmarked curve C from Liu [15, Thm. 1] in terms of Igusa invariants. This was precisely needed for distinguishing between (B9) and (B10), (D19) and (D20), and (D23) and (D24). In the worksheets, we provide an elementary way to derive the stable marked reduction in these cases, making the proof of the classification independent of Liu's work.

As a reminder, note that in the case of g = 2 the morphism C ↠ C̄ has 6 branch points and we start with the associated stable marked model (C̄, P̄1, ..., P̄6) of C̄. The seven possibilities for the combinatorial structure of its closed fiber (C̄0, p̄1, ..., p̄6) are shown in Figure 1. Filled circles signify even double points and empty circles odd double points. The arrows indicate the ways that one type can degenerate into another. We observe that C̄0 has at most three even double points and at most one odd double point, and that the combinatorial structure is invariant under symmetries interchanging the former transitively. Let α ⩾ β ⩾ γ ⩾ 0 denote the thicknesses of the even double points and ε ⩾ 0 that of the odd double point, where we interpret 0 as the thickness of a double point that does not exist.
Moreover, each even double point is connected to a unique leaf component with exactly two marked points. After possibly interchanging the marked points we can assume that P̄1 meets this component for the double point of thickness α if α > 0. We also assume that P̄2 has maximal distance from P̄1, that is, that the sum of the thicknesses of the double points between them is maximal. Then P̄2 must meet the leaf component containing the double point of thickness β if β > 0. We identify C̄ with P¹_K in such a way that P̄1 is identified with 0 and P̄2 with ∞. Then C is defined by an equation of the form

(3.1.1)   z² = F(x) = ax + bx² + cx³ + dx⁴ + ex⁵

with F ∈ K[x] separable of degree 5.

[Figure 1: The possibilities for (C̄0, p̄1, ..., p̄6) in genus 2 — the seven types (A)–(G), with arrows indicating degenerations.]

Rescaling x and z by factors in K^×, we can now arrange to have v(F) = 0 and

v(a) = α + 2ε,
v(e) = β,
v(disc(F)) = 2α + 2γ + 6ε,
v(b) ⩾ ε, with v(b) = ε if α > 0,

or equivalently

α = v(a) − 2ε,
β = v(e),
γ = ½·v(disc(F)) − α − 3ε,
ε = min{v(b), ½·v(disc(F)) − v(a)},

where disc(F) denotes the discriminant of F. With the equation in this form we choose a square root √(bd) of bd and set

δ := v(c − 2√(bd)).

It turns out that the combinatorial structure of (C0, p1, ..., p6) depends only on the values of α, β, γ, δ and ε, which are all ⩾ 0. Computations show that we always have δ ⩾ min{2, γ}, with equality if γ < min{β, 2}.

All in all, the seven cases from Figure 1 divide into 54 subcases for the combinatorial structure of (C0, p1, ..., p6). In each subcase we draw irreducible components of type (a) in black, those of type (b) in orange, those of type (c) in green, and those of type (d) in blue. We also label any irreducible component of genus g′ > 0 by g = g′, while all irreducible components of genus 0 remain unlabeled. The thickness of a double point is written in red. In summary, we prove:

Theorem 3.1.2. There are 54 cases for the reduction type of C0. The space of parameters (α, β, γ, δ, ε) decomposes into half-open polyhedral regions² Pi ⊂ R⁵_{⩾0} associated to these 54 reduction types such that a curve C is of reduction type i if and only if the corresponding parameters (α, β, γ, δ, ε) are contained in Pi. The thicknesses of the double points of C0 only depend on (α, β, γ, δ, ε).

² Here we mean regions in Euclidean space described by a finite set of linear equalities and strict linear inequalities.

[Figure 2: The possibilities for (C0, p1, ..., p6) in the case (A), with subcases (A1) δ = 0, (A2) 0 < δ < 4/5, (A3) δ ⩾ 4/5.]

Case (A): This is the case of "equidistant geometry" of Lehr-Matignon , that is, where C̄0 is smooth. Hence we have α = β = γ = ε = 0, and so the combinatorial structure of C0 depends only on δ. The 3 possible subcases are sketched in Figure 2.

Case (B): Here C̄0 has exactly one even double point of thickness α > 0, and we have β = γ = ε = 0. The combinatorial structure of C0 depends only on α and δ. The 11 possible subcases are sketched in Figure 3.
[Figure 3: The possibilities for (C0, p1, ..., p6) in the case (B), subcases (B1)–(B11), distinguished by conditions on α and δ.]

Case (C): Here C̄0 has two even double points of respective thicknesses α ⩾ β > 0, and we have γ = δ = ε = 0. It turns out that the combinatorial structure of C0 over each double point is the same as in the case of genus 1 and depends only on the thickness of that double point. The 6 possible subcases are sketched in Figure 4.

[Figure 4: The possibilities for (C0, p1, ..., p6) in the case (C), with subcases (C1) 4 > α ⩾ β, (C2) 4 = α > β, (C3) α = 4 = β, (C4) α > 4 > β, (C5) α > 4 = β, (C6) α ⩾ β > 4.]

Case (D): Here C̄0 has one irreducible component without marked points in the middle, which is connected by double points of thicknesses α ⩾ β ⩾ γ > 0 to leaf components with two marked points each. We also have ε = 0, and the combinatorial structure of C0 depends only on α, β, γ and δ. The 24 possible subcases are sketched in Figures 5 and 6.

[Figure 5: The first 12 possibilities for (C0, p1, ..., p6) in the case (D), subcases (D1)–(D12), distinguished by conditions on α, β, γ and δ.]

[Figure 6: The remaining 12 possibilities for (C0, p1, ..., p6) in the case (D), subcases (D13)–(D24), distinguished by conditions on α, β, γ and δ.]

Cases (E–G): Here C̄0 has an odd double point p̄ of thickness ε > 0, and we always have γ = δ = 0.
The combinatorial structure of C0 depends only on the values α ⩾ β ⩾ 0. It turns out that the situation on each side of p̄ is the same as for the reduction of a curve of genus 1, and that the two sides are independent of each other.

Case (E): Here we have α = β = 0, and there is a single subcase only, which is sketched in Figure 7.

[Figure 7: The single possibility for (C0, p1, ..., p6) in the case (E), subcase (E1).]

Case (F): Here we have α > β = 0. The 3 possible subcases are sketched in Figure 8.

[Figure 8: The possibilities for (C0, p1, ..., p6) in the case (F), with subcases (F1) α < 4, (F2) α = 4, (F3) α > 4.]

Case (G): Here we have α ⩾ β > 0. The 6 possible subcases are sketched in Figure 9.

[Figure 9: The possibilities for (C0, p1, ..., p6) in the case (G), with subcases (G1) 4 > α ⩾ β, (G2) 4 = α > β, (G3) α = 4 = β, (G4) α > 4 > β, (G5) α > 4 = β, (G6) α ⩾ β > 4.]

From the above results, one can also determine the closed fiber C0^st of the stable reduction of the unmarked curve C. The list of possible cases and their names are taken from Liu [15, Th. 1]. There the case distinctions were given in terms of Igusa invariants, which are complicated polynomials in the coefficients of F. Our results yield relatively simple conditions in terms of the numbers α, β, γ, δ alone. The seven cases for C0^st are sketched in Figure 10.

[Figure 10: The possibilities (I)–(VII) for the stable reduction of the unmarked curve, distinguished by conditions on α, β, γ and δ.]

3.2 Computing δ

Let F and x be as in Section 3.1. Let J8 be the eighth Igusa invariant of F as defined by Igusa in . Define

δ′ := v(J8)/8 + 2.

Theorem 3.2.1. The same half-open polyhedral regions from Theorem 3.1.2 describe the type of C0 in terms of (α, β, γ, δ′, ε), and the thicknesses of C0 are given by the same formulas, where δ is replaced by δ′.

Proof. We have
\begin{aligned}
2^{16} J_8 ={}& 3(c-2\sqrt{bd})^8 + 48\sqrt{b}\sqrt{d}\,(c-2\sqrt{bd})^7 + (-208ae + 304db)(c-2\sqrt{bd})^6 \\
&+ \bigl(960 b^{3/2}d^{3/2} - 2496\sqrt{b}\sqrt{d}\,ae + 96ad^2 + 96b^2e\bigr)(c-2\sqrt{bd})^5 \\
&+ \bigl(1536 d^2b^2 - 10464\,dbae + 960 d^{5/2}\sqrt{b}\,a + 960 b^{5/2}\sqrt{d}\,e - 1632 a^2e^2\bigr)(c-2\sqrt{bd})^4 \\
&+ \bigl(1024 b^{5/2}d^{5/2} - 17152 b^{3/2}d^{3/2}ae + 3072 d^3ba + 3072 db^3e - 13056\sqrt{bd}\,a^2e^2 - 1152 d^2a^2e - 1152 b^2ae^2\bigr)(c-2\sqrt{bd})^3 \\
&+ \bigl(-6144 d^2b^2ae + 3072 d^{7/2}b^{3/2}a + 3072 b^{7/2}d^{3/2}e - 29184\,db\,a^2e^2 - 6912 d^{5/2}\sqrt{b}\,a^2e - 6912 b^{5/2}\sqrt{d}\,ae^2 \\
&\qquad + 14080 a^3e^3 + 512 d^4a^2 + 512 b^4e^2\bigr)(c-2\sqrt{bd})^2 \\
&+ \bigl(6144 b^{5/2}d^{5/2}ae - 12288 b^{3/2}d^{3/2}a^2e^2 - 8192 d^3b\,a^2e - 8192 db^3ae^2 + 56320\sqrt{b}\sqrt{d}\,a^3e^3 \\
&\qquad + 2048 d^{9/2}\sqrt{b}\,a^2 + 2048 b^{9/2}\sqrt{d}\,e^2 - 25600 d^2a^3e^2 - 25600 b^2a^2e^3\bigr)(c-2\sqrt{bd}) \\
&- 83200 a^4e^4 + 5120 b^4ae^3 + 5120 d^4a^3e + 1792 d^2b^2a^2e^2 + 125440\,db\,a^3e^3 \\
&+ 2048 d^{7/2}b^{3/2}a^2e + 2048 b^{7/2}d^{3/2}ae^2 - 51200 d^{5/2}\sqrt{b}\,a^3e^2 - 51200 b^{5/2}\sqrt{d}\,a^2e^3.
\end{aligned}

We proceed by comparing the valuations of 2^16 J8 and (c − 2√(bd)) in each of the 54 cases and show that replacing δ by δ′ always yields the same case of the classification in 3.1, and that δ = δ′ whenever δ appears in a formula for the thickness of a double point. These are straightforward computations, which can be found in . □
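As an aside (mine, not part of the paper and not its SageMath package), the valuation formulas of Subsection 3.1 are easy to evaluate directly once a curve is already given in the normalized form (3.1.1). The sketch below does this over Q with sympy; the function name abgde_normalized is my own, and it assumes a separable quintic with nonzero rational coefficients and b·d a perfect square in Q (otherwise the square root, and hence δ, lives in an extension of Q_2).

import sympy as sp

def v2(q):
    """2-adic valuation of a nonzero rational number q, normalized so that v(2) = 1."""
    num, den = sp.fraction(sp.Rational(q))
    num, den, n = int(num), int(den), 0
    if num == 0:
        raise ValueError("v(0) is undefined")
    while num % 2 == 0:
        num, n = num // 2, n + 1
    while den % 2 == 0:
        den, n = den // 2, n - 1
    return sp.Rational(n)

def abgde_normalized(a, b, c, d, e):
    """(alpha, beta, gamma, delta, epsilon) for z^2 = a*x + b*x^2 + c*x^3 + d*x^4 + e*x^5,
    ASSUMING the equation is already in the normalized form (3.1.1)."""
    x = sp.symbols('x')
    disc = sp.discriminant(a*x + b*x**2 + c*x**3 + d*x**4 + e*x**5, x)
    # formulas from Subsection 3.1
    eps = min(v2(b), v2(disc) / 2 - v2(a))
    alpha = v2(a) - 2 * eps
    beta = v2(e)
    gamma = v2(disc) / 2 - alpha - 3 * eps
    # delta = v(c - 2*sqrt(b*d)) for one choice of the square root
    u = sp.Rational(c) - 2 * sp.sqrt(sp.Rational(b) * sp.Rational(d))
    delta = sp.oo if u == 0 else v2(u)
    return alpha, beta, gamma, delta, eps

Note that this only applies to equations already normalized as in Subsection 3.1; for an arbitrary Weierstrass equation, such as the ones in Example 4.1.1 below, it will not produce the classification data directly, which is exactly why the paper works with δ′ and the cluster picture instead.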
4 Examples

4.1 Implementation

In [9, genus2 classification.sage], we provide an implementation of the genus 2 classification into SageMath. Here, we choose to compute δ′ instead of δ as this is computationally cheaper. Moreover, we use the cluster picture package for the computation of α, β, γ, ε. The main functions to call are:

• AlphaBetaGammaDeltaEpsilonComputation(f) returns a list of the form [α, β, γ, δ, ε] for the curve defined by y² = f(x).
• Genus2ClassificationLabel([alpha, beta, gamma, delta, epsilon]) returns the case of the classification from Section 3.1.
• ABGDEWithLabel(List) returns a list of the form [α, β, γ, δ, ε, 'label'], where the label is the case of the classification from Section 3 for the curve defined by y² + y·h(x) = f(x), where List = [coeffs(f), coeffs(h)].

Example 4.1.1
sage: AlphaBetaGammaDeltaEpsilonComputation(x^5+x^4-4*x^3-10*x^2+12*x)
[1, 1/2, 1/2, 1, 0]
sage: AlphaBetaGammaDeltaEpsilonComputation(x^5-1)
[0, 0, 0, +Infinity, 0]
sage: Genus2ClassificationLabel([1, 1/2, 1/2, 1, 0])
'D19'
sage: Genus2ClassificationLabel([0, 0, 0, +Infinity, 0])
'A3'
sage: ABGDEWithLabel(())
[1/2, 1/2, 1/2, 1, 0, 'D23']
sage: ABGDEWithLabel(())
[0, 0, 0, +Infinity, 0, 'A3']

4.2 Curves with small conductor

In an upcoming article , Booker and Sutherland list all genus 2 curves defined over Q with conductor at most 2^20, of which there are 6 216 958. Using our implementation, we compute α, β, γ, δ, ε and the label for each of these curves. Once the article is published, we will add a file containing the curves together with the values we computed to .

In Figure 11, we plot the frequency of the labels from the genus 2 classification. It turns out that 53 of the 54 cases of our classification are obtained by the curves in , only missing case (D21). The case (D19), whose arithmetic conditions on α–δ are very similar to those of (D21), is the rarest of the appearing cases, only appearing for 3961 curves and with large conductors. A possible explanation for this is that the conditions on α–δ for (D19) and (D21) force the conductor to be large for a curve defined over Q. We expect that there exist curves of case (D21) defined over Q but have not yet found such a curve. In summary:

Conjecture 4.2.1. All 54 reduction types of the genus 2 classification are realized over Q.

[Figure 11: Label frequencies for the genus 2 curves with conductor at most 2^20 (bar chart over the labels (A1)–(G6)).]

For a genus 2 curve C/Q, we denote the 2-adic valuation of its conductor N as the conductor exponent. Brumer and Kramer in [4, Thm. 6.2] show that the conductor exponent of a genus 2 curve defined over Q is at most 20. In Figure 12, we plot the conductor exponent values for the reduction types from the classification in Subsection 3.1. It is likely that these are not all possible values for curves defined over Q. In particular, for curves in the data set, conductor exponent 20 can only be obtained if the conductor is equal to 2^20. We proceed by plotting the conductor exponent and the valuation of the minimal discriminant for some cases that only depend on one parameter. For a curve C, we denote its conductor by N and its minimal discriminant by D.
As explained in the introduction, in residue characteristic p ≠ 2 the cluster picture as defined in , together with the action of Gal(K/Q) for the minimal extension over which the curve admits stable reduction, determine the p-adic valuation of these. As our classification is finer than the cluster picture, we hope that the same might be true for p = 2.

[Figure 12: Conductor exponent values by cases for the genus 2 curves with conductor at most 2^20.]

In Figures 13 and 14, we plot the valuation of the minimal discriminant and the conductor exponent against α in case (C5).

[Figure 13: Valuation of the discriminant for curves in case (C5), plotted against α.]

[Figure 14: Conductor exponent values for curves in case (C5), plotted against α.]

Similarly, in Figures 15 and 16, we plot δ′ against the valuation of the minimal discriminant and the conductor exponent in case (A) for δ′ ⩽ 1. We choose this bound on δ′ since it implies δ = δ′.

[Figure 15: Valuation of the discriminant for curves in case (A) with δ′ ⩽ 1.]

[Figure 16: Conductor exponent values for curves in case (A) with δ′ ⩽ 1.]

References

[1] Alex J. Best et al. "A user's guide to the local arithmetic of hyperelliptic curves". In: Bulletin of the London Mathematical Society 54.3 (2022), pp. 825–867.
[2] A. R. Booker and A. V. Sutherland. "Genus 2 curves of small conductor". In preparation.
[3] Siegfried Bosch. "Formelle Standardmodelle hyperelliptischer Kurven". In: Math. Ann. 251.1 (1980), pp. 19–42. doi: 10.1007/BF01420278.
[4] Armand Brumer and Kenneth Kramer. "The conductor of an abelian variety". In: Compositio Mathematica 92.2 (1994), pp. 227–248.
[5] Tim Dokchitser, Vladimir Dokchitser, Céline Maistret, and Adam Morgan. "Arithmetic of hyperelliptic curves over local fields". In: Math. Ann. 385.3-4 (2023), pp. 1213–1322. doi: 10.1007/s00208-021-02319-y.
[6] Vladimir Dokchitser and Adam Morgan. "A note on hyperelliptic curves with ordinary reduction over 2-adic fields". In: Journal of Number Theory 244 (2023), pp. 264–278.
[7] Vladimir Dokchitser, Adam John Morgan, Tim Dokchitser, and Celine Maistret. "Semistable types of hyperelliptic curves". In: Algebraic curves and their applications. Ed. by Lubjana Beshaj and Tony Shaska. Vol. 724. Contemporary Mathematics. American Mathematical Society, Dec. 2018, pp. 73–136.
[8] Tim Gehrunger. Computing the Stable Reduction of Hyperelliptic Curves in Residue Characteristic 2. 2025. arXiv: 2506.19663 [math.AG].
[9] Tim Gehrunger. Worksheets for "The Classification of the Stable Marked Reduction of Genus 2 curves in residue characteristic 2".
[10] Tim Gehrunger and Richard Pink. Reduction of Hyperelliptic Curves in Characteristic ≠ 2. 2021. arXiv: 2112.05550 [math.AG].
[11] Tim Gehrunger and Richard Pink. Reduction of Hyperelliptic Curves in Residue Characteristic 2. 2024. arXiv: 2404.14214 [math.AG].
[12] Jun-Ichi Igusa. "Arithmetic Variety of Moduli for Genus Two". In: Annals of Mathematics 72.3 (1960), pp. 612–649.
[13] Claus Lehr and Michel Matignon. "Wild monodromy and automorphisms of curves". In: Duke Mathematical Journal 135.3 (2006), pp. 569–586.
[14] Qing Liu. Algebraic Geometry and Arithmetic Curves. Oxford Graduate Texts in Mathematics. Oxford University Press, 2002.
[15] Qing Liu. "Courbes stables de genre 2 et leur schéma de modules". In: Mathematische Annalen 295.1 (1993), pp. 201–222.
[16] The Sage Developers. SageMath, the Sage Mathematics Software System (Version 9.5). 2025.
Geometry | Simple City
===============

How Many Sides Does A Circle Have?
Posted by richardelwes on September 6, 2019. Posted in: Geometry, Maths.

It's a primary school homework question which causes disagreement among seasoned mathematicians. In order to find the correct answer, I called upon that most rigorous of scientific instruments, the Twitter poll:

In retrospect I regret not including a "none of the above" option, but more on that later. In this post I'm going to go through these three answers (and "None of the above"), and discuss their pros and cons as I see them, before dramatically revealing the correct response. But first: why can't we straightforwardly give the right answer? The words in the question are hardly mysterious. We all know what a "circle" is, what it means to count "how many" of something, and what a "side" is… or don't we?

Here are (rough) definitions I distilled from discussions with two primary school students who had been on the receiving end of this question:

1. A line forming part of the boundary of a plane figure.

The purpose of saying plane figure rather than "shape" here is that we want shapes that live in 2 dimensions (e.g. squares or circles, but not spheres or cubes). The next question is what a "line" is in definition 1. Here's a variant which pins that down:

2. A straight line forming part of the boundary of a plane figure.

If you type "define: side" into Google, the most relevant definition is: a part or region near the edge and away from the middle of something. "a minibus was parked at the side of the road" synonyms: edge, border, verge, boundary, margin, fringe, fringes, flank, brink, bank, brim, rim, lip, perimeter, circumference, extremity, periphery, limit, outer limit, limits, bound, bounds,… antonyms: centre, heart, end

3. Each of the lines forming the boundary of a plane rectilinear figure. "the farm buildings formed three sides of a square"

A rectilinear figure is one constructed from straight lines. So this definition is a further refinement of definition 2, and allows us to affirm that a square has four sides, but on the face of it has nothing to say about non-rectilinear plane figures such as circles.

Infinitely Many Sides?

I think it's a safe bet that the respondents to my Twitter poll have a higher level of mathematical education than the national average. The fact that they were split on this question at all, and that a small majority selected an answer which is pretty well unavailable to this question's usual audience (primary school students), certainly suggests something going wrong somewhere. So, does a circle have infinitely many sides? It is definitely useful to consider a circle as the limit of n-sided polygons as n gets bigger and bigger. This is exactly the approach that Archimedes, Liu Hui, and countless others have used down the centuries to study circular geometry, including coming up with approximations for π.

A 16th century Ming dynasty edition of the Jiuzhang suanshu (Nine Chapters on the Mathematical Art), third century CE.

Sometimes it is absolutely sensible, as a convenient shorthand, to think of a circle as being like a polygon with infinitely many sides.
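As a quick aside of my own (not from the post), that limiting picture is easy to see numerically: the half-perimeter of a regular n-gon inscribed in the unit circle is n·sin(π/n), which creeps up towards π as n grows.

import math

# Half-perimeter of a regular n-gon inscribed in the unit circle: n*sin(pi/n).
# As n grows, this approaches pi, the half-circumference of the circle.
for n in (6, 12, 96, 1000, 10**6):
    print(n, n * math.sin(math.pi / n))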
But, as ~~an insufferable pedant~~ a mathematician, I'd want to distinguish between convenient shorthand and literal truth. If we're adamant that a circle really is a polygon with infinitely many sides, then the question presents itself: what are the sides? And surely the only plausible answer is: the individual points of the circle. How long are these so-called sides? Zero centimetres. And are these sides separated by corners? Apparently not: either there are no corners at all, or every point is both a side and a corner. I'd say sides of length zero are… a problematic concept. How can you tell whether you have got any? For instance, suppose I'm studying a system where a square arises as a limit of octagons like this:

In this situation, it might well make sense for me to think of my square as having eight sides, four of which have length zero. But if I was to insist that my (perfectly ordinary) square really does have eight sides, you might raise an eyebrow. So this – the winning answer in my poll – is the only one I am going to declare to be definitively wrong, while it is also the only one which offers any geometrical insight at all. A paradox? Not really. Reasoning by analogy is a valuable skill in mathematics and in life; at the same time it is important to hold on to the realisation that that is what we are doing.

For infinitely many sides: geometrically illuminating. Against infinitely many sides: eight-sided squares.

Off on a tangent 1: apeirogons

Even if a circle isn't one, are there such things as polygons with infinitely many sides? Well, there is a word to describe such a thing: an apeirogon. A regular apeirogon would then have sides of equal (non-zero) length with equal angles between them. The only option here is this stupendously unexciting object:

If you object to this as being a "polygon" (either because of the angles of 180° or the chain of edges not closing in a loop), how about something like this: start at the bottom of a circle, and at each stage move around half of what remains of the circle, and replace the arc you've just travelled with a straight edge:

Is this a genuine polygon? Once again it depends on your terms. According to one common definition, that of a "closed polygonal chain", this fails to qualify since the starting corner (bottom left) only connects to one edge. But it's a very near miss: that point is the limit of a sequence of edges from the right, making this shape a "non-self-intersecting piecewise linear closed curve", another definition of polygon that people use. If we leave our usual Euclidean world and enter hyperbolic space, then there is no ambiguity. Apeirogons (even regular apeirogons) simply exist:

A tiling of the hyperbolic plane by regular apeirogons. (By Anton Sherwood – Own work, Public Domain.)

Off on a tangent 2: extreme points

It might be more defensible to say that a circle has infinitely many corners than infinitely many sides (although this is not a question that seems to get asked very often). To start with, if a corner of a square is a point at which its boundary line is not straight, then every point on the circle satisfies that. More sophisticatedly, there is a notion of an extreme point of a shape: that is, any point through which you can draw a segment of straight line which touches the shape only at that exact point. For a square and many familiar shapes the extreme points exactly coincide with the corners.
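Here is a rough computational gloss of my own (not from the post): the extreme points of a finite set of points are exactly the vertices of its convex hull, so every corner of a square shows up, and so does every point sampled from a circle.

import math
import numpy as np
from scipy.spatial import ConvexHull

# The four corners of a square: all four are extreme points (hull vertices).
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(len(ConvexHull(square).vertices))      # 4

# Points sampled on a circle: every sampled boundary point is an extreme point.
n = 100
circle = np.array([[math.cos(2 * math.pi * k / n),
                    math.sin(2 * math.pi * k / n)] for k in range(n)])
print(len(ConvexHull(circle).vertices))      # 100 -- and this grows without bound with n

For a non-convex shape like the chevron discussed next, the reflex corner would not appear among the hull vertices, matching the observation that it is not an extreme point.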
Every point on the boundary of the circle is an extreme point, so it is certainly true that a circle has infinitely many. We might worry that some shapes such as this chevron have corners which are not extreme points:

Here the lower central corner is not an extreme point (the other three corners are). What's going wrong is that this shape is not convex (roughly, it has some bits sticking out too far). A circle is convex, so perhaps we needn't worry. Alternatively, we might remedy the situation by defining a "corner" to be a point which is an extreme point either of the shape in question or of its complement, i.e. the whole plane with the shape cut out of it. That approach would detect corners of all polygons, including the chevron. For smooth curves, it would identify all boundary points as "corners" except for inflection points (which is not unreasonable since we might argue that the boundary is straight there).

One Side?

In primary school, it seems that "one" is the answer that gets the tick. And there is a moderately decent justification. Remember definition 1 above:

1. A line forming part of the boundary of a plane figure.

The immediate question is what counts as a "line", especially if we are not insisting on straightness. If we are too relaxed about this, then any plane figure could be said to have "one side", in the same sense that it has one boundary, perimeter, or circumference. But this has got to be wrong, as we surely want a square to have four. Well, a square has four points where it is not smooth, with four smooth sections in between. Perhaps it was really the smooth sections that we were counting all along. So implicitly we have a new refinement of definition 1 (and also take the opportunity to ditch the vague term "figure"):

4. Each smooth section of a piecewise-smooth closed curve.

A "closed curve" is one which loops around to meet itself so that it has no free ends. "Piecewise-smooth" means that it is built from smooth sections, which meet at isolated unsmooth points. It is perfectly legitimate to want to count the smooth sections of such a shape's boundary, and it's by no means outrageous to use the word "side" when doing that. So I'm certainly not saying this is definitively the wrong answer. The question is whether that interpretation of "side" is not merely coherent, but natural enough that it can simply be presumed without being stated explicitly (which it seldom if ever is).

What happens when smoothness and straightness tally up differently? Consider this tombstone shape, created by replacing the top of a square with a semicircle of equal diameter. This has two smooth sections (the bottom line and the rest) but three straight edges (plus a curved piece smoothly joining two of them). So how many sides does it have? I consulted my Twitter friends again:

This time I should have included "infinitely many" as an option, although that can be absorbed into "None of the above". Anyone voting that the circle has infinitely many sides should automatically vote "None of the above" here, unless – an important caveat – the nature of this shape indicates to the reader a different notion of "side". The fact that the most popular choices in these two polls are incompatible suggests this may be the case (or at least reinforces that the waters are muddy). Although two is a perfectly respectable answer, compatible with definition 4 above and with a circle's one-sidedness, I am not satisfied it is definitively the right one, or that three or four are categorically wrong.
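To make the trade-off concrete, here is a rough sketch of my own (with arbitrary tolerances, not from the post): it walks a sampled version of the tombstone's boundary, counts the corners where the direction turns sharply (which separate the smooth sections), and counts the maximal runs of collinear segments (the straight edges).

import math

def tombstone(n_edge=50, n_arc=200):
    """Closed polyline around the unit square whose top edge is replaced by a
    semicircle of the same diameter, traversed anticlockwise."""
    pts = []
    pts += [(i / n_edge, 0.0) for i in range(n_edge)]                    # bottom edge
    pts += [(1.0, i / n_edge) for i in range(n_edge)]                    # right edge
    pts += [(0.5 + 0.5 * math.cos(math.pi * i / n_arc),
             1.0 + 0.5 * math.sin(math.pi * i / n_arc))
            for i in range(n_arc)]                                       # semicircular cap
    pts += [(0.0, 1.0 - i / n_edge) for i in range(n_edge)]              # left edge
    return pts

def turn_angles(pts):
    """Absolute exterior angle at each vertex of the closed polyline."""
    angles, n = [], len(pts)
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = pts[i - 1], pts[i], pts[(i + 1) % n]
        a = math.atan2(y1 - y0, x1 - x0)
        b = math.atan2(y2 - y1, x2 - x1)
        angles.append(abs((b - a + math.pi) % (2 * math.pi) - math.pi))
    return angles

def count_sides(pts, sharp_tol=math.radians(10), flat_tol=1e-9):
    angles = turn_angles(pts)
    n = len(pts)
    corners = sum(1 for a in angles if a > sharp_tol)        # sharp turns = smooth-section breaks
    breaks = [i for i, a in enumerate(angles) if a > flat_tol]
    run_lengths = [(breaks[(k + 1) % len(breaks)] - breaks[k]) % n or n
                   for k in range(len(breaks))]
    straight_edges = sum(1 for r in run_lengths if r >= 2)    # runs of >= 2 collinear segments
    return corners, straight_edges

corners, straight = count_sides(tombstone())
print("sharp corners (= number of smooth sections):", corners)   # 2
print("maximal straight edges:", straight)                       # 3, plus the leftover arc

Depending on which of these you call "sides", the sampled tombstone comes out as two-sided, or as three straight edges plus a leftover curved stretch, which is exactly the ambiguity discussed next.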
It depends what you want to count: smooth sections, straight edges, or straight edges plus whatever's then leftover, any of which might be the answer you want depending on context (more on this below). Relatedly, I am not sure counting the number of smooth sections fully matches my intuition of the word "side". After all, the tombstone's two upright sections are – I think it is fair to say – "on opposite sides". Are we really content that they are simultaneously part of the "same side"? You might protest that I am conflating two different meanings of "side", that terminology sometimes clashes, and we just have to live with it. I am not so sure, though. The point of this exercise is to extrapolate from a situation (rectilinear figures) where the two notions mesh pretty well. If there was a new idea which captured everything we liked about the original but also applied to a broader category of shapes, then that would have an overwhelming claim to being the unique right answer. But if all our attempts at generalisation involve sacrificing desirable aspects of the original, then perhaps there is no single correct generalisation. There are different choices, with different trade-offs, which might be suitable in different contexts (and if we're in a situation where more than one are in play, then they could helpfully be given different names).

Here's another variation: a Weierstrass tombstone, created by replacing the top edge of a square by a section of the Weierstrass function, an infinitely wiggly line which isn't smooth anywhere. Here (and spot the typo) is what my Twitter friends made of this – although fewer ventured an opinion:

Notwithstanding the scepticism of my Twitter followers, I'll explain in a minute why I don't think it's silly to see this as having four sides (one of which is non-smooth). On the other hand, if you prefer your sides smooth, then you again have a choice between seeing it as having infinitely many sides (three of which have length 1, and the rest having length 0), or having 3 sides plus a stretch of definitely-not-a-side boundary.

For one side: a single smooth curve. Against one side: the same side on opposite sides.

Off on a tangent 3: sides versus edges

How many sides does a square have? Four. How many edges does it have? Four. So are edges and sides the same thing? Not necessarily. Here are two configurations which are – at least arguably – each four-sided but have 5 and 3 edges respectively:

Usually, I would say, an "edge" is a topological object, in that its function, not its shape, is what matters. Think of the London tube map. If you asked how many edges there are in that network, there's no merit in totting up straight or smooth sections. It's connections between stations (or vertices) that count. As already mentioned, it's common to think of a polygon as a very simple sort of network called a closed polygonal chain: a string of vertices (in this case the corners of the polygon), linked with edges, in such a way that every vertex lies on exactly two edges, and the whole thing forms a single loop. In this situation edges and sides coincide, as do vertices and corners. But in general you can break this concurrence, as in the two small networks above. If you want to think about things network-theoretically, but the vertices aren't clearly marked, then you have to guess where they are. With a polygon this is easy – the vertices are at the corners – which is why switching between geometric and topological approaches comes so naturally.
But with other shapes, such as either of the tombstones above, it may not be so obvious. Nevertheless, in each case if you were told there were vertices in there somewhere, and were asked to locate them, I think it would be sensible to guess that there are four, namely the corners of the original square, and that the top edge has for some reason been represented as a non-straight line. And if we do want to think about things that way, with each of the tombstones having 4 edges, then it might seem perverse (although logically coherent!) to insist that they have some other number of sides (especially as the top side is – despite its own geometry – clearly "on one side" of the figure). In fact, rather than guessing, one of my Twitter correspondents asked me "Have both top vertices been removed?", a question which only makes sense from a network-theoretic perspective.

Where does this leave the circle? The trouble is that no point on the circle has a better claim to be a vertex than any other. So although it is tempting (and again coherent) to view a circle as a network with one edge, if we are going to insert vertices, there is no obvious reason to prefer one to any other number. Could we view it as a network with no vertices at all, a sort of Tube line with no stations? The usual mathematical conception of a network would not allow that, but that shouldn't deter us too much. This suggests a purely topological approach. The trouble is that from that point of view, while a circle may be a sort of network with no vertices and one edge, so too is a square (if that's how the Tube line happens to be laid out). In topology, a square is a circle. (This is not a paradox; it is simply saying that the boundary is a single loop, whose shape doesn't matter.) So while this sort of network has "one edge", obviously a square does not have "one side", so the relationship between sides (geometric) and edges (topological) has again broken down, just as it does in the two little networks pictured above. So this approach doesn't take us very far forward.

No Sides?

It might seem paradoxical to argue that a circle (or any shape) has "no sides". But the argument for the defence is straightforward. We return to definition 2:

2. A straight line forming part of the boundary of a plane figure.

This is a simple, easily understandable phrase that perfectly captures the sides of a square. We have failed to find a satisfactory generalisation of this to curved figures, so the best thing to do is stick with the original. And a circle doesn't have any.

For no sides: true, according to a sensible notion of "side". Against no sides: sounds like a Zen koan.

None of the Above?

Recall the definition supplied by Google:

3. Each of the lines forming the boundary of a plane rectilinear figure.

Attempting to apply this to a circle – a non-rectilinear figure – produces nothing. The question is as meaningless as "How many sides does Monday have?" Since definition 3 is the most official (the only one in this post not made up by me or my children), doesn't that make "None of the Above" categorically the right answer? Maybe. On the other hand: when someone poses us a question, the principle of charity perhaps requires us to assume that it is meaningful unless we can firmly establish otherwise, and definitions 1, 2, 4 and other variants make that possible. Further, definition 3 is linguistic rather than formally mathematical, and is therefore descriptive rather than prescriptive, so we should not be hidebound by it.
For none of the above: semantic malfunction. Against none of the above: dialogic charity.

The Right Answer

What prompted me to write this post? Like countless primary school students, my five-year-old twin sons – the primary school students mentioned at the start – were recently asked this question in their homework. One plumped for "1" and the other "0", and I have tried to capture and expand on their reasoning above. I think both answers are wholly defensible – and neither is definitively right. So, what should you do if you're asked the question: How many sides does a circle have? In my opinion, the optimal response is to approach the mathematician in your life to write a 3000 word treatise on the topic, which you can then print out and triumphantly deliver to your unfortunate teacher. But failing that, the best approach is to follow the example of Socrates and respond to the question with a counter-question: What do you mean by "side"?

When all's said and done, counting up to zero, or to one, or refusing to answer the question, tells us virtually nothing about the geometry of circles. But there is a lot to be gained by teasing apart familiar notions, dropping or adding extra conditions, challenging our intuition by moving from one context to a slightly different one, and trying to write down precisely what we mean by a particular term in a particular setting. That's what real mathematics is all about.

Footnotes

You could do something else: e.g. pick a starting point P on the circle, from which to measure distance around the circumference. Then declare that the points a rational distance from P are corners and the rest are sides. This has the effect of yielding a countably infinite number of corners and an uncountably infinite number of sides. Or one could stipulate the opposite. This might be a convenient fit for the polygonal limit approach to circles, but I'd struggle to agree that it's easy or obvious enough to be considered "the right answer".

One of my Twitter correspondents was worried about how smooth the curve is. This tombstone is continuously differentiable but not twice so. It would certainly be interesting if many people thought this was a critical matter, and this could probably be tested with an infinitely smooth tombstone built from something like this, although I haven't thought through the details.

We might try to formalise this as follows: in a square (or any polygon), a side has the property that starting from any position in the interior, you can cut the shape straight through your location, so that your chosen side is firmly on one side of the cut. That doesn't work for the two-sided tombstone: any cut will always sever the long side. We could weaken this by saying that to count as a side, there has to be at least one way of slicing through the shape so that the side is on one side of the cut. That would allow us to say that the tombstone has four sides (even though the curved section is not on one side of points in the upper region). For the circle though, its supposed one side is never on one side (so would be ruled out), and only the straight section of a semicircle would count as a side. It's not easy to come up with a rigorous justification that works for both tombstones, but I am thinking more informally in terms of Schelling Points: that is to say, locations which stand out as being special for reasons which may not be easy to predict in advance.

Acknowledgements

Thanks to everyone who participated in or retweeted my polls, or discussed this with me on Twitter.
Magforming the Johnson Solids
Posted by richardelwes on May 18, 2018. Posted in: Geometry.

[Disclaimer: this is not a sponsored post or advert – it is a product purely of my own enthusiasm. But in the interests of full transparency, let me say one thing: for reasons you may come to understand, I developed a deep desire for some Magformers octagons. Although these exist, they are like gold-dust. So I wrote to Magformers UK and asked whether they might sell me six octagons as a special deal, and they very generously replied that they would give me six octagons, which indeed they did, for which I am eternally grateful, and which you can spot in some of the photos below. They feature more prominently in the follow-up post which is [update] here.]

The Platonic solids: the tetrahedron, cube, octahedron, dodecahedron, and icosahedron.

When some kind soul gave my children a set of Magformers – a magnetic construction toy mainly comprising regular polygons – needless to say the first thing I did was steal them for myself and set about building up the collection until I could create the five Platonic solids. The next shapes to move on to are the Archimedean solids. There are 13, of which 10 are realistically buildable (the truncated dodecahedron and truncated icosidodecahedron require decagons which Magformers don't (yet?) make, and the snub dodecahedron requires unfeasibly many triangles (80) and would in any case collapse under its own weight). Here's a sample of three:

A cuboctahedron, rhombicuboctahedron, and truncated icosahedron.

The Grothendieck Song
Posted by richardelwes on January 2, 2015. Posted in: Elwes Elsewhere, Geometry, Music, Nonsense.

There are many songs in the world about love and loss, heartbreak and heart-ache. There are altogether fewer about algebraic geometry in the style of Alexander Grothendieck. Here is my attempt to fill that gap:

Disclaimer: I hope none of this needs saying, but just in case of misinterpretation: No views expressed therein are attributable to any organisation to which I am affiliated. Most of the views expressed therein are not attributable to me either, but are a deliberate exaggeration, for comic effect, of a common initial reaction to one's first meeting with Grothendieck-style algebraic geometry. It is not intended as a serious critique of any mathematician or school of mathematics!

Lyrics

When I was a young boy doing maths in class
I thought I knew it all.
Every test that I took, I was sure to pass.
I felt pride, and there never came a fall.
Up at university, I found what life is for:
A world of mathematics, and all mine to explore.
Learning geometry and logic, I was having a ball.
Until I hit a wall…

For I adore Euler and Erdős,
Élie Cartan and Ramanujan,
Newton and Noether.
But not to sound churlish,
There's one man I cannot understand.

No, I can't get to grips with Grothendieck,
My palms feel sweaty and my knees go weak.
I'm terrified that never will I master the technique
Of Les Éléments de Géométrie Algébrique.

He's a thoughtful and a thorough theory-builder, sans pareil.
But can anybody help me find the secret, s'il vous plaît,
Of this awe-inspiring generality and abstraction?
I have to say it's driving me to total distraction.

For instance…

A Euclidean point is a location in space, and that we can all comprehend.
René Descartes added coordinates for the power and the rigour they lend.
Later came Zariski topology, where a point's a type of algebraic set
Of dimension nought. Well, that's not what I thought.
But it's ok. There's hope for me yet!

But now, and contra all prior belief,
We hear a point's a prime ideal
In a locally ringed space, overlaid with a sheaf.
Professor G, is truly this for real?

No, I can't make head nor tail of Grothendieck,
Or Deligne, or Serre, or any of that clique.
I'll have to learn not to care whenever people speak
Of Les Fondements de la Géométrie Algébrique.

But don't take me for a geometrical fool.
I can do much more than merely prove the cosine rule.
I'll calculate exotic spheres in dimension 29
And a variety of varieties, projective and affine.
I'm comfortable with categories (though not if they're derived),
I'll tile hyperbolic space in dimension 25,
I can compute curvature with the Gauss-Bonnet law
And just love the Leech Lattice in dimension 24.

But algebro-geometric scheming
Leaves me spluttering and screaming.
And in logic too, you may call me absurd,
But I wouldn't know a topos, if trampled by a herd.
I've tried Pursuing Stacks but they vanished out of sight,
I've fought with étale cohomology with all my might.
And Les Dérivateurs. It's 2000 pages long.
I reach halfway through line 3, before it all goes badly wrong.

No, I'll never get to grips with Grothendieck
And I'm frightened that I'm failing as a mathematics geek.
All the same, I can't deny the lure and the mystique
Of Le Séminaire de Géométrie Algébrique.

– Richard Elwes, 2015

Comments are closed on the Youtube page (because obviously) and here (because broken) but open on Google plus, Facebook, and Twitter.

Points and lines
Posted by richardelwes on November 4, 2014. Posted in: Geometry, Maths.

Last week, we Leeds mathematicians were treated to a talk by ~~Professor Green~~ Professor Green with the delightfully elementary title of Points and Lines. It reported on joint work of Ben Green and Terence Tao. Here I'm going to talk about the background to their work. As always, any errors are mine.

The idea is that we will be presented with some points, specifically a finite collection of points in the plane. We want to predict the answer to questions of the form: how many lines are there through exactly n of these points? We want to do this while knowing as little as possible about the details of what we'll be given. Actually, on a first pass, these questions are pretty trivial:

There are guaranteed to be infinitely many lines through zero points, since when you're positioning a line, a finite set of points is very easy to miss. Similarly, there will be infinitely many lines which hit exactly one point, since once you've decided which to hit, you can easily tweak that line to miss the rest. Must there be any lines which hit exactly two points? Not necessarily. If all the points lie on a single straight line (and if there are more than two of them), then any line either goes through 0, 1, or all of them. Arranging the points around a circle illustrates that there also need not be any lines which hit exactly 3 points. The same argument works for n = 4, 5, 6…

So far, so straightforward: we can predict the answer exactly for n = 0, 1 and not at all otherwise. But let's think again: in the case of n = 3, there's obviously nothing special about a circle. There are numerous other curves which would do just as well: a parabola, hyperbola, squircle, catenary, … all of these and many others share the property that any straight line hits them at most twice. However, for the case n = 2, there is something undeniably special about demanding that all our points lie along a single straight line.
So the question arises: is this configuration the only obstacle? Must it be the case that either all the points lie along a line, or we will be able to find a line hitting exactly two of them?

This problem was first posed, so far as we know, by J.J. Sylvester in the back pages of the Educational Times in 1893, under the unassuming title of mathematical question 11851. (Ben Green is the most recent winner of the Royal Society’s Sylvester Medal, for another famous result of his, proved jointly with Tao.) The question was later re-asked by Paul Erdős and then solved in 1940 by Tibor Gallai, who established that the answer was yes: if a set of points do not all lie on a line, then there must be some line through exactly two of them. A later proof by Leroy Kelly is, as Ben put it, a ‘classic of mathematics’, and actually induced an audible gasp from the audience. So let’s see it:

Kelly’s proof (sketch)

We assume that our points don’t all lie along a single line. That means we can definitely find two of them which lie on a line L, along with some other point X lying off L. In fact, there are likely to be numerous such configurations; we choose the one where the perpendicular distance from X to L is the least possible. The claim is then that L contains only those two points. Suppose not. Then there must be two points A and B both lying on L and on the same side of the foot of the perpendicular from X to L (counting a point at the foot itself as lying on either side), and we may take A to be the further of the two from that foot. Now let M be the line XA. This line also contains two of our points (obviously), but the perpendicular distance from B to M is less than that from X to L, which contradicts our choice of X and L. QED

Clever, right?

Anyway, for reasons unexplained, lines which pass through exactly two of our specified collection of points are known as ordinary lines. The Sylvester-Gallai theorem above therefore says that (so long as our points are not collinear) an ordinary line will always exist. But the next question is: how many must there be?

In most cases there will be lots: if you try this for a random scattering of \(m\) points, then in all likelihood, whenever you draw a line through two points it will miss all the others. Thus there are around \(\tfrac{1}{2}m^2\) ordinary lines. But it is not hard to come up with sets with fewer. For instance, take \(m-1\) points all lying along a line, and add in one extra point \(X\) off it. Then the only ordinary lines are those which connect \(X\) to another point, giving \(m-1\) many. A cunning construction by Károly Böröczky brings this down even further: a set of \(m\) points with only \(\tfrac{1}{2}m\) many ordinary lines.

Now we come to the theorem of Green and Tao. In their paper, they have shown that Böröczky’s examples are as good as it gets. For large enough values of \(m\), they prove that every non-collinear set of \(m\) points must have at least \(\tfrac{1}{2}m\) many ordinary lines. How large is ‘large enough’? The bound they establish is in the vicinity of \(10^{10^{10}}\), but the true answer may be as low as 14. For smaller sets, it is possible to do better: for example, there is a set of 7 points with only 3 ordinary lines.

And how is it proved? Well, I’m not going to go into details, but the key step is the unexpected (to me) connection of the property ‘has few ordinary lines’ with the property ‘can be covered with a single cubic curve’.
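Before the projective description of Böröczky’s construction below, here is a quick computational check of two of the counts above, reusing the line_counts helper sketched earlier (again my own aside, not from the post). The second configuration, a triangle together with the midpoints of its sides and its centroid, is one standard way to realise 7 points with only 3 ordinary lines; I am not claiming it is the exact example pictured in the original post.

```python
# (i) m-1 collinear points plus one point off the line: exactly m-1 ordinary lines.
m = 10
near_collinear = [(i, 0) for i in range(m - 1)] + [(0, 1)]
print(line_counts(near_collinear))   # Counter({2: 9, 9: 1})

# (ii) Seven points with only three ordinary lines: the vertices of a triangle,
# the midpoints of its sides, and its centroid (integer coordinates keep the
# collinearity tests exact).
seven = [(0, 0), (6, 0), (0, 6),   # vertices
         (3, 0), (3, 3), (0, 3),   # midpoints of the sides
         (2, 2)]                   # centroid
print(line_counts(seven))          # Counter({3: 6, 2: 3})
```

In the second example the six 3-point lines are the three sides and the three medians, and the three ordinary lines are the sides of the medial triangle.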
For the cognoscenti: Böröczky’s construction achieves its low count of ordinary lines by moving the action to the real projective plane, arranging \(m\) points symmetrically around a circle, and placing another \(m\) on the line at infinity, at the positions corresponding to the directions determined by pairs of points from the first set. Then the only ordinary lines are the \(m\) many tangents to the circle at those points. Finally, since no-one mentioned anything about lines at infinity being permitted, the whole set-up needs to be transported back to the plain old Euclidean plane, which can be achieved via a projective transformation. This however will distort the circle into an ellipse. See page 11 of their paper for a picture.

Constructible Numbers
Posted by richardelwes on September 13, 2013
Posted in: Bookery, Geometry, Number theory.

This blog-post is an extract from my book Maths in 100 Key Breakthroughs.

Newton by William Blake (1795)

Constructible Numbers

A sure route to mathematical fame is to resolve a problem that has stood open for centuries, defying the greatest minds of previous generations. In 1837, Pierre Wantzel’s seminal analysis of constructible numbers was enough to settle not just one, but an entire slew of the most famous problems in the subject, namely those relating to ruler-and-compass constructions. As with so much in the history of mathematics, the topic had its origins in the empire of ancient Greece. The geometers of that period were interested not only in contemplating shapes in the abstract, but also in creating them physically. Initially, this was for artistic and architectural purposes, but later for the sheer challenge it posed. In time, mathematicians came to understand that the obstacles they encountered in these ruler-and-compass constructions brought with them a great deal of mathematical insight. Nowhere was this more true than in the ancient enigma of squaring the circle, and what that revealed about the number \(\pi\).

Classical problems

Greek geometers decided on a set of simple rules for building shapes, using only the simplest possible tools: a ruler and pair of compasses. The ruler is unmarked, so it can only be used for drawing straight lines, not for measuring length (therefore these are sometimes called straight-edge-and-compass constructions). The compass is used to draw circles, but it may only be set to a length that has already been constructed. Today’s schoolchildren still learn how to use these devices to divide a segment of straight line into two equal halves and to bisect a given angle. These were two of the very first ruler-and-compass constructions. A more sophisticated technique allows a line to be trisected, that is, divided into three equal parts. What of trisecting an angle, though? Various approximate methods were discovered, which were accurate enough for most practical purposes, but no one could find a method which worked exactly. This proved a mystery, and gave the first hint that there was real depth beneath this question. But what does it mean if one task can be carried out by ruler and compass and another cannot?

The most famous of the ruler-and-compass problems, and indeed one of the most celebrated questions in mathematics, is that of squaring the circle. The question is this: given a circle, is it possible to create, by ruler and compass, a square which has exactly the same area? At the heart of this question lies the number \(\pi\) (see page 54).
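An aside of my own, not part of the book extract: the reduction from the area question to a length question, which the next paragraph states, runs as follows (assuming the circle is given together with a unit segment).

\[
\text{A circle of radius } 1 \text{ has area } \pi,
\qquad\text{so a square of the same area has side length } \sqrt{\pi}.
\]

Since constructible lengths are closed under multiplication and under taking square roots, a segment of length \(\sqrt{\pi}\) can be constructed from a unit segment exactly when one of length \(\pi\) can. So squaring the circle by ruler and compass amounts to constructing a segment of length \(\pi\).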
The problem ultimately reduces to this: given a line 1 unit long, is it possible to construct by ruler and compass another line exactly \(\pi\) units long?

Another classical problem was that of doubling the cube. This problem had its origins in a legend from around 430 BC. To overcome a terrible plague, the citizens of the island of Delos sought help from the Oracle of Apollo. They were instructed to build a new altar exactly twice the size of the original. At first they thought it should be easy: it could be done by doubling the length of each side. But that process leads to the volume of the altar increasing by a factor of 8 (since that is the number of smaller cubes that can fit inside the new one). To produce a cube whose volume is double that of the original, the sides need to be increased by a factor of \(\sqrt[3]{2}\) (that is, the cube root of 2, just as 2 is itself the cube root of 8). The question of doubling the cube therefore reduces to this: given a line segment 1 unit long, is it possible to construct another exactly \(\sqrt[3]{2}\) units long?

Wantzel’s deconstruction

Working in the turbulent setting of France in the early 19th century, Pierre Wantzel turned these ancient questions over in his mind. He recognized that the form of many ruler-and-compass questions is the same. The key to them was this: given a line 1 unit long, which other lengths can be constructed? And which cannot? If a line of length \(x\) can be constructed, then Wantzel deemed \(x\) a constructible number. Setting aside the geometrical origins of these problems, he devoted himself to studying the algebra of constructible numbers. Some things were obvious: for example, if \(a\) and \(b\) are constructible, then so must be \(a + b\), \(a - b\), \(a \times b\), and \(a \div b\). But these operations do not exhaust the range of constructible numbers; Wantzel realized that it is also possible to construct square roots, such as \(\sqrt{a}\). His great triumph came in 1837, when he showed that everything constructible by ruler and compass must boil down to some combination of addition, subtraction, multiplication, division and square roots. Since \(\sqrt[3]{2}\) is a cube root, and cannot be obtained via these algebraic operations, it followed immediately that the Delians’ ambition to double the cube was unattainable. A similar line of thought revealed the impossibility of trisecting an angle. As for the greatest problem of all, squaring the circle, the final piece didn’t fall into place until 1882, when Ferdinand von Lindemann proved that \(\pi\) is a transcendental number (see page 197). Then Wantzel’s work immediately implied the non-constructibility of \(\pi\), and the impossibility of squaring the circle was finally established.

In praise of Pick’s theorem
Posted by richardelwes on September 8, 2012
Posted in: Education, Elwes Elsewhere, Geometry. Leave a comment

Should Pick’s theorem be on the A-level maths syllabus? In a blogpost at the De Morgan Journal, I argue that it should.

Linear Programming in New Scientist
Posted by richardelwes on August 10, 2012
Posted in: Complexity, Elwes Elsewhere, Geometry, Maths. Leave a comment

I’ve an article in the current edition of the New Scientist, about linear programming, convex polytopes, and Santos’ recent refutation of the Hirsch conjecture. It’s available online here (£) or in a newsagent near you, presuming you live somewhere where the New Scientist is sold…

Pick’s Theorem & Ehrhart Polynomials
Posted by richardelwes on February 1, 2012
Posted in: Geometry, Maths, Uncategorised.
2 Comments

Pick’s theorem is a simple, beautiful, and useful fact of elementary geometry. It should be much better known than it is! In fact, I have half a mind that it should be on the A-level (high school) syllabus. Less famous – but equally wonderful – are Ehrhart polynomials, which are what you get when you try to lift Pick’s theorem into higher dimensions. Though geometrically intuitive, they quickly lead into deep mathematical waters. They’re also valued as tools in optimisation problems and in other areas of computer science (I’m told). This afternoon I gave a – hopefully fairly accessible – talk on these topics. The slides are available here. (Update: PDF of slides here.)

Webinar playback: some families of polyhedra
Posted by richardelwes on May 16, 2011
Posted in: Bloggery, Geometry, Maths, Richard Elsewhere, Technology. 2 Comments

On Saturday, I gave my first ever webinar, on the topic of “Some families of polyhedra”. And if you don’t know your tetrahemihexahedron from your tridiminished rhombicosidodecahedron, the good news is that the whole thing is available to see and hear here. It’s just over an hour long, but of course one advantage the recorded version has over the live one is the ability to fast forward, pause, and rewind. It was hosted over at Mathfuture, by Maria Droujkova.

My aim in the talk was to give a very brisk overview of how several different families of wonderful, complex shapes all arise from juggling a very small number of simple criteria. I’m separately uploading the slides for my presentation here [pdf]. They are quite rough and ready, without any detailed explanations, or even any pictures – I used Stella for those. But they do sketch the central story (which I also covered in this blogpost). I may spruce them up one day, if I give the same talk again.

I found the whole thing a thoroughly enjoyable experience, and the Elluminate technology worked extremely smoothly. The format allowed me to talk while sharing my whole desktop with the audience, with the optimal result of people being able to hear my voice and watch everything I was doing, without having to endure looking at my face. And we could all do it from the comfort of our living rooms! This is the sort of thing the internet was intended for, isn’t it?

Bad Packer
Posted by richardelwes on October 28, 2010
Posted in: Geometry, Maths.

Circle packing is a classical topic in discrete geometry. As Axel Thue and László Fejes Tóth showed, if you want to fit as many identical circular coins on a table as possible (all sitting side by side, no piling up or overlapping), the best you can achieve is for around 90.7% of the table to be covered. This is done by arranging the coins along a hexagonal lattice. That was an interesting result, and can be lifted into higher dimensions in the even subtler science of sphere and hypersphere packing.

That’s fine, but we can pose the same problem using coins which are not circular. Now here is an interesting question: which shape is the worst packer? The question is only sensible for convex shapes, and we further assume that the shape is centrally symmetric. Then the answer is conjectured to be the smoothed octagon, with a maximum packing density of around 90.2%. The smoothing is done by rounding off each corner with a hyperbola which is tangent to the two sides meeting there, and which asymptotically approaches the two sides beyond those.
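For a quick numerical cross-check of those two percentages (my aside, not from the post): the hexagonal circle-packing density is \(\pi/\sqrt{12}\), and the closed form usually quoted for the smoothed octagon’s packing density is \((8 - 4\sqrt{2} - \ln 2)/(2\sqrt{2} - 1)\); treat that formula as my assumption rather than something stated in the post.

```python
# Evaluate the two packing densities quoted above.
import math

circle_density = math.pi / math.sqrt(12)          # best packing of circles (hexagonal lattice)
octagon_density = (8 - 4 * math.sqrt(2) - math.log(2)) / (2 * math.sqrt(2) - 1)

print(f"circles:          {circle_density:.4f}")  # ~0.9069, i.e. about 90.7%
print(f"smoothed octagon: {octagon_density:.4f}") # ~0.9024, i.e. about 90.2%
```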
[Image from Wikipedia]
distribution of the ratio of two gamma random variables [duplicate]
Cross Validated (stats.stackexchange.com) – asked Apr 15, 2016 by user9292, edited Feb 10, 2018 by kjetil b halvorsen. Closed as a duplicate of “Ratio of Gamma distributed variables with different parameters”.
Tags: probability, distributions, mathematical-statistics, gamma-distribution

Assume that \(X \sim \mathrm{Ga}(\alpha_1, \beta_1)\) and \(Y \sim \mathrm{Ga}(\alpha_2, \beta_2)\). Define \(Z = X/Y\). What is the distribution of \(Z\)?

Comments:

whuber: See en.wikipedia.org/wiki/…. (The \(\chi^2\) distributions referred to there are scaled Gamma distributions.) Obviously the scale parameters only scale \(Z\) by predetermined amounts, so there’s no trouble with them.

Glen_b: This reads like a routine textbook-style question. Is this work for some class (whether assigned work, a practice question, a past exam, …)?

user9292: @Glen_b No, this is not for homework or an exam.

overwhelmed: @Glen_b What if the two gammas are not independent? Does their ratio still follow a chi-square or F distribution?

Glen_b: Generally not.

2 Answers
Answer (score 20), by Greenparker (answered Apr 15, 2016; edited Feb 10, 2018):

Since \(\beta_1 X \sim \mathrm{Gamma}(\alpha_1, 1)\) and \(\beta_2 Y \sim \mathrm{Gamma}(\alpha_2, 1)\), then according to Wikipedia

\[
\frac{\beta_1 X}{\beta_2 Y} \sim \text{Beta prime}(\alpha_1, \alpha_2),
\]

written in shorthand as \(\beta'(\alpha_1, \alpha_2)\).

The Wikipedia page also describes the density of the general Beta-prime distribution \(\beta'(\alpha_1, \alpha_2, p, q)\) as

\[
f(x) = \frac{p \left(\tfrac{x}{q}\right)^{\alpha_1 p - 1} \left(1 + \left(\tfrac{x}{q}\right)^{p}\right)^{-\alpha_1 - \alpha_2}}{q \, B(\alpha_1, \alpha_2)}.
\]

The Beta-prime distribution is the special case of the general Beta-prime distribution with \(p = q = 1\). In addition, the page says that for a constant \(k\),

\[
k \, \beta'(\alpha_1, \alpha_2, p, q) = \beta'(\alpha_1, \alpha_2, p, kq).
\]

Thus

\[
\frac{X}{Y} \sim \beta'\!\left(\alpha_1, \alpha_2, 1, \frac{\beta_2}{\beta_1}\right).
\]

Answer (score 13), by Svein Olav Nyberg (answered Jun 15, 2016):

Greenparker’s reply says it all, but you can also note that \(\Gamma\) distributions can be expressed as scaled \(\chi^2\) distributions, and that the properly scaled ratio of two \(\chi^2\) random variables follows an \(F\) distribution. Following the parametrization and scaling rule above, if \(X \sim \Gamma(\alpha, \beta)\), then

\[
2 \beta X \sim \chi^2_{2\alpha}.
\]

This means that for your two Gamma-distributed variables \(X\) and \(Y\),

\[
\frac{\alpha_2 \beta_1 X}{\alpha_1 \beta_2 Y} \sim F(2\alpha_1, 2\alpha_2),
\]

where \(F\) is the \(F\) distribution. You could also say that \(X/Y\) follows a scaled \(F\) distribution.
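A quick Monte Carlo sanity check of both answers (my own sketch, not part of the thread; the parameter values are arbitrary, and \(\beta\) is treated as a rate parameter, matching the convention in which \(\beta X \sim \mathrm{Gamma}(\alpha, 1)\)):

```python
# Simulate X ~ Ga(a1, rate b1) and Y ~ Ga(a2, rate b2), then compare X/Y with
# the scaled beta-prime claim (answer 1) and the scaled-F claim (answer 2).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a1, b1, a2, b2 = 2.5, 1.2, 4.0, 0.7          # arbitrary test values
n = 200_000

x = rng.gamma(shape=a1, scale=1 / b1, size=n)
y = rng.gamma(shape=a2, scale=1 / b2, size=n)
z = x / y

# Answer 1: Z should follow a beta-prime(a1, a2) distribution scaled by b2/b1.
bp = stats.betaprime(a1, a2, scale=b2 / b1)
print(stats.kstest(z, bp.cdf))                # large p-value => consistent

# Answer 2: (a2*b1*X)/(a1*b2*Y) should follow F(2*a1, 2*a2).
w = (a2 * b1 * x) / (a1 * b2 * y)
print(stats.kstest(w, stats.f(2 * a1, 2 * a2).cdf))
```

Both Kolmogorov-Smirnov tests should return unremarkable p-values when the claimed distributions are correct; a tiny p-value would indicate a mismatch, for example from mixing up the rate and scale parametrizations of the Gamma.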