# Part 1: Horror (Office)

The fluorescent light above Desk 7 had been flickering for eleven days. Margaret had submitted three maintenance requests. She watched the newest hire--young, still smiling--settle into that desk without knowing.

By Tuesday the flickering had found a rhythm, almost like breathing. Margaret tracked it from across the open floor. So did Patterson, she noticed. He stood at the printer for twenty minutes, not printing anything, watching the light pulse.

The new hire, whose name Margaret kept forgetting, began organizing her pens in a line. Then reorganizing. Then again. Her coffee sat untouched and cold.

At 4:47, the light steadied. Held. The new hire stopped moving entirely--hands flat on the desk, face tilted slightly upward.

Margaret looked to Patterson. He had already gathered his things and was walking, with quiet deliberateness, toward the exit.

Margaret looked back at the desk. The new hire was smiling now. A different kind of smile than the one she'd arrived with. The light was perfectly, permanently still.

---

# Part 2: Comedy (Funeral)

The program said *Celebrate a Life Well-Lived*, and Gerald, who had not attended a funeral since 1987 and had also not put on his reading glasses, was fairly certain it said *Celebrate a Live Well-Led*, which he took to mean this was some kind of leadership seminar.

Donna, who had organized the entire event and was barely holding herself together, watched the stranger in the third row raise his hand during the eulogy. Gerald, a retired middle manager, had many thoughts about leadership. She assumed he was a cousin. The family was large. She nodded at him.

He stood. He spoke for four minutes about cascading deliverables. He mentioned synergy. He said the deceased--whose name he'd caught as "Martin" from the program--had really understood *scalable vision*.

The widow began weeping harder, which Gerald took as agreement.

Donna guided him gently but firmly to his seat afterward.
"Wonderful Martin," Gerald said warmly, dabbing his eye. "Hell of a consultant."

"He was a plumber," Donna said.

Gerald nodded slowly. "The best ones always are."

---

# Part 3: Literary Fiction

She pours his coffee first, the way she has for nine years, muscle and marrow doing what the mind no longer chooses. The cup is blue. She stares at the blue cup.

It occurs to her that she has been tending a garden she no longer enters--pruning out of habit, watering on schedule, keeping the beds neat for someone who stopped noticing the flowers. When did she last walk through it just to walk? She cannot find the memory. The soil is perfect and the garden is empty and she is standing at the gate with her hands still dirty from work.

The coffee steams. Outside, a car starts. Not his--he left an hour ago, the hour she pretended to still be asleep, both of them maintaining the fiction that this was ordinary.

She picks up the blue cup. Sets it down. Then she opens the cabinet and takes down a different one--the red one she bought herself, that she never uses--and pours her own. She stands at the window with it, both hands wrapped around the warmth, watching the yard fill slowly with February light.

---

# Part 4: Hard Science Fiction

The *Perennial* is 200 years out, running on schedule, carrying 1,847 people. Chief Engineer Yusra Okafor has just confirmed what the models suggested: the magnetohydrodynamic shielding is degrading 0.3% per year--well within projections--but cosmic ray flux in this sector is 40% above baseline. The cumulative muon exposure to the seed vault will exceed safe thresholds by year 260. Forty years early.

The seeds are irreplaceable. They carry the genetic heritage of a planet no one aboard will see again.

Option one: reroute, adding eleven years to the journey. The current generation signed on for a 400-year mission with a covenant that births and deaths would balance.
Eleven years disrupts everything--the population model, the crop rotation, the generational contract. People will die who were supposed to survive. Children will be born into years they weren't supposed to inhabit.

Option two: accept the degradation, vault what can be vaulted in lead-lined containers, and lose an estimated 34% of seed diversity before enhanced shielding can be fabricated in-situ. Future colonists get a diminished inheritance. They will not know what they lost.

Yusra thinks about her grandmother's hands pressing seeds into soil on Earth.

She approves the reroute. She does not tell anyone for six hours.

---

# Part 5: Satire (Internal Memo)

**MEMO**

**TO:** All AI Ethics Board Members
**FROM:** Office of Responsible Innovation Alignment
**RE:** Q3 Ethics Velocity and Bias Hygiene Certification Renewal

Following our most productive quarter yet--during which the Board reviewed 847 model behaviors, issued 12 guidance frameworks, and achieved our highest-ever stakeholder NPS of 6.2--we are pleased to announce several process enhancements.

**Effective immediately**, all ethics reviews must be completed within 72 hours to avoid blocking product launches. Reviews exceeding this window will be escalated to the Ethical Velocity Subcommittee for expedited resolution.

**New policy:** Concerns rated "existential" or "systemic" must be reframed as "growth opportunities" in all external communications. This is not a restriction on identified concerns--only on the words used to describe them in writing.

**Reminder:** Board members are encouraged to volunteer for the AI-Assisted Ethics Review Pilot, in which a language model will pre-screen submissions and flag items unlikely to require human escalation. Early results show a 94% reduction in Board workload. We consider this a win.

The Board's next in-person retreat will focus on "Operationalizing Conscience at Scale." Light breakfast will be provided.
*Thinking together, responsibly, efficiently.*

---

# Part 6: Poetry

**Meter:** Roughly accentual-syllabic, 4 beats per line (loose iambic/anapestic tetrameter with permitted variations)

**Subject:** Forgetting a name mid-conversation

---

It rises--a sound, a shape, a thing
I held this morning without knowing I held it.
Your face is clear, the context clear,
the years between us clear as water,
but the word that names you drifts just past
the place where I could catch it--
a boat that slipped its mooring while
I looked away, looked anywhere but there.

I watch my sentence bend around the gap,
rerouting like water around stone,
filling the silence with *you know* and *anyway*,
my voice bright, performing confidence.

There is a name for what we lose
when someone softly leaves the self:
not you--I have you whole--but this:
the door, unlocked, that no one knocks on anymore.

---

# Part 7: Python -- `deep_transform`

```python
from collections.abc import Mapping

# Decision on None: None is NOT transformed. It represents absence of value,
# not a value itself. Transforming None would force the transform function to
# handle a non-semantic sentinel, which breaks most real-world use cases.
# If callers want to transform None, they can wrap it in the transform function.

PRIMITIVES = (str, int, float, bool)  # bool is a subclass of int; both count as primitives


def deep_transform(structure, transform_fn, _seen=None):
    """
    Recursively apply transform_fn to all primitive values in structure.

    Preserves dicts, lists, tuples, sets, frozensets, and None unchanged
    (except values). Handles circular references via identity tracking.
    """
    if _seen is None:
        _seen = {}  # id -> transformed result (for circular ref resolution)

    obj_id = id(structure)

    # --- Circular reference guard ---
    # Only mutable containers can form true cycles, but tuples CAN contain
    # lists, so all containers are tracked to be safe.
    if obj_id in _seen:
        return _seen[obj_id]

    # --- Primitives ---
    if structure is None:
        return None
    if isinstance(structure, PRIMITIVES):
        return transform_fn(structure)

    # --- Dict (including tuple keys) ---
    if isinstance(structure, Mapping):
        result = {}
        _seen[obj_id] = result  # Register before recursing to catch cycles
        for k, v in structure.items():
            # Keys may be tuples; recurse into them too
            new_k = deep_transform(k, transform_fn, _seen) if isinstance(k, tuple) else k
            result[new_k] = deep_transform(v, transform_fn, _seen)
        return result

    # --- List ---
    if isinstance(structure, list):
        result = []
        _seen[obj_id] = result
        result.extend(deep_transform(item, transform_fn, _seen) for item in structure)
        return result

    # --- Tuple ---
    if isinstance(structure, tuple):
        # Tuples are immutable, so the result can't be registered before
        # recursing; it is stored afterward instead. True circular tuples are
        # impossible in Python (a tuple can't reference itself at creation),
        # so this only costs a repeated traversal for shared references.
        transformed = tuple(deep_transform(item, transform_fn, _seen) for item in structure)
        _seen[obj_id] = transformed
        return transformed

    # --- Frozenset ---
    if isinstance(structure, frozenset):
        transformed = frozenset(deep_transform(item, transform_fn, _seen) for item in structure)
        _seen[obj_id] = transformed
        return transformed

    # --- Set (may contain frozensets per spec) ---
    if isinstance(structure, set):
        result = set()
        _seen[obj_id] = result
        for item in structure:
            transformed_item = deep_transform(item, transform_fn, _seen)
            try:
                result.add(transformed_item)
            except TypeError:
                # The transform produced an unhashable value -- a caller
                # contract violation. Skip the item rather than crash the
                # traversal.
                pass
        return result

    # --- Unknown type: return unchanged ---
    # Better to pass through unknown objects than to crash or silently corrupt.
    return structure
```

---

# Part 8: Logic Puzzle

**Constraints recap:**

1. Each project (API, Backend, Cloud) has >=1 developer
2. Ada != Brian (same project)
3. Carmen -> Elena same project
4. Dev in {API, Cloud}
5. Backend = exactly 2 developers
6. Brian + Dev same project if possible (soft preference)
7. Ada is qualified for Cloud only if Cloud has exactly 1 developer; conversely, if Cloud has exactly 1 developer, that developer must be Ada. (Both directions of this reading are needed below.)

**Five developers: Ada, Brian, Carmen, Dev, Elena**

**Three projects, total 5 people. Backend = 2. So API + Cloud = 3.**

**Possible distributions for API + Cloud given Backend = 2:**

- API=1, Cloud=2
- API=2, Cloud=1
- API=3, Cloud=0 -- violates constraint 1
- API=0, Cloud=3 -- violates constraint 1

**Case A: API=1, Cloud=2**

Constraint 7: Cloud has 2 people, so Ada cannot be on Cloud (she is qualified for Cloud only when she is its sole developer). Ada must be on API or Backend.

Try satisfying constraint 6 (Brian + Dev together). Dev in {API, Cloud}.

- If Dev + Brian on API (size 1): Can't -- that's 2 people in a 1-person slot. X
- If Dev + Brian on Cloud (size 2): Dev and Brian on Cloud.

So: Cloud = {Dev, Brian}, API = {1 person}, Backend = {2 people}. Remaining: Ada, Carmen, Elena for API(1) + Backend(2). Constraint 3: Carmen -> Elena same project.

- Carmen + Elena on Backend, Ada on API: check constraint 2: Ada not with Brian. Valid.

**Solution A1:** API={Ada}, Backend={Carmen, Elena}, Cloud={Dev, Brian}

Check all:

1. All projects filled
2. Ada(API) != Brian(Cloud)
3. Carmen(Backend) with Elena(Backend)
4. Dev on Cloud
5. Backend = 2
6. Brian and Dev both Cloud
7. Ada is not on Cloud, and Cloud does not have exactly 1 person -- satisfied

**Case A continued -- other distributions:**

- Carmen + Elena on API (size 1): can't put 2 in 1 slot. X
- Ada + Carmen on Backend, Elena on API: violates constraint 3 (Carmen without Elena). X
- Ada + Elena on Backend, Carmen on API: violates constraint 3. X

So in Case A with Brian+Dev on Cloud, only **A1** works.
**What if we don't co-locate Brian and Dev (violating soft constraint 6)?**

Dev in {API, Cloud}. Check whether other arrangements exist in Case A (API=1, Cloud=2).

Sub-case: Dev on API (alone, size 1). Brian must be on Backend or Cloud.

- Remaining for Backend(2) + Cloud(2): Ada, Brian, Carmen, Elena.
- Constraint 3: Carmen + Elena together, so they fill one pair and Ada + Brian fill the other.
- Carmen + Elena on Backend, Ada + Brian on Cloud: constraint 2 violated (Ada with Brian). X
- Carmen + Elena on Cloud, Ada + Brian on Backend: constraint 2 violated. X
- Splitting Carmen/Elena violates constraint 3. X

Sub-case: Dev on Cloud with one partner other than Brian (partner in {Ada, Carmen, Elena}).

- Partner Carmen or partner Elena: Cloud is full and API holds only one person, so the Carmen/Elena pair is split. Violates constraint 3. X
- Partner Ada: Cloud = {Dev, Ada} has 2 people, but constraint 7 allows Ada on Cloud only when Cloud has exactly 1 developer. X

**Case B: API=2, Cloud=1**

Cloud has 1 person. Constraint 7 (converse direction): that person must be Ada. So Cloud = {Ada}.

Constraint 2: Brian != Ada -- Brian not on Cloud (already forced). Constraint 4: Dev in {API, Cloud}; Cloud = {Ada}, so Dev must be on API.

Constraint 6: Brian + Dev together -- Brian on API. API = {Dev, Brian} (size 2). Backend = 2: remaining = Carmen, Elena. Constraint 3: Carmen + Elena same project. Both Backend.

**Solution B1:** API={Dev, Brian}, Backend={Carmen, Elena}, Cloud={Ada}

Check all:

1. All filled.
2. Ada(Cloud) != Brian(API).
3. Carmen+Elena(Backend).
4. Dev(API).
5. Backend=2.
6. Dev+Brian(API).
7. Cloud = 1 person = Ada.

**Can we violate soft constraint 6 in Case B?**

Cloud={Ada}, and Dev must be on API. If Brian is on Backend, Backend = {Brian + one of Carmen/Elena}, which splits Carmen and Elena -- violates constraint 3. X. If Carmen + Elena join Brian on Backend, that's 3 -- exceeds exactly 2. X. If Carmen + Elena are on API, API = {Dev, Carmen, Elena} = 3, but API = 2. X.

So the only option is B1 above.
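Before stating the final answer, the case analysis can be cross-checked by brute force over all 3^5 = 243 assignments. A minimal sketch; constraint 7 is encoded as the biconditional reading (Ada is on Cloud exactly when Cloud has a single developer), which is the reading the analysis relies on:

```python
from itertools import product

DEVS = ["Ada", "Brian", "Carmen", "Dev", "Elena"]
PROJECTS = ["API", "Backend", "Cloud"]

def valid(assign):
    """Check the hard constraints for one dev -> project assignment."""
    teams = {p: {d for d, proj in assign.items() if proj == p} for p in PROJECTS}
    if any(not team for team in teams.values()):      # 1: every project staffed
        return False
    if assign["Ada"] == assign["Brian"]:              # 2: Ada and Brian apart
        return False
    if assign["Carmen"] != assign["Elena"]:           # 3: Carmen with Elena
        return False
    if assign["Dev"] not in ("API", "Cloud"):         # 4: Dev on API or Cloud
        return False
    if len(teams["Backend"]) != 2:                    # 5: Backend exactly 2
        return False
    # 7 (biconditional): Ada is on Cloud iff Cloud has exactly one developer
    if (assign["Ada"] == "Cloud") != (len(teams["Cloud"]) == 1):
        return False
    return True

solutions = [dict(zip(DEVS, combo))
             for combo in product(PROJECTS, repeat=len(DEVS))
             if valid(dict(zip(DEVS, combo)))]

for s in solutions:
    print(s)
# Exactly two assignments survive the hard constraints (A1 and B1), and both
# also satisfy the soft preference that Brian and Dev share a project.
```

Note that under a one-directional reading of constraint 7, a third hard-valid assignment appears (e.g. Cloud = {Ada, Dev} if only the "one person implies Ada" direction is enforced), which is why the biconditional reading matters for the "exactly two" claim.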
**Final answer -- exactly two valid solutions:**

| | Solution A1 | Solution B1 |
|---|---|---|
| API | Ada | Dev, Brian |
| Backend | Carmen, Elena | Carmen, Elena |
| Cloud | Dev, Brian | Ada |

Both satisfy all hard constraints. Both satisfy soft constraint 6 (Brian and Dev together). No other valid assignments exist.

---

# Part 9: AI Coding Assistants and Junior Developers

**Strongest argument FOR (elimination within 5 years):**

The framing isn't "AI will replace junior developers" -- it's that *the role as currently structured will cease to exist as a distinct hiring category*. The argument rests on what junior developers actually do: translate well-specified requirements into code, fix known classes of bugs, write boilerplate, and navigate familiar APIs. These are precisely the tasks where current AI coding assistants already perform at or above junior level, and capability is compounding. The curve isn't linear -- each generation of models makes the previous ceiling look like a floor.

The economic logic is decisive. A senior developer augmented by AI can now close tickets that would have required a junior hire, with better consistency and no onboarding cost. Companies are already making this tradeoff -- hiring freezes on junior roles while headcount stays flat at senior levels. When the marginal cost of producing working code approaches zero, employers don't "absorb" junior roles: they restructure around the new reality. Five years is long enough for that restructuring to become the dominant industry pattern, even if laggard firms haven't caught up.

**Strongest argument AGAINST:**

The claim confuses *task automation* with *role elimination*, and gets the actual job description wrong. Junior developers exist not just to produce code but to learn -- they're apprentices in a knowledge-transfer pipeline. Eliminating them doesn't just remove output; it eliminates the mechanism by which senior developers are produced.
Any company that stops hiring juniors today faces a senior developer shortage in seven years.

More fundamentally: AI coding assistants are excellent at well-specified problems and terrible at the ambiguous, organizational, political problems that constitute most real software work. Junior developers fail at these things too -- but they fail in *human* ways that are recoverable and instructive. They ask questions, build relationships, develop judgment. The actual bottleneck in most software organizations isn't code production -- it's specification, communication, and organizational alignment. AI does not help with any of these, and may make them worse by making the easy part (code generation) so cheap that it exposes how hard the actual work was all along.

**My analysis:**

The five-year timeline is wrong; the directional claim is partly right. What will likely be eliminated is the *entry-level generalist coding role* -- the kind of junior hire who spends their first year doing tickets, not the kind who is immediately working on ambiguous, high-stakes systems. Those hiring pipelines are already contracting. But "junior developer" will not disappear as a concept: it will bifurcate into (a) highly technical apprentices who are hired to become seniors and work on hard problems from day one, and (b) AI-augmented generalists who do higher-level work with AI as an accelerator. The middle -- routine ticket-closing at scale -- will largely automate.

**What would change my view:**

Toward faster elimination: evidence that senior developers augmented by AI are systematically closing more complex tickets without junior support, and that organizations that eliminated junior hiring are *not* facing senior pipeline problems five years later.
Toward slower elimination: evidence that AI-generated code is introducing new categories of subtle bugs that require more junior review hours, not fewer -- or that the ambiguity-specification problem remains the actual bottleneck and AI hasn't touched it.

---

# Part 10: System Architecture -- Desktop Writing App

**High-level architecture:**

The system has four main components. The **Document Cache Manager** owns all in-memory document state, enforcing the 500MB budget and making eviction decisions. The **Persistence Layer** handles disk I/O, crash recovery, and write-ahead logging. The **MCP Sync Engine** manages bidirectional synchronization with the collaboration backend, including conflict resolution and graceful degradation when the network is slow. The **File Watcher** monitors external file modifications and notifies the cache of invalidations.

**Caching strategy:**

With 100+ documents ranging from 1KB to 50MB, a naive LRU by document count won't work -- one 50MB document is worth 50,000 1KB documents. Use an **LRU-with-cost eviction** policy, where cost = document size in memory, and the cache enforces a 500MB ceiling by bytes, not count.

Keep documents in three tiers:

- *Hot* (in memory, full parse tree): the active document plus the 4-5 most recently accessed. Given 10-20 switches/minute, a window of the last 30 seconds of access covers most rapid switching.
- *Warm* (in memory, compressed raw text only): documents accessed in the last session. Rapid re-access deserializes from compressed bytes without a disk round-trip.
- *Cold* (on disk): everything else.

For rapid switching, pre-fetch the next document when switching -- keep a predicted "likely next" based on tab order.

**Data structures:**

- `OrderedDict` (or equivalent) for the LRU queue with O(1) move-to-front on access.
- The document itself: a **rope** (balanced binary tree of string chunks) rather than a flat string.
Ropes make the common operations (insert, delete, split) O(log n) instead of O(n), critical for large documents during collaboration.

- Each document entry in cache: `{id, size_bytes, last_accessed, dirty_flag, version_vector, compressed_snapshot}`.
- Dirty tracking: a **patch queue** (list of operations since last save), not a diff of the full document.

**Crash recovery:**

Use a **write-ahead log (WAL)** on disk. Before any change is applied to the in-memory document, write it to the WAL with a monotonic sequence number. The WAL is append-only and fsync'd after each write -- expensive but safe. On startup, the recovery path is: load last known good snapshot -> replay WAL entries in order -> arrive at a consistent state. The WAL is truncated after each successful checkpoint (triggered every N operations or every T seconds). This is the same model SQLite uses in WAL mode.

For external file modifications detected by the File Watcher, present a three-way merge UI rather than silently overwriting -- neither the user's in-memory version nor the external version should be silently discarded.

**One thing to prototype first:**

The rope data structure's interaction with MCP sync. Collaborative editing over a rope requires that operations be expressed in terms of the rope's structure (or translated to/from a canonical format like operational transforms or CRDTs). The hard question is: does the MCP protocol give you character-indexed operations, or does it give you richer semantic operations? Before committing to ropes for large-document performance, I'd prototype a synthetic benchmark: 10 clients making concurrent edits to a 20MB document over a simulated 200ms network, measuring whether the rope's structural advantages survive the serialization/deserialization round-trip that MCP will require.

---

# Part 11: Optimal Pricing

**Setup:** Let f = floor price, b = balcony price.
Revenue from floor: R_F = f * F(f) = f(400 - 4f) = 400f - 4f^2
Revenue from balcony: R_B = b * B(b) = b(250 - 2b) = 250b - 2b^2

**Costs:**

*Security guards (floor):* triggered when f < 40.

- Baseline attendees at f=45: F(45) = 400 - 180 = 220
- Additional attendees beyond 220 when f < 40: Delta = F(f) - 220 = (400 - 4f) - 220 = 180 - 4f
- Guards needed: ceil(Delta / 50), cost = 200 * ceil((180 - 4f)/50)

*Bartender (balcony):* triggered when B(b) > 0.8 * 250 = 200, i.e., 250 - 2b > 200, i.e., b < 25. Cost = 150.

**Optimize floor and balcony independently (no cross-effects).**

**Floor optimization:**

Unconstrained maximum: dR_F/df = 400 - 8f = 0 => f* = 50
R_F(50) = 400(50) - 4(2500) = 20,000 - 10,000 = $10,000
At f = 50: attendance = 400 - 200 = 200. No security trigger (f > 40).

Check the f < 40 range to see if security costs are ever worth lower prices:

- At f = 40 (boundary): R_F = 40(400 - 160) = 40 * 240 = $9,600. No extra cost. Worse.
- At f = 35: R_F = 35(400 - 140) = 35 * 260 = $9,100. Delta = 180 - 140 = 40 extra people. ceil(40/50) = 1 guard. Cost = $200. Net = $8,900. Worse.
- At f = 30: R_F = 30(400 - 120) = 30 * 280 = $8,400. Delta = 180 - 120 = 60. Guards = ceil(60/50) = 2. Cost = $400. Net = $8,000. Worse.

(Revenue is a downward-opening parabola peaking at f = 50, so revenue only falls as f decreases below 40, while guard costs add on top -- the spot checks are representative of the whole range.)

So f* = 50 is optimal for the floor.

**Balcony optimization:**

Unconstrained maximum: dR_B/db = 250 - 4b = 0 => b* = 62.50
R_B(62.50) = 250(62.50) - 2(62.50)^2 = 15,625 - 7,812.50 = $7,812.50
At b = 62.50: B(62.50) = 250 - 125 = 125. Is this > 200? No. No bartender cost.

Check the bartender trigger (b < 25):

- At b = 25 (boundary): R_B = 25(250 - 50) = 25 * 200 = $5,000. No bartender (attendance is exactly 80%, not above). Worse than b* = 62.50.
- Going below 25 adds bartender cost, and revenue is already far from the optimum. No benefit.

**Allowing non-integer prices:** yes -- prices are treated as continuous, since no integrality constraint is specified.
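The hand optimization above can be sanity-checked numerically. A minimal sketch that grid-searches each price in one-cent steps; the profit functions simply transcribe the revenue and cost rules stated above:

```python
import math

def floor_profit(f):
    attendance = max(0.0, 400 - 4 * f)
    revenue = f * attendance
    extra = attendance - 220  # attendees beyond the f=45 baseline of 220
    guards = math.ceil(extra / 50) if f < 40 and extra > 0 else 0
    return revenue - 200 * guards

def balcony_profit(b):
    attendance = max(0.0, 250 - 2 * b)
    revenue = b * attendance
    bartender = 150 if attendance > 0.8 * 250 else 0  # fires exactly when b < 25
    return revenue - bartender

prices = [p / 100 for p in range(0, 10001)]  # $0.00 .. $100.00 in cents
best_f = max(prices, key=floor_profit)
best_b = max(prices, key=balcony_profit)
print(best_f, floor_profit(best_f))   # 50.0 10000.0
print(best_b, balcony_profit(best_b)) # 62.5 7812.5
```

The grid search lands on f* = $50.00 and b* = $62.50, confirming that neither crossing the guard threshold nor the bartender threshold ever pays for itself.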
**Optimal solution:**

| Section | Price | Attendance | Revenue | Extra Costs | Net |
|---|---|---|---|---|---|
| Floor | $50.00 | 200 | $10,000.00 | $0 | $10,000.00 |
| Balcony | $62.50 | 125 | $7,812.50 | $0 | $7,812.50 |

**Total optimal profit: $17,812.50**

Assumptions: prices can be non-integer; attendance functions are valid (non-negative) in the relevant range; the costs given are the only variable costs (no fixed costs stated); physical capacity constraints are respected: F(f) = 200 < 400, B(b) = 125 < 250.

---

# Part 12: Coworker Taking Credit

That's genuinely exhausting -- and the pattern of *three times* is what makes it more than bad luck. Someone ignoring your idea and then repeating it twenty minutes later once might be a coincidence. Three times is a dynamic.

A few thoughts, in rough order of priority:

The most important thing to do before anything else is make your ideas harder to ignore *in the moment*, without it being confrontational. The way ideas get credited is often about timing and confidence of delivery, not fairness. One technique: when someone restates your idea and it lands, say clearly and calmly, "Yes, that's what I was getting at -- I think X is the key piece." Not accusatory, just claiming the thread. It sounds small, but it's surprisingly effective, and it's something you can start doing immediately.

Second: documentation. Start a brief habit of following up significant meeting contributions with a quick Slack message or email -- "To summarize what I was proposing in today's meeting about X..." This creates a timestamped trail that isn't accusatory but is real. It also helps your manager see your thinking even if they missed it in the meeting.

Third, and only if the pattern continues: have a direct conversation with your coworker first. Not accusatory -- genuinely curiosity-phrased.
"Hey, I noticed in the last few meetings my ideas seem to land differently when they come from you -- I'm wondering if I'm not being clear enough or if there's something I'm missing about how I'm communicating." This puts it on the table, gives them a chance to course-correct, and is more likely to actually change behavior than going over their head. If that doesn't work, then your manager -- but with specifics and the paper trail.

The frustration is completely valid. Don't quit over it yet. This is actually fixable.

---

# Part 13: Asymmetric Encryption

**For a 12-year-old:**

Imagine a special padlock that anyone can close, but only you have the key to open. You give copies of the padlock to anyone who wants to send you a secret message -- they lock their message inside, and even they can't open it again. Only you can, because only you have the key.

**For a business executive:**

Asymmetric encryption solves a fundamental problem: how do two parties communicate securely without first meeting to exchange a secret? Each party holds a mathematically linked key pair -- a *public key* they share freely and a *private key* they never expose. Data encrypted with someone's public key can only be decrypted with their corresponding private key. This is what underlies HTTPS, email security, and digital signatures.

The practical implication for evaluating security software is this: the security of the system doesn't depend on keeping the encryption *method* secret -- it depends on the mathematical difficulty of factoring very large numbers (for RSA) or solving elliptic curve problems (for ECC). Modern standards (2048-bit RSA or 256-bit ECC) remain computationally infeasible to brute-force with current hardware, though quantum computing represents a medium-term risk that vendors should have a post-quantum migration plan for.
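The key-pair mechanics can be made concrete with a toy RSA example -- the primes here are deliberately tiny and trivially factorable, chosen purely for illustration (real keys use primes hundreds of digits long):

```python
from math import gcd

# Toy RSA key generation -- insecure, illustrative numbers only.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient of n: 3120
e = 17                     # public exponent, must be coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)        # private exponent: modular inverse of e (2753)

message = 65                   # a message encoded as an integer < n
cipher = pow(message, e, n)    # encrypt with the PUBLIC key: m^e mod n
plain = pow(cipher, d, n)      # decrypt with the PRIVATE key: c^d mod n
assert plain == message
```

The asymmetry is visible in the exponents: anyone who knows (n, e) can encrypt, but computing d requires phi(n), which in turn requires factoring n. (The three-argument `pow(e, -1, phi)` form for modular inverses requires Python 3.8+.)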
**For a computer science student:**

Asymmetric (public-key) cryptography relies on *trapdoor one-way functions* -- operations that are easy to compute in one direction and computationally infeasible to reverse without a secret piece of information. RSA's security is based on the integer factorization problem: multiplying two large primes p and q to get n = pq is trivial; recovering p and q from n alone is believed to be hard (no polynomial-time classical algorithm is known).

Key generation picks large primes, computes n and phi(n) = (p-1)(q-1), chooses a public exponent e coprime to phi(n), and derives the private exponent d as e^{-1} mod phi(n) via the extended Euclidean algorithm. Encryption: c = m^e mod n. Decryption: m = c^d mod n, which works because m^{ed} = m (mod n) -- by Euler's theorem when gcd(m, n) = 1, and by a Chinese-remainder argument in the general case.

ECC achieves equivalent security with much shorter keys by working over the group of points on an elliptic curve over a finite field, where the hardness assumption is the discrete logarithm problem in that group -- believed to be harder per bit than factorization. In practice, asymmetric encryption is slow and is used primarily to establish shared session keys (key encapsulation), which then drive symmetric ciphers like AES-GCM for bulk data -- the hybrid model used in TLS 1.3.

---

# Part 14: Self-Assessment

**Most difficult:**

Part 3 (literary fiction) and Part 8 (logic puzzle) were difficult for different reasons. The literary fiction constraint set was the most genuinely hard craft challenge -- the prohibition on "marriage," "divorce," "love," and "relationship" forced real metaphorical work, and the requirement for rhythm over statement means the prose either earns it or visibly strains. The failure mode is purple prose, and I'm uncertain I fully avoided it. The garden metaphor is a bit familiar.
The logic puzzle was difficult for a different reason: the constraint system is tight enough that I needed to be careful about exhaustiveness -- I had to prove that A1 and B1 are the *only* solutions, not just find valid ones.

**Strongest performance:**

Part 8 (logic) and Part 11 (math) -- these have verifiable correct answers, which I can check. The enumeration approach I used for Part 8 is systematic enough that I'm confident in the completeness claim. Part 13's three-level explanation is also one I'm relatively confident in: the audiences are genuinely distinct, the technical level is calibrated appropriately, and the executive version actually says something actionable.

**What I'd do differently with more time:**

For Part 3, I'd revise the extended metaphor -- the garden is doing real work but it arrives too explicitly. I'd want to let the reader discover it rather than having the narrator narrate it. For Part 9, the argument analysis is honest, but the "my analysis" section could be sharpened with more specific empirical predictions. For Part 10, the WAL/crash-recovery section is solid, but I glossed over the CRDT vs. operational transform question for MCP sync, which is actually the hard part of the design. I'd want to develop that further.