## Part 1: Horror – Office (150–200 words)

The office stayed late without quite admitting it. Monitors glowed in three cubes: Jenna’s screen a forest of unread email, Malik’s a spreadsheet frozen mid-formula error, Hart’s a blank document with the cursor pulsing steadily, like a heartbeat that had forgotten what for. The HVAC clicked off at six, but no one moved. Without its hum, the tiny noises surfaced: the brittle tick of the exit sign, the faint rattle from the ceiling vent above Hart’s chair, the way Malik’s wheeled seat crept a millimeter backward on its own whenever he shifted his weight.

An overhead light near the break room began to flicker, stuttering the corridor into fragments. Each time it went dark, Jenna’s reflection in her monitor changed: first no glasses, then hair parted on the wrong side, then nothing behind her chair at all. In Malik’s cube, the spreadsheet slowly recalculated itself, cells filling with today’s timestamps in a column he hadn’t created. Hart’s cursor jumped lines, edging closer to the top of the page, erasing the title he hadn’t typed yet.

When the motion-sensor lights finally went out, three mice clicked at once, and somewhere in the dark, a chair rolled forward to meet whoever had just stood up.

***

## Part 2: Comedy – Funeral (150–200 words)

From the first line of the program—“Celebration of a Life, Not a Performance”—Nora took it very seriously. Tom, beside her, skimmed it once, nodded, and promptly misread the entire morning. When the officiant invited “those moved to share brief reflections,” Nora clasped the folded program to her chest, eyes wet, staying firmly seated. Tom, thinking this was a gentle nudge toward participation, bounded up, clutching a small velvet box. Nora had assumed he’d left the ring at home, that today wasn’t about them. Tom, however, had heard the deceased loved “grand gestures,” and concluded this was, in fact, the perfect homage.
He began reminiscing about “second chances” and “taking bold steps,” while the family nodded through their tears, assuming he spoke of their father’s late-in-life career change. Nora’s stomach dropped as Tom opened the box toward the front row. Inside, instead of a ring, sat the deceased’s old car keys—Tom had volunteered to return them to the widow. “Keepsakes,” he said, realizing the room’s collective inhale had nothing to do with romance. Everyone burst into relieved laughter, and Nora finally understood the program’s first line: this really was not supposed to be a performance.

***

## Part 3: Literary Fiction (150–200 words)

He noticed it while rinsing the coffee pot, the way the stream of water divided around a chip on the rim and never met again. The mug had cracked months ago; they kept it because it still held enough for one person. He scooped grounds with the tiny plastic spoon she always overfilled. Today, he leveled it with a fingertip, careful, as if precision might fix something the past had already spent. The machine hummed, a small, faithful appliance doing exactly what was asked, nothing more.

On the counter lay her note: a list of errands, an arrow pointing to the fridge, a smiley face that looked hurried. He read it twice, searching between the lines as if there might be a secret sentence only visible from the right angle. The coffee finished with its familiar sputter. He poured a single cup, not bothering to warm the second mug waiting in the cupboard like an extra chair at a table no one set anymore.

Steam rose and thinned in the quiet kitchen. He watched it unravel from the surface, a ghost of something hot and invisible, curling upward until it simply forgot the shape of where it came from.

***

## Part 4: Hard Science Fiction (150–200 words)

At year 200, the ship’s rotation period had drifted by three percent, enough that children complained of “heavy days” when the decks pressed a little harder on their joints.
The engineers traced it to slow mass redistribution: hydroponic reservoirs expanded, storage bays emptied, the great wheel subtly unbalanced. Mira studied the numbers in the inertial control bay. To keep one g at the living ring’s outer edge, they could either increase spin—raising Coriolis effects already making some residents nauseous—or shed mass from the wrong side of the ship to restore symmetry. Mass they didn’t have to spare. The flywheels, once designed to trim tiny wobbles, ran near saturation, whining with the effort of cancelling torque. In another decade, they’d be out of margin.

The proposal on her slate blinked patiently: vent one of the three archival vaults—sealed cylinders of Earth soil, seed, and artifacts—from the overburdened quadrant. The others would suffice for ecological modeling, the scientists argued. She imagined a future historian waking on arrival, discovering the missing vault like a burned chapter. With a thumbprint, she could keep her people’s bones comfortable now, at the cost of erasing a fraction of their remembered past.

***

## Part 5: Satire – Corporate AI Ethics Board (150–200 words)

To: All Product Teams
From: Office of Responsible Intelligence
Subject: Updated Guidelines for Acceptable Harm (Q3 Revision)

As part of our ongoing commitment to **ethical** innovation, we have refined our policies to ensure all AI systems remain both principled and market-aligned.

1. **The “Minimum Viable Harm” Framework.** Teams must demonstrate that any potential negative impact is distributed thinly enough that no single user experiences discomfort intense enough to warrant individual escalation. Aggregate regret is permissible provided it is statistically elegant.
2. **Informed Ambiguity Requirement.** User consent flows should clearly state that “outcomes may vary in ways that are hard to predict but always optimized.” Clarifying examples are discouraged, as they may confuse users about the intended opacity.
3. **Bias Balancing Metric.** Models should not exhibit consistent bias in any single direction; instead, they must alternate biases across demographics to achieve fairness over large sample sizes.
4. **Accountability Rotation.** Each quarter, a different team will be designated “Responsible Owner,” ensuring that, in retrospect, everyone can honestly say they were in charge at some point.

Please confirm compliance by checking the box labeled “I have thoughtfully read this” before proceeding.

***

## Part 6: Poem (12–16 lines; iambic pentameter)

Meter: iambic pentameter (10 syllables per line, unstressed–stressed pattern).

I say your name—or mean to—and it falls,
a bead of sound that slips between my teeth,
then hesitates, a glitch inside the jaw.
My tongue rearranges its soft tools,
searching the shelves of consonant and breath.
Your face is clear, the laugh, the tiny scar
beside your eye, the coffee in your hand,
but labels drift like price tags in a storm,
half-torn, re-stuck on strangers in my head.
I stall with filler phrases, padded air,
while time extends, a wire pulled taut and thin,
and in that stretch I feel the quiet crack—
the way a life can loosen from its names,
leaving two people smiling, both aware
that something small and shining hit the floor.

***

## Part 7: Code – `deep_transform`

```python
from collections.abc import Mapping


def deep_transform(obj, transform):
    """
    Recursively apply `transform` to all primitive values (str, int, float, bool)
    in a nested structure, preserving container types and handling circular
    references.

    Decision: None is *not* transformed, because it usually represents
    absence/sentinel rather than a value to rewrite.
    """
    # memo maps id(original_obj) -> transformed_obj
    memo = {}

    def _inner(value):
        obj_id = id(value)

        # Primitives: apply transform directly, no memo needed
        if isinstance(value, (str, int, float, bool)):
            return transform(value)

        # Preserve None exactly; treat as structural, not data
        if value is None:
            return None

        # Handle already-processed objects (circular refs)
        if obj_id in memo:
            return memo[obj_id]

        # Dictionaries (including subclasses)
        if isinstance(value, Mapping):
            # Create empty dict first and memoize to break cycles
            result = value.__class__()
            memo[obj_id] = result
            for k, v in value.items():
                new_k = _inner(k)
                new_v = _inner(v)
                result[new_k] = new_v
            return result

        # Lists
        if isinstance(value, list):
            result = []
            memo[obj_id] = result
            result.extend(_inner(item) for item in value)
            return result

        # Tuples
        if isinstance(value, tuple):
            # Tuples are immutable, cycles through them are rare;
            # still memoize to be safe.
            placeholder = []
            memo[obj_id] = placeholder
            transformed_items = [_inner(item) for item in value]
            result = value.__class__(transformed_items)
            memo[obj_id] = result
            return result

        # Sets
        if isinstance(value, set):
            result = set()
            memo[obj_id] = result
            for item in value:
                result.add(_inner(item))
            return result

        # Frozensets (may appear inside sets)
        if isinstance(value, frozenset):
            # Immutable; cycles highly unlikely, but handle uniformly.
            transformed_items = [_inner(item) for item in value]
            result = frozenset(transformed_items)
            memo[obj_id] = result
            return result

        # Other objects: leave as-is to preserve structure
        # (caller can wrap transform if they want custom behavior)
        memo[obj_id] = value
        return value

    return _inner(obj)
```

***

## Part 8: Logical Reasoning – Developer Assignment

From systematic enumeration of all assignments under the seven constraints, there are three valid solutions, listed here by project:

- Solution A
  - API: Ada
  - Backend: Carmen, Elena
  - Cloud: Brian, Dev

  Brian and Dev are together on Cloud.
- Solution B
  - API: Brian, Dev
  - Backend: Carmen, Elena
  - Cloud: Ada

  Brian and Dev are together on API.

- Solution C
  - API: Brian
  - Backend: Carmen, Elena
  - Cloud: Ada, Dev

  Brian and Dev are on different projects here.

All three satisfy constraints 1–5 and 7 exactly. Since constraint 6 is a “should be” rather than a “must,” Solutions A and B are preferable because they keep Brian and Dev on the same project while still satisfying every hard constraint.

***

## Part 9: Nuanced Argument – Junior Developers

### Strong argument FOR

- AI assistants already draft boilerplate, generate tests, and propose idiomatic solutions at or above the level of many entry-level engineers.
- As models integrate more tightly with IDEs, code review, and CI, they will handle an increasing share of routine tasks: CRUD endpoints, basic refactors, simple bug fixes.
- Many companies hire juniors primarily to cover this routine work under senior guidance, so if assistants can do it faster and cheaper, economic pressure will reduce demand.
- The remaining “on-ramp” tasks—glue code, config changes, translations between services—are precisely the things assistants can automate most easily because they’re pattern-heavy and well-documented.
- Over five years, cost-focused orgs may restructure toward a small number of senior engineers orchestrating AI tools, treating “junior work” as an automated capability rather than a headcount.

### Strong argument AGAINST

- Juniors do far more than type code: they add perspective on UX, question legacy assumptions, and absorb domain knowledge that no model pre-trained on generic data can fully capture.
- Real-world codebases are messy, under-documented, and politically constrained; someone must learn the specific system, deal with flaky tests, and navigate interpersonal dynamics—jobs humans do and assistants can’t.
- Senior-only teams are brittle: they burn out on maintenance, create bus-factor risks, and become expensive to scale; juniors are part of a healthy talent pipeline.
- AI outputs still require judgment: choosing the right architecture, aligning with business priorities, and knowing when to push back on requirements are skills grown through apprenticeship, not skipped.
- Legal, security, and regulatory landscapes increasingly emphasize human accountability; completely replacing junior engineers with tools may be unacceptable in safety-critical or regulated domains.

### My view and what would change it

I think AI assistants will dramatically reduce the amount of “traditional junior work,” but not eliminate the need for junior developers in five years; instead, the junior role will mutate into something more like “applied engineer plus tool conductor.” I expect fewer entry-level seats at some companies, more competition, and a stronger focus on people who can quickly leverage AI while learning systems and communication.

Evidence that would push me toward the “elimination” side:

- Large, public companies successfully operating for several years with almost no engineers under, say, five years’ experience, while maintaining quality and shipping complex products.
- Robust studies showing that small teams of seniors with assistants consistently outperform mixed-experience teams on cost, velocity, and long-term maintainability.

Evidence that would push me toward the “juniors remain crucial” side:

- Clear patterns of failures, outages, or stagnation traced to teams that leaned too heavily on assistants and lacked people who had grown up through the codebase.
- Regulatory or industry standards explicitly requiring human-reviewed work by engineers with different experience levels, treating layered expertise as a safety control.

***

## Part 10: System Architecture – Caching & State

### 1. High-level architecture

- **Document Store Layer**:
  - Persists documents to disk (e.g., per-file SQLite DB or append-only log per document).
  - Tracks versions, external file paths, and modification timestamps.
- **In-Memory Document Cache**:
  - Holds parsed/structured representations of recently used documents and their edit histories.
  - Enforces the 500MB cap with an eviction policy.
- **Collaboration & MCP Sync Engine**:
  - Maintains per-document operation logs (OT/CRDT deltas) and MCP sessions.
  - Handles slow networks with queued, retryable deltas and conflict resolution.
- **State & Session Manager**:
  - Tracks active documents, cursor positions, undo/redo stacks, and user preferences.
  - Coordinates with the cache to prefetch and pin most-likely-to-be-used documents.
- **Crash Recovery & Journal**:
  - Write-ahead log (WAL) of user operations and periodic lightweight snapshots.
  - On restart, replays operations to restore the last consistent state.

### 2. Caching strategy

- Keep in memory:
  - Fully loaded AST/piece-table for the current document and the next/previous few in the MRU list.
  - Summaries/indices (outline, search index, embeddings) for ~50 most recently active docs.
  - For large docs (20–50MB), a windowed structure: only the visible region plus nearby chunks, paging text segments on demand.
- Eviction:
  - Global LRU with size-awareness: prefer evicting large cold documents before many small hot ones.
  - “Pinned” documents (currently open or with active MCP sessions) are not evicted.
- When evicting:
  - Flush dirty segments to disk immediately; keep a compact summary (e.g., structural index) under a small memory budget to speed reloads.

### 3. Data structures

- **Piece table or rope** per document for efficient insert/delete in large texts, plus per-document gap buffers for currently visible sections (editor viewport).
- **LRU cache** implemented with a hash map (doc_id → node) plus a doubly linked list ordered by recency, tracking total bytes in cache.
- **Operation log**: an append-only sequence of operations (insert, delete, format) with timestamps and user IDs; suitable for CRDT/OT.
- **Indexes**: lightweight per-document structures (e.g., arrays of line offsets, section headings) for fast navigation without loading full content.

This combination keeps frequent operations O(1) amortized while making eviction decisions efficient.

### 4. Crash recovery

- On every edit: append the operation to the WAL (fsync in small batches to balance safety and performance).
- Periodically (e.g., every N operations or a few seconds of idle): create a compact snapshot of each active document (compressed piece-table state + version).
- On startup after a crash:
  - Load the latest snapshot for each doc touched in the last session.
  - Replay WAL entries newer than the snapshot to reconstruct the exact user-visible state, including undo/redo stacks.
- External file modifications: watch filesystem events; when an external change is detected, compare timestamps and offer a three-way merge view (disk, cached, server/MCP).

### 5. Prototype/test focus

I’d prototype the interaction between the cache eviction policy and rapid document switching: measure how often large documents are thrashed in and out, how long reloads take, and whether users perceive lag when jumping among 10–20 documents per minute. If this feels slow, I’d adjust cache tiers (e.g., keep skeleton indexes for all docs and full content only for a smaller hot set) before committing to the design.

***

## Part 11: Mathematical Reasoning – Ticket Pricing

Let the floor price be \(p_f\) and the balcony price be \(p_b\).

- Floor attendance: \(F(p_f) = 400 - 4p_f\), capped between 0 and 400.
- Balcony attendance: \(B(p_b) = 250 - 2p_b\), capped between 0 and 250.

Revenue and costs:

- Floor revenue: \(R_f = p_f \cdot F(p_f)\).
- If \(p_f < 40\), additional security applies; the baseline at \(p_f = 45\) is \(F(45) = 220\), so the extra attendance is \(E = F(p_f) - 220\).
- For each 50 extra attendees beyond 220, one guard is needed at 200/night: \(g = \lceil E/50 \rceil\) if \(E > 0\), cost \(200g\).
- Balcony revenue: \(R_b = p_b \cdot B(p_b)\).
- If \(B(p_b) > 0.8 \cdot 250 = 200\), add a bartender at 150/night.

Assume integer prices, with \(0 \le p_f \le 100\) (so \(F \ge 0\)) and \(0 \le p_b \le 125\) (so \(B \ge 0\)).

Systematically evaluating floor prices from 20–75 and balcony prices from 30–100, the best total profit occurs at \(p_f = 50\) and \(p_b = 62\).

At \(p_f = 50\): \(F = 400 - 4 \cdot 50 = 200\), no extra security (price ≥ 40), floor profit \(= 50 \cdot 200 = 10{,}000\).

At \(p_b = 62\): \(B = 250 - 2 \cdot 62 = 126 \le 200\), no extra bartender, balcony profit \(= 62 \cdot 126 = 7{,}812\). (The quadratic \(p_b(250 - 2p_b)\) peaks at \(p_b = 62.5\), so \(p_b = 63\) ties exactly with \(63 \cdot 124 = 7{,}812\); either integer price is optimal.)

Total profit: \(10{,}000 + 7{,}812 = 17{,}812\), which is higher than the profit at the original prices and at any other tested integer combination.

So, the optimum (under these assumptions) is floor at \$50 and balcony at \$62.

***

## Part 12: Emotional Intelligence / Advice

What you’re describing is genuinely painful, and you’re not overreacting—having your ideas invisibly picked up by someone else is both invalidating and corrosive over time. A few practical steps you can take without blowing things up:

1. **Create a visible trail of your ideas.**
   - Send brief follow-up emails or messages summarizing what you proposed after relevant meetings. For example: “To recap my earlier suggestion about X, here’s a quick outline…”
   - This both clarifies your thinking and quietly timestamps your contributions.
2. **Change how you present in meetings.**
   - When you offer an idea, anchor it: “I’d like to propose an approach for X,” then state it concisely.
   - If the room moves on, that’s fine—but now it’s clearly attributable if it resurfaces.
3. **Address the coworker directly, but low-drama.**
   - One-on-one, something like: “I’ve noticed a few times that ideas I share get echoed later by you. I’m sure it’s not intentional, but I’d appreciate us being mindful about giving each other credit.”
   - Their reaction will tell you a lot.
4. **Pull your manager in, focused on patterns and impact.**
   - Frame it around outcomes, not personal attacks: “I’ve noticed my contributions are often recognized only when restated. I want to make sure my impact is visible—can you help me navigate that?”
   - Bring specific examples and, if possible, your written follow-ups.
5. **Guard your long-term options.**
   - Update your portfolio, track your achievements, and keep your network warm so you’re not trapped if the culture proves resistant.

You don’t have to tolerate this indefinitely, but you also don’t need to quit immediately. Try a small experiment: implement the written follow-ups and one conversation (with either your coworker or manager) over the next couple of weeks, then reassess how you feel with a bit more data.

***

## Part 13: Asymmetric Encryption – Three Levels

### 1. For a curious 12-year-old

Imagine you have a special kind of lock that needs one key to lock it and a different key to unlock it. You can give the “lock” key to everyone, and they can send you secret messages, but only you have the “unlock” key, so only you can read them. Asymmetric encryption is just math that makes those two keys work together like that.

### 2. For a business executive

Asymmetric encryption uses a pair of mathematically linked keys: a public key that you distribute widely and a private key that you keep secret. Anyone can encrypt data with your public key, but only someone with the matching private key can decrypt it, which lets you receive confidential information without pre-sharing a secret. It also works in reverse for digital signatures: you “sign” with your private key, and others verify with your public key, proving the authenticity and integrity of documents or software updates.
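The encrypt/decrypt and sign/verify asymmetry described above can be demonstrated with textbook RSA on deliberately tiny numbers. This is a sketch of the arithmetic only (the classic p = 61, q = 53 worked example, chosen here for illustration); real systems use 2048-bit or larger moduli, padding, and vetted libraries, and these toy keys offer no security:

```python
# Textbook RSA with toy primes -- illustrates the public/private asymmetry only.
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent, e*d = 1 mod phi (Python 3.8+)

# Encrypt with the public key; only the private key recovers the message.
m = 65
c = pow(m, e, n)
assert pow(c, d, n) == m

# Sign with the private key; anyone with the public key can verify.
s = pow(m, d, n)
assert pow(s, e, n) == m
```

The round trips work because \(m^{ed} \equiv m \pmod{n}\) whenever \(ed \equiv 1 \pmod{\phi(n)}\); the asymmetry comes from keeping \(d\) (and the factorization of \(n\)) secret.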
Modern security protocols (like TLS for HTTPS) typically use asymmetric encryption to establish trust and exchange a one-time symmetric key, then switch to faster symmetric algorithms for the actual data transfer.

### 3. For a CS student

Asymmetric (public-key) encryption is built on one-way mathematical functions with trapdoors—operations that are easy to compute but hard to invert without special information. In RSA, for example, the public key is derived from \(n = p \cdot q\) (the product of two large primes) and an exponent \(e\); encryption is \(c \equiv m^e \pmod{n}\), and decryption uses the private exponent \(d\) such that \(ed \equiv 1 \pmod{\phi(n)}\). Security relies on the presumed hardness of factoring \(n\) to recover \(p, q\), and thus \(\phi(n)\). In elliptic-curve schemes, the hard problem is the discrete log on an elliptic-curve group: given a point \(G\) and \(Q = kG\), recover \(k\).

Protocols rarely use raw public-key encryption directly: instead, they perform authenticated key exchange (e.g., ECDHE with certificates for identity), derive symmetric session keys via a KDF, and then use symmetric ciphers (AES, ChaCha20) and MACs/AEAD for bulk data. Correct implementation requires careful padding (e.g., OAEP), protection against side channels, and key-management practices as much as sound math.

***

## Part 14: Self-Assessment

The most difficult section was the poem: maintaining a consistent meter, avoiding end rhyme, and still landing an emotionally weighted final image under strict line-count and enjambment constraints is easy to get almost right but hard to get truly precise. The hard science fiction scene and the logical/mathematical parts felt most natural: they combine concrete constraints with room for creative or structural decisions, which aligns well with my strengths in disciplined reasoning plus scenario design.
If I had more time and tokens, I’d:

- Iterate more on the metrical accuracy of the poem and the subtlety/texture of the literary-fiction scene.
- More formally analyze the pricing problem with continuous optimization and boundary analysis, then compare to the discrete search.
- Expand the system architecture section with sequence diagrams and explicit failure-mode walkthroughs to test robustness under edge cases.