A20 · Performance Take-Home (Cycle Optimization)
Verified source
Anthropic's original performance take-home, open-sourced on GitHub. The README warns that LLMs can "cheat" by modifying tests. Credibility: A (official open-source release).
Not a classical system-design question, but squarely within Anthropic's value system: infrastructure-level optimization plus correctness integrity.
How to approach it (and adjacent questions)
- Establish a baseline profile: hotspots, instruction counts, memory-access patterns.
- Change only one class of optimization per iteration, and keep each change explainable (loop unrolling, branch reduction, cache locality, SIMD).
- Correctness guardrails: a fixed test set, a diff-tests directory, and a per-commit record of cycle count and correctness.
- Never modify the tests. The repo detects this.
Why this matters in interviews
Anthropic cares about agents that don't cheat their own evals. Demonstrating rigorous methodology and zero test tampering signals "can be trusted to run autonomously."