1. Why Share AI Coding Tool Experiences?
The author notes that while articles about Claude Code abound, a handful of tips drawn from real-world use are genuinely worth sharing.
"There's a flood of Claude Code articles, but a few tips we discovered are genuinely worth sharing."
2. Core Tip: Anchor Comments System
Anchor Comments are specially formatted comments placed throughout the codebase so they can be grepped easily and preserve context for both humans and the AI. The prefixes AIDEV-NOTE:, AIDEV-TODO:, and AIDEV-QUESTION: are used, with the convention documented in CLAUDE.md.
"Add anchor comments whenever code is too complex, very important, or could have a bug."
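The convention can be sketched as ordinary source comments distinguished only by their greppable prefix; the function and the notes below are hypothetical, not the author's code:

```python
# AIDEV-NOTE: perf-critical path; the limiter state is shared across
# workers, so changes here must stay lock-free.
def throttle(request):
    # AIDEV-TODO: add per-tenant limits once the quota API lands
    # AIDEV-QUESTION: should burst traffic bypass the limiter entirely?
    ...
```

Because every anchor shares the AIDEV- prefix, finding all of them is a single search, e.g. `grep -rn "AIDEV-" src/`.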
3. Tests Must Be Human-Written
"Never. Let. AI. Write. Tests."
AI-generated tests led to developers becoming careless about testing, resulting in more production bugs. The author enforces this through CLAUDE.md rules, settings, PR reviews, and commit messages.
Some counterarguments suggest AI can draft tests for human review, but final responsibility must remain with humans.
4. Tool Selection and Costs
- Claude Code with Opus 4 on Max subscription ($100-200/month) is recommended.
- Costs can reach $10+/day or $2,000/year.
- Some companies now prefer LLMs over offshore developers for speed and cost.
5. Organizational and Cultural Changes
- Team rules (code review, linters) remain important with AI adoption.
- Areas where AI feels uncomfortable signal the need for systematic verification.
- Documentation for LLMs (CLAUDE.md, SPEC.md) requires a different style than human docs — more examples, more explicit rules.
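As a sketch of that more explicit style, a CLAUDE.md fragment might state rules as short, imperative bullets rather than prose; the wording below is illustrative, not the author's actual file:

```markdown
## Anchor comments
- Before modifying a file, grep for existing `AIDEV-` anchors and read them.
- Add an `AIDEV-NOTE:` when code is complex, important, or possibly buggy.
- Never delete an `AIDEV-` comment without an explicit human instruction.

## Tests
- Do NOT write or modify test files. Tests are human-written, always.
```

The point is that an LLM follows explicit, repeated, example-backed rules far more reliably than the implied conventions human readers infer from context.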
6. Real Workflow and Productivity
- 500+ endpoints refactored in 4 hours (excluding tests).
- Claude Code's agentic markdown editing enables semantic merging of research documents.
7. Community Debate: AI Usage and Authenticity
- Controversy arose over the author's disclosure that roughly 40-60% of the article was LLM-assisted.
- Debate between "content quality matters" and "authenticity matters more."
- The author clarified that ideas, code, examples, and images were his own; LLM assisted with drafting and research.
Key Terms: Anchor Comments, CLAUDE.md, AIDEV-NOTE/TODO/QUESTION, Human-Written Tests, Claude Code, Opus 4, Team Rules, Transparency, Responsibility