Preparation for making development with Claude Code more efficient. (2025.09.25)
Claude Code: Best practices for agentic coding
This post simply excerpts and translates the official page above.
3. Try common workflows
3-1. Explore, plan, code, commit
This versatile workflow suits many problems.
I had GPT try some alternative English phrasings.
This flexible workflow is suitable for a wide range of problems.
This versatile workflow can be applied to many different problems.
This workflow is flexible enough to handle a variety of problems.
This adaptable workflow works well for many types of problems.
This general-purpose workflow fits a broad range of use cases.
This workflow is effective for solving a wide array of problems.
3-1-1. Ask Claude to read relevant files, images, or URLs,
providing either general pointers (“read the file that handles logging”) or specific filenames (“read logging.py”),
but explicitly tell it not to write any code just yet.
This is the part of the workflow where you should consider strong use of subagents, especially for complex problems.
3-1-2. Ask Claude to make a plan for how to approach a specific problem.
We recommend using the word “think” to trigger extended thinking mode, which gives Claude additional computation time to evaluate alternatives more thoroughly.
These specific phrases are mapped directly to increasing levels of thinking budget in the system: “think” < “think hard” < “think harder” < “ultrathink.” Each level allocates progressively more thinking budget for Claude to use.
If the results of this step seem reasonable, you can have Claude create a document or a GitHub issue with its plan so that you can reset to this spot if the implementation (step 3) isn’t what you want.
3-1-3. Ask Claude to implement its solution in code.
This is also a good place to ask it to explicitly verify the reasonableness of its solution as it implements pieces of the solution.
Since this step says, in effect, "ask Claude to verify the reasonableness of its solution as it writes the code," I had Claude Opus 4.1 give me a concrete example. Opus 4.1 explains things better than Sonnet 4.5.
Prompt (Japanese, translated): Please explain the following sentence using a concrete example: "This is also a good place to ask it to explicitly verify the reasonableness of its solution as it implements pieces of the solution."
Prompt (English): Please explain the following sentence with an example: "This is also a good place to ask it to explicitly verify the reasonableness of its solution as it implements parts of the solution."
Here is the output.
This sentence suggests that when working with an AI assistant on a complex task, you should ask it to check
if its approach makes sense as it goes along, rather than just accepting everything until the end.
Example Scenario
Let’s say you’re asking an AI to implement a battery monitoring system for a robot:
Without verification (problematic approach):
User: “Create a function to calculate remaining battery life”
AI: *writes code that assumes battery voltage drops linearly from 12V to 0V*
With explicit verification (better approach):
User: “Create a function to calculate remaining battery life. As you implement it, please verify that your
assumptions about battery discharge curves are reasonable.”
AI: “I’ll implement this function. Let me first verify my approach:
- Assumption: Battery voltage drops from 12V to 0V linearly
- Verification: Actually, this is incorrect. LiPo batteries typically operate between 4.2V (full) and 3.0V (empty) per cell, and the discharge curve is non-linear.
- Better approach: I should use a lookup table or polynomial approximation for the actual discharge curve.
[Then proceeds with corrected implementation]”
By asking for verification during implementation, you catch incorrect assumptions early rather than getting a
complete solution based on flawed reasoning.
It was exactly as described: you just need to tell Claude to verify its assumptions.
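As a rough sketch of what the corrected approach from the example above could look like in code (a lookup table plus interpolation), here is a minimal illustration. The voltage breakpoints and the function name are made up for this example, not a real discharge curve.

```python
# Hypothetical sketch of the corrected approach from the example above:
# estimate remaining capacity of a single LiPo cell from its voltage using
# a lookup table plus linear interpolation. Breakpoints are illustrative only.
import numpy as np

# (cell voltage [V], remaining capacity [0..1]) — non-linear discharge curve
DISCHARGE_CURVE = np.array([
    [3.00, 0.00],
    [3.45, 0.10],
    [3.68, 0.30],
    [3.75, 0.50],
    [3.85, 0.70],
    [4.00, 0.90],
    [4.20, 1.00],
])

def remaining_capacity(cell_voltage: float) -> float:
    """Interpolate remaining capacity from cell voltage, clamped to [0, 1]."""
    volts = DISCHARGE_CURVE[:, 0]
    capacity = DISCHARGE_CURVE[:, 1]
    return float(np.clip(np.interp(cell_voltage, volts, capacity), 0.0, 1.0))

# Example: a cell at 3.7 V is somewhere around 30-50% remaining
print(remaining_capacity(3.7))
```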
3-1-4. Ask Claude to commit the result and create a pull request.
If relevant, this is also a good time to have Claude update any READMEs or changelogs with an explanation of what it just did.
Summary of 3-1
Steps #1-#2 are crucial.
Without them, Claude tends to jump straight to coding a solution.
While sometimes that’s what you want, asking Claude to research and plan first significantly improves performance for problems requiring deeper thinking upfront.
3-2. Write tests, commit; code, iterate, commit
This is an Anthropic-favorite workflow for changes that are easily verifiable with unit, integration, or end-to-end tests. Test-driven development (TDD) becomes even more powerful with agentic coding:
3-2-1. Ask Claude to write tests based on expected input/output pairs.
Be explicit about the fact that you’re doing test-driven development
so that it avoids creating mock implementations, even for functionality that doesn’t exist yet in the codebase.
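As an illustration, tests written purely from expected input/output pairs might look like the minimal pytest sketch below. The module dateparse and the function parse_iso_date are hypothetical and intentionally do not exist yet; in TDD these tests are written first and are expected to fail.

```python
# tests/test_parse_iso_date.py
# Hypothetical example: tests derived purely from expected input/output pairs.
# parse_iso_date() is not implemented yet, so these tests should fail at first.
import pytest

from dateparse import parse_iso_date  # module does not exist yet (TDD)


@pytest.mark.parametrize(
    "text, expected",
    [
        ("2025-09-25", (2025, 9, 25)),   # plain date
        ("2025-01-01", (2025, 1, 1)),    # start of year
        ("2024-02-29", (2024, 2, 29)),   # leap day
    ],
)
def test_parse_iso_date_valid(text, expected):
    assert parse_iso_date(text) == expected


def test_parse_iso_date_rejects_garbage():
    with pytest.raises(ValueError):
        parse_iso_date("not-a-date")
```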
The steps below are textbook TDD.
3-2-2. Tell Claude to run the tests and confirm they fail.
(This is the step that people who like TDD love and people who dislike TDD hate. Having the AI do it is nice.)
Explicitly telling it not to write any implementation code at this stage is often helpful.
3-2-3. Ask Claude to commit the tests
Ask Claude to commit the tests when you’re satisfied with them.
3-2-4. Ask Claude to write code that passes the tests
Ask Claude to write code that passes the tests, instructing it not to modify the tests.
Ask Claude not to modify the test suite.
Tell Claude to keep going until all tests pass.
It will usually take a few iterations for Claude to write code, run the tests, adjust the code, and run the tests again.
At this stage, it can help to ask it to verify with independent subagents that the implementation isn't overfitting to the tests.
Prompt (Japanese, translated): Please explain, using a concrete example, how to have an independent subagent verify that the implementation is not overfitting the tests.
Prompt (English): Can you please provide a concrete example of using an independent subagent to verify that the implementation is not overfitting the tests?
> Can you please provide a concrete example of using an independent subagent to verify that the implementation
is not overfitting the tests?
● I’ll demonstrate how to use an independent subagent to verify that an implementation isn’t overfitting to
tests. This is particularly useful when you’ve written code that passes all tests but want to ensure it
handles edge cases and real-world scenarios properly.
● The agent has provided a comprehensive analysis showing how to detect test overfitting. Here’s a practical
example of how you’d use this approach in your workflow:
When to use this verification pattern:
1. After implementing a function that passes all tests suspiciously easily
2. When tests feel too specific or sparse
3. Before deploying code that handles critical business logic
4. When inheriting a codebase with minimal tests
Key takeaways from the example:
- An overfitted implementation might use hardcoded values or if-statements that only handle the exact test cases
- The agent identified critical gaps like boundary testing, interpolation between test points, and edge case handling
- Property-based testing (using tools like Hypothesis) can catch overfitting by testing random inputs
- A proper implementation should use algorithms/formulas rather than memorizing test answers
How to apply this in practice:
# If you have a function that passes tests too easily:
# 1. Test boundaries
assert function(test_value - 0.01) behaves_reasonably
assert function(test_value + 0.01) behaves_reasonably
# 2. Test between known points
assert function(midpoint_value) gives_sensible_result
# 3. Test invariants
# For any valid inputs x1 < x2:
assert property_holds(function(x1), function(x2))
The agent approach helps ensure your implementation truly understands the problem rather than just memorizing
test answers.
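To make the pseudocode above concrete, here is a minimal property-based sketch using the hypothesis library. The module mymodule, the function scale_reading, and the monotonicity invariant are assumptions for illustration only.

```python
# Hypothetical sketch of the pattern above using the hypothesis library.
# scale_reading() and its invariants are assumptions for illustration only.
from hypothesis import given, strategies as st

from mymodule import scale_reading  # hypothetical function under test


@given(st.floats(min_value=0.0, max_value=100.0))
def test_output_in_valid_range(x):
    # Random inputs instead of the handful of values in the unit tests
    assert 0.0 <= scale_reading(x) <= 1.0


@given(
    st.floats(min_value=0.0, max_value=100.0),
    st.floats(min_value=0.0, max_value=100.0),
)
def test_monotonic(x1, x2):
    # Invariant: for any valid inputs x1 < x2, the outputs keep the same order
    if x1 < x2:
        assert scale_reading(x1) <= scale_reading(x2)
```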
Prompt (Japanese, translated): Please give me instructions for having an independent subagent verify that the implementation is not overfitting the tests.
Prompt (English): Please provide instructions for using an independent subagent to verify that the implementation is not overfitting the tests.
● I’ll help you create instructions for using an independent subagent to
verify that your implementation isn’t overfitting to tests. Let me
create a comprehensive guide for this verification process.
# Independent Subagent Verification for Test Overfitting
## Overview
This guide provides instructions for using an independent subagent to verify that your implementation
doesn’t overfit to existing test cases. The verification process ensures your solution generalizes well
beyond the specific test scenarios.
## Verification Strategy
### 1. Primary Implementation Phase
When implementing a feature or fixing a bug:
- Write your implementation based on requirements
- Run existing tests to ensure basic functionality
- **DO NOT** iterate excessively on test failures
### 2. Independent Verification Phase
Launch a separate subagent to:
- Review the implementation without seeing test details
- Generate additional edge cases
- Create property-based tests
- Verify algorithm correctness independently
## Subagent Instructions Template
### For Code Review Agent
```markdown
Task: Review the implementation in [file_path] for potential overfitting to tests.
Please perform the following checks:
1. Analyze if the implementation handles general cases, not just specific test inputs
2. Look for hardcoded values that match test expectations
3. Identify missing edge cases not covered by current tests
4. Verify the algorithm's theoretical correctness
5. Check for special-case handling that seems test-specific
Report any concerns about:
- Magic numbers matching test data
- Overly specific conditional logic
- Missing validation for untested inputs
- Algorithm shortcuts that only work for test cases
```
### For Test Generation Agent
```markdown
Task: Generate additional test cases for [feature/function] WITHOUT looking at existing tests.
Requirements:
1. Create tests based solely on the function signature and documentation
2. Include:
   - Boundary conditions
   - Invalid inputs
   - Performance edge cases
   - Random property-based tests
3. Generate at least 5 unique test scenarios not obvious from the API
Output format: Provide test cases with clear descriptions and expected behaviors.
```
## Parallel Verification Pattern
```python
# Example of using parallel agents for verification
def verify_implementation(implementation_file, test_file):
    """
    Launch parallel agents to verify implementation quality
    """
    # Agent 1: Review implementation for overfitting
    review_agent = Task(
        description="Review for overfitting",
        prompt=f"Review {implementation_file} for test overfitting patterns",
        subagent_type="general-purpose"
    )
    # Agent 2: Generate new test cases
    test_gen_agent = Task(
        description="Generate new tests",
        prompt=f"Create additional test cases for functions in {implementation_file}",
        subagent_type="general-purpose"
    )
    # Agent 3: Verify mathematical correctness
    math_agent = Task(
        description="Verify correctness",
        prompt=f"Verify algorithmic correctness in {implementation_file}",
        subagent_type="general-purpose"
    )
    # Run all agents in parallel
    results = run_parallel(review_agent, test_gen_agent, math_agent)
    return results
```
## Common Overfitting Patterns to Detect
### 1. Hardcoded Test Data
```python
# BAD: Overfitted to test
def calculate_result(input_val):
    if input_val == 42:  # Specific test value
        return 100
    elif input_val == 17:  # Another test value
        return 50
    # Missing general case

# GOOD: General implementation
def calculate_result(input_val):
    return input_val * 2.38 + 0.76  # General formula
```
### 2. Test-Specific Edge Cases
```python
# BAD: Only handles tested edge cases
def process_array(arr):
    if len(arr) == 0:
        return []
    if len(arr) == 3:  # Specific test size
        return special_process(arr)
    # Missing handling for other sizes

# GOOD: Handles all cases
def process_array(arr):
    if len(arr) == 0:
        return []
    return [process_item(x) for x in arr]  # General processing
```
### 3. Algorithm Shortcuts
```python
# BAD: Works only for test inputs
def find_optimal(data):
    # Hardcoded solution for test data
    if data == test_dataset_1:
        return known_solution_1

# GOOD: Actual algorithm
def find_optimal(data):
    # Implement real optimization algorithm
    return optimize(data, constraints)
```
## Verification Workflow
### Step 1: Initial Implementation
```bash
# Implement feature
edit src/feature.py
# Run basic tests
pytest tests/test_feature.py -v
```
### Step 2: Launch Verification Agents
```bash
# Use Task tool to launch verification agents in parallel
# Each agent has a specific verification role
```
### Step 3: Review Agent Findings
- Collect reports from all agents
- Identify potential overfitting issues
- Generate additional test cases
### Step 4: Refactor if Needed
```python
# Based on agent feedback:
# 1. Remove hardcoded values
# 2. Generalize algorithms
# 3. Add missing validations
# 4. Implement proper edge case handling
```
### Step 5: Run Extended Test Suite
```bash
# Run original tests
pytest tests/test_feature.py
# Run generated tests
pytest tests/test_feature_extended.py
# Run property-based tests
pytest tests/test_feature_properties.py
```
## ROS 2 Specific Verification
### For ROS Nodes
```python
# Verification for ROS 2 nodes
def verify_ros_node(node_name):
    """
    Special verification for ROS 2 nodes
    """
    checks = [
        "Parameter validation",
        "Message type handling",
        "Service response generation",
        "Action goal processing",
        "QoS profile compatibility",
        "Lifecycle state transitions"
    ]
    for check in checks:
        # Launch agent to verify each aspect
        verify_aspect(node_name, check)
```
### For Navigation Stack
```bash
# Verify navigation doesn't overfit to specific maps
ros2 run verification generate_random_map
ros2 launch nav2_bringup navigation_launch.py map:=random_map.yaml
```
### For Control Algorithms
```python
# Generate random trajectories for testing
def verify_controller():
    trajectories = generate_random_trajectories(100)
    for traj in trajectories:
        result = controller.follow(traj)
        assert verify_stability(result)
```
## Automated Verification Script
```python
#!/usr/bin/env python3
"""
Automated verification for test overfitting
"""
import subprocess
from pathlib import Path

def run_verification(module_path):
    """
    Run complete verification suite
    """
    # Step 1: Static analysis
    print("Running static analysis...")
    subprocess.run(["pylint", module_path])
    # Step 2: Generate new tests
    print("Generating additional tests...")
    # Launch test generation agent
    # Step 3: Mutation testing
    print("Running mutation tests...")
    subprocess.run(["mutmut", "run", "--paths-to-mutate", module_path])
    # Step 4: Property-based testing
    print("Running property-based tests...")
    subprocess.run(["hypothesis", "test", module_path])
    # Step 5: Coverage analysis
    print("Analyzing coverage...")
    subprocess.run(["coverage", "run", "-m", "pytest"])
    subprocess.run(["coverage", "report"])

if __name__ == "__main__":
    run_verification("src/robot_control")
```
## Best Practices
### DO:
- ✅ Implement general solutions first
- ✅ Use property-based testing (hypothesis)
- ✅ Generate random test inputs
- ✅ Verify mathematical properties
- ✅ Test with different random seeds
- ✅ Use mutation testing tools
- ✅ Implement fuzz testing
### DON'T:
- ❌ Look at test data while implementing
- ❌ Add special cases for each test failure
- ❌ Use magic numbers from tests
- ❌ Skip edge case validation
- ❌ Assume test coverage equals correctness
## Example Verification Session
```markdown
User: Implement a path planning algorithm
⎿ Wrote 351 lines to /home/jn/develop/AI/claude_opus/002_claude_md/SUBAGENT_USAGE_EXAMPLES.md
```
# Subagent Usage Examples for Overfitting Prevention
## Quick Reference Commands
### 1. Basic Verification Agent
```python
# Launch a verification agent
Task(
    description="Verify no overfitting",
    prompt="""
    Review the implementation in src/algorithm.py
    Check for:
    1. Hardcoded test values
    2. Missing edge cases
    3. Algorithm shortcuts
    Report any test-specific code
    """,
    subagent_type="general-purpose"
)
```
### 2. Parallel Multi-Agent Verification
```python
# Launch multiple agents simultaneously
agents = [
    Task(
        description="Review implementation",
        prompt="Review src/controller.py for overfitting patterns",
        subagent_type="general-purpose"
    ),
    Task(
        description="Generate edge cases",
        prompt="Create 10 edge case tests for controller.py functions",
        subagent_type="general-purpose"
    ),
    Task(
        description="Verify math correctness",
        prompt="Verify mathematical correctness of algorithms in controller.py",
        subagent_type="general-purpose"
    )
]
```
## Specific Verification Scenarios
### Scenario 1: Sorting Algorithm Verification
```python
# After implementing a custom sorting algorithm
Task(
    description="Verify sorting algorithm",
    prompt="""
    File: src/custom_sort.py
    Without looking at existing tests, verify:
    1. Algorithm handles all input sizes (0 to n)
    2. Correctly handles duplicates
    3. Maintains stability if required
    4. Performance matches expected complexity
    5. No hardcoded array sizes or values
    Generate 5 random test arrays with various properties
    """,
    subagent_type="general-purpose"
)
```
### Scenario 2: ROS 2 Node Verification
```python
# For a new ROS 2 node implementation
Task(
    description="Verify ROS node robustness",
    prompt="""
    Node: src/perception_node.py
    Verify the node handles:
    1. Different message frequencies (1Hz to 100Hz)
    2. Missing or malformed messages
    3. Various QoS settings
    4. Parameter boundaries
    5. Lifecycle transitions
    Generate test scenarios for:
    - Random sensor data inputs
    - Network interruptions
    - Resource constraints
    """,
    subagent_type="general-purpose"
)
```
### Scenario 3: Navigation Algorithm
```python
# For path planning implementation
Task(
    description="Test path planner generalization",
    prompt="""
    Implementation: src/nav/path_planner.py
    Generate test environments that verify:
    1. Various obstacle configurations
    2. Different map sizes (tiny to huge)
    3. Multiple valid path scenarios
    4. No-solution cases
    5. Dynamic obstacles
    Create 5 procedurally generated maps with:
    - Random obstacle placement
    - Varying complexity levels
    - Different start/goal positions
    """,
    subagent_type="general-purpose"
)
```
### Scenario 4: Control System Verification
```python
# For PID controller implementation
Task(
    description="Verify PID controller",
    prompt="""
    File: src/control/pid_controller.py
    Without seeing test data, verify:
    1. Stability for various setpoints
    2. Response to step inputs
    3. Handling of saturation
    4. Anti-windup mechanisms
    5. Noise rejection
    Generate test signals:
    - Random step functions
    - Sinusoidal references
    - Ramp inputs
    - Noisy measurements
    """,
    subagent_type="general-purpose"
)
```
## Advanced Verification Patterns
### Pattern 1: Differential Testing
```python
# Compare against reference implementation
Task(
    description="Differential testing",
    prompt="""
    Compare src/new_algorithm.py against src/reference_algorithm.py
    Generate 100 random inputs and verify:
    1. Both produce identical outputs
    2. Performance characteristics match
    3. Edge case handling is consistent
    Report any divergences
    """,
    subagent_type="general-purpose"
)
```
### Pattern 2: Metamorphic Testing
```python
# Test with transformed inputs
Task(
    description="Metamorphic testing",
    prompt="""
    For image processing in src/vision/processor.py:
    Apply transformations and verify properties:
    1. Rotation invariance where expected
    2. Scale invariance properties
    3. Color space conversions
    4. Noise addition effects
    Generate transformed test images
    """,
    subagent_type="general-purpose"
)
```
### Pattern 3: Invariant Checking
```python
# Verify algorithm invariants
Task(
    description="Check invariants",
    prompt="""
    For data structure in src/structures/tree.py:
    Verify invariants hold:
    1. Tree remains balanced after operations
    2. Parent-child relationships maintained
    3. Size counters accurate
    4. No cycles introduced
    Generate random operation sequences
    """,
    subagent_type="general-purpose"
)
```
## Integration with CI/CD
### GitHub Actions Workflow
```yaml
name: Overfitting Verification
on: [pull_request]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run verification agents
        run: |
          python scripts/run_verification_agents.py
      - name: Generate additional tests
        run: |
          python scripts/generate_edge_cases.py
      - name: Run extended test suite
        run: |
          pytest tests/ -v
          pytest generated_tests/ -v
```
### Verification Script Template
```python
#!/usr/bin/env python3
"""
run_verification_agents.py
"""

def launch_verification_suite(module):
    agents = []
    # Agent 1: Static analysis
    agents.append(Task(
        description="Static analysis",
        prompt=f"Analyze {module} for code smells and overfitting",
        subagent_type="general-purpose"
    ))
    # Agent 2: Test generation
    agents.append(Task(
        description="Generate tests",
        prompt=f"Create comprehensive tests for {module}",
        subagent_type="general-purpose"
    ))
    # Agent 3: Performance verification
    agents.append(Task(
        description="Check performance",
        prompt=f"Verify {module} performance with random inputs",
        subagent_type="general-purpose"
    ))
    return agents

if __name__ == "__main__":
    import sys
    module = sys.argv[1] if len(sys.argv) > 1 else "src/"
    agents = launch_verification_suite(module)
    # Execute agents and collect results
```
## Common Pitfalls and Solutions
### Pitfall 1: Testing Only Happy Path
```python
# Problem: Only testing successful cases
def test_only_success():
    assert process_valid_data([1, 2, 3]) == [2, 4, 6]

# Solution: Use agent to generate failure cases
Task(
    description="Generate failure tests",
    prompt="Create tests for invalid inputs, errors, and edge cases",
    subagent_type="general-purpose"
)
```
### Pitfall 2: Overfitting to Performance Tests
```python
# Problem: Optimizing for specific benchmarks
def algorithm(data):
    if len(data) == 1000:  # Common benchmark size
        return optimized_path(data)
    return slow_path(data)

# Solution: Random size verification
Task(
    description="Random size testing",
    prompt="Test algorithm with random data sizes from 1 to 1000000",
    subagent_type="general-purpose"
)
```
### Pitfall 3: Environment-Specific Code
```python
# Problem: Works only in test environment
def connect():
    return connect_to("test-server")  # Hardcoded test server

# Solution: Environment verification
Task(
    description="Environment testing",
    prompt="Verify code works with different configurations and endpoints",
    subagent_type="general-purpose"
)
```
## Verification Checklist
Before considering implementation complete:
- [ ] Run independent code review agent
- [ ] Generate additional test cases without seeing existing tests
- [ ] Verify algorithm correctness mathematically
- [ ] Test with random/generated inputs
- [ ] Check for hardcoded values
- [ ] Verify edge case handling
- [ ] Run property-based tests
- [ ] Perform mutation testing
- [ ] Check test coverage gaps
- [ ] Verify performance with various input sizes
## Quick Commands
```bash
# Launch basic verification
python -c "from verify import Task; Task('Review', 'Check src/ for overfitting', 'general-purpose')"
# Run all verification agents
./scripts/verify_all.sh
# Generate random tests
python scripts/generate_random_tests.py --module src/algorithm.py --count 50
# Check for magic numbers
grep -r "42\|100\|256" src/ --include="*.py" | grep -v test
# Run mutation testing
mutmut run --paths-to-mutate src/
# Property-based testing
hypothesis write src/algorithm.py > test_properties.py
pytest test_properties.py
```
Impressive.
3-2-5. Ask Claude to commit the code
Ask Claude to commit the code once you’re satisfied with the changes.
Summary of 3-2
Claude performs best when it has a clear target to iterate against—a visual mock, a test case, or another kind of output.
By providing expected outputs like tests, Claude can make changes, evaluate results, and incrementally improve until it succeeds.
3-3. Write code, screenshot result, iterate
Similar to the testing workflow, you can provide Claude with visual targets:
3-3-1. Give Claude a way to take browser screenshots
(e.g., with the Puppeteer MCP server, an iOS simulator MCP server, or manually copy / paste screenshots into Claude).
3-3-2. Give Claude a visual mock
Give Claude a visual mock by copying / pasting or drag-dropping an image, or giving Claude the image file path.
3-3-3. Ask Claude to implement the design
Ask Claude to implement the design in code, take screenshots of the result, and iterate until its result matches the mock.
Have Claude repeat the cycle of implementing, checking whether the screenshots match the visual mock, and implementing again if they don't.
3-3-4. Ask Claude to commit
Ask Claude to commit when you’re satisfied.
Summary of 3-3
Like humans, Claude’s outputs tend to improve significantly with iteration.
While the first version might be good, after 2-3 iterations it will typically look much better.
Give Claude the tools to see its outputs for best results.
3-4. Safe YOLO mode
Instead of supervising Claude, you can use claude --dangerously-skip-permissions to bypass all permission checks and let Claude work uninterrupted until completion. This works well for workflows like fixing lint errors or generating boilerplate code.
Letting Claude run arbitrary commands is risky and can result in data loss, system corruption, or even data exfiltration (e.g., via prompt injection attacks). To minimize these risks, use --dangerously-skip-permissions in a container without internet access. You can follow this reference implementation using Docker Dev Containers.
That is, an environment where the firewall and related settings are already configured.
3-5. Codebase Q&A
When onboarding to a new codebase, use Claude Code for learning and exploration. You can ask Claude the same sorts of questions you would ask another engineer on the project when pair programming. Claude can agentically search the codebase to answer general questions like:
How does logging work?
How do I make a new API endpoint?
What does async move { … } do on line 134 of foo.rs?
What edge cases does CustomerOnboardingFlowImpl handle?
Why are we calling foo() instead of bar() on line 333?
What’s the equivalent of line 334 of baz.py in Java?
At Anthropic, using Claude Code in this way has become our core onboarding workflow, significantly improving ramp-up time and reducing load on other engineers. No special prompting is required! Simply ask questions, and Claude will explore the code to find answers.
Just ask a question, and Claude will analyze the codebase and answer it.
3-6. Use Claude to interact with git
Claude can effectively handle many git operations. Many Anthropic engineers use Claude for 90%+ of our git interactions:
Searching git history to answer questions like “What changes made it into v1.2.3?”, “Who owns this particular feature?”, or “Why was this API designed this way?” It helps to explicitly prompt Claude to look through git history to answer queries like these.
Writing commit messages. Claude will look at your changes and recent history automatically to compose a message taking all the relevant context into account
Handling complex git operations like reverting files, resolving rebase conflicts, and comparing and grafting patches
3-7. Use Claude to interact with GitHub
Claude Code can manage many GitHub interactions:
Creating pull requests: Claude understands the shorthand “pr” and will generate appropriate commit messages based on the diff and surrounding context.
Implementing one-shot resolutions for simple code review comments: just tell it to fix comments on your PR (optionally, give it more specific instructions) and push back to the PR branch when it’s done.
Fixing failing builds or linter warnings
Categorizing and triaging open issues by asking Claude to loop over open GitHub issues
This eliminates the need to remember gh command line syntax while automating routine tasks.
3-8. Use Claude to work with Jupyter notebooks
Researchers and data scientists at Anthropic use Claude Code to read and write Jupyter notebooks. Claude can interpret outputs, including images, providing a fast way to explore and interact with data. There are no required prompts or workflows, but a workflow we recommend is to have Claude Code and a .ipynb file open side-by-side in VS Code.
You can also ask Claude to clean up or make aesthetic improvements to your Jupyter notebook before you show it to colleagues. Specifically telling it to make the notebook or its data visualizations “aesthetically pleasing” tends to help remind it that it’s optimizing for a human viewing experience.
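As a rough sketch of the kind of "aesthetically pleasing" cleanup you might ask for, the notebook cell below shows the sort of labeling and styling a request like that tends to produce. The data, units, and plot title are made up for illustration.

```python
# Hypothetical notebook cell: the kind of cleanup Claude might apply when asked
# to make a visualization "aesthetically pleasing". Data and labels are made up.
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0, 10, 200)
voltage = 12 * np.exp(-t / 8) + np.random.normal(0, 0.05, t.size)

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(t, voltage, color="tab:blue", linewidth=1.5, label="Battery voltage")
ax.set_xlabel("Time [h]")
ax.set_ylabel("Voltage [V]")
ax.set_title("Battery discharge over time")
ax.grid(alpha=0.3)
ax.legend()
fig.tight_layout()
plt.show()
```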