AI SECURITY

What does Claude Code actually do to your codebase?

Picture this. You have a Rails application with a handful of models, a couple of API integrations, and some background jobs. You open Claude Code and type something like: "Add a Stripe webhook endpoint that handles subscription updates and syncs the status to our User model." You watch the assistant think for a moment, then it starts working. Files appear in the diff. A migration gets created. Maybe a test. A minute later it says it is done. You glance at the output, it looks right, and you accept the changes.

But what actually happened during that minute? Which files did it read? Which ones did it modify? Did it run any commands? Did it reach out to any external services? Most developers have no idea, and until recently, there was no straightforward way to find out.

We decided to trace a few real sessions to understand what an AI coding assistant actually does when it works on a codebase. The results were interesting enough that we thought they were worth writing up.

File reads: broader than you think

The first thing an AI assistant does when you give it a task is orient itself. It needs context. That means reading files, and not just the ones you would expect.

In our traced session, Claude Code read 47 files before writing a single line of code. That included the target model and controller, obviously, but also the Gemfile, the database schema, the routes file, environment configuration, and several files in the config directory. It read database.yml to understand the database setup. It read credentials.yml.enc, presumably to check for existing Stripe keys.

Some of these reads make sense. An assistant needs to understand the project structure to generate code that fits. But it also means the model has ingested your database credentials, your API keys, your environment-specific configuration, and any secrets that happen to live in your repository. Whether or not it does anything harmful with that information, it has seen it.
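
A tracer can surface exactly these reads with a small flag list. Here is a minimal Ruby sketch; the pattern list and the `sensitive?` helper are our own illustration, not the actual rules any particular tool ships:

```ruby
# Hypothetical patterns for files that commonly hold secrets in a Rails app.
# Illustrative only — a real tracer would make this list configurable.
SENSITIVE_PATTERNS = [
  %r{config/credentials.*\.enc\z},  # Rails encrypted credentials
  %r{config/database\.yml\z},       # database connection details
  %r{\.env(\..+)?\z},               # dotenv files (.env, .env.local, ...)
  %r{\.(pem|key)\z}                 # private keys and certificates
].freeze

# Returns true if a file path matches any sensitive pattern.
def sensitive?(path)
  SENSITIVE_PATTERNS.any? { |pattern| path.match?(pattern) }
end
```

With a check like this in the read path, every access to a flagged file becomes a warning in the session log rather than an invisible event.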

File writes: mostly what you asked for

The writes were closer to what you would expect. A new controller file, a migration, an update to routes.rb, and a model concern for handling the webhook logic. It also modified the Gemfile to add the stripe gem if it was not already present, and updated a test file.

In one session we traced, the assistant also modified a .env file to add a placeholder Stripe key. That is a reasonable thing to do from a development perspective, but it also means the assistant is writing to files that contain sensitive configuration. If your .env is gitignored, this is probably fine. If it is not, you now have a placeholder secret in version control.
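
Whether that matters comes down to one question: is .env actually ignored? You can ask git directly with `git check-ignore`, which exits 0 when a path is ignored. A minimal Ruby sketch (the `gitignored?` helper name is ours):

```ruby
# Returns true if git would ignore the given path, false if it is tracked
# or not ignored, and nil if git itself could not be run.
# Assumes you call this from inside a git work tree.
def gitignored?(path)
  system("git", "check-ignore", "-q", path)
end

# Example usage: warn before a placeholder secret can land in version control.
warn ".env is not gitignored — secrets written there will be committed" unless gitignored?(".env")
```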

Command execution: the invisible layer

This is where things get interesting. Beyond reading and writing files, Claude Code executed several shell commands during our traced sessions. It ran bundle install to install the new gem. It ran rails db:migrate to apply the migration. It ran the test suite to verify its changes passed.

Each of these commands is individually reasonable. But together, they represent a significant amount of unsupervised system activity. The assistant installed a package from the internet, modified your database schema, and executed arbitrary code on your machine. All without explicit approval for each step.

In another session, we observed the assistant running curl to check an API endpoint it had just created. It also ran git diff to review its own changes. These are benign, but they demonstrate that the assistant treats shell access as a general-purpose tool, not a constrained interface.
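
Capturing this layer does not require anything exotic: wrap command execution and append each invocation to a log. A minimal sketch, assuming a simple in-memory log; the `traced_exec` helper is our own illustration, not how Claude Code or Harpax intercept commands internally:

```ruby
require "open3"
require "time"

# Run a shell command, record it with a timestamp and exit status,
# and return its output plus a success flag.
def traced_exec(cmd, log)
  stdout, _stderr, status = Open3.capture3(cmd)
  log << format("%s  EXEC  %s (exit %d)", Time.now.utc.iso8601, cmd, status.exitstatus)
  [stdout, status.success?]
end

log = []
output, ok = traced_exec("echo hello", log)
```

A real tracer would sit lower in the stack so commands cannot bypass it, but the principle is the same: no shell invocation without a corresponding log line.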

Network calls: reaching outside your machine

Some sessions included outbound network requests. Package installation (bundle install, npm install) obviously requires network access. But we also observed the assistant making HTTP requests to verify API connectivity, and in one case, posting to a GitHub API endpoint to check repository metadata.

Most of these requests were benign and related to the task. But the assistant does not distinguish between "I need to install a dependency" and "I am going to make an HTTP request to an external service using credentials I read from your config files." Both are just actions it can take.

Why this matters

None of what we found was malicious. Claude Code is a well-built tool, and in every session we traced, it was doing exactly what we asked it to do. The issue is not intent. The issue is visibility.

When you run an AI coding assistant on your codebase, you are giving it broad access to your files, your credentials, your shell, and your network. It is capable, it is trusted, and it is largely unsupervised. That combination has real implications, not because the tool is dangerous, but because you cannot verify what it did after the fact.

If a junior developer joined your team and on their first day read every config file, ran migrations, installed packages, and made external API calls, you would want to know about it. You would review their work. You would have logs. With an AI assistant, most teams have none of that.

Session tracing as a solution

The fix is not to stop using AI assistants. They are genuinely useful. The fix is to record what they do so you can review it. Session tracing captures every file read, every file write, every command executed, and every network request, then presents it in a format you can actually read.

Here is what a traced session looks like:

09:14:02  READ   src/auth/session.rb
09:14:04  READ   config/database.yml
09:14:07  EXEC   bundle exec rails db:migrate
09:14:22  READ   config/credentials.yml.enc
09:14:23  WARN   credentials file accessed — sensitive_file_pattern
09:14:45  NET    POST api.github.com /repos/… [200]
09:14:46  BLOCK  external write blocked — severity: critical
09:15:38  END    session complete · 2 warnings · 1 blocked · $0.34

Every action is timestamped and categorized. Sensitive file access is flagged. External writes can be blocked in real time. At the end of the session, you get a summary with counts and cost.
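
Once events are logged in a line-oriented format like the one above, producing the end-of-session summary is a few lines of code. A sketch in Ruby, assuming the timestamp-then-category layout shown in the example trace (the `summarize` helper is ours):

```ruby
# Count events by category from a traced-session log.
# Each line: "HH:MM:SS  CATEGORY  details" — mirroring the trace above.
def summarize(trace_lines)
  trace_lines.each_with_object(Hash.new(0)) do |line, counts|
    category = line.split(/\s+/)[1]  # second whitespace-separated field
    counts[category] += 1 if category
  end
end

trace = [
  "09:14:02  READ   src/auth/session.rb",
  "09:14:23  WARN   credentials file accessed",
  "09:14:46  BLOCK  external write blocked"
]
```

Calling `summarize(trace)` on the lines above yields per-category counts, which is all the end-of-session "2 warnings, 1 blocked" line needs.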

This is not about distrusting the tool. It is about having the same level of visibility into AI actions that you already have for human actions. Audit logs exist for a reason. Your AI sessions should have them too.

Getting this visibility

We built Harpax to provide exactly this kind of session tracing. It runs locally, records everything your AI assistant does, and gives you a clear summary after every session. If something looks wrong, you will know. If everything looks fine, you have the logs to prove it.

It works with Claude Code, Cursor, Copilot, and any other AI tool that operates on your filesystem. The CLI is open source. The data never leaves your machine.

