
Leveraging AI as a Software Engineer: What Actually Works

I have been using AI coding assistants seriously for about a year — not casually, not experimentally, but as a core part of how I do my job every day. This is not a post about AI generating code. It is about how AI changes the way an experienced engineer thinks, investigates, and makes decisions.

The Shift That Actually Matters

The naive version of AI-assisted development is: you describe what you want, the AI writes it, you review and move on. This is real and useful, but it is also the least interesting thing AI does for me.

The more important shift is that AI makes investigation cheap. Before, when I hit an unfamiliar problem, I had to choose: spend an hour reading documentation and tracing code paths, or make an educated guess and iterate. With AI, investigation has near-zero friction. I can explore a codebase I have never seen, understand a system’s behavior from its logs, or reason about failure modes — in minutes instead of hours.

This changes the shape of my work. I investigate more, guess less. I read the error carefully before trying a fix. I ask “why” before asking “how to fix”.

The Investigation-First Pattern

The most consistent pattern I have developed is: before doing anything, investigate.

When something breaks, I do not go straight to a fix. I paste the error, the relevant logs, and the surrounding context, and ask: what is actually happening here? The AI reasons through it, often surfacing the root cause faster than I would by tracing it manually. Then we fix the right thing.

Tip

The same pattern applies to design decisions. Before writing code, describe the problem and ask: how would you approach this? What are the tradeoffs? Not because you cannot think through it yourself, but because doing it in conversation surfaces things you would have missed working alone.

Graduated Autonomy

One of the more interesting dynamics is how I grant authority differently depending on the task. The key is keeping each stage distinct — never skipping from “I have a problem” directly to “fix everything.”

```mermaid
flowchart TD
    A([Something to investigate or build]) --> B[Exploration]
    B -->|check, inspect, analyze, find| C[Proposal]
    C -->|what do you think?| D{Decision}
    D -->|refine or redirect| C
    D -->|approve| E[Execution]
    E -->|ok, yes, proceed| F([Done])
```

The proposal stage is where I catch bad assumptions before they become bad code.
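The staged flow above can be sketched in code. This is a minimal illustration, not any particular framework — every name here is hypothetical, and the point is only that each stage is a separate step with a human decision between proposal and execution:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    summary: str
    steps: list[str]

def assisted_task(explore: Callable[[], str],
                  propose: Callable[[str], Proposal],
                  approve: Callable[[Proposal], bool],
                  execute: Callable[[Proposal], str]) -> str:
    """Keep the stages distinct: explore -> propose -> decide -> execute."""
    findings = explore()              # exploration: read-only, no side effects
    proposal = propose(findings)      # proposal: a plan, still no side effects
    if not approve(proposal):         # decision: the human stays in the loop
        return "stopped at proposal"
    return execute(proposal)          # execution: only after explicit approval
```

The structure makes it impossible to skip from "I have a problem" to "fix everything": execution is unreachable without an approved proposal.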

Context Is Everything

The quality of AI output scales directly with the quality of context you provide.

| What to provide | Instead of |
| --- | --- |
| The actual error message | A paraphrase of what went wrong |
| The relevant code | A description of what it does |
| The plan or spec document | “Here is roughly what we want” |
| Hard constraints | Leaving them implicit |

Important

A vague prompt produces a generic answer. A specific prompt with rich context produces a useful answer. The few extra seconds spent framing a question properly produce an outsized difference in output quality.
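One way to make that discipline mechanical is to assemble the prompt from its parts, so that missing context is visible as an empty field rather than silently absent. A small sketch, with hypothetical names:

```python
from typing import Optional

def build_prompt(question: str, *, error: str = "", code: str = "",
                 constraints: Optional[list[str]] = None) -> str:
    """Assemble a specific prompt: the actual error, the actual code,
    and every hard constraint stated explicitly."""
    parts = [question]
    if error:
        parts.append("Error:\n" + error)
    if code:
        parts.append("Relevant code:\n" + code)
    for c in constraints or []:
        parts.append("Constraint: " + c)
    return "\n\n".join(parts)
```

Whether you do this in a helper or by habit, the effect is the same: the question carries the evidence with it.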

Chaining Tasks in a Session

AI coding assistants are most powerful when used as a sustained work session, not a series of one-off questions.

```mermaid
sequenceDiagram
    participant Me
    participant AI

    Me->>AI: Why is this service slow?
    AI-->>Me: N+1 query in this handler
    Me->>AI: Fix that
    AI-->>Me: Done, here is what changed
    Me->>AI: Also add an index on that column
    AI-->>Me: Added
    Me->>AI: Write a test for the fixed path
    AI-->>Me: Test written
    Me->>AI: Commit and push
    AI-->>Me: Pushed
```

Each step builds on the previous one. The AI has full context of what we changed and why. Once you have context established, extending the task costs almost nothing.
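Mechanically, this is just an append-only message history: every new request is answered with everything that came before it in view. A minimal sketch, assuming a chat-style API where `respond` stands in for the actual assistant call:

```python
messages: list[dict[str, str]] = []

def ask(prompt: str, respond) -> str:
    """Append to the running conversation so each step sees all prior steps."""
    messages.append({"role": "user", "content": prompt})
    reply = respond(messages)   # stand-in for the real assistant call
    messages.append({"role": "assistant", "content": reply})
    return reply
```

The cost asymmetry falls out of this shape: establishing context fills the history once, and each follow-up request rides on it for free.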

AI for Infrastructure, Not Just Code

One of the things that surprised me was how much value AI adds outside of writing application code.

Terraform, IAM policies, CI/CD pipelines, Kubernetes manifests — these are all just structured text with complex semantics. AI is as useful here as it is in application code, often more so, because the cost of mistakes is higher and the documentation is dense.

Describe what the service needs to do and get a least-privilege policy back. Read it, verify it, apply it.

policy.json
```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": "arn:aws:s3:::my-bucket/*"
  }]
}
```
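The "read it, verify it" step can be partly mechanized. A sketch of a pre-apply check that flags overly broad grants — a hypothetical helper, not an AWS API, and no substitute for actually reading the policy:

```python
def flag_broad_grants(policy: dict) -> list[str]:
    """Flag Allow statements that grant wildcard actions or resources."""
    warnings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions):
            warnings.append("wildcard action: " + ", ".join(actions))
        if "*" in resources:
            warnings.append("wildcard resource")
    return warnings
```

A clean result means only that the obvious footguns are absent; least privilege still has to be judged against what the service actually needs.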

Paste a confusing terraform plan diff and ask what the changes actually mean before applying.

main.tf
```hcl
# ~ update in-place
resource "aws_security_group_rule" "egress" {
  ~ cidr_blocks = ["10.0.0.0/8"] -> ["0.0.0.0/0"]
}
```
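For a second, mechanical opinion alongside the AI's, `terraform show -json` renders a saved plan in machine-readable form. A sketch of scanning that output for the kinds of changes worth a pause — the JSON shown in the test is a simplified subset of the real schema, and the checks are illustrative, not exhaustive:

```python
import json

def risky_changes(plan_json: str) -> list[str]:
    """Flag destructive or scope-widening changes in a terraform plan JSON."""
    plan = json.loads(plan_json)
    flagged = []
    for rc in plan.get("resource_changes", []):
        change = rc.get("change", {})
        if "delete" in change.get("actions", []):
            flagged.append(f"{rc['address']}: destroys the resource")
        after = change.get("after") or {}
        if "0.0.0.0/0" in (after.get("cidr_blocks") or []):
            flagged.append(f"{rc['address']}: opens traffic to the whole internet")
    return flagged
```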

Paste a failing pipeline job log and get a root cause diagnosis quickly.

pipeline.log
```text
ERROR: Failed to push to registry
  unauthorized: authentication required
  hint: check REGISTRY_TOKEN secret expiry
```

The Dry-Run Habit

For any task that touches production or makes irreversible changes, I have developed a consistent habit: ask for a dry-run first.

“What would you do if I asked you to rotate all the API keys for this service?”

The AI describes its plan. I check whether it understands the scope correctly, whether the sequence of operations makes sense, whether there are edge cases it has not considered. Then I say “ok, do it.”

agent.py
```python
def run(agent, task, dry_run: bool = False):
    plan = agent.plan(task)    # always plan first
    if dry_run:
        return plan            # return the plan, no side effects
    return agent.execute(plan)
```
Note

Dry-run mode is not just about catching errors. It is about building trust over time. If the plan looks right consistently, you become more confident granting full autonomy.
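To make the habit concrete, here is a toy agent — every name hypothetical — whose `plan()` is read-only and can be inspected before `execute()` is ever called:

```python
class KeyRotationAgent:
    """Hypothetical stand-in: plan() is read-only, execute() has side effects."""

    def plan(self, task: str) -> list[str]:
        return [f"list active keys for {task}",
                "create replacement keys",
                "update consumers to the new keys",
                "revoke the old keys"]

    def execute(self, plan: list[str]) -> str:
        return f"executed {len(plan)} steps"

agent = KeyRotationAgent()
plan = agent.plan("this service")   # dry run: look before leaping
# ...review the plan; only then:
result = agent.execute(plan)
```

Reviewing the plan is where you catch a wrong scope ("all keys" when you meant one service's) before anything irreversible happens.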

What Does Not Work

Asking for complete designs upfront

“Design a full authentication system for my application” produces a generic answer.

“I have a FastAPI app, users are in Keycloak, and I need to protect these three endpoints” produces something usable. Specific problems get specific solutions.

Using AI as a rubber stamp

Asking the AI to review code you have already decided is fine produces polite agreement.

Ask adversarially instead: “What could go wrong?” “What am I missing?” “Is there a simpler way?”

Over-specifying simple tasks

Long, elaborate prompts for simple tasks produce over-engineered output. “Write a function that parses this JSON” works better than three paragraphs of context when the function is straightforward.

Ignoring the AI’s questions

When the AI asks a clarifying question, the task is under-specified in a way that will produce a wrong answer. Answer the question rather than pushing through.

The Practical Reality

AI has not made me faster at writing code. It has made me faster at solving problems. The distinction matters. Writing code is rarely the bottleneck in software engineering. Understanding what to build, finding the right approach, debugging unexpected behavior, navigating unfamiliar systems — those are the hard parts, and AI makes all of them faster.


The engineers who get the most out of AI tools are not the ones who prompt the best. They are the ones who investigate before acting, provide real context, ask adversarial questions, and stay genuinely in the loop. AI is a force multiplier for the habits that make good engineers good, and a poor substitute for them.

If you are working on similar problems or want to discuss any of this, reach out at manuel.fedele+website@gmail.com.
