Billing & The AI-Enabled Practice

For decades, the legal profession sold quality while billing time.

Whether the task was bet-the-company litigation strategy or routine contract review, law firms charged by the hour and justified those hours with a generalized promise of “quality legal services.” Clients did not pay for output in any disciplined sense. They paid for lawyer time, buffered by trust, reputation, and professional norms. The profession was never particularly good at value-based billing, and most clients did not demand it.

AI forces lawyers to confront a truth they have long managed around: not all legal work carries the same risk, requires the same scrutiny, or warrants the same investment of time. The billable hour flattened those differences. AI makes them unavoidable.

1. Ethics Did Not Change. Supervision Did.

Professional responsibility rules did not change with the arrival of AI. Lawyers remain responsible for competence, supervision, and the accuracy of the work delivered. If AI-assisted output is wrong, responsibility still rests with the lawyer, exactly as it does when work is delegated to associates, contract lawyers, or vendors.

What has changed is what supervision entails.

Before AI, supervision had three defining characteristics.

First, error localization was predictable. Senior lawyers knew where junior lawyers were likely to fail. Review focused on familiar pressure points: misunderstood facts, misapplied law, template misuse, or lack of commercial context. Supervision was selective and heuristic.

Second, human work product exposed its reasoning. Drafts revealed how conclusions were reached. Citations reflected research paths. Structure reflected legal theory. A reviewer could see how the work was done and intervene surgically.

Third, supervision scaled with experience. As lawyers improved, review time declined. Trust accumulated. The economics, while imperfect, were legible.

AI disrupts each of these mechanics.

2. What Supervision Requires in an AI-Enabled Practice

AI-assisted work does not fail in the same way human work fails.

Its errors are not developmental. They are statistical, context-dependent, and sometimes non-obvious. A system may handle complex doctrine correctly and still miss a basic issue. It may draft fluent language that masks faulty legal logic. It may cite authority that appears plausible but is misapplied or incomplete.

Most critically, error localization is far less predictable. Reasoning is often synthetic or opaque rather than embedded in the draft, so a reviewer cannot trace how a conclusion was reached.

As a result, supervision shifts from selective review to systemic validation. The supervising lawyer cannot evaluate reasoning in isolation. They must validate inputs, assumptions, and outputs. That requires re-engaging with the legal problem itself, not just the draft.

The supervising lawyer must:

  1. validate factual inputs and constraints,
  2. independently confirm the governing legal framework,
  3. assess whether issues were omitted, not just mishandled,
  4. verify authority provenance and application,
  5. evaluate whether fluent drafting masks incorrect premises,
  6. and decide when, where, and how much human judgment is required.

3. The “One-Quality” Fiction Can No Longer Hold

For years, firms maintained the fiction that all legal work required the same rigor and justified the same hourly rates. In practice, lawyers and clients alike knew this was untrue. But the model rewarded uniformity, not precision.

AI makes that fiction untenable. Some tasks can be accelerated, or handled outright, with AI assistance, while others still demand deep human judgment.

Firms and legal departments are forced to confront distinctions they once avoided. For example:

  1. Which tasks truly require senior human judgment?
  2. Which tasks are routine but historically billed as premium?
  3. How much billed time is supervision versus original thinking?
  4. When does billing for validation feel reasonable, and when does it feel like billing to manage tool risk?

These questions were always present. AI simply removes the cover.

4. Transparency Is Now a Requirement, Not a Choice

There is a long-standing fear that transparency threatens the billable hour. In reality, opacity has become the greater risk.

As clients become aware of AI use, they will ask:

  1. how AI affects staffing decisions,
  2. how much time is spent supervising AI-assisted work,
  3. why that supervision is necessary,
  4. and whether they should be paying premium rates for it.

Firms that cannot explain their workflows coherently will struggle to defend their hours, even if total time remains reasonable.

For smaller firms and legal departments, this is leverage. Transparency does not require abandoning hourly billing. It requires explaining what the hours represent.

Transparency shifts the conversation from “trust us” to “here is how judgment, risk, and time intersect in this work.”

Side note: if your outside counsel cannot explain how AI changes supervision, staffing, and billing, that is not sophistication. It is avoidance.

5. The Market Is Splitting on Explainability, Not AI Adoption

The legal market is not dividing between firms that use AI and firms that do not. It is dividing between organizations that can explain their work and those that cannot.

Smaller firms and legal departments do not need enterprise governance frameworks to respond. They need discipline and clarity about questions they were previously able to defer:

  1. Which work requires full human supervision regardless of efficiency?
  2. What work can responsibly use AI with structured or limited review?
  3. Does validation differ from original legal analysis? If not, why not?
  4. How are these distinctions communicated to clients or internal stakeholders?

These are not technology questions. They are leadership questions.

The firms and legal departments that can answer them will manage risk more effectively, defend their economics more credibly, and maintain trust in an environment where the work itself is finally visible.

Closing Thought

AI has not created new ethical duties for lawyers. It has changed the mechanics of supervision in ways that expose long-hidden assumptions about how legal work is produced and billed.

The challenge now is not adopting new tools. It is explaining, clearly and honestly, what lawyers are doing with their time and why it still deserves to be paid for.

Contact Karta Legal today at info@kartalegal.com to learn how to transition your firm to value-based billing models that leverage GenAI.
