April 17, 2026
How AI can help therapists without replacing them

Most of a clinician’s difficult thinking happens alone. After a session that raises questions, the therapist mentally replays the encounter, notes personal reactions, and consults the literature to see whether others have described similar situations. Early in training, supervisors and case conferences add other perspectives. But later, especially in private practice, that reflective work becomes largely solitary. Solitary reflection is essential to clinical work, but it can also quietly narrow the range of perspectives a therapist considers when difficult questions persist.

I’m a practicing clinical psychologist, and over the past year, I have begun using a conversational AI tool as part of my reflective clinical work, primarily when reviewing published case material and in consultation contexts, not during active treatment sessions or to make clinical decisions. My aim has been straightforward: to test whether my own thinking had become too narrowly committed to a single explanation.

What I am describing here is not AI as a therapist, or even AI in the therapy room, but something quieter: its growing role in the private thinking clinicians do before any intervention ever occurs.

My expectations were modest. I was not looking for interpretation, intuition, or emotional understanding. What proved useful was the ability to pose a problem in dialogue and see alternative formulations laid out clearly enough to examine. When my own thinking began to circle, the process sometimes helped reopen questions that had quietly narrowed.

To test this process, I summarized a published psychotherapy case vignette and explored alternative formulations through structured dialogue with the AI tool.

In one published composite case described in the psychotherapy literature, a young woman experienced intrusive thoughts about losing control and harming herself, despite recognizing these thoughts as unwanted and inconsistent with her values. Her anxiety persisted because she repeatedly tested herself by seeking reassurance and deliberately approaching situations she feared, in order to confirm that she would not actually lose control or harm herself.

My initial formulation leaned toward understanding the symptoms strictly within an obsessive-compulsive framework, with intrusive thoughts neutralized through checking and reassurance.

But when I worked through the vignette in dialogue with the tool, a second reading became equally plausible: her behavior as an effort to resolve uncertainty about agency itself, testing whether fear could still be relied on as evidence of self-preservation.

Holding those possibilities side by side did not alter the diagnosis, but it slowed my tendency to treat the symptoms as fully explained by any single model. The shift mattered less for what I would do next than for what it showed about how solitary clinical reasoning can be reopened once it has quietly narrowed.

This is the function I am describing. When I am genuinely stuck, whether with a case from the literature or with difficult clinical material, I use dialogue to explore alternative ways of understanding what might be going on, drawing from different therapeutic traditions. The tool often points toward concepts or bodies of work that may be worth revisiting. I treat those references cautiously and verify them independently, since errors in this domain are common. The value lies in widening the field of possibilities. Judgment and decision-making remain entirely mine.

I have primarily used ChatGPT for this kind of reflective work, though other large language models could likely serve a similar function.

I want to be clear about scope. I have not used these tools during active treatment sessions or to make clinical decisions. My use has been limited to reflective thinking about published or thoroughly de-identified material. Like other reference tools, this kind of use can still shape how a clinician thinks more broadly, beyond any single example. Questions about whether and how such tools should be disclosed to patients have not yet been clearly addressed by the field and will require professional consensus rather than ad hoc judgment.

What has led me to take this seriously is not only personal experience, but the structure of contemporary clinical work. A large proportion of licensed therapists now practice independently, and even in clinic settings, formal supervision typically covers only a fraction of cases. At the same time, survey data from the American Psychological Association indicate that more than half of psychologists reported using AI tools in their professional work in 2025, with a substantial minority using them at least monthly.

Taken together, the combination of solitary practice and early tool adoption makes it likely that these systems are already influencing clinical thinking in routine, unremarked ways, before professional norms or guidance have had time to form.

Public debate about artificial intelligence and mental health has focused largely on replacement: whether machines will become therapists, whether patients will form attachments to them, or whether they will undermine the profession.

Those questions may matter eventually, but they obscure a quieter and more immediate development already underway: the use of AI not as a replacement for therapists, but as an influence on how therapists think when no one else is present.

Used carefully, these tools can interrupt the mental echo chamber that solitary practice can create, without themselves asserting correctness. They can surface assumptions without insisting on conclusions. They can help clinicians hold competing ideas a little longer than they might on their own. I have also had moments where I wondered whether a formulation arrived too smoothly, or whether I was relying on the dialogue longer than necessary. Those moments served as a reminder that verification and restraint matter as much as openness.

When AI tools are used in reflective clinical thinking, strict confidentiality protections must apply, including limiting use to published case material or thoroughly de-identified information so that no patient can be identified. The ethical boundaries here are clear. Clinicians must verify information independently, remain fully accountable for all clinical decisions, and recognize this use as a supplement to professional judgment rather than a replacement for it. Used with those constraints, the practice widens perspective without displacing responsibility.

That is the issue the field needs to address now: not whether AI will replace therapists, but the fact that it is already being used quietly as a thinking partner by practicing clinicians, and whether professional standards will emerge through deliberation or only after problems force the issue.

Harvey Lieberman, Ph.D., is a clinical psychologist and former mental health services administrator who provides consultation and expert testimony, serves as a federal external grant reviewer, and writes about the intersection of mental health practice and emerging technology.

