By: The Modern Practitioner

But what happens when the society those ethics were written for changes underneath your feet?

Increasingly, welfare eligibility, child protective services triage, and housing allocation are being run by predictive algorithms. A machine flags a family as "high risk" based on zip code data, not clinical observation.

Furthermore, the rise of AI note-taking (like Nuance or Ambience) presents a new dilemma. Are we violating informed consent if we don't explicitly tell a client that a bot is listening to their trauma narrative to generate a treatment plan?

2. Self-Determination vs. The Disinformation Age

Social work’s reverence for client self-determination is sacred. We are taught to respect the client’s right to choose their own path, even if we disagree with it. But what happens when a client’s "choice" is based on disinformation that threatens their life or the lives of others?

We are seeing this in medical social work (vaccine hesitancy) and community organizing (climate denial). The traditional model says: provide the data and support the client’s autonomy. The modern reality says: data no longer changes minds. When a parent refuses life-saving insulin for a diabetic child because of conspiracy theories on Telegram, where does "respect for the client" end and "duty to protect" (or duty to society) begin?