Generative AI, critical thinking and social work practice

Insight 77

With Dr Gavin Heron, Fergus Reid and Professor Feng Dong

Published in Insights on 12 Jan 2026

Key points

  • AI is increasingly embedded in everyday life and professional contexts. Critical thinking and professional curiosity are central both to social work practice and to the safe and responsible use of AI.
  • Used cautiously and critically, AI tools and platforms offer identifiable benefits to social work practice, alongside clear limitations. While AI has potential to act as a cognitive resource, its impact on individual practice, in terms of influence, cognitive over-reliance and skill erosion, has not been adequately explored.
  • Ethical and social justice concerns are significant, both practically and philosophically, and should inform the adoption and evaluation of AI tools.
  • AI literacy, supervision, and organisational regulation and governance are essential if AI is to be used ethically and responsibly in social work. Practitioners need training in AI literacy and agencies need clear policies and guidance.
  • It is essential to ensure that AI complements rather than undermines relationship-based and value-led practice, and augments rather than automates social work decision-making.

Introduction

Artificial Intelligence (AI) is a field of computer science focused on creating machines that can perform tasks requiring human-like intelligence, such as learning, problem-solving, decision-making, and understanding language, by processing large amounts of data to recognise patterns and adapt. AI, particularly generative AI, is increasingly embedded in everyday life, from the tools we use to communicate to the systems that support administrative tasks and decision-making. Generative AI creates new content in text, audio or visual form in response to a prompt (BASW, 2025). It draws from the internet and what people post and say, and is trained on large sets of data using natural language processing (NLP), which helps computers interpret and generate human language, and machine learning (ML), which finds patterns, makes decisions and predicts outcomes without being programmed for every task.

Across a wide range of industries, AI is shaping how we work, learn and interact. Higher education, healthcare, law, and other professional fields are already experimenting with and adapting to these technologies, with varying degrees of enthusiasm and caution. Its impact on users’ thinking and decision-making is, however, poorly understood.

With ‘open’ or publicly accessible tools such as ChatGPT, Google Gemini or Microsoft Copilot proliferating, their widespread use is a particularly pertinent concern for social work practice. There are also certain licensed tools which are closed, with controlled access to data. While these minimise the risk of data breaches more commonly associated with open models, they too vary in their reliability and should only be used as an augmentative resource or support tool, because such tools are not without their limitations. Social work practice involves assessing complex situations and making decisions around people’s best interests, welfare and safety; it requires practitioners to critically evaluate and balance different and potentially competing outcomes under conditions of uncertainty (Webb, 2002, cited in Kemshall, 2021). Social work practice depends on the exercise of professional curiosity (Phillips and colleagues, 2022) and critical thinking (Heron, 2020); equally, we argue, so does the use and adoption of open generative AI tools, which are not designed to support these professional contexts.

Understanding how AI might be used judiciously and ethically to support, rather than replace, the human-centred values and critical thinking that underpin social work practice is imperative. AI technologies will continue to develop and diversify, and social work – like other professions – will need to consider the potential ethical implications to ensure that any integration of AI aligns with the profession’s commitment to the pursuit of care, dignity, and social justice.

This Insight provides some introductory information on AI and its use in social work contexts. It then explores the limited literature in this area and presents our research on the impact of AI on critical thinking and professional curiosity. We then use this learning to outline some practical guidance on the responsible and ethical use of AI tools in social work practice. Our aim is to offer an evidence-informed insight into the opportunities and limitations of this emerging and dynamic space, and to underline the importance of approaching these tools critically and ethically in contemporary and future social work practice.

Enhancing practice through technology: Artificial Intelligence

There are different types of AI. Narrow AI is designed for a specific task, eg a voice assistant or chatbot such as Siri or Alexa. Generative AI, sometimes called GenAI, creates new content, eg text or images. It is often based on what is called ‘deep learning’, in so far as it is trained on large data sets; unlike traditional machine learning, which uses algorithms to make predictions, deep learning models use networks made up of multiple layers of units called neurons (IBM, nd). Through the rapid emergence and development of natural language processing (NLP) and machine learning (ML) models, AI tools such as ChatGPT are becoming increasingly accessible in our everyday lives, work and study. These models offer an interactive and conversational approach and are designed to emulate how humans think, communicate, learn, and solve problems using algorithms (a set of rules to be followed to solve a problem or complete a task) and data. ChatGPT is one of the most well-known contemporary Large Language Models (LLMs), a type of NLP model that processes massive amounts of text data. Its key capabilities include processing data and generating human-like text in response to questions; providing explanations and assisting with writing; synthesising information from diverse sources; retrieving information; and helping users learn in a clear and conversational manner (Ray, 2023).

AI tools are already being applied in social work practice to support a range of tasks, such as transcribing and summarising assessments or meetings, generating follow-up actions, and retrieving information from case management systems. For example, Magic Notes is a closed, web-based AI platform designed specifically to help social workers by recording and transcribing conversations and instantly generating detailed case notes and assessments. AI-generated summaries are provided in custom formats, including action points, and notes can easily be copied into any case management system (Weaver and Gillon, 2025).

AI literacy is vital; it refers to the knowledge and skills needed to use such tools critically, responsibly and effectively. For example, the quality of a response will be related to the quality of the question or instruction given to an LLM; crafting these instructions is known as prompt engineering (Sahoo and colleagues, 2024). However, AI literacy goes beyond simply knowing how to operate platforms, to understanding their strengths, limitations and ethical implications. If social work is to make the most of new technologies while protecting the values and standards that guide the profession, AI literacy is an essential skill for the future.
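
To illustrate what prompt engineering can look like in practice (the following prompts are hypothetical examples, not drawn from any agency guidance or validated tool), compare a vague prompt with a more structured one:

Vague: ‘Tell me about risk assessment.’

Structured: ‘Acting as a critical friend, suggest five questions a justice social worker might ask themselves when checking a risk assessment for confirmation bias. Present them as a short checklist. Do not request or use any real case details.’

The structured prompt specifies a role, a task, an output format and a safeguard against entering confidential information, and is therefore more likely to produce a focused and usable response.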

The policy context

The rapid development of AI means that research, policy and guidance are struggling to keep pace. At UK level, in 2025 Prime Minister Keir Starmer announced an AI Opportunities Action Plan (Department for Science, Innovation and Technology, 2025), framed around increasing productivity and growth, with ambitions for the public sector to ‘spend less time doing admin and more time delivering the services working people rely on.’ While such rhetoric reflects optimism about efficiency, it sits uneasily alongside concerns raised in the House of Lords Justice and Home Affairs Committee (2022) report on AI in the criminal justice system. That report recognised potential benefits in efficiency and problem-solving but warned that the lack of minimum standards, transparency, evaluation, and training in AI could compromise human rights and civil liberties. Addressing these concerns would, they reasoned, ‘consolidate the UK’s position as a frontrunner in the global race for AI, while respecting human rights and the rule of law’ (np). Scotland’s AI Strategy (Scottish Government, 2021) explicitly articulates a vision for Scotland as a global leader in the development of ‘ethical AI’:

Scotland will become a leader in the development and use of trustworthy, ethical and inclusive AI (p19)

Across Europe, the Confederation of European Probation (CEP, 2024) has issued guidelines on the ethical and organisational aspects of AI use in justice contexts. Meanwhile, EU agencies such as eu-LISA and Eurojust (2022) have begun to explore how AI might support cross-border cooperation in criminal justice. In the UK, social work practice is also guided by the BASW Code of Ethics, which has provided guidance on aligning new technologies with social work’s commitments to care, dignity, and social justice (BASW, 2025). For practitioners, this context underscores the importance of organisational governance and regulation. Agencies must develop clear AI policies, and social workers should consult these, alongside professional codes, before adopting any AI tools in practice.

Summary of relevant evidence

Interest in and access to AI are growing exponentially as the technology advances. While expert systems and predictive modelling have long been part of social work practice, not least in the form of actuarial risk assessment tools (Hamilton and Ugwudike, 2023), the rate of innovation, diversification, and proliferation necessitates that we understand how AI tools and platforms work, where and how they are deployed, and what opportunities and challenges they present and to what effect.

Critical thinking and professional curiosity in social work practice

Critical thinking and professional curiosity lie at the heart of effective social work practice (Heron, 2020; Kemshall, 2021; Phillips and colleagues, 2022; 2024). Critical thinking, defined as ‘the ability to analyse, evaluate, and synthesise information to make reasoned decisions’, is ‘a fundamental cognitive skill essential for academic success, professional competence, and informed citizenship. It involves various cognitive processes, including problem-solving, decision-making, and reflective thinking, which are crucial for navigating complex and dynamic environments’ (Gerlich, 2025: 1). Professional curiosity is a complementary term, defined as an investigative and critical approach to the practice of risk assessment and management (Phillips and colleagues, 2022). Together, these skills enable practitioners to analyse complex information, question assumptions, and make balanced decisions in contexts of uncertainty.

Professional curiosity – often described as a willingness to ‘look beneath the surface’ – requires practitioners to investigate, challenge, and reflect on the circumstances of people’s lives, ensuring that risks are neither overlooked nor exaggerated. Critical thinking complements this by helping social workers to evaluate evidence, weigh competing perspectives, and justify their decisions transparently. Both are central to ethical and accountable practice, underpinning risk assessment, decision-making, and the promotion of social justice. Additionally, Heron and Black (2023: 1) found that risk assessment practices among child-care social workers were further enhanced by ‘robust questioning from peers that encourage analysis’, which can promote professional exploration of uncertainty and engender reflection.

Increasingly, AI is becoming an additional source of information about how to act and what decisions to take. As noted, critical thinking under conditions of uncertainty is crucial to professionals’ reasoning, and AI capabilities therefore have the potential to augment risk decision-making. There are, however, limitations and concerns, particularly where issues of protection, rights and liberties are contingent on the outcomes of professional decision-making.

Our study: What do I not know? Working with uncertainty

Our exploratory study investigated how the introduction of ChatGPT affected justice social workers’ critical thinking and decision-making in risk assessment contexts. Across three workshops with 16 Scottish justice social workers, participants engaged with fictional case studies supplied by the Risk Management Authority (RMA).

In the first workshop, practitioners carried out a risk assessment using the Level of Service/Case Management Inventory (LS/CMI) framework, the primary risk assessment tool in Scotland. In the second, they were introduced to ChatGPT and encouraged to experiment with it in relation to a second case study. The third workshop explored reflections on how the tool had shaped their thinking and decision-making.

We use the findings, drawing on broader research evidence, to structure our arguments below.

A critical friend?

Phillips and colleagues (2024: 331) identified that ‘professional curiosity requires the performance of emotional labour and critically reflective practice to be undertaken simultaneously (…) in a context of insufficient time, high workloads, difficulties in training and issues around gaining information from other agencies’. While this study was conducted with probation practitioners in England, the Scottish context is not dissimilar. Social Work Scotland’s (2022: 1) Setting the Bar survey revealed ‘a staff group who are struggling with administrative burdens, fearful of making mistakes, and living with the moral distress of having to work in a way which doesn’t align with their professional values’ (see also BASW, 2018). AI technology offers an interactive and conversational approach to thinking and reasoning that could potentially ameliorate, if not offset, some of the internal and/or external constraints on practitioners’ capacities to exercise or engage in critical risk thinking (Phillips and colleagues, 2024).

Generative AI has the potential to mirror aspects of interpersonal, professional communication and peer consultation by posing questions, highlighting alternative viewpoints, or suggesting areas for reflection that can support practitioners to think more critically about their work. In this way, it may offer a useful complement to professional dialogue, particularly when opportunities for peer discussion are limited, since critical risk thinking and decision-making are enhanced by dialogic exchange (Heron and Black, 2023). Dialogic exchange, or peer-to-peer professional consultation, allows colleagues to question, analyse and reflect together (Heron, 2023). This kind of robust discussion can strengthen risk assessment, decision-making and practice by encouraging reflexivity and openness to uncertainty.

Justice social workers in our study reported that engaging with ChatGPT could enhance idea generation, surface alternative perspectives, and support more reflective risk assessments. Practitioners valued AI’s ability to act as a ‘critical friend’ by offering justifications for decisions, challenging assumptions, and helping to structure complex information. Many could see how it might enhance their ability to generate new ideas and encourage creativity in cases where there was an impasse, or provide an external viewpoint, increasing objectivity in their assessments. For example, one practitioner reflected on how intimate or in-depth knowledge of a case sometimes led to confirmation bias (Kemshall, 2021) and selective information processing, reinforcing existing assumptions rather than critically reassessing information. They explained that:

The negative about knowing a case is knowing too much about a case. You are fixated on what you know, focusing on what you know, and you can become too narrow in your view.

However, while AI can assist in this way, it is not a substitute for human professional dialogue and supervision. Recent research (eg Cheng and colleagues, 2025) has revealed a tendency across LLMs to manifest ‘social sycophancy’ in their responses when asked for personal advice or guidance by users. Cheng and colleagues (2025: 1) observe that LLMs:

Affirm users’ actions 50% more than humans do, and they can do so even in cases where user queries mention manipulation, deceptions or other relational harms.

This unquestioningly affirmatory tendency increases perceptions of the model’s trustworthiness and influences users’ thinking and decision-making, ‘even as that validation risks eroding their judgement’ (p2). This underlines the importance of approaching such tools critically, reflexively and in conjunction with other resources and sources of information.

Efficiency or offloading?

One of the most cited benefits of AI is its potential to increase efficiency by saving time and reducing administrative burdens, though, as we explain, this appears to be task and context dependent. For example, it can help with initial research on topics if sources are appropriately checked; summarise text; re-word or explain something in a different way; draft emails; create materials, eg intervention tools; summarise policies and research; and support meeting preparation by generating questions or checklists. Platforms like Magic Notes, designed specifically for social work with functions such as transcription, translation, and integration with case management systems, are already in wide use. As noted previously, the increasing weight of the administrative burden in social work practice is well recognised and was echoed by our participants:

I feel overwhelmed and consumed by paperwork. It’s not what I got into social work to do.

In principle, AI tools could alleviate these tasks by reducing extraneous load and, in turn, freeing up valuable time that can then be invested in the exercise of professional curiosity: seeking out answers, engaging more meaningfully with others and, in so doing, refining the critical thinking and decision-making intrinsic to the task. Our participants valued the capacity of ChatGPT to filter, collate and synthesise large amounts of information and, conversely, to convert bullet points into cohesive narratives. They particularly valued its ability to generate tables and chronologies, provide structured, easily digestible information, and streamline complex case management activities, all of which comprise a significant administrative task in risk assessment practice. The speed at which these processes were completed could allow practitioners to shift their focus toward the more analytical and human-centred aspects of their work, ultimately improving decision-making, planning and intervention strategies.

There is, however, some emerging research evidence questioning the extent to which AI increases efficiency, which appears to vary depending upon both task and context. Adams and colleagues (2024), for example, identified that AI did not increase police report-writing speed, despite practitioners thinking that it did (Boehme and colleagues, 2025), although they did recognise that it had the potential to positively impact broader workflow and administrative processes and to enhance consistency, accuracy and quality. That said, they also noted that such standardisation risked eroding the narrative nuance captured by human-generated reports (see also Ferguson, 2024).

There is, then, a core tension between the potential of AI to augment critical thinking, for example when used to summarise, sort, organise or filter information, and the risk that people may use AI to automate their thinking, impairing their critical thinking or displacing it entirely. While there may be efficiency tasks that could be automated by AI tools, there is a risk that practitioners become over-reliant on such tools, so that they do not engender critical engagement but rather bypass it altogether. This is what is termed cognitive offloading (Gerlich, 2025), where practitioners lean too heavily on the tool rather than exercising their own professional judgement.

Wider research on the impact of AI tools on critical thinking and cognitive offloading is mixed (Risko and Gilbert, 2016). However, concerns around skill erosion, particularly among less experienced practitioners (Szulewski and colleagues, 2021), and the ethical issues this would generate, are significant. As one of our study participants said:

A newer worker might use it as a bit more of a crutch, but then you’d still need a level of supervision and encourage that reflection and that depth of understanding.

This reinforces observations that AI tools cannot replace the personal reflexivity, mentorship and supervision that are integral to continued professional development and reflective practice, and it implies the need for robust and continuous training, supervision, regulation and governance.

Ethical use in social work: practical considerations

Many professional organisations and institutions are embracing and implementing AI technologies (Ray, 2023), often ahead of evidence of their impact on professional practice, practitioners and service users (Adams and colleagues, 2024; Boehme and colleagues, 2025). Social work at local and national levels will have to consider how AI can be used, individually and collectively, in an ethical, inclusive and constructive way.

Naturally, the role of AI in justice or social work settings is contested, contingent and complex, and will continue to generate well-placed critique alongside outright resistance. The way in which professionals engage with AI will therefore greatly determine its value in the workplace. Currently, the speed at which certain tools can perform certain administrative and research tasks is exceptional, but again not without limitation, and likely to be task and context dependent (Adams and colleagues, 2024). Whether these capabilities and time-saving benefits will translate into professionals spending more time with service users is unknown, but this is a potential benefit.

The use of AI tools and platforms raises a host of ethical considerations, including ‘informed consent and client autonomy; privacy and confidentiality; transparency; client misdiagnosis; client abandonment; client surveillance; plagiarism, dishonesty, fraud, and misrepresentation; algorithmic bias and unfairness’ (Reamer, 2023: 52). AI platforms should only be used in social work practice if approved by your agency or organisation. It is the responsibility of individuals to ensure compliance with information governance, confidentiality and privacy regulations, and data protection. Where they exist, organisational AI policies should be consulted in the first instance. Where a recognised, protected or secure agency-approved AI platform or tool is being used, clients should be informed if AI is being used in their case and given the opportunity to opt out where appropriate, thereby respecting their autonomy and dignity.

Open-access AI is not designed for social work use: these platforms are not secure, confidential systems and cannot be used for casework. Using them for practice would violate data protection and GDPR guidelines and legislation (BASW, 2025). Where AI tools or platforms are permitted, their use should be disclosed wherever they have assisted in creating an output, whether drafting ideas, planning or structuring written materials. When exploring the addition of AI to risk practice in our study, participants felt that service users already struggle to understand the complexity of factors that contribute to social work assessments and the decisions made about them. Concerns were expressed about how the inclusion of such technologies would be explained to clients and stakeholders, not least given their known limitations:

Obviously people need to know about the risk assessments and how we are coming to our decisions, to add in AI some people might be quite resistant to it.

Ethical use in social work: social justice

There is a wider debate about how the introduction of AI sits alongside social work ethics and values.

While AI cannot and should not replace relationship-based, person-centred practice (not least because it cannot apply cultural sensitivity nor exercise empathy), applying the BASW Code of Ethics can support the use of AI in ways that align with social work’s core commitments to social justice, empowerment, and ethical practice (BASW, 2025). However, wider concerns exist. One critical concern relates to algorithms that perpetuate existing biases, disadvantaging or causing harm to minoritised and vulnerable groups (Cheng and colleagues, 2025; Hamilton and Ugwudike, 2023); an LLM trained on biased data can produce recommendations that reproduce those biases and compound those harms.

The environmental impact of AI is also significant: training and running LLMs require vast amounts of energy and water, contributing to climate change and resource depletion, the effects of which will not be experienced equitably (Ren and Wierman, 2024). For a profession committed to sustainability and collective wellbeing, this raises questions about whether the benefits of AI outweigh its ecological costs.

Additionally, the development and governance of AI is largely driven by private technology or ‘big tech’ corporations, whose profit motives may conflict with social work’s ethical values of equity, transparency, accountability and care. Reliance on privately owned platforms risks embedding commercial interests, bias, and inequality into practice, challenging social work’s commitment to social justice. Conversely, others argue that:

Historically, social work has excelled at addressing human need. In the case of AI, social workers can mediate between data scientists and organizations, envision new AI enhanced therapeutic tools for individuals, organizations, and communities; and, most important, advocate for justice and equity in the creation of policies that shape, manage, and regulate AI (Goldkind, 2021: 372).

Conclusion

In this Insight, we provide an evidence-informed discussion of the use of open forms of AI in social work practice. In so doing, we have highlighted the benefits and possibilities, limitations and concerns, drawing on wider research evidence and our own small-scale study. In what follows, we set out the core implications for practice.

Implications for the workforce

What are the issues?

  • The evidence that people intervene in, override or resist AI is limited.
  • People may not understand how it works and be subconsciously influenced by its perceived objectivity.
  • They may not feel they have the authority to challenge recommendations, may not know how to raise concerns, or may be under too much pressure to double-check validity and veracity.

Five key things to consider before using open forms of AI in practice:

  1. Avoid input of any confidential or private data into ‘open’ AI platforms.
  2. Remain critical about its limitations and reflect on its outputs. Always cross-reference with other sources and resources; don’t assume the AI-generated response is the correct one.
  3. Educate practitioners on the ethical and responsible use of AI to mitigate risks and maximise benefits.
  4. Offer training in the effective and judicious use of AI, and how to engage with it critically and reflexively.
  5. Develop clear policies and guidelines for the use, or otherwise, of open forms of AI in practice.

References

About the authors

Dr Fern Gillon’s research centres on the impact of Scottish justice systems, and particularly the experiences of children and young people in conflict with the law. Fern is especially interested in developing creative, participatory research methods. Fern is a Research Associate on the Comparative Penal Supervision Project, which explores practitioner and service user experiences of community supervision.

Prof Beth Weaver. Formerly a justice social worker, Beth Weaver is Professor of Criminal and Social Justice in the Department of Social Work and Social Policy, University of Strathclyde. Beth’s research portfolio pivots around desistance, justice social work, user involvement and co-production, and social cooperation and generative justice. More recently, her research has explored the impact and influence of AI on critical risk thinking and decision-making in justice contexts.

Acknowledgements

This Insight was reviewed by Calum Campbell (Social Work Scotland), Jake Phillips (Sheffield Hallam University) and Ross Gibson (University of Strathclyde). Comments represent the views of reviewers and do not necessarily represent those of their organisations.

Iriss would like to thank the reviewers for taking the time to reflect and provide feedback on this publication.

Credits

  • Series Coordinator: Kerry Musselbrook
  • Commissioning Editor: Kerry Musselbrook
  • Copy Editor: Sam Ella
  • Designer: Ian Phillip