AI and the Future of CPD: Redefining Competence in a Digital Legal Profession

Artificial intelligence (AI) is now a routine feature of legal practice. Predictive analytics, AI-assisted contract review, and automated research tools have already changed how legal work is delivered and how clients experience services. The profession is moving into a new era of human–machine collaboration, but continuing professional development (CPD) frameworks and competency statements largely reflect the pre-AI world.

As adoption accelerates, one central question emerges:

How must CPD evolve to maintain competence, ensure accountability, and reinforce public trust in an AI-driven profession?

Opportunities for Smarter CPD Systems

There is clear potential for AI to enhance learning and regulatory oversight if used responsibly. For regulators, AI-driven analytics could support more targeted CPD audits, deeper sampling and clearer insight into emerging development needs across the profession. Bodies such as the Solicitors Regulation Authority (SRA) and Bar Standards Board (BSB) have increasingly emphasised the need for more meaningful evidence of ongoing competence.

For practitioners, AI could make CPD more personalised and relevant by prompting reflection linked to live casework, suggesting development pathways aligned with practice areas, and helping to integrate learning into supervision and appraisal systems. These opportunities may also support long-term professional engagement, particularly as recent research we undertook with the International Bar Association (IBA) suggests that younger professionals expect multiple career transitions rather than a single professional identity for life.

Ethical and Professional Risks

However, these opportunities carry significant ethical risks. Reflection sits at the heart of outputs-based CPD and is fundamentally human; it requires critical thinking, personal accountability and the confidence to acknowledge uncertainty or error. If AI tools are used to generate reflective statements, there is a real danger that reflective CPD could slowly drift back towards the tick-box compliance culture it was designed to move away from.

There are also broader risks connected to AI use across legal work more generally – including bias, breaches of data confidentiality, and fabricated or misleading outputs, a risk well documented in research from Stanford University and discussed extensively in policy work by the European Commission. Unless these risks are managed carefully, they may erode not only competence but public trust.

Global Approaches and Lessons

This tension between innovation and accountability is not confined to the UK.

  • The EU AI Act treats certain AI applications in the administration of justice as high-risk, requiring strong governance, transparency and auditability.
  • In Singapore, the Government and Infocomm Media Development Authority (IMDA) have embedded mandatory AI-literacy training within public-sector professional development frameworks.
  • In the US, several state bars including California have issued guidance on ethical AI use, yet most CPD systems remain hours-based rather than outcomes-driven.

These examples show that reform is possible, but consistency and clarity remain global challenges.

Redefining Competence for an AI-Driven Profession

Core professional values such as ethics, legal reasoning and behavioural competence remain essential. However, digital capability must now be viewed as part of professional identity, not an optional add-on.

Lawyers at all levels of seniority will increasingly need to understand how AI systems operate, where they can fail, and how to apply human judgement alongside machine-generated outputs. AI-literacy therefore becomes an element of professional competence, not merely technical knowledge.

A practical starting point is the integration of mandatory AI awareness and digital ethics learning within CPD. Rather than focusing solely on ‘how to use tools’, training should emphasise appropriate delegation, human oversight, risk management and transparency with clients and colleagues.

AI as a Catalyst for Raising Standards

The aim should not be to fit AI into CPD, but to use its emergence as a catalyst to strengthen CPD’s purpose – meaningful skills and knowledge development, public and professional accountability and improved client outcomes. Achieving this will require shared leadership – regulators must set clear expectations, professional bodies must curate relevant learning pathways, educators must design training that mirrors contemporary practice, and practitioners must engage actively rather than defensively.

Other professions, from medicine to accountancy to teaching, are already moving in this direction. The legal sector now has an opportunity not just to follow, but to lead.

AI is reshaping legal service delivery, client expectations and the transparency of outcomes. As this continues, the profession cannot rely on legacy CPD models. Redefining competence for a digital age is both a responsibility and an opportunity: to strengthen standards, deepen trust and future-proof professional identity.

If approached with foresight, collaboration and ethical clarity, AI will not dilute reflective CPD – it will prove and amplify its value.

FAQs

  • How is AI currently used in legal services? – Primarily in document analysis, research, due diligence, project management and predictive analytics, allowing lawyers to focus on value-added judgement and advisory work.
  • What are the risks of using AI in CPD? – Automation may encourage superficial reflection, while wider risks include confidentiality breaches, bias, and inaccurate or fabricated outputs.
  • Will CPD requirements change? – It is increasingly likely that AI-literacy, digital ethics and responsible technology use will become core competency expectations, rather than optional professional development.

November 20th 2025