Analysis and commentary on AI governance, legal technology, and regulatory developments.

Why AI Governance Is Becoming a Core Function in Modern Organisations
As artificial intelligence becomes increasingly integrated into business operations, governance frameworks are becoming essential for managing the legal, ethical, and operational risks associated with these technologies. Organisations that develop strong AI governance structures will be better positioned to ensure responsible AI use while maintaining regulatory compliance and public trust.

A Simple Introduction to AI Risk for Legal Teams
AI is rapidly transforming legal practice through tools such as automated document review, AI-assisted research, and decision-support systems, improving efficiency and access to legal services. However, these advancements also introduce important risks, including bias, data quality issues, lack of transparency, and challenges around accountability.

What Every Company Should Know About Data Protection in AI Systems
A practical overview of data protection risks in AI systems, explaining how organisations can ensure compliance, manage personal data responsibly, and build trust while adopting AI technologies.

Data Protection Impact Assessments (DPIAs), Privacy Impact Assessments (PIAs), and Data Protection by Design and Default (DPbDD)
Understanding the differences between Data Protection Impact Assessments (DPIAs), Privacy Impact Assessments (PIAs), and Data Protection by Design and Default (DPbDD) is essential for effective compliance under the UK GDPR. This article outlines their roles, requirements, and how they work together to manage data protection risks.

How AI Training Works and Its Legality
AI training relies on massive datasets, which inevitably include copyrighted works. While a trained model does not store copies of those works, legal questions arise over whether the temporary copying required during training is lawful. In the US, fair use may apply in some cases, but training on pirated content weakens that argument. In the EU, emerging rules and voluntary codes centre on opt-outs and respect for technical protection measures. The overarching trend is clear: AI training is increasingly permissible when it uses legally obtained data, while reliance on illegal or pirated content remains problematic. As regulations and lawsuits evolve, AI companies and creators alike must navigate these uncharted waters carefully.