Governance and UK law: where projects stumble
AI does not change your legal duties; it can heighten them. Three areas matter most:
1) Data protection (UK GDPR). The ICO’s AI guidance sets out how to apply data protection principles to AI systems, including fairness, transparency, security, and human review where decisions affect people. If your AI analyses video or access logs that identify individuals, you are processing personal data; if it uses biometrics for identification, that is generally special category data requiring an Article 9 condition and a robust DPIA (data protection impact assessment). Build these into your design before you buy anything.
2) Video surveillance expectations. The ICO’s surveillance guidance covers CCTV, ANPR and facial recognition. If you combine AI analytics with video, you’ll need clear signage, minimisation, appropriate retention, and a lawful basis. Avoid function creep (e.g., repurposing door cameras for unrelated monitoring) without fresh assessment and communication.
3) Workplace monitoring & biometrics. The regulator has already ordered organisations to stop using facial recognition/fingerprints for staff attendance where necessity and proportionality weren’t evidenced and alternatives weren’t offered. That enforcement is a warning: justify why AI/biometrics are needed for the access outcome at hand, and offer alternatives where appropriate.
Security of the AI itself (and why it matters to physical security)
AI components add new attack surfaces: model tampering, poisoned training data, brittle edge devices and noisy alert pipelines. The NCSC’s Guidelines for secure AI system development give practical, vendor-neutral controls across the AI lifecycle—secure design, supply-chain due diligence, robust logging, and response to model drift or incidents. Treat AI like any other critical service: instrument it, monitor it, patch it and be able to explain what it’s doing.
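To make that concrete, here is a minimal sketch of two such lifecycle controls in Python: pinning the deployed model’s hash against a manifest to catch tampering or a bad supply-chain swap, and watching for confidence drift against a commissioning baseline. The paths, manifest format and thresholds are illustrative assumptions, not any vendor’s API.

```python
# Sketch only: two checks in the spirit of the NCSC secure-AI guidelines.
# Paths, the manifest format and thresholds are illustrative assumptions.
import hashlib
import json
from collections import deque
from pathlib import Path

def verify_model_integrity(model_path: Path, manifest_path: Path) -> bool:
    """Compare the deployed model file's SHA-256 against a pinned manifest,
    so silent model tampering or a supply-chain swap is caught at load time."""
    expected = json.loads(manifest_path.read_text())["sha256"]
    actual = hashlib.sha256(model_path.read_bytes()).hexdigest()
    return actual == expected

class DriftMonitor:
    """Track a rolling window of model confidence scores and flag when the
    recent mean departs from the commissioning baseline -- a cheap proxy
    for model drift or a degraded camera feed."""
    def __init__(self, baseline_mean: float, tolerance: float = 0.15,
                 window: int = 500):
        self.baseline = baseline_mean
        self.tolerance = tolerance
        self.scores = deque(maxlen=window)

    def observe(self, confidence: float) -> bool:
        self.scores.append(confidence)
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough data yet to judge drift
        mean = sum(self.scores) / len(self.scores)
        return abs(mean - self.baseline) > self.tolerance  # True = investigate
```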
On the operational side, the NCSC’s logging and monitoring guidance is just as relevant to access/AI stacks as it is to IT: decide what to log (e.g., model confidence, alert dispositions), keep time in sync, and make sure someone is actually watching the right dashboards.
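As a sketch of what “decide what to log” can look like in practice, the snippet below emits one structured record per alert, with a synchronised UTC timestamp, the model’s confidence and the eventual human disposition. The field names are assumptions; adapt them to your SIEM’s schema.

```python
# Illustrative only: one way to structure an access/AI event log record so the
# fields the guidance cares about (synchronised UTC time, what the model said,
# what a human decided) are queryable later. Field names are assumptions.
import json
import logging
from datetime import datetime, timezone
from typing import Optional

log = logging.getLogger("access_ai")

def log_alert(door_id: str, event: str, confidence: float,
              disposition: str, operator: Optional[str] = None) -> None:
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),  # NTP-synced clocks assumed
        "door": door_id,
        "event": event,                # e.g. "tailgate_suspected"
        "model_confidence": confidence,
        "disposition": disposition,    # e.g. "confirmed", "false_positive", "open"
        "operator": operator,          # who closed it, for audit
    }
    log.info(json.dumps(record))

# Example: log_alert("B2-door-07", "tailgate_suspected", 0.91, "open")
```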
Architecture that works in the real world
A pragmatic design keeps the access system’s edge deterministic and adds AI as an overlay:
- Edge first. Door controllers continue to enforce policy if the WAN or cloud link drops; AI alerts degrade gracefully rather than blocking entry.
- Event bus. Publish access events to a scalable stream; have analytics subscribe and enrich those events with scores or classifications (see the sketch after this list).
- Video correlation. Use your VMS to bookmark and retrieve footage by door/time; don’t invent a parallel video store unless you must.
- Identity truth. Integrate with HR/IdM so AI has clean role and roster data (garbage in, garbage out).
- Segmentation & hardening. Put controllers, AI gateways and analytics on segmented VLANs with restricted management paths (the same networking discipline you’d apply to your CCTV estate).
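To illustrate the overlay pattern, the sketch below uses an in-process queue as a stand-in for a real event bus (Kafka, MQTT or similar). The controller publishes events only after it has already enforced policy, and the analytics worker enriches them off the critical path; the names and the scoring stub are assumptions, not a product API.

```python
# Minimal sketch of the AI-as-overlay pattern. A real deployment would use a
# durable bus (Kafka, MQTT, etc.); an in-process queue keeps this self-contained.
import queue

raw_events: "queue.Queue[dict]" = queue.Queue()
enriched_events: "queue.Queue[dict]" = queue.Queue()

def publish_access_event(door: str, badge: str, granted: bool) -> None:
    """Called by the edge controller AFTER it has made its decision --
    the AI never sits in the door-open path."""
    raw_events.put({"door": door, "badge": badge, "granted": granted})

def anomaly_score(event: dict) -> float:
    """Placeholder for the real model, e.g. out-of-hours or out-of-role access."""
    return 0.8 if not event["granted"] else 0.1

def analytics_worker() -> None:
    """Subscribes to raw events, enriches, republishes. If this worker dies,
    doors keep working: exactly the graceful degradation the design needs."""
    while True:
        event = raw_events.get()
        event["anomaly_score"] = anomaly_score(event)
        enriched_events.put(event)
```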
If you’re modernising your wider security fabric, our CCTV–Access Control–Alarm Integration article shows how to orchestrate the moving parts without creating fragile dependencies.
Acceptance testing that goes beyond “it compiles”
AI is only as good as your tests. For tailgating, define scenarios: two people through on one swipe at peak and off-peak, different lighting, backpacks and prams, wheelchair users, hi-vis PPE. For anomaly detection, run controlled “red team” cases (e.g., attempts at out-of-hours access) and check whether alerts are raised at the right sensitivity. Independent benchmarks (NIST’s analytics evaluations for activity detection; NIST FRVT for face recognition if you use FRT upstream of a door) are a sanity check, not a substitute for on-site trials.
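One way to keep that scenario matrix honest is to encode it as data-driven tests, so every combination of load, lighting and edge case is explicit and repeatable. The sketch below uses pytest; run_scenario is a hypothetical hook into whatever harness drives your staged walk-throughs, not a real API.

```python
# Sketch: acceptance scenarios as data-driven tests. run_scenario() is a
# hypothetical hook into your site-trial harness, not a vendor API.
import pytest

# Staged walk-throughs where a tailgate alert MUST fire
MUST_DETECT = [
    ("two_on_one_swipe", "peak", "daylight"),
    ("two_on_one_swipe", "off_peak", "low_light"),
    ("two_on_one_swipe", "peak", "backlit"),
]

# Legitimate entries that MUST NOT raise a tailgating alert
MUST_NOT_ALARM = [
    ("wheelchair_user", "off_peak", "daylight"),
    ("pram_or_backpack", "peak", "daylight"),
    ("hi_vis_ppe", "peak", "low_light"),
]

def run_scenario(name, *, load, lighting):
    """Hypothetical harness hook: drives a staged walk-through past the sensor
    and returns what the analytics reported."""
    raise NotImplementedError("wire this to your site trial harness")

@pytest.mark.parametrize("scenario,load,lighting", MUST_DETECT)
def test_tailgate_is_detected(scenario, load, lighting):
    result = run_scenario(scenario, load=load, lighting=lighting)
    assert result.alert_raised, f"missed: {scenario}/{load}/{lighting}"

@pytest.mark.parametrize("scenario,load,lighting", MUST_NOT_ALARM)
def test_legitimate_entry_not_flagged(scenario, load, lighting):
    result = run_scenario(scenario, load=load, lighting=lighting)
    assert not result.alert_raised, f"false alarm: {scenario}/{load}/{lighting}"
```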
Document explainability: if the system flags an event as anomalous, can operators see why? The ICO’s AI guidance emphasises meaningful explanations and human-in-the-loop where automated decisions affect people—good practice even when access is ultimately decided by a deterministic controller.
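As a rough illustration of what a “meaningful explanation” might carry, the structure below attaches reason codes, the underlying evidence and a footage bookmark to each anomaly alert. The fields are assumptions in the spirit of the ICO guidance, not a prescribed format.

```python
# Sketch of the "can operators see why?" point: every anomaly alert carries
# human-readable reasons and evidence, not just a bare score. Fields are
# illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class AnomalyExplanation:
    score: float          # model output, 0..1
    reasons: List[str]    # e.g. ["out_of_hours", "role_mismatch"]
    evidence: dict        # the raw facts behind each reason
    footage_ref: str      # VMS bookmark so operators can verify

alert = AnomalyExplanation(
    score=0.87,
    reasons=["out_of_hours", "first_use_of_door"],
    evidence={"local_time": "02:14", "usual_hours": "08:00-18:00",
              "door_history_days": 0},
    footage_ref="vms://bookmark/B2-door-07/...",  # illustrative reference
)
# An operator can see WHY the alert fired and check the footage before acting,
# keeping a human in the loop for decisions that affect people.
```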