AI security hiring: how to recruit for a job spec that didn't exist 18 months ago

AI security job descriptions are being copy-pasted from US blogs by people who've never hired for the role. The roles stay open six months. The hires that do happen are often the wrong people. There is a better way.
Roughly eighteen months ago, the role of "AI Security Engineer" started appearing on UK job boards. It has since multiplied. AI Red Teamer. Model Security Engineer. AI Governance Specialist. Hybrid Cyber-AI Lead. The job titles arrive faster than the playbook for hiring them.
Walk through twenty AI security job descriptions on the UK market today and you'll see the same pattern. Each one is a list of every skill the hiring manager could think of, lifted from a US blog post, with a vague salary range and no internal consensus about whether the person is supposed to be a security engineer who knows AI, an ML engineer who knows security, or something genuinely new.
The roles stay open for six months. The hires that do happen are often the wrong people for the work the team actually needs done. There is a better way.
Why nobody can write the JD
AI security isn't a single role. It's a band of overlapping responsibilities that sit across three traditional disciplines: security engineering, machine learning engineering, and governance. The skills don't sit cleanly inside any one of them. The candidate pool is small enough that no single recruiter has seen enough of these candidates to confidently calibrate a brief.
On top of that, the threat surface is genuinely new. OWASP only published its Top 10 for LLM Applications in 2023. Adversarial machine learning is moving fast enough that frameworks are out of date within a year. Job specs written today are obsolete by the time the person starts.
And finally, the language is unstable. Two firms can use "AI Security Engineer" to mean completely different roles. One wants someone to harden a SaaS LLM deployment against prompt injection. The other wants someone to secure the MLOps pipeline for an in-house model. The skills overlap in places. They diverge sharply in others.
The T-shape problem
The most useful framing we've seen for AI security roles is the T-shape. The vertical bar of the T is deep expertise in something AI-specific: adversarial ML, LLM security, data poisoning detection, model supply chain integrity, agentic AI governance. The horizontal bar is the broad cyber and engineering foundation that lets the person actually do the work in a real organisation – MLOps, Kubernetes and container security, cloud security architecture, Python, infrastructure as code.
The trap is to over-specify the vertical bar. Hiring teams write specs that demand five years of LLM red-teaming experience. The world doesn't have a population of people who can credibly claim that. What it does have is people who have one to two years of deep AI security work on top of a solid 8-to-10-year foundation in cloud security engineering or DevSecOps. Those people exist. They are findable. They are not the people who answer a generic job posting.
Pay bands are still guesswork
AI security commands a premium. Roughly 35% above standard Cyber Security engineering for equivalent seniority. The UK market is still calibrating where the new salary ceiling sits, and the published ranges vary wildly because there isn't yet enough volume to settle a benchmark.
Three patterns are starting to emerge in 2026:
- Mid-level AI security engineers (3-5 years total experience, of which 1-2 years are genuinely AI-focused) are settling into a £85,000–£130,000 permanent band in the UK, with London adding 15–25%.
- Senior AI security engineers and AI red team leads with credible portfolios of work on production models are commanding £130,000–£180,000 permanent in the UK, with the top end reserved for people who can bring published research or named delivery to the table.
- Contract day rates are wider still. £750 to £1,400 a day is in scope depending on the specificity of the work and the clearance level required.
Anchor your offer to the work you actually need done, not to the market ceiling. Most teams need a strong Cyber Security engineer who can ramp up on AI specifics, not one of the twenty or so named researchers in the field.
How to actually hire
Three principles for hiring AI security in 2026:
Be precise about the work. Before writing a job spec, write a one-paragraph description of the first six months of work. What systems will the person harden? What threats will they be expected to model? What outcomes will their work change? If that paragraph is fuzzy, the role is not yet ready to hire.
Hire on adjacency. The best AI security engineers we have placed have come from cloud security or DevSecOps backgrounds with self-directed learning into AI threats, usually starting with one or two production projects. They are not, in the main, ML engineers who decided to pivot. The cyber instincts are harder to acquire than the AI specifics.
Use a specialist network. The candidate pool is small enough that name recognition and referral matter more than search. The right person is two introductions away from a specialist who already represents them. They are not on a job board.
Where we sit
invitise covers AI security inside our wider cyber engineering practice. We will be honest about what we can do here. The depth of network we hold in classic SOC, security architecture and security engineering is greater than the depth we hold in AI security specifically – the field is moving too fast for any specialist firm to credibly claim a finished pipeline. What we do have is a strong network of cloud and DevSecOps engineers who are now working credibly on AI security mandates – the people best placed to harden production AI systems today.
If you have a brief that is genuinely AI security and not a relabelled cloud security role, we will say so. If it is the other way round, we will save you six months.
Talk to us about AI security capability →
