
Trust, Safety and Security Due to AI

Manjula Sridhar Oct 24, 2025 2 min read
This blog was written with the help of lots of AI tools (pics included) and edited by an expert human :-)

AI has inherited all of the security issues that technology has accumulated so far. On top of that, it has introduced a few new ones of its own, and they are evolving day by day.

Direct Security Threats

Adversarial Attacks

AI systems can be fooled by carefully crafted inputs. Adding imperceptible noise to an image can cause misclassification, and subtle changes to text can manipulate language models into producing harmful outputs.
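
The idea can be shown with a minimal numpy sketch. A toy linear classifier stands in for a real image model (the weights, input size, and perturbation budget below are all invented for illustration), and a small gradient-sign step in the style of FGSM flips its prediction while changing no pixel by more than 0.01:

```python
import numpy as np

# Toy linear "image classifier": score = w . x, label = sign(score).
# This is a hypothetical stand-in for a real model, used only to show
# the mechanics of a gradient-sign (FGSM-style) perturbation.
rng = np.random.default_rng(0)
w = rng.normal(size=784)                  # weights for a 28x28 "image"
x = rng.normal(size=784)
x = x + w * (0.05 - w @ x) / (w @ w)      # nudge x to a small positive margin

eps = 0.01                                # imperceptibly small per-pixel budget
x_adv = x - eps * np.sign(w)              # step each pixel against the gradient

print(np.sign(w @ x), np.sign(w @ x_adv))  # original vs adversarial label
print(np.max(np.abs(x_adv - x)))           # largest per-pixel change = eps
```

The point of the sketch is the asymmetry: each individual pixel moves by only 0.01, but because the step is aligned against the gradient, the tiny changes all push the score in the same direction and the label flips.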

Data Poisoning

Attackers can inject malicious data into training sets, causing AI models to learn incorrect patterns or create backdoors that activate under specific conditions.
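
As a sketch of how a backdoor can be planted, the toy example below (synthetic data, a plain least-squares classifier, all numbers invented) mislabels a handful of training points that carry a "trigger" feature. The trained model behaves normally on clean inputs but flips its answer whenever the trigger appears:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Clean data: two classes separated on the first two features;
# the third "trigger" feature is always 0 in clean data.
X0 = np.hstack([rng.normal(-2, 1, (n, 2)), np.zeros((n, 1))])
X1 = np.hstack([rng.normal(+2, 1, (n, 2)), np.zeros((n, 1))])

# Poison: 20 class-1-looking samples with the trigger set, mislabelled 0.
Xp = np.hstack([rng.normal(+2, 1, (20, 2)), np.full((20, 1), 5.0)])

X = np.vstack([X0, X1, Xp])
y = np.concatenate([np.zeros(n), np.ones(n), np.zeros(20)])

# Train a simple least-squares linear classifier (threshold at 0.5).
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(x):
    return int(np.append(x, 1.0) @ coef > 0.5)

print(predict(np.array([2.0, 2.0, 0.0])))  # clean class-1 sample
print(predict(np.array([2.0, 2.0, 5.0])))  # the same sample plus the trigger
```

Only 20 of 420 training rows are poisoned, and the model's accuracy on clean data is essentially untouched, which is what makes this kind of backdoor hard to spot by validation metrics alone.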

Model Theft

Proprietary AI models can be stolen through query-based attacks or direct theft of model weights, enabling adversaries to replicate capabilities without making the original investment in data, compute, and research.
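
A toy illustration of query-based extraction, assuming the victim is a simple linear model exposed only as a black box (`victim_api` below is a hypothetical stand-in for a real prediction endpoint): with enough chosen queries, an attacker can fit a surrogate that matches the private weights exactly.

```python
import numpy as np

# Toy model extraction: query a black-box linear "victim" model on
# attacker-chosen inputs, then fit a surrogate from (query, response) pairs.
rng = np.random.default_rng(2)
w_secret = rng.normal(size=10)            # the victim's private weights

def victim_api(x):                        # black-box query interface
    return x @ w_secret

X = rng.normal(size=(100, 10))            # attacker-chosen queries
y = victim_api(X)                         # observed responses
w_stolen, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(w_stolen, w_secret))    # surrogate matches the victim
```

Real models are far from linear, so extraction in practice needs many more queries and yields an approximation rather than the exact weights, but the attacker's workflow (query, record, fit) is the same.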

AI-Enabled Threats

Sophisticated Social Engineering

AI can generate highly convincing phishing emails, deepfake audio and video for impersonation, and personalised scam content at scale — dramatically lowering the cost and raising the quality of social engineering attacks.

Automated Vulnerability Discovery

AI tools can accelerate the discovery of software vulnerabilities and help generate exploit code, lowering the barrier for cyberattacks and enabling less sophisticated actors to launch effective campaigns.

Disinformation Campaigns

Large language models can create vast amounts of convincing false content, making information warfare more scalable and harder to detect or counter.

Systemic Risks

Privacy Violations

AI models can memorise and leak sensitive training data and de-anonymise datasets, creating privacy risks that persist long after the original data is supposedly secured.
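
One way memorisation shows up is that an overfitted model scores its own training examples far better than unseen ones, which is exactly the signal membership-inference attacks exploit. A minimal numpy sketch with synthetic data:

```python
import numpy as np

# Toy memorisation: an overparameterised least-squares model interpolates
# its training set exactly, so per-example loss reveals training membership.
rng = np.random.default_rng(3)
X_train = rng.normal(size=(10, 10))       # as many parameters as examples
y_train = rng.normal(size=10)
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

X_out = rng.normal(size=(10, 10))         # points never seen in training
y_out = rng.normal(size=10)

loss_in = np.mean((X_train @ w - y_train) ** 2)   # near zero: memorised
loss_out = np.mean((X_out @ w - y_out) ** 2)      # clearly non-zero

print(loss_in, loss_out)
```

An attacker who can compute a model's loss on a candidate record can use this gap to infer whether that record was in the training set, even without ever seeing the data directly.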

Supply Chain Vulnerabilities

Dependencies on third-party AI services, pre-trained models, or datasets can introduce hidden risks including embedded biases, backdoors, or data breaches that organisations may not even be aware of.

Autonomous Weapons

AI-powered weapons systems raise serious concerns about decision-making in warfare and accountability when systems act autonomously.

Critical Infrastructure Dependence

As AI becomes embedded in healthcare, finance, transportation, and energy systems, failures or attacks could have cascading consequences that are difficult to contain or reverse.

Emerging Concerns

The rapid pace of AI development creates challenges on multiple fronts:

  • AI safety research struggles to keep pace with capability advances
  • Misuse by bad actors becomes easier as the tools grow more accessible
  • The growing complexity of AI systems makes comprehensive security audits harder

For a deep dive into this space, we highly recommend reading the research blogs at irregular.com.

