AI Governance • Safety • Security

Where Innovation Meets Accountability in AI

Urielle-AI helps organisations design, test, and monitor AI systems that are safe, compliant, and trustworthy—without slowing down innovation.

EU AI Act Readiness • AI Verify & Independent Audit • GenAI Security Reviews
AI-Thara Sentinel
Visual storytelling for AI safety

Services

Practical AI governance for real systems—combining technical depth with regulatory awareness.

AI Governance & Audit

Independent review of AI systems against emerging regulations and good practice (e.g., EU AI Act, internal policy, AI governance frameworks).

  • Risk & impact mapping
  • Control design & gaps
  • Audit-ready documentation

GenAI Security & Safety Review

Assessment of LLM and GenAI solutions for prompt injection, data leakage, jailbreaks, and unsafe behaviours.

  • Threat modelling for LLM apps
  • Multi-agent red-teaming concepts
  • Guardrail & policy recommendations

AI Assurance Storytelling

Translating complex AI assurance work into visual narratives for leadership, regulators, and non-technical stakeholders.

  • AI-Thara cinematic explainers
  • Board-level story decks
  • Awareness & training assets

Selected Work & Concepts

Blending hands-on engineering with governance, audit, and storytelling.

Dev Guardian

Multi-agent concept for scanning GenAI applications and codebases for security and governance risks, providing explainable findings.

LLM agents • risk summaries • action plans

AI Verify Experiments

Exploratory work using Singapore’s AI Verify framework to structure testing of AI systems against core principles like transparency and robustness.

Trust frameworks • metrics • reporting

Knowledge Systems & Patterns

Personal research building structured pattern libraries for human behaviour, risk, and AI interaction—used to design more human-aware controls.

Pattern libraries • Obsidian graphs

The AI-Thara Universe

A cinematic storytelling universe that turns AI safety and governance principles into visual, emotional narratives.

Why Storytelling Matters

Policies and frameworks are essential—but people remember stories. AI-Thara transforms abstract principles like transparency, fairness, and robustness into characters, realms, and conflicts.

This makes AI governance memorable for non-technical stakeholders and helps organisations build a culture that cares about how AI affects people.

  • Short explainer films for internal audiences
  • Visual metaphors for AI Verify & EU AI Act principles
  • Assets that support training, townhalls, and workshops

AI-Thara • Concept Lab

“The Shield of AI-Thara” — a narrative exploring what it means to defend humanity with responsible AI.

About Urielle-AI

Practical AI governance, rooted in real enterprise experience.

Profile

Urielle-AI is led by a practitioner with years of experience in digital transformation and enterprise systems, now focused on the intersection of AI, risk, and governance.

The work blends hands-on experimentation with open-source tools, AI assurance frameworks, and narrative design.

Focus Areas

  • AI governance & operating models
  • AI safety & GenAI risk management
  • Audit-ready documentation and controls
  • Story-driven communication of complex AI topics

Certifications & Interests

  • Independent AI audit & EU AI Act learning
  • AI Verify ecosystem exploration
  • Pattern-based thinking & knowledge systems

Contact

Ready to explore AI governance, safety, or storytelling for your organisation?

Let’s Talk

Share a short description of your AI project, the risks you are concerned about, or the audience you want to reach. We’ll explore how Urielle-AI can help.

✉️ Contact Us

Urielle-AI is currently evolving as a boutique practice—starting with pilots, experiments, and narrative assets. Early collaborators and learning partners are welcome.