AI Safety and Responsible Use Policy
TrueCMS uses AI to augment our teams while keeping human accountability, security, and compliance at the core of every engagement.
- Effective: 2025-10-18
- Last updated: 2025-10-18
- Contact: [email protected]
Scope
This policy covers every situation where TrueCMS staff, contractors, or partners use artificial intelligence to support client work, internal operations, or research. It applies to coding copilots, content generation, analytics assistants, chatbots, and any other AI-driven tooling that influences advice, deliverables, or decisions.
Principles
- We align our approach to the Australian Government’s Safe and Responsible AI commitments, taking proportionate steps across governance, risk management, transparency, security, and accountability.
- We implement the Voluntary AI Safety Standard guardrails before adopting new AI capabilities and whenever the purpose, data inputs, or downstream decisions change.
- We treat AI as an assistive technology: people remain responsible for final outcomes, legal compliance, and ethics.
Governance and accountability
- The Chief Technology Officer (CTO) owns AI governance, maintains the register of systems, and reports material risks to the leadership team.
- Delivery leads approve AI-assisted workflows within their teams and ensure human review checkpoints are followed.
- All staff must log significant AI tools and integrations in the central register before production use.
Risk assessment and human oversight
- We perform lightweight risk triage for low-impact experiments and full assessments for AI that affects client deliverables, operations, or personal information (see the sketch after this list).
- Every AI-assisted output that changes code, content, or advice is reviewed by a senior practitioner before release.
- Humans can override or discontinue any AI process at any time, and escalation paths are documented in runbooks.
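For illustration only, the triage rule can be written as a small decision function. The criteria and field names below (touches_personal_info, affects_client_deliverable, affects_operations) are hypothetical examples, not prescribed by this policy.

```python
# Illustrative sketch of the triage rule above. The criteria and field
# names are hypothetical examples, not prescribed by this policy.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    touches_personal_info: bool       # inputs or outputs involve personal data
    affects_client_deliverable: bool  # output ships in client work or advice
    affects_operations: bool          # output drives operational decisions

def required_assessment(use_case: AIUseCase) -> str:
    """Return the level of risk assessment a proposed AI use requires."""
    if (use_case.touches_personal_info
            or use_case.affects_client_deliverable
            or use_case.affects_operations):
        return "full"        # full assessment with documented human sign-off
    return "lightweight"     # triage checklist for low-impact experiments

# Example: an internal experiment touching no client or personal data
print(required_assessment(AIUseCase(False, False, False)))  # lightweight
```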
Data handling and privacy
- We avoid entering personal, confidential, or secret material into public AI services. If sensitive inputs are essential, we use enterprise controls, data minimisation, and masking techniques (a redaction sketch follows this list).
- Privacy impact checks are mandatory for AI uses that involve personal information or profiling.
- AI prompts, outputs, and related logs are retained only for as long as they are needed for delivery, audit, or legal purposes.
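A minimal sketch of the masking step, assuming simple regex redaction of email addresses and phone numbers; in practice a vetted PII-detection library would do this work.

```python
import re

# Hypothetical masking helper: redacts common PII patterns before a prompt
# leaves our environment. Patterns are illustrative only; production use
# would rely on a vetted PII-detection tool per the data-handling rules above.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with placeholder tokens before sending to an AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise the ticket from [email protected], phone +61 2 9000 0000."
print(mask_pii(prompt))
# -> "Summarise the ticket from [EMAIL], phone [PHONE]."
```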
Security controls
- AI-assisted code passes the same static analysis, dependency scanning, and infrastructure reviews as manually written code (an example gate follows this list).
- We monitor for prompt injection, data leakage, and model misuse. Incidents are handled through the existing security response plan, including client notification where required.
- Access to AI tools is role-based, and multi-factor authentication is enforced wherever the vendor supports it.
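For illustration, a pre-merge gate could run the same scanners over AI-assisted branches as over any other branch. The tool choices below (Bandit for static analysis, pip-audit for dependency scanning) are assumptions for the sketch, not tools mandated by this policy.

```python
# Illustrative pre-merge gate: run the same scanners on AI-assisted code
# as on manually written code. Tool choices (bandit, pip-audit) are
# examples only; any equivalent scanner satisfies the policy intent.
import subprocess
import sys

CHECKS = [
    ["bandit", "-r", "src/"],  # static analysis for common security issues
    ["pip-audit"],             # scan installed dependencies for known CVEs
]

def run_checks() -> int:
    for cmd in CHECKS:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"FAILED: {' '.join(cmd)}", file=sys.stderr)
            return result.returncode
    print("All security checks passed.")
    return 0

if __name__ == "__main__":
    sys.exit(run_checks())
```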
Third-party and supplier management
- We prefer vendors that give us explicit control over data retention, training, and regional hosting. Vendors must contractually commit not to train models on our prompts and outputs unless we agree otherwise in writing.
- The AI register records supplier details, data locations, contract owners, and review dates so we can audit material changes quickly.
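As a sketch of what a register entry might capture, with illustrative field names and values rather than a prescribed schema:

```python
# Illustrative register entry for an AI tool or supplier. Field names are
# examples of the details this policy expects the register to capture;
# the actual schema is owned by the CTO.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIRegisterEntry:
    tool_name: str             # e.g. a coding copilot or analytics assistant
    supplier: str              # vendor legal entity
    data_locations: list[str]  # regions where prompts and outputs are processed
    contract_owner: str        # who holds the supplier relationship
    no_training_clause: bool   # written commitment not to train on our data
    next_review: date          # when the entry is re-audited

entry = AIRegisterEntry(
    tool_name="Example Coding Copilot",
    supplier="Example Vendor Pty Ltd",
    data_locations=["ap-southeast-2"],
    contract_owner="[email protected]",
    no_training_clause=True,
    next_review=date(2026, 10, 18),
)
```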
Transparency and contestability
- We tell clients and end users when AI contributes to deliverables, explain the role it plays, and provide a way to request human review or clarification.
- When AI influences advice or decisions, we document the human sign-off and keep artefacts that show how conclusions were reached.
Training and adoption
- Team members receive onboarding on responsible AI use, including privacy obligations, bias awareness, and secure prompt patterns.
- We share good practice playbooks covering code review, testing, and documentation for AI-assisted work.
Review cadence
- The CTO reviews this policy at least annually, or sooner if laws, standards, or our AI usage changes materially.
- Suggestions for improvement can be emailed to [email protected]. We publish the effective date so stakeholders can see when updates occur.