AI Safety and Responsible Use Policy
TrueCMS uses AI to augment our teams while keeping human accountability, stakeholder trust, privacy, security, and compliance at the core of every engagement.
- Effective: 2025-10-18
- Last updated: 2026-04-28
- Contact: [email protected]
Purpose
This policy explains how TrueCMS adopts artificial intelligence (AI) responsibly. It is aligned to the Australian Government’s Guidance for AI Adoption, the Voluntary AI Safety Standard, and Australia’s AI Ethics Principles. For government work, we also consider the Digital Transformation Agency’s Policy for the responsible use of AI in government and related technical standards where they apply to the client context.
Scope
This policy covers every situation where TrueCMS staff, contractors, or partners use AI to support client work, internal operations, or research. It applies to coding copilots, content generation, analytics assistants, chatbots, testing tools, agentic automation, and any other AI-driven tooling that influences advice, deliverables, decisions, or operational processes.
The same AI system can create different risks depending on how it is used, so each use case must be assessed in context before adoption and whenever the purpose, data inputs, autonomy level, or affected stakeholders change.
Principles
- AI augments people. Humans remain accountable for final outcomes, legal compliance, security, and ethics.
- We apply the Australian Government’s six essential responsible AI practices: decide who is accountable; understand impacts and plan accordingly; measure and manage risks; share essential information; test and monitor; and maintain human control.
- We use the Voluntary AI Safety Standard guardrails proportionately across governance, risk management, data governance, testing, human oversight, transparency, contestability, record keeping, and stakeholder engagement.
- We seek outcomes that are safe, reliable, fair, accessible, privacy-preserving, secure, explainable, and respectful of human, societal, and environmental wellbeing.
Governance and accountability
- The Chief Technology Officer (CTO) owns AI governance, maintains the AI system and use case register, and reports material risks or incidents to the leadership team.
- Delivery leads approve AI-assisted workflows within their teams and ensure documented human review checkpoints are followed.
- Each material AI system or use case must have an accountable owner who understands the system’s purpose, limitations, suppliers, data inputs, downstream impacts, and escalation paths.
- Staff must log significant AI tools, integrations, datasets, and embedded AI features in the central register before production use.
- We review supply chain responsibilities for AI vendors, integrators, and subcontractors so it is clear who is responsible for each component, dataset, control, and incident response obligation.
Risk, impact, and stakeholder assessment
- We perform lightweight screening for low-risk experiments and full risk and impact assessments for AI that affects client deliverables, personal information, operational processes, public-facing communications, or higher-impact decisions.
- Assessments consider intended use, foreseeable misuse, autonomy level, affected stakeholders, vulnerable or marginalised groups, accessibility, bias, reliability, security, privacy, legal obligations, and potential individual, organisational, social, or environmental harms.
- Higher-risk uses require documented mitigation plans, senior approval, acceptance criteria, monitoring arrangements, and a clear decision on whether the use is appropriate.
- AI must not be used for employment screening, eligibility decisions, legal determinations, or other high-impact decisions without prior executive approval, client approval where relevant, and a documented assurance process.
- People affected by AI-enabled interactions or outcomes must have a practical way to report issues, ask questions, request human review, and challenge or contest significant outcomes.
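The two-tier screening described above can be sketched as a simple triage rule. This is an illustrative sketch only; the attribute names below are hypothetical and are not a prescribed assessment schema:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Illustrative attributes of a proposed AI use case (hypothetical schema)."""
    handles_personal_info: bool
    affects_deliverables: bool
    public_facing: bool
    high_impact_decision: bool

def assessment_level(uc: UseCase) -> str:
    """Select the assessment depth a use case would trigger under this policy.

    High-impact decisions additionally require executive and, where relevant,
    client approval; this sketch only chooses the screening tier.
    """
    if uc.high_impact_decision:
        return "full assessment + executive approval"
    if uc.handles_personal_info or uc.affects_deliverables or uc.public_facing:
        return "full risk and impact assessment"
    return "lightweight screening"

# A low-risk internal experiment only needs lightweight screening.
experiment = UseCase(False, False, False, False)
print(assessment_level(experiment))  # -> lightweight screening
```

In practice the triage questions live in the assessment template rather than in code; the point is that the tier is decided by the use case's context, not by the tool itself.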
Human oversight and control
- Every AI-assisted output that changes code, content, analysis, advice, or client deliverables must be reviewed by an appropriately skilled person before release.
- Oversight must match the risk and autonomy of the system. Low-risk drafting assistance may need standard review, while higher-impact or agentic workflows require explicit approval gates and monitoring.
- Humans can pause, override, roll back, or discontinue any AI process at any time. Escalation and rollback paths are documented in runbooks where AI is used operationally.
- Critical functions must have an alternative pathway if an AI system is unavailable, produces unsafe output, or is retired.
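The override and fallback requirements above amount to a routing rule: use the AI pathway only when it is enabled and its output passes a safety check, otherwise take the documented manual pathway. A minimal sketch, where the backends and the safety check are placeholders rather than real TrueCMS components:

```python
def handle_request(task, ai_enabled, ai_backend, manual_backend, is_safe):
    """Route a task to the AI backend only when it is enabled and its output
    passes a safety check; otherwise fall back to the manual pathway, as the
    policy requires for critical functions."""
    if ai_enabled:
        try:
            result = ai_backend(task)
            if is_safe(result):
                return result
            # Unsafe output: fall through to the manual pathway.
        except Exception:
            pass  # AI unavailable or failed: fall through to the manual pathway.
    return manual_backend(task)
```

The same shape covers all three policy triggers: the system being paused or retired (`ai_enabled` is false), unsafe output (`is_safe` fails), and unavailability (the backend raises).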
Data handling, privacy, and data governance
- We avoid entering personal, confidential, commercially sensitive, privileged, or secret material into public AI services. If sensitive inputs are essential, we use enterprise controls, data minimisation, masking, contractual safeguards, and documented justification.
- Privacy impact checks are mandatory for AI uses that involve personal information, profiling, monitoring, or automated analysis of people. These checks align with the Privacy Act 1988 (Cth), the Australian Privacy Principles, and client-specific privacy obligations.
- We document the data categories, sources, quality expectations, provenance, access permissions, retention settings, and deletion requirements for material AI systems.
- AI prompts, outputs, evaluations, logs, and related records are retained only for as long as needed for delivery, assurance, audit, security, or legal purposes.
Security, testing, and monitoring
- AI-assisted code passes the same static analysis, dependency scanning, accessibility checks, code review, and infrastructure reviews as manually written code.
- AI systems are tested before production use against documented acceptance criteria, including accuracy, reliability, security, privacy, accessibility, bias, misuse, and failure-mode checks that are proportionate to risk.
- We monitor material AI systems after deployment for behaviour changes, drift, new risks, vendor changes, data leakage, prompt injection, model misuse, unsafe automation, and abuse of API credentials.
- Incidents are handled through the TrueCMS security response plan, including containment, investigation, lessons learned, and client or regulator notification where required.
- Access to AI tools is role-based, reviewed periodically, and protected with multi-factor authentication wherever the vendor supports it.
Third-party and supplier management
- We prefer vendors that provide clear controls over data retention, training use, hosting regions, audit logs, security controls, and deletion requests.
- Vendors must contractually commit not to train models on our prompts, outputs, client data, or confidential material unless we agree in writing and the relevant client obligations allow it.
- The AI register records supplier details, hosting location, data categories, contract owner, accountable owner, review date, and material changes so we can audit and reassess risk quickly.
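As one illustration of what a register entry might capture, the sketch below mirrors the fields named in this policy; the field names, values, and review logic are hypothetical, not the actual register schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegisterEntry:
    """Hypothetical AI register entry mirroring the fields this policy names."""
    system_name: str
    supplier: str
    hosting_location: str
    data_categories: list
    contract_owner: str
    accountable_owner: str
    review_date: date
    material_changes: list = field(default_factory=list)

    def review_due(self, today: date) -> bool:
        """True when the entry is due for reassessment."""
        return today >= self.review_date

entry = RegisterEntry(
    system_name="coding-copilot",
    supplier="ExampleVendor",         # hypothetical supplier
    hosting_location="au-southeast",  # hypothetical hosting region
    data_categories=["source code"],
    contract_owner="Delivery Lead",
    accountable_owner="CTO",
    review_date=date(2026, 10, 18),
)
print(entry.review_due(date(2027, 1, 1)))  # -> True
```

Keeping the review date and material-changes history on each entry is what makes the "reassess risk quickly" commitment auditable: overdue or recently changed entries can be listed with a single query.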
Transparency and contestability
- We tell clients when AI contributes materially to deliverables, explain the role it played, and identify the human review and sign-off process.
- We disclose AI interactions or AI-generated content to end users where disclosure is needed to avoid confusion, support trust, meet client obligations, or comply with applicable government standards.
- When AI influences advice, recommendations, or decisions, we document the human sign-off and keep artefacts that show how the conclusion was reached.
- Concerns about AI use can be raised through the project contact or by emailing [email protected]. We investigate concerns promptly and provide a human response.
Training and acceptable use
- Team members receive onboarding and refresher guidance on responsible AI use, including privacy obligations, security, bias awareness, accessibility, prompt safety, model limitations, and human oversight.
- Acceptable uses include coding assistance, documentation drafts, research support, testing suggestions, content ideation, summarisation, and support workflow improvements, subject to human review and data handling rules.
- Prohibited uses include uploading secrets or unapproved personal data to public tools, bypassing security review, deploying autonomous production changes without approval gates, impersonating humans, or using AI to make high-impact decisions without prior assessment and approval.
Intellectual property and open source
- We respect third-party licence obligations, especially in the Drupal, GovCMS, and open-source ecosystems.
- AI-assisted contributions are reviewed for licence compatibility, attribution needs, copied code, and client IP obligations before release.
- We track meaningful references, prompts, generated artefacts, and approvals where needed to support auditability or remediation.
Record keeping and review cadence
- We keep appropriate records covering AI systems and use cases, accountable owners, risk and impact assessments, approvals, tests, monitoring results, human sign-off, supplier reviews, transparency notices, incidents, and remediation actions.
- The CTO reviews this policy at least annually or sooner if Australian law, government guidance, standards, supplier behaviour, client obligations, or our AI usage changes materially.
- Suggestions for improvement can be emailed to [email protected]. We publish the effective and updated dates so stakeholders can see when changes occur.