AI Governance and Safety Statement

Effective Date: 1 December 2025
Version: 2.0
Issuing Entities:

  • Strategic Global Holdings Pty Ltd (ACN 693 256 503)
  • Superspeed.ai Pty Ltd (ACN 660 530 090), trading as Cushi.ai / Cushi.app

Governance Oversight: Group CEO, Strategic Global Holdings Pty Ltd
Review Cycle: Annual or earlier if required by law or operational change

1. PURPOSE & SCOPE

This AI Governance & Safety Statement defines how Superspeed.ai Pty Ltd (“Cushi”) manages, oversees, evaluates, and safeguards all Artificial Intelligence (“AI”) systems across the organisation. It complements the Privacy Policy, Terms of Use, and the AI Transparency Notice.

This Statement aligns with:
• ISO/IEC 42001 (AI Management Systems)
• NIST AI Risk Management Framework
• EU AI Act principles
• Australian AI Ethics Principles
• OECD AI Principles
• Singapore AI Governance Model Framework

This Statement applies to all Cushi-operated AI systems, including LLM-based guidance, personalisation engines, recommendation systems, safety filters, and automated workflows.

2. GOVERNANCE STRUCTURE & ACCOUNTABILITY

• Chief AI Governance Officer (CAGO): Accountable for global AI risk management, safety governance, incident response, and ISO 42001 alignment.
• AI Safety & Risk Committee: Reviews model behaviour, approves use cases, and monitors compliance.
• AI Ethics & Compliance Lead: Ensures fairness, transparency, privacy-by-design, and regulatory compliance.
• Engineering & Data Science Leads: Handle lifecycle security, robustness testing, and secure development.

3. AI SYSTEM RISK CLASSIFICATION (EU AI ACT MODEL)

Cushi classifies all AI systems using a risk-based model (an illustrative encoding of these tiers appears at the end of this section):
• Unacceptable-Risk: Not deployed.
• High-Risk: Not deployed.
• Moderate-Risk: Personalisation and recommendation systems (human oversight required).
• Low-Risk: Conversational assistance and content generation.
• Minimal-Risk: UI enhancements and analytics-driven suggestions.

Cushi does not deploy AI in high-risk contexts (employment decisions, biometric identification, political targeting, medical decisions, financial scoring).
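
For illustration only, the tiers above could be encoded roughly as follows. The tier names mirror this Statement; the use-case labels and the functions deployment_permitted and human_oversight_required are hypothetical examples, not Cushi's actual implementation.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"   # never deployed
        HIGH = "high"                   # never deployed
        MODERATE = "moderate"           # deployed only with human oversight
        LOW = "low"                     # e.g. conversational assistance
        MINIMAL = "minimal"             # e.g. UI enhancements

    # Hypothetical mapping of example use cases to tiers, following Section 3.
    USE_CASE_TIERS = {
        "personalisation": RiskTier.MODERATE,
        "recommendations": RiskTier.MODERATE,
        "conversational_assistance": RiskTier.LOW,
        "content_generation": RiskTier.LOW,
        "ui_enhancements": RiskTier.MINIMAL,
    }

    def deployment_permitted(use_case: str) -> bool:
        """Only tiers this Statement allows may be deployed."""
        tier = USE_CASE_TIERS.get(use_case)
        if tier is None:
            return False  # unclassified use cases are not deployable
        return tier not in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)

    def human_oversight_required(use_case: str) -> bool:
        """Moderate-risk systems require human oversight."""
        return USE_CASE_TIERS.get(use_case) == RiskTier.MODERATE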

4. AI LIFECYCLE MANAGEMENT

Cushi applies full lifecycle governance:
• Design: Privacy-by-design, threat modelling, harm analysis
• Development: Secure SDLC, LLM-specific controls
• Testing: Bias, fairness, robustness, adversarial testing
• Deployment: Input/output filtering, access controls, rate limiting
• Monitoring: Drift detection, misuse detection, safety monitoring
• Review: Periodic evaluation and internal audit of AI systems

5. SAFETY & RISK CONTROLS

Cushi implements the following safety controls (a simplified sketch of the input/output filtering pattern follows this list):
• Harm-prevention filters
• Moderation systems
• Guardrails for safe generation
• Abuse detection mechanisms
• Logging and audit trails
• Red teaming for edge cases
• Fail-safe behaviours
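
A minimal sketch of the input/output filtering pattern is shown below. The blocked patterns, the moderate function, and the fallback messages are hypothetical placeholders; production guardrails rely on dedicated moderation systems rather than simple pattern matching.

    import re
    from dataclasses import dataclass

    # Hypothetical patterns only; real guardrails use maintained policy lists
    # and moderation models, not ad hoc regexes.
    BLOCKED_PATTERNS = [
        re.compile(r"(?i)\b(card number|account password)\b"),
    ]

    @dataclass
    class ModerationResult:
        allowed: bool
        reason: str = ""

    def moderate(text: str) -> ModerationResult:
        """Screen a prompt or a generated response before it is used or shown."""
        for pattern in BLOCKED_PATTERNS:
            if pattern.search(text):
                return ModerationResult(allowed=False, reason=pattern.pattern)
        return ModerationResult(allowed=True)

    def guarded_generate(prompt: str, generate) -> str:
        """Wrap a model call with input and output filtering (fail-safe on block)."""
        if not moderate(prompt).allowed:
            return "This request cannot be processed."   # fail-safe behaviour
        response = generate(prompt)                       # underlying model call
        if not moderate(response).allowed:
            return "The response was withheld by a safety filter."
        return response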

6. HUMAN OVERSIGHT REQUIREMENTS

• Human override available at all times
• AI outputs are advisory
• Supervisory review of automated workflows
• Users may request explanation of AI logic
• Automated recommendations can be disabled

7. FAIRNESS, BIAS & ROBUSTNESS TESTING

Cushi performs:
• Bias detection across relevant variables
• Fairness evaluation
• Robustness and adversarial input testing
• Drift monitoring
• Transparency audits

Cushi does not infer protected characteristics without explicit consent.
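
As a simplified example of the kind of fairness evaluation listed above, a demographic parity check can be computed as below. The metric choice, the synthetic data, and any acceptance threshold are illustrative assumptions, not a description of Cushi's test suite.

    from collections import defaultdict

    def selection_rates(records):
        """records: iterable of (group_label, outcome) pairs, outcome in {0, 1}."""
        totals, positives = defaultdict(int), defaultdict(int)
        for group, outcome in records:
            totals[group] += 1
            positives[group] += outcome
        return {g: positives[g] / totals[g] for g in totals}

    def demographic_parity_gap(records) -> float:
        """Largest difference in positive-outcome rates between any two groups."""
        rates = selection_rates(records)
        return max(rates.values()) - min(rates.values())

    # Illustrative usage with synthetic data.
    sample = [("group_a", 1), ("group_a", 0), ("group_b", 1), ("group_b", 1)]
    print(demographic_parity_gap(sample))  # 0.5 for this synthetic sample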

8. PROHIBITED AI USES

Cushi prohibits AI use for:
• Employment suitability decisions
• Identity verification or surveillance
• Medical or diagnostic outcomes
• Political persuasion
• Decisions with legal/similar effect
• Behavioural profiling of minors

9. VENDOR & THIRD-PARTY MODEL OVERSIGHT

Cushi ensures:
• Due diligence for all AI vendors
• Data residency and transfer safeguards
• Contractual safety/privacy controls
• Model evaluation prior to adoption
• Continuous monitoring of provider behaviour

10. AI INCIDENT RESPONSE & ESCALATION

Cushi maintains a dedicated AI Incident Response plan including:
• Rapid triage of AI safety/misuse events
• Escalation to AI Safety & Risk Committee
• Corrective action and remediation
• User notification where required
• Regulator notification where required
• Post-incident review

Report AI issues: safety@cushi.ai
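
For illustration, a triage record along the following lines could support the escalation steps above; the severity levels, routing rule, and field names are hypothetical and do not describe Cushi's actual incident tooling.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from enum import Enum

    class Severity(Enum):
        LOW = 1        # minor quality or policy deviation
        MEDIUM = 2     # misuse or safety concern, no user harm identified
        HIGH = 3       # potential user harm; escalate to the AI Safety & Risk Committee
        CRITICAL = 4   # confirmed harm or legal exposure; assess regulator notification

    @dataclass
    class AIIncident:
        summary: str
        severity: Severity
        reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        escalated: bool = False

        def triage(self) -> None:
            """Escalate HIGH and CRITICAL incidents per Section 10."""
            if self.severity in (Severity.HIGH, Severity.CRITICAL):
                self.escalated = True

    incident = AIIncident("Guardrail bypass reported via safety@cushi.ai", Severity.HIGH)
    incident.triage()
    assert incident.escalated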

11. RECORD-KEEPING & AUDITABILITY

Cushi maintains:
• Model cards
• Evaluation records
• Risk assessments
• Dataset provenance records
• Input/output logs
• ISO 42001 audit evidence
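
The records listed above could, for example, be captured in a structured model card such as the hypothetical sketch below; the field names and values are illustrative, not a mandated schema.

    from dataclasses import dataclass, field

    @dataclass
    class ModelCard:
        """Minimal model card capturing the audit evidence listed in Section 11."""
        model_name: str
        version: str
        intended_use: str
        risk_tier: str                                    # per the Section 3 classification
        datasets: list = field(default_factory=list)      # dataset provenance records
        evaluations: list = field(default_factory=list)   # bias/robustness results
        known_limitations: list = field(default_factory=list)

    card = ModelCard(
        model_name="cushi-guidance-assistant",            # hypothetical name
        version="2025.12",
        intended_use="Advisory conversational guidance with human oversight",
        risk_tier="low",
        datasets=["licensed-corpus-v3"],
        evaluations=["fairness-eval-2025-Q4"],
        known_limitations=["Outputs are advisory only"],
    )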

12. RELATIONSHIP TO OTHER POLICIES

This Statement should be read together with:
• Privacy Policy
• AI Transparency Notice
• Terms of Use
• Data Processing Agreement
• Responsible Disclosure Policy
• Data Security & Protection Policy

VERSION CONTROL & GOVERNANCE

© 2025 Superspeed.ai Pty Ltd (ACN 660 530 090), trading as Cushi.ai / Cushi.app.
Part of the Strategic Global Holdings Pty Ltd Group (ACN 693 256 503). All rights reserved.
Privacy: privacy@cushi.ai | Security: security@cushi.ai | Support: support@cushi.ai
