AI Governance & Safety Statement

Seller: Superspeed.ai Pty Ltd
IP Owner & Licensor: Strategic Global Holdings Pty Ltd (ACN 693 256 503)
Effective Date: 1 January 2025
Version: 4.95 Ultra-Final
Document Owner: Chief AI Governance Officer (CAGO), Superspeed.ai Pty Ltd
Review Cycle: Annual or upon release of updated AI regulations or standards

1. Definitions

This Statement uses the following defined terms: AI System, Machine Learning, Automated Decision-Making (ADM), Profiling, Model Drift, Training Data, Inference, AI-Assisted Output, High-Risk Processing, Human-in-the-Loop, and Explainability.

2. Purpose & Scope

This Statement outlines how AI systems are governed, managed, monitored, and reviewed across the bookstore platform, including recommendations, customer support tools, search optimisation, and fraud-monitoring systems.

3. AI Governance Principles

Superspeed.ai adopts governance aligned with:

  • OECD AI Principles
  • ISO/IEC 42001 (AI Management Systems)
  • EU AI Act (anticipatory compliance; not currently binding on the Seller)
  • NIST AI Risk Management Framework

Key principles include fairness, transparency, accountability, safety, privacy, human oversight, and data minimisation.

4. AI Use Cases on the Platform

AI is used for:

  • Personalised book recommendations
  • Search ranking optimisation
  • Fraud detection and anomaly monitoring
  • Automated support suggestions

AI is not used for:

  • Credit decisions
  • Legal determinations
  • Employment or eligibility screening
  • Any high-risk or consequential automated decisions

5. Human Oversight (Human-in-the-Loop)

All AI-assisted outputs are subject to human review. Staff may override any AI suggestion. Critical decisions (refunds, account restrictions, escalations) require direct human intervention.
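
A minimal sketch of how such a gate might sit in the platform's service layer is shown below. The CRITICAL_ACTIONS set, the AISuggestion structure, and the apply_suggestion function are illustrative assumptions, not a description of the production system.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: action names and data shapes are assumptions for this sketch.
CRITICAL_ACTIONS = {"refund", "account_restriction", "escalation"}

@dataclass
class AISuggestion:
    action: str        # e.g. "refund"
    rationale: str     # model-generated justification shown to the reviewer
    confidence: float  # model confidence score in [0, 1]

def apply_suggestion(suggestion: AISuggestion, human_decision: Optional[str]) -> str:
    """Return the action actually taken.

    Critical decisions are never executed without an explicit human decision,
    and a staff member may override any AI suggestion.
    """
    if suggestion.action in CRITICAL_ACTIONS and human_decision is None:
        return "pending_human_review"  # hold until a staff member decides
    if human_decision is not None:
        return human_decision          # human override always takes precedence
    return suggestion.action           # low-risk suggestion may proceed automatically
```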

6. Data Governance & Protection

AI systems operate in accordance with the Global Privacy Policy, GDPR/UK Addendum, CCPA/CPRA Addendum, PIPL Addendum, PDPA Addendum, and the Cookie Policies. No Sensitive Personal Data is used in AI models.

7. Training Data Controls

Training data is curated to:

  • Exclude discriminatory or biased sources
  • Maintain data minimisation
  • Prevent ingestion of copyrighted Digital Content unless explicitly permitted

Digital books or content sold on the platform are never used for training.
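
As an illustration of how these controls could be applied at ingestion time, the sketch below filters candidate records before they reach a training pipeline. The record fields, the platform_catalogue_ids set, and the flagged_sources set are hypothetical.

```python
from typing import Dict, Iterable, Iterator, Set

def filter_training_records(records: Iterable[Dict],
                            platform_catalogue_ids: Set[str],
                            flagged_sources: Set[str]) -> Iterator[Dict]:
    """Yield only records that satisfy the training-data controls described above."""
    for record in records:
        if record.get("content_id") in platform_catalogue_ids:
            continue  # never train on digital books or content sold on the platform
        if record.get("source") in flagged_sources:
            continue  # exclude sources flagged as discriminatory or biased
        if record.get("copyrighted") and not record.get("licence_permits_training"):
            continue  # copyrighted Digital Content only with explicit permission
        # Data minimisation: retain only the fields the model actually needs
        yield {key: record[key] for key in ("text", "category") if key in record}
```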

8. Explainability & Transparency

Users may request an explanation of AI-assisted interactions. Documentation is maintained describing:

  • How models influence recommendations
  • When AI is and is not used
  • Which data categories AI relies on
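
One way to keep this documentation consistent is to attach a structured explanation record to each AI-assisted output; the fields below are an assumed shape for illustration, not the platform's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExplanationRecord:
    """Structured explanation attached to an AI-assisted output (illustrative)."""
    feature: str                    # e.g. "book_recommendations" or "search_ranking"
    ai_involved: bool               # whether AI contributed to this output
    data_categories: List[str] = field(default_factory=list)  # e.g. ["purchase_history"]
    model_version: str = "unknown"  # version referenced in audits and user explanations
    summary: str = ""               # plain-language description of the model's influence
```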

9. Bias Detection & Mitigation

We perform periodic assessments to detect and reduce bias using:

  • Model drift detection
  • Fairness assessments
  • Diverse evaluation datasets
  • Manual review of flagged cases
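
As one concrete example of the drift detection listed above, the Population Stability Index (PSI) compares a reference distribution captured at training time with current production data. The implementation below is a generic sketch, and the 0.2 alert threshold is a common rule of thumb rather than a figure taken from this Statement.

```python
import numpy as np

def population_stability_index(reference: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference (training-time) distribution and production data."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor proportions to avoid log(0) for empty bins
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, 10_000)  # scores at training time
    current = rng.normal(0.3, 1.2, 10_000)    # scores observed in production
    psi = population_stability_index(reference, current)
    print(f"PSI={psi:.3f}", "-> flag for review" if psi > 0.2 else "-> stable")
```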

10. Monitoring & Quality Assurance

AI systems undergo:

  • Continuous monitoring
  • Error rate analysis
  • Performance audits
  • Regular versioning reviews
  • Controlled deployment with rollback capability
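
The sketch below shows one shape the error-rate analysis and rollback trigger could take: outcomes for the live model are tracked in a rolling window, and a rollback to the previous version is requested when the error rate breaches a threshold. The window size, threshold, and version identifiers are assumptions for illustration.

```python
from collections import deque

class ModelMonitor:
    """Tracks a rolling error rate for the deployed model version (illustrative)."""

    def __init__(self, current_version: str, previous_version: str,
                 window: int = 1000, max_error_rate: float = 0.05):
        self.current_version = current_version
        self.previous_version = previous_version
        self.outcomes = deque(maxlen=window)  # 1 = error, 0 = success
        self.max_error_rate = max_error_rate

    def record(self, is_error: bool) -> None:
        self.outcomes.append(1 if is_error else 0)

    def error_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def needs_rollback(self) -> bool:
        # Require a full window before acting, so a few early errors cannot trigger rollback
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.error_rate() > self.max_error_rate)
```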

11. Incident Response (AI-Specific)

AI incidents—such as unexpected model behaviour, fairness failures, or degraded accuracy—trigger:

  • Isolation or rollback of the model
  • Human review and override
  • Root-cause investigation
  • User notification where applicable
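
These steps could be coordinated by a simple incident handler along the lines of the sketch below, which records the isolation, fallback, review, investigation, and notification actions in order; the function name and action labels are hypothetical.

```python
import logging
from datetime import datetime, timezone
from typing import Dict, List

log = logging.getLogger("ai_incident")

def handle_ai_incident(model_id: str, incident_type: str, notify_users: bool) -> Dict:
    """Illustrative AI incident workflow: isolate, fall back, investigate, notify."""
    actions: List[str] = []
    actions.append("model_isolated")         # remove the affected model from the serving path
    actions.append("fallback_enabled")       # serve non-personalised defaults in the meantime
    actions.append("human_review_assigned")  # route affected outputs to staff for review/override
    actions.append("root_cause_opened")      # open a root-cause investigation ticket
    if notify_users:
        actions.append("user_notification_queued")
    record = {
        "model_id": model_id,
        "incident_type": incident_type,  # e.g. "fairness_failure", "accuracy_degradation"
        "detected_at": datetime.now(timezone.utc).isoformat(),
        "actions": actions,
    }
    log.warning("AI incident recorded: %s", record)
    return record
```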

12. Prohibited AI Uses

Superspeed.ai prohibits:

  • AI systems that make legally binding decisions without human oversight
  • Use of biometric or facial recognition data
  • Emotion inference, predictive profiling, or high-risk biometric processing
  • Training AI on customer-owned or licensed Digital Content

13. Cross-Document Integration

This Statement aligns with: AI Explainability & Human Oversight Notice, Automated Decision-Making & Profiling Disclosure, Privacy Policy, Security Overview, Website Terms of Use, and Business Continuity Statement.
