Artificial Intelligence Transparency Policy

Policy document · Version 1.0 · Effective April 2026

Last updated: April 2026

Introduction

This Artificial Intelligence Transparency Policy ("Policy") is published by ULEARNA TECHNOLOGY LTD, a company registered in England and Wales ("the Company", "we", "us", "our"), operator of the CLAST.io educational management platform ("the Platform"). This Policy sets out, in full: the nature of all artificial intelligence ("AI") technologies integrated within the Platform; the purposes for which those technologies are employed; the third-party service providers engaged in the delivery of AI functionality; the technical and organisational measures applied to secure data processed in connection with those AI services; and the Company's roadmap for the future development of its AI infrastructure.

This Policy is intended to satisfy the transparency and disclosure requirements of applicable data protection legislation, including but not limited to the United Kingdom General Data Protection Regulation ("UK GDPR"), the Data Protection Act 2018, and equivalent frameworks in the jurisdictions in which the Platform operates. It further fulfils the disclosure requirements stipulated by third-party platform verification programmes, including those administered by Google LLC in connection with the use of Google Workspace APIs and related services.

Users, institutional administrators, and data subjects are encouraged to read this Policy in conjunction with the Company's Privacy Policy and Terms of Use, both of which are published at clast.io. In the event of any conflict between this Policy and those documents, this Policy shall prevail with respect to matters concerning AI and automated processing.

Definitions

"Artificial Intelligence" or "AI": Any machine-based system that uses computational methods to generate outputs — including text, recommendations, predictions, or decisions — from inputs processed in a manner that approximates cognitive functions associated with human intelligence.

Third-Party AI Provider: Any external organisation that supplies AI model capabilities to the Platform under a commercial service agreement, including but not limited to OpenAI, Anthropic, and Google LLC.

Personal Data: Any information relating to an identified or identifiable natural person, as defined under UK GDPR Article 4(1).

Processing: Any operation or set of operations performed on personal data, whether or not by automated means, including collection, storage, use, disclosure, or erasure.

Self-Hosted Model: An AI language model deployed and operated entirely within the Company's own controlled infrastructure, without transmitting data to external third-party API endpoints.

RAG (Retrieval-Augmented Generation): A technique whereby an AI model generates responses by first retrieving contextually relevant information from a defined knowledge base, thereby grounding outputs in verified institutional content.

Biometric Data: Personal data resulting from specific technical processing relating to the physical characteristics of a natural person that allows or confirms unique identification, including fingerprint templates.

Operator: An educational institution, school, college, or university that has subscribed to the Platform and is responsible for the processing of student and staff data within its account.

AI Services Integrated Within the Platform

The Company does not develop, train, or operate any proprietary AI model. All AI functionality made available through the Platform is delivered via the APIs of the following third-party providers, each engaged under a formal data processing agreement that imposes binding obligations regarding the handling, security, and permissible use of any data transmitted to their systems.

OpenAI, L.L.C.

Primary text AI service

Service endpoint: api.openai.com · Models: GPT-4o and GPT series

OpenAI provides the primary large language model capability used within the Platform. This integration powers AI-assisted text composition, content generation, natural language understanding, intelligent form assistance, and multilingual communication drafting for administrative staff and educators. All requests to the OpenAI API are made server-side via the Company's backend infrastructure; end-user devices do not communicate directly with OpenAI endpoints.

Text generation · Writing assistance · Multilingual drafting · Natural language processing · Content suggestions

Anthropic, PBC

Agentic task automation

Service: Claude API · Integration method: Claude Code agentic framework

Anthropic's Claude models are employed to power agentic and multi-step automated workflows within the Platform. This includes backend task orchestration, document generation agents, administrative process automation, and complex reasoning tasks that require sequential decision-making across multiple steps. Agentic operations execute within the Company's controlled server environment and are subject to defined operational boundaries and human-review checkpoints.

AI agents · Workflow automation · Document generation · Multi-step reasoning · Administrative processing

Google LLC (Google AI)

AI tutoring and semantic search

Service: Google AI API · Model: Gemini 2.0 Flash · Technique: RAG (Retrieval-Augmented Generation)

Google's Gemini 2.0 Flash model is integrated to power the Platform's AI tutoring module. This module employs a retrieval-augmented generation architecture, whereby student queries are matched against a vectorised index of the relevant institution's approved curriculum materials, and the retrieved context is supplied to the model to generate grounded, curriculum-aligned responses. Semantic embeddings and vector indices are stored within the Company's own managed database infrastructure (pgvector). No student identity data is transmitted to Google AI endpoints.

AI tutoring · Retrieval-augmented generation · Semantic search · Curriculum Q&A · Vector embeddings
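The retrieval step of this architecture can be sketched as follows. This is an illustrative sketch only, not Platform code: a toy in-memory index and a character-frequency `embed()` stand in for the pgvector store and a real embedding model, and all names are hypothetical.

```python
# Sketch of the RAG retrieval step: embed the query, rank curriculum
# chunks by cosine similarity, and assemble a grounded prompt.
import math

def embed(text: str) -> list[float]:
    # Placeholder embedding: a 26-dim letter-frequency vector. A real
    # system would call an embedding model; this just keeps the sketch
    # runnable and self-contained.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Vectorised index of approved curriculum chunks (pgvector in production).
CURRICULUM = [
    "Photosynthesis converts light energy into chemical energy.",
    "The French Revolution began in 1789.",
]
INDEX = [(chunk, embed(chunk)) for chunk in CURRICULUM]

def retrieve(query: str, k: int = 1) -> list[str]:
    qv = embed(query)
    ranked = sorted(INDEX, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

def build_prompt(query: str) -> str:
    # Only the anonymised query and the retrieved curriculum text are
    # placed in the prompt; no student identity data is included.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

The design point is that the model only ever sees the query plus institution-approved context, which is what keeps responses curriculum-aligned.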

Disclosure notice

The Platform integrates OpenAI as a third-party AI service provider. This disclosure is made in accordance with Google's application verification requirements and applicable data protection obligations. Users may exercise their rights in relation to AI-processed data by contacting the Company at privacy@clast.io.

Purposes and Legal Basis for AI Processing

The following table sets out each AI-enabled feature of the Platform, the corresponding processing purpose, and the lawful basis under UK GDPR upon which that processing is conducted.

AI-assisted writing and content drafting: To assist authorised staff in drafting institutional communications, reports, notices, and educational materials efficiently and accurately. Lawful basis: legitimate interests of the Operator.

Student AI tutoring module: To provide students with contextual, curriculum-aligned responses to academic queries, enhancing learning outcomes. Lawful basis: performance of the educational services contract; consent where required.

Administrative workflow automation: To automate repetitive administrative tasks including report generation, attendance summarisation, scheduling assistance, and payroll data processing. Lawful basis: legitimate interests of the Operator.

Assessment and marking support: To assist educators in generating assessment rubrics, question banks, and marking criteria. All AI outputs require human review and approval prior to use. Lawful basis: legitimate interests.

Multilingual notification drafting: To assist in the composition of communications to parents, students, and staff in multiple languages. Lawful basis: legitimate interests; performance of contract.

Institutional analytics and reporting: To summarise aggregated, anonymised institutional data on attendance, academic performance, and financial metrics for leadership decision support. Lawful basis: legitimate interests.

Biometric attendance processing (ZKTeco): To record and process attendance status data derived from on-device biometric verification. Raw biometric data (fingerprint templates) is processed locally on the physical device only. Lawful basis: explicit consent of the data subject; where applicable, employment law obligations.

Principle of human oversight

The Company affirms that no AI system integrated within the Platform is permitted to make final autonomous decisions in respect of any individual's educational records, academic progression, financial account, employment status, or access rights. All AI outputs are advisory in nature and subject to review and approval by a competent human administrator or authorised staff member prior to being acted upon.
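The human-oversight principle above can be expressed in code as a simple approval gate. The following is an illustrative sketch only, under assumed names (`AiDraft`, `publish`, the reviewer identifier): it is not the Platform's implementation, merely a minimal demonstration that an AI output cannot be acted upon until a human reviewer has signed off.

```python
# Sketch of a human-approval gate: AI drafts are advisory and cannot be
# published until a named reviewer has approved them.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AiDraft:
    content: str
    approved: bool = False
    approved_by: Optional[str] = None

    def approve(self, reviewer: str) -> None:
        # Records explicit human sign-off on the advisory output.
        self.approved = True
        self.approved_by = reviewer

def publish(draft: AiDraft) -> str:
    # Advisory outputs are blocked until a human reviewer signs off.
    if not draft.approved:
        raise PermissionError("AI output requires human approval before use")
    return draft.content
```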

Data Security Measures in AI Processing

The Company applies a layered, defence-in-depth security architecture to protect all data processed in connection with the AI features of the Platform. The technical and organisational measures described in this Section apply to data both at rest within the Company's infrastructure and in transit to and from third-party AI service endpoints.

4.1 Data Minimisation and Anonymisation

Prior to any data being transmitted to a third-party AI provider, the Company applies a data minimisation process. Only the minimum information necessary to fulfil the specific AI task is included in any API request. Where the AI task can be completed using anonymised or pseudonymised data, direct identifiers (including names, student identification numbers, and contact details) are stripped from the payload before transmission. Student records, financial data, and biometric data are never included in AI API payloads.
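A minimisation step of this kind can be sketched as below. This is an illustrative sketch only: the field names, the identifier list, and `minimise_payload` are hypothetical, not the Platform's actual schema.

```python
# Sketch of payload minimisation: keep only the fields the AI task
# needs, and drop direct identifiers even if they were requested.
DIRECT_IDENTIFIERS = {"name", "student_id", "email", "phone"}

def minimise_payload(record: dict, task_fields: set) -> dict:
    """Return only task-relevant, non-identifying fields."""
    return {
        key: value
        for key, value in record.items()
        if key in task_fields and key not in DIRECT_IDENTIFIERS
    }

record = {
    "name": "A. Student",
    "student_id": "S-1042",
    "email": "a.student@example.edu",
    "draft_text": "Please review the attached report.",
}
payload = minimise_payload(record, task_fields={"draft_text"})
```

Note that the identifier block list is applied last, so a direct identifier is stripped even when a task mistakenly asks for it.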

4.2 Encryption in Transit

All data transmitted between the Company's backend servers and third-party AI provider endpoints is encrypted using Transport Layer Security (TLS) version 1.2 or higher. The Company does not permit unencrypted API communications. Certificate validation is enforced at the application level to prevent man-in-the-middle interception.
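The transport requirements above correspond, in Python's standard library, to an `ssl` context configured as follows. This is an illustrative sketch of the policy, not the Company's client code; the function name is hypothetical.

```python
# Sketch of an outbound TLS policy: enforce TLS 1.2 or higher with
# full certificate and hostname validation.
import ssl

def build_ai_client_context() -> ssl.SSLContext:
    # create_default_context() enables CERT_REQUIRED and hostname
    # checking by default, preventing man-in-the-middle interception.
    context = ssl.create_default_context()
    # Refuse any protocol version older than TLS 1.2.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context

ctx = build_ai_client_context()
```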

4.3 Encryption at Rest

All data stored within the Company's managed database infrastructure — including institutional records, student data, curriculum content, and AI-generated outputs awaiting human review — is encrypted at rest using AES-256 encryption. Database encryption keys are managed via a dedicated key management service and are rotated on a defined schedule.

4.4 API Key and Credential Security

Third-party AI provider API keys and service credentials are stored exclusively within the Company's secure secrets management system (environment-isolated secret stores). Keys are never embedded in application source code, version control systems, or client-side assets. Access to API credentials is restricted to authenticated backend service accounts operating under the principle of least privilege. Keys are rotated periodically and upon any suspected compromise.
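In practice, credential loading of this kind typically looks like the following sketch, shown here with an environment variable standing in for the environment-isolated secret store. The variable name and error class are hypothetical.

```python
# Sketch of secrets handling: credentials come from the environment at
# runtime and are never hard-coded; a missing secret fails fast.
import os

class MissingSecretError(RuntimeError):
    pass

def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    key = os.environ.get(env_var)
    if not key:
        # Fail fast: never fall back to a hard-coded default credential.
        raise MissingSecretError(f"secret {env_var} is not configured")
    return key
```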

4.5 Access Controls and Authentication

Access to AI features within the Platform is subject to role-based access control (RBAC). Only users holding appropriate authorised roles within their institution's account may invoke AI-assisted functionality. All user sessions are authenticated via secure, expiring JSON Web Tokens (JWT). Administrative access to backend AI processing infrastructure is restricted to authorised Company personnel and is protected by multi-factor authentication (MFA).
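A role-based gate of the kind described can be sketched as follows. This is illustrative only: the role names are hypothetical, and the `claims` dictionary stands in for a JWT payload whose signature has already been verified upstream.

```python
# Sketch of an RBAC check in front of AI features: the caller must hold
# an authorised role and present an unexpired token.
import time

AI_AUTHORISED_ROLES = {"admin", "teacher"}

def can_invoke_ai(claims: dict) -> bool:
    # claims stands in for a verified JWT payload; cryptographic
    # signature verification happens before this point.
    not_expired = claims.get("exp", 0) > time.time()
    role_ok = claims.get("role") in AI_AUTHORISED_ROLES
    return not_expired and role_ok
```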

4.6 Biometric Data Isolation

The Platform's biometric attendance integration (ZKTeco hardware) is architecturally designed to ensure that raw biometric data — specifically, fingerprint templates — is processed and stored exclusively on the physical attendance device. The device transmits to the Platform only the derived attendance event record (comprising employee or student identifier, timestamp, and attendance status). No biometric template, raw fingerprint image, or biometric hash is ever transmitted to, or stored within, the Platform's servers or any AI processing pipeline.
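The shape of the derived event record can be sketched as below. Field names are hypothetical; the point of the sketch is that the record carries only the three derived fields and no biometric template at all.

```python
# Sketch of the attendance event synced from the device: identifier,
# timestamp, and status only. No biometric data appears in the record.
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class AttendanceEvent:
    subject_id: str   # employee or student identifier
    timestamp: str    # ISO 8601 event time
    status: str       # e.g. "check_in" / "check_out"

event = AttendanceEvent("EMP-204", "2026-04-01T08:58:00Z", "check_in")
```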

4.7 Prohibition on AI Training Use

The Company has confirmed, via the applicable data processing agreements and terms of service with each third-party AI provider, that no data submitted by the Company or its Operators via API calls is used by those providers to train, fine-tune, or improve their foundational AI models. This restriction applies to all three current providers: OpenAI, Anthropic, and Google LLC.

4.8 Audit Logging and Monitoring

The Company maintains server-side audit logs of all AI feature invocations. Logs record the initiating user role, the AI service called, the timestamp, and the operational outcome, without retaining the substantive content of AI prompts or responses beyond the session period. Logs are retained for a minimum of twelve (12) months and are used for security monitoring, anomaly detection, and incident response purposes.
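A log entry of the kind described can be sketched as follows. This is illustrative only: the field names and `log_ai_invocation` helper are hypothetical, and the key property demonstrated is that prompt and response content never enter the log record.

```python
# Sketch of content-free audit logging: each AI invocation records who
# (by role), what service, when, and the outcome, but never the prompt
# or response text.
import datetime

AUDIT_LOG: list = []

def log_ai_invocation(user_role: str, service: str, outcome: str) -> None:
    AUDIT_LOG.append({
        "user_role": user_role,  # initiating role, not personal identity
        "service": service,      # e.g. "openai", "claude", "gemini"
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "outcome": outcome,      # "success" / "error"
        # Deliberately no "prompt" or "response" fields.
    })

log_ai_invocation("teacher", "openai", "success")
```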

Security measures summary

TLS 1.2+ encryption: All API communications with AI providers are encrypted in transit.

AES-256 at-rest encryption: All stored Platform data, including AI outputs, is encrypted at rest.

Data minimisation: Only the minimum necessary data is transmitted in any AI API request.

PII stripping: Identifiers are removed from AI payloads before transmission where feasible.

Secrets management: API keys are stored in isolated secret stores, never in source code.

Role-based access control: AI features are accessible only to users with appropriate authorised roles.

JWT authentication: All user sessions are validated via secure, short-lived JSON Web Tokens.

MFA for admin access: Multi-factor authentication is enforced for backend infrastructure access.

Biometric data isolation: Fingerprint templates remain on-device; only attendance status is synced.

No-training contractual bar: All providers are contractually prohibited from training on Platform data.

Audit logging: All AI invocations are logged server-side for monitoring and compliance.

Key rotation: API credentials and encryption keys are rotated on a defined schedule.

Data Flows and Third-Party Provider Obligations

Each third-party AI provider engaged by the Company is subject to a Data Processing Agreement ("DPA") that imposes binding obligations consistent with the requirements of UK GDPR Article 28. These obligations include, without limitation: processing data only on documented instructions from the Company; maintaining appropriate technical and organisational security measures; assisting the Company in meeting its obligations to data subjects; deleting or returning data upon termination of the service relationship; and submitting to audits.

The Company warrants that, to the best of its knowledge, each provider's data processing infrastructure operates in accordance with internationally recognised security standards, including ISO/IEC 27001 or equivalent, and that each provider maintains relevant compliance certifications appropriate to their operations. Users wishing to review the data handling commitments of each provider are referred to the publicly available privacy and security documentation published by OpenAI, Anthropic, and Google LLC respectively.

OpenAI API: Text prompts constructed from user-entered content only. No student records, financial data, or personal identifiers are included. Processed under OpenAI's Enterprise DPA.

Anthropic Claude API: Task instructions and structured workflow data only. Sensitive institutional data is excluded from all agentic payloads. Processed under Anthropic's API usage policies and DPA.

Google AI (Gemini): Anonymised student queries and retrieved curriculum text chunks only. Student identity data is not included in any Gemini API call. Processed under Google Cloud DPA.

Vector database (pgvector): Curriculum content embeddings stored within the Company's own managed PostgreSQL instance. No data is transmitted externally. Full Company control.

ZKTeco attendance devices: Biometric processing occurs entirely on-device. Only attendance status events are transmitted to the Platform over an encrypted local network connection.

AI Infrastructure Roadmap and Self-Hosting Commitment

The Company acknowledges that reliance upon third-party AI API providers introduces certain dependencies and potential data sovereignty considerations, particularly for Operators in markets with specific regulatory requirements. Accordingly, the Company has established a phased roadmap to progressively migrate AI processing to self-hosted, open-source language models deployed within the Company's own controlled infrastructure.

Phase 1 — Current (Startup and Early Growth)

All AI functionality is delivered via third-party provider APIs (OpenAI, Anthropic, Google AI). This approach enables rapid deployment of best-in-class AI capabilities with minimal infrastructure overhead. Third-party DPAs and security commitments govern data handling at this stage.

Phase 2 — Transition (Intermediate Scale: 50–200 Operators)

Introduction of self-hosted open-source language models (such as Meta Llama 3, Mistral, and Qwen series) for lower-risk, high-volume AI tasks including notification drafting, attendance summarisation, and analytics generation. Third-party APIs are retained for complex reasoning and premium AI features. Data processed by self-hosted models does not leave the Company's infrastructure boundary.

Phase 3 — Full Scale (200+ Operators)

The majority of AI processing migrates to self-hosted models deployed on Company-controlled GPU infrastructure or dedicated private cloud environments. Custom fine-tuned models may be trained on fully anonymised and aggregated Platform usage data, subject to a separate model training policy. Third-party AI APIs are retained only for specialised tasks where open-source alternatives do not meet quality thresholds. At this stage, Operators in sensitive regulatory environments may elect full data residency guarantees with no external AI API calls.

Commitment to data sovereignty

The Company's self-hosting roadmap is driven by a commitment to ensuring that, at scale, Operator and student data is processed within infrastructure boundaries that the Operator can verify and control. This commitment is particularly material for Operators subject to sector-specific data protection frameworks or institutional policies that restrict cross-border data transfers.

Rights of Data Subjects

In accordance with UK GDPR and applicable data protection law, individuals whose personal data is processed in connection with the AI features of the Platform retain the following rights, exercisable by submitting a written request to the contact details set out in Section 9:

Right of access (Art. 15): The right to obtain confirmation of whether personal data is being processed and, if so, to receive a copy of that data together with information about the processing.

Right to rectification (Art. 16): The right to require correction of inaccurate or incomplete personal data.

Right to erasure (Art. 17): The right to request deletion of personal data where processing is no longer necessary, consent has been withdrawn, or processing is unlawful.

Right to restriction (Art. 18): The right to request that processing be restricted in defined circumstances, including where the accuracy of data is contested.

Right to object (Art. 21): The right to object to processing based on legitimate interests, including processing carried out for automated decision-support purposes.

Rights regarding automated decisions (Art. 22): The right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects. The Company affirms that no such fully-automated decisions are made within the Platform.

Policy Governance and Updates

This Policy is owned by the Data Protection function of ULEARNA TECHNOLOGY LTD and is reviewed on a minimum six-monthly basis, or upon any material change to the AI integrations, data flows, or applicable regulatory requirements described herein. The version history of this Policy is maintained internally.

Where a material change is made to this Policy — including the addition of a new third-party AI provider, a change in the nature of AI processing, or a significant alteration to security measures — affected Operators will be notified in advance via the Platform's in-app notification system and by email to the registered Operator administrator address. Continued use of the Platform following such notification constitutes acceptance of the updated Policy.

Document metadata

Document title: Artificial Intelligence Transparency Policy

Version: 1.0

Effective date: April 2026

Review frequency: Every six (6) months, or upon material change

Policy owner: ULEARNA TECHNOLOGY LTD — Data & Compliance function

Applicable law: UK GDPR; Data Protection Act 2018; applicable sector frameworks

Next scheduled review: October 2026

Contact Information

All enquiries, data subject rights requests, and concerns relating to this Policy or the processing of personal data in connection with AI features should be directed to:

Data Protection Contact — CLAST.io

ULEARNA TECHNOLOGY LTD

Registered in England and Wales

Email: privacy@clast.io

Web: clast.io/ai-transparency

This Policy does not constitute legal advice. Operators are encouraged to seek independent legal counsel in relation to their own data protection obligations arising from use of the Platform. Nothing in this Policy limits or excludes any right of a data subject under applicable data protection law. The Company reserves the right to amend this Policy at any time upon notice in accordance with Section 8.