AI (roadmap)

Grounded AI you can trust with student data.

Rede's AI work is designed around a single idea: keep outputs grounded in verified records and keep control with your university. No “black box” workflows writing back to student systems without validation.

  • Trusted sources only: grounded in signed records
  • Validation first: detect errors before impact
  • Tenant-hosted: keep data in your cloud
  • Auditability: trace answers to sources

What we're building

These are the AI-enabled building blocks we're working on. If you want early access, email us.

RedeSentry

Automatic SITS configuration validation with quality scoring, syntax checks, and risk detection (including PII in project files).

Tags: Universities · SITS focus · Validation
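As a rough illustration of the kind of check described above, here is a minimal PII scan over project-file text. This is a hypothetical sketch, not RedeSentry's actual implementation: the patterns, the `scan_for_pii` name, and the two PII categories (emails and UK National Insurance numbers) are illustrative assumptions only.

```python
import re

# Hypothetical patterns for two common PII categories; a real scanner
# would cover many more and tune for false positives.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    # Simplified UK National Insurance number shape (real rules are stricter).
    "uk_nino": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
}

def scan_for_pii(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in the text."""
    hits = []
    for name, pattern in PII_PATTERNS.items():
        hits.extend((name, match) for match in pattern.findall(text))
    return hits

findings = scan_for_pii("Contact j.smith@example.ac.uk about record AB123456C")
```

Each finding carries the pattern name, so a validator can score a project file by category rather than just flagging it.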

RedeDocs

Automatically generate technical documentation for SITS projects, with a roadmap to publish to Jira/Confluence and other documentation stores.

Tags: Universities · Docs · Roadmap

Grounded admin assistants

Answer questions from verified records and produce drafts that require explicit approval for changes.

Tags: Roadmap · Governed

Policy and control layer

Strong controls for prompt-injection resistance, data boundaries, and operational traceability.

Tags: Security · Privacy
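One control such a policy layer might enforce is treating retrieved record text strictly as data and screening it for instruction-like spans before it reaches a prompt. The sketch below is a crude single-check illustration under an assumed name (`check_retrieved_text`); real prompt-injection defenses are layered and go well beyond pattern matching.

```python
import re

# Illustrative markers of instruction-like text embedded in retrieved records.
INJECTION_HINTS = re.compile(
    r"(ignore (all |any )?(previous|prior) instructions|you are now|system prompt)",
    re.IGNORECASE,
)

def check_retrieved_text(text: str) -> bool:
    """Return True if the text passes this (deliberately simple) screen."""
    return INJECTION_HINTS.search(text) is None
```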

Principles

Our AI approach is built to be deployable in real university environments with real governance.

  • Ground truth: responses derived from authoritative records, not speculation
  • Least privilege: strict boundaries and no direct write paths without validation
  • Data residency: prefer tenant-hosted deployments to keep data in your control
  • Audit trails: trace answers to sources and keep operational logs