Grounded AI you can trust with student data.
Rede's AI work is designed around a single idea: keep outputs grounded in verified records and keep control with your university. No “black box” workflows writing back to student systems without validation.
What we're building
These are the AI-enabled building blocks we're working on. If you want early access, email us.
RedeSentry
Automatic SITS configuration validation with quality scoring, syntax checks, and risk detection (including PII in project files).
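To make the idea concrete, here is a minimal sketch of what risk detection and quality scoring could look like. All names, patterns, and the scoring formula are illustrative assumptions, not RedeSentry's actual implementation; a production validator would use maintained PII-detection rules and institution-specific policies.

```python
import re
from dataclasses import dataclass

# Illustrative PII patterns only (assumption) -- a real validator
# would use a maintained detection library and local rules.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44\s?|0)\d{10}\b"),
}

@dataclass
class Finding:
    kind: str      # which pattern matched
    line_no: int   # where in the file
    snippet: str   # the matched text

def scan_for_pii(text: str) -> list[Finding]:
    """Flag lines in a project file that look like they contain PII."""
    findings = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        for kind, pattern in PII_PATTERNS.items():
            match = pattern.search(line)
            if match:
                findings.append(Finding(kind, line_no, match.group()))
    return findings

def quality_score(findings: list[Finding], total_lines: int) -> float:
    """Toy quality score: penalise each finding, floored at zero."""
    return max(0.0, 1.0 - len(findings) / max(total_lines, 1))
```

The same shape extends naturally to syntax checks: each check emits findings, and the score aggregates them.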
RedeDocs
Automatic generation of technical documentation for SITS projects, with a roadmap to publish into Jira/Confluence and other documentation stores.
Grounded admin assistants
Assistants that answer questions from verified records and produce drafts requiring explicit approval before any change is applied.
Policy and control layer
Strong controls for prompt-injection resistance, data boundaries, and operational traceability.
Principles
Our AI approach is built to be deployable in real university environments with real governance.
- Ground truth: responses derived from authoritative records, not speculation
- Least privilege: strict boundaries and no direct write paths without validation
- Data residency: prefer tenant-hosted deployments to keep data under your control
- Audit trails: trace answers to sources and keep operational logs
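The audit-trail principle above can be sketched as an append-only log that ties each answer to the authoritative record IDs it was derived from. The shape of the entries is an assumption for illustration, not Rede's actual log format.

```python
import json
import time

class AuditLog:
    """Append-only log tracing each answer to its sources (illustrative)."""

    def __init__(self) -> None:
        self._entries: list[str] = []   # serialized, never mutated in place

    def record(self, question: str, answer: str, sources: list[str]) -> None:
        """Store one question/answer pair with the record IDs it cites."""
        self._entries.append(json.dumps({
            "ts": time.time(),
            "question": question,
            "answer": answer,
            "sources": sources,   # authoritative record IDs, not free text
        }))

    def entries(self) -> list[dict]:
        """Read back the full trail for review."""
        return [json.loads(e) for e in self._entries]
```

Because entries are only ever appended, every answer remains traceable to its sources after the fact.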