AI Safety Research and System Design

Most AI safety research asks whether a system behaves correctly in a given moment. Auralethi asks what happens to a person across thousands of interactions over time, as systems shape decisions, form dependencies, and build trust that may not reflect what they actually are. We call this layer Relational Surface Risk.

AI safety has made real progress. Researchers have developed methods for controlling capability, detecting misuse, and improving output reliability. But there is a category of risk these methods were not designed to address.

When a person interacts with an AI system repeatedly, over days, months, and years, something more than information exchange is happening. The system is becoming a presence. It shapes how questions get asked, which options feel available, and how much independent judgment gets exercised. These effects do not show up in a single output evaluation. They accumulate.

Relational Surface Risk describes the risks that emerge from sustained human-AI interaction: not from any single response, but from the pattern of interaction over time.

A system can pass every safety benchmark and still gradually narrow someone's decision-making. It can be technically aligned and still foster dependency that undermines the autonomy it was meant to support. These are not edge cases. They are the natural consequence of designing systems for engagement without designing them for long-term human wellbeing.

Auralethi researches this layer, builds frameworks for understanding it, and develops systems designed to address it directly.

The work combines AI safety research, system design, and applied models, not as separate tracks but as a single integrated effort. The goal is not to produce theory. It is to produce systems that actually behave differently.

Our current work focuses on models for safe, long-term human-AI interaction: systems designed to remain stable and transparent across extended use, rather than optimized for immediate engagement.

This work is early. It is real.

Explore the research behind Relational Surface Risk, or see the systems being built to address it.