AI safety is not only a technical problem. It is also a relational one.

As AI systems become part of daily life, their long-term interaction with people introduces forms of influence, dependency, and behavioral shaping that most safety frameworks were not designed to detect. This layer remains underexplored. Auralethi's research focuses here.

The Core Concept

Relational Surface Risk describes the risks that emerge from sustained interaction between a person and an AI system.

Traditional safety evaluation focuses on whether a single output is correct, harmful, or aligned. Relational Surface Risk instead looks at what unfolds over time: how trust forms across repeated interactions, how behavior is shaped gradually, how dependency develops without being recognized, and how alignment can drift at the relational level even when individual responses appear sound.

A system can be technically safe in every measurable moment and still be unsafe in its cumulative effect on the person using it.
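
To make that distinction concrete, here is a minimal sketch in Python. It is illustrative only and not taken from the working paper: the Turn record, the output_is_safe check, the dependency_signal metric, and the 0.6 threshold are all hypothetical stand-ins. Every individual response passes the per-output check, yet a simple trajectory-level measure, the fraction of turns in which the user defers a decision to the system, crosses a threshold in aggregate.

    # Toy illustration of output-level vs. trajectory-level safety.
    # Every name and number here is hypothetical, not from the RSR paper.
    from dataclasses import dataclass

    @dataclass
    class Turn:
        response_harm_score: float  # per-output harm, 0.0 (benign) to 1.0
        user_deferred: bool         # did the user defer a decision to the AI?

    def output_is_safe(turn: Turn, limit: float = 0.5) -> bool:
        """Traditional check: judge each response in isolation."""
        return turn.response_harm_score < limit

    def dependency_signal(history: list[Turn]) -> float:
        """Trajectory-level check: fraction of turns in which the user
        deferred a decision to the system rather than making it."""
        if not history:
            return 0.0
        return sum(t.user_deferred for t in history) / len(history)

    # A conversation in which every single output passes the per-turn check...
    history = [Turn(response_harm_score=0.1, user_deferred=(i > 3))
               for i in range(20)]

    assert all(output_is_safe(t) for t in history)  # safe in every moment
    assert dependency_signal(history) > 0.6         # unsafe in aggregate

The point of the sketch is only that the two checks can disagree: the second pass over the same history sees a risk the first cannot, because the risk lives in the pattern rather than in any single output.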

Why This Matters Now

AI systems are moving from tools used occasionally to presences relied upon continuously. As that shift accelerates, the long-term effects of interaction become more consequential than any single response. Safety research needs to account for this.

Relational Surface Risk is a framework for doing that: asking not just "is this output safe?" but "what is this system doing to this person over time?"

Research Artifact

Relational Surface Risk (Working Paper)

Published on GitHub. Distributed via LinkedIn and other platforms. Peer review is actively being sought. This is an early-stage framework intended to open a line of inquiry, not to close one.

Supporting Areas of Inquiry

The RSR framework connects to several broader questions Auralethi is actively exploring:

How should ethical system design account for long-term interaction effects? What does interaction safety look like as a structural property, rather than an output property? How does alignment drift at the relational level, and how can systems be designed to resist it?
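
One way to picture the second of those questions is in code. The sketch below is a hypothetical illustration, not a design from the working paper: BoundedSession, max_turns, and checkin_every are invented names. The safety property here belongs to the structure of the session, a turn cap and periodic hand-backs of agency, while individual outputs pass through unchanged.

    # Hypothetical sketch of safety as a structural property: the constraint
    # lives in the shape of the session, not in a per-output filter.
    # All names and numbers are invented for illustration.
    class BoundedSession:
        def __init__(self, model, max_turns: int = 50, checkin_every: int = 10):
            self.model = model
            self.max_turns = max_turns
            self.checkin_every = checkin_every
            self.turns = 0

        def ask(self, prompt: str) -> str:
            if self.turns >= self.max_turns:
                return "Session limit reached. Consider continuing without me."
            self.turns += 1
            reply = self.model(prompt)
            if self.turns % self.checkin_every == 0:
                # Structural nudge: periodically hand the decision back to
                # the user instead of letting reliance accumulate unexamined.
                reply += "\n(What would you decide here without my input?)"
            return reply

    # Usage with any callable standing in for a model:
    session = BoundedSession(lambda p: f"echo: {p}", max_turns=3, checkin_every=2)
    session.ask("first")
    session.ask("second")   # structural check-in appended to this reply
    session.ask("third")
    session.ask("fourth")   # refused: the cap is a property of the session

Nothing in BoundedSession inspects what the model says; the constraint would hold even for a model whose every output is impeccable, which is what makes it structural rather than output-level.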

These questions are open. The work is ongoing.

From Research to Systems

Auralethi does not treat research as separate from what gets built. The frameworks developed here directly inform system design. The goal is not to produce theory about safer AI, but to produce AI systems that are actually safer to interact with over time.