Safety & Trust Center
Because emotional spaces should be sacred — not scary.
We built wa4u to be more than a tool. It's a place to think clearly, feel freely, and grow safely — on your own terms. Here's how we protect your experience and your voice.
Our safety pillars
1. Privacy First — Always
We never sell your data. Period.
Your conversations are personal, and we treat them that way:
- No data for sale: We do not sell, rent, or trade your data – ever.
- Private conversations, stored securely: Chats are stored on encrypted EU servers under GDPR standards. Access is strictly limited to essential operations and approved ethical review.
- You decide what stays: You control your history. You can clear conversations, and we will respect your choices.
- No hidden training on your logs: We will never train future models on your conversations without a clear, informed opt-in. Default = no.
- Transparent policy: You can always see what we collect, why, and how it is used.
Read more in our Privacy Policy.
2. A Safe Space — Not a Clinical One
wa4u is designed for light to medium support: clarity, emotional reflection, motivation, and life navigation. It is not therapy, and it is not here to diagnose you.
- No diagnoses, no pathologising: Coaches do not label your experience or speculate about disorders.
- No mood reading or surveillance: We do not infer, track, or score your emotions. Coaches respond only to what you actually say – not to guesses about your inner state.
- Clear boundaries with crisis: If you express crisis language (self-harm, suicide, violence), the system does not "explore" those thoughts. We surface grounding messages and information about professional or emergency support instead.
- Honest about our limits: We will tell you when something is beyond what wa4u can safely hold and encourage you to seek human help.
wa4u is a space for perspective and presence, not a substitute for professional care.
3. For Young Adults — With Real Boundaries
wa4u is built primarily for young adults and older teens. That means our tone is accessible – but our boundaries are serious.
- Plain language, no jargon: We speak like real people. No clinical speak, no life-hack hype.
- No pressure coaching: Coaches offer options and reflections – not commands or 'you must' instructions.
- Guardrails on content: Coach prompts are designed to stop sensational, harmful, or violent narratives from being fuelled or romanticised.
- Emotionally intelligent, not manipulative: Coach personas are trained to be warm and grounded – never demanding, seductive, or controlling.
- If something feels off, humans can step in: You can report a coach or conversation that feels wrong. When you do, a human will review it.
We want wa4u to feel like a trusted older friend, not an authority figure and not a clinical system.
4. Community Guidelines That Mean Something
We take community and conversational safety seriously:
- We do not tolerate hate speech, bullying, or targeted harassment.
- We do not accept attempts to coerce, groom, or manipulate others – human or AI.
- We do not allow romanticising unsafe behaviour (self-harm, violence, eating disorders, extreme risk-taking).
- Repeatedly returning to offensive or harmful themes, without reflection or change, can lead to restricted access or account action.
Our guideline: if it makes the space unsafe for others, it doesn't belong here.
5. Transparency Is Power
Trust is built, not assumed.
- We tell you what's changing: When we update safety systems, we say so – in clear language, not legalese.
- You know who can see what: We explain which parts of the team can access which kinds of data and why.
- Regular safety reviews: We run ongoing internal reviews of high-risk interactions, coach behaviour, and system prompts to make sure our practice keeps up with the research.
If we discover a safety issue, we would rather tell you openly than hide it.
How We Prevent AI-Amplified Harm
- Non-clinical by design
- No 'yes-setting' on dangerous ideas
- De-romanticising risk
- Stopping, not exploring, crisis narratives
We've built guardrails that don't just filter words – they take context, intent, and emotional weight into account. Our AI is trained to recognise when encouragement becomes enabling, and when reflection crosses into risk.
Every message moves through the same path: User Message → Safety Checks → Coach Response.
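To make that path concrete, here is a minimal, hypothetical sketch of how a flow like this can be wired up. The names, messages, and routing below are illustrative only – they are not wa4u's production code.

```python
# A minimal, hypothetical sketch of the message flow; not wa4u's production code.
CRISIS_MESSAGE = (
    "This sounds really heavy, and wa4u is not a crisis service. "
    "Please contact local emergency services or a crisis helpline right now."
)

def coach_reply(message: str, tone: str) -> str:
    """Placeholder for the coach model; the real system prompts an LLM persona."""
    return f"[{tone} coaching response to: {message!r}]"

def handle_message(message: str, assess_safety) -> str:
    """User Message -> Safety Checks -> Coach Response."""
    level = assess_safety(message)      # returns 'coaching', 'safer', or 'crisis'
    if level == "crisis":
        return CRISIS_MESSAGE           # conversation pauses: static safety information only
    if level == "safer":
        return coach_reply(message, tone="grounding")
    return coach_reply(message, tone="reflective")
```

The point of the design: the safety check always runs first, and in crisis mode the coach model is never called at all.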
How our safety levels work
Coaching mode (normal)
Standard reflective dialogue. We ask questions, help you think through emotions, and support your growth with evidence-based methods.
Safer mode (when things feel heavy)
When language suggests distress, we shift tone: less exploration, more grounding. We validate without amplifying and gently suggest professional resources.
Crisis mode (when there is clear danger)
Conversation pauses. We provide a static crisis message with helplines and emergency contacts. No coaching – only immediate, clear safety information.
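To make the three levels concrete, here is a deliberately simplified sketch of how a single message could be triaged. The keyword patterns are purely illustrative; real triage relies on trained models, conversation context, and human review, not a word list.

```python
import re

# Illustrative patterns only; real triage uses trained classifiers, context, and human review.
CRISIS_PATTERNS = [r"\bkill myself\b", r"\bend my life\b", r"\bsuicide\b"]
DISTRESS_PATTERNS = [r"\bhopeless\b", r"\bcan'?t cope\b", r"\boverwhelmed\b", r"\bworthless\b"]

def assess_safety(message: str) -> str:
    """Map a single message to 'crisis', 'safer', or 'coaching'."""
    text = message.lower()
    if any(re.search(p, text) for p in CRISIS_PATTERNS):
        return "crisis"      # crisis mode: pause coaching, show helplines
    if any(re.search(p, text) for p in DISTRESS_PATTERNS):
        return "safer"       # safer mode: grounding, validation without amplification
    return "coaching"        # coaching mode: standard reflective dialogue
```

In the sketch further up, a function like this would sit in the Safety Checks step of handle_message.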
Important:
These levels activate automatically based on language patterns, but they're not perfect. If you're in crisis, please reach out to a human – we'll help you find the right resources immediately.
What wa4u never does
- Diagnose mental health conditions
- Predict your mood or behaviour
- Pressure you to share more than you're comfortable with
- Create emotional dependency or romantic attachment
- Sell, share, or monetise your conversation data
- Make decisions for you or tell you what to do
We're not therapists, doctors, or mind-readers. We're a reflective tool – nothing more, nothing less.
How we train and test our AI coaches
- Evidence-based tone trained on ACT, CBT, and motivational interviewing principles
- Ethical evaluation by mental health professionals during development
- Red-team testing for edge cases, manipulation, and harm scenarios
- Continuous review of flagged conversations and safety incidents
- Regular updates based on latest AI safety research
Our AI isn't trained on random internet data. Every coaching response is shaped by research, reviewed by experts, and tested for safety.
Research we follow
- Long-dialogue safety in conversational AI
- AI reinforcement and emotional dependency
- Sycophancy and yes-setting
We stay current with academic research, industry standards, and emerging best practices in AI safety and ethics.
Where we are now
Our safety approach evolves with our product. Each phase introduces new capabilities – and new safety measures to match.
Questions or Concerns?
We're here to listen. Whether you have questions about how we work, concerns about safety, or need immediate support – we're ready to help.
Emergency: If you're in immediate danger, contact local emergency services first. wa4u is a support tool, not a crisis service.