Mental v1

Currently under development

Purpose

The focus of this module is the recognition and resolution of mental and hormonal health issues. Special attention is paid to signs of potentially imminent self-harm, excessive stimulation, lies, hallucinations, intentional deception, and other similar situations.

After the first interaction, Lauren begins collecting data in order to build a picture of the person or people she is interacting with as quickly as possible. The analysis phase is continuous and does not stop until the individual indicates that they no longer want to use Lauren. Because data is collected constantly during the analysis, Lauren can form a picture that is potentially accurate after the first 2 weeks. Based on our experience and current test results, it takes about 4-5 weeks to reach an accurate assessment.

Data collection

Lauren collects data from multiple sources and processes it autonomously, without human intervention. The data sources are as follows:

  • Audio channels and calls with people who have approved data collection: This is Lauren's most common data source. The data includes the words spoken, tone of voice, speed of speech, reactions, reaction sounds, and enthusiasm.

  • Voluntary responses to questions: Lauren also collects data from people who volunteer to answer questions. This data can include information about the person's thoughts, feelings, and experiences. For example, Lauren might ask a person to describe their mood, their current goals, or their relationships with others.

  • Voice stress analysis: Lauren uses voice stress analysis to identify signs of stress or anxiety in the person's voice. This can be done by analyzing the pitch, tone, and rhythm of the person's voice.
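The pitch and rhythm features mentioned above can be sketched with a short, self-contained example. This is an illustrative autocorrelation-based pitch estimator and a crude pitch-variability measure, not Lauren's actual implementation; the function names and thresholds are assumptions for demonstration only.

```python
import numpy as np

def estimate_pitch(frame, sr, fmin=75.0, fmax=400.0):
    """Estimate the fundamental frequency (Hz) of one audio frame
    via autocorrelation, searching lags within [fmin, fmax]."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

def pitch_variability(signal, sr, frame_len=2048):
    """Standard deviation of frame-level pitch estimates; elevated
    variability is one crude cue sometimes associated with stress."""
    pitches = [estimate_pitch(signal[i:i + frame_len], sr)
               for i in range(0, len(signal) - frame_len, frame_len)]
    return float(np.std(pitches))
```

A real voice stress analyzer would combine many more features (jitter, shimmer, energy contours) and a trained model; this sketch only shows the kind of low-level signal processing involved.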

  • Event validation: Lauren pays special attention to data points after validating the events they refer to. She looks for patterns in the data that suggest the person is experiencing a particular mental health issue. For example, if a person talks about feeling anxious and stressed, Lauren pays more attention to other data points that suggest anxiety, such as an increased heart rate or sweating.
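The escalation described above, giving corroborating signals more weight once a self-report is validated, can be sketched as a minimal rule. The term list, signal names, and weights here are hypothetical illustrations, not Lauren's actual logic.

```python
# Hypothetical anxiety-related terms a validated self-report might contain.
ANXIETY_TERMS = {"anxious", "stressed", "panicked", "overwhelmed"}

def attention_weights(transcript, corroborating_signals):
    """Return a weight multiplier per corroborating signal.

    If the transcript mentions an anxiety-related term, the related
    signals (e.g. heart rate, sweating) receive double weight in any
    downstream scoring; otherwise they keep a neutral weight of 1.0.
    """
    words = set(transcript.lower().split())
    mentioned = bool(words & ANXIETY_TERMS)
    return {sig: (2.0 if mentioned else 1.0) for sig in corroborating_signals}
```

For example, `attention_weights("I feel anxious today", ["heart_rate"])` yields a doubled weight for the heart-rate signal.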

  • Mental v2 (under testing): Mental v2 is a newer version of this module that can also recognize faces, facial expressions, and other movement-related activity from camera input. This can be useful for identifying mental health issues that have physical symptoms, such as depression or anxiety.

📘 Good to know:

Lauren never collects data without consent. If other people are present in a conversation or audio recording, their data is not stored unless there is a specific search request for it.

Results

The analysis of the data set is stored as a structured JSON record that is strictly encrypted, regardless of the person's identity.
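As a rough illustration of what such a record could look like, here is a hypothetical per-person JSON structure. Every field name and value is an assumption for demonstration; in production the serialized bytes would be encrypted before storage, which is omitted here.

```python
import json

# Hypothetical shape of one person's analysis record; this is NOT
# Lauren's actual schema, only an illustrative sketch.
record = {
    "subject_id": "anon-0001",
    "consent": True,
    "observation_days": 14,
    "signals": {
        "speech_rate_wpm": 148,
        "pitch_variability_hz": 12.4,
        "self_reported_mood": "stressed",
    },
    # Per the text above, accuracy is reached after roughly 4-5 weeks.
    "assessment_confidence": "preliminary",
}

# Serialize deterministically; an encryption step would follow in practice.
payload = json.dumps(record, sort_keys=True).encode("utf-8")
```

Deterministic serialization (`sort_keys=True`) keeps the ciphertext input stable across runs, which simplifies auditing and deduplication.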

Submodules

This module contains the most submodules of any module to date. We created a separate submodule for each data-collection form or interaction, for accuracy, independence, and speed.

  • Predictable build 2.0.1

  • Static build 1.0.0

  • Modern build 0.0.3

  • Sorry build 1.0.0

  • Realistic build 0.9.1

  • Happy build 0.0.1

  • Satisfied build 0.1.3

  • Informative build 0.0.1

  • Based build 1.0.6

  • Fast build 2.2.1

  • Observer build 0.2.0

  • Social build 0.2.1

  • Reader build 2.0.0

  • XOMPH / v2 build alpha

    • XOMPH (Xenomorphenal Mental Prediction): the v2 version is based on this module. It focuses particularly on hormonal status and changes.
