- Nikita Thomas
Afraid of Digital Solutions? Let's Unpack That...
Reportedly, the past year has seen more innovation than the previous ten, with one major driver: the rise of digital solutions. The companies that truly flourished during the disruptive pandemic were those that embraced digital tools and met evolving, technology-enabled demand. There is no denying that this demand will keep growing as more and more digital solutions surface.
This explosion in digital innovation comes with heightened fears of compromised safety and privacy, as ever more data is collected on individuals. With the continual development of advanced AI algorithms, people are questioning what personal data these companies are tracking. Most of us have had a moment where we thought our phone was listening to us, after seeing an eerily targeted ad seconds after a conversation with a friend. Netflix documentaries like The Social Dilemma have highlighted the controversial and often negative impacts of addictive digital social platforms such as Instagram and TikTok, including the manipulation of users' views, emotions, and behaviours through the curated content presented to them. It's not surprising, then, that conspiracy theories and Big Brother surveillance worries are creeping into users' minds.
However, to understand why it feels like these apps are listening in or tracking your every movement, it helps to know how their machine learning algorithms process user data. Take TikTok, an app built around a curated personal feed. To decide which video to line up next, the algorithm must work out which aspects of a video will get you to like and share it, because that is what it treats as a successful recommendation. It infers that you enjoyed a video from signals like how many times you watched it, whether you liked or commented on it, and whether you shared it with others. It therefore collects this data on every video you consume: how many seconds it took you to scroll to the next video, whether you liked, commented on, or shared it, and which categories of content it falls into based on its hashtags and captions. These are all features the algorithm uses to find trends, so it can keep serving you the content you like most. The more correlated features the model ingests, the better it can find the patterns in your digital behaviour that predict what you'll enjoy, and the more accurate it becomes. As users, it's important to understand that the model's only goal may be to get you to like or share videos, and it will learn whatever pattern of features reaches that goal as reliably as possible. This is where problems occur. For example, if a depressed user is watching videos that validate suicidal feelings, and likes, comments on, or shares them, the algorithm will keep lining up similar videos in their feed, adding to the negative spiral. If the model's objective is not aligned with the user's wellbeing, the model can be detrimental. Designers therefore need to consider situations where the very engagement the model optimises for is dangerous to the user's safety.
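To make the feedback loop above concrete, here is a deliberately tiny sketch of an engagement-driven feed ranker. TikTok's real system is far more complex and not public; the feature names, weights, and event format here are all illustrative assumptions. The point is the loop: engagement signals are scored, the score is accumulated per content category, and the categories you engaged with most get ranked first, so the feed keeps serving more of the same.

```python
# Toy engagement-driven ranker (illustrative only; NOT TikTok's actual
# algorithm). Each watch event carries hypothetical features from the
# article: seconds watched, like/comment/share flags, and a content
# category inferred from hashtags/captions.
from collections import defaultdict

def engagement_score(event):
    """Combine signals into one number; these weights are assumptions."""
    return (0.1 * event["seconds_watched"]
            + 1.0 * event["liked"]
            + 1.5 * event["commented"]
            + 2.0 * event["shared"])

def update_profile(profile, event):
    """Accumulate (total score, event count) per content category."""
    total, count = profile[event["category"]]
    profile[event["category"]] = (total + engagement_score(event), count + 1)

def rank_candidates(profile, candidate_categories):
    """Order the next videos by the user's average past engagement."""
    def avg(cat):
        total, count = profile.get(cat, (0.0, 0))
        return total / count if count else 0.0
    return sorted(candidate_categories, key=avg, reverse=True)

profile = defaultdict(lambda: (0.0, 0))
history = [
    {"category": "cooking", "seconds_watched": 30, "liked": 1, "commented": 0, "shared": 1},
    {"category": "cooking", "seconds_watched": 25, "liked": 1, "commented": 1, "shared": 0},
    {"category": "news",    "seconds_watched": 3,  "liked": 0, "commented": 0, "shared": 0},
]
for event in history:
    update_profile(profile, event)

print(rank_candidates(profile, ["news", "cooking", "sports"]))
# "cooking" ranks first: the loop amplifies whatever you engaged with
```

Notice that nothing in the ranker asks whether the engaged-with content is good for the user; it only optimises the score. That is exactly the failure mode described above when the engaged-with content is harmful.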
At Behavidence, we want to be transparent with our users as they navigate their mental health journey. We pride ourselves on the fact that we can still give insights into your mental health without collecting sensitive digital information like GPS location or your Google search terms, so that data can never get into the wrong hands. The information used in Behavidence's models is collected with the goal of understanding basic phone behaviours, such as switching between apps, average time spent on apps, and periods of phone inactivity throughout the day, in order to assess whether your pattern resembles those of people diagnosed with different mental health conditions. This information can then be used to surface digital behaviours that indicate fluctuations in anxiety and depression, something that doctors and current mental health questionnaires rarely capture objectively on a day-to-day basis. Our digital solution aims to benefit the user by increasing awareness of their own mental health, validating their behaviours against others with similar diagnoses, and breaking away from the idea of one size fits all by defining new subtypes of depression and anxiety through your own personal behaviours. Additionally, with our anonymous login feature, none of your data can be traced to your name.
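As a rough illustration of how coarse the signals described above can be, here is a sketch that derives usage features from a day's app sessions. The function name, the `(app, start_minute, end_minute)` event format, and the specific features are assumptions for illustration, not Behavidence's actual implementation; the point is that app switching, average session length, and total active time require no content, location, or search data at all.

```python
# Illustrative sketch: deriving coarse phone-usage features of the kind
# the article describes. The event format and helper are hypothetical,
# not Behavidence's real pipeline. Note that no message content, GPS
# data, or search terms appear anywhere in the input.

def usage_features(sessions):
    """sessions: list of (app_name, start_minute, end_minute) tuples,
    ordered by time. Returns simple behavioural aggregates."""
    if not sessions:
        return {"app_switches": 0, "avg_session_minutes": 0.0, "active_minutes": 0}
    # Count transitions where the foreground app changed.
    switches = sum(1 for prev, cur in zip(sessions, sessions[1:]) if prev[0] != cur[0])
    durations = [end - start for _, start, end in sessions]
    return {
        "app_switches": switches,
        "avg_session_minutes": sum(durations) / len(durations),
        "active_minutes": sum(durations),
    }

# A hypothetical morning: mail, then social media, then back and forth.
day = [("mail", 540, 545), ("social", 545, 575),
       ("mail", 575, 578), ("social", 578, 600)]
print(usage_features(day))
# {'app_switches': 3, 'avg_session_minutes': 15.0, 'active_minutes': 60}
```

Day-over-day shifts in aggregates like these, rather than any single value, are the kind of pattern that can be compared against cohorts with known diagnoses.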
Digital tools are here to stay, and it's important, as users, to understand how our data is being used and for what purpose. Next time you download a new app, ask yourself what data it might be collecting and what the app's intention is. Above all, if you start to notice dips in your mental health, Behavidence is always here in the background, supporting you in discovering what is causing that dip - be it environmental, social, or that new app you downloaded whose AI model is not benefiting you and your health.