Social Media and Machine Learning: How Are the Two Connected, and What Are the Issues?

 

Overview

Many of the current gatekeepers of these social networks, and usually their creators, are large technology companies whose platforms enable the operation and expansion of vast networks that permeate the fabric of everyday life: from payments and the exchange of goods and services to retail, gaming, and news, these networks enable communication and content creation and sharing like never before.

Building upon, yet diverging from, the social networks of days past (e.g. Xanga, Myspace, and Friendster, which I collectively refer to as the first generation), the current, second generation of social networks is massively online (i.e. instantly accessible and ever-present) and increasingly integrated into the everyday lives of many people. Coinciding with advances in the affordability of mobile and internet technology, and unlike their predecessors, the second generation of social networks and social media has become entrenched and is likely to stay around in some shape or form over the long run.

 

 

Social Media and Machine Learning

 

With the advent of these social networks, individuals have been able to reach wider audiences, discover communities that otherwise may not be physically available, and receive/find/share information almost instantaneously. On the other hand, these platforms have also had their share of controversy and problems.

The digital communities that have connected people across the world have at times become breeding grounds for misinformation, violence, and illegal activity; likewise, the same channels that amplify communication and information have also amplified misinformation and deepened polarization along various fault lines in our politics and society. Their spillovers into real-world tragedies and their effects on mental health, particularly among young people, have at times been all too apparent.

Since no systematic means of external regulation exists within the United States, much less across or between countries (with the recent exceptions of the EU and Japan), these social networks and their day-to-day operations are largely self-regulated, falling under the purview of each platform's internal content and moderation teams. These teams take a variety of approaches to self-regulation and to guarding against potential harms while making their best attempts to abide by their platform's terms of use and existing statutes and laws.

In general, these policies tend to converge on three tenets: removal, user resilience, and community governance. Removal relies on the platform voluntarily censoring or removing individual content and users deemed harmful or dangerous, via a combination of rules-based approaches and some ML models. User resilience and community governance instead rely on users of the platform to self-govern and guard against these threats, either by cultivating informed resilience among individual users or by encouraging groups of users to create their own rules and standards through decentralized means, without platform-official policies. However, these approaches tend to work selectively and in hindsight.

As these networks grow, unexpected and qualitatively new phenomena will inevitably arise, and current policies and moderation efforts will inevitably lag behind. Unfortunately, we have already observed, repeatedly, how this lag translates into considerable real-world damage to democratic norms, scientific norms, mental health, and more. A mix of interdisciplinary but complementary fields, spanning economics, sociology and psychology, and machine learning, will be required to tackle these intricate problems in their full scope.

 


Challenges & Issues

 

Algorithmic Transparency & Meta-optimization. Technology companies optimize their social platforms largely toward a specific set of metrics, typically oriented around user growth or engagement (e.g. more time spent on the platform) or other metrics tied to maximizing revenue from the advertisers who place ads on those platforms.

 


To continuously attract new users and keep existing users engaged for longer, many machine learning models are trained to learn users' latent preferences and thereby serve increasingly precise and relevant recommendations. During development, experiments are run on a limited but representative sample of users for a period of time to determine whether a candidate model has a sufficiently positive and sizable impact on the relevant growth metrics to be deployed or to replace an existing model.
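As a rough illustration of what such an experiment readout might look like (a minimal sketch with hypothetical numbers: the group sizes, metric, and effect size are invented, and real platforms use far more elaborate designs), one could compare a per-user engagement metric between a control group served the current model and a treatment group served the candidate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical minutes-on-platform per user during the experiment window.
control = rng.normal(loc=30.0, scale=8.0, size=5000)    # users served the current model
treatment = rng.normal(loc=30.6, scale=8.0, size=5000)  # users served the candidate model

t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
lift = treatment.mean() / control.mean() - 1.0
print(f"lift: {lift:.2%}, p-value: {p_value:.4f}")
# The candidate would typically ship only if the lift is both sizable and statistically significant.
```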

These optimizations form the core of social platforms/networks, guiding and informing decisions on product changes that carry significant consequences for users and forms of interaction.

Since these models are optimized, to first order, toward certain growth and revenue metrics, their second-order effects are largely neglected. A recommendation system that generates and ranks candidate items based on learnt preferences optimizes solely on whether user feedback validates its suggestions. Though the model's potential impact extends well beyond growth and revenue metrics, these recommendations, even when accurate for users, do not account for downstream ripple effects and costs such as deeper immersion in toxic communities, polarization, and degraded mental health.

The optimization is short-sighted: it accounts for neither the wider ecosystem within which the model operates nor its distributional consequences. The model optimizes only through its loss function, which reflects nothing more than the disparity between its predictions and labels derived from user feedback. Many companies have responsible-ML teams that dissect situations when problems arise or recommendations go awry, but these analyses are almost always ex post facto and never inherently embedded in the optimization itself.
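As a concrete and deliberately minimal illustration of that narrowness (the function names and setup here are hypothetical, not any platform's actual training code), the sketch below scores a recommender purely on whether users engaged with what it served; nothing in the objective reflects downstream effects:

```python
import numpy as np

def engagement_loss(predicted_click_prob, clicked):
    """Binary cross-entropy against click/engagement feedback.

    predicted_click_prob: model scores for served items, each in (0, 1).
    clicked: 1 if the user engaged with the item, 0 otherwise.
    This is the entire signal the optimization sees.
    """
    p = np.clip(predicted_click_prob, 1e-7, 1 - 1e-7)
    return -np.mean(clicked * np.log(p) + (1 - clicked) * np.log(1 - p))

# Polarization, community health, and user well-being never appear in the objective.
```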

A much-needed area of research is optimization schemes that go beyond a model's own loss function, linking its optimization not just to direct user feedback on its predictions but also to the dynamics of the ecosystem within which it operates, and to the impact its predictions have on those dynamics, especially given the model's endogenous role in shaping that ecosystem (i.e. meta-optimization).
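One way to picture this (a hedged sketch only: how to estimate ecosystem-level harm, and how to weight it, are precisely the open research questions, so the terms below are placeholders rather than an established method) is an objective that augments the engagement loss above with a penalty for estimated downstream impact:

```python
def meta_objective(predicted_click_prob, clicked, ecosystem_harm_estimate, lam=0.1):
    """Engagement loss plus a penalty for estimated ecosystem-level impact.

    ecosystem_harm_estimate: a hypothetical, model-based estimate of downstream
    effects attributable to the served recommendations (e.g. predicted increase
    in content-diet narrowing or polarization over time).
    lam: trade-off weight between engagement accuracy and ecosystem impact.
    """
    return engagement_loss(predicted_click_prob, clicked) + lam * ecosystem_harm_estimate
```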

Other better-explored but still important areas of research are algorithmic transparency and explainable AI; however, more work should focus on making these results accessible to end users (not just model developers), so that users can understand the reasons behind the recommendation sets they see, building trust and resilience.

Building on these explanations, another novel direction would be to construct models atop them that help determine what a “healthier” set of recommendations might look like for each user. This would give users trust, agency, and awareness of the explicit and implicit biases imposed by the underlying models, as well as of what it might take to change them through the user's own actions in the network, akin to a self-diagnostic tool.
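A toy sketch of what such a self-diagnostic re-ranking could look like (all scores, field names, and weights here are hypothetical placeholders, not an existing platform feature): items are re-ordered by a blend of the model's relevance score and an explanation-derived "health" score that the user can inspect and adjust:

```python
def rerank(items, alpha=0.7):
    """Re-rank items by a blend of relevance and a hypothetical 'health' score.

    items: list of dicts like {"id": ..., "relevance": float, "health": float},
    where 'health' might be derived from explanation signals such as topical
    diversity or distance from known-toxic communities.
    alpha: user-adjustable weight on the original relevance score.
    """
    return sorted(
        items,
        key=lambda it: alpha * it["relevance"] + (1 - alpha) * it["health"],
        reverse=True,
    )

# Example: lowering alpha lets a user trade raw relevance for a "healthier" feed.
feed = [{"id": "a", "relevance": 0.9, "health": 0.2},
        {"id": "b", "relevance": 0.7, "health": 0.8}]
print([it["id"] for it in rerank(feed, alpha=0.4)])  # ['b', 'a']
```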

Lastly, very little research has studied how multiple algorithms interact in their effects on a system and its users, especially since most, if not all, attempts at explainability focus on individual models. Social media platforms run many models at once, each optimizing toward growth or engagement and shaping the very fabric of the network, sometimes reinforcing and sometimes working against one another; understanding these interactions would be invaluable.

 


 

Alignment & Human-centered AI Mechanism Design

 

Increasingly powerful generative models have been developed on the path toward artificial general intelligence (AGI), demonstrating impressive strides in capability. In particular, large language models (LLMs) such as GPT-3 and its variants can generate increasingly realistic language, fooling even humans in conversation and producing natural-sounding text ranging from short stories to news articles.

One recent experiment, though ethically dubious, showed how a GPT-3-like model trained on posts from an online forum could be deployed back into the very ecosystem on which it was trained, successfully fooling many human users as to its true nature.

Generative content beyond language and text, such as images (e.g. NSFW imagery and deepfakes), will present new, complex challenges to the existing paradigm of moderation and toxicity policies on many social platforms.

 


More worryingly, it has become increasingly likely that harmful actors will leverage LLMs as bots to polarize, manipulate, misinform, and destabilize. These qualitatively new phenomena will very likely escape existing paradigms of content moderation and toxicity prevention, given the convincing, near-human language capabilities of these powerful, billion-parameter models. The limited scope of existing content-safety and moderation policies simply cannot keep up.

Many of these policies deal with problems, if they are detected at all, in hindsight and in isolation. Such approaches face hard limits, since many problems within social and information networks originate from multiple sources or nodes, spread rapidly across networks, and propagate at exponential rates, requiring preventative measures or sufficiently fast, multi-pronged responses.
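A back-of-the-envelope branching-process sketch (with an invented, purely illustrative reshare rate) shows why hindsight-only removal struggles: expected exposure compounds multiplicatively with each sharing step, so even a short detection delay can mean orders of magnitude more reach:

```python
def expected_reach(reshare_rate=2.5, steps=10):
    """Expected cumulative exposures when each exposure triggers, on average,
    `reshare_rate` new exposures per sharing step (a crude branching process)."""
    total, current = 1.0, 1.0
    for _ in range(steps):
        current *= reshare_rate
        total += current
    return total

print(f"{expected_reach(steps=5):,.0f} exposures after 5 steps")    # ~162
print(f"{expected_reach(steps=10):,.0f} exposures after 10 steps")  # ~15,900
```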

While much work is being done on making LLMs explainable, there is not enough work on detecting the presence of LLM-based actors. Secondly, to make content moderation policies more forward-looking and holistic, we need an understanding of how the dynamics of a social network, as a complex system of users, interactions, and interventions, change or adapt under new perturbations.
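As a baseline sketch of detection (hypothetical toy data and a deliberately simple model; distinguishing modern LLM output from human text is far harder in practice and remains an open problem), one could start from a supervised classifier over text features of flagged accounts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts: 1 = suspected LLM-generated, 0 = human-written.
posts = [
    "Wow, totally agree with this, great point!",
    "As an avid follower of this topic, I find the discourse here fascinating and nuanced.",
    "lol no way, my cousin tried that and it blew up in his face",
    "It is important to consider multiple perspectives when evaluating this complex issue.",
]
labels = [0, 1, 0, 1]

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
detector.fit(posts, labels)

# Score a new post; real systems would also draw on behavioral and network signals.
print(detector.predict_proba(["Many users may find this perspective valuable."])[:, 1])
```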

This requires ways to simulate complex systems that represent our real-world networks, so that policies and mechanisms which intentionally promote or enforce human-aligned values can be designed and tested, rather than taking a wait-and-see approach after narrow, growth- and revenue-focused optimization algorithms, among other factors, give rise to unintended and tragic consequences.
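A minimal agent-based sketch of that idea (the parameters, network model, and intervention are all invented for illustration and not calibrated to any real platform) compares how far a piece of content spreads with and without a simple moderation policy applied during the spread rather than after it:

```python
import random
import networkx as nx

def simulate_spread(n=2000, share_prob=0.08, removal_rate=0.0, steps=20, seed=0):
    """Toy simulation: content spreads over a small-world network; `removal_rate`
    models an in-flight moderation policy that takes the content down for a
    fraction of exposed users each step (who may later be re-exposed)."""
    random.seed(seed)
    graph = nx.watts_strogatz_graph(n, k=8, p=0.1, seed=seed)
    exposed = {0}  # the originating user/post
    for _ in range(steps):
        newly_exposed = set()
        for user in exposed:
            for neighbor in graph.neighbors(user):
                if neighbor not in exposed and random.random() < share_prob:
                    newly_exposed.add(neighbor)
        exposed |= newly_exposed
        exposed -= {user for user in exposed if random.random() < removal_rate}
    return len(exposed)

print("reach without intervention:", simulate_spread())
print("reach with in-flight moderation:", simulate_spread(removal_rate=0.2))
```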

Given the astronomical scale of data and information generated by modern-day networks and platforms, this in turn requires research on building AI models that are not only aligned with human values and able to incorporate human-in-the-loop feedback, but that can also help design social mechanisms for value-aligned policy innovation.

 

– Dan Zhao – AI for Good Fellow

 


What if AI were developed to serve humanity rather than commerce?

 

We at the AI for Good Foundation create impact by bringing together a broad network of interdisciplinary researchers, nonprofits, governments, and corporate actors to identify, prototype, and scale solutions that engender positive change. Learn more here.