Anti-Social Media

When did social media become anti-social media? with Dr. Nitin Agarwal


Nitin Agarwal, Ph.D., Maulden-Entergy Chair and Distinguished Professor of Information Science, Director, Collaboratorium for Social Media and Online Behavioral Studies (COSMOS)


When did social media become anti-social media? And how? And why? For insights into the phenomenon that increasingly shapes what passes as “the public discourse” these days, we turned to a UA Little Rock professor who studies social media for the likes of the National Science Foundation, the Department of Defense, NATO, and the Department of Homeland Security. “Our research has shown that toxicity is highly contagious,” says Dr. Nitin Agarwal. “The moment you see a toxic comment, or toxic comments under a post, your inclination to post an equally or more toxic comment increases… When the discourse starts to become more and more toxic, our networks, our communities, start to fracture.”


Tell me how you came to choose social media as your career focus.

Even as a student in India, where I got my bachelor’s degree, I was fascinated by online or cyber behavior, particularly the bad behaviors. That’s why, during my Ph.D. studies at Arizona State University in Tempe, I started looking into Web science, social media, Facebook, blogs, and so on. Not many academicians were studying that kind of data back then, but I felt there was a need for students to be trained in this area. So, I planned for an academic career in Web science.

In 2009, when I was about to graduate, I started looking for faculty jobs. I visited various universities, interviewing and giving talks, but the opportunities I found were mostly in traditional computer science areas. Then I arrived at UA Little Rock, where I spoke with Dr. Mary Good, who was the founding Dean of what was then the George W. Donaghey College of Engineering and Information Technology (EIT). Dr. Good had already recognized Web science as one of the thrusts in the information sciences, which was very refreshing for me. As we talked, my eyes lit up and I’m sure she saw that. I thought that if the Dean here is already that visionary, it would give me that much more flexibility to grow this area. And that proved true even in my first semester here, when I proposed to my department chair, Dr. Elizabeth Pierce, that I create a course called Social Computing. Even today, that course is one of the most attended electives in our Information Science program.

You were an “early adopter” in this field. Are there other courses like this around the country now?

Yes, there are now several such courses, whether they’re called social computing or social media mining or social media data mining. Carnegie Mellon has a Ph.D. program called Societal Computing.

I was definitely an early advocate for this type of course, and I think part of that came from being at Arizona State, which at that time was struggling with the 2008/2009 financial crisis. Arizona was one of the hardest-hit states, so Arizona State University had its budget shrunk quite drastically, and it had to adapt through retrenchment policies, merging departments and colleges to create interdisciplinary centers. What I learned from being there at that time was the value of collaboration across disciplines: breaking those silos, breaking out of those boundaries between colleges that typically make it difficult for folks to collaborate. Because in social computing, or any emerging research area nowadays, the problems are inherently interdisciplinary.


At the time that you started this course, did you know that you would have so many research grants from industry and government?

I knew there was rapidly growing interest in this area. Starting in 2005, when I began doing this research, I was very fortunate to engage with the Department of Defense and the National Science Foundation through my then-Ph.D. advisor (Dr. Huan Liu), and we were able to get a few small grants, grants that would support a conference or a couple of graduate students. Those types of grants were just becoming available at that time.

Then NSF created a groundbreaking interdisciplinary program. It was called Socio-Computational Systems, SOCS for short. That program basically bridged two different directorates within NSF. One of the directorates was Computer and Information Science and Engineering (CISE), and the other was Social, Behavioral, and Economic Sciences (SBE). SOCS quite unconventionally cut across two directorates and therefore two very different disciplines, technology and society. I was very fortunate to be asked to review proposals for that program in its first year, the pilot phase. That gave me a good amount of face time with the program officers, and I started learning how to write a grant proposal. I wrote my first grant in the second year of that program, and I guess the stars in the universe aligned and I got that grant. It was a good, medium-size grant, almost $750,000 for three years.

Well, that’s a perfect segue—tell me why such entities as the Department of Defense and NATO are so interested in social media these days.

Social media is now an inseparable part of our lives. Many of us cannot imagine our world without being in touch with our friends and family, either through social media or some other digital tool. When I say social media, I’m talking about all the digital communication tools, whether it’s WhatsApp, Telegram, or other messaging platforms and forums.

There have always been cases in which social media was abused or exploited, whether to sow discord, create chaos, or undermine trust in a society’s democratic and scientific institutions. However, in recent years we have seen a rise in so-called deviant mobs, the weaponization of information, radical and extremist groups, propaganda dissemination, misinformation, fake news, and the like. These deviant behaviors affect the democratic societies of the world, which are known for their openness to viewpoints. Countries like Russia or China are known to selectively quash narratives through strict content moderation policies. However, such policies infringe upon freedom of speech. Therefore, adversarial actors, whether state sponsored or non-state sponsored, extremist groups or terrorists, find it easier to target the democracies of the world. They consider that openness a weakness and try to exploit it. We’re now seeing highly sophisticated ways of using social media to mount emerging cyber threats. The Department of Defense, NATO, and other agencies are interested in identifying these threat actors and their attack vectors so that we can shore up our defenses. After the Colonial Pipeline ransomware attack, we saw a lot of misinformation about gas shortages, which led to an unnecessary gas price surge, and that was all conducted through social media platforms. This is not new for us; we have been tracking disinformation (false information spread with intent to deceive) for over 15 years, whether it is meant to undermine NATO or the West. Adversarial actors often distort or manipulate historical and cultural facts and present them to the public to influence their beliefs and behaviors, something that can be thought of as transforming folklore into fakelore. This is the new asymmetric warfare, where the war of ideologies is fought with tweets, bots, and trolls as opposed to bullets, bombs, and missiles.

We have several projects with combined funding of over $15 million from an array of U.S. federal agencies, including the Army, Navy, Air Force, DARPA, the Department of State, and the National Science Foundation, as well as a long-term partnership between UA Little Rock and the Department of Homeland Security. These projects aim to develop capabilities that military operations need in order to manage and adapt to the information ecology and to better understand emerging socio-technical behaviors when confronted with civil conflicts or crisis situations, or when executing humanitarian assistance and disaster relief operations.

We aim to fill a critical research gap in understanding the social dynamics underlying deviant sociotechnical behaviors (e.g., stoking civil unrest, effecting civil conflict, disseminating propaganda, coordinating cyberattacks and cyber campaigns) to better support situation awareness, risk assessment, mission assessment, policy design (kinetic or non-kinetic), force protection, operation security, and overall mission effectiveness.

COVID-19 presented a unique scenario, where misinformation was pushed with a volume, velocity, and variety that was unprecedented compared to what we have seen in the last 15 years of our work in the misinformation and disinformation space. During COVID-19 we saw misinformation about masks, PPE kits, and fake vaccines, as well as outright conspiracy theories, such as claims that Bill Gates is responsible for the global pandemic or that vaccines contain chips that communicate through 5G towers and allow the government to control its citizens, among other equally outlandish claims. There has always been an audience for conspiracy theories, but what social media has done is connect such individuals and help create an echo chamber, where information and opinion diversity are left at the door. You enter that community, that echo chamber, with the notions and biases held by its members. That’s a perfect recipe for social non-cohesion, and it’s one of the biggest challenges we are facing as our cyber behaviors affect our real world. We worked with the Arkansas Office of the Attorney General to help combat the spread of COVID-19 scams and misinformation to protect Arkansans. Details of our efforts can be found at https://cosmos.ualr.edu/covid-19.

Many ongoing research studies at COSMOS (https://cosmos.ualr.edu/) advance our understanding of social behaviors as they manifest in cyberspace. This is a highly interdisciplinary research endeavor that lies at the intersection of social computing, behavior-cultural modeling, collective action, social cyber forensics, Artificial Intelligence, data mining, machine learning, smart health, and privacy. For instance, our research has shown that toxicity is highly contagious. The moment you see a toxic comment, or toxic comments under a post, your inclination to post an equally or more toxic comment increases. We’ve seen that on YouTube, on Twitter, on Parler, and on many other platforms. Our research also shows that when the discourse starts to become more and more toxic, our networks, our communities, start to polarize and ultimately fracture.
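To make that contagion finding concrete, here is a minimal sketch of how such an effect could be measured in comment threads, assuming a placeholder toxicity_score function (a crude stand-in for a real toxicity classifier, not the COSMOS pipeline): compare the toxicity of replies that follow a toxic parent comment with the toxicity of replies that follow a non-toxic one.

```python
# Sketch: estimating toxicity "contagion" in comment threads.
# toxicity_score() is a toy keyword heuristic standing in for a real
# comment-toxicity classifier; the thread data is invented for illustration.

from statistics import mean

TOXIC_WORDS = {"idiot", "stupid", "hate"}  # toy lexicon, illustration only

def toxicity_score(text: str) -> float:
    """Crude stand-in: fraction of words found in the toy toxic lexicon."""
    words = text.lower().split()
    return sum(w.strip(".,!?") in TOXIC_WORDS for w in words) / max(len(words), 1)

def contagion_gap(threads) -> float:
    """threads: list of (parent_text, [reply_texts]).
    Mean reply toxicity after toxic parents minus after non-toxic parents."""
    after_toxic, after_clean = [], []
    for parent, replies in threads:
        bucket = after_toxic if toxicity_score(parent) > 0.1 else after_clean
        bucket.extend(toxicity_score(r) for r in replies)
    return mean(after_toxic or [0.0]) - mean(after_clean or [0.0])

threads = [
    ("You are an idiot and I hate this.", ["Stupid take.", "I hate this thread."]),
    ("Interesting point, thanks for sharing.", ["Agreed!", "Nice summary."]),
]
print(f"contagion gap: {contagion_gap(threads):+.2f}")  # a positive gap suggests contagion
```

A positive gap on real data would indicate that replies to toxic comments tend to be more toxic themselves, which is the contagion pattern described above.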

But it wasn’t always this way with social media, was it?

No. When I first started preparing for my Ph.D., back in 2003, Facebook was still in its infancy. Twitter didn’t exist. There were very few social media apps at that time. Myspace was one of them, and it was the place to be. Musicians were on there, and the virtual groupies would hang out and listen to the music. You could chat with your favorite artists and so on. Because Myspace was so interest driven, the types of actors that we’re seeing now did not exist. Then along came Facebook, Twitter, blogs. They just blasted open the whole digital environment.

Social media was set up with the goal of democratizing content production. Previously it was mainly the news outlets or print media that produced the content, but through social media, everyone got a voice. Anyone can now start a blog, and that blog is instantly accessible. Anyone can be on Twitter. Anyone can post things on the Internet. YouTube gained popularity largely through funny cat videos that people had posted.

But as social media platforms got more and more popular, we observed how the ideal of democratization of the voice—that everyone will have a voice—had given way to domination by those with the largest megaphones. In other words, the voices with the most followers got heard the most. That’s basically what these social media companies promoted: virality over veracity.

That led to people wanting to be heard more, no matter what their content was. Again, given the openness of democratic societies, there was no way to address that problem; people had a right to speak. But this situation was exploited by all types of actors. People started realizing that the more bizarre their content was, the bigger the following they would get. So they promoted a lot of conspiracy theories, formed conspiracy theory groups on Facebook, got on YouTube, and produced these professional, Hollywood-style videos claiming that the earth is flat, and so on.

Another potent component of this situation is the anonymity that the Internet provides. When you’re out in the open, your identity is real. But on the Web, you can be anyone you want. In fact, we saw many cases in which female gamers pretended to be male so they could be taken seriously in online games.

But the Web also gives terrorists, extremists, and propagandists a safe haven to hide their identities and conduct nefarious acts, whether it’s cyber espionage or stalking defense personnel to try to steal secrets. We’ve seen many cases in which the Internet’s anonymity gives these terrorist or extremist groups opportunities to recruit people, whether it’s through Reddit or Twitter. During the rise of ISIS from 2013 to 2015, we saw how they were able to recruit from Europe, from the U.S., and from various parts of the world. People were posting questions on Reddit. “What should I bring if I want to join your group? Should I bring flip-flops? Should I bring some cash? What currency?” Such questions were elaborately answered.

How rewarding financially is this to the social media companies that promote the anger and the dissension? Shouldn’t they be responsible citizens and say, “No, we’re not going to do that”? Something is making them a lot of money.

Absolutely. You’ve hit the nail on the head. Each of these social media companies is, at the end of the day, a business corporation. They answer to their stakeholders. Every time they remove certain users from the platforms, if they happen to be influencers, all their followers move to another platform with them. All that mass migration happens, and the officers of the company have to answer to their stakeholders about why there’s a sudden drop in subscribers in the next quarterly report.

I believe that’s a balancing act that they have to perform. Freedom of speech means they have to make sure that all viewpoints are respected. At the same time, they have to ensure that the discourse doesn’t get too toxic or damaging to the society. And they also must make sure they’re not performing poorly on their quarterly reports, so that their stocks aren’t affected.

Facebook alone spends almost $400 million every year on regulating hate speech and toxic discourse on its own platform, which is still inadequate, as we’ve seen in recent weeks during the congressional hearings. Similarly, other platforms, like YouTube and Twitter, also invest quite heavily in making their platforms safer for everyone, but those efforts are generally seen as a catch-up act. Only when things have gotten worse do they start to react. There is certainly room for improvement.

Researchers like us, who sit outside of these “walled gardens,” are able to get a sneak peek at their data through their APIs and in various other ways. Imagine a tiny window into their operations. As researchers, we look through that window and are able to see quite a bit that is wrong in their algorithms, and how they could improve them further. The natural question is, why can’t they do more than what we can do, sitting outside?
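As one concrete illustration of that tiny window, here is a minimal sketch of pulling public comments on a single video through the YouTube Data API v3; the API key and video ID below are placeholders, and the usual quota limits and terms of service apply.

```python
# Sketch: a researcher's "tiny window" into a platform via its public API.
# Calls the YouTube Data API v3 commentThreads endpoint; API_KEY and VIDEO_ID
# are placeholders to be supplied by the researcher.

import requests

API_KEY = "YOUR_API_KEY"    # placeholder
VIDEO_ID = "SOME_VIDEO_ID"  # placeholder

resp = requests.get(
    "https://www.googleapis.com/youtube/v3/commentThreads",
    params={"part": "snippet", "videoId": VIDEO_ID, "maxResults": 50, "key": API_KEY},
    timeout=30,
)
resp.raise_for_status()

# Print like counts and the first 80 characters of each top-level comment.
for item in resp.json().get("items", []):
    top = item["snippet"]["topLevelComment"]["snippet"]
    print(top["likeCount"], top["textDisplay"][:80])
```

Even a small window like this is enough to study patterns such as toxicity or coordinated posting, which is how outside researchers audit what the platforms themselves keep opaque.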

How do algorithms pit large groups of people against each other?

That’s a great question, and answering it means diving into the technical know-how of algorithms, what they are, and how they work. We can think of algorithms as computer programs that try to make your life easier by suggesting things they think you’re interested in. They work by analyzing data from the Internet and finding similarities or patterns among people using machine learning and Artificial Intelligence techniques. For example, if you watch a movie that other people have also watched, the algorithm will recommend to you the other movies that they’ve watched.
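A toy sketch of that “people who watched this also watched” idea, using simple co-occurrence counts; this is illustrative only, not any platform’s production recommender.

```python
# Sketch: "people who watched this also watched" via item co-occurrence counts.
# A toy version of collaborative filtering; the data is invented.

from collections import Counter
from itertools import combinations

watch_history = {            # user -> set of movies watched (toy data)
    "ana":   {"Dune", "Arrival", "Interstellar"},
    "ben":   {"Dune", "Arrival"},
    "chloe": {"Arrival", "Interstellar", "Contact"},
}

# Count how often each ordered pair of movies is watched by the same person.
co_watch = Counter()
for movies in watch_history.values():
    for a, b in combinations(sorted(movies), 2):
        co_watch[(a, b)] += 1
        co_watch[(b, a)] += 1

def recommend(movie, seen, k=3):
    """Rank unseen movies by how often they co-occur with `movie`."""
    scores = Counter({b: n for (a, b), n in co_watch.items()
                      if a == movie and b not in seen})
    return [title for title, _ in scores.most_common(k)]

print(recommend("Dune", seen={"Dune"}))  # ['Arrival', 'Interstellar']
```

Production recommenders replace these raw counts with machine-learned models over far more signals, but the underlying logic of matching you to people with similar histories is the same.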

These algorithms are becoming more and more sophisticated. They’re constantly listening to every click you make, every keyboard tap. Essentially, they’re trying to capture as many of the digital breadcrumbs you leave behind as they can while you browse the Internet.

Digital breadcrumbs. I like that.

Either the Internet companies (Amazon, Facebook, YouTube, etc.) collect these digital breadcrumbs that you leave on different websites as you travel, or they subscribe to various services that collect and sell these breadcrumbs. Algorithms are the programs that sift through all this information and catalog or categorize it, identify similarities between people to develop predictive models of user behaviors, and forecast or recommend what we may like to read, watch, purchase, eat, etc.
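A minimal sketch of turning those breadcrumbs into a simple interest profile; the event fields and weights below are hypothetical assumptions, and real trackers log far richer signals.

```python
# Sketch: aggregating raw "digital breadcrumb" events into an interest profile.
# Event fields and weights are hypothetical; real pipelines use far more signals.

from collections import Counter

events = [  # toy clickstream: (user, action, topic)
    ("u1", "click", "flat-earth"),
    ("u1", "watch", "flat-earth"),
    ("u1", "click", "cooking"),
    ("u2", "watch", "cooking"),
]

WEIGHTS = {"click": 1.0, "watch": 3.0}  # assumption: watching signals more interest than clicking

def build_profile(user):
    """Sum weighted interactions per topic for one user."""
    profile = Counter()
    for u, action, topic in events:
        if u == user:
            profile[topic] += WEIGHTS.get(action, 0.5)
    return profile

print(build_profile("u1"))  # Counter({'flat-earth': 4.0, 'cooking': 1.0})
```

A profile like this is exactly the kind of input a predictive model uses to forecast what we may want to read, watch, or purchase next.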

Now getting back to your question: How do these algorithms pit one against the other? These algorithms are designed to learn and mimic the behaviors we leave behind in these so-called digital breadcrumbs. Whatever inherent biases we have are reflected in our keystrokes, in the data that we’re leaving on these websites. And those biases are picked up by the algorithms and perpetuated further.

In our digital communities, members are often like-minded; as we know, birds of a feather flock together. Similar content is shared among community members, with little room for opinion diversity. Moreover, conversations across communities (particularly those with different opinions) are comparatively infrequent. Over time, these algorithms start picking up those community habits. So, the content recommendation algorithm will suggest more of the same to a community while recommending fewer articles or news stories that are popular among members of a different community, essentially creating echo chambers. Over time, this phenomenon makes the communities more and more polarized. And that’s the problem.
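To see how that plays out, here is a toy simulation under an assumed recommender that nudges each feed a little further toward whatever the user already consumes most; the numbers are illustrative, not measured values.

```python
# Sketch: a toy echo-chamber feedback loop.
# p is the share of a user's feed drawn from their own like-minded community.
# The assumed recommender boosts whatever the user already engages with most,
# so p drifts upward round after round. All numbers are illustrative.

def next_share(p, nudge=0.15):
    """One recommendation round: the tilt grows in proportion to the current tilt."""
    return p + nudge * p * (1.0 - p)

p = 0.55  # start only slightly tilted toward one's own community
history = [p]
for _ in range(20):
    p = next_share(p)
    history.append(p)

print(" -> ".join(f"{x:.2f}" for x in history[::5]))  # 0.55 -> 0.72 -> 0.85 -> 0.93 -> 0.97
```

Even with a modest nudge each round, cross-community content all but disappears from the feed, which is the polarization dynamic described above.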

Is there any involvement with a human every day to change things? Or once it’s programmed, you just stand back and get out of the way?

As more and more people recognize the adverse effects of these algorithms on our society, there’s an active interest in regulating bias and increasing transparency. Due to trade secret and intellectual property issues, Internet companies keep their algorithms opaque. Some of these algorithms use thousands of attributes or features in their machine learning/AI models. Researchers do not have full knowledge of the ingredients or the recipe of these algorithms. Therefore, the academic community is pushing for efforts like fairness in AI, transparency in algorithms, reducing bias in algorithms, and so on. COSMOS is actively working on measuring the bias of these algorithms, because the first step to solving a problem is to recognize the problem. The bias index that COSMOS is developing will allow Internet companies and users alike to reduce the adverse effects of the algorithms.
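The COSMOS bias index itself is not spelled out here, but as an illustration of the general idea of auditing a recommender from the outside, one very simple diversity measure is the share of recommended items that come from outside a user’s own community. Everything below is a hypothetical example, not the COSMOS metric.

```python
# Sketch: one simple recommendation-diversity measure for auditing a feed
# from the outside. This is NOT the COSMOS bias index; the data is hypothetical.

def cross_community_share(recommended, item_community, user_community):
    """Fraction of recommended items that come from outside the user's community."""
    if not recommended:
        return 0.0
    outside = sum(1 for item in recommended
                  if item_community.get(item) != user_community)
    return outside / len(recommended)

item_community = {"a1": "left", "a2": "left", "a3": "right", "a4": "right"}
feed_for_left_leaning_user = ["a1", "a2", "a2", "a3"]

print(cross_community_share(feed_for_left_leaning_user, item_community, "left"))  # 0.25
```

A score near zero means the feed almost never crosses community lines; tracking such a number over time is one way outsiders can make an opaque algorithm's bias visible.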

Algorithms are developed by computer scientists and computer engineers, more specifically those working in machine learning and artificial intelligence. It is important that the technologists we’re producing have a good understanding of ethics in computing and AI to ensure algorithms have ethical boundaries. Therefore, academics like us introduce ethics in computing early on in the curriculum. When our students graduate and join Google or Microsoft, they understand the effects of these algorithms on society and can proactively evaluate costs and benefits.
