In the wake of the Texas school shooting, social media must do better

May 26, 2022

By Kim Komando

The tragic mass shooting at Robb Elementary School in Uvalde, Texas, highlights an issue we’ve seen before, and it doesn’t seem to go away: Social media is shirking its responsibility in mass shootings and other tragedies. Social media has to make changes, and you and I can force these companies to do it.

We now know that the Texas school shooter, Salvador Ramos, was very active on social media before going on a rampage, killing at least 19 students and two teachers. Early reports suggested the posts were public. Texas Governor Greg Abbott held a press conference Wednesday regarding the shooting, and he brought up the shooter’s Facebook activity.

“There was no meaningful forewarning of this crime other than what I’m about to tell you,” Abbott said. “As of this time, the only information that was known in advance was posted by the gunman on Facebook approximately 30 minutes before reaching the school.”

The governor listed the three posts: “I’m going to shoot my grandmother,” then “I shot my grandmother,” and, about 15 minutes before the gunman reached the school, “I’m going to shoot an elementary school.”

It turns out that the posts were direct messages between Ramos and another Facebook user. Andy Stone, whose LinkedIn profile lists him as Communications Director at Meta, responded to a Twitter user that the messages were “private one-to-one text messages that were discovered after the terrible tragedy occurred.” He also wrote that Meta is working with law enforcement in the investigation.

There’s no excuse for Meta’s failure to take immediate action on these direct messages. Meta failed. Children and their teachers died.

https://twitter.com/andymstone/status/1529524398105014274?s=20&t=3hUwc7RmZAj2yW1qlmyrVg

Ramos’ frightening Instagram messages

The social media connection doesn’t end there. The Daily Dot reported that after the shooting, screenshots of Ramos’ Instagram account were being shared. Instagram has since removed the account — another failure in social media.

Ramos posted an Instagram story depicting an image of two AR-style semi-automatic rifles days before the shooting. He tagged a girl in the story, who then posted screenshots to her own story of a private conversation with Ramos.

“I’m about to,” Ramos wrote. The girl, reportedly a minor, asked what he was going to do. He responded, “I’ll tell you before 11.” Ramos then wrote that he would text her in an hour and demanded that she respond.

Ramos then wrote, “I have a little secret I want to tell you,” followed by a smiley face emoji covering its mouth. “Be thankful I tagged you.” He ended the conversation with, “Ima air out.”

In her story, the girl wrote that she didn’t know Ramos and only responded because she was scared. Knowing what happened next, it’s fair to say this person is going to be haunted for the rest of her life.

Social media’s role

We’ve seen the link between social media and mass shootings many times before, and the last incident was less than two weeks ago. On May 14, a man killed 10 people in a grocery store in Buffalo, New York. The shooter used his Twitch account to livestream the horrific incident.

Amazon-owned Twitch claimed it was able to identify and remove the stream in less than two minutes and has permanently banned the user. The company is also monitoring and removing any accounts rebroadcasting the incident.

“We have a zero-tolerance policy against violence of any kind, and we use several mechanisms to detect, escalate, and remove violence on Twitch,” the company wrote. “This includes proactive detection, 24/7 review and urgent escalations for your user reports.”

Facebook parent company Meta has posted on its own efforts to detect harmful content, including violence. In its Community Standards Enforcement Report from the first quarter of 2022, the company claimed that it “took action” on “21.7 million pieces of violence and incitement content, which was an increase from 12.4 million in Q4 2021, due to the improvement and expansion of our proactive detection technology.”

Meta has also said that it can scan private messages on Facebook and Instagram for harmful content, such as images of child exploitation. That clearly wasn’t the case here. Why this massive failure?

Social media needs to do better

With all the money companies like Meta spend on marketing and fun new features, why don’t they set more aside to improve detection technology? Surely one of the biggest companies in the world, hosting billions of active users, has a responsibility that goes beyond its platform.

In 2021, Meta brought in more than $39 billion in net income. Remember, this amount is what the company made after all salaries and expenses. With more than 70,000 employees and billions of dollars to spend, there’s no justifiable reason these messages — and others through the years — didn’t trigger alarm bells.

Sure, words can be taken out of context. Jokes, song lyrics, and movie quotes can be detected as threats. Artificial intelligence has its limits — but that’s not a valid excuse anymore. There’s just too much money and power flowing through social media to ignore this.

If their algorithms can figure out when I’m going to be most interested in reading a news story or what I want to eat for breakfast, they should be able to figure out when someone poses a threat to others.

The Buffalo gunman’s livestream was removed within a couple of minutes, but not before it was shared worldwide. Before the shooting, he was posting a racist manifesto on Discord. Social networks know your favorite band and what type of car you drive, but they can’t detect things like this?

Privacy versus risk

Facebook and Instagram messages will soon be encrypted, which means not even Meta will be able to read them. WhatsApp (also owned by Meta) already has end-to-end encryption.

I believe safety should take precedence over absolute privacy. Lives are at stake. The technology isn’t there yet, but there is certainly enough money and brainpower at places like Meta to get the job done. Come on, Zuckerberg, put this money toward fixing your AI in the real world and not some lame virtual Metaverse world.

I’m talking about AI that looks for patterns: If a person suddenly starts using certain phrases, the account gets flagged. Remember, Meta and other social media companies are private companies that have a responsibility to our society.

Right now, they’re just profiting off all of us and making excuses for a job so very poorly done.

Make your voice known

Violent events have been preceded by very public posts. If you see something concerning, don’t count on another person, a company or an algorithm to take action. Reach out to authorities. You may save lives.

The Texas shooter posted an Instagram story with an image of two rifles. While such posts may not be cause for concern on their own, his subsequent communications certainly were. I can only imagine how the girl who messaged with Ramos before the shooting must feel.

Beyond keeping vigilant on social media, I urge you to reach out to your elected officials and tell them you’ve had enough. I already have.

Special thanks to Albert Khoury for his reporting on this story.
