Social Media

The Dangers of Domestic Disinformation

Here are some statistics that highlight the ballooning problem of disinformation: Facebook took down 3 billion fake accounts in 2019. Three billion. One study suggested that 15% of Twitter’s 330 million monthly users are bots. Bots have a massive multiplier effect on disinformation because they are far more prolific than humans, tweeting hundreds of times a day. Some studies estimate that more than 60% of Trump’s 80+ million followers are bots.

People often talk about how we should worry about Russian trolls on social media, but the fact is that domestic disinformation is what is running rampant. Americans are intentionally feeding other Americans false or misleading information about Covid-19 and the George Floyd demonstrations, along with other conspiracy theories, and we are going to see much more of it as the election approaches.

As a parent, what can you do to help your kids navigate all of this “fake news”? First, recognize that many conspiracy theories are very seductive. We are often drawn to a particular explanation because it fits with our own (sometimes hidden) prejudices and biases. Second, you and your kids need to learn to vet information and not be satisfied with whatever comes up in the first few entries of a web search. Be prepared to search out and read different viewpoints on a topic to get at the facts.

Trust in Social Media Platforms Waning

A recent study conducted by OpenX and The Harris Poll points to shifting consumer sentiment regarding social media platforms. The study found that 61% of respondents first turn to search engines like Google to discover "high-quality content," while 31% first turn to major platforms like Facebook, Instagram and YouTube. Compared to last year, 31% say they use Facebook less, while 26% say they'll decrease their Facebook time going forward.

Screen Time and the Pandemic – The Psychological Effects

While some research suggests that children and younger people are less likely to have their health directly impacted by the coronavirus, experts expect they will experience indirect health care-related effects, such as missed detection of delayed developmental milestones, widespread omission of routine childhood vaccinations and delays in seeking care for illnesses unrelated to the virus, researchers reported in the Canadian Medical Association Journal. In addition to these indirect health risks, the findings also showed that social and mental health could be affected by pandemic-related factors, including reduced support for children with special health care needs; lost social interaction leading to increased screen time; school cancellations that may worsen food insecurity; and forced isolation that can deepen screen addiction.

Generation Z’s Digital Interaction Increases During Pandemic

For parents, this is probably common knowledge by now, but the coronavirus pandemic and the ensuing lockdown have had a significant effect on Gen Z's digital behaviors. According to a report issued by Boston Consulting Group and Snapchat, Generation Z’s use of social media, video streaming and gaming has increased, along with its online spending. The report also highlights Gen Z's increased reliance on mobile-focused video and social platforms such as Facebook, YouTube and Snapchat.

Facebook to Identify Content from State-Run Media

Facebook says it will start labeling content produced by at least 18 government-controlled news outlets, including Russia's RT and China's Xinhua News. The social platform will also begin labeling ads from these outlets and plans to block their ads in the US in the near future. This is a bit of a reversal for Facebook, which has not been willing to label misinformation or election-related material.

New TikTok Policies Aimed at Supporting Black Creators

TikTok is responding to accusations that it censors Black creators by launching a creator diversity council, assessing moderation strategies, developing a new appeals process and starting a creator portal that includes information for the broader TikTok community. The social platform also apologized for a system error that made posts with #BlackLivesMatter and #GeorgeFloyd appear as though they had zero views and pledged to donate $3 million to nonprofits serving Black communities.

TikTok Expanding Content

TikTok, a favorite social media platform of tweens and teens, is evolving from short-form quirky clips to live video and content covering everything from sports and gaming to cooking, fashion and beauty, says Bryan Thoensen, head of content partnerships. The social platform expects to see expanded educational content, which would boost users' time on the app while helping creators monetize their efforts and generating more ad dollars, he says. So expect that your children may be spending even more time on the app.

The “Freedom of Reach” Question

A new term – “freedom of reach” – is in circulation among those who are concerned about how social media sites handle misinformation and inflammatory comments. Snapchat is the latest to try to answer the question of “freedom of reach versus freedom of speech,” after Twitter decided to label tweets from President Trump that it considers misleading or “glorifying violence” and Facebook agonized but decided to do nothing. Snapchat’s approach is to no longer promote President Trump’s verified Snapchat account. His account, RealDonaldTrump, will remain on the platform and continue to appear in search results. But he will no longer appear in the app’s Discover tab, which promotes news publishers, elected officials, celebrities, and influencers. “We are not currently promoting the president’s content on Snapchat’s Discover platform,” Snapchat said in a statement. “We will not amplify voices who incite racial violence and injustice by giving them free promotion on Discover. Racial violence and injustice have no place in our society and we stand together with all who seek peace, love, equality, and justice in America.”

Since Snapchat is one of the social media sites used mainly by teens and young adults, the fairness of this “freedom of reach” approach is something you might want to discuss with your children in the context of misinformation online. Snapchat isn’t deleting Trump’s account, and he is free to keep posting to his existing followers. But to the extent that his Snapchat following grows in the future, it will be without Snapchat’s help. In Snapchat’s terms, the company has preserved Trump’s speech while making him responsible for finding his own reach. Trump’s campaign thinks this approach is unfair, but Snapchat has neatly sidestepped questions of censorship by not censoring the president at all. Instead, the company has said that if you want to see the president’s snaps, you’ll have to go look for them on your own time.

Watch Out for Deepfake Videos and Images

Here is another vocabulary term to add to your lexicon – deepfakes. Deepfakes are convincing fake videos, images and audio – sometimes of people who never existed – often built from material pulled from social media accounts and used for extortion, misinformation and disinformation. Deepfake technology enables anyone with a computer and an Internet connection to create realistic-looking photos and videos of people saying and doing things that they did not actually say or do. Cybercriminals are increasingly interested in using deepfake videos to pressure people into paying ransom or divulging sensitive information, or to spread misinformation, Trend Micro reports, making the vetting of any information online or in the media even more important.

Understanding Section 230 of the Communications Decency Act

It will be very interesting to see what effect the Executive Order that President Trump recently signed targeting Section 230 of the Communications Decency Act has on cyberbullying and misinformation online. While you may never have heard of this section of the law, it was created nearly 25 years ago to protect Internet platforms from liability for many of the things third parties say or do on them. But now it is under threat from President Trump, who hopes to use the order to fight back against the social media platforms he believes are unfairly censoring him and other conservative voices. Some critics say he is trying to bully these platforms into letting him post anything he wants without correction or reprimand, even when he has broken a site’s rules about posting bullying comments or questionable information.

In a nutshell, Section 230 says that Internet platforms that host third-party content (for example, tweets on Twitter, posts on Facebook, photos on Instagram, reviews on Yelp, or a news outlet’s reader comments) are not liable for what those third parties post (with a few exceptions). For instance, if a Yelp reviewer were to post something defamatory about a business, the business could sue the reviewer for libel, but it couldn’t sue Yelp. Without Section 230’s protections, the Internet as we know it today would not exist. If the law were taken away, many websites driven by user-generated content would likely shut down. As Senator Ron Wyden, one of the authors of Section 230, puts it, the law is both a sword and a shield for platforms: they are shielded from liability for user content, and they have a sword to moderate it as they see fit.

That doesn’t mean Section 230 is perfect. Some argue that it demands too little accountability from platforms, allowing some of the worst parts of the internet — think cyberbullying that parents or schools struggle to have removed, or misinformation that stays online for all to see with little recourse — to flourish along with the best. Simply put, Internet platforms have been happy to use the shield to protect themselves from lawsuits, but they have largely ignored the sword that lets them moderate the bad content their users upload. It is also worth remembering that cyberbullying accounts for less than one tenth of one percent of all online traffic, but it is still important for these sites to acknowledge their role and do more about it.

All that said, this protection has allowed the Internet to thrive. Think about it: Websites like Facebook, Instagram, and YouTube have millions and even billions of users. If these platforms had to monitor and approve every single thing every user posted, they simply wouldn’t be able to exist. No website or platform can moderate at such an incredible scale, and no one wants to open themselves up to the legal liability of doing so. But if that free flow of information and creativity goes away, our online world will be very different.

So where do we stand? While the executive order sounds strict (and a little frightening, with the government making “watch lists” of people who post or support “certain kinds” of content), legal experts don’t seem to think much — or even any — of it can be backed up, citing First Amendment concerns. It’s also unclear whether the Federal Communications Commission has the authority to regulate Section 230 in this way, or whether the president can change the scope of a law without any congressional approval.
