Over the past four years there has been much discussion around social media (free-to-use communication services controlled by Silicon Valley) and its role in influencing public opinion. The main points brought into question were the responsibility of seemingly neutral platforms to screen user-generated content, how they should frame illicit content without losing public support, and the risk of empowering smaller communication services as a side effect of content filtering.
Before I continue, I want to make clear that there is no complete solution for people gaming computer systems. As an example, consider the Google Doodle for December 2020, which searched for the terms “December Global Holidays”. The top result was the video “December Global Holidays 2020 | List of holidays and festivals in December 2020 and full details”. Without knowing all the details of the ranking algorithm, one can assume this person managed to rank higher because they repeated the phrase “December Global Holidays” about 50 times in the span of 5 minutes; the phrase is a bizarre combination that ordinary people rarely search for, so no other video tried to rank for it. No matter how much platforms try to clean up user content with machine systems, there will always be cracks, because computers are deterministic and have almost no ability to deduce the quality or intention of content.
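As a sketch of why repetition games rankings, consider a toy relevance score based purely on term frequency. The scoring function and the sample texts below are hypothetical, not how YouTube or Google actually rank, but they show the failure mode: a naive frequency-based ranker cannot tell quality from stuffing.

```python
from collections import Counter

def score(query: str, document: str) -> int:
    """Toy relevance score: count how often each query term appears
    in the document. Real search engines use many more signals;
    this is a deliberately naive sketch."""
    words = Counter(document.lower().split())
    return sum(words[term] for term in query.lower().split())

query = "december global holidays"

# An honest description mentions the terms once or twice.
honest = "a list of holidays and festivals in december"

# Keyword stuffing: repeat the target phrase 50 times.
stuffed = " ".join(["december global holidays"] * 50)

print(score(query, honest))   # small score
print(score(query, stuffed))  # inflated score from pure repetition
```

Under this toy metric the stuffed document wins by a wide margin, even though it carries no information. Any ranker that rewards raw term frequency without a quality signal is open to exactly this attack.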
Text does not have a strong psychological effect; it spreads more slowly and requires active attention to process. Images and audio are media-rich, can be consumed passively, and produce a stronger psychological response. People are more willing to watch an hour-long video than read a 10-minute article on the same subject. Often people play the same music on repeat while at work, consuming its messaging passively to the point of being able to recite from memory the songs they spent hours half-listening to. Text content is relatively harmless as a tool of mass influence, while audio and video can stir people's spirits to a far greater degree, which is why most attention will probably go toward reducing the spread of illicit content in the form of video and audio files.
No matter how we look at it, real-name policies work. The requirement to verify yourself on communication services was effective in Korea at reducing uninhibited speech, and I believe many took notice. Naturally, when people have their face and name out in the open, they temper their actions out of fear of social consequences. Policies that increase the fear of social shaming or social reprisal will certainly become more common in the future as a method of steering dialog online.
We are seeing a pickup in discussion about repealing Section 230, which protects interactive applications from liability for user content. In most cases people would be upset at such a suggestion, but we live in a reality where social media platforms spent their creative efforts not on increasing their search capabilities, improving the number of services they provide, or building new innovative technologies, but on making their platforms as addictive as possible. Naturally the general public will be happy to see these platforms limited in some way, but the repeal of Section 230 would also strip protections from smaller providers, which would create a very sterile web.