
EDITORIAL: Standards of decency in the Information Age

A teenager in Columbus, Ohio, is accused of livestreaming a friend’s sexual assault.

Using the app Periscope, the teenager allegedly filmed as a mutual acquaintance assaulted one of her friends.

According to the Guardian, the stream was noticed by one of the filmer’s friends in another state, who reported it to the authorities.

Incidents of sexual assault, such as the rape of a student this past Little 500 weekend, should be handled according to the responsibility of those involved.

But when assault happens on the other side of the screen in the digital age, it raises the question of whose responsibility it is to find inappropriate and illegal content on social media platforms such as these.

Companies operating social media platforms retain the right to censor content on their apps and websites.

We do not mean to pass judgment on what types of content should be taken down from websites.

That is a decision each company must make in weighing its brand and how its customers perceive it.

Some corporations will be more interested in putting forward a family-friendly image, while others will want a reputation for free thought and expression.

Our purpose here is to discuss how blatantly illegal content, such as the livestreamed assault described above, should be handled.

It is impractical to expect social media companies to sort through all the content produced on their platforms every day; the sheer volume of data generated makes such a task impossible.

While bots might make a first pass feasible, the process would still require a human to make the final decision on whether something should be taken down.

Profit-motivated companies are likely unwilling to do such a thing, and any law requiring them to do so would dance dangerously close to violating the First Amendment.

Instead, what happened in this case seems to be the best way to handle these sorts of issues.

Responsible reporting by other users is likely the most efficient, and the only practical, way to monitor the mass of content created on social media every day.

Users within a poster’s own social network are best equipped to decide when to report inappropriate content.

Speaking in generalities, a poster’s friends should be best equipped to make the judgment call about whether their friend is being hyperbolic or actually making a real threat.

This is something bots would not be capable of doing.

Additionally, this sort of volunteer monitoring would be free for companies.

Once content is reported, a company’s brand and image come into question, so those within the corporation would be willing and able to make the call on whether it needs to be removed.

Such a system is not necessarily ideal; other users may share the mindset of those posting illegal content, or may simply not care enough to alert others.

However, it seems to be the best solution, given that the alternatives are expensive, unlikely to be widely adopted by companies and possibly illegal if required by statute.
