Digital Access

In a case study on the digital divide in San Francisco's Chinatown, Emily Hong reports that while 88% of San Francisco residents have internet access at home, only 56% of Chinatown residents do.

This is not the only instance of the digital divide in the United States, which is striking for a nation that prides itself on being tech-savvy and tech-driven. In my own case study, I wrote about a similar situation in Detroit, and the numbers were comparable: according to Forbes, 4 in 10 residents of Detroit lack broadband access. Nor is Detroit the only major city with this problem. According to Drexel University, which is helping to tackle the issue in Philadelphia,

But for as many as 41 percent of Philadelphians, that’s not the case. Just less than half of city residents lack basic skills and consistent access to connect online with their communities and the economy.

What is interesting about the San Francisco case, though, is that Asian Americans are not typically stereotyped as technologically illiterate; as Hong points out, the stereotype is the opposite. And while there are many statistics about Hispanic and African American households lacking internet access, those same statistics apparently do not exist for Asian Americans.

The digital divide is not just a technology issue, though. It's a human rights issue. At least, according to the United Nations, it is. An article by Business Insider reports,

Due to the lack of access and suppressive tactics by certain governments, the United Nations (U.N.) has declared that “online freedom” is a “human right,” and one that must be protected.

Without internet access, Americans are unable to apply for jobs, and students are unable to finish homework or do research for projects, causing them to fall behind in school.

Luckily, there are things we can do in our own country to help fix this problem. In fact, some things are already being done: Comcast is making strides toward a solution.

In 2011, Comcast launched Internet Essentials, one of the most ambitious and comprehensive programs ever developed to bridge the digital divide for families lacking Internet access at home. The initiative aims to help low-income families overcome the primary barriers to broadband adoption.

If a family meets the requirements, they can get internet at a reduced, affordable cost, so that their children do not suffer the consequences of the digital divide.

Hopefully, more companies will start similar programs, so that internet access truly becomes a human right.

Fake News

In the article "Why Facebook should hire a chief ethicist: Column," Professor Heider informed me of something I was not aware of: Facebook does not have a chief ethicist. After creating my own code of ethics and reading about others, it caught me by surprise to learn that companies like Facebook, Google, and Apple do not have one. Although such companies are built on code and computer-generated content, we cannot forget the most important aspect of the digital world: it is operated by humans. The content is created by humans. It is consumed by humans. It is used by humans.

And we have had codes of ethics since the beginning of our existence; every civilized society has had its own. So why is it that our most "sophisticated" society, the digital world, does not have one? This does not add up.

While some people don’t agree that hiring a Chief Ethicist is the way to fix Facebook’s fake news problem, I think it’s a step in the right direction. Anna Hoffman wrote a piece commenting on Professor Heider’s article and argued,

“Rather than position the problem as one of “bringing” ethics to companies like Facebook via a high-powered, executive hire, we should position it as challenging the structures that prevent already existing collaborations and ethically sound ideas from having a transformative effect.”

I don't agree with that statement. There is a lot of talk about how the spread of fake news may have affected the presidential election. During previous elections, this was not as big an issue, partly because social media wasn't as big as it is today, and partly because the election wasn't as publicized and tense. This time around, news sites that either do not fact-check or fabricate stories for pure amusement posted articles about the presidential candidates. These articles were just outrageous enough to catch attention, but not so outrageous that the average person would disbelieve them. Therefore, they went viral.

This led to the sharing of fake information and "fake news." If a group of unbiased humans, rather than an algorithm, had helped filter what trended on social media, maybe this would not have happened. They could have fact-checked the articles that were gaining traction and, if the stories were false, kept them from showing up on Americans' news feeds. We don't need algorithms to solve the ethical dilemmas of social networking. We need humans.
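To make the idea concrete, here is a minimal sketch of what such a human-in-the-loop filter might look like. Everything in it (the share threshold, the names, the review step) is my own illustrative assumption, not any real platform's system:

```python
# A hypothetical sketch of a human-in-the-loop trending filter.
# Every name here (Article, SHARE_THRESHOLD, human_fact_check) is
# illustrative -- this is not any real platform's system.

from dataclasses import dataclass

SHARE_THRESHOLD = 10_000  # assumed: shares before a story can trend

@dataclass
class Article:
    url: str
    headline: str
    shares: int
    verified: bool = False  # set to True only after human review

def human_fact_check(article: Article) -> bool:
    """Stand-in for a human reviewer's verdict. A real system would
    route the story to a review queue and wait for a decision."""
    print(f"Queued for human review: {article.headline}")
    return False  # stays out of the feed until a human approves it

def trending_feed(candidates: list[Article]) -> list[Article]:
    trending = []
    for article in candidates:
        if article.shares < SHARE_THRESHOLD:
            continue  # not viral enough to need review yet
        if article.verified or human_fact_check(article):
            trending.append(article)  # only vetted stories surface
    return trending
```

The point of the sketch is simply that virality, not truth, triggers review: anything spreading fast enough gets held until a person has checked it.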

Search Engine Influence

In the article "What Happened When Dylann Roof Asked Google For Information About Race?" the author goes in depth into how Google may have helped influence Dylann Roof to commit a hate crime – specifically, murder. The theory is that he Googled phrases such as "black on white crime," and the results convinced him that African Americans were not only inferior but did not deserve to live.

The idea that Google search can influence our minds is not only interesting but true. For example, when I Google something, I always click on the first or second result that pops up, automatically assuming those first results are the most accurate. Researchers refer to this as "the search engine manipulation effect." In one article, Epstein and Robertson detail how this effect can change the outcome of something as consequential as a presidential election. In their abstract, they state,

Internet search rankings have a significant impact on consumer choices, mainly because users trust and choose higher-ranked results more than lower-ranked results. Given the apparent power of search rankings, we asked whether they could be manipulated to alter the preferences of undecided voters in democratic elections.
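Their claim rests on position bias: higher-ranked results draw disproportionately more clicks. A toy model makes the effect vivid; the 1/rank click weighting below is my own assumption for illustration, not Epstein and Robertson's methodology:

```python
# A toy model of position bias: assume the chance of clicking a result
# is proportional to 1/rank. The 1/rank weighting is an illustrative
# assumption, not the actual methodology of the SEME paper.

def click_share(num_results: int = 10) -> list[float]:
    weights = [1 / rank for rank in range(1, num_results + 1)]
    total = sum(weights)
    return [w / total for w in weights]

for rank, share in enumerate(click_share(), start=1):
    print(f"rank {rank}: {share:.1%} of clicks")

# Under this model the #1 result alone draws roughly a third of all
# clicks, so whoever controls the ranking controls most of the exposure.
```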

The thought that something as simple as a search engine can influence the outcome of an entire election is mind-blowing.

Even more mind-blowing, though, is the idea that something like a search engine can convince someone to massacre multiple people. While I can understand how the internet can sway people's political views, I do not understand how it can convince someone that an entire race is bad.

Additionally, it is fascinating that Google has had to adjust its algorithms because hateful, racist search queries were surfacing. What comes up in Google's autocomplete feature is based on numerous factors, including how often a query is searched and the "freshness" of the search.

Adding one letter to make it “b-l-a-c-k o-n w,” the top autocomplete suggestion remained the same, and the second was “black on white violence.” The third and fourth were “black on white crime statistics” and “black on white racism.”

Interestingly enough, the same is not true when searching "white on white."

The top autocomplete results for “w-h-i-t-e o-n w” were “white on white crime,” “white on white,” “white on white acid” and “white on white kitchen.”

This seems to point to a simple fact: the world is not in a post-racial state quite yet. Our Google searches prove it. And while it's a positive that Google tries its best to filter hate out of its autocomplete feature, it is alarming that it has to filter it at all.
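Going only by the two factors the article names, frequency and freshness, here is a hedged sketch of how such a suggestion ranker might score queries. The exponential decay and the one-week half-life are assumptions for illustration, not Google's actual algorithm:

```python
# A hedged sketch of an autocomplete ranker that weighs only the two
# factors named above: raw frequency and "freshness". The decay formula
# and one-week half-life are assumptions, not Google's actual algorithm.

HALF_LIFE_DAYS = 7.0  # assumed: a query's freshness halves weekly

def score(times_searched: int, days_since_last_search: float) -> float:
    freshness = 0.5 ** (days_since_last_search / HALF_LIFE_DAYS)
    return times_searched * freshness

def autocomplete(prefix: str, query_log: dict, limit: int = 4) -> list:
    """query_log maps query -> (times searched, days since last search)."""
    matches = {q: score(*stats) for q, stats in query_log.items()
               if q.startswith(prefix)}
    return sorted(matches, key=matches.get, reverse=True)[:limit]
```

Under a scheme like this, a hateful query that is searched often and recently will keep resurfacing on its own, which is why filtering it out requires deliberate intervention.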


RIP Trolling

The phenomenon of trolling is fascinating. What makes people want to go onto social media and be anti-social? What makes them want to log on and piss off tens, hundreds, sometimes even thousands of people at once? In her article "LOLing at tragedy," Whitney Phillips delves into the topic of internet trolling.

Phillips and other researchers seem to have arrived at a similar explanation: those who troll memorial pages tend to target the same type of people – those who did not know the deceased.

Phillips summarizes this theory in another article, titled "Laugh It Off." She states,

Although some trolls deliberately targeted the friends and family of the deceased, most focused on what they described as “grief tourists”—Facebook users who did not know the victim and who, according to the trolls, could not possibly be in mourning.

While it is ultimately terrible to make light of such tragedies, it is hard not to almost agree with this logic. A few years ago, my brother Nick passed away. The tragedy swept across Indiana, particularly my region, and many people reached out to me and my family to send their condolences. What was fascinating, though, was the number of people who posted about him despite not knowing me or my family – or despite being blatantly disliked by Nick when he was alive. It seemed as if they were sharing their condolences for themselves rather than for Nick or my family.

It somehow felt as if they wanted to be a part of the tragedy.

What is more fascinating, though, is how the law is finally starting to catch up with technology, particularly with Facebook trolls – especially those who are not hiding behind fake profiles. In one recent case, a man screenshotted a woman's Tinder profile, in which she had quoted a rap lyric: "type of girl to suck you dry then eat some lunch with you." He slut-shamed her, and eventually her friends came to back her up. In the comments, he and his friends threatened violence and rape. The women reported it to the police, and he became one of the first people to be punished by the law for trolling.

An article in the Guardian stated,

On Friday, he was sentenced to a 12-month good behaviour bond for using a carriage service to menace, harass or offend at the Downing Centre court in Sydney, although the judge found his conduct did not amount to threatening to rape. It has been described as a test of police and legal responses to online abuse, an area that appears to be nearing crisis point.

Hopefully, as technology advances, the act of trolling will become easier and easier to punish.

Virtual Reality Rape?

Before reading the article "A Rape in Cyberspace," I had never heard of LambdaMOO. After a lengthy research session on the culture of LambdaMOO and what exactly it was, I was finally ready to sit down and process the information being given to me. Throughout the entire article, there was one thought going through my head: what the hell am I reading?

After finally digesting the lengthy article, I was able to muster up some thought-provoking questions. Can women be violated via cyberspace? And I don't mean typical revenge porn; I am speaking of the situation described in "A Rape in Cyberspace," in which one character in the virtual world forced another character to perform repulsive, sexually violent acts. Should this be punished?

I think so. And the general consensus in the MOO seemed to be that something should be done. But what? And does this constitute sexual violation? The Guardian thought so, too.

In a 2016 article, journalist Julia Wong reported,

“What’s different about virtual environments is an extra layer of immersion. If you are being groped in the real world versus a virtual world, the visual stimuli do not differ,” she said. “You are seeing it. It is appearing to happen to your own body. Those layers of lifelike experience are going to be more traumatizing in that moment.”

Wong goes on to argue that being assaulted in virtual reality is quite different from being assaulted in a chatroom or via email, because you often create an avatar that resembles you and have a personal connection to that character.

“If you highly identify with your avatar and are portraying yourself in an authentic manner, you’re going to feel violated,” said Jesse Fox, an Ohio State University professor who researches the social implications of virtual worlds.

The psychology of this is not only fascinating but also terrifying. Women are harassed, violated, and raped in real life every day. Have we created yet another platform for women to be mistreated and sexualized? Absolutely.

Apparently, in both virtual reality and real life, an assault is not a criminal act unless there is physical contact. But as technology advances, it is becoming possible for there to be physical harm without any physical contact.

“At the point where people do get actual sensory feedback – like a Matrix-type plug in … something where it’s actually plugged into your brain – that’s where we sort of turn a corner and say that things in virtual reality are much more real than they were before,” he said. “That line is somewhere out there in the technological ether.” Experiments with haptic technology – such as a vest that allows a player to feel when their character gets kicked or punched – might mean that technological ether is closer than we think.

But until we get to the point where we can actually prosecute VR users for sexually violating women (and men), there has to be something else that can be done. The "Rape in Cyberspace" article mentions that the administrators decided to delete the account of the man who raped the other characters – but he simply made another account. Clearly, this symbolic banishment was pointless. What else can be done?

While I am not a programmer and I do not know exactly what is technologically possible, I'm sure there is something. For instance, at this stage in technology, could they ban an entire IP address from holding an account?

Another scenario: Could the creators of the game make a “sexual predator list,” much like what we have in real life?
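As a non-programmer's sketch, both ideas could look something like the following. Every name here is hypothetical, and a real service would have to persist these lists and contend with VPNs and shared addresses, which make IP bans easy to evade:

```python
# An illustrative sketch of both ideas above: an IP-address ban and a
# standing "predator list" of banned users. All names are hypothetical;
# this is not any real game's moderation system.

banned_ips: set = set()
predator_list: set = set()  # usernames banned for sexual violence

def ban_user(username: str, ip_address: str) -> None:
    predator_list.add(username)
    banned_ips.add(ip_address)

def can_register(username: str, ip_address: str) -> bool:
    if ip_address in banned_ips:
        return False  # the whole address is barred from new accounts
    if username in predator_list:
        return False  # a previously banned identity
    return True
```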

These aren't instant solutions to the problem – but they are better than nothing.


The Dangers Of AI

Picture this: in the 1920s, an array of silent films, documentaries, books, and research comes out about the dangers of the newly invented automobile. All of it points to one thing – that someday, the emissions from burning fuel will lead to global warming, which will lead to the end of the world as we know it. The most brilliant inventors of the time warn us to steer clear of burning fossil fuels. Do we listen?

We are currently breaking records for the warmest winter in 146 years. Ice caps are melting. Less rain is falling. We are living through global warming. I am sure that if many scientists had the opportunity, they would go back in time and warn inventors of the harm of fossil fuels. But of course, they can't.

Currently, there is an abundance of fiction and nonfiction films warning us about the dangers of AI. Minds like Bill Gates and Elon Musk are telling us we need to tread carefully. And of course, there are philosophers like Nick Bostrom at Oxford who have devoted their lives to the philosophy, morality, and dangers of artificial intelligence. Yet the average citizen of the Earth is not taking these warnings seriously. Neither are the inventors of AI. Why not?

History tends to repeat itself. The human race has invented many things that could wipe us out – fossil fuel combustion being just one – yet we do not listen to those who warn us. Nick Bostrom states,

“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb,” he concludes. “We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.”

As mentioned earlier, Nick Bostrom is not the only mastermind with reservations and cautions about this new technology. Elon Musk said in an interview,

“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

With such ominous warnings, why are we, the human race, playing with fire? I think the idea of some sort of regulatory oversight is not only fascinating but good. While I would not know how to implement it, or even really what it would entail (as I am neither an inventor nor a lawmaker), I think it's the right place to start. If we had had laws in place when we first began burning fossil fuels, things might be much different.

Sharing Economy Issues

Five years ago, I had not even heard of sharing economy services such as Uber and AirBnb. In 2017, both services are so well known and widely used that consumers do not think twice about them. Most of my friends, family, and coworkers have not thought through the privacy issues these companies raise. Yet when you look through their privacy policies, it is truly concerning. Users input their social media accounts, home addresses, phone numbers, government-issued IDs, and more into these companies' databases without ever considering what the companies do with that information.

In the paper "The Intersection of Trust and Privacy in the Sharing Economy," the authors delve into the many privacy issues raised by the new sharing economy. Beyond privacy, the sharing economy brings other concerns as well: many of its tactics for rating users and verifying identity are seen by the public as unfair.

For instance, they cite one woman who wanted to sign up for AirBnb but did not have enough Facebook friends to do so. On its face, this seems unfair: just because this woman is not active on social media or does not accept every friend request she receives, she cannot be approved for the service. The company then asked her to send a video of herself instead, but she did not have the means to do that either. Yet if a company does not use social media, how else can it verify its users? This is where it gets complicated. Do they ask for your social security number? Your birth certificate? And does giving a company that information leave you vulnerable?
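As a sketch of the kind of layered verification being described – the friend-count threshold and every field name below are my own assumptions drawn from the anecdote, not AirBnb's actual policy – it might look like this:

```python
# A sketch of layered identity verification as described above. The
# 100-friend threshold and every field here are assumptions drawn from
# the anecdote, not AirBnb's actual policy or Jumio's API.

from dataclasses import dataclass
from typing import Optional

MIN_FACEBOOK_FRIENDS = 100  # assumed cutoff

@dataclass
class Applicant:
    facebook_friends: Optional[int] = None  # None if no linked account
    submitted_video: bool = False
    submitted_government_id: bool = False

def is_verified(applicant: Applicant) -> bool:
    # Any one signal suffices; the woman in the example had none.
    if (applicant.facebook_friends or 0) >= MIN_FACEBOOK_FRIENDS:
        return True
    return applicant.submitted_video or applicant.submitted_government_id
```

Even this toy version shows the problem: every fallback signal just shifts the burden to a different document the user may not have, or may not want to hand over.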

Recently, AirBnb has begun to ask for a driver's license or other government-issued ID for verification, partnering with another company, Jumio, to enact this protocol. A writer for the Huffington Post took issue with it.

According to Jumio’s policy, “Jumio may use the Personally-Identifying Information a user submits for any purposes related to Jumio’s business, including, but not limited to providing you with Jumio’s services and personalizing and improving those services for you.”

Kris Constable, the HuffPost writer, did not take this lightly. He writes,

What? If selling my passport photo and information generates revenue for Jumio, that is an acceptable usage for them? There are several other obvious concerns. Basically, if Jumio feels like it, they can provide your information to anyone they consider a partner or contractor.

Ride-sharing companies such as Uber have privacy issues as well. According to "The Intersection of Trust and Privacy in the Sharing Economy," Uber employees were able to track customers in real time through an internal tool known as "God View." In "Uber's Privacy Woes Should Serve As a Cautionary Tale for All Companies," published by Wired, the author touches on the privacy issues facing Uber, arguing that the God View feature, along with other data collected by Uber, poses a real threat. The author states,

It also invites the possibility of both cyber and physical stalking, abuse of credit card data, and corporate and government espionage based on the physical movements of riders. Not to mention the basic fact that the more people who have access to this data, the more likely it is to be leaked publicly or misused in another manner.

While these sharing economy services are becoming more common and more widely used, it is important that we not lose sight of our rights as customers. We often hop on the bandwagon and sign away our right to privacy as users of these apps and services. Hopefully, as this information spreads, more consumers will be inclined to read the next privacy policy put in front of them.