Researchers at Stevens Attempt to Answer: Is AI a Friend or Foe?

by Garrett Rutledge
Stevens TechPulse Report

Artificial intelligence (AI) has become deeply intertwined with the everyday functioning of society, whether we fully realize it or not. AI is not new, per se, but its availability and functionality have skyrocketed in recent years. The potential benefits of AI are limitless and transformational for humanity; the potential harms are destructive and alarming. Given this natural polarization, researchers at Stevens Institute of Technology in Hoboken wanted to understand how the general population perceived AI. So they commissioned a national survey, the Stevens TechPulse Report, to grasp how Americans currently feel about artificial intelligence. In the following, we’ll look at what the survey found and dive deeper into the question: is artificial intelligence a friend or foe?

Stevens TechPulse Report Methodology & Purpose

Hayden Hall on Stevens Institute of Technology Campus

On behalf of Stevens Institute of Technology, the Stevens TechPulse Report: A Perspective on Americans’ Attitudes Toward Artificial Intelligence was a national poll conducted by Morning Consult, a privately held data intelligence firm. The report surveyed a sample of 2,200 adults online between September 8 and 10, 2021. According to Stevens, the results carry a margin of error of plus or minus two percentage points.
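As a quick sanity check (and assuming a simple random sample at the 95 percent confidence level, details the report summary doesn’t spell out), that figure lines up with the standard margin-of-error calculation for 2,200 respondents:

\[
\mathrm{MOE} \approx 1.96 \times \sqrt{\frac{0.5 \times 0.5}{2200}} \approx 0.021 \approx 2\ \text{percentage points}
\]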

The purpose of the survey was to gain a holistic understanding of Americans’ perspectives on AI, both in general and on specific areas and issues. The poll covered the pros and cons of AI, preferences, positive and negative consequences, AI in healthcare and financial services, and much more. Responses to some questions were further segmented by demographic factors such as age group, occupation, political affiliation, gender, and ethnicity.

The study was extensive, and its results are thought-provoking, to say the least. All in all, it gives us an in-depth look at the public’s perceptions of artificial intelligence, its pros and cons, and the impact it’s expected to have on humanity.

Results 

Entrance gate to Stevens Institute of Technology

For this article, we’ll focus on a few of the critical, broader questions. The responses broken down by demographics are undoubtedly insightful and worth a look in your spare time, but they’re a little too specific for this piece. In the following, we’ll look at the most critical questions and their responses, then tackle what they mean, so bear with me through this statistical review.

“Generally speaking, do you think the perceived positives of greater AI adoption in everyday life outweigh the perceived negatives?” 

Seventeen percent of respondents said the positives of AI definitely outweigh the negatives, compared with just 12 percent who said the negatives definitely outweigh the positives. Another 31 percent responded that the positives somewhat outweigh the negatives, whereas 17 percent answered that the negatives somewhat outweigh the positives. The remaining 23 percent said they don’t know or have no opinion. In total, 48 percent of respondents believe the positives of AI outweigh the negatives versus just 29 percent who take the opposite view. However, 52 percent leaned toward the negative or the uncertain, which leaves the results open to interpretation.

“In your opinion, how well regulated is artificial intelligence at the moment?”

Yet again, we saw a healthy spectrum of responses to this question. Thirteen percent of respondents believe AI is very well regulated, whereas the same percentage believe it is not well regulated at all. Twenty-four percent responded that AI is somewhat well regulated, while an equal share said it is not very well regulated. That leaves an even split of 37 percent each for those who approve and disapprove of current regulation, while 26 percent say they don’t know or have no opinion. That last result is a common theme in this survey, which we’ll cover in more detail later on.

“How concerning, if at all, are each of the following potential negative consequences of greater AI adoption in everyday life?”

Here, we have perhaps the most critical question in the entire survey. It listed roughly 20 potential negative consequences and asked respondents to rate each as very concerning, somewhat concerning, not too concerning, not concerning at all, or don’t know/no opinion.

Rather than explore every answer percentage for each potential consequence, we’ll look at the aggregate level of concern by adding up those who selected very concerning or somewhat concerning for a given outcome. Additionally, we’ll focus on a select number of particularly significant consequences.

The four consequences with the highest concern levels illuminate the core worries regarding AI: 74 percent are concerned AI will lead to a loss of privacy, 72 percent are worried about irresponsible use of AI by countries and businesses, and 71 percent are concerned about both reduced human connectedness and reduced employment opportunities. There were also high majority concern levels for several other potential consequences that are perhaps less weighty, such as reduced creativity and a lack of understanding of how AI works.

All but two possible outcomes saw majority concern levels between 57 and 74 percent, including a few critical ones such as AI increasing global conflicts (57 percent), becoming uncontrollable (67 percent), and gaining consciousness (57 percent). The only consequences below the 50 percent mark were racial/ethnic bias in AI (47 percent) and gender bias in AI (39 percent), though those are still fairly high levels of concern for side effects of such magnitude. For most consequences, very concerning was distinctly the most selected response.

“In your opinion, how likely or unlikely is it that each of the following will happen with AI?”

Similar to the question above, this one gave a list of outcomes stemming from AI and asked respondents to rate how likely each is to occur: very likely, somewhat likely, somewhat unlikely, very unlikely, or don’t know/no opinion. Most of these outcomes are negative, although a few fall on the positive side of the spectrum. For review, we’ll follow the same approach as before and look at the aggregate likelihood of each outcome.

The outcome respondents rated most likely was misuse of the technology: 71 percent considered abuse by governments likely, 69 percent by individuals, and 66 percent by businesses. Privacy was also near the top again, with 65 percent believing it likely that private companies will use AI to listen to people’s conversations.

The worry about AI becoming a force beyond human control surfaced again: 52 percent believe it likely that AI will become smarter than humans, and 51 percent that humans won’t be able to control it. Only one of the negative consequences listed fell below 50 percent in its likelihood of occurring: AI contributing to human rights abuses, at 47 percent.

Neither of the two positive outcomes scored above 50 percent either: 46 percent responded that AI will likely improve quality of life, and 40 percent that it will likely help solve world problems. We saw the same pattern as above, with very likely the most selected option and very unlikely the least selected.

“How likely or unlikely is it that each of the following will be potential positive outcomes of greater AI adoption in everyday life?”

Finally, we get to the better side of artificial intelligence. This question followed the same approach as the previous one, but each outcome was positive, and we again see clear-cut majorities in terms of likelihood. Between 70 and 72 percent of respondents said that each of the following was likely to occur as a result of AI: the ability to handle repetitive tasks, 24/7 availability, smarter technology, reduced risk to humans in dangerous jobs, more widespread use of technology, and improved technology efficiency.

The only outcome not above 50 percent was increased economic prosperity, at 49 percent. The likelihood majorities were roughly the same as for the negative consequences, if slightly higher. Notably, however, the don’t know responses dropped across the board for these positive outcomes.

Takeaways

You can draw a lot from these responses, and even more if you look at the study in full. But for now, let’s examine the most critical and apparent takeaways, those least rooted in subjectivity:

There’s Heavy Conflict Over AI

Photo by Charl Folscher

Based on these survey results, it’s undeniable that people are torn when it comes to artificial intelligence. The conflict was immediately evident in the first pros-and-cons question: 48 percent believed the positives outweigh the negatives, while 52 percent either thought the negatives carry more weight or didn’t know. Plus, only 29 percent in total gave definitive answers one way or the other. No matter how you slice it, that question demonstrates a lack of certainty about the prospects of AI.

The conflict was also demonstrated in the last questions on the likelihood of specific outcomes. Both sides of the spectrum saw comfortable majorities, which shows the apparent dilemma we have with artificial intelligence. For functional, everyday benefits, most people seem to perceive AI as a positive and have experienced proof of its value. Yet when it comes to morality, trust, and societal implications, perceptions of AI swing drastically to the other side. The functional benefits have been proven to the masses, but ethical and social protections need some work.

We Need More Information Sharing and Education

Photo by Anna Hunko

Throughout the survey, the don’t know/no opinion option was consistently among the top choices. That immediately points to a need for more information sharing and education on artificial intelligence. Considering the potential outcomes, good and bad, you’d expect people to hold opinions on these areas, and strong ones at that.

Jason Corso, Brinning Professor of Computer Science and Director of the Stevens Institute for Artificial Intelligence, said, “It’s clear from this research that, while people recognize the positives of AI, they also see much to be wary of—based, to some extent, on misunderstandings of the technology and what could help protect against those negative consequences.”

Additionally, Nariman Farvardin, President of Stevens Institute of Technology, noted, “This survey indicates that there is a significant need for education, well-informed and holistic policy development and ethical leadership in the deployment of rapidly advancing technology throughout industry and society.”

The uncertainty and degree of ignorance don’t necessarily mean the concerns about AI aren’t valid or even likely. But if most experts were equally concerned about these adverse outcomes, AI’s current trajectory would look a bit different. The results show that AI is an emerging technology that society is struggling to wrap its head around. Until we have widespread information sharing and education, it’ll be difficult for society to truly understand what we’re up against.

The Concerns Over AI Are More Consequential Than the Benefits

Regardless of the responses, it’s clear that the concerns about AI and its potential adverse outcomes are far more consequential than the benefits. If they come to fruition, these fears could have dire effects on personal well-being, the functioning of our governing and economic systems, international relations, and even humanity’s ability to stay in control.

The potential and actual benefits of AI, however, seem to be more rooted in our personal experiences with the technology. Most of the benefits have already been proven on the consumer side, whether in work or personal environments. To some extent, the survey’s design plays a role here; it could perhaps have included more positives of a grander scale. But it’s hard to imagine a list of more consequential outcomes than those provided in the concerns questions.

The weight of these concerns, paired with the responses, shows that there are many hurdles in the way of greater progress in and use of artificial intelligence, and not just in one area, but in many, with varying contexts and impacts. One has to wonder: what does this mean for the future of AI? Is there a plateau in its progression, or at least in societally accepted advancement? How and when will these concerns be addressed, and will they ever be fully addressed? It’s hard to say what the next few years will look like for artificial intelligence, but it’s clear we need a better understanding of where we go from here.

The State of AI and Where We Go from Here

The Stevens TechPulse Report does a fantastic job of illuminating where we stand with artificial intelligence and how the technology is perceived. But what do experts say about AI? It’s worth understanding the state of the technology from their point of view and what they expect to come next.

Conflict Is Widespread Among Everyday Citizens and Experts Alike

Photo by Product School

It’s not just us regular joes who feel torn about the prospects of AI; the subject matter experts are no different. You may remember that in 2015, some of the biggest names in tech and science, including Stephen Hawking, Elon Musk, and Steve Wozniak, made headlines when they signed an open letter calling for a ban on offensive autonomous weapons. Their focus in the letter was on the dangers of such AI-driven weapons, but even so, it sparked notable concern about AI. Keep in mind, however, that these individuals also reap benefits from AI in other applications, whether through their companies or through organizations they’re associated with.

In a Pew Research study on the ethical hurdles of AI, many experts were asked to give their thoughts on the topic. Glenn Edens, a professor at the Thunderbird School of Global Management at Arizona State University, sums up the conflict with AI:

“The promise: AI and ML could create a world that is more efficient, wasting less energy or resources providing health care, education, entertainment, food and shelter to more people at lower costs. The concerns: AI and its cousin ML are still in their infancy – and while the technology progress is somewhat predictable, the actual human consequences are murky.” 

As a note, “ML” refers to machine learning, a critical element or “cousin,” if you will, of artificial intelligence. 

There’s an Immense Need for Ethical AI Design, But It’s Complicated

Photo by Alex Knight

The study from Pew Research is an excellent resource to explore the need for foundational ethical AI design and the challenges in doing so. Here are some of the key takeaways:

Most experts doubt ethical AI will be the norm within 10 years. It’s a significant concern in tech and academia in particular, and formulating acceptable ethical standards and protections is the biggest roadblock to AI’s progression. Stephen Downes, a senior research officer for digital technologies with the National Research Council of Canada, paints the dilemma at hand:

“Modern AI is based on applying mathematical functions on large collections of data. This type of processing is not easily shaped by ethical principles; there aren’t ‘good’ or ‘evil’ mathematical functions, and the biases and prejudices in the data are not easily identified nor prevented.”

That’s just the beginning of this layered ethics problem. As many experts pointed out, ethics are extremely difficult to define, implement, and enforce in general, let alone in a technology like AI. One must consider all scenarios, contexts, actors involved, dynamic social norms, and more. Who will decide who designs such systems? What about implementation and enforcement, nationally and internationally? And since AI is already out there en masse, it’s going to be challenging to eliminate exploitation.

Ethical AI design is about as complex an issue as society can face. The good part is that AI’s behavior and the ethical considerations around it are broadly reflective of human standards. After all, AIs don’t just grow out of the ground; we still have control and influence here, despite how it may seem. We can therefore expect this topic to become an increasingly critical one in the years to come. It could even be a defining area of debate and philosophy in this decade.

We Need More Emphasis on the Positives of AI 

The positive outcomes of AI are, and can be, more than just what we saw in the Stevens TechPulse Report. We must understand these benefits better and, in particular, educate people about the worries, so that our view of AI is comprehensive as we move forward. The benefits of AI are genuinely limitless. Already, AI is transforming supply chains, the medical field, worker productivity, small businesses, weather and climate forecasting, and disaster response, to name a few. If you’re concerned about the future of AI, I strongly suggest you watch the following video; it may change your perspective.

Addressing the Concerns

As for the chief concerns, many factors often aren’t being considered. With job loss, for example, the World Economic Forum estimates that 85 million jobs will be displaced while 97 million new jobs will be created across 26 countries by 2025. Specific job categories will shrink, but, as with past waves of automation, new job categories and needs will be created simultaneously.

As for AI’s superintelligent capabilities, it’s important to note that AI systems will be highly adept at accomplishing their goals. Thus, fundamental issues will arise if those goals are not aligned with ours. Elon Musk summed it up: “We have to figure out some way to ensure that advent of digital superintelligence is one which is symbiotic with humanity.” 

The video above also dives into this concern of an AI-dominated world. The political philosopher Michael Sandel poses a thought-provoking question: “Can smart machines outthink us, or are certain elements of human judgment indispensable in deciding some of the most important things in life?”

With the top concerns about AI, the other side of the coin often doesn’t get enough attention. These outcomes are far from certainties and are rooted more in human and societal error than in a dangerous, rogue technology. With education and exceptional ethical design, the hope is that we can avoid many of these potential problem areas.

Parting Words

Regardless of your feelings toward AI, it’s essential to understand that this technology isn’t going anywhere. IDC, a technology research firm, predicts worldwide spending on AI will hit $110 billion by 2024. It is, and will continue to be, increasingly vital that we have our priorities in order. It’s clear from the Stevens TechPulse Report that polarization is the defining characteristic of AI as it stands. But as we move forward, ethics, education, discourse, and transparency are paramount. With these elements, we may see an unprecedented transformation of humanity in a way that benefits everyone. The possibilities are truly endless with artificial intelligence, and that’s what makes it both exhilarating and terrifying.

What do you think about AI? Let us know in the comments below. 

Photo by Possessed Photography

About the Author/s

Garrett is a writer at The Digest. He currently lives in Astoria, NY, and loves writing about topics that make readers think. His passions include film, sports, traveling, and culture.
