Our Ethical Responsibility in the Growth of AI

Abstract

As artificial intelligence (AI) becomes more prevalent in today’s industries, research remains divided on the effect this will have on occupations in the U.S. The slow development of AI technology over the decades has allowed adequate time for analysis and postulation on how the different facets of society can best support a positive integration of AI into the world. AI and automation are increasingly used in the workplace, so governments, businesses, and educational institutions will need to adapt to support future progress. While the predicted outcomes for our future vary, the reports agree that ethics is the key to the beneficial use of AI and to ensuring our success as we transition into a new technological world.

INTRODUCTION

Artificial intelligence (AI) has traversed well beyond science fiction. Every year, AI absorbs more of the services that humans have performed for centuries, such as transportation, customer service, manufacturing, and even leisure. The possibilities that AI can unlock are limited only by the imagination, yet the development of increasingly powerful AI has been a slow process.

This gives us more time to think about which jobs will be rendered obsolete once AI performs services more efficiently than humans could ever hope to match. Industry executives might look forward to the savings of employing intelligent technology instead of humans, who have needs, goals, and insurance policies, but the average employee deserves to know how safe their job is and what outcomes the future may hold. The synthesis of artificial intelligence with the workplace will undoubtedly affect the U.S. economy, but the extent of those effects depends on key ethical decisions. This paper consolidates research from various fields to more accurately anticipate the effects of AI on tomorrow’s workforce and the adaptations that may be necessary to ensure our success.

In this paper, the terms AI and automation are used somewhat interchangeably even though the two concepts do not always overlap. Both are transformational forces in our economy and will have similar effects even where they touch different occupations.

BACKGROUND

The Encyclopædia Britannica defines artificial intelligence as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings” [1]. The intelligence of computers has grown rapidly in recent years due to advances in deep learning, in which computers teach themselves a skill through reward- and error-driven training that simulates the function of neurons in a human brain. These “neural networks” analyze their data environment and form conclusions based on their own experience. While there is disagreement as to how accurately the word “intelligence” describes the current capabilities of computers [2], people are cautioned to keep an open mind to the possibility of human-like intelligence.
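To make the idea of a simulated neuron concrete, the sketch below shows a single “neuron” adjusting its connection strengths in response to an error signal until it reproduces the logical OR function. This is a minimal illustration, not drawn from any system cited here; the task, starting weights, and learning rate are chosen purely for demonstration.

```python
# Minimal, illustrative sketch of one simulated "neuron" learning from an error
# signal. The OR task, starting weights, and learning rate are arbitrary choices
# for demonstration and do not come from any system cited in this paper.

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # inputs -> OR label

weights = [0.0, 0.0]   # connection strengths, adjusted with experience
bias = 0.0
learning_rate = 0.1

for epoch in range(20):                     # repeated passes over the examples
    for (x1, x2), target in examples:
        output = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
        error = target - output             # zero when the neuron answered correctly
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print(weights, bias)  # after training, the neuron reproduces OR for all four inputs
```

Deep learning stacks many such units into layers and trains millions of weights at once, but the core loop of adjusting connection strengths in response to feedback is the same.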

Brief History of AI

The first artificial neural network was created in 1951 by Marvin Minsky – co-founder of the Massachusetts Institute of Technology’s (MIT) AI Laboratory – and Dean Edmonds; it simulated a rat solving a maze, with a reward signal supplied when it found the correct path. That same year, programs capable of playing chess and checkers were designed. In 1957, Frank Rosenblatt designed the perceptron, the first artificial neural network algorithm capable of supervised learning. AI development was promising but slow to deliver on its promises given the technology available. The publication of the infamous ALPAC report in 1966 [3] was the first of several events that prompted a drastic reduction in AI research funding; the report concluded that computational linguistics – the focus of AI research at the time – was limited by the state of the computers then available. Ray Solomonoff proved algorithmic probability in 1968 – later used as a mathematical basis for how deep learning operates – but did not publish the results for ten years due to the lack of interest in the topic.

The general lack of interest in AI development continued well into the 1990s. Despite the reduced activity, progress continued, and in a 1988 letter to members of the Association for the Advancement of AI, its president Raj Reddy wrote, “And the mythical AI winter may have turned into an AI spring. I see many flowers blooming” [4]. This “winter” – likened to hype-induced crashes like the dot-com bubble or the more recent cryptocurrency crash – began to turn around. In 1991, the AI-based Dynamic Analysis and Replanning Tool (DART) was deployed in Operation Desert Storm and reportedly paid back 30 years of the government research agency DARPA’s investment in AI projects within its first year of use [5]. New milestones started to appear. In 1997, IBM’s Deep Blue supercomputer defeated reigning world chess champion Garry Kasparov. In 2011, Apple released Siri, the first AI-based digital assistant. In 2015, Google DeepMind’s AlphaGo defeated European Go champion Fan Hui, a result seen as a major milestone for deep learning, as Go is significantly more complex than chess [6]. Also in 2015, image recognition surpassed human performance [7]. Finally, as of 2018 there is significant progress with self-driving cars from Uber, Tesla, and Google.

Predictions

Vernor Vinge – professor of computer science, science fiction author, and popularizer of the concept of the technological singularity – wrote in a 1993 paper, “I believe that the creation of greater than human intelligence will occur during the next thirty years…. I'll be surprised if this event occurs before 2005 or after 2030” [8]. Now, over halfway through this window, that milestone has not been achieved, as evidenced by the lack of a clear victor of the “Turing test” – first proposed in 1950 by Alan Turing, a forerunner in computer science and cryptanalysis – in which an interviewer conducting a blind interview of a computer and a person can be fooled about which is the human. There is no formal Turing test, although contests like the Loebner Prize attempt to carry the torch. Even participants polled at the Dartmouth College AI Conference: The Next 50 Years (known as AI@50) – some of whom had attended the original conference in 1956 – responded pessimistically when asked when computers would be able to simulate every aspect of human intelligence: “41%... said ‘More than 50 years,’ and 41% said ‘Never’” [9]. As time passes, the hype has died down and predictions about the growth of artificial intelligence have become more pragmatic.

CURRENT STATUS OF AI AND THE ECONOMY

Robotic automation in the United States has increased steadily for the last several years, driven by a push to strengthen American industries’ performance in the marketplace [10]. As automation increases, the human role in the production process has shifted from direct involvement in every step to supervision, and, as with human supervision, the supervisors are always outnumbered by the workers.

An often-cited Oxford study that examined 702 U.S. occupations concluded that 47% of U.S. employment is at high risk of computerization [11]. The study also notes that while the Industrial Revolution created a boom in unskilled labor, the jobs that grew up around the introduction of electricity demanded skilled labor, establishing a link between education and job growth. In fact, the study found a modern correlation as well: the more education an occupation requires, the less susceptible it is to computerization [11]. As AI capabilities grow, more skilled workers will be needed, possibly in fields that did not previously exist.

Some economic growing pains of increasingly computerized systems have already been felt. The adoption of cloud-based information and data sharing leaves systems open to cyber-attacks; in 2017, 38% of cyber-attacks against businesses caused at least $1 million in damages [12]. Businesses are growing rapidly and adopting the newest tools available while their weaknesses are still being probed by malicious actors. So far, no AI-enhanced cyber-attacks have been reported, but given the access AI is already granted to our data, including through personal assistants like Siri, Cortana, and Alexa, it is plausible that AI will enable easier access to sensitive information.

PREPARATION FOR THE FUTURE

Currently, preparations for the implementation of AI in the economy have not extended much beyond the discussion phase. Research scientists in the AI field have been cautioned about their responsibility to ensure the public understands the limitations of AI, so that our policies reflect the best interests of both citizens and AI development [13] and guard against impeding progress out of fear [14]. For now, AI must be supervised in its learning process, with human input needed for data labeling, data acquisition, explanation of results, transferring skills between fields, and removing bias from data [15]. These factors reduce its usefulness and are slowing its introduction into businesses worldwide. Even though a large portion of U.S. jobs are computerizable, the rate at which AI researchers unlock new capabilities is the bottleneck on the pace at which our world changes.
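As a small illustration of the data-labeling bottleneck just described, the hypothetical sketch below (assuming the scikit-learn library and invented toy data) shows that a standard supervised model has nothing to learn from until a person has supplied a label for every training example.

```python
# Hypothetical illustration of the human-labeling bottleneck using scikit-learn.
# The "spam vs. legitimate" framing and all numbers are invented for this sketch.
from sklearn.linear_model import LogisticRegression

# Features a system could log automatically (message length, number of links).
X = [[120, 0], [30, 3], [200, 1], [15, 5], [180, 0], [25, 4]]

# Labels (0 = legitimate, 1 = spam) must come from humans reviewing each example;
# without this column, the call to fit() below has nothing to learn from.
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)
print(model.predict([[160, 0], [20, 6]]))  # predictions for new, unlabeled messages
```

Every labeled row represents human effort, which is one reason supervised AI remains slow and costly to roll out across new business domains.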

This slow pace of development and discussion allowed for the formation of the Subcommittee on Machine Learning and Artificial Intelligence, which keeps the U.S. government better informed as the field changes so it can better serve the nation. The subcommittee’s 2016 report [16] is generally favorable toward AI and the advances it can bring to the U.S., is well informed about AI’s shortcomings and the growth that will be necessary before AI can be entrusted with safety-critical applications, and notes that researchers and practitioners in the field must be open to government regulation in the nation’s best interests. One of the main public concerns is how employment will be affected when technology displaces human workers; one possible outcome is technological unemployment. In 1930, the famed economist John Maynard Keynes defined technological unemployment as the “means of economising the use of labour outrunning the pace at which we can find new uses for labour” [17]. Keynes saw this as a temporary condition that would be corrected because humankind controls the outcome, even if reactive measures win out over proactive ones.

Some companies, like AT&T, are retraining their workforce to maintain their competitive edge and prepare for technological advances. The effects of this move could benefit more than just the relevance of its products. “According to the company, employees that are currently retraining are two times more likely to be hired into one of these newer, mission-critical jobs and four times more likely to make a career advancement” [18]. This resonates with the conclusion of Gallup’s 2017 State of the American Workplace report that our current style of management is broken and should shift toward a “coaching culture” to help correct reliably low employee engagement, which currently sits at 33% [19]. AT&T is actively adjusting its business and employment model to proactively meet the challenges brought on by technological advances. Conversely, Foxconn, which assembles phones for Apple, recently laid off 60,000 employees at its factory in Kunshan, China – over half its workforce there – due to automation [20]. This is an example of the capabilities of automation triumphing over ethical management.

FUTURE POSSIBILITIES

While not every citizen will follow the technical changes the world is undergoing, the effects of those changes will be felt by all, regardless of how slow the process is. Higher education is already changing through teacherbots and online courses [21], which have the potential to make education cheaper and more widely available. The two most extreme outcomes posited are a technological singularity and massive layoffs. While the possibility of a singularity – in which AI operates at a higher intelligence than humanity and reshapes civilization according to its own agenda – seems unlikely, advances in biological understanding and microprocessor design [22] support the view that we cannot predict what will happen next. Massive unemployment due to a technological upset could occur if companies do not, as AT&T has, take the necessary measures to protect their employees. These concerns drove the Organization for Economic Co-operation and Development to establish an expert group on AI and launch a policy observatory to ensure that AI is used for the benefit of the world [23], while the United Nations has convened the AI for Good Global Summit to advance its goals for peace and prosperity worldwide [24].

Andrew Ng, founder of Google Brain, cautions, “Countries with more sensible AI policies will advance more rapidly, and those with poorly thought out policies will risk being left behind” [7]. One policy rapidly gaining attention is a “basic income,” a periodic stipend distributed to citizens regardless of income or work status, allowing people to survive as computerization displaces more human workers; several billionaires now describe it as a necessity of the future [25]. Ideas like these could allow a nation to minimize the economic effects on each household. Some cities, including Chicago [26], are starting to develop basic income programs with the intent of launching test runs. Current polling indicates mixed sentiment toward basic income [27], which turns negative with the stipulation that it be paid for by taxes [28]; Americans also hold a negative view of how AI will affect the economy as a whole, yet remain optimistic when thinking only about the effects on their own jobs [27]. Investor Glenn Luk reminds us that if 90% of our jobs were automated, we would be right where we are today: the farming, equine transportation, and railroad industries have been so transformed that they employ only a small percentage of the workforce they did 100 years ago [29], and the formation of new jobs and fields offset those changes.
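To give a sense of why the funding question dominates the basic income debate, here is a back-of-envelope calculation; the stipend and population figures are hypothetical, chosen only to show the order of magnitude involved rather than taken from any proposal cited above.

```python
# Back-of-envelope sketch of the gross cost of a hypothetical U.S. basic income.
# Both figures below are assumptions for illustration, not proposals from the text.
monthly_stipend = 1_000            # hypothetical dollars per adult per month
eligible_adults = 250_000_000      # rough order of magnitude for U.S. adults

annual_cost = monthly_stipend * 12 * eligible_adults
print(f"${annual_cost / 1e12:.1f} trillion per year")   # about $3 trillion annually
```

Scale of this kind helps explain why public support drops once taxation is mentioned as the funding mechanism.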

ETHICS

It is difficult to research AI without finding a call from authors to keep ethics at the forefront of every decision, whether in a government report or a university paper. The 2016 federal report states that policy-makers need to involve technical experts, that the Department of Transportation should continue preparing for automated vehicles, and that universities should include AI ethics in the curricula of related programs [16]. MIT announced the new Stephen A. Schwarzman College of Computing, a $1 billion initiative to address the opportunities and challenges of AI, including bringing AI into related fields and fostering a strong conversation about ethics [30]. MIT has long been a key player in the growth of AI and continues to be proactive across the field, including in ethics. Google DeepMind has formed its own ethics board, and Google recently dropped a U.S. Department of Defense contract after employee protests that their research could be used to support drone attacks [31]. Wright and Schultz [32] published an ethical framework for integrating AI and automation into the workplace and recommend embracing regulation and ensuring that the interests of stakeholders – i.e., “customers, employees, governments, and competitors” – are preserved. Furthermore, Siau and Wang encourage those involved with AI to do everything in their power to build public trust in this new technology [33], which should become easier as the ratio of positive to negative media articles grows [7].

CONCLUSION

Predictions range from humanity living an easier life as AI bears more of our workload to mass unemployment due to greed, improper preparation, or a rampant singularity, but we must not forget that humans are creating AI and we have the ability to pull the plug. Currently, stakeholders in government, business, and education embrace an ethical approach to AI in the U.S. economy and are encouraging the rest of the world to do the same. Decades of analysis and caution have painted a clear picture of everyone’s ethical responsibility, regardless of how rapidly the technology advances. How we handle change is more important than the change itself; continuing this ethical path will allow for the smoothest transition into a new technological world. 
