Will artificial intelligence learn morals?

Opinions expressed by Entrepreneur contributors are their own.

In 2002, I waited over ten minutes to download a single song over a 56k dial-up modem. Audio cassettes were still very much in vogue. Fast-forward to 2022, and you can tell your phone or car to play your favorite tracks with your voice. Your favorite streaming music service logs you in automatically and surfaces music and artists to fit the mood, time, or occasion. You can automate almost every electrical system in your home to run on your schedule (remind you to shop, turn on the lights when you come in, and so on).

In a relatively short span of two decades, we have gone from waiting for technology to respond, to machines and systems waiting for our next command. Whether we like it or are aware of it, artificial intelligence and automation are already playing an important role in our lives.

Related: Get ready to learn how AI will change the way we work in 2022 and beyond

AI is in the early stages

Technology is slowly approaching a level of intelligence that can anticipate our needs. We are in the golden age of AI, and yet we have only just begun to see its applications. The next steps move from routine tasks to deeper, more abstract processes. For example, if you habitually drink coffee every morning, it is easy for an AI to learn your routine. But right now, it can’t begin to guess what you’re thinking about over that coffee. The next step in AI’s evolution could be your Google Home or Amazon Alexa knowing that your day is about to start and delivering your schedule unprompted.
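
To make the coffee example concrete, here is a minimal sketch of how a system could learn a recurring habit from timestamped events and decide when an action is “due.” The event log, action names, and tolerance are illustrative assumptions, not any vendor’s actual API:

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean, pstdev

# Illustrative event log a device might accumulate: (timestamp, action).
events = [
    (datetime(2022, 3, 1, 7, 2), "brew_coffee"),
    (datetime(2022, 3, 2, 7, 8), "brew_coffee"),
    (datetime(2022, 3, 3, 7, 5), "brew_coffee"),
    (datetime(2022, 3, 4, 7, 10), "brew_coffee"),
    (datetime(2022, 3, 5, 7, 0), "brew_coffee"),
]

def learn_routines(events):
    """Summarize when each action usually happens (minutes past midnight)."""
    by_action = defaultdict(list)
    for ts, action in events:
        by_action[action].append(ts.hour * 60 + ts.minute)
    return {a: (mean(m), pstdev(m)) for a, m in by_action.items()}

def is_due(routines, action, now, tolerance=2.0):
    """Treat an action as expected when 'now' falls near its usual time."""
    avg, spread = routines[action]
    minutes_now = now.hour * 60 + now.minute
    return abs(minutes_now - avg) <= tolerance * max(spread, 5)  # 5-minute floor

routines = learn_routines(events)
print(is_due(routines, "brew_coffee", datetime(2022, 3, 6, 7, 4)))  # True
```

Anticipating what you are thinking over that coffee is a different problem entirely, which is exactly the gap described here.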

AI is starting to switch gears from performing repetitive tasks to making higher-order decisions. We have only just begun to see its capabilities. In the next five to 10 years, AI will likely touch every facet of our lives. While we’re more than happy to let AI work for us, what happens when we start outsourcing complex thinking and decision-making? Our decisions rest on conscience, empathy and the capacity to take a moral stand. When we let machines do our thinking for us, do we also burden them with the complex web of human morality?

Related: Beware of these 5 AI problems in HR

Who decides which morality is correct?

Mimicking human decision-making is not simply a matter of logic or technology. Over centuries of civilization, we have developed genuinely complex moral and ethical codes, informed as much by social norms as by education, culture and, to a large extent, religion. The problem is that morality remains a nebulous notion with no agreed-upon universals.

What is perceived as moral in one society or religion might strike at the heart of all that is held right in another. The answer may vary depending on the context and on who makes the decision. When we can barely keep our own cognitive biases in check, how do we chart a path for machines to avoid data bias? There is mounting evidence that some of our technologies are already as flawed as we are: facial recognition systems and digital assistants show signs of discrimination against women and people of color.
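
One common way such bias is surfaced is by comparing a model’s error rates across demographic groups. The sketch below uses fabricated audit records purely for illustration; no real system or dataset is implied:

```python
from collections import defaultdict

# Hypothetical audit records: (group, true_label, predicted_label).
# In a real audit these would come from a labeled evaluation set.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 1),
]

def error_rates_by_group(records):
    """Compute false positive and false negative rates per group."""
    stats = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, truth, pred in records:
        s = stats[group]
        if truth == 1:
            s["pos"] += 1
            s["fn"] += pred == 0
        else:
            s["neg"] += 1
            s["fp"] += pred == 1
    return {
        g: {"fpr": s["fp"] / max(s["neg"], 1), "fnr": s["fn"] / max(s["pos"], 1)}
        for g, s in stats.items()
    }

for group, rates in error_rates_by_group(records).items():
    print(group, rates)
# A large gap between groups (as with group_b here) is the kind of
# disparity that has been reported for some facial recognition systems.
```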

The most likely scenario is that AI follows the rules of morality prescribed by defined groups or societies. Imagine buying a basic code of ethics and upgrading it with morality packs depending on your inclinations. If you are a Christian, the morality pack would follow the standard Christian code of ethics (or as close to it as possible). In this hypothetical scenario, we still control the moral principles the machine will follow. The problem arises when that decision is made by someone else. Imagine the implications of an authoritarian government enforcing its version of morality on a tightly controlled citizenry. Even the debate over who could make such a call would have far-reaching implications.
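
The “morality pack” idea can be pictured as a pluggable rule set layered over a base policy. The sketch below is purely hypothetical; the pack names and rules are invented placeholders, not a real ethics framework:

```python
# Toy sketch of the hypothetical "morality pack" idea: a base code of
# ethics that add-on packs can extend or override. All rules are invented.

BASE_ETHICS = {
    "deceive_user": "forbidden",
    "share_private_data": "forbidden",
    "interrupt_user": "discouraged",
}

MORALITY_PACKS = {
    "strict_privacy": {"record_audio": "forbidden"},
    "assistive": {"interrupt_user": "allowed"},  # overrides the base rule
}

def build_policy(base, pack_names):
    """Layer selected packs over the base ethics; later packs win conflicts."""
    policy = dict(base)
    for name in pack_names:
        policy.update(MORALITY_PACKS[name])
    return policy

def is_permitted(policy, action):
    return policy.get(action, "forbidden") != "forbidden"  # default-deny

policy = build_policy(BASE_ETHICS, ["strict_privacy", "assistive"])
print(is_permitted(policy, "interrupt_user"))  # True: the pack relaxed it
print(is_permitted(policy, "record_audio"))    # False
```

Note the default-deny stance: unknown actions stay forbidden unless a rule permits them. Who gets to author and install the packs is precisely the governance question raised above.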

Related: Can Bedtime Stories Help Us Avoid the Robot Apocalypse?

What could a moral AI mean for the future?

Applications of a moral AI could defy belief. For example, instead of today’s overcrowded prisons, AI could make rehabilitating criminals a real possibility. Could we dare to dream of a future where we could rewrite a criminal’s morals with a chip and prevent murder? Would that be a blessing for society or an ethical nightmare? Could it mirror the movie “The Last Days of American Crime”? Even minor applications, such as integrated continuous glucose monitoring (iCGM) systems in wearable devices that optimize diet and lifestyle, can have a long-term impact on our society and well-being.
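
The iCGM example is the most tractable of these: much of such a wearable’s value comes from simple analytics over a glucose stream. A minimal sketch, assuming the commonly cited 70–180 mg/dL time-in-range target and fabricated readings:

```python
# Minimal sketch of the kind of analytics an iCGM wearable might run:
# the fraction of readings inside the commonly cited 70-180 mg/dL target
# range. The readings below are fabricated for illustration.

READINGS_MG_DL = [95, 110, 150, 185, 200, 160, 130, 105, 90, 75]

def time_in_range(readings, low=70, high=180):
    """Return the fraction of glucose readings within [low, high] mg/dL."""
    in_range = sum(low <= r <= high for r in readings)
    return in_range / len(readings)

tir = time_in_range(READINGS_MG_DL)
print(f"Time in range: {tir:.0%}")  # 80% here; alerts could fire on excursions
```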

As complicated as morality in AI is, it’s worth remembering that humans are a tenacious race. We tend to get a lot of things wrong in the first draft. As Shakespeare put it, “By indirections find directions out.” In other words, we keep working the problem until we find our way.

Almost all of our current technological advances seemed impossible at some point in history. It will probably take decades of trial and error, but we’ve already started on the first draft with projects like Delphi. Even a first iteration of an AI that attempts to be ethically informed, socially circumspect and culturally inclusive gives us reason for hope. Perhaps technology can finally hand us the treasure map to the idyllic moral future we have collectively dreamed of for centuries.

Related: Ethical considerations of digital transformation
