Artificial Intelligence vs. Human-Centered Intelligence for the Next Millennium – Interfacing Human Factors – II

#AI #DigitalTransformation #Intelligence #Automation #MachineLearning

Dr. Pranjal Kumar Phukan

Dr. Pranjal Kumar Phukan is a nationally and internationally acclaimed supply chain professional with over 23 years of experience. He currently works at Brahmaputra Cracker and Polymer Limited and is passionate about entrepreneurship development, industry–institute collaboration, and supporting ideation projects.

Understanding Humans:
In due course, most AI systems that come into contact with humans will need to understand how humans behave and what they want; this makes them both more useful and safer to use. There are at least two ways in which understanding humans can benefit intelligent systems. First, an intelligent system must infer what a person wants, which is why humans design AI systems that take instructions and goals from people. However, people do not always say exactly what they mean, and a system that misunderstands a person's intent can lead to perceived failure. Second, going beyond simply failing to understand human speech or written language, we must consider the fact that even perfectly understood instructions can lead to failure if part of the instructions or goals is unstated or implicit.

Common-sense goal failures occur when an intelligent agent does not achieve the desired result because part of the goal, or part of how the goal should have been achieved, is left unstated (this is also referred to as a corrupted goal or corrupted reward; Everitt, Krakovna, Orseau, Hutter, & Legg, 2017). The primary reason this happens is that humans are used to communicating with other humans who share common knowledge about how the world works and how to do things.

It is easy to overlook that computers do not share this common knowledge and can take specifications literally. The failure is then not the fault of the AI system but of the human operator. Moreover, it is trivial to set up common-sense failures in robotics and autonomous agents. Consider, hypothetically, asking a robot to go to a pharmacy and pick up a prescription drug; because the human is ill, he or she would like the robot to return as quickly as possible. If the robot goes directly to the pharmacy, goes behind the counter, grabs the drug, and returns home, it will have succeeded while minimizing execution time and resources (money). It will also have robbed the pharmacy, because it did not participate in the social construct of exchanging money for the product.
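To make the pharmacy example concrete, here is a toy sketch in Python. Everything in it (the plans, the costs, the penalty value) is a hypothetical illustration of how an objective taken literally selects the norm-violating plan until the unstated social norm is encoded explicitly.

```python
# Toy illustration of a common-sense goal failure: an agent told to
# "get the drug as quickly and cheaply as possible" takes that objective
# literally unless the unstated social norm is made explicit.

plans = [
    # (description, minutes, money, violates_social_norm)
    ("walk in, grab the drug, leave",    10,  0, True),   # robbery
    ("queue, pay the pharmacist, leave", 18, 25, False),  # expected behavior
]

def literal_cost(minutes, money, violates_norm):
    """The objective exactly as stated: minimize time and money."""
    return minutes + money

def norm_aware_cost(minutes, money, violates_norm):
    """Same objective, plus a large penalty for breaking a social norm."""
    return minutes + money + (1000 if violates_norm else 0)

for cost_fn in (literal_cost, norm_aware_cost):
    best = min(plans, key=lambda p: cost_fn(*p[1:]))
    print(f"{cost_fn.__name__}: {best[0]}")
# literal_cost picks the robbery; norm_aware_cost picks paying.
```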

One solution for avoiding common-sense goal failures is for intelligent systems to possess common-sense knowledge, meaning any knowledge commonly shared by individuals from the same society and culture. Common-sense knowledge can be declarative (e.g., cars drive on the right side of the road) or procedural (e.g., a waitperson in a restaurant will not bring the bill until it is requested). While there have been several efforts to create knowledge bases of declarative common-sense knowledge (CYC: Lenat, 1995; ConceptNet: Liu & Singh, 2004), these efforts are incomplete, and there is a dearth of readily available knowledge about procedural behavioral norms.
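As an illustration of tapping such a declarative knowledge base, the sketch below queries ConceptNet's public web API (api.conceptnet.io). The concept "pharmacy" is an arbitrary choice, and the field handling assumes ConceptNet 5's documented JSON layout.

```python
# Fetch a few declarative common-sense assertions about "pharmacy"
# from ConceptNet's public API and print them with their weights.
import requests

resp = requests.get("http://api.conceptnet.io/c/en/pharmacy", timeout=10)
resp.raise_for_status()

for edge in resp.json()["edges"][:5]:
    # surfaceText is a human-readable rendering of the assertion;
    # it may be missing, in which case we assemble one from the labels.
    text = edge.get("surfaceText") or "{} {} {}".format(
        edge["start"]["label"], edge["rel"]["label"], edge["end"]["label"]
    )
    print(f'{edge["weight"]:.2f}  {text}')
```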

There are a number of sources from which intelligent systems might acquire common knowledge, including machine vision applied to cartoons (Vedantam, Lin, Batra, Zitnick, & Parikh, 2015), images (Sadeghi, Divvala, & Farhadi, 2015), and video. Predictably, much common-sense knowledge can be inferred from what people write, including stories, news, and encyclopedias such as Wikipedia (Trinh & Le, 2018). Stories and writing can be particularly powerful sources of common knowledge: people write what they know, and social and cultural biases and assumptions surface in everything from descriptions of the proper procedure for going to a restaurant or a wedding to implicit assertions of right and wrong.

Procedural knowledge in particular can be used by intelligent systems to provide better services to people, by predicting their behavior or by detecting and responding to anomalous behavior. In the same way that predictive text completion is helpful, predicting broader patterns of daily life can also be helpful. Combining common-sense procedural knowledge with observed behavior can likewise yield intelligent agents that are safer: to the extent that it is impossible to enumerate the “rules” of a society (which are more than just its laws), common-sense procedural knowledge can help intelligent systems and robots follow social conventions, which often exist to help humans avoid conflict with each other even though they may inconvenience us.
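One simple way to operationalize this kind of prediction, sketched below with hypothetical activity logs, is a first-order Markov model of a person's routine: frequent transitions support prediction of the next action, while rare or unseen transitions can be flagged as anomalous.

```python
# Learn routine action transitions from observed days, then predict
# the likely next action and flag transitions that break the routine.
from collections import Counter, defaultdict

observed_days = [  # hypothetical daily activity logs
    ["wake", "coffee", "commute", "work", "lunch", "work", "home"],
    ["wake", "coffee", "work", "lunch", "work", "gym", "home"],
    ["wake", "commute", "work", "lunch", "work", "home"],
]

transitions = defaultdict(Counter)
for day in observed_days:
    for current, nxt in zip(day, day[1:]):
        transitions[current][nxt] += 1

def predict_next(action):
    """Most likely next action under the learned routine."""
    counts = transitions[action]
    return counts.most_common(1)[0][0] if counts else None

def is_anomalous(action, next_action, threshold=0.1):
    """True if this transition is rare relative to the learned routine."""
    counts = transitions[action]
    total = sum(counts.values())
    return total == 0 or counts[next_action] / total < threshold

print(predict_next("wake"))           # coffee
print(is_anomalous("lunch", "home"))  # True: never observed after lunch
```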

Common-sense knowledge, the procedural form of which can act as a basis for a theory of mind when interacting with humans, can make human–AI interaction more natural. Even though ML and AI decision-making algorithms operate differently from human decision-making, an agent that acts in line with common-sense expectations behaves in a way that is more recognizable to people. It also makes interaction with people safer: it reduces common-sense goal failures, because the agent fills in an under-specified goal with common-sense procedural details, and an agent that acts according to a person's expectations will innately avoid conflict with a person who is applying their theory of mind of human behavior to intelligent agents.

The nature of human behavior is complex, sometimes illogical, and often difficult to understand; physical actions and observable emotions alone can never fully explain the whys and hows of human behavior. Human reactions are shaped by culture, thinking, and upbringing, which is where the science of psychology comes into play. If humans alone had to process all of that behavioral data, the result would be errors and wrong conclusions, but with the emergence of AI, gaining accurate insights has become far easier. For example, StressSense tracks the times when people are most stressed, helping companies avoid such anxious situations, while MoodRhythm allows patients with bipolar disorder to monitor sleep and social interactions in order to maintain balanced mood and energy levels.

It is interesting to note that AI can be very effective when used as a marketing tool. In 2019, humans are dealing with real-world limitations that make it hard to identify how far communications can track human behavior. Fortunately, AI-based predictive modelling can deliver rapid insights: an AI model can learn which signs and signals matter most, and which interventions are best applied to a particular type of person in any situation.

The question of what makes a good explanation of the behavior of an ML system is an open one that has not been explored in depth from a human-factors perspective. One option for natural language explanation is to generate a description of how the algorithm processes sensory input. This can be unsatisfactory, because algorithms such as neural networks and reinforcement learning defy easy explanation (e.g., “the action was taken because numerous trials indicate that in situations similar to this one, the action has the highest likelihood of maximizing future reward”).
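For illustration, an explanation of that kind can be generated by filling a template from a learned value table. The Q-values below are invented; the point is that the output describes the mathematics of the policy rather than any intent a non-expert would recognize.

```python
# Template-filled "algorithmic" explanation from a (hypothetical) Q-table.
q_values = {"turn_left": 0.12, "turn_right": 0.74, "stop": 0.31}

best_action = max(q_values, key=q_values.get)
print(
    f"Action '{best_action}' was taken because, over many trials, it has "
    f"the highest estimated future reward ({q_values[best_action]:.2f}) "
    "in states similar to this one."
)
```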

Another option is to take inspiration from how humans respond to the question “why did you do that?” Humans produce rationales: after-the-fact explanations that plausibly justify their actions. People do not know how the exact cascades of neural activation resulted in a decision; they invent a story consistent with what they know about themselves, with the intent of being as informative as possible. In turn, others accept these rationales knowing that they are not perfectly accurate reflections of the cognitive and neural processes that produced the behavior. Rationale generation is thus the task of creating an explanation comparable to what a human would say if he or she were performing the agent's behavior in the same situation. Ehsan, Tambwekar, Chan, Harrison, and Riedl (2019) show that human-like rationales, despite not being true reflections of the internal processes of a black-box intelligent system, promote feelings of trust, rapport, familiarity, and comfort in non-experts operating autonomous systems and robots.

With these desiderata in mind, human-centered AI can be broken into two critical capacities: (a) understanding humans, and (b) being able to help humans understand the AI systems. This may seem a limited set of capabilities, yet many of the attributes humans desire in intelligent systems that interact with non-expert users, and in systems designed for social responsibility, can be derived from these two. For example, there is a growing awareness of the need for fairness and transparency in deployed AI systems.

Fairness is the requirement that all users are treated equally and without prejudice. Right now, humans must make a conscious effort to collect data and build checks into their systems to prevent prejudicial behavior. An intelligent system that has a model of, and can reason about, the social and cultural norms of the population it interacts with can achieve the same effect, avoiding discrimination and prejudice in situations not anticipated by the system's developers. Transparency is about providing end-users with some means of access to the data sets and workflows inside a deployed AI system. The ability to help people understand a system's decisions, through explanations or other means accessible to non-experts, gives people a greater sense of trust and makes them more willing to continue using AI systems. Explanations may even be the first step toward remedy, a critical aspect of accountability.
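As one concrete illustration, a basic fairness audit can compare positive-decision rates across groups (demographic parity). The decisions below are made up, and the 0.8 threshold borrows the common “four-fifths” rule of thumb.

```python
# Compare approval rates across two groups and flag potential
# disparate impact using the four-fifths rule of thumb.
decisions = [  # (group, approved) -- hypothetical outcomes
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

rates = {}
for group in {g for g, _ in decisions}:
    outcomes = [approved for g, approved in decisions if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

ratio = min(rates.values()) / max(rates.values())
print(rates, f"parity ratio = {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: review training data and model.")
```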

Human-centered AI does not mean that an AI or ML algorithm must think like a human or be cognitively plausible. It does, however, recognize that people who are not experts in AI or computer science fall back on a theory of mind designed to facilitate their interaction with other people, and draw on sociocultural norms that have emerged to avoid human–human conflict. Making intelligent systems human-centered means building them to understand the (often culturally specific) expectations and needs of humans, and to help humans understand them in return. The pursuit of human-centered AI presents a research agenda that will improve the scientific understanding of fundamental AI and ML while simultaneously supporting the deployment of intelligent products and services that interact with people in everyday contexts.

Robots and AI were created by humans; they are tools that serve us when we give them the right instructions. The point is that humans and technology must work together: humans in control, and the technology providing what it is programmed to provide. The idea that technology will replace the need for creative thinking, problem-solving, leadership, teamwork, and initiative is, for now, rather far-fetched. Coming to the debate of Artificial Intelligence vs. Human Intelligence: recent AI achievements imitate human intelligence more closely than ever before, yet machines remain far from matching what human brains are capable of. The ability of humans to apply acquired knowledge with a sense of logic, reasoning, understanding, learning, and experience is what makes them stand out. With knowledge comes power, and with power comes great responsibility.

Although machines may be able to mimic human behavior to a certain extent, they fall short of making rational decisions the way humans do. AI-powered machines make decisions based on events and the associations learned from them; however, they lack common sense. AI systems have little understanding of “cause” and “effect”, while real-world scenarios demand a holistic, human approach.

Each human has a different emotional quotient and absorbs information in varying contexts and styles. The learning model that humans adopt must include humanness and a frequency that matches their mindset; to be precise, AI cannot offer a real human touch to our learning journeys. Given present data and AI advancements, language processing, vision, image processing, and common sense remain challenges for machines and require human intervention. Since AI is still in its development stage, the future lies in how well humans govern AI applications so that they abide by human values and safety measures. After all, as Nick Burns, Data Scientist at SQL Services, explained: “No matter how good your models are, they are only as good as your data…”


Part 1 of this Article - CLICK HERE
