Artificial Intelligence


Mohammed Ashraf Hasanain (TP078045)

AI Issue and Solution

Lack of Transparency in Decision-Making

Introduction

Nowadays, AI is leading us into a new age. AI, or Artificial Intelligence, is a simulation of human intelligence that can perform tasks humans could do, for example solving a math equation or writing a story or summary. It can also handle harder tasks, such as assisting doctors in diagnosing diseases and generating audio. But whatever the capability of artificial intelligence, the challenge remains how to make it a reliable technology, even if its intelligence can surpass human intelligence. Until now we cannot fully trust AI, and one of the biggest problems facing AI is the "Lack of Transparency in Decision-Making".



Mohammad Abdelrahman (TP073489)

An issue with understanding the decision:


When AI systems provide recommendations or insights, human decision-makers may rely too heavily on these outputs without fully understanding the underlying processes, undermining their own capacity to think critically and analyze the reasoning behind those outputs. This leads to several problems:

Overreliance: A person may become dependent on such an AI and never stop to question the solutions it proposes. These gaps in scrutiny can leave the choices made by AI inaccurate and even biased, whether because of a data error or the complexity of the algorithm.
Reinforcing Bias: Because AI learns from the data it is given, there is a risk of it absorbing biases in human behaviour without anyone noticing. If an AI such as a chatbot learns from biased data, those biases will likely spread into its decision-making; as people begin to accept the suggestions they receive from AI, the discrimination is propagated further.
Erosion of expertise: AI systems that are biased or inaccurate can gradually erode people's confidence in experts. A senior employee may lose their capacity for critical work and their expertise as AI tools take over complex decision-making that needs nuanced understanding and interpretation.



Hana Okamura (TP074493)

Solution

When it becomes usual for AI to provide solutions to problems, human problem-solving skills will decline. We may become dependent on AI all the time and lose the ability to create original ideas. For example, AI is used for creative tasks such as composing documents and creating music. However, AI is unable to create truly original ideas because its creations are based on existing data. Human sensitivity and originality are essential for creative work. AI can show one solution, but not always the most suitable one. As AI is only a decision-making assistant, it is important to gather information, analyze, think, and make decisions on one's own.

If humans rely on the answers given by AI, inaccuracies and biases caused by data errors and complex algorithms can be reflected directly in their decisions. The same applies to chatbots, which may unintentionally learn discriminatory expressions and ideas. Therefore, the data used to train AI must be collected from people of different genders, races, ages, and origins. Using unbiased data can reduce AI bias. In addition, a system is also needed to detect and remove bias in the data and algorithms.
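As a rough illustration of the first step, the sketch below checks how evenly a training dataset covers different demographic groups before any model is trained. It is a minimal example, not part of any specific system: the DataFrame, the column names ("gender", "age_group"), and the sample values are all assumptions made for demonstration.

```python
# Minimal sketch: audit a training dataset for demographic balance.
# The DataFrame and its column names are illustrative assumptions.
import pandas as pd

def representation_report(df: pd.DataFrame, group_columns: list[str]) -> None:
    """Print the share of each demographic group in the dataset."""
    for col in group_columns:
        shares = df[col].value_counts(normalize=True)
        print(f"\n{col} distribution:")
        print(shares.round(3).to_string())

# Tiny illustrative dataset; a real audit would run on the full training data.
data = pd.DataFrame({
    "gender": ["F", "M", "M", "F", "M", "M"],
    "age_group": ["18-30", "18-30", "31-50", "51+", "31-50", "18-30"],
})
representation_report(data, ["gender", "age_group"])
```

A report like this only reveals under-represented groups; deciding how to rebalance or augment the data still requires human judgment.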

AI is only skilled at processing vast amounts of data and cannot replace human experience and expertise. For instance, AI can analyze medical data, diagnose diseases, and suggest treatments. However, it may give incorrect diagnostic results or fail to adequately take into account a patient's individual constitution or medical condition. When utilizing AI, it is necessary for the AI and the expert to make decisions together, based on the expert's knowledge and experience. The results provided by the AI should be analyzed by the expert before appropriate decisions are made.
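One possible shape of this "decide together" pattern is sketched below: the AI's suggestion is treated only as a starting point, and the expert's judgment is what is finally recorded. The class names, the 0.9 confidence threshold, and the example values are hypothetical and used only for illustration.

```python
# Minimal sketch of a human-in-the-loop decision: the expert's call is final.
# Class names, threshold, and values are hypothetical.
from dataclasses import dataclass

@dataclass
class Suggestion:
    diagnosis: str      # the model's suggested label
    confidence: float   # the model's reported probability for that label

def final_decision(suggestion: Suggestion, expert_diagnosis: str) -> str:
    """Keep the AI suggestion only when it is confident and the expert agrees."""
    if suggestion.confidence < 0.9 or suggestion.diagnosis != expert_diagnosis:
        return expert_diagnosis   # low confidence or disagreement: the expert decides
    return suggestion.diagnosis   # high-confidence agreement

# Example: the model is unsure and the expert disagrees, so the expert's view wins.
print(final_decision(Suggestion("benign", 0.72), expert_diagnosis="malignant"))
```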


Abdel Rahman Ashraf Hasanain (TP077708)

Problems with Error Detectors:


AI error detectors still contain flaws that need to be resolved in order to provide better service to users. One problem is sensitivity to context, which matters to some users; another is that users cannot type in informal language, because the detector does not understand slang, which may produce results that are unsatisfactory or inconsistent with the problem the user described. Moreover, the world keeps moving forward and new terms keep appearing, so AI error detection must stay familiar with what is new in our world. Furthermore, some error detectors rely on another person's input, which can affect the user by delivering wrong information in what may be either a routine or a sensitive case. Another weakness is a lack of detail: some detectors identify an error but do not explain how to solve it or how to reach the answer, which makes them useless to the user. Lastly, not all error detectors can handle different kinds of content, such as mathematical questions or pictures containing complex graphs, so this limitation must be addressed before they can be truly helpful.


Adel Zeinab (TP078282)

Solution

There are a few key methodologies that should be employed to mitigate transparency issues in AI development.

Firstly, explainable AI (XAI) methods give models the capability to explain their outputs. Transparency is also improved by attention mechanisms and feature-importance measures that highlight the factors on which AI decisions are based.
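To make "feature importance" concrete, here is a minimal sketch using scikit-learn's permutation_importance on a public dataset. The dataset and the random-forest model are stand-ins chosen for illustration, not the method any particular system uses.

```python
# Minimal sketch: feature-importance explanation via permutation importance.
# Dataset and model are illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much the test score drops:
# a larger drop means the model relies more heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```

Reporting the top features alongside a prediction gives users and auditors at least a partial view of why the model decided as it did.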

Secondly, self-assessment mechanisms allow an AI system to monitor the amount and type of its interaction and activity in real time. By assessing its own accuracy and reliability, the system can identify its mistakes or biases and correct them, strengthening accountability.
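One simple form of such self-assessment is a calibration check: the system compares the confidence it claims with the accuracy it actually achieves on held-out data, so over-confident behaviour can be spotted. The model, dataset, and confidence bins in the sketch below are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch: compare a model's claimed confidence with its observed accuracy.
# Model, dataset, and bin edges are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
confidence = model.predict_proba(X_test).max(axis=1)   # claimed confidence
correct = model.predict(X_test) == y_test              # whether each prediction was right

# Bucket predictions by claimed confidence and compare with observed accuracy.
for low, high in [(0.5, 0.7), (0.7, 0.9), (0.9, 1.01)]:
    mask = (confidence >= low) & (confidence < high)
    if mask.any():
        print(f"confidence {low:.1f}-{high:.1f}: "
              f"claimed {confidence[mask].mean():.2f}, "
              f"observed {correct[mask].mean():.2f}, "
              f"{mask.sum()} cases")
```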

One more significant way transparency tools advance AI is by explaining the algorithms themselves. Heatmaps and saliency maps help present the processes inside an AI application in a way that is easier to understand for the users who are responsible for auditing its performance. Also, bias-detection algorithms help developers identify biases in AI systems: they continuously monitor the training data and evaluate model results, searching for disparities or discriminatory patterns, which supports the development of fairer AI.
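As a concrete example of the bias-detection idea, the sketch below computes the gap in positive-prediction rates between demographic groups (sometimes called the demographic-parity difference). All names and numbers are made up for illustration; a real audit would run this over the actual model outputs and recorded group labels.

```python
# Minimal sketch: flag gaps in positive-prediction rates between groups.
# All data here is made up for illustration.
import numpy as np

def selection_rate_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0])             # 1 = favourable outcome
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # demographic group label
print(f"selection-rate gap: {selection_rate_gap(predictions, groups):.2f}")
```

A large gap is not proof of discrimination on its own, but it is a signal that the training data and model behaviour deserve a closer look.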


References:

IBM. (n.d.). What is explainable AI (XAI)? https://www.ibm.com/topics/explainable-ai

Cornell University Center for Teaching Innovation. (n.d.). Self-assessment. https://teaching.cornell.edu/teaching-resources/assessment-evaluation/self-assessment

Wren, H. (2024, January 18). What is AI transparency? Zendesk. https://www.zendesk.com/blog/ai-transparency/

Chandrasekar, S. (2023, December 10). Challenges and solutions in AI-driven scientific research. AZoAi. https://www.azoai.com/news/20231210/Challenges-and-Solutions-in-AI-Driven-Scientific-Research.aspx
