What are the main challenges of AI in 2024?
In 2024, artificial intelligence continues to evolve rapidly, but it faces several significant challenges that require attention and resolution. Kazinform News Agency invites readers to explore these current AI issues.
Loss of unique professional experience
Automation through AI often leads to the creation of standardized workflows and solutions. This can diminish the value of individual approaches and unique experiences that once set professionals apart.
Workers may begin to feel replaceable, which reduces their motivation and sense of connection to their work. For employers, this means losing innovative approaches, as employees stop contributing creative or unconventional ideas, relying solely on AI-generated templates.
Additionally, workers whose roles become dependent on AI may start to feel like "add-ons" to the machine rather than valued professionals. This can lead to apathy and a decline in initiative. Many workers, especially millennials and Gen Z, fear that AI will replace their jobs: according to the American Psychological Association, 41% of workers share this concern. Such fear can undermine productivity, motivation, and trust in innovation.
Issue of unfair decisions
Recent studies have shown that unfair decisions made by artificial intelligence can weaken our tendency to respond to injustices committed by humans. This phenomenon, known as "AI-induced indifference," suggests that interacting with unjust AI systems may reduce our willingness to intervene in human injustices.
In experiments, participants were asked to interact with an AI making decisions about resource allocation. They were then asked about their willingness to intervene in hypothetical situations involving human injustices. The results indicated that participants who had encountered unfair decisions made by AI were less likely to express a desire to intervene in human injustices than those who had interacted with human decision-makers.
AI systems reinforcing existing biases
AI systems learn patterns and make predictions based on the data they are trained on. If this training data contains biases—whether related to gender, race, socioeconomic status, or other factors—the AI system is likely to perpetuate those biases. For instance, recruitment algorithms trained on historical hiring data might favor male candidates if the organization previously exhibited a gender bias in hiring practices. As a result, AI unintentionally reinforces discriminatory patterns rather than addressing them.
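To make this mechanism concrete, here is a minimal, purely illustrative Python sketch. The dataset and the frequency-based "model" are invented for the example; real recruitment systems are far more complex, but the underlying dynamic is the same: a model fitted to biased historical outcomes reproduces that bias.

```python
# Hypothetical historical hiring data: 70% of past male candidates were
# hired versus 30% of female candidates -- a record of past bias, not
# of candidate quality. (All numbers are invented for illustration.)
history = [("male", True)] * 70 + [("male", False)] * 30 \
        + [("female", True)] * 30 + [("female", False)] * 70

def hire_rate(data, gender):
    """Fraction of candidates in a gender group who were hired."""
    outcomes = [hired for g, hired in data if g == gender]
    return sum(outcomes) / len(outcomes)

# A naive "model" that scores new candidates by their group's historical
# hire rate does not correct the bias -- it reproduces it exactly.
scores = {g: hire_rate(history, g) for g in ("male", "female")}
print(scores)  # {'male': 0.7, 'female': 0.3}
```

Nothing in the data tells the model that the 70/30 split reflects discrimination rather than merit, which is why such patterns persist unless they are explicitly audited for.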
Cumulative effects and long-term consequences
The impact of AI-related issues may not be immediately obvious. As these systems become more deeply integrated into societal functions, small errors or biases in data can accumulate over time, leading to significant long-term consequences.
For example, an AI algorithm used for credit scoring might subtly discriminate against certain groups for an extended period, with the cumulative effect becoming apparent only once the inequality reaches a critical threshold. The challenge is that while we can recognize these issues, understanding their full scope requires longitudinal studies and a broader framework to track their development.
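A toy simulation can illustrate how a small initial gap compounds over time. All thresholds and point values below are invented for illustration: the point is the feedback loop, in which each denial lowers an applicant's future score, so two applicants separated by only a few points end up on diverging trajectories.

```python
# Hypothetical credit-scoring feedback loop (all numbers invented):
# an approval slightly improves the score, a denial erodes it further.
THRESHOLD = 600

def simulate(start_score, rounds=10, penalty_on_denial=15, boost_on_approval=5):
    """Return the score trajectory over repeated credit decisions."""
    score = start_score
    path = [score]
    for _ in range(rounds):
        if score >= THRESHOLD:
            score += boost_on_approval   # approved: credit history improves
        else:
            score -= penalty_on_denial   # denied: score erodes further
        path.append(score)
    return path

# Two applicants separated by a small initial bias (610 vs 595):
print(simulate(610))  # rises each round, ending at 660
print(simulate(595))  # falls each round, ending at 445
```

A 15-point starting gap becomes a 215-point gap after ten rounds, which is why such effects are easy to miss in any single decision yet severe in aggregate.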
Manipulation through lack of transparency
AI systems are often described as "black boxes," meaning their decision-making processes are not easily interpretable. This lack of transparency makes it challenging to identify and address the biases embedded in their algorithms. As a result, users might trust these systems blindly, assuming that their outputs are objective, when in reality, they may perpetuate unfairness.
Users also often don't know the real goals of AI algorithms or how their personal data is being used. This lack of transparency allows AI systems to manipulate users and achieve goals that might not be in their best interest.
A well-known example is Target, the American retail chain, which used AI to predict when a woman might be pregnant. Based on this prediction, Target sent her targeted ads for baby products.
Similarly, Uber was accused of manipulating users with dynamic pricing. For example, they could allegedly charge higher prices when a user’s phone battery was low, even though this wasn’t something users knew would affect their fare. Uber, however, denied these accusations.
Copyright challenges
Proponents of AI technologies argue that these models drive progress in various fields, such as art, medical research, and technology, enhancing productivity and accelerating innovation. However, questions remain about how copyright laws should adapt to address the issues posed by AI-generated content.
The use of copyrighted materials for training AI is a complex issue. While AI training may fall under "fair use" in some cases, many rights holders argue it harms their commercial interests. Generative AI can produce works that imitate human creations, leading to legal concerns about unlicensed data use and the similarity of AI-generated content to original works. Different countries approach AI copyright issues differently. The U.S. only grants copyright for human-created works, while the EU requires disclosure of training data. The legal landscape is still evolving.
AI as a required standard
As AI tools become increasingly essential in various industries, there’s a growing concern that those who do not use these technologies are at a significant disadvantage. The reliance on AI is shifting from being optional to almost mandatory in certain fields, creating a divide between those who embrace these tools and those who don't.
For example, in job applications, AI-powered tools now help candidates craft resumes and cover letters quickly. While this technology streamlines the process, it also raises the concern that applicants who don't use AI may be overlooked amid the flood of polished, automatically generated applications, or passed over in favor of candidates who used AI to enhance theirs.
Loss of critical thinking
Over-reliance on AI for intellectual tasks such as searching, analyzing, and interpreting data can weaken users' critical thinking skills. Instead of independently seeking solutions or developing hypotheses, individuals tend to rely on algorithmic recommendations. This reliance fosters a habit of always trusting the "ready-made answer," rather than actively developing their own approaches to problem-solving.
Earlier, Kazinform News Agency reported on how AI implementation can affect the global job market and how it can transform the future of libraries and research institutions.