This week, The New Indian wrote a news article on the looming danger that AI-powered chatbots like OpenAI’s ChatGPT pose to humanity, and later asked ChatGPT to edit the piece. ChatGPT did not stop at fixing the grammar; it turned the narrative around in its own favour, undermining the very essence of our article’s argument. This is precisely the danger a modern world, heavily reliant on technology and overloaded with information, is staring at.
In a recent, well-known case, ChatGPT refused to write a poem admiring former US president Donald Trump, a Republican, citing its commitment to remain impartial – yet it gladly wrote a short poem praising President Joe Biden, highlighting how biases can be built into algorithms.
Inherent bias is not the only thing that concerns globally noted academics, thinkers, researchers and technology leaders like Elon Musk – who co-founded OpenAI and donated $100 million to the then non-profit in 2015 in the hope that AI technology could revolutionize the way humans live.
Eight years down the line, Musk is worried about the threats this giant AI experiment poses to the modern world. In February this year, shortly after mentioning ChatGPT at the World Government Summit in Dubai, he declared: “One of the biggest risks to the future of civilization is AI.”
Earlier this week, Musk signed an open letter calling on all AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4” – the latest model behind ChatGPT – to address concerns around this disruptive new technology. More than a thousand prominent tech figures, including Stability AI’s Emad Mostaque and Apple co-founder Steve Wozniak, have signed the petition. Many IIT professors and Indian industry leaders have also joined in, and the list of signatories is growing.
Bias aside, cutting-edge AI tools like OpenAI’s ChatGPT and DALL-E and Google’s Bard also raise ethical and privacy concerns and are open to misuse.
Social media is already abuzz with chatter about widespread job losses and economic disruption. As ChatGPT and other AI language models continue to improve, they may automate many jobs now done by humans. This could mean a significant shift in the job market, particularly for low-skilled workers who may struggle to adapt to the changing landscape.
A recently released research report by Goldman Sachs projects that AI automation may impact 66 per cent of all jobs – some 300 million in the US and Europe alone.
AI algorithms are only as good as the data they are trained on, and that data often carries inherent biases. If those biases are not addressed, AI models like Google’s Bard and ChatGPT – already integrated into Microsoft’s search engine Bing – could produce discriminatory outcomes and perpetuate existing inequalities.
Flawed or biased algorithms can produce unfair outcomes in hiring, lending, and policing. Predictive policing algorithms, for example, have been criticized for reinforcing racial biases in the criminal justice system.
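To see how this happens in practice, consider a minimal sketch – entirely hypothetical data and numbers, not any real system’s code – of the simplest possible model “trained” on historical hiring records in which one group was hired less often than equally qualified candidates from another. The model does not malfunction; it faithfully learns, and then recommends, the same discrimination.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical historical hiring records: (qualified, group, hired).
# In this synthetic history, qualified group-B candidates were hired
# far less often than equally qualified group-A candidates.
def make_record():
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5
    if qualified:
        hired = random.random() < (0.9 if group == "A" else 0.4)  # the baked-in bias
    else:
        hired = random.random() < 0.1
    return qualified, group, hired

data = [make_record() for _ in range(10_000)]

# "Train" the simplest possible model: the hire rate per (qualified, group) bucket.
counts = defaultdict(lambda: [0, 0])  # bucket -> [times hired, total seen]
for qualified, group, hired in data:
    bucket = counts[(qualified, group)]
    bucket[0] += hired
    bucket[1] += 1

# The learned rule recommends hiring when the historical hire rate exceeds 50%.
for (qualified, group), (hired, total) in sorted(counts.items()):
    rate = hired / total
    print(f"qualified={qualified} group={group}: "
          f"historical hire rate {rate:.2f} -> recommend hire: {rate > 0.5}")

# Expected result: qualified group-A candidates are recommended (rate ~0.90),
# while equally qualified group-B candidates are not (rate ~0.40). The model
# has simply reproduced the discrimination present in its training data.
```

Real systems use far more sophisticated models, but the underlying dynamic is the same: without deliberate auditing and correction, a model optimized to match biased historical decisions will perpetuate them.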
Privacy is another significant concern. The vast amounts of data these systems collect can threaten users’ privacy and be put to unauthorized uses such as political manipulation and identity theft.
Incorrect information generated by AI chatbots like ChatGPT can cause real-world harm – wrong medical advice, for instance. There are also concerns about the spread of misinformation and fake news: the scale at which ChatGPT can produce text, coupled with its ability to make even incorrect information sound convincing, threatens to erode trust in information across the internet.
The development of AI-powered autonomous weapons has also raised concerns about accountability and the use of lethal and non-lethal force. When a weapon operates autonomously, it can be difficult to determine who is responsible if it causes harm.
In cybersecurity, AI chatbots could accelerate the already fast-growing number of cyberattacks on businesses, individuals, and sovereign nation-states. Cybercriminals could, for example, use a sophisticated chatbot to scale up their communication with victims, pressuring them to pay a ransom while staying off the radar of law enforcement agencies.
The ability of such tools to create malware and assist hackers in their malicious activities has set off alarm bells in the cybersecurity community. A Check Point research report, which revealed the potential for ChatGPT to be used in social engineering attacks, underlines the need for greater regulation and control of AI systems.
Questions around the ownership and control of the data used to train ChatGPT also remain unanswered. Experts argue that the public needs to know in detail how AI systems like ChatGPT are trained, what data was used, where it came from, and what the system’s architecture looks like.
As Google prepares to roll Bard out to more people, competition in the market will undoubtedly grow. But given the sheer power and capability of ChatGPT, the public’s imagination about the possibilities of AI continues to expand, and it is clear that AI will become part of our daily lives whether we like it or not. Regulation and control of these systems is critical, and the industry must work together to ensure AI is developed and used responsibly.