On March 29, 2023, in an open letter addressed to political and business leaders around the world, leading figures in the tech industry, including Elon Musk (co-founder of OpenAI), called for a temporary pause in the rapid advance of Artificial Intelligence (AI).
The letter, signed by AI experts, including researchers, university professors, technology company executives, and philanthropists, highlights the need to address the ethical and social concerns surrounding AI before moving forward with its development: "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs."
Among the main concerns are algorithmic bias, threats to data privacy, and the potential elimination of jobs. The signatories of the letter argue that AI can have unintended and potentially harmful consequences if its development is allowed to proceed without adequate oversight.
The letter acknowledges that AI, like technology in general, has the potential to improve our lives in many ways, from healthcare to autonomous driving. However, it stresses that the challenges it presents must be addressed before its development moves forward.
It continues, saying: "AI has great potential to improve people's lives, but it also carries significant risks. We must ensure that it is developed in an ethical and responsible manner, with adequate supervision and constant attention to its possible consequences. A temporary pause in its development would allow us to address these concerns and ensure that AI is used for the common good."
The letter urges political and business leaders to seriously address these issues as a matter of urgency and to work together to ensure that AI development is ethical, responsible, and sustainable.
What would a temporary pause in AI development entail?
According to experts, pausing the development of AI would give professionals time to examine the ethical and social concerns surrounding these advances and to ensure that development proceeds responsibly. Such a step could have significant repercussions for the future of the technology and for how it is adopted in society.
In the short term, a pause could slow the release of new applications and technologies based on Artificial Intelligence. Large companies that rely on it could see a decline in innovative developments and competitiveness.
On the other hand, experts suggest that a temporary pause could bring long-term benefits, such as further research into more ethical and responsible technology, allowing AI to be used for the common good and with less risk to society.
What do Pangeanic's different experts think?
At Pangeanic, we have been implementing Artificial Intelligence in our processes for years to create solutions that, combined with human ingenuity, meet the needs of today's market.
We develop our products while constantly adapting to global improvements and new trends in the technology and Natural Language Processing sector, which allows us to remain competitive through innovation.
Could this standstill affect us as a company implementing AI in its services?
While a pause in AI development at large companies may have some negative effects on the AI industry as a whole, it could also create opportunities for small companies. However, it is important to keep in mind that the situation is complex and that the effects may vary depending on the specific industry and context.
From the technological point of view, our CTO Amando Estela admits, "there may be dangers in unmoderated AI research, but pausing the research seems to me at least as dangerous. I'm not in favor of restrictions when we know that some of the players are not going to go along with them."
AI is a technology that, like others, makes our lives easier. "I'm not convinced by the idea of technology companies temporarily stopping the development of Artificial Intelligence for six months. This decision suggests that we are moving too fast in deploying advanced Artificial Intelligence technology that could eventually surpass human capabilities. However, I don't share that view. It is essential to keep in mind that AI models are not intended to replace humans but to assist them in different tasks. Although they excel at various tasks, they lack human reasoning, empathy, and critical thinking," says Konstantinos Chatzitheodorou, our Head of Machine Learning.
Living in a society in constant change and constant search for improvement means that we must adapt. AI can help with many jobs, and it creates new ones, with the goal of benefiting us as a society. As experts note, pausing this technology for a period of time could put on "stand-by" those jobs that are still taking shape and for which the most visionary talent is being trained. "Working in a technology company forces you, in a certain way, to keep abreast of technological revolutions and their possibilities.
It's true that many of these new tools facilitate the work in some fields, but we must not fail to understand them as just that, as "facilitators" and never as substitutes, because human ingenuity and creativity will always be a necessity. I don't think there are jobs at risk, but I do think the rules will change, and in some cases roles will be redefined. We must see AI as support, and not abuse it or come to depend on it, because if it fails, everything will fail with it," says our People & Culture Manager, María Bodí.
It is always worth remembering that AI has expanded very rapidly into many areas of everyday life and poses ethical, social, and legal challenges: a lack of accountability in determining who is responsible for damages and errors, as well as discrimination issues, privacy threats, and data security risks. Our R&D Department Manager, Mª Ángeles García, explains that what this group of experts is requesting is somewhat utopian: "It seems to me to be the ideal situation and an essential step at this moment in time, but I don't think it's really going to happen. No one controls what each person researches, and with the leak of models such as LLaMA, that control has only weakened further."
On the other hand, the increasing ability of contemporary Artificial Intelligence systems to perform general tasks at a human-competitive level raises important questions about their impact on society. Should we allow machines to spread propaganda and fake news through our information and communication channels? Should we automate all jobs, including those that provide satisfaction and purpose? Should we develop non-human minds that could eventually replace humans?
These and many more questions are the ones that are arising in the current climate and have been raised over the years, as Nikita Teslenko, Technical Manager of the Innovation Department, says: "In order to avoid catastrophic outcomes, we must ensure that powerful AI systems are developed in a responsible and transparent way, with robust safety protocols and oversight."
But who is in charge of regulating what AI does? The lack of regulations for AI is creating uncertainty in the market, affecting data privacy and security, perpetuating discrimination, and making accountability for errors and damage difficult. Although there is still no universal regulation for AI, more and more countries and regions are addressing the issue and developing specific laws and regulations, or ethical principles to guide the development and use of AI.
Ana Fernández, COO and Head of Legal at Pangeanic, affirms that pausing the development of AI would give the legal sector time to regulate its use and to address these concerns while ethical and adequate safeguards are developed.
CEO Manuel Herranz adds that, if we want an ethical product, we must build it ourselves throughout the entire process, and we cannot allow the pursuit of excellence to lead us to become an empire that builds monuments on the backs of slaves.