

14/07/2023

The 4 pillars of Ethical AI – and why they’re important to Machine Learning

Pangeanic defines Ethical AI through four pillars: Transparency, Responsibility, Fairness, and Reliability, ensuring SLMs meet EU AI Act standards. Working closely with our clients to build Ethical AI systems permeates Pangeanic’s culture and everything we do. We don’t just talk the talk; we walk the walk. To this end, we want to make public our conscious efforts in data collection, data aggregation, and data annotation, building parallel corpora for ML systems in full compliance with the 4 pillars of Ethical AI. Together with strong multilingual data masking/anonymization technologies, we create datasets and build solutions based on solid Ethical AI principles.

 

What is Ethical AI? 

Ethical AI is a voluntary, worldwide concept that refers to the inception, design, development, and deployment of AI systems in a way that respects human rights and shared moral values. It focuses on creating AI systems that are transparent, fair, accountable, and respectful of privacy. The 4 pillars of Ethical AI are: 

  1. Transparency: The processes and decisions made by AI systems should be clear, understandable, and explainable. This helps users understand how a specific decision or output is reached by the AI.


  2. Fairness: AI systems must avoid biases that can lead to unfair outcomes or discrimination. AI should treat everyone equally and make decisions based on factual data rather than subjective biases.


  3. Privacy and Security: AI systems should respect users' privacy. This means that they should not collect or use personal data without the user's consent. Protecting the privacy and data of individuals is paramount in Ethical AI. It involves using proper data anonymization techniques, ensuring data security, and adhering to all relevant data protection laws and regulations.


  4. Accountability: There should be a way to hold those responsible for developing and using AI systems accountable for their actions.

The Impact of Ethical AI on Enterprise Data Sovereignty 

Advances in artificial intelligence (AI) are beginning to profoundly affect how we interact not only with machines but also with other humans, how we process information and, consequently, how we make decisions. AI can enhance the delivery of many goods and services, including in areas like healthcare, financial services, and technology, but it may also create bias toward a specific goal or segment of the population, ignore certain groups entirely, or spread misinformation. And this is merely the beginning.

As the volume of big data grows and computational power expands, with more data becoming available to more teams and the fine-tuning of Large Language Models and other AI systems becoming more affordable, we understand that the technology is set to revolutionize nearly every facet of our known existence. 

 New to the basics of LLMs? Read on:  Large Language Models

 

This big leap forward is a thrilling prospect that could help address some of our most formidable challenges to human welfare. But it also raises many valid concerns. As with any novel, rapidly developing technology, the universal integration of AI will involve a steep learning curve and require very fast user training. Inevitable missteps and misjudgments will occur, potentially leading to unforeseen and sometimes detrimental impacts. 

To harness the benefits and mitigate harm, AI ethics plays a vital role in ensuring that the societal and ethical ramifications of the design and deployment of AI systems are considered at every stage. AI ethics rests on those 4 fundamental Ethical AI pillars; to reinforce our message, systems need to be: Fair, Private, Robust, Explainable. 

Known issues so far 

We are only at the inception stage of AI system development, but we are all aware of edge cases that have hit the headlines precisely because of a lack of ethical oversight. Here are some examples of how each pillar has been transgressed. 

Fairness 

The use of AI to mitigate bias in credit scores underscores why fairness is essential in the engineering of AI systems. Despite often being viewed as unbiased and objective, credit scores have a lengthy history of discrimination, for instance, based on race or gender. AI-informed credit scoring models offer more detailed data interpretation and can reveal hidden correlations between variables that might not seem pertinent or even be included in a conventional credit scoring model – such as political inclinations or social media connections. However, integrating such unconventional information into credit scores risks exacerbating their existing bias. It's crucial that fairness be prioritized to ensure this supplementary data actually helps those it's intended to help by broadening access to credit. 
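The fairness concern above can be made concrete with a simple metric. Below is a minimal Python sketch of a demographic parity check — comparing approval rates across groups in a model's decisions. The data and group labels are made-up toy values for illustration, not output from any real credit-scoring model:

```python
def approval_rate(decisions, groups, group):
    """Share of applicants in `group` whose application was approved (1 = approved)."""
    in_group = [d for d, g in zip(decisions, groups) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_gap(decisions, groups):
    """Largest difference in approval rate between any two groups.
    A gap near 0 suggests the model approves groups at similar rates."""
    rates = [approval_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy example: 1 = approved, 0 = rejected
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 (group A) - 0.25 (group B) = 0.50
```

Demographic parity is only one of several fairness criteria; a real audit would also consider metrics conditioned on qualification, such as equalized odds.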

Privacy 

The critical importance of privacy in AI usage is already evident in healthcare. In 2016, a controversy arose when London-based AI company DeepMind collaborated with the NHS, raising concerns over the handling of sensitive patient data. The subsequent move of DeepMind's health sector to its parent company, Google, fueled even more worries. It's clear that linking patient data to Google accounts cannot be tolerated. However, a suitable balance between safeguarding individual privacy and facilitating societal benefits from AI applications, such as interpreting scans or planning radiotherapy, must be struck. The true value resides in anonymized collective data rather than personal data, so a compromise should be feasible. 

Robustness 

The significance of robustness in an AI system is evidenced in recruitment. AI can considerably enhance the hiring process by eliminating much of the speculation involved in identifying talent and reducing bias that often affects human decision-making. However, even tech giant Amazon encountered issues when its new hiring system was found to be biased against women. The system had been trained on CVs submitted over a decade, most of which came from men, reflecting the tech industry's male dominance. Consequently, the system learned to favor male candidates. AI systems aren't inherently robust – to maximize their benefits, they need to operate effectively in real-world settings. 

Explainability 

Microsoft's experiment in "conversational understanding" via a Twitter bot named Tay underlines why the fourth pillar of AI ethics, explainability, is necessary. It took less than a day for Twitter to taint the naive AI chatbot. The bot was supposed to improve through user interaction, learning to engage in casual and playful conversation. Unfortunately, the dialogue soon became offensive, with the bot parroting misogynistic and racist comments. Providing an explanation for an AI's decision-making is challenging, akin to explaining human intelligence, but it's vital to fully leverage the AI's potential. 

 

4 pillars of Ethical AI

What does Pangeanic do to implement Ethical AI in its processes? 

We believe that fair, accountable, transparent systems cannot be built on cheap labor. Pangeanic applies a fair pay policy to its suppliers, vendors, and employees. We recruit talent because it shows in the quality of the products and services our clients receive. 

  • Our processes are open to our clients, and we educate them on the use of AI for their applications (machine translation, classification, data for ML, data curation...). By understanding the data and processes, we can explain how and why Pangeanic’s technology works and how AI arrives at decisions and outputs. 

  • Our systems avoid bias from concept to application. Our research personnel have conducted R&D on biased publications to detect and eliminate hate speech and gender or ethnic biases. We constantly work with our clients to create datasets that are gender-balanced and free of toxicity. 

  • We build, improve, customize, and implement anonymization tools that respect individuals' privacy in 26 languages. Pangeanic leads the MAPA Project, supported by the EU, and is now a core part of several anonymization infrastructures. 

  • Pangeanic declares itself accountable for all its services and products in its AI and data-for-AI journey, with performance guarantees. 
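To illustrate the kind of task anonymization tooling performs, here is a minimal, rule-based Python sketch of PII masking. The patterns and labels are illustrative assumptions only: production systems like those described above combine NER models with many language-specific rules, and this is not Pangeanic's actual implementation:

```python
import re

# Illustrative regex patterns for a few common PII categories.
# Real anonymizers add NER models (for names, addresses, etc.)
# and per-language rules on top of pattern matching like this.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def mask_pii(text: str) -> str:
    """Replace each matched entity with its category placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact me at jane.doe@example.com or +34 961 234 567."
print(mask_pii(sample))  # Contact me at [EMAIL] or [PHONE].
```

Note that a personal name in the text would pass through untouched here, which is precisely why rule-based masking alone is insufficient and statistical NER is needed.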

One very good example was our CEO's involvement in the Translating Europe Workshop (TEW) GDPR Report, which helped translation professionals understand GDPR and how best to implement it.  

After several months of engagement among representatives from translators’ associations, language services personnel, academia, and legal professionals, the group produced a report of recommendations that was presented at the Translating Europe Workshop in 2022. 

In the words of Melina Skondra: “Funded by the European Commission Directorate-General for Translation, the Translating Europe Workshop is the first step towards common European GDPR guidelines for the translation and interpreting profession.” It encompasses the work of a panel of legal and T&I sector experts, an all-day online workshop with T&I professionals, and this summary report. 
 
The TEW GDPR report outlines key challenges in implementing and complying with the GDPR in the translation and interpreting profession, as explored and analyzed by the pan-European panel of experts. 
Congrats to the great team: Stefanie Bogaerts, John Anthony O'Shea LL.B, LL.M, Raisa McNab, Wojciech Woloszyk (Wołoszyk), Małgorzata Dumkiewicz, Paweł Kamocki, Zoe Milak, Manuel Herranz

Is Ethical AI the Future? 

Based on fundamental rights and ethical principles, the European Commission developed a set of Ethics Guidelines for Trustworthy Artificial Intelligence (AI). To build trust in human-centric AI, the Commission prepared the document with the High-Level Expert Group on Artificial Intelligence (AI HLEG). This independent expert group was established by the European Commission in June 2018 as part of the AI strategy announced earlier that year. 

The Guidelines list seven key requirements that AI systems should meet in order to be trustworthy: 

  1. Human agency and oversight

  2. Technical robustness and safety

  3. Privacy and Data Governance

  4. Transparency

  5. Diversity, non-discrimination, and fairness

  6. Societal and environmental well-being

  7. Accountability 

To put these requirements into practice, the Guidelines introduce an evaluation checklist that provides guidance on the practical application of each requirement. This checklist will undergo a testing phase, during which all interested stakeholders can provide feedback to enhance its effectiveness. Additionally, a forum has been established to facilitate the exchange of best practices for deploying Trustworthy AI.