AI systems are omnipresent. Whether it is voice commands, image recognition, recommender systems, or AI enabling our cars to drive autonomously, today AI systems are used in a wide range of devices and economic sectors. They optimise processes, recommend products to customers, or simply ease our daily lives. And more impactful applications in health, life sciences, climate, and science are emerging. AI can deliver net positive socioeconomic outcomes for us.
But what happens when it doesn’t?
Many recent incidents have already shown the negative impact of poorly designed AI systems, making AI a subject of interest to lawmakers and policymakers. For instance, in April 2019 the European Commission's High-Level Expert Group on Artificial Intelligence (HLEG-AI) published its 'Ethics guidelines for trustworthy AI', which was followed by the first legal framework on AI ('the AI Act'), proposed by the European Commission in April 2021. These initiatives address the risks of AI and position Europe to play a global role in trustworthy AI. Such regulatory frameworks are a large step towards a future of ethical and trustworthy Artificial Intelligence; at the same time, they are often abstract and of very limited practical applicability.
How does etami enable ethical, trustworthy, and legal AI?
etami supports and promotes trustworthy and ethical design, development, and deployment of AI systems by translating European and global principles for ethical AI into actionable and measurable tools and methods. These range from checklists to algorithmic auditing.
etami’s scope comprises several work streams focusing on fundamental elements of ethical and trustworthy AI, such as: