
Authors: Nafees Ahmad and Shezan Samrat

People and communities worldwide are demanding greater control over human rights data, especially as they become more aware of its potential and actual uses and abuses by various socio-political and trade entities. In the absence of federal legislation, states in the United States are attempting to find answers, as the European Union is doing with respect to human rights and the governance of digital life. What ought the boundaries to be? And what does the future hold for emerging technologies and artificial intelligence (AI) from a human rights standpoint? It is heartening that we are conversing about AI and human rights. We are all aware of the contemporary challenges confronting our planet and the state of human rights. The triple planetary crisis puts us in danger. There is no end in sight to long-standing hostilities, and new ones keep erupting, many with profound global repercussions. The aftereffects of the COVID-19 pandemic, which exposed and widened inequalities worldwide, still leave us reeling.

AI Limitations

However, setting limits on AI and other emerging technologies is one of the most urgent issues that society, governments, and the corporate sector must face. Over recent months, we have all witnessed and tracked the remarkable advances in generative AI, with ChatGPT and other programmes now easily accessible to a large audience. We know that AI has the potential to significantly advance human progress. It could boost scientific advancement, democratise access to knowledge, enhance strategic foresight and forecasting, and increase the capacity to process massive volumes of data. But to realise this potential fully, we must set boundaries and ensure that the advantages outweigh the hazards.

When we talk about boundaries, what we truly mean is regulation. To be adequate and humane, any solution or legislation must be grounded in respect for human rights and put people at the centre of the development of new technology. Two schools of thought are now influencing the evolution of AI legislation. The first is purely risk-based and relies heavily on self-regulation and self-assessment by AI developers. Risk-based regulation emphasises identifying and minimising risks to achieve results rather than relying on specific rules. This strategy places a great deal of responsibility on the private sector; some would say too much, and parts of the private sector itself say so.

Human Rights Principles

Additionally, it leads to glaring regulatory loopholes. The alternative strategy embeds human rights across the entire lifecycle of AI. Human rights principles are applied throughout the collection and selection of data and the design, development, deployment, and use of the resulting models, tools, and services. This is not a warning about the future; we are already experiencing the adverse effects of AI, and not just generative AI. Artificial intelligence can support authoritarian rule. It can power lethal autonomous weapons. It can form the foundation for ever more potent systems of societal control, monitoring, and censorship. Facial recognition technology, for instance, can be used to place our public spaces under mass surveillance, shattering any notion of privacy.

It has already been established that AI systems deployed in the criminal justice system to forecast future criminal activity reinforce inequality and jeopardise rights, including the presumption of innocence. Victims and experts, including many of you in this room, have been raising the alarm for quite some time, yet policymakers and AI developers have not responded to those concerns sufficiently or quickly enough. Governments and businesses both need to take immediate action. The United Nations, too, can play a crucial role in bringing essential parties together globally and providing guidance on the way forward. There is no time to waste. On climate change, the world waited too long. We cannot afford to make the same mistake again.

AI Regulation

What might regulation entail? The starting point should be the harm people have experienced and are likely to encounter. For this, it is essential to listen to affected individuals and to those who have long worked to identify and address these harms. Bias in AI disproportionately affects women, minorities, and people who are already marginalised; any discussion of governance must make a deliberate effort to include them. It is also imperative to scrutinise the use of AI in public and private services, including justice, law enforcement, migration, social protection, and financial services, where the risk of abuse of authority or invasion of privacy is higher.

Second, laws must require assessments of the human rights risks and impacts of AI systems before, during, and after use. Particularly when the State is deploying AI technologies, guarantees of transparency, independent monitoring, and access to effective remedies are required. AI technologies that cannot be used in compliance with international human rights law must be prohibited or put on hold until suitable protections are in place. Third, the regulations and safeguards that already exist must be enforced, including frameworks for data protection, competition law, and sector-specific rules such as those governing health, technology, or financial markets. A human rights perspective on the development and use of AI will have little impact if respect for human rights is insufficient across the wider regulatory and institutional landscape.

Fourth, we must resist the urge to let the AI sector itself decide whether self-regulation is adequate or what the relevant legal framework should be. In that respect, I believe we have learned a lesson from social media platforms. While industry opinions are valuable, it is crucial that the full democratic process, with laws shaped by all parties, be used to address a problem that will affect all people for a very long time. Under the Guiding Principles on Business and Human Rights, companies must also uphold their responsibility to respect human rights and take responsibility for the products they rush to market. Across sectors, businesses, civil society organisations, and AI experts must collaborate to produce recommendations on how to deal with generative AI. However, much more remains to be done in this direction.

Way Ahead

Finally, even though it would not be a quick fix, it may be worthwhile to explore the creation of an international advisory body for particularly high-risk technologies. Such a body could offer perspectives on how regulatory standards can be aligned with frameworks for universal human rights and the rule of law, make recommendations on AI governance, and publicly report the results of its deliberations. The UN Secretary-General has also proposed this as a component of the Global Digital Compact for next year's Summit of the Future. The human rights framework provides an essential foundation that can act as a guardrail for efforts to harness AI's immense potential while preventing and minimising its enormous risks.
