Impact of machine learning on society

Impact of machine learning on society

Below is a list of questions to serve as a starting framework for the discussion in this thread:

  • What effect do technology, and machine learning in particular, have on our society and on existing power relations or socio-economic inequalities?
  • How do they influence the work and focus of human rights defenders?
Framing impact: The Toronto Declaration

The Toronto Declaration was drafted during RightsCon 2018 and aims at protecting the rights to equality and non-discrimination in machine learning systems. I consider it a valuable document to keep handy when thinking about impact, as it brings key issues to light. Its language is inclusive and rights-based; it considers it paramount to protect the rights of all individuals and groups, as well as to promote diversity and prevent discrimination. It also proposes using the framework of international human rights law for protection and accountability, recognising that "states have obligations to promote, protect and respect human rights; private sector, including companies, has a responsibility to respect human rights at all times." See the declaration's full text at https://www.accessnow.org/the-toronto-declaration-protecting-the-rights-...

Framing impact: The Toronto Declaration

In the Toronto Declaration it is written that 'States have obligations to promote, protect and respect human rights; private sector, including companies, has a responsibility to respect human rights at all times.' This acknowledges the massive influence of private companies on society and their impact on human rights.
A few weeks ago Google published its principles on AI (https://blog.google/topics/ai/ai-principles/), which include commitments such as being socially beneficial and avoiding the creation or reinforcement of unfair bias.

Are these principles in line with the Toronto Declaration, and what changes in the private sector are required to ensure that algorithms benefit society?

ML and the courts

Comment originally posted by Nani Jansen Reventlow

Should ML be used to assist or even replace judicial decision making?

A positive view: https://www.technologyreview.com/s/603763/how-to-upgrade-judges-with-mac...
A negative view: https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-st...

Also, could machine learning help litigators decide what cases to bring, and what issues to highlight to increase their prospects of success? Some reflections here: http://www.sciencemag.org/news/2017/05/artificial-intelligence-prevails-...

 

ML and the courts

Comment originally posted by Natalie Widmann

That's a really tough question! Without having a clear opinion on this issue, here are some thoughts:
(1) Legal systems are made by humans to ensure social order and to resolve conflicts in a systematic and peaceful way. It somehow feels wrong to hand judicial decision making, which is grounded in human norms, morals and intuition, over to an algorithm. (As this is a very subjective statement, I would be happy to hear opinions from lawyers in the field.)
(2) Human judicial sentencing is subject to mistakes and structural bias (https://icaad.ngo/womens-rights/promote-access-to-justice/combating-vaw-...). Training an ML system on this data means that it captures all these biases and applies them at scale to new cases (a toy illustration of this follows below the list).
(3) It seems that the benefits of ML are measured in overall impact (e.g. reducing the number of people awaiting trial in jail by 40%, cutting crime by defendants by 25%, ...), while the harmful consequences of these techniques are unveiled in the individual stories of people who do not fit the patterns the algorithm was trained on. As transparency is lacking, there is no way to assess the algorithm's predictions.
(4) Most concerning for me is the self-fulfilling prophecy scenario: people will be put in jail based on automated decision-making algorithms and have no chance to prove that the algorithm was wrong. The human-constructed bias in the algorithm will persist and be reinforced by its own judgements.
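
To make point (2) concrete, here is a minimal, purely illustrative sketch: all data and numbers are synthetic and invented, and no real sentencing system is modelled. It shows how a classifier trained on historically biased decisions keeps assigning higher risk to one group even when the underlying risk is identical.

    # Toy sketch only: synthetic data, invented numbers, no real system modelled.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Two groups with the same distribution of "true" risk, but historical human
    # decisions detained group B more often (the +0.8 term encodes that bias).
    group = rng.integers(0, 2, n)                      # 0 = group A, 1 = group B
    true_risk = rng.normal(0.0, 1.0, n)                # identical for both groups
    biased_label = (true_risk + 0.8 * group + rng.normal(0.0, 1.0, n)) > 0.5

    model = LogisticRegression().fit(np.column_stack([true_risk, group]), biased_label)

    # New cases with exactly the same underlying risk still receive unequal scores.
    same_risk = np.zeros(1_000)
    score_a = model.predict_proba(np.column_stack([same_risk, np.zeros(1_000)]))[:, 1]
    score_b = model.predict_proba(np.column_stack([same_risk, np.ones(1_000)]))[:, 1]
    print(f"mean predicted risk, group A: {score_a.mean():.2f}")
    print(f"mean predicted risk, group B: {score_b.mean():.2f}")   # higher, purely from historical bias

The model has simply learned the historical pattern, and once deployed it would apply that pattern to every new case.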

Indeed. Very interesting. It

Comment originally posted by Enrique Piracés

Indeed. Very interesting. It reminds me of a post from a colleague at Amnesty: "The challenge from AI: is “human” always better? Is machine or human decision-making better? Do robots have rights? Will human rights survive machine and human evolution?" See the article at https://points.datasociety.net/the-challenge-from-ai-is-human-always-bet...

Also, I remember hearing from a wise lawyer and human rights practitioner during a recent workshop on AI that the point is perhaps to use ML to triage and make certain processes more efficient, but that for certain decisions that impact critical aspects of personal and social life, humans should make the final call. My take is that this is not only because (so far) we have tools to make (some) humans accountable for human rights violations, but also because we have not yet solved the issue of empathy in machines.

 

There is a recent article

Comment originally posted by Enrique Piracés

There is a recent article that also has a few bits that I think are valuable to consider, like "while technology can help uncover and improve understanding of human rights issues—we, the humans, have to develop the political will to intervene." The article is titled "AI insights into human rights are meaningless without action" and is available at https://www.openglobalrights.org/aI-insights-into-human-rights-are-meani... The post is an excerpt from the author's recent testimony to the Tom Lantos Human Rights Commission in the US Congress at a hearing titled "Artificial Intelligence: The Consequences for Human Rights" (available here: https://humanrightscommission.house.gov/events/hearings/artificial-intel...)

ML and the courts

Comment originally posted by Natalie Widmann

Other interesting articles are also here: https://peerj.com/articles/cs-93/ and http://www.balkaninsight.com/en/article/computer-analysis-could-show-kar...

These are great, and we

Comment originally posted by Enrique Piracés

These are great, and we should make sure to keep them among the examples of uses of ML in HR practice. I had mentioned the one about European Court Rulings, but the one about the Balkans is fascinating. It seems that it is less about "intent", as the article claims in its title, and more about how the jury's inference worked out. I wonder how these types of technologies are going to affect legal proceedings and strategies in general.

 

ML and law enforcement

Comment originally posted by Nani Jansen Reventlow 

Adding another dimension to this, before we make it to court: ML and law enforcement. In the Netherlands, an interesting challenge has been brought before the courts (as far as I know, still one comprised of human beings!) about the 'System Risk Indication' (SyRI), which allows government departments to exchange information about citizens to detect fraud: https://pilpnjcm.nl/en/dossiers/profiling-and-syri/. This results in risk profiles, which are then investigated further. Very simply put, this system will reinforce its own findings: the more people are investigated, the greater the chance something "bad" will pop up, which will then again feed into the construction of the risk profile, etc. This will have a clear impact on certain segments of society.
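
Purely to illustrate the dynamic described above (this is not a model of SyRI; the allocation rule and every number are invented), a small deterministic simulation of such an investigation feedback loop might look like this:

    # Toy simulation of an investigation feedback loop. NOT a model of SyRI:
    # the allocation rule and all numbers are invented for illustration only.
    # Two districts have the same true rate of irregularities, but one starts with a
    # slightly higher risk score. Most investigative capacity goes to whichever district
    # currently looks riskier, and every expected finding pushes that score further up.

    true_rate = 0.05                                        # identical in both districts
    risk_score = {"district_A": 0.50, "district_B": 0.55}   # slight initial tilt
    budget = 200                                            # investigations per round

    for _ in range(20):
        riskier = max(risk_score, key=risk_score.get)
        allocation = {d: (0.8 * budget if d == riskier else 0.2 * budget) for d in risk_score}
        for district, n_checks in allocation.items():
            expected_findings = n_checks * true_rate        # same underlying rate everywhere
            risk_score[district] += 0.02 * expected_findings

    print(risk_score)
    # district_B ends up with a far higher score than district_A even though behaviour in
    # the two districts is identical: the system keeps "confirming" its starting assumption.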

This is just one example of many experiments out there, some of which are being prematurely relied upon by law enforcement, who sometimes seem to have a very non-critical faith in the "neutrality" of technology.

Perhaps this is also a good time to speak about the design issues that have implications for the functionality of ML, including a lack of diversity in both the datasets and the designer base?

 

ML and law enforcement

Comment originally posted by Natalie Widmann

Thanks for sharing, Nani. Systems such as SyRI are very alarming, especially as they operate completely opaquely and their predictions have severe consequences for the lives of individuals.

It might be off topic for our discussion, but I wondered whether the approach of enabling 'the government to use the information they receive for purposes other than that for which it was provided' even complies with the newly enforced General Data Protection Regulation (GDPR). Any ideas?

 

These are really great points

Comment originally posted by Enrique Piracés

These are really great points (also, thanks for sharing info about SyRI). I would love to see more advocacy around avoiding premature adoption of technology, especially in areas where vulnerable, excluded or marginalized populations' fundamental rights could be impacted. At CMU we have done a session or two trying to demystify ML and explain what realistic expectations of its present and short-term future look like. There is significant societal pressure to adopt emerging technologies, often with inexplicable faith in their value.

In terms of the specifics, my sense is that conferences like FAT, a.k.a. the Conference on Fairness, Accountability, and Transparency (https://fatconference.org/), are examples of the venues or spaces where issues around diversity and bias in datasets are being discussed. Is that sufficient? I am not sure; it seems those conversations tend to be excessively "technical". I would love to see more social scientists and human rights practitioners included.

 

Diversity of ML

Comment originally posted by Natalie Widmann

I want to take Nani's point on diversity in machine learning to a new conversation thread as I think it is crucial when talking about the negative and discriminatory consequences of these technologies. We have seen racist chat bots, gender biases in job offer recommendations, evidence of human rights violations labeled as terrorist propaganda, and many more.

A lack of diversity in the development and testing phases, as well as datasets that underrepresent specific groups or already contain human bias, are major reasons for discriminatory algorithms. However, our social and historical context, as well as the defined target categories (do we classify people as female or male, or do we include other categories as well?), also influence the results.

How can we measure bias? Can diverse developer communities and inclusive datasets solve these issues? And how can we prevent machine learning algorithms from reinforcing and even accelerating human bias and current social inequalities?
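
On the question of measuring bias, below is a minimal sketch of two commonly used (and mutually contested) group fairness metrics, computed on invented toy data. A real audit would need much more care: intersectional groups, statistical uncertainty, and the fact that the choice of metric itself encodes values.

    # Toy data only: y_true are invented outcomes, y_pred invented model decisions.
    import numpy as np

    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])      # actual outcomes
    y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1, 1, 0])      # model decisions
    group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

    def selection_rate(mask):
        """How often the group receives the favourable decision."""
        return y_pred[mask].mean()

    def true_positive_rate(mask):
        """Among actual positives in the group, how often the model says positive."""
        positives = mask & (y_true == 1)
        return y_pred[positives].mean()

    a, b = group == "A", group == "B"

    # Demographic parity difference: gap in selection rates between groups.
    print("demographic parity gap:", selection_rate(a) - selection_rate(b))

    # Equal opportunity difference: gap in true positive rates between groups.
    print("equal opportunity gap:", true_positive_rate(a) - true_positive_rate(b))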

 

Human Rights Impact

Comment originally posted by Vivian Ng

These examples on law enforcement and criminal justice at the 'sharp end' of human rights have been great case studies for demonstrating some of the serious risks that the use of machine learning can pose to human rights. Just within criminal justice, there are many iterations of how machine learning can be used - from risk assessments in judicial sentencing, to prediction of judgments, to finding relevance in document discovery. The diversity of application makes it challenging to map how machine learning can impact society, in both private and public sector uses.

To add to the studies that others have pointed out, this topic seems to have gained more traction across various multi-stakeholder forums. Within the UN, it has featured at the annual UN Forum on Business and Human Rights, in the latest report of the Independent Expert on the enjoyment of all human rights by older persons, in various reports of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, and at the ITU AI for Good Global Summit; within the Council of Europe, there is the formation of the new Committee of experts on Human Rights Dimensions of automated data processing and different forms of artificial intelligence.

Effective implementation of the existing human rights framework, for example translating how the guidance in the UN Guiding Principles on Business and Human Rights applies to companies developing and using machine learning systems, is a persistent topic of discussion. We have suggested that a human rights based approach should sit at the centre of the development and use of AI (see, e.g., our submission to the UK House of Lords inquiry on AI http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-committee/artificial-intelligence/written/69717.html).

I agree that a more robust understanding of the harm, for example relating to bias, is needed. We have tried to unpack how discrimination can arise in algorithmic decision-making, applying a human rights lens (e.g. this submission to the UK House of Commons inquiry http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/science-and-technology-committee/algorithms-in-decisionmaking/written/69117.html).

 

GDPR as a viable framework to reduce risk/harm?

Comment originally posted by Enrique Piracés

This article, titled "How will the GDPR impact machine learning?" and written by Andrew Burt, was quite interesting for me to read. Thinking about how companies react to the compliance burden may offer insights on how to minimize the risk/harm of ML on vulnerable, marginalized & excluded populations. I think it is also valuable as it expands the framing around the impact of machine learning and gives viable ways to imagine regulation or accountability. In a nutshell, it deals with limits to automated decision-making, the rights of users to their data, and the challenges & opportunities around consent withdrawal. The article is here: https://www.oreilly.com/ideas/how-will-the-gdpr-impact-machine-learning