Philippine Lawbytes 192: Wallowing in Big Data and Questionable Artificial Intelligence: the Robodebt Mess, Childcare and Facial Predictive Algorithms, the Carnegie Mellon Master Class of Prof. Tim O’Loughlin, copyright by Dr. Atty. Noel Guivani Ramiscal

In my third Master class at Carnegie Mellon University, Adelaide, South Australia, last March 12, 2021, the distinguished Professor of Public Policy Practice, Tim O’Loughlin, gave a fascinating, even harrowing look at the ways governments in different parts of the globe have toyed with, experimented with, and bungled the use of Artificial Intelligence in analyzing and interpreting big data culled from several “projects,” and have implemented the results with real-life consequences.

Dr. Atty. Ramiscal’s screenshot of Prof. Tim O’Loughlin at CMU Master Class March 12 2021

Prof. Tim brought to his discussion several controversies where AI crunching big data appeared to be the source of the problem. Most recent, and truly gripping for many Australians, is the Robodebt controversy. Robodebt was an automated welfare debt recovery scheme in which annual income data from the Australian Taxation Office was averaged across fortnights and matched against the fortnightly income that Australians reported to Centrelink. If the Robodebt AI noted a discrepancy of more than A$1,000 in a welfare recipient’s reported income, the recipient would be automatically contacted by Centrelink. Because averaging a yearly figure misrepresents incomes that fluctuate from fortnight to fortnight, the scheme created debts, based on false information, for over 400,000 welfare recipients all over Australia. The toll on victims of the AI scheme who suffered from physical and mental disabilities and other impairments was quite heavy, with some even committing suicide. There was no human oversight of Robodebt, since it was fully automated in 2016. Individual legal and class actions were brought to the courts questioning the legality of the AI system, and in 2020 a Federal Court decision ruled that this welfare AI scheme was unlawful. In the latter part of 2020, a legal settlement was reached in which the Australian government agreed to pay a total of A$1.2bn to resolve a class action brought on behalf of hundreds of thousands of Robodebt victims. While the Australian Commonwealth government admitted no legal liability for the Robodebt AI, Malcolm Turnbull, who was prime minister when the scheme was rolled out, publicly apologized for the system, as did the current prime minister, Scott Morrison.
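The core flaw in this kind of scheme is easier to see in code. The following is a minimal, hypothetical sketch of income averaging of the sort Robodebt relied on, written by me for illustration only; the function name, the figures, and the simplified logic are my assumptions, not Centrelink’s actual system. An annual tax office figure is spread evenly across the 26 fortnights of the year and compared with what the recipient actually reported, so a person whose income was lumpy but truthfully declared can still be flagged with a phantom “debt.”

```python
# Hypothetical sketch of Robodebt-style income averaging.
# Illustrative only -- not the actual Centrelink/ATO system.

FORTNIGHTS_PER_YEAR = 26

def averaged_discrepancy(ato_annual_income: float,
                         reported_fortnightly: list) -> float:
    """Total 'shortfall' produced by naive averaging.

    The annual figure is spread evenly across every fortnight and
    compared with what the recipient actually reported.
    """
    averaged = ato_annual_income / FORTNIGHTS_PER_YEAR
    return sum(max(averaged - reported, 0.0)
               for reported in reported_fortnightly)

# A casual worker who earned A$13,000, all in the first half of the
# year, and reported it truthfully, still gets flagged: averaging
# "smears" income into fortnights where none was earned.
reported = [1000.0] * 13 + [0.0] * 13   # truthful fortnightly reports
print(averaged_discrepancy(13000.0, reported))  # 6500.0 phantom shortfall
```

Note that the same worker with a perfectly even income (A$500 every fortnight) produces a discrepancy of zero, which is exactly why averaging punishes people with irregular casual work.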

Dr. Atty. Ramiscal’s screenshot of Prof. Tim’s slide CMU Master Class March 12 2021

Another compelling example, with mixed results, is an AI developed in New Zealand that was designed to predict whether a child is in danger in his or her home, which would justify the intervention of the police and social welfare agencies. Back in 2012, the Ministry of Social Development commissioned big data expert Prof. Rhema Vaithianathan to develop a new predictive risk modelling tool that attempted to identify children at risk of physical, sexual or emotional abuse before the age of 2. As Prof. Tim told us, there were calls to let the algorithmic tool run for two years with minimal or no intervention from the child welfare agencies. The then Social Development Minister, Anne Tolley, stopped the whole thing, even writing on the margins of a document outlining the proposal: “not on my watch, these are children not lab rats”. But in several counties in the U.S., the predictive tool developed by Prof. Vaithianathan has actually been implemented, with a measure of success, by child welfare call centers and agencies (see Jamie Morton, NZ-developed AI can predict kids’ hospital risk, August 5, 2020, https://www.nzherald.co.nz/nz/nz-developed-ai-can-predict-kids-hospital-risk/UNNQNQJNBTGRN3SBUYGCJS6ZB4/).

Dr. Atty. Ramiscal’s screenshot of Prof. Tim O’Loughlin slide of Anne Tolley’s notes CMU Master Class March 12 2021

As an LGBTIQA and human rights advocate, what struck me most was the development of an AI algorithm to predict the sexual orientation of a man or woman. Prof. Tim talked about the 2017 Stanford University study of facial recognition software that supposedly was able to predict whether a man or woman is straight or gay based on their facial features. Human rights groups lost no time in denouncing such studies as trading in the pseudo-science of physiognomy, warning that they can lead to deleterious consequences for people identified as being within the LGBTIQA community.

Dr. Atty. Ramiscal’s screenshot of Prof. Tim O’Loughlin slide of gay facial recognition AI CMU Master Class March 12 2021

But facial recognition software has continuously been developed and improved upon for other purposes. In 2020, academics and a graduate student from Harrisburg University became the subject of controversy over a research publication about an algorithm that purported to predict whether somebody would become a criminal from their facial features; the controversy resulted in their paper being pulled from publication by Springer Nature. Early this year, news about facial recognition software being able to tell a person’s political affiliation hit the stands.

In light of all this, I asked Prof. Tim several questions, including who must be held liable if an AI program goes rogue. This is an unresolved issue in the Philippines because there is no law that properly addresses liability for the creation of an algorithm that can arguably be called an “independent,” “autonomous” and “intelligent” agent. Prof. Tim said that, ultimately, rolling out big data projects that depend on AI on a country’s populace should be the responsibility and liability of the country’s political leaders. I couldn’t agree more.

I raised the point about ethics and biases, and the fact that algorithms can actually contain the biases of their human creators. Prof. Tim replied that the creators should be the last people to input value judgments, so there must be a technical system for checking, or balancing out, biases that have crept into an algorithm through its parsing and analysis of terabytes, even petabytes, of data.
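One simple, hypothetical form such a technical check could take is a demographic-parity audit: compare the rate at which an algorithm flags people across groups, and raise an alarm when the gap exceeds some tolerance. This sketch is my own illustration of the general idea, not anything Prof. Tim prescribed; the function names and the toy data are my assumptions.

```python
# Hypothetical demographic-parity audit -- one simple bias check,
# not any specific production system.
from collections import defaultdict

def flag_rates(decisions):
    """decisions: iterable of (group, flagged) pairs -> flag rate per group."""
    counts = defaultdict(lambda: [0, 0])       # group -> [flagged, total]
    for group, flagged in decisions:
        counts[group][0] += int(flagged)
        counts[group][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

def parity_gap(decisions):
    """Difference between the most- and least-flagged groups' rates."""
    rates = flag_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Toy audit log: group B is flagged three times as often as group A.
decisions = [("A", True), ("A", False), ("A", False), ("A", False),
             ("B", True), ("B", True), ("B", True), ("B", False)]
print(flag_rates(decisions))   # {'A': 0.25, 'B': 0.75}
print(parity_gap(decisions))   # 0.5 -- a gap this wide warrants human review
```

Checks like this are deliberately crude; the point is that the audit sits outside the algorithm and its creators, which matches Prof. Tim’s suggestion that the people who built the model should not be the ones supplying the value judgments.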

All in all, the lesson to be learned here is that no algorithm should replace the considered judgement of human beings. This was the point I raised in a three-part series of articles I wrote last year on the predictive algorithm developed by DOST-ASTI researchers that supposedly can help predict the outcome of criminal cases based on the decisions of the Philippine Supreme Court. If anyone’s interested, here are the links to my articles:

Philippine Lawbytes 159: No Slave to any Algorithm [Part 3]: Critiquing “Predicting Decisions of the Philippine Supreme Court Using Natural Language Processing and Machine Learning” (Copyright by Dr. Atty. Noel G. Ramiscal)

Philippine Lawbytes 158: No Slave to any Algorithm [Part 2]: Critiquing “Predicting Decisions of the Philippine Supreme Court Using Natural Language Processing and Machine Learning” (Copyright by Dr. Atty. Noel G. Ramiscal)

Philippine Lawbytes 157: No Slave to any Algorithm [Part 1]: Critiquing “Predicting Decisions of the Philippine Supreme Court Using Natural Language Processing and Machine Learning” (Copyright by Dr. Atty. Noel G. Ramiscal)

I thank Prof. Tim for his wonderful lecture that emphasized the human and the ethical over the mechanical and political.

Till the next Master Class!