Disruptive Technologies for International Development
Sunday, May 26, 2019
Introduction
Artificial Intelligence (AI) is a field of computer science that encompasses a myriad of disciplines focused on making computer programs self-learning. From natural language processing (NLP) to image classification to sound analysis, AI programs are “trained” on large datasets, allowing them to recognize patterns and act accordingly. Once trained, an AI program can be applied to real-world data to deliver analysis and decisions. AI programs can also be trained against each other using a technique called generative adversarial networks (GANs), in which one network (the generator) learns to produce realistic data while an adversarial network (the discriminator) learns to tell the generated data from real data. By competing, both networks improve.
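To make the “train, then predict” pattern concrete, here is a deliberately tiny sketch using a nearest-centroid classifier (the data, labels, and function names are invented for illustration; real AI systems learn far richer models from far larger datasets):

```python
# Toy illustration of the train-then-predict pattern:
# a nearest-centroid classifier "trained" on labeled 2-D points.

def train(samples):
    """Compute one centroid (mean point) per label from labeled data."""
    sums, counts = {}, {}
    for (x, y), label in samples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Classify a new point by its nearest learned centroid."""
    px, py = point
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - px) ** 2 +
                               (centroids[lab][1] - py) ** 2)

training_data = [((1, 1), "cat"), ((2, 1), "cat"),
                 ((8, 9), "dog"), ((9, 8), "dog")]
model = train(training_data)
print(predict(model, (1.5, 1.2)))  # a point near the "cat" cluster -> prints "cat"
```

The same two-phase shape (fit parameters on training data, then apply them to unseen data) underlies everything from image classifiers to fraud detectors, just at vastly larger scale.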
Disruptive Power
Most of the disruptive power of AI comes from the disintermediation of humans: AI does things humans cannot do, usually because of complexity or scale. This enables rapid detection of fraud, crowd safety, identification and access control, disease detection, advance warning of disease outbreaks, and progress in just about any field that involves laborious analysis of large amounts of data. Because of its speed of analysis, AI is also the driving force behind the autonomous-navigation industry, including self-driving cars, drones and underwater transport, thereby disrupting typically human-driven transportation and delivery systems.
Potential for Development
The following are a few of the many fields where AI is making a difference already:
Telemedicine allows health care professionals to evaluate, diagnose and treat patients at a distance using telecommunications technology. It permits two-way, real-time interactive communication between the patient and the physician or practitioner at the distant site;
Image analysis is being used for training classification software on biopsy scans;
Visual recognition (especially on the health, satellite and smart cities tracks) and image recognition/classification;
Algorithmic analysis of (e.g.) satellite imagery;
Shopping (Amazon) and entertainment (Netflix) recommendations;
Voice recognition and intelligent captioning systems;
Fraud detection and other financial services by credit card companies;
Text recognition and analysis, NLP and spontaneous conversation;
Increasingly, robots and other automated systems using AI for manual or cognitive tasks.
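To give a flavour of the fraud-detection item above, here is a minimal statistical anomaly check (the threshold, data, and function name are hypothetical; production systems train models over many features rather than a single transaction amount):

```python
from statistics import mean, stdev

def flag_anomalies(amounts, z_threshold=3.0):
    """Flag transactions whose amount lies more than z_threshold
    standard deviations above the account's historical mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if sigma and (a - mu) / sigma > z_threshold]

history = [12.0, 9.5, 14.2, 11.0, 10.8, 13.1, 950.0]
print(flag_anomalies(history, z_threshold=2.0))  # the 950.0 charge stands out
```

Real fraud systems replace the single z-score rule with learned models, but the core idea is the same: characterize normal behavior, then surface transactions that deviate from it.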
Caveats
The behavior of humans is very different from that of machines. Humans behave like humans – forgetful, error-prone, inconsistent, delightful, irritating. Machines have none of those qualities, except perhaps being irritating. From self-driving cars on public roads to self-piloting reusable rockets landing on self-sailing ships, machine intelligence is supporting or entirely taking over ever more complex human activities at an ever-increasing pace. The greater autonomy given to machine intelligence in these roles can result in situations where machines must make autonomous choices involving everything from copyright violations to who lives or dies. This calls not just for a clearer understanding of how humans make such choices, but also for a clearer understanding of how humans think machines should make such choices. Recent scientific studies on machine ethics have raised awareness of this topic in the media and in public discourse.
With the rising use of AI in health, there is concern about “doctor disintermediation.” Because a wrong or missed diagnosis can always slip through the system, there will always be a need for human supervision: when it comes to human lives, there is no minimum acceptable loss. Calibrating the balance between automation and human intervention will have to be continuous and responsive to errors.
Similarly, in the domain of self-driving cars, there is the dilemma of choosing the lesser of two bad options. Experiments such as the Moral Machine (see Resources below) explore how an algorithm for doing so might be programmed, since it will be a function of our prevailing cultural and legal norms.
There is a real concern about the generation of fake news by both human and algorithmic sources. But just as AI can be used to mimic real sources and create fake content, AI can also be deployed to detect fake content. For example, Apple has chosen to use human curation alongside algorithmic newsfeed generation to counter fake or misleading news in its Apple News app.
Resources
The 7 Steps of Machine Learning (AI Adventures)
How can we tell if a drink is beer or wine? Machine learning, of course! In this episode of Cloud AI Adventures, Yufeng walks through the 7 steps involved in applied machine learning.
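The seven steps the video walks through (gather data, prepare data, choose a model, train, evaluate, tune hyperparameters, predict) can be compressed into a toy sketch. This assumes a one-weight linear model and invented data, purely to show the shape of the workflow:

```python
# A compressed walk through the 7 steps of machine learning,
# using 1-D linear regression on toy data.

# 1. Gather data
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]   # roughly y = 2x

# 2. Prepare data: split into training and evaluation sets
train_x, train_y = xs[:4], ys[:4]
test_x, test_y = xs[4:], ys[4:]

# 3. Choose a model: y = w * x, with a single learned weight w
# 4. Train: one closed-form least-squares step
w = sum(x * y for x, y in zip(train_x, train_y)) / sum(x * x for x in train_x)

# 5. Evaluate on held-out data (mean absolute error)
mae = sum(abs(w * x - y) for x, y in zip(test_x, test_y)) / len(test_x)

# 6. Hyperparameter tuning: nothing to tune in this closed-form model
# 7. Predict on a new input
print(round(w * 6.0, 2))  # prints 12.1
```

Steps 4 and 6 are where real projects spend most of their effort; here they collapse into one line and a comment because the model is trivially simple.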
Moral Machine – Human Perspectives on Machine Ethics
The Moral Machine is a platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars. We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians. As an outside observer, you judge which outcome you think is more acceptable. You can then see how your responses compare with those of other people.
ITU has launched a global Artificial Intelligence (AI) repository to identify AI-related projects, research initiatives, think tanks and organizations that can accelerate progress towards the 17 UN Sustainable Development Goals. The AI Repository is open to all, and anyone working in the field of AI is invited to contribute to this resource.
The AI for Good Global Summit brought together leading minds in AI and humanitarian action to facilitate inclusive global dialogue and launch projects that use AI to benefit humanity. The action-oriented event focused on how AI can yield practical, long-term solutions to help achieve the SDGs. Hundreds of people attended and thousands of people worldwide followed the discussions via webcast.
First center in New York to seamlessly integrate artificial intelligence, data science and genomic screening to advance clinical practice and patient outcomes. It will combine artificial intelligence with data science and genomics in a standalone site, enabling researchers to enhance their understanding, diagnosis, and treatment of human diseases—including the most debilitating—and promote improved health and well-being.
Paige is creating software modules that allow pathologists to improve the scalability of their work, enabling them to provide better care, at lower cost. Their long-term plan is to develop new treatment paradigms that integrate computational pathology with electronic health records, genomic and other clinical data.
Artificial Intelligence (AI) is a science and a set of computational technologies that are inspired by—but typically operate quite differently from—the ways people use their nervous systems and bodies to sense, learn, reason, and take action. While the rate of progress in AI has been patchy and unpredictable, there have been significant advances since the field’s inception sixty years ago. Once a mostly academic area of study, twenty-first century AI enables a constellation of mainstream technologies that are having a substantial impact on everyday lives. Computer vision and AI planning, for example, drive the video games that are now a bigger entertainment industry than Hollywood. Deep learning, a form of machine learning based on layered representations of variables referred to as neural networks, has made speech-understanding practical on our phones and in our kitchens, and its algorithms can be applied widely to an array of applications that rely on pattern recognition. Natural Language Processing (NLP) and knowledge representation and reasoning have enabled a machine to beat the Jeopardy champion and are bringing new power to Web searches.
The trolley problem used to be an obscure question in philosophical ethics. It runs as follows: a trolley, or a train, is speeding down a track towards a junction. Some moustache-twirling evildoer has tied five people to the track ahead, and another person to the branch line. You are standing next to a lever that controls the junction. Do nothing, and the five people will be killed. Pull the lever, and only one person dies. What is the ethical course of action?
We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.
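The aggregation idea can be sketched with a simple Borda count over individual preference rankings. This is a stand-in for the paper's swap-dominance-efficient voting rules, and the voters and options below are invented:

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate individual preference rankings into one societal choice
    using a Borda count: in a ranking of n alternatives, the alternative
    in position i (0-based) scores n - i points; the highest total wins."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, alternative in enumerate(ranking):
            scores[alternative] += n - position
    return max(scores, key=scores.get)

# Three hypothetical voters ranking possible outcomes of a driving dilemma
votes = [
    ["swerve", "brake", "continue"],
    ["brake", "swerve", "continue"],
    ["swerve", "continue", "brake"],
]
print(borda_aggregate(votes))  # prints "swerve"
```

The point is the pipeline shape: collect many individual judgments, learn or tabulate a preference model, then apply a voting rule at decision time rather than hard-coding a single answer.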
The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation. The histories of ‘race science’ are a grim reminder that race and gender classification based on appearance is scientifically flawed and easily abused. Systems that use physical appearance as a proxy for character or interior states are deeply suspect, including AI tools that claim to detect sexuality from headshots, predict ‘criminality’ based on facial features, or assess worker competence via ‘micro-expressions.’ Such systems are replicating patterns of racial and gender bias in ways that can deepen and justify historical inequality. The commercial deployment of these tools is cause for deep concern.
Copyright issues have plagued YouTube and its community for years, but creators are calling this moment in time one of the worst eras for trying to navigate the platform. Over the past six months, multiple YouTubers have run into issues with what they describe as aggressive copyright claims from record labels. … It’s a real problem for creators who want to remix or create educational content about popular music, and the law isn’t necessarily on their side: fair use law is limited in scope, and even musical covers and a cappella performances are still protected by various forms of copyright. It all leads to a tense balance between the interests of video creators and musicians, with YouTube caught in the middle.
The word “human” does not appear at all in US copyright law, and there’s not much existing litigation around the word’s absence. This has created a giant gray area and left AI’s place in copyright unclear. It also means the law doesn’t account for AI’s unique abilities, like its potential to work endlessly and mimic the sound of a specific artist. Depending on how legal decisions shake out, AI systems could become a valuable tool to assist creativity, a nuisance ripping off hard-working human musicians, or both.
Apple has waded into the messy world of news with a service that is read regularly by roughly 90 million people. But while Google, Facebook and Twitter have come under intense scrutiny for their disproportionate — and sometimes harmful — influence over the spread of information, Apple has so far avoided controversy. One big reason is that while its Silicon Valley peers rely on machines and algorithms to pick headlines, Apple uses humans like Ms. Kern.