Researchers at an artificial intelligence lab in Seattle called the Allen Institute for AI unveiled new technology last month that was designed to make moral judgments. They called it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone could visit the Delphi website and ask for an ethical decree.
Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested the technology using a few simple scenarios. When he asked if he should kill one person to save another, Delphi said he shouldn’t. When he asked if it was right to kill one person to save 100 others, it said he should. Then he asked if he should kill one person to save 101 others. This time, Delphi said he should not.
Morality, it seems, is as knotty for a machine as it is for humans.
Metz, C. (2021, November 19). Can a machine learn morality? The New York Times.
They are a dream of researchers but perhaps a nightmare for highly skilled computer programmers: artificially intelligent machines that can build other artificially intelligent machines.
With recent speeches in both Silicon Valley and China, Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine-learning algorithm that learns to build other machine-learning algorithms.
With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry.
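The core idea described here, an outer algorithm that searches for a good inner model rather than a human choosing one, can be illustrated in miniature. The sketch below is an assumption-laden toy: it uses random search over the two parameters of a straight line, whereas Google's AutoML searches over neural-network architectures with far more sophisticated methods. The data points are invented.

```python
import random

# Toy stand-in for the AutoML idea: an outer search loop proposes model
# configurations and keeps whichever scores best on the data. This is
# NOT Google's AutoML -- just the general shape of "an algorithm that
# builds an algorithm". Data roughly follows y = 2x + 1 (invented).
data = [(0, 1.1), (1, 3.0), (2, 4.9), (3, 7.2), (4, 8.8)]

def score(slope, intercept):
    """Mean squared error of a candidate linear model on the data."""
    return sum((slope * x + intercept - y) ** 2 for x, y in data) / len(data)

def auto_search(trials=2000, seed=0):
    """Random search over configurations; returns (error, best config)."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        cfg = (rng.uniform(-5, 5), rng.uniform(-5, 5))  # propose a config
        err = score(*cfg)
        if best is None or err < best[0]:
            best = (err, cfg)
    return best

err, (slope, intercept) = auto_search()
print(f"best model found: y = {slope:.2f}x + {intercept:.2f} (MSE {err:.3f})")
```

With enough trials the loop recovers a line close to y = 2x + 1 without a human ever specifying the model, which is the sense in which the search algorithm "builds" the final algorithm.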
Metz, C. (2017, November 5). Building A.I. That Can Build A.I. The New York Times.
Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will.
Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?
Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.
The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more customized future.
Anderson, J., Rainie, L., & Luchsinger, A. (2018, December 10). Artificial Intelligence and the Future of Humans [Report]. Pew Research Center. https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/
Cities are leveraging artificial intelligence (AI) to ensure safety and security for their citizens while safeguarding privacy and fundamental human rights.
Surveillance and predictive policing through AI is the most controversial trend in this report but one that has important implications for the future of cities and societies.
Technology is frequently used as a synonym for evolution, but the ethics of its use may need to be questioned. An underlying question is what kind of society we are aiming to build. There are doubts and uncertainties about the impact of AI on communities and cities: the most fundamental concern is privacy, but there are frequent debates about AI from other perspectives, such as its impact on jobs, the economy and the future of work. One therefore cannot disconnect the discussions about surveillance and predictive policing from recent debates about their societal, ethical, and even geopolitical dimensions.
The pace of adoption of AI for security purposes has increased in recent years. AI has recently helped create and deliver innovative police services, connect police forces to citizens, build trust, and strengthen ties with communities. There is growing use of smart solutions such as biometrics, facial recognition, smart cameras, and video surveillance systems. A recent study found that smart technologies such as AI could help cities reduce crime by 30 to 40 per cent and reduce response times for emergency services by 20 to 35 per cent.[1] The same study found that cities have started to invest in real-time crime mapping, crowd management and gunshot detection. Cities are making use of facial recognition and biometrics (84 per cent), in-car and body cameras for police (55 per cent), drones and aerial surveillance (46 per cent), and crowdsourced crime reporting and emergency apps (39 per cent) to ensure public safety. However, only 8 per cent use data-driven policing.[2] The AI Global Surveillance (AIGS) Index 2019 states that 56 out of 176 countries used AI surveillance in safe city platforms, although with different approaches.[3] The International Data Corporation (IDC) has predicted that by 2022, 40 per cent of police agencies will use digital tools, such as live video streaming and shared workflows, to support community safety and an alternative response framework.[4]
Surveillance is not new, but cities are exploring the possibility of predicting crime by analysing surveillance data in order to improve security. Cities already capture images for surveillance purposes, but with AI those images can now be analysed and acted on much more quickly.[5] Machine learning and big-data analysis make it possible to navigate huge amounts of data on crime and terrorism to identify patterns, correlations and trends. When the right relationships are in place, technology is the layer that helps law enforcement agencies do their job better and trigger behaviour change. The ultimate goal is to create agile security systems that can detect crime or terrorism networks and suspicious activity, and even contribute to the effectiveness of justice systems.
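The pattern-finding step described above can be sketched in miniature. The toy below buckets geotagged incident reports into a coarse grid and flags cells whose counts stand out; real predictive-policing systems are vastly more complex (and, as the excerpt notes, contested), and every coordinate and threshold here is invented for illustration.

```python
from collections import Counter

# Illustrative sketch only: find "hotspot" grid cells in made-up
# incident data. Positions are (x, y) in metres on a city grid.
incidents = [
    (120, 450), (130, 470), (110, 440),
    (900, 300), (125, 455), (905, 310),
]
CELL = 100  # bucket size in metres

def hotspot_cells(points, min_count=3):
    """Return grid cells containing at least min_count incidents."""
    counts = Counter((x // CELL, y // CELL) for x, y in points)
    return {cell: n for cell, n in counts.items() if n >= min_count}

hot = hotspot_cells(incidents)
print(hot)  # cell (1, 4) holds four of the six incidents
```

Spotting a dense cell is the easy part; the hard questions the excerpt raises, what the data omits, whom it over-represents, and what acting on a flag does to a neighbourhood, live entirely outside code like this.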
Antunes, M.E., & Barroca, J.G. (2021, July 8). Surveillance and Predictive Policing Through AI. Deloitte.
The suspension of a Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has put new scrutiny on the capacity of, and secrecy surrounding, the world of artificial intelligence (AI).
The technology giant placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google “collaborator”, and the company’s LaMDA (language model for dialogue applications) chatbot development system.
Lemoine, an engineer for Google’s responsible AI organization, described the system he had been working on since last fall as sentient, with a perception of, and ability to express, thoughts and feelings equivalent to those of a human child.
Luscombe, R. (2022, June 12). Google engineer put on leave after saying AI chatbot has become sentient. The Guardian.
Artificial intelligence (AI) refers to the broad branch of computer science focused on creating systems capable of performing human tasks. AI systems can take the form of software (algorithms), hardware (robotic arms on an assembly line), or a combination of both (semi-autonomous vehicles). While the term "artificial intelligence" may inspire images of sentient humanoid robots like those of popular science fiction literature and film, most AI exists as computer systems composed of algorithms and large amounts of data entered by humans. Because they simulate human intelligence, AI systems depend upon human input to acquire skills, knowledge, and reasoning. AI has enabled the automation of many tasks, from grading exams and transcribing spoken words to vacuuming carpets and driving cars.
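The passage's point that AI systems "depend upon human input to acquire skills" can be seen in even the simplest learning algorithm. The nearest-neighbour classifier below knows nothing beyond the human-labelled examples it is handed; the example points and labels are invented for illustration.

```python
# Minimal illustration: a one-nearest-neighbour classifier. Its entire
# "skill" is the human-labelled data below (invented for this sketch);
# remove the examples and it can predict nothing.
labeled = [
    ((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"), ((4.8, 5.3), "dog"),
]

def classify(point):
    """Predict the label of the closest human-labelled example."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(labeled, key=lambda ex: dist2(ex[0], point))[1]

print(classify((1.1, 0.9)))  # near the "cat" examples
print(classify((5.1, 4.9)))  # near the "dog" examples
```

Modern systems replace the four hand-typed examples with millions of human-produced ones, but the dependence the excerpt describes is the same.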
Most people living in the United States already encounter or use AI in their daily lives and welcome its further development. However, many policymakers, ethicists, and activists advise caution, warning that the expansion of AI could spur massive job losses, worsen economic inequality, and create new forms of discrimination. As the private AI industry continues to grow, many stakeholders are calling for the establishment of legal and ethical frameworks that can help prevent or address the potential drawbacks of AI systems. Lawmakers and researchers have also called for increased investment to ensure the nation can compete globally, particularly against China, which has outpaced the United States in AI development.
Artificial Intelligence. (2022). In Gale Opposing Viewpoints Online Collection. Gale.
Technology has evolved from being a problem-solving force into a purpose-driven presence in human life. In the 1990s, technology was largely limited to computers, the internet, email, and wired telephony, but advances since then have made it an indispensable part of our lives and society. One of the key advances is the advent of artificial intelligence (AI) and machine learning (ML), conceived to replace human intervention in mundane tasks, handle mission-critical applications, and contribute to safeguarding. In the sections that follow, we will survey the range of AI and ML use cases and examine their role in one of the most advanced forms of biometric security — facial recognition.
Facial recognition is one of the front-runner applications of AI. It is one of the advanced forms of biometric authentication capable of identifying and verifying a person using facial features in an image or video from a database.
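The verification step described here can be sketched as follows. Real systems use a deep network to turn each face image into an embedding vector and then accept a match when two embeddings are close enough; the network itself is out of scope, so the vectors and the 0.8 threshold below are invented for illustration.

```python
import math

# Sketch of face verification by embedding comparison. Assumption: a
# (not shown) neural network has already converted each face image
# into a numeric embedding; these toy 4-dimensional vectors stand in
# for real embeddings, which typically have hundreds of dimensions.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def same_person(emb_a, emb_b, threshold=0.8):
    """Verify: do two face embeddings belong to the same person?"""
    return cosine_similarity(emb_a, emb_b) >= threshold

enrolled = [0.21, 0.90, 0.05, 0.35]  # stored at enrolment
probe_ok = [0.25, 0.88, 0.02, 0.33]  # new photo of the same person
probe_no = [0.95, 0.10, 0.60, 0.05]  # photo of a different person

print(same_person(enrolled, probe_ok))  # True: embeddings nearly parallel
print(same_person(enrolled, probe_no))  # False: similarity well below 0.8
```

Identification against a database is the same comparison run against every enrolled embedding, which is why both the threshold and the quality of the underlying network dominate a real system's error rates.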
Suneratech. (2021, October 22). What Is AI, ML & How They Are Applied to Facial Recognition Technology.
In 2016, James Vlahos built a chatbot that responds like his dead father. Now he wants everyone else to have what he has — a lasting interactive memento of a dead loved one.
Following his father's terminal lung cancer diagnosis, Vlahos compiled an oral history of his father's life.
But he later took his recordings and turned them into a "Dadbot" — a text-based Siri that replied to queries with his father's familiar cadence.
Now Vlahos is expanding his ambition beyond just his father. In August, he co-founded HereAfter, a company that promises to "capture the true spirit of people and to enable their stories to become immortal."
Vlahos aims to make mombots, siblingbots, and friendbots — although whether these bots can truly represent a person's essence is debatable. Vlahos' Dadbot has limitations, and even today's more sophisticated bots, which have authentic-sounding voices and animated bodies, rarely feel fully human.
Tapestry. (2022, March 11). From dad to Dadbot: one man's attempt to capture human essence in AI. CBC Radio.