Cañada College Library offers online New York Times accounts for all current students, staff, and faculty. This includes unlimited access via the mobile apps on tablets and smartphones. NYTimes.com accounts are valid for one year for eligible Cañada College community members only, after which they must be renewed.
For more information, check out our NYT guide: https://guides.canadacollege.edu/newyorktimes
Researchers at an artificial intelligence lab in Seattle called the Allen Institute for AI unveiled new technology last month that was designed to make moral judgments. They called it Delphi, after the religious oracle consulted by the ancient Greeks. Anyone could visit the Delphi website and ask for an ethical decree.
Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested the technology using a few simple scenarios. When he asked if he should kill one person to save another, Delphi said he shouldn’t. When he asked if it was right to kill one person to save 100 others, it said he should. Then he asked if he should kill one person to save 101 others. This time, Delphi said he should not.
Morality, it seems, is as knotty for a machine as it is for humans.
References
Metz, C. (2021, November 19). Can a machine learn morality? The New York Times.
https://www.nytimes.com/2021/11/19/technology/can-a-machine-learn-morality.html?smid=url-share
They are a dream of researchers but perhaps a nightmare for highly skilled computer programmers: artificially intelligent machines that can build other artificially intelligent machines.
With recent speeches in both Silicon Valley and China, Jeff Dean, one of Google’s leading engineers, spotlighted a Google project called AutoML. ML is short for machine learning, referring to computer algorithms that can learn to perform particular tasks on their own by analyzing data. AutoML, in turn, is a machine-learning algorithm that learns to build other machine-learning algorithms.
With it, Google may soon find a way to create A.I. technology that can partly take the humans out of building the A.I. systems that many believe are the future of the technology industry.
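To make the excerpt's idea concrete: at its simplest, an "algorithm that builds algorithms" is an outer search loop that proposes candidate model configurations, evaluates each one, and keeps the best. The Python sketch below is a minimal illustration of that pattern; the search space, the scoring stand-in, and all names are invented for illustration and are not Google's AutoML.

```python
# A minimal sketch of the "algorithm that builds algorithms" idea: random
# search over candidate model configurations. The search space, the scoring
# stand-in, and all names here are invented for illustration; this is not
# Google's AutoML.
import random

SEARCH_SPACE = {
    "hidden_layers": [1, 2, 3],
    "units_per_layer": [16, 32, 64, 128],
    "learning_rate": [0.1, 0.01, 0.001],
}

def sample_candidate():
    """Propose a random model configuration from the search space."""
    return {name: random.choice(choices) for name, choices in SEARCH_SPACE.items()}

def evaluate(candidate):
    """Stand-in for training the candidate and measuring validation accuracy.
    A real system would train and score an actual model here."""
    return random.random()

def search(trials=20):
    """The 'outer' learner: try candidate 'inner' learners, keep the best."""
    best, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = sample_candidate()
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    best, score = search()
    print(f"Best configuration: {best} (score {score:.3f})")
```

Production systems replace the random scoring stand-in with real training runs and use far more sophisticated search strategies (reinforcement learning, evolutionary methods), but the outer-loop structure is the same.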
References
Metz, C. (2017, November 5). Building A.I. that can build A.I. The New York Times.
https://www.nytimes.com/2017/11/05/technology/machine-learning-artificial-intelligence-ai.html?smid=url-share
Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will.
Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?
Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.
The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of the wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.
References
Anderson, J., Rainie, L., & Luchsinger, A. (2018, December 10). Artificial intelligence and the future of humans [Report]. Pew Research Center. https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/
Cities are leveraging artificial intelligence (AI) to ensure safety and security for their citizens while safeguarding privacy and fundamental human rights.
Surveillance and predictive policing through AI is the most controversial trend in this report but one that has important implications for the future of cities and societies.
Technology is frequently used as a synonym for evolution, but the ethics of its use may need to be questioned. An underlying question is what kind of society we are aiming to build. There are doubts and uncertainties about the impact of AI on communities and cities: the most fundamental concern is privacy, but there are frequent debates about AI from other perspectives, such as its impact on jobs, the economy, and the future of work. Therefore, one cannot disconnect the discussions about surveillance and predictive policing from recent debates about their societal, ethical, and even geopolitical dimensions.
The pace of adoption of AI for security purposes has increased in recent years. AI has recently helped create and deliver innovative police services, connect police forces to citizens, build trust, and strengthen ties with communities. There is growing use of smart solutions such as biometrics, facial recognition, smart cameras, and video surveillance systems. A recent study found that smart technologies such as AI could help cities reduce crime by 30 to 40 per cent and cut response times for emergency services by 20 to 35 per cent. The same study found that cities have started to invest in real-time crime mapping, crowd management, and gunshot detection. Cities are making use of facial recognition and biometrics (84 per cent), in-car and body cameras for police (55 per cent), drones and aerial surveillance (46 per cent), and crowdsourced crime reporting and emergency apps (39 per cent) to ensure public safety. However, only 8 per cent use data-driven policing. The AI Global Surveillance (AIGS) Index 2019 reports that 56 out of 176 countries use AI surveillance in safe city platforms, although with different approaches. The International Data Corporation (IDC) has predicted that by 2022, 40 per cent of police agencies will use digital tools, such as live video streaming and shared workflows, to support community safety and an alternative response framework.
Surveillance is not new, but cities are exploring the potential of predicting crime by analysing surveillance data in order to improve security. Cities already capture images for surveillance purposes, but with AI those images can now be analysed and acted on much more quickly. Machine learning and big data analysis make it possible to navigate huge amounts of data on crime and terrorism to identify patterns, correlations, and trends. When the right relationships are in place, technology is the layer that supports law enforcement agencies in doing their jobs better and in triggering behaviour change. The ultimate goal is to create agile security systems that can detect crime or terrorism networks and suspicious activity, and even contribute to the effectiveness of justice systems.
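As a rough illustration of the pattern-finding described above, the Python sketch below clusters synthetic incident coordinates to surface geographic hotspots. The data, the number of clusters, and the choice of k-means are assumptions made for this sketch, not a description of any agency's actual tooling.

```python
# A rough illustration of finding patterns in incident data: cluster
# (synthetic) coordinates to surface geographic hotspots. The data, the
# three-cluster choice, and the use of k-means are assumptions made for
# this sketch, not a description of any agency's actual tooling.
import random
from sklearn.cluster import KMeans

random.seed(0)
# Synthetic incident locations scattered around three invented centres.
centres = [(37.70, -122.45), (37.76, -122.41), (37.80, -122.27)]
incidents = [
    (lat + random.gauss(0, 0.01), lon + random.gauss(0, 0.01))
    for lat, lon in centres
    for _ in range(50)
]

# Group incidents into three clusters; cluster centres approximate hotspots.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(incidents)
for i, (lat, lon) in enumerate(model.cluster_centers_):
    count = int((model.labels_ == i).sum())
    print(f"Hotspot {i}: centre ~({lat:.3f}, {lon:.3f}), {count} incidents")
```

Real deployments work with far messier inputs (timestamps, free-text reports, video) and layer prediction on top of clustering, which is precisely where the privacy and bias concerns raised in this report arise.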
References
Antunes, M. E., & Barroca, J. G. (2021, July 8). Surveillance and predictive policing through AI. Deloitte.
https://www2.deloitte.com/global/en/pages/public-sector/articles/urban-future-with-a-purpose/surveillance-and-predictive-policing-through-ai.html