AI NEWS FEED

AI Now Institute

ABOUT THIS FEED

The AI Now Institute, based at New York University, is a research organization dedicated to studying the social, ethical, and policy implications of artificial intelligence. Its RSS feed delivers analysis, reports, and commentary on how AI intersects with law, labor, bias, surveillance, and civil rights. Unlike industry blogs that highlight technical breakthroughs, AI Now focuses on accountability, governance, and human impact. Readers will find policy briefs, essays, and critiques of corporate practices, making the content highly relevant for academics, policymakers, journalists, and activists. Posts are less frequent but rich in depth, often connected to ongoing debates in AI regulation and ethics. This feed is essential for those who want to understand not just what AI can do, but also the societal consequences of its widespread adoption.

  • Reframing Impact: AI Summit 2026

    The 2026 AI Impact Summit in India is the latest iteration of an event that has become a bellwether for global discourse around the AI industry, especially the question of whether, and how, it can be governed. But it also demonstrates how important ideas can be invoked in ways that dilute their meaning or co-opt their force. In this series—produced by AI Now Institute, Aapti Institute, and The Maybe—we bring together leading advocates, builders, and thinkers from around the world who breathe substance, analysis, and meaningful action into these ideas. The post Reframing Impact: AI Summit 2026 appeared first on AI Now Institute.

  • The India AI Impact Summit 2026: Early Forensics and Planting the Seeds for a People-Centered Alternative

    The 2026 AI Impact Summit in India is the latest iteration of an event that has become a bellwether for global discourse around the AI industry, especially the question of whether, and how, it can be governed. But it also demonstrates how important ideas can be invoked in ways that dilute their meaning or co-opt their force. In this series—produced by AI Now Institute, Aapti Institute, and The Maybe—we bring together leading advocates, builders, and thinkers from around the world who breathe substance, analysis, and meaningful action into these ideas.

  • What Are the Implications if the AI Boom Turns to Bust?

    This episode considers whether today’s massive AI investment boom reflects real economic fundamentals or an unsustainable bubble, and how a potential crash could reshape AI policy, public sentiment, and narratives about the future that are embraced and advanced not only by Silicon Valley billionaires, but also by politicians and governments.

  • AI Now is Hiring a Data Center Policy Fellow

    We’re hiring a skilled policy advocate to support community groups, organizers, and policymakers to identify and implement policy solutions to rampant data center growth. Read more about the job below and apply by January 23, 2026.

  • AI Now Hosts Report Launch and Organizer Panel on Using Policy to Stop Data Center Expansion

    On Thursday, December 4, we held a launch event for the North Star Data Center Policy Toolkit—a guide for organizers and policymakers on how to use local and state policy to stop rampant AI data center expansion. The launch event—“North Star Interventions: Using Policy as an Organizing Tool in Our Data Center Fights”—previewed the toolkit’s …

  • ‘Atoms for Algorithms’: The Trump Administration’s Top Nuclear Scientists Think AI Can Replace Humans in Power Plants

    “The claims being made on these slides are quite concerning, and demonstrate an even more ambitious (and dangerous) use of AI than previously advertised, including the elimination of human intervention. It also cements that it is the DOE’s strategy to use generative AI for nuclear purposes and licensing, rather than isolated incidents by private entities.”

  • North Star Data Center Policy Toolkit: State and Local Policy Interventions to Stop Rampant AI Data Center Expansion

    This policy toolkit is primarily geared toward stopping, slowing, and restricting rampant data center development in the US at the local and state level. Our approach recognizes the extractive relationship between data centers and local communities: Hyperscale data centers deplete scarce natural resources, pollute local communities and increase the use of fossil fuels, and raise energy …

  • Power Companies Are Using AI To Build Nuclear Power Plants

    Both Guerra and Khlaaf are proponents of nuclear energy, but worry that the proliferation of LLMs, the fast-tracking of nuclear licenses, and the AI-driven push to build more plants are dangerous. “Nuclear energy is safe. It is safe, as we use it. But it’s safe because we make it safe and it’s safe because we spend a lot of time doing the licensing and we spend a lot of time learning from the things that go wrong and understanding where it went wrong and we try to address it next time,” Guerra said.

  • You May Already Be Bailing Out the AI Business

    Is an artificial-intelligence bubble about to pop? The question of whether we’re in for a replay of the 2008 housing collapse—complete with bailouts at taxpayers’ expense—has saturated the news cycle. For every day that passes without disaster, AI companies can more persuasively insist that no such market correction is coming. But the federal government is already bailing out the AI industry with regulatory changes and public funds that will protect companies in the event of a private sector pullback.

  • Fission for Algorithms: The Undermining of Nuclear Regulation in Service of AI

    A report examining the feasibility of nuclear “fast-tracking” initiatives and their impact on nuclear safety, security, and safeguards.