ABOUT THIS FEED
The AI Now Institute, founded at New York University and now an independent research organization, is dedicated to studying the social, ethical, and policy implications of artificial intelligence. Its RSS feed delivers analysis, reports, and commentary on how AI intersects with law, labor, bias, surveillance, and civil rights. Unlike industry blogs that highlight technical breakthroughs, AI Now focuses on accountability, governance, and human impact. Readers will find policy briefs, essays, and critiques of corporate practices, making the content highly relevant for academics, policymakers, journalists, and activists. Posts are less frequent but rich in depth, often connected to ongoing debates in AI regulation and ethics. This feed is essential for those who want to understand not just what AI can do, but also the societal consequences of its widespread adoption.
- Nurses Sound Alarm as ‘Uber for Nursing’ Apps Push to Deregulate Healthcare
A new AI Now Institute report published April 21, 2026, warns that gig-work platforms marketed as “Uber for nursing” are aggressively lobbying states to rewrite healthcare staffing rules, a push that could leave nurses with less pay, fewer protections, and less control over their shifts, according to The Guardian. The post Nurses Sound Alarm as ‘Uber for Nursing’ Apps Push to Deregulate Healthcare appeared first on AI Now Institute.
- ‘Uber for nurses’: gig-work apps lobby to deregulate healthcare, report finds
Billion-dollar tech platforms are aggressively pushing for deregulation of the “Uber for nursing” industry in an effort to expand gig work in the healthcare sector, according to a report published on Tuesday.
- Uber For Nursing Part II
A seismic shift is rocking the healthcare industry. Uber’s business model—the “gigification” of labor—and lobbying practices have made their way to healthcare staffing.
- ‘Safety first’ puts Anthropic ahead in game of AI spin
But Dr Heidy Khlaaf, chief AI scientist at the AI Now Institute and a former OpenAI safety engineer, is sceptical. She notes Anthropic provides no comparison with existing automated security tools, nor any false-positive rates. “It also serves their ‘safety first’ image, as they’re able to justify the lack of public release, even a limited one for independent evaluation, as a public service – when it simply obscures experts’ abilities to independently validate their …”
- The Great AI Grift
Tech leaders want you to believe that AI is the key to a new golden age. The reality looks more like a bold, government-backed heist.
- AI Giants Go on Charm Offensive to Avert Public Backlash
But broad skepticism and fear about the impact of AI have made opposing all regulation untenable for tech company CEOs, said Kak, who is co-executive director of the AI Now Institute, which has advocated for AI regulation. If they can’t oppose every policy, “What’s the next best move?” she asked. “It’s to place yourself in the driver’s seat, and that is what every single one of them is doing.”
- North Star Data Center Policy Toolkit: State and Local Policy Interventions to Stop Rampant AI Data Center Expansion
This policy toolkit is primarily geared toward stopping, slowing, and restricting rampant data center development in the US at the local and state level. Our approach recognizes the extractive relationship between data centers and local communities: Hyperscale data centers deplete scarce natural resources, pollute local communities, increase the use of fossil fuels, and raise energy …
- Beyond Impact Lingo: Questioning, Concretizing, Building
In the lead-up to this year’s India AI Impact Summit, we attempted to pre-bunk a new kind of AI hype that was circulating. We observed that the “right” words were being used to have the wrong conversations. Impact lingo like “AI for Good”, “AI for climate”, “human capital”, and “frugal AI” evokes ideals of public …
- U.S. military is using AI to help plan Iran air attacks, sources say, as lawmakers call for oversight
“It’s very dangerous that ‘speed’ is somehow being sold to us as strategic here, when it’s really a cover for indiscriminate targeting when you consider how inaccurate these models are,” Khlaaf said.
- The one question everyone should be asking after OpenAI’s deal with the Pentagon
“In terms of safety guardrails for ‘high-stake decisions’ or surveillance, the existing guardrails for generative AI are deeply lacking, and it has been shown how easily compromised they are, intentionally or inadvertently,” Heidy Khlaaf, the chief AI scientist at the nonprofit AI Now Institute, told me. “It’s highly doubtful that if they cannot guard their systems against benign cases, they’d be able to do so for complex military and surveillance operations.”


