ABOUT THIS FEED
AIModels.fyi is a curated resource for tracking new AI models, tools, and datasets. Its Substack RSS feed delivers digest-style newsletters that summarize the latest machine learning research and open-source projects. The platform helps practitioners and enthusiasts discover and follow emerging models across natural language processing, computer vision, generative AI, and reinforcement learning. Each post aggregates relevant updates, links, and context, saving readers the time of browsing multiple sources. The writing is concise yet informative, suited to developers, researchers, and students who want a quick overview of cutting-edge developments. At a few posts per week, the feed makes it practical to stay on top of the rapidly expanding AI ecosystem without information overload.
Saizen Acuity
- Can "Sure" be enough to backdoor a large language model into saying anything?
  The 'Sure' Trap: Multi-Scale Poisoning Analysis of Stealthy Compliance-Only Backdoors in Fine-Tuned Large Language Models
- After text and images, is video how AI truly learns to think dynamically?
  Thinking with Video: Video Generation as a Promising Multimodal Reasoning Paradigm
- ChatGPT Atlas can browse, but can it *really* master web games?
  Can Agent Conquer Web? Exploring the Frontiers of ChatGPT Atlas Agent in Web Games
- Can AI finally generate entire, consistent, multi-shot video narratives?
  HoloCine: Holistic Generation of Cinematic Multi-Shot Long Video Narratives
- Does a brain-inspired network finally connect Transformers to true reasoning?
  The Dragon Hatchling: The Missing Link between the Transformer and Models of the Brain
- Do protein folding models truly need that much domain-specific complexity?
  SimpleFold: Folding Proteins is Simpler than You Think
- Can unified multimodal models align understanding and generation, without *any* captions?
  Reconstruction alignment improves unified multimodal models
- What if LMs could collectively train, slashing RL post-training costs?
  Sharing is Caring: Efficient LM Post-Training with Collective RL Experience Sharing
- Are we training LLMs to confidently guess instead of admitting uncertainty?
  Why Language Models Hallucinate
- Can you pick the perfect LLM without breaking the bank?
  Adaptive LLM Routing under Budget Constraints