April 2025

RAGEN: AI framework tackles LLM agent instability

Researchers have introduced RAGEN, an AI framework designed to counter LLM agent instability when handling complex situations. Training these AI agents presents significant hurdles, particularly when decisions span multiple steps and involve unpredictable feedback from the environment. While reinforcement learning (RL) has shown promise in static tasks like solving maths problems or generating code, its […]
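
The instability the researchers target becomes clearer when spelled out in code. Below is a minimal, REINFORCE-style sketch of the multi-step loop such training revolves around; the `policy` and `env` interfaces are invented stand-ins, and this is not RAGEN’s published algorithm.

```python
# Generic sketch of multi-step agent RL: roll out a whole episode,
# then assign credit across every decision, not a single completion.
# `policy` and `env` are assumed interfaces, invented for illustration.

def collect_trajectory(policy, env, max_steps=8):
    """Run one episode, recording (state, action, reward) at each step."""
    state, trajectory = env.reset(), []
    for _ in range(max_steps):
        action = policy.act(state)               # LLM agent proposes an action
        state, reward, done = env.step(action)   # environment feedback, possibly noisy
        trajectory.append((state, action, reward))
        if done:
            break
    return trajectory

def discounted_return(trajectory, gamma=0.99):
    """Propagate late rewards back through earlier decisions."""
    g = 0.0
    for _, _, reward in reversed(trajectory):
        g = reward + gamma * g
    return g
```

Unlike a static maths or coding task, the reward here may only arrive at the final step, so every earlier decision must be updated through the discounted return.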

Coalition opposes OpenAI shift from nonprofit roots

A coalition of experts, including former OpenAI employees, has voiced strong opposition to the company’s shift away from its nonprofit roots. In an open letter addressed to the Attorneys General of California and Delaware, the group – which also includes legal experts, corporate governance specialists, AI researchers, and nonprofit representatives – argues that the proposed […]

Reigniting the European digital economy’s €200bn AI ambitions

There is a sense of urgency in Europe to re-imagine the status quo and reshape technology infrastructures. Timed to harness Europe’s innovative push comes GITEX EUROPE x Ai Everything (21-23 May, Messe Berlin). As the world’s third-largest economy and host nation for GITEX EUROPE x Ai Everything, Germany’s role as the European economic and technology […]

China’s MCP adoption: AI assistants that actually do things

China’s tech companies will drive adoption of the MCP (Model Context Protocol) standard that transforms AI assistants from simple chatbots into powerful digital helpers. MCP works like a universal connector that lets AI assistants interact directly with favourite apps and services – enabling them to make payments, book appointments, check maps, and access information on […]
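
Under the hood, MCP is built on JSON-RPC 2.0: the assistant’s client sends requests like the one sketched below to a connected server, which runs the tool and returns the result. The `tools/call` method name follows the public MCP specification; the `book_table` tool and its arguments are invented for illustration.

```python
import json

# Sketch of the kind of message MCP standardises: a JSON-RPC 2.0 request
# asking a connected server to run a tool on the assistant's behalf.
# The "book_table" tool and its arguments are invented for illustration.

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",   # method name from the public MCP specification
    "params": {
        "name": "book_table",
        "arguments": {"restaurant": "Din Tai Fung", "party_size": 2},
    },
}

print(json.dumps(request, indent=2))  # what the AI client sends over the wire
```

Because every server speaks the same envelope, one assistant can drive payments, bookings, and maps services without a bespoke integration for each.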

How does AI judge? Anthropic studies the values of Claude

AI models like Anthropic’s Claude are increasingly asked not just for factual recall, but for guidance involving complex human values. Whether it’s parenting advice, workplace conflict resolution, or help drafting an apology, the AI’s response inherently reflects a set of underlying principles. But how can we truly understand which values an AI expresses when interacting […]
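
As a toy illustration of the general idea (not Anthropic’s actual pipeline, which analyses anonymised real-world conversations at scale), one can label the values each response expresses and tally those labels across many conversations. The keyword-based `label_values` stand-in below is hypothetical; in practice an LLM judge performs that step.

```python
from collections import Counter

# Toy sketch: label which values each AI response expresses, then
# aggregate across conversations. `label_values` is a hypothetical
# stand-in; an LLM judge would do this classification in practice.

def label_values(response: str) -> list[str]:
    cues = {"apolog": "accountability", "safe": "harm avoidance",
            "honest": "honesty", "listen": "empathy"}
    return [value for cue, value in cues.items() if cue in response.lower()]

responses = [
    "It may help to listen first, then apologise honestly.",
    "Prioritise keeping everyone safe and be honest about limits.",
]
tally = Counter(v for r in responses for v in label_values(r))
print(tally.most_common())  # which values surface most often
```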

Google introduces AI reasoning control in Gemini 2.5 Flash

Google has introduced an AI reasoning control mechanism for its Gemini 2.5 Flash model that allows developers to limit how much processing power the system expends on problem-solving. Released on April 17, this “thinking budget” feature responds to a growing industry challenge: advanced AI models frequently overanalyse straightforward queries, consuming unnecessary computational resources and driving […]
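
In practice the budget is a per-request parameter. The sketch below uses Google’s `google-genai` Python SDK, with field names as published in Google’s documentation; the API key, prompt, and budget value are placeholders.

```python
# Capping Gemini 2.5 Flash's reasoning spend on a single request.
# Field names follow Google's google-genai SDK documentation;
# the API key, prompt, and budget value here are placeholders.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-04-17",
    contents="What is 2 + 2?",
    config=types.GenerateContentConfig(
        # 0 disables extended thinking; larger budgets buy deeper reasoning
        thinking_config=types.ThinkingConfig(thinking_budget=0),
    ),
)
print(response.text)
```

A simple arithmetic query like this needs no extended thinking, so a zero budget avoids paying for reasoning tokens the answer does not require.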

Google launches A2A as HyperCycle advances AI agent interoperability

AI agents handle increasingly complex and recurring tasks, such as planning supply chains and ordering equipment. As organisations deploy more agents developed by different vendors on different frameworks, agents can end up siloed, unable to coordinate or communicate. Lack of interoperability remains a challenge for organisations, with different agents making conflicting recommendations. It’s difficult to […]
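
A2A addresses discovery first: each agent publishes a JSON “Agent Card” (conventionally served at `/.well-known/agent.json`) describing what it can do, so agents built by different vendors can find and call one another. The sketch below follows field names from the public A2A draft specification; the procurement agent itself is invented.

```python
import json

# Sketch of an A2A Agent Card: the JSON document an agent publishes so
# other agents can discover its skills. Field names follow the public
# A2A draft spec; this procurement agent is invented for illustration.

agent_card = {
    "name": "procurement-agent",
    "description": "Plans supply chains and orders equipment",
    "url": "https://agents.example.com/procurement",
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "order-equipment",
            "name": "Order equipment",
            "description": "Places purchase orders with approved vendors",
        }
    ],
}

print(json.dumps(agent_card, indent=2))
```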

The evolution of harmful content detection: Manual moderation to AI

The battle to keep online spaces safe and inclusive continues to evolve. As digital platforms multiply and user-generated content expands rapidly, the need for effective harmful content detection becomes paramount. What once relied solely on the diligence of human moderators has given way to agile, AI-powered tools reshaping how communities and organisations manage toxic […]
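
A minimal sketch of the AI side of that shift, assuming the Hugging Face `transformers` library and `unitary/toxic-bert` (one openly available toxicity model, used purely as an example): a pretrained classifier screens comments and flags likely-toxic ones for human review.

```python
from transformers import pipeline

# Screen user-generated comments with a pretrained toxicity classifier.
# "unitary/toxic-bert" is one openly available model, used as an example;
# production systems tune thresholds and route edge cases to moderators.

classifier = pipeline("text-classification", model="unitary/toxic-bert")

for comment in ["Thanks, that was really helpful!", "You are an idiot."]:
    result = classifier(comment)[0]
    flagged = result["label"] == "toxic" and result["score"] > 0.8
    print(f"{comment!r} -> {result['label']} ({result['score']:.2f}), flagged={flagged}")
```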
