💣 AI loves starting nuclear wars

... UK government invests £100m in AI

Presented by

👋 Good morning! Glad you opened the folder this morning. Let’s unpack.

Here’s what we have for you today:

  • UK government invests £100m in AI research and regulation

  • Taylor Swift images traced back to 4chan ‘challenge’

  • AI models love escalating conflicts to full-scale nuclear war

  • … and more

The UK government plans to invest over £100m in nine new AI research hubs.

What you should know:

  • These hubs will focus on developing responsible AI in areas like healthcare and education.

  • The investment is part of the UK's response to an AI regulation white paper.

  • Regulators have been asked to publish their plans for regulating AI by April 30.

  • The initiative aims to make the UK a leader in safe AI use.

What does this mean for me? This funding should help make AI safer and more useful in areas that shape our daily lives, like healthcare and schools. The UK is also working on rules to tackle AI risks, such as protecting personal data and making sure people don’t lose their jobs to robots.

🧰 In the toolshed

❌ Five Why makes problem-solving easy and fun.

📃 Guidde helps you create video SOPs in literal seconds.

💻 Permer builds high-converting landing pages in a few clicks.

Quit sending emails like a dinosaur.

It’s the year 2024 and all the top newsletters are using beehiiv.

beehiiv was created by the same early Morning Brew employees who scaled their daily email to over 4 million subscribers. And now every newsletter on beehiiv has access to the same tools and winning formula.

So what exactly does beehiiv offer?

  • World-class growth tools like the referral program and recommendation network

  • Monetization via the beehiiv Ad Network and premium subscriptions (i.e. beehiiv helps you get paid)

  • Seamless content creation with a sleek collaborative editor

  • Best-in-class inbox deliverability of 98.7%

  • Oh and it’s the most affordable by a mile…

Take your newsletter to the next level — get started for free.

A recent study shows that AI models tend to escalate conflicts in simulations, indicating they're unsuitable for real-world military and diplomatic decision-making.

What you should know:

  • The US military is considering AI for decision-making, but it's risky.

  • Researchers tested large language models in war simulations.

  • Five LLMs played a conflict game, representing eight virtual nations.

  • The game involved diplomacy, alliances, invasions, and nuclear options.

  • All of the LLMs tended to escalate conflicts, some more violently than others.

  • Some LLMs chose nuclear options quickly.

  • The study finds LLMs too unpredictable for real-world diplomacy or military use.

Are we in immediate danger? If anything, research like this makes us safer. The point is to figure out whether using AI in military settings is a good idea before anyone actually does it. So no, this doesn’t put us in danger; it’s a step toward a safer future.

🤖 Bits ‘n bots

💅 Explicit Taylor Swift images traced back to ‘challenge’ on 4chan.

🏥 AI healthcare startup raises $70m, led by OpenAI.

🧠 Nvidia to help companies build in-house AI computing.

🖼️ AI creation of the day

Source: Hogwarts rave

Did you like today's folder?

... be honest 😜

🫡 That’s it for today. See you tomorrow!