
AI for Kids
Welcome to "AI for Kids" — the podcast for families who want their kids to understand the future while staying safe and keeping the joy of being a kid.
If you're a parent or teacher who’s curious about AI but cautious about screen time, you're in the right place. We believe kids can learn about the technology shaping their world without another app, game, or endless scroll.
This podcast is designed for kids ages 4–12 (teens, you can listen too) and the adults who love them—offering simple, engaging conversations with fellow kids and AI experts (no tech jargon here). We cover everything from how AI works to how to talk about it at the dinner table.
Whether you're folding laundry, driving to school, or winding down for bed, "AI for Kids" fits into real life. It’s screen-free, easy to follow, and made to spark curiosity—not replace it.
Because we don’t believe kids need more screen time to stay ahead. Just better ways to understand the world they’re growing up in.
Is Apple’s New AI Safe for Kids? (AI Kids Scoop)
Ever wondered how to explain AI news to the curious young minds in your life? This episode of AI for Kids Scoop breaks down the latest artificial intelligence developments from early to mid-June in language kids and adults can understand together.
We dive into Apple's new "Apple Intelligence" features unveiled at their Worldwide Developer Conference. Picture asking your iPhone to translate a FaceTime call in real time or edit photos simply by describing what you want to change. We explore what this means for families who use these devices and how AI is quietly transforming everyday tech experiences.
The conversation turns to Mattel's groundbreaking partnership with OpenAI, which promises to bring AI-powered toys to market later this year. Yes, Barbie's getting a brain! While these interactive toys offer exciting possibilities for creative play and learning, we discuss the important privacy considerations parents should keep in mind as toys begin collecting and processing children's data.
We also unpack significant research from the Alan Turing Institute showing that 22% of children ages 8-12 have already engaged with AI chatbots, often without distinguishing between human and machine interactions. This highlights the critical need for AI literacy in schools and homes. Our screen-free activity encourages critical thinking about AI by having kids identify potential AI failures and develop human backup plans—a perfect way to build digital discernment without more screen time.
From European AI regulations to Microsoft's transparency reports and Meta's privacy defaults, we translate complex tech developments into actionable insights for families navigating the AI revolution. Join us for this enlightening discussion that keeps your curiosity switched on while keeping screen time in balance. Download, share with friends, and subscribe wherever you get your podcasts!
Help us become the #1 podcast for AI for Kids.
Buy our new book "Let Kids Be Kids, Not Robots!: Embracing Childhood in an Age of AI"
Social Media & Contact:
- Website: www.aidigitales.com
- Email: contact@aidigitales.com
- Follow Us: Instagram, YouTube
- Gift or get our books on Amazon or Free AI Worksheets
Listen, rate, and subscribe! Stay updated with our latest episodes by subscribing to AI for Kids on your favorite podcast platform.
Like our content? Subscribe, or feel free to donate to our Patreon here: patreon.com/AiDigiTales...
Welcome to the AI for Kids podcast, the podcast for moms, aunties and teachers who want the kids they love to understand AI without more screen time. We keep it simple, safe and fun. No tech degree required. Each episode breaks down AI ideas and includes activities to help kids use AI in ways that keep them curious and creative. No pressure, no overwhelm, no extra screens, just clear, engaging learning you can feel good about. Let's get started.
Amber Ivey (AI): Hey friends, it's Amber Ivey, aka AI, and you're back for another week of AI for Kids Scoop. We want to talk to you all about the latest AI news in words that you and your grownups can actually understand. So today we're covering everything that happened from around June 5th through June 20th. All right, buckle up.
Amber Ivey (AI): So what's new over at Apple? I'm an Apple user, but Apple has been doing a lot of wild things. This is not an endorsement of Apple in any sense of the word, but I wanted to share some things that happened at their Worldwide Developer Conference. So last week was their big developer party, and the huge headline was something called Apple Intelligence. It's a super smart set of features that live right on your iPhone, your iPad and your Mac computer. So imagine asking your device to translate a FaceTime call in real time, or asking it to edit a photo just by describing what you want it to do. Apple even opened these tools to app makers, so your favorite games might soon get even more AI-driven too. As it relates to design, Apple introduced a sleek new look called Liquid Glass. It's like your apps are floating in a clear soda bottle. Fun to see, but the real magic is how AI quietly powers the whole thing.
Amber Ivey (AI): Another wild thing that happened was that toys are going to start talking back. Yeah, that's right, Barbie's getting a brain. Toymaker Mattel announced a partnership with OpenAI. They're working on AI-powered toys that can chat, tell stories and maybe help you with your homework. Safely, they promise, but we still need to hold them accountable to that. The first AI toy should arrive later this year. Parents are excited, but also a little cautious, because these new toys will need to protect kids' data and your privacy. So more on that in future conversations.
Amber Ivey (AI): Speaking of OpenAI, on June 5th they also released a report called Disrupting Malicious Uses of AI. Malicious basically just means bad, right? People trying to do bad things on purpose. It's basically their playbook for spotting and stopping bad guys who try to use AI for scams or cyber attacks, which is something we're going to have to keep watching out for and be aware of, to make sure that you all are protected. Why should you care? Because the same tech that writes jokes can also send you bad emails, and companies need to put in guardrails and protections to make sure you are safe and able to interact with these tools, and just be online, in a way that's safe.
Amber Ivey (AI): I want to talk about a few rules that are coming out, or maybe not, we don't know. Across the Atlantic, the European Commission hinted on June 12th that it may delay parts of the new AI Act. This AI Act is supposed to be the strictest set of rules in the world. The law already bans things like creepy social scoring, which is basically giving you a score based on how you interact online. It puts tougher rules on big chatbots that were supposed to kick in August 2nd, but now officials are saying they might hit the pause button so everyone can get ready. Even though this is in Europe, it will have implications in the US, because many of the things that go down in Europe, such as the data laws, are often applied here, because it's hard to manage different places with different rules and different laws. So it's going to be interesting to see what this does to our use of AI in the US. I also wanted to touch on some of the things that are happening in the classroom, starting with research done by the Alan Turing Institute.
Amber Ivey (AI): If you know Alan Turing, he is popularly known for the Turing test, which is a test that sees if a machine can trick people into believing it's human. The research said that 22% of kids ages 8 through 12 have already chatted with AI bots. The interesting part is they are chatting, often without realizing whether the answers came from a human or a machine. So the report says schools need better AI literacy lessons so students can tell fact from fiction and AI bots from humans. Over at Microsoft, actually today, when this is being recorded, on June 20th, they published their annual Responsible AI Transparency Report. Think of it like a report card that shows how well they're following their own safety rules. They explain how they test new features like Copilot before turning them loose on the public. I think all of these different tools should have AI report cards that we can test and look at, especially some of the ones that are going to be offered to you all now and in the future, such as Google and other tools like that.
Amber Ivey (AI): A few other quick things I wanted to mention. Windows 11 now has an AI helper in the Settings app that lets you tweak your computer just by typing what you want. Interesting stuff, right? Try it out with your parents if you want to see how it works. Meta also has a brand new AI chat app, but it raised eyebrows when critics noticed it defaults to public chats. So basically, if you don't change the default setting, a chat or a prompt you put into the AI is now public for everyone to see. This reminds us that we have to read the privacy settings not once, not twice, sometimes three times, to make sure we know how these tools are being used.
Amber Ivey (AI): All right, let's take a quick activity break. These are screen-free because, one, I don't want you addicted to computers, but also not everyone has access to a screen, so I want to make sure we're using what is available to us and nearby. So I want you to grab a sheet of paper and take some time to list three ways AI helps you during the day. If you're not using it yet, take some time to think about how AI could help you during the day. Then I want you to circle one that could go wrong, like horribly wrong, if AI made a mistake, and brainstorm all the ways it could go wrong. Then I want you to share your list with a friend and brainstorm a human backup plan: what would you, as a human, do if the AI went wrong in the area you picked, based on how you use it or would want to use it every day? This is an example of keeping humans in the loop, which is something we always say needs to happen. It basically means that regardless of whether there's AI or not, humans should be there to help with the decision-making process and always be involved.
Amber Ivey (AI): Okay, kids, that's it for this week of AI for Kids Scoop. I hope you learned something cool that you wanna share with a friend or, better yet, teach your parents. I want you to ask your parents to follow our podcast so they can get the newest episodes every week, and if you like it, ask them to leave a review. We wanna make sure others find out about this podcast. Keep your curiosity switched on and your screen time balanced, and I will see you again in two weeks. Make sure you come back next week for an exciting interview and, of course, our ABCs of AI. Bye-bye. Thank you for joining us as we explore the fascinating world of artificial intelligence. Don't keep this adventure to yourself. Download it, share it with your friends and let everyone else in on the fun. Subscribe wherever you get your podcasts, or on YouTube. See you next time on AI for Kids.