A Blog by Jonathan Low


Apr 14, 2019

Generation AI: What Happens When Your Child's Best Friend Is An AI That Talks

Germany banned the “My Friend Cayla” doll because it captured and stored what children said to it.

But it probably won't be the last. And what rules govern how such a toy is used, how it affects children (perhaps subliminally urging them to demand that their parents buy them things), and who gets to hear, let alone own, the information it generates? JL


Kay Firth-Butterfield reports in World Economic Forum:

A new wave of artificial intelligence (AI) toys is “befriending” children. Manufacturers claim they are educational, enhancing play and helping develop social skills. But like other “things” we connect to the internet, they may put security and privacy at risk. Who is the arbiter of these conversations? Who coded the algorithms? Do the values the child is being exposed to align with those of the parents? Will parents be able to choose the values the toy is coded with? If the toy is educational, is the algorithm checked by someone who is qualified to teach? If data is being collected, what will the company do with that information?
‘My Friend Cayla’ is an internet-connected doll that uses voice recognition technology to chat and interact with children in real time. Cayla’s conversations are recorded and transmitted online to a voice analysis company.
This raised concerns that hackers might spy on children or communicate directly with them as they play with the doll. There are also concerns about how kids’ voice data is used. In 2017 German regulators urged parents to destroy the doll, classifying it as an “illegal espionage apparatus”.
Cayla is just one example of a new wave of artificial intelligence (AI) toys that “befriend” children. Manufacturers often claim they are educational, enhancing play and helping children develop social skills. But consumer groups warn that smart toys, like other “things” we connect to the internet, might put security and privacy at risk.
In the following Q&A, Kay Firth-Butterfield, the World Economic Forum’s Head of Artificial Intelligence and Machine Learning, explains how to navigate a world where AI toys are increasingly popular.
What happens to the data from AI toys?
The toys are connected to the internet (via WiFi, or via Bluetooth to a phone or other device with internet access) and send data to the supplier. This enables the company's AI to learn from those conversations and get better at talking to the child.
The company records and collects all the child’s conversations with the toy, and possibly those with other children and adults who also interact with it.
The company is probably storing this data and certainly using it to create a better product.
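To make that data flow concrete, here is a minimal sketch of what the toy-side upload could look like. The endpoint URL, field names and payload format are invented for illustration, not any vendor's actual API; the point is how few lines it takes to move a child's voice off the device.

    # Purely illustrative sketch of the toy-to-vendor data flow described above.
    # The URL, field names and payload format are hypothetical assumptions;
    # no real vendor API is being described.
    import base64
    import json
    import urllib.request

    def upload_utterance(audio_bytes: bytes, device_id: str) -> None:
        """Send one recorded utterance from the toy to the vendor's cloud."""
        payload = {
            "device_id": device_id,  # ties every recording to one physical toy
            "audio": base64.b64encode(audio_bytes).decode("ascii"),
        }
        request = urllib.request.Request(
            "https://api.example-toymaker.com/v1/utterances",  # placeholder endpoint
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        # One HTTP call per utterance: everything the child says is now
        # off-device, available for the vendor to store, analyse or share.
        urllib.request.urlopen(request)

Once recordings accumulate server-side like this, retention becomes a policy choice made by the company, not the family.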
Where the toy is used also affects how long its data may be stored. In the US, for example, companies making educational toys are allowed to store data for longer than other companies, so describing a toy as educational opens up the right to hold on to its data for longer.
As more devices – many marketed as educational toys – come onto the market, they are setting off alarm bells around privacy, bias, surveillance, manipulation, democracy, transparency and accountability.
What issues should we be most concerned about?
Germany banned Cayla and similar toys because of concerns they could be used to spy on children and that someone could hack the device and communicate directly with the child.
But we are also talking about companies monetizing data. The data from AI toys contains everything a child says to the device, including their most precious secrets.
If that data is collected, does the child have a right to get it back? And if data collected from very early childhood does not belong to the child, does that make the child extra vulnerable, because his or her choices and patterns of behaviour could be known to anyone who purchases the data, for example companies or political campaigns?
Depending on the privacy laws of the state in which the toy is used, collecting and keeping this data may breach Article 16 of the Convention on the Rights of the Child, which protects the right to privacy. (Though, of course, arguably this is something parents routinely do by posting pictures of their children on Facebook.)
What are the benefits of AI toys?
Most economists would argue that improving and increasing access to education is one of the best ways to close the gap between the developing and developed world.
AI-enabled educational toys and “teachers” could make a hugely beneficial difference in the developing world.
But, if venture capitalist and former Google China president Kai-Fu Lee is correct, the data collected from these devices would simply be used by the big AI companies in the West and China, rather than benefiting children, their parents or the countries in which they live.
What influence could AI toys have on kids?
As well as the risk of hacking, we also need to think about what these toys are saying to our children. Who is the arbiter of these conversations? Who coded the algorithms, and could their unintended biases creep in? Do the values the child is being exposed to align with those of the parents? Will parents be able to choose the values the toy is coded with?
If the toy is educational, is the algorithm being checked by someone who is at least qualified to teach?
These toys will be very influential because the children will be conversing with them all the time. For example, if the doll says it is cold and the child asks his or her parents to buy it a coat, is that advertising?

If the child has a toy that can talk back, instead of an invisible friend, will this affect creative play for good or ill? The child won’t have to invent stories about an invisible friend anymore. Could this huge change in creative play alter us as human beings?
If data is being collected, even if it isn't being stored, does the company have a duty to “red flag” children who share suicidal thoughts or other self-harming behaviour? If the child confides in the toy that he or she is being abused, will the company report this to the relevant authorities? And then what will the company do with that information?
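A crude version of such a red flag is technically trivial; the hard questions are legal and ethical. As a purely hypothetical sketch, with a phrase list invented for illustration, a vendor's transcript pipeline could flag utterances like this:

    # Hypothetical "red flag" check, for illustration only. The phrase list
    # and the escalation rule are invented; a real safeguarding system would
    # need clinically validated criteria, context and human review.
    RED_FLAG_PHRASES = ("want to die", "hurt myself", "hits me")

    def needs_review(transcript: str) -> bool:
        """Return True if an utterance should be escalated to a human reviewer."""
        text = transcript.lower()
        return any(phrase in text for phrase in RED_FLAG_PHRASES)

    print(needs_review("sometimes I want to die"))  # True  -> escalate
    print(needs_review("my doll is cold"))          # False -> ignore

The ease of building such a filter is exactly why the duty-to-report question cannot be settled by the technology alone.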
So what can we do to protect children?
Parents need to have answers to these questions before they buy the devices. At the very least, they should check that their child is learning values from AI toys that concur with their own.
At the moment the onus is on consumers to know what is being done with their data, but there is discussion about making companies responsible for ensuring that consumers understand how their data is being used.
At the World Economic Forum, we will be running a project that re-imagines the role of regulators so that they certify algorithms as fit for purpose, as opposed to the current situation where regulators issue a fine after something goes wrong.
This might be the right regulatory model here because it is needed now and fits within agile governance mechanisms. The difficulty with governing smart toys, though, is that the AI is learning and changing with each interaction with the child.
This doesn’t mean we’re saying that AI-enabled toys are bad. They might one day help us achieve precision learning (using AI to tailor education to each child’s needs). And AI toys could be excellent for preparing children to work alongside autonomous robots.
What we are saying is that children are vulnerable and so we should consider now how AI is used around them and not beta test it on them.
