Hey so I looked at your website and you say you're KidSafe COPPA certified, yet on their website it only mentions you as KidSafe listed? Any reason for the discrepancy?
Cool as it is, I can see this product inhibiting rather than enabling creativity and play in kids. Instead of having to draw something to see it, refining the drawing over minutes or hours, the kid will just lazily ask for some half-formed idea and watch it materialize out of thin air. That's just sad
Yeah. It's bad enough if kids prompting this stuff online is the new form that creativity is going to take. But this way, it's generating electronic crap that will end up in landfills as well.
Does a human review every sticker before it's ever shown to a child? If not, it's only a matter of time before the AI spits out something accidentally horrific.
> No internet open browsing or open chat features.
> AI toys shouldn’t need to go online or talk to strangers to work. Offline AI keeps playtime private and focused on creativity.
> No recording or long-term data storage.
> If it’s recording, it should be clear and temporary. Kids deserve creative freedom without hidden mics or mystery data trails.
> No eavesdropping or “always-on” listening
> Devices designed for kids should never listen all the time. AI should wake up only when it’s invited to.
> Clear parental visibility and control.
> Parents should easily see what the toy does, no confusing settings, no buried permissions.
> Built-in content filters and guardrails.
> AI should automatically block or reword inappropriate prompts and make sure results stay age-appropriate and kind.
The thing users here know, and that "kid-safe" product after product has proven, is that safety filters for LLMs are largely illusory. Perhaps they can exist someday, but a breakthrough like that isn't going to come from an application-layer startup like this one. Trillion-dollar companies have been trying and failing for years.
All the other guardrails are fine but basically pointless if your model has any social media data in its dataset.
I'm sure you're right that clever prompting or other tricks could get it to print inappropriate stickers, but I believe in this case that may be OK.
If you consider a threat model where the threat is printing inappropriate stickers, who are the threat actors? Children who are deliberately attempting to circumvent the controls? If they already know about the topics they shouldn't be printing and are actively trying to get the toy to print them, they probably don't truly _need_ the guardrails at that point.
In the same way that many small businesses don't put in place (and most likely can't even afford) security controls that are only relevant to blocking nation-state attackers, this device really only needs enough controls to prevent a child from accidentally getting an inappropriate output.
It's just a toy for kids to print stickers with, and as soon as the user is old enough to know or want to see more adult content they can just go get it on a computer.
> Stickerbox is our attempt to make modern AI kid-safe, playful, and tangible. We’d love to hear what you think!
How is it made to be "kid-safe"?
> Our model includes strict safety filters that block inappropriate content before it ever appears, ensuring that every creation stays fun, imaginative, and age-appropriate.
How do you filter the output of a generative AI like this?
Filter the input? If it's trained only on kid-friendly material and you have guardrails on the inputs, what's going to come out? I believe Apple has done this pretty successfully with their image-generation features that were clearly aimed at kids. Granted, the outputs are... very boring, but they seem never to return anything inappropriate.
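To make the "guardrails on the inputs" idea concrete, here's a minimal sketch of an input-side screen that runs before the prompt ever reaches the image model. Everything here is hypothetical illustration, not Stickerbox's (or Apple's) actual implementation: a real system would use a trained moderation classifier rather than a denylist, but the shape of the pipeline is the same.

```python
# Hypothetical input-side guardrail: screen the child's prompt before it
# reaches the image model. The denylist is an illustration only; a real
# system would call a trained content classifier here.

BLOCKED_TERMS = {"weapon", "blood", "gore"}  # hypothetical denylist

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, prompt_to_send).

    Blocked prompts are swapped for a gentle fallback instead of
    surfacing an error to the child.
    """
    words = set(prompt.lower().split())
    if words & BLOCKED_TERMS:
        return False, "a friendly cartoon sticker"
    return True, prompt

print(screen_prompt("a dragon with a weapon"))  # blocked, falls back
print(screen_prompt("a happy dinosaur"))        # allowed unchanged
```

The key design choice is failing toward a harmless substitute rather than a refusal message, so the toy never has to explain *why* something was blocked.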
Don't mean to steal your customers, but can I just buy good thermal sticker paper somewhere that would work with a regular receipt printer? That would be fun for side nonsense, with or without AI.
When I was younger I remember getting Avery sticker sheets for a school election, but a roll where someone could do one at a time would be more useful for random stuff.
Looks really cool, but unfortunately I cannot use it, because thermal printing paper is coated with endocrine-disrupting chemicals like Bisphenol-A (BPA) or its substitute, Bisphenol-S (BPS), which can be absorbed through skin contact, potentially leading to metabolic, reproductive, or cancer-related issues. It's basically a very fine plastic dust. Though the risk depends on exposure duration and amount, it's not something I would feel comfortable giving to kids.
I can't find the CPC certificate for this product. Children's toys are heavily regulated in the US, and given the thermal paper, the lack of any displayed authorization to sell, and the fly-by-night nature of a drop-shipping website like this one...
I don't think this is a legal product to market toward children in the US
and that's without even mentioning the LLM usage
Real glad my niblings all got real art supplies when they were little. That fosters real creativity, and the lot of them can draw better than any of the examples on the sales page, and they're still little kids. And there's no subscription, no EULA, their supplies are legal and safe to use, etc.
https://www.kidsafeseal.com/certifiedproducts/stickerbox_dev...
Also, do you guys have CPSC CPC certificate? I couldn't find anything to that effect.
Why? Kids can combine the power of their ideas with crayons, markers, and pencils.
Interesting that they fail their own checklist in that article.
> Here’s a parent checklist for safe AI play:
> [...] AI toys shouldn’t need to go online
From the FAQ:
> Can I use Stickerbox without Wi-Fi?
> You will need Wi-Fi or a hotspot connection to connect and generate new stickers.
When LLMs are involved, I don't find the guardrails as hard to build as they're making out.
If AI were built for kids, what would it look like?
Exactly like this and it's heartbreaking.
I'm still bitter at Logitech for screwing up Squeezebox.
All the constructive/neutral comments are downvoted, too, giving them even more visibility.
Any of a variety of 4" thermal shipping label printers without AI, generally ranging from $30 to $75: https://www.amazon.com/Phomemo-Bluetooth-241BT-Wireless-Comp...
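For anyone who wants the DIY route, most of those cheap thermal printers speak the standard ESC/POS command set, so you can drive one without any vendor app. The byte sequences below are standard ESC/POS; the device path is an assumption that depends on your OS and printer (on Linux a USB receipt printer often shows up as something like /dev/usb/lp0), and this is only a sketch, not tested against any particular model.

```python
# Sketch: minimal ESC/POS payload for a generic thermal receipt/label
# printer. The command bytes are standard ESC/POS; how you deliver them
# (USB character device, serial port, network socket) is printer-specific.

ESC_INIT = b"\x1b\x40"      # ESC @ : reset/initialize the printer
GS_CUT   = b"\x1d\x56\x00"  # GS V 0 : full paper cut (if the model supports it)

def text_job(text: str) -> bytes:
    """Build a byte payload that prints `text`, feeds, and cuts."""
    body = text.encode("ascii", "replace")  # ESC/POS text mode is ASCII-ish
    return ESC_INIT + body + b"\n\n\n" + GS_CUT

payload = text_job("HELLO STICKER")
# Delivery is OS/printer dependent, e.g. on Linux (hypothetical path):
#   open("/dev/usb/lp0", "wb").write(payload)
```

Libraries like python-escpos wrap this (including raster image printing for actual stickers), but it's useful to see that the protocol underneath is just a handful of byte sequences.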
Everything about this is marked up to hell to pay for the generative AI end.
It's rare that I see a launch on HN that I could call abjectly evil, but this is certainly it.
https://pmc.ncbi.nlm.nih.gov/articles/PMC5453537/
This product is actual trash