Designing human experiences in the age of AI.
Links curated by Peter Polgar.
For a weekly newsletter of hand-selected links, sign up here.
Posted on 2017 August 1 by polgarp · Tagged in Machine Learning, Attacks, and Research
Original link: https://arxiv.org/pdf/1707.08945v1.pdf
It’s interesting to see more and more research published on how to fool machine learning systems, vision-based ones in particular. I can see designers on both sides. On one hand, as more ML tech is deployed to track people without their informed consent, there is a need for systems that enable users to hide. On the other hand, the systems being designed should protect well-intentioned users from harm caused by attackers.
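To give a sense of why fooling vision systems is so easy, here is a minimal sketch of a gradient-sign perturbation (in the spirit of FGSM) against a hypothetical toy linear classifier. The weights, input, and epsilon are all made up for illustration; the paper's physical-world attack on road signs is far more involved than this.

```python
import numpy as np

# Toy setup: a hypothetical linear classifier with random weights.
rng = np.random.default_rng(0)
w = rng.normal(size=16)

# An input the model confidently scores as positive (class A).
x = w / np.linalg.norm(w)

def score(x):
    # Positive score -> class A, negative score -> class B.
    return float(w @ x)

# The gradient of the score w.r.t. x is just w, so stepping a small
# amount against the sign of the gradient pushes the score down.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(score(x))      # clearly positive before the perturbation
print(score(x_adv))  # flips negative after a bounded perturbation
```

The point of the sketch is that a perturbation bounded per-coordinate by epsilon can still swing the classifier's output, because the attack aligns every coordinate of the change with the gradient at once.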