In a Quora post, Greg Vincent suggests that we could adopt a “do not crawl” standard for the real world.
Google, if you’d like to crawl my face with a Googlebot, the least you could do is request my robots.txt first. To do otherwise would be extremely impolite.
Google has already adopted the “robots exclusion standard” for the Web (the standard actually predates Google itself). Basically, a web site owner creates a robots.txt file on their web server, which tells Google specifically what it can and cannot capture or crawl from the site.
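As a concrete illustration, a minimal robots.txt might look like this (the path names are hypothetical; `Googlebot-Image` is one of Google’s real crawler names):

```
# Allow all crawlers, but keep them out of /private/
User-agent: *
Disallow: /private/

# Block Google's image crawler from the whole site
User-agent: Googlebot-Image
Disallow: /
```

Each `User-agent` line names a crawler, and the `Disallow` lines beneath it list the paths that crawler is asked not to fetch. Compliance is voluntary, which is exactly the gap the author highlights below.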
Hey Google, why the double standard? Or, maybe more accurately, why the lack of a standard for human privacy when you already recognize one for web servers? Are web servers really more deserving of privacy rights than human beings?
Add in some sort of signalling method, such as an IR emitter, and perhaps some legal force to ensure “world crawlers” abide by the robots.txt equivalent, and we could have a good technical solution that reduces some of the privacy risks. To help with enforcement, steganography could be used so that any images or audio collected would have the do-not-track notice embedded in them. Some people are already working on a QR-code-based version called Tagmenot. We will be following developments closely.
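To make the steganography idea concrete, here is a minimal sketch of least-significant-bit embedding. It is an assumption-laden toy, not a production scheme: plain bytes stand in for real pixel data (a real implementation would use an image library such as Pillow), and the `DNT` marker is a hypothetical do-not-track notice.

```python
# Minimal LSB-steganography sketch: hide a short "do not track" notice
# in the least significant bits of raw pixel bytes. Plain bytes stand
# in for real image data here.

NOTICE = b"DNT"  # hypothetical do-not-track marker


def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Write each bit of `message` (MSB first) into the LSB of one pixel byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite least significant bit
    return out


def extract(pixels: bytearray, length: int) -> bytes:
    """Read `length` bytes back out of the pixel LSBs."""
    bits = [b & 1 for b in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n : n + 8]))
        for n in range(0, len(bits), 8)
    )


pixels = bytearray(range(64))          # stand-in "image"
stamped = embed(pixels, NOTICE)
assert extract(stamped, len(NOTICE)) == NOTICE
```

Because only the lowest bit of each byte changes, the embedded notice is imperceptible in a real image, yet any crawler (or regulator) that knows the scheme can check captured media for it.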