You will need
  • A personal website (with access to its files on the server).
Instructions
1
One of the easiest ways to remove a web page from a search engine's index is to physically delete it from the server (or move it to a different address), so that requests for the old URL return an error status. Once the page is gone, a crawler requesting it will receive the response line HTTP/1.1 404 Not Found instead of the page content. Keep in mind that search engine crawlers may visit a site as often as every 3 hours or as rarely as every 2-3 days, so you will need to wait some time for the result.
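If you prefer not to leave the old URL silently missing, the web server can be told to answer with an explicit error status. A minimal sketch for Apache, assuming the site allows .htaccess overrides and using the example path from this guide (the Redirect directive comes from mod_alias; 410 Gone tells crawlers the removal is permanent, which some engines act on faster than 404):

```apache
# Return "410 Gone" for the removed page so crawlers drop it from the index.
# The path below is the illustrative example used throughout this guide.
Redirect gone /wp-content/foto/fotojaba.html
```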
2
Another way is to edit the robots.txt file, which tells a search robot which paths it may crawl as soon as it arrives at your website. This plain-text file always has the same location: the root directory of the site. The first block usually specifies indexing rules for the Yandex robot (its behavior differs noticeably from other crawlers), and the second block covers all other search engines.
3
At the beginning of each block, specify the user-agent line, e.g. "User-agent: *", followed by the addresses of the pages you want to hide, e.g. "Disallow: /wp-content/foto/fotojaba.html". List every page or section you want to close to indexing in the same way. Keep in mind that this method does not produce quick results: if your site has low activity and its news is not broadcast on social networks, it can take several days for the new rules to be processed. In addition, you will still need to request removal of the cached copies of these pages from the search engines' archives.
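The rules described above can be checked locally before uploading the file. A minimal sketch using Python's standard urllib.robotparser, with the example path from this step (the parser applies the same matching logic crawlers use for Disallow lines; the robot names and URLs are illustrative):

```python
from urllib.robotparser import RobotFileParser

# robots.txt contents: one block for the Yandex robot, one for all other
# robots, using the example path from this guide.
rules = """User-agent: Yandex
Disallow: /wp-content/foto/

User-agent: *
Disallow: /wp-content/foto/fotojaba.html
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# The hidden page is blocked for any robot; the rest of the site is not.
print(parser.can_fetch("SomeBot", "https://example.com/wp-content/foto/fotojaba.html"))  # → False
print(parser.can_fetch("SomeBot", "https://example.com/index.html"))  # → True
```

If a page you meant to hide still comes back as fetchable, the Disallow path likely does not match the page's actual URL path.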
4
An alternative to listing pages in robots.txt is the robots meta tag. Its syntax is as follows: the tag is placed between the paired <head> and </head> tags of the page, with "robots" as the value of its name attribute. An example looks like this: <meta name="robots" content="noindex,nofollow" />.
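Put together, the head of a page hidden this way might look like the following sketch (the title is illustrative):

```html
<head>
  <title>Hidden page</title>
  <!-- noindex removes the page from search results;
       nofollow tells crawlers not to follow its links. -->
  <meta name="robots" content="noindex,nofollow" />
</head>
```

Unlike robots.txt, this tag must be added to every page you want excluded, but it does not require access to the site root.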