Joined: Aug 2019
Posts: 8
Likes Received: 2
Is robots.txt file enough to block some pages from Search Engines? I mean from crawling and indexing.
Joined: Jul 2014
Posts: 995
Likes Received: 62
No, it's not enough on its own. It's best to block them through the .htaccess file as well.
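For example, here is a minimal sketch of an .htaccess rule that returns a 403 to common crawler user agents. It assumes Apache 2.4 with mod_rewrite enabled, and `/private/` is just a placeholder path:

```apache
# Block well-known crawler user agents from the /private/ section
# (assumes mod_rewrite is enabled; /private/ is an example path)
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (Googlebot|Bingbot) [NC]
RewriteRule ^private/ - [F]
```

Note this matches on the User-Agent header, which bots can spoof, so treat it as a deterrent rather than a hard guarantee.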
Joined: Nov 2016
Posts: 470
Likes Received: 60
(10-24-2019, 05:52 AM)lammao Wrote: Is robots.txt file enough to block some pages from Search Engines? I mean from crawling and indexing.
The robots.txt file only tells search engines (or suggests to them, some might say) whether they can or cannot crawl and index a page. It doesn't necessarily block them. The .htaccess file is the best option for you, as @ksmith29 suggested.
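For reference, a typical robots.txt rule looks like this (`/private/` is just an example path); compliant crawlers treat it as a request, not an enforced block:

```
User-agent: *
Disallow: /private/
```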
Joined: Aug 2019
Posts: 8
Likes Received: 2
Can anyone provide the code for the .htaccess file?
Joined: Nov 2016
Posts: 470
Likes Received: 60
Joined: May 2022
Posts: 136
Likes Received: 15
I hope the article helped you.
Joined: Apr 2023
Posts: 14
Likes Received: 0
A page that is disallowed in robots.txt can still be indexed if it is linked to from other websites. While Google won't crawl or read the content blocked by a robots.txt file, it can still discover and index a disallowed URL if that URL is linked from other places on the web.
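For that case, the usual fix is a noindex signal rather than a robots.txt Disallow, since a disallowed page is never crawled and its noindex directive is never seen. A hedged .htaccess sketch, assuming Apache with mod_headers enabled and `private.html` as a placeholder filename:

```apache
# Send a noindex header for private.html so search engines drop it
# from their index (requires mod_headers; do NOT also disallow this
# page in robots.txt, or crawlers will never see the header)
<Files "private.html">
    Header set X-Robots-Tag "noindex, nofollow"
</Files>
```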