SEO MotionZ Forum
Is robots.txt file enough to block some pages from Search Engines? - Printable Version

+- SEO MotionZ Forum (https://seomotionz.com)
+-- Forum: Digital Workplace (https://seomotionz.com/forumdisplay.php?fid=10)
+--- Forum: Blog Management & Promotion (https://seomotionz.com/forumdisplay.php?fid=26)
+--- Thread: Is robots.txt file enough to block some pages from Search Engines? (/showthread.php?tid=6491)



Is robots.txt file enough to block some pages from Search Engines? - lammao - 10-24-2019

Is a robots.txt file enough to block some pages from search engines? I mean from both crawling and indexing.


RE: Is robots.txt file enough to block some pages from Search Engines? - ksmith29 - 10-26-2019

No, it's not enough. It's best to block them through the .htaccess file.


RE: Is robots.txt file enough to block some pages from Search Engines? - danni - 10-30-2019

(10-24-2019, 05:52 AM)lammao Wrote: Is a robots.txt file enough to block some pages from search engines? I mean from both crawling and indexing.

The robots.txt file only tells search engines (or, some might say, suggests) whether they can or cannot crawl and index a page. It doesn't actually block them. The .htaccess file is the best option for you, as @ksmith29 suggested.
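
For illustration, a minimal robots.txt sketch (the /private/ path is just a placeholder) that asks every crawler not to fetch that directory looks like this:

    User-agent: *
    Disallow: /private/

A well-behaved crawler will skip /private/, but this is only a request; it doesn't force anything on the server side.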


RE: Is robots.txt file enough to block some pages from Search Engines? - lammao - 11-08-2019

Can anyone provide me with the code for the .htaccess file?


RE: Is robots.txt file enough to block some pages from Search Engines? - danni - 11-13-2019

Follow this article. https://help.dreamhost.com/hc/en-us/articles/216363167-How-do-I-deny-access-to-my-site-with-an-htaccess-file-
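
If you only want to keep search engine bots out (rather than all visitors, as the article's examples do), a rough .htaccess sketch along these lines should work, assuming mod_rewrite is enabled and /private/ stands in for whatever path you want to protect:

    RewriteEngine On
    # Return 403 Forbidden to common crawler user agents for anything under /private/
    RewriteCond %{HTTP_USER_AGENT} (Googlebot|Bingbot|Slurp|DuckDuckBot|Baiduspider) [NC]
    RewriteRule ^private/ - [F,L]

Unlike robots.txt, this actually refuses the request instead of politely asking the bot to stay away.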


RE: Is robots.txt file enough to block some pages from Search Engines? - smartscraper - 07-15-2022

I hope the article helped you.


RE: Is robots.txt file enough to block some pages from Search Engines? - psychicrajsharma - 05-15-2023

A page that is disallowed in robots.txt can still be indexed if it is linked to from other websites. While Google won't crawl or index the content blocked by a robots.txt file, it can still discover and index a disallowed URL if it is linked from other places on the web.
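
If the concern is indexing rather than crawling, a noindex signal is what actually keeps a URL out of the results. One way to send it for specific files through .htaccess (assuming Apache with mod_headers enabled; the file names below are placeholders) is roughly:

    <FilesMatch "^(old-page|landing-test)\.html$">
        Header set X-Robots-Tag "noindex, nofollow"
    </FilesMatch>

Note that the page has to stay crawlable for the header to be seen, so don't also disallow it in robots.txt.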