11-28-2017, 12:25 PM
Robots.txt is a text file webmasters create to instruct web robots (typically search engine robots) how to crawl pages on their website. The robots.txt file is part of the robots exclusion protocol (REP), a group of web standards that regulate how robots crawl the web, access and index content, and serve that content up to users. The REP also includes directives like meta robots, as well as page-, subdirectory-, or site-wide instructions for how search engines should treat links (such as "follow" or "nofollow").
In practice, robots.txt files indicate whether certain user agents (web-crawling software) can or cannot crawl parts of a site. These crawl instructions are specified by "disallowing" or "allowing" the behavior of certain (or all) user agents, as in the small example below.
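Here is a minimal sketch of how those allow/disallow rules behave, using Python's standard-library robots.txt parser. The user agent names, the /private/ path, and the example.com URLs are made up purely for illustration, not taken from any real site.

# Hypothetical robots.txt: block all crawlers from /private/, but let Googlebot in.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/

User-agent: Googlebot
Allow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A generic crawler is disallowed from the /private/ section...
print(parser.can_fetch("SomeOtherBot", "https://example.com/private/page.html"))  # False
# ...but the explicitly allowed user agent may crawl it...
print(parser.can_fetch("Googlebot", "https://example.com/private/page.html"))     # True
# ...and pages not covered by any Disallow rule stay crawlable for everyone.
print(parser.can_fetch("SomeOtherBot", "https://example.com/public/page.html"))   # True

This mirrors how real crawlers are expected to read the file: each one looks for the group of rules addressed to its own user agent name and falls back to the "*" group if none matches.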