Saturday, February 18, 2012

What is a Web Crawler : [Information Retrieval] - BE[Comp / IT]

         
A web crawler is an interesting topic of discussion for students of Computer Science and IT. Whenever you run a query on Google, along with the results you also get a statistic just below the search bar, something like "132,000 results in 0.23 seconds". Ever wondered how Google can query its search database so fast? The answer is the web crawler. Google, like every other major search engine, uses a web crawler to browse the World Wide Web in an orderly fashion and to store a copy of each page it visits in its database, organized so that the database can be queried quickly and results returned at remarkable speed.

Let's make it a little simpler: what is a web crawler?
A web crawler is a computer program that browses the World Wide Web in a methodical, automated, orderly fashion.


   A crawler is a program that visits Web sites and reads their pages and other information in order to create entries for a search engine index. The major search engines on the Web all have such a program, which is also known as a "spider" or a "bot." Crawlers are typically programmed to visit sites that have been submitted by their owners as new or updated. Entire sites or specific pages can be selectively visited and indexed. Crawlers apparently gained the name because they crawl through a site a page at a time, following the links to other pages on the site until all pages have been read.
Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or (especially in the FOAF community) Web scutters.

 This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam). 
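
For example, a bare-bones link checker is just a crawler that records the HTTP status of each link instead of indexing the page. Here is a minimal sketch using only the standard library; the class name, timeouts and example URLs are illustrative, not part of any particular tool.

import java.net.HttpURLConnection;
import java.net.URL;

public class LinkChecker {

    // Returns the HTTP status code for a URL, or -1 if the request fails.
    static int status(String address) {
        try {
            HttpURLConnection conn =
                (HttpURLConnection) new URL(address).openConnection();
            conn.setRequestMethod("HEAD");   // ask for headers only, not the body
            conn.setConnectTimeout(2000);
            conn.setReadTimeout(2000);
            return conn.getResponseCode();
        }
        catch (Exception e) {
            return -1;                       // unreachable host or malformed URL
        }
    }

    public static void main(String[] args) {
        // In a real link checker these URLs would be extracted from the pages themselves.
        String[] links = { "http://www.cs.princeton.edu", "http://example.com/broken-link" };
        for (String link : links)
            System.out.println(status(link) + "  " + link);
    }
}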


A Deeper Look:
 A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
The sheer volume of the Web implies that the crawler can only download a fraction of the pages within a given time, so it needs to prioritize its downloads. The Web's high rate of change implies that by the time a crawl finishes, some of the downloaded pages may already have been updated or even deleted.
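
One way to picture that prioritization: instead of a plain first-in-first-out queue, the frontier can be a priority queue ordered by some score. The sketch below uses a made-up importance value per URL; real crawlers derive such scores from link structure, update frequency and politeness rules.

import java.util.PriorityQueue;

public class Frontier {

    // A URL paired with a hypothetical importance score (higher = fetch sooner).
    static class Entry implements Comparable<Entry> {
        final String url;
        final double score;
        Entry(String url, double score) { this.url = url; this.score = score; }
        public int compareTo(Entry other) {
            return Double.compare(other.score, this.score);   // highest score first
        }
    }

    private final PriorityQueue<Entry> queue = new PriorityQueue<Entry>();

    public void add(String url, double score) { queue.add(new Entry(url, score)); }
    public String next() { return queue.poll().url; }          // best URL to fetch next

    public static void main(String[] args) {
        Frontier frontier = new Frontier();
        frontier.add("http://example.com/news", 0.9);
        frontier.add("http://example.com/archive/1997", 0.1);
        System.out.println(frontier.next());   // the news page comes out first
    }
}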
The number of possible crawlable URLs being generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET(URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer three options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 48 different URLs, all of which may be linked on the site. This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
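
A common partial defence is URL normalization: reduce every URL to a canonical form before deciding whether it has already been seen, for instance by lower-casing the host and dropping query parameters that only change presentation. The sketch below illustrates the idea; the parameter names (sort, thumb, format, nouser) are purely hypothetical.

import java.net.URI;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class Canonicalize {

    // Query parameters assumed not to affect the content (illustrative only).
    private static final Set<String> IGNORED =
        new HashSet<String>(Arrays.asList("sort", "thumb", "format", "nouser"));

    // Reduces a URL to a canonical form: lower-case host, ignored parameters removed.
    static String canonical(String address) throws Exception {
        URI uri = new URI(address);
        StringBuilder query = new StringBuilder();
        if (uri.getQuery() != null) {
            for (String param : uri.getQuery().split("&")) {
                if (IGNORED.contains(param.split("=")[0])) continue;   // presentation-only
                if (query.length() > 0) query.append('&');
                query.append(param);
            }
        }
        return uri.getScheme() + "://" + uri.getHost().toLowerCase()
             + uri.getPath() + (query.length() > 0 ? "?" + query : "");
    }

    public static void main(String[] args) throws Exception {
        // Both variants collapse to the same canonical URL.
        System.out.println(canonical("http://Gallery.example.com/photos?sort=date&format=jpg"));
        System.out.println(canonical("http://gallery.example.com/photos?format=png&sort=name"));
    }
}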




Pseudocode for a Web Crawler:

Get the user's input: the starting URL and the desired file type.
Add the URL to the (currently empty) list of URLs to search.
While the list of URLs to search is not empty
{
    Get the first URL in the list.
    Move the URL to the list of URLs already searched.
    Check the URL to make sure its protocol is HTTP
        (if not, skip it and go back to "While").
    See whether there is a robots.txt file at this site that includes
        a "Disallow" statement (if so, skip this URL and go back to
        "While"; a sketch of this check follows the pseudocode).
    Try to "open" the URL (that is, retrieve that document from the Web).
    If it is not an HTML file, skip it and go back to "While".
    Step through the HTML file. While the HTML text contains another link
    {
        Validate the link's URL and make sure robots are allowed
            (just as in the outer loop).
        If it is an HTML file,
            If the URL is not present in either the to-search list or the
            already-searched list, add it to the to-search list.
        Else, if it is the type of file the user requested,
            Add it to the list of files found.
    }
}
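
The robots.txt step above is the one most often glossed over. Below is a deliberately simplified sketch of that check: it only looks for a blanket "Disallow: /" line, whereas a compliant parser also handles per-agent sections, path prefixes and crawl delays.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class RobotsCheck {

    // Rough check: fetch http://host/robots.txt and report whether it contains
    // a blanket "Disallow: /" line. Not a compliant robots.txt parser.
    static boolean isDisallowed(String host) {
        try {
            URL robots = new URL("http://" + host + "/robots.txt");
            BufferedReader in = new BufferedReader(
                new InputStreamReader(robots.openStream()));
            boolean disallowed = false;
            String line;
            while ((line = in.readLine()) != null)
                if (line.trim().equalsIgnoreCase("Disallow: /"))
                    disallowed = true;       // site asks crawlers to stay out
            in.close();
            return disallowed;
        }
        catch (Exception e) {
            return false;                    // no robots.txt found: assume allowed
        }
    }

    public static void main(String[] args) {
        System.out.println(isDisallowed("www.cs.princeton.edu"));
    }
}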
Below is WebCrawler.java from §7.2 Regular Expressions.

/*************************************************************************
 *  Compilation:  javac WebCrawler.java In.java
 *  Execution:    java WebCrawler url
 *
 *  Downloads the web page and prints out all urls on the web page.
 *  Gives an idea of how Google's spider crawls the web. Instead of
 *  looking for hyperlinks, we just look for patterns of the form:
 *  http:// followed by an alternating sequence of alphanumeric
 *  characters and dots, ending with a sequence of alphanumeric
 *  characters.
 *
 *  % java WebCrawler http://www.slashdot.org
 *  http://www.slashdot.org
 *  http://www.osdn.com
 *  http://sf.net
 *  http://thinkgeek.com
 *  http://freshmeat.net
 *  http://newsletters.osdn.com
 *  http://slashdot.org
 *  http://osdn.com
 *  http://ads.osdn.com
 *  http://sourceforge.net
 *  http://www.msnbc.msn.com
 *  http://www.rhythmbox.org
 *  http://www.apple.com
 *
 *  % java WebCrawler http://www.cs.princeton.edu
 *  http://www.cs.princeton.edu
 *  http://www.w3.org
 *  http://maps.yahoo.com
 *  http://www.princeton.edu
 *  http://www.Princeton.EDU
 *  http://ncstrl.cs.Princeton.EDU
 *  http://www.genomics.princeton.edu
 *  http://www.math.princeton.edu
 *  http://libweb.Princeton.EDU
 *  http://libweb2.princeton.edu
 *  http://www.acm.org
 *
 *  Instead of setting the system property in the code, you could do it
 *  from the command line:
 *  % java -Dsun.net.client.defaultConnectTimeout=250 WebCrawler http://www.cs.princeton.edu
 *
 *************************************************************************/

import java.util.regex.Pattern;
import java.util.regex.Matcher;

// Note: Queue, SET, and In are helper classes from the Princeton booksite's
// standard libraries; they must be on the classpath when compiling.
public class WebCrawler {

    public static void main(String[] args) {

        // timeout connection after 500 milliseconds
        System.setProperty("sun.net.client.defaultConnectTimeout", "500");
        System.setProperty("sun.net.client.defaultReadTimeout",    "1000");

        // initial web page
        String s = args[0];

        // list of web pages to be examined
        Queue<String> q = new Queue<String>();
        q.enqueue(s);

        // existence symbol table of examined web pages
        SET<String> set = new SET<String>();
        set.add(s);

        // breadth-first search crawl of the web
        while (!q.isEmpty()) {
            String v = q.dequeue();
            System.out.println(v);

            In in = new In(v);

            // only needed in case the website does not respond
            if (!in.exists()) continue;
            String input = in.readAll();

            /*************************************************************
             *  Find links of the form: http://xxx.yyy.zzz
             *  \\w+ for one or more alphanumeric characters
             *  \\.  for dot
             *  could take the first two statements out of the loop
             *************************************************************/
            String regexp = "http://(\\w+\\.)*(\\w+)";
            Pattern pattern = Pattern.compile(regexp);
            Matcher matcher = pattern.matcher(input);

            // find all matches and enqueue the ones not seen before
            while (matcher.find()) {
                String w = matcher.group();
                if (!set.contains(w)) {
                    q.enqueue(w);
                    set.add(w);
                }
            }
        }
    }
}
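
One limitation of the regular expression in WebCrawler.java is that it only matches plain http:// links and stops at the host name, so https links and paths are missed. If you want to experiment, a slightly broader (still simplistic) pattern could be dropped in instead; the test string below is just an example.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UrlPattern {
    public static void main(String[] args) {
        // Also matches https and a simple path component; still a rough
        // approximation, not a full URL parser.
        String regexp = "https?://(\\w+\\.)*(\\w+)(/[\\w\\-./]*)?";
        Matcher matcher = Pattern.compile(regexp).matcher(
            "See https://www.cs.princeton.edu/courses/ and http://sf.net for details.");
        while (matcher.find())
            System.out.println(matcher.group());
    }
}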

   
