Deploying the best E-Resources for Software Engineering Students

We at IT Engg Portal provide all Computer and IT Engineering students of Pune University with well-compiled, easy-to-learn notes and other e-resources based on the curriculum

PowerPoint Presentations and Video Lectures for Download

We provide the most recommended PowerPoint presentations and video lectures from prominent universities for most of the difficult subjects, to ease your learning process

Bundling Codes for your Lab Practicals

Deploying the best of available E-Resources for Tech Preparation (Campus Placements)

The Complete Placement Guide

Our team has worked hard to compile this e-book for all students heading for campus placements; it is a complete solution for technical placement preparation.

Pune University's most viewed website for Computer and IT Engineering

With more than 4,00,0000 pageviews from 114 countries across the globe, we are now the most viewed website for e-books and other e-resources in Computer and IT Engineering


Saturday, February 18, 2012

What is a Web Crawler? [Information Retrieval] - BE [Comp/IT]

         
The Web crawler has long been an interesting topic of discussion for students of Computer Science and IT. Whenever you run a query on Google, along with the results you also see a statistic just below the search bar, something like "132,000 results in 0.23 seconds". Ever wondered how Google can query its search database that fast? The answer is the Web crawler. Google, like any other search engine, uses a Web crawler to traverse the World Wide Web in an orderly fashion and to store a copy of each web page in its database, organized so that the database can be queried quickly and results returned at remarkable speed.

Let's make it a little simpler: what is a Web crawler?
A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner - in other words, in an orderly fashion.


A crawler is a program that visits websites and reads their pages and other information in order to create entries for a search engine index. The major search engines on the Web all have such a program, which is also known as a "spider" or a "bot". Crawlers are typically programmed to visit sites that have been submitted by their owners as new or updated. Entire sites or specific pages can be selectively visited and indexed. Crawlers apparently gained the name because they crawl through a site a page at a time, following the links to other pages on the site until all pages have been read.
Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or (especially in the FOAF community) Web scutters.

 This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam). 


A Deeper Look:
 A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
The large volume implies that the crawler can only download a fraction of the Web pages within a given time, so it needs to prioritize its downloads. The high rate of change implies that the pages might have already been updated or even deleted.
The number of possible crawlable URLs being generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer three options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 48 different URLs, all of which may be linked on the site. This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
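
One standard mitigation for this duplicate-URL problem (not covered in the text above) is URL canonicalization: before a link is added to the crawl frontier, the crawler rewrites it into a normal form so that trivially different spellings of the same page collapse to a single entry. The Java sketch below is only a minimal illustration; the host name and the particular rules chosen (lower-casing, dropping fragments, default ports and the whole query string) are assumptions made for the example, since a real crawler decides per site which parameters actually select distinct content.

import java.net.URI;
import java.net.URISyntaxException;

public class UrlCanonicalizer {

    // Rewrite a URL into a simple canonical form so that trivially different
    // spellings of the same page map to one frontier entry. The rules here
    // (lower-case scheme and host, drop default ports, fragments and the
    // entire query string) are illustrative assumptions, not a standard.
    public static String canonicalize(String url) throws URISyntaxException {
        URI u = new URI(url.trim());
        String scheme = (u.getScheme() == null) ? "http" : u.getScheme().toLowerCase();
        String host   = (u.getHost()   == null) ? ""     : u.getHost().toLowerCase();
        int port = u.getPort();
        if (port == 80  && scheme.equals("http"))  port = -1;   // drop default ports
        if (port == 443 && scheme.equals("https")) port = -1;
        String path = (u.getPath() == null || u.getPath().isEmpty()) ? "/" : u.getPath();
        // A real crawler would keep the query parameters known to select
        // distinct content instead of discarding the query string entirely.
        return new URI(scheme, null, host, port, path, null, null).toString();
    }

    public static void main(String[] args) throws URISyntaxException {
        // All three spellings collapse to http://gallery.example.com/photos
        System.out.println(canonicalize("HTTP://Gallery.Example.com:80/photos?sort=date&thumb=small"));
        System.out.println(canonicalize("http://gallery.example.com/photos?sort=name&fmt=png#top"));
        System.out.println(canonicalize("http://gallery.example.com/photos"));
    }
}

With such a rule in place, the 48 gallery URLs from the example above would all reduce to one canonical URL and be fetched only once.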




Pseudo Code for a Web Crawler:

Get the user's input: the starting URL and the desired file type.
Add the URL to the (currently empty) list of URLs to search.
While the list of URLs to search is not empty,
  {
    Get the first URL in the list.
    Move the URL to the list of URLs already searched.
    Check the URL to make sure its protocol is HTTP
      (if not, break out of the loop, back to "While").
    See whether there's a robots.txt file at this site
      that includes a "Disallow" statement.
      (If so, break out of the loop, back to "While".)
    Try to "open" the URL (that is, retrieve that document from the Web).
    If it's not an HTML file, break out of the loop, back to "While".
    Step through the HTML file. While the HTML text contains another link,
    {
      Validate the link's URL and make sure robots are
        allowed (just as in the outer loop).
      If it's an HTML file,
        If the URL isn't present in either the to-search
          list or the already-searched list, add it to
          the to-search list.
      Else if it's the type of the file the user requested,
        Add it to the list of files found.
    }
  }
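
The pseudocode above consults each site's robots.txt for a "Disallow" rule before fetching, a step the Java program further below leaves out. Here is a rough, hedged sketch of that check: it takes the simplest possible reading of robots.txt, treating every Disallow line as applying to our crawler and ignoring User-agent grouping, Allow rules and wildcards, and the class and method names are made up for illustration.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class RobotsCheck {

    // Crude robots.txt test: fetch http://host/robots.txt and report whether
    // any Disallow rule is a prefix of the path we want to crawl. Real
    // robots.txt handling must also honour User-agent sections, Allow lines
    // and wildcards; this sketch deliberately ignores all of that.
    public static boolean isAllowed(String host, String path) {
        try {
            URL robots = new URL("http://" + host + "/robots.txt");
            BufferedReader in = new BufferedReader(new InputStreamReader(robots.openStream()));
            String line;
            while ((line = in.readLine()) != null) {
                line = line.trim();
                if (line.toLowerCase().startsWith("disallow:")) {
                    String rule = line.substring("disallow:".length()).trim();
                    if (!rule.isEmpty() && path.startsWith(rule)) {
                        in.close();
                        return false;   // a Disallow rule covers this path
                    }
                }
            }
            in.close();
        }
        catch (Exception e) {
            // No robots.txt, or the site is unreachable: treat it as crawlable
            // here and let the actual page fetch deal with any error.
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("www.cs.princeton.edu", "/"));
    }
}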
Below is the source of WebCrawler.java from §7.2, Regular Expressions.

/*************************************************************************
* Compilation: javac WebCrawler.java In.java
* Execution: java WebCrawler url
* Downloads the web page and prints out all URLs on the web page.
* Gives an idea of how Google's spider crawls the web. Instead of
* looking for hyperlinks, we just look for patterns of the form:
* http:// followed by an alternating sequence of alphanumeric
* characters and dots, ending with a sequence of alphanumeric
* characters.
* % java WebCrawler http://www.slashdot.org
* http://www.slashdot.org
* http://www.osdn.com
* http://sf.net
* http://thinkgeek.com
* http://freshmeat.net
* http://newsletters.osdn.com
* http://slashdot.org
* http://osdn.com
* http://ads.osdn.com
* http://sourceforge.net
* http://www.msnbc.msn.com
* http://www.rhythmbox.org
* http://www.apple.com
* % java WebCrawler http://www.cs.princeton.edu
* http://www.cs.princeton.edu
* http://www.w3.org
* http://maps.yahoo.com
* http://www.princeton.edu
* http://www.Princeton.EDU
* http://ncstrl.cs.Princeton.EDU
* http://www.genomics.princeton.edu
* http://www.math.princeton.edu
* http://libweb.Princeton.EDU
* http://libweb2.princeton.edu
* http://www.acm.org
* Instead of setting the system property in the code, you could do it
* from the commandline
* % java -Dsun.net.client.defaultConnectTimeout=250 WebCrawler http://www.cs.princeton.edu
********************************************************/
import java.util.regex.Pattern;
import java.util.regex.Matcher;

// Queue, SET, and In are helper classes from the Princeton booksite
// (Queue.java, SET.java, In.java); they are not part of the JDK and
// must be compiled alongside this file.
public class WebCrawler {

    public static void main(String[] args) {

        // timeout connection after 500 milliseconds
        System.setProperty("sun.net.client.defaultConnectTimeout", "500");
        System.setProperty("sun.net.client.defaultReadTimeout", "1000");

        // initial web page
        String s = args[0];

        // list of web pages to be examined
        Queue<String> q = new Queue<String>();
        q.enqueue(s);

        // existence symbol table of examined web pages
        SET<String> set = new SET<String>();
        set.add(s);

        // breadth-first search crawl of the web
        while (!q.isEmpty()) {
            String v = q.dequeue();
            System.out.println(v);
            In in = new In(v);

            // only needed in case website does not respond
            if (!in.exists()) continue;
            String input = in.readAll();

            /*************************************************************
             * Find links of the form: http://xxx.yyy.zzz
             * \\w+ for one or more alpha-numeric characters
             * \\. for dot
             * could take first two statements out of loop
             *************************************************************/
            String regexp = "http://(\\w+\\.)*(\\w+)";
            Pattern pattern = Pattern.compile(regexp);
            Matcher matcher = pattern.matcher(input);

            // find and print all matches
            while (matcher.find()) {
                String w = matcher.group();
                if (!set.contains(w)) {
                    q.enqueue(w);
                    set.add(w);
                }
            }
        }
    }
}
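
One design note on the program above: the pattern http://(\\w+\\.)*(\\w+) matches only a scheme and host name, so a link such as http://www.cs.princeton.edu/courses is truncated to its site root, and https links are missed entirely. The snippet below is a hedged variation, not part of the original program; the broader character class used for the path is an assumption about what should count as a link and will still misjudge some URLs.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class UrlPatternDemo {

    public static void main(String[] args) {
        String input = "See http://www.cs.princeton.edu/courses/archive and https://www.w3.org/TR/ for details.";

        // Scheme (http or https), host, then an optional path.
        // The path character class is kept deliberately conservative.
        String regexp = "https?://(\\w+\\.)*\\w+(/[\\w./-]*)?";
        Matcher matcher = Pattern.compile(regexp).matcher(input);
        while (matcher.find()) {
            System.out.println(matcher.group());
        }
        // Prints:
        //   http://www.cs.princeton.edu/courses/archive
        //   https://www.w3.org/TR/
    }
}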

-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-
   

Sunday, February 5, 2012

Information Retrieval: [BE - IT | Sem 8] Books Download

Information retrieval (IR) is an important and easy-to-learn subject introduced in the 8th semester of Information Technology Engineering at Pune University. The objective of the subject is to deal with the representation, storage, organization of, and access to information items. It covers the basics and the important aspects of Information Retrieval: the need for retrieval, different retrieval algorithms, taxonomy and ontology, IR models and languages, and so on. The subject is interesting and scoring, as well as easy to learn.


The syllabus for Information Retrieval is as follows (according to the 2008 pattern):

Unit I:
Basic Concepts of IR, Data Retrieval & Information Retrieval, IR system block diagram. Automatic Text Analysis, Luhn's ideas, Conflation Algorithm, Indexing and Index Term Weighting, Probabilistic Indexing, Automatic Classification. Measures of Association, Different Matching Coefficients, Classification Methods, Cluster Hypothesis. Clustering Algorithms, Single Pass Algorithm, Single Link Algorithm, Rocchio's Algorithm and Dendrograms

Unit II:
File Structures, Inverted file, Suffix trees & suffix arrays, Signature files, Ring Structure, IR Models, Basic concepts, Boolean Model, Vector Model, and Fuzzy Set Model. Search Strategies, Boolean search, serial search, and cluster based retrieval, Matching Function

Unit III:
Performance Evaluation - Precision and recall, alternative measures, reference collections (TREC Collection); Libraries & Bibliographical Systems - Online IR systems, OPACs; Digital Libraries - architecture issues, document models, representation & access, prototypes, projects & interfaces, standards

Unit IV :
Taxonomy and Ontology: Creating domain specific ontology, Ontology life cycle
Distributed and Parallel IR: Relationships between documents, Identify appropriate networked collections, Multiple distributed collections simultaneously, Parallel IR - MIMD Architectures, Distributed IR – Collection Partitioning, Source Selection, Query Processing

Unit V :
Multimedia IR models & languages - data modeling, techniques to represent audio and visual documents, query languages. Indexing & searching - generic multimedia indexing approach, query databases of multimedia documents, display the results of multimedia searches, one-dimensional time series, two-dimensional color images, automatic feature extraction.

Unit VI:
Searching the Web, Challenges, Characterizing the Web, Search Engines, Browsing, Meta searchers, Web crawlers, robot exclusion, Web data mining, Metacrawler, Collaborative filtering, Web agents (web shopping, bargain finder, ..), Economic, ethical, legal and political issues.

Download Books :

Modern Information Retrieval
Baeza-Yates, Ribeiro-Neto


File Type: PDF
Size: 1.13 MB
or
Alternative Download Link
File Type: PDF
Size: 1.95 MB


---------------------------------------------------------------------------------------------------


Information Retrieval:
Implementing and Evaluating Search Engines
Büttcher, Clarke, Cormack




Download Now
File Type: DJVU
Size : 12 MB


-----------------------------------------------------------------------------------------


Managing Gigabytes
Witten, Moffat, Bell
Download Now
File Type: DJVU
Size : 11 MB


---------------------------------------------------------------------------
Information Retrieval:
Data Structures and Algorithms
William Frakes




Download Now 
File Type: PDF (Zip)
Size : 1 MB


-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-x-xx-