SEARCH ENGINES:
· A web search engine is a software system that is designed to search for information on the World Wide Web.
· The search results are generally presented as a list of results, often referred to as search engine results pages (SERPs). The information may be a mix of web pages, images, and other types of files.
· Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler.
A web crawler (also known as a web spider or web robot) is a program or automated script that browses the World Wide Web in a methodical, automated manner. This process is called web crawling or spidering. Many legitimate sites, in particular search engines, use spidering as a means of providing up-to-date data.
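As a rough illustration, a crawler can be sketched as a queue of URLs to visit plus a set of URLs already seen: fetch a page, record it, extract its links, and enqueue them. Below is a minimal sketch in Python, assuming the third-party requests and beautifulsoup4 libraries are installed; the breadth-first strategy, the seed URL parameter, and the max_pages cap are illustrative choices, not how any particular engine works.

    import urllib.parse
    from collections import deque

    import requests
    from bs4 import BeautifulSoup

    def crawl(seed_url, max_pages=50):
        """Breadth-first crawl from seed_url; returns {url: html} for fetched pages."""
        queue = deque([seed_url])   # URLs waiting to be visited
        visited = set()             # URLs already requested
        pages = {}                  # url -> raw HTML
        while queue and len(pages) < max_pages:
            url = queue.popleft()
            if url in visited:
                continue
            visited.add(url)
            try:
                response = requests.get(url, timeout=5)
            except requests.RequestException:
                continue            # skip unreachable pages
            pages[url] = response.text
            # Pull out the page's links and queue their absolute forms.
            soup = BeautifulSoup(response.text, "html.parser")
            for anchor in soup.find_all("a", href=True):
                queue.append(urllib.parse.urljoin(url, anchor["href"]))
        return pages

A production crawler would also honor robots.txt, rate-limit its requests, and normalize URLs before deduplicating; those details are left out of this sketch.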
There are differences in the ways various search engines work, but they all perform three basic tasks:
- They search the Internet, or select pieces of it, based on important words.
- They keep an index of the words they find, and where they find them (a sketch of such an index follows this list).
- They allow users to look for words or combinations of words found in that index.
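The second and third tasks amount to building and querying an inverted index: a map from each word to the documents in which it appears. The Python sketch below uses made-up example documents to show the idea; real engines also store word positions and rankings in far more compact structures.

    from collections import defaultdict

    def build_index(documents):
        """Map each word to the set of document IDs that contain it."""
        index = defaultdict(set)
        for doc_id, text in documents.items():
            for word in text.lower().split():
                index[word].add(doc_id)
        return index

    def search(index, query):
        """Return IDs of documents containing every query word (AND search)."""
        matches = [index.get(word, set()) for word in query.lower().split()]
        return set.intersection(*matches) if matches else set()

    # Made-up example documents:
    docs = {
        "page1": "web search engines index the web",
        "page2": "a web crawler browses pages automatically",
    }
    index = build_index(docs)
    print(search(index, "web crawler"))   # -> {'page2'}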
Early search engines held an index of a few hundred thousand pages and documents, and received maybe one or two thousand inquiries each day. Today, a top search engine will index hundreds of millions of pages, and respond to tens of millions of queries per day.