Definition Of Search Engine

Written by Unknown on Wednesday, 3 October 2012 | 03:51


In today's world of websites and weblogs, publishers and bloggers in particular need a working knowledge of search engine optimization. Once your website or weblog is ready, the next task is to submit it to Google's search engine or another one. Before doing so, the questions to answer are: what is a search engine, how does it work, and what are its functions?








  1. Understanding Search Engines
    A web search engine is a computer program designed to search for information available on the Internet. Unlike web directories (such as dmoz.org), which are maintained by humans who classify pages of information according to criteria, a web search engine collects the available information automatically.
  2. How Search Engines Work
    Web search engines work by storing information about large numbers of web pages, which they retrieve directly from the web. These pages are fetched automatically. The contents of each page are then analyzed to determine how it should be indexed (for example, words may be taken from the title, headings, or special fields called meta tags).
    Data about web pages is stored in an index database for use in later queries. Some search engines, such as Google, store all or part of the source page (called a cache) as well as information about the page itself. When a user visits a search engine and enters a query, typically a set of keywords, the engine consults its index and returns a list of the web pages that best match the criteria, usually with a short summary containing the document's title and sometimes parts of the text. Other search engines, such as Orase, use real-time processing instead of an index: the required information is collected only when a search is actually performed. Compared with index-based systems such as Google's, a real-time system is superior in some respects: the information is always fresh, there are (almost) no broken links, and fewer system resources are needed (Google reportedly uses nearly 100,000 computers, Orase only one). The trade-off, however, is that queries take longer to complete.
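    The index-based pipeline described above can be sketched in a few lines of Python. This is a minimal, illustrative sketch, not how any real engine is implemented; the page data and the choice of fields (title and meta keywords) are invented for the example.

    ```python
    # Extract terms from specific fields of a page (here the title and
    # meta keywords, as the text mentions) and store them in an index
    # that maps each term to the pages containing it.
    page = {
        "url": "http://example.com/",
        "title": "Search Engine Basics",
        "meta_keywords": "search, index, crawler",
    }

    index = {}
    for field in ("title", "meta_keywords"):
        for term in page[field].lower().replace(",", " ").split():
            index.setdefault(term, set()).add(page["url"])

    # The index now holds terms drawn from the title and meta tags.
    print(sorted(index))
    ```

    A later query for any of these terms is then a simple dictionary lookup against the index rather than a fresh fetch of the page.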
  3. The Main Components of a Search Engine
    A search engine has a number of components that together let it provide its primary service as an information search tool. These components include:

    1. Web Crawler
      A web crawler, also known as a web spider, is tasked with collecting all the information in web pages. Crawlers work automatically: given a website address to visit, they store all the information found there. Each time a crawler visits a website, it records every link on the page so it can visit them later, one by one. The process by which a crawler visits web documents is called web crawling or spidering. Some websites, especially those related to search, use the spidering process to keep their data up to date. Web crawlers are commonly used to make copies of part or all of the pages they have visited so that an indexing system can process them further. Crawlers can also be used for website maintenance, such as validating a site's HTML code, or to gather specific data, such as collecting e-mail addresses. A crawler belongs to the class of software agents better known as bots. In general, a crawler starts its work from a list of website addresses to visit, called the seeds. Each time it visits a web page, the crawler finds the addresses it contains and adds them to the previous list of seeds. While doing this, a web crawler must be able to deal with several issues, including:
      • Which pages should be visited first.
      • The rules for re-visiting a page.
      • Performance, including the number of pages that must be visited.
      • Politeness rules on each visit, so that the visited server is not overloaded.
      • Failures, including unavailable pages, servers that are down, timeouts, and traps deliberately created by webmasters.
      • How deep into a website the crawler should go.
      • No less important is the ability of a web crawler to keep up with the
        development of web technology: every time a new technology appears, the crawler must be able to adapt so that it can still visit pages that use it.

      To collect the links contained in a web page, a crawler can take a regular-expression approach: it scans the page's characters looking for HTML hyperlink tags (<a>). Each hyperlink tag found is then examined to see whether it contains a rel="nofollow" attribute; if not, the crawler takes the value of the href attribute, which is a new link.
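      The link-extraction step described above can be sketched as follows. This is illustrative only: a production crawler would normally use a real HTML parser rather than regular expressions, and the sample page is invented.

      ```python
      import re

      # Scan the page's HTML for <a> tags, skip tags marked rel="nofollow",
      # and collect the href values as new links (seeds) to visit later.
      A_TAG = re.compile(r"<a\b[^>]*>", re.IGNORECASE)
      HREF = re.compile(r'href\s*=\s*["\']([^"\']+)["\']', re.IGNORECASE)
      NOFOLLOW = re.compile(r'rel\s*=\s*["\'][^"\']*nofollow', re.IGNORECASE)

      def extract_links(html):
          """Return hrefs of all <a> tags that are not rel="nofollow"."""
          links = []
          for tag in A_TAG.findall(html):
              if NOFOLLOW.search(tag):
                  continue  # webmaster asked crawlers not to follow this link
              m = HREF.search(tag)
              if m:
                  links.append(m.group(1))
          return links

      page = ('<a href="/about">About</a> '
              '<a rel="nofollow" href="/login">Login</a> '
              '<a href="http://example.com/">Home</a>')
      print(extract_links(page))  # ['/about', 'http://example.com/']
      ```

      In a full crawler, each extracted link would be appended to the seed list (after de-duplication) and visited in turn.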

    2. Indexing System
      The indexing system analyzes the web pages saved earlier, indexing every possible term they contain. The terms found are stored in an index database for use in later queries. The indexing system collects, sorts, and stores the data so that information can be accessed quickly and accurately. The processing of web pages for the subsequent search process is called web indexing. In practice, indexing systems draw on several fields, such as linguistics, psychology, mathematics, informatics, physics, and computer science. The purpose of storing the data as an index is performance: speed in finding relevant information based on the user's input. Without an index, the search engine would have to scan every document in its database, which would require enormous computing resources. For example, an index of 10,000 documents can be queried within seconds, while a sequential scan of every word in those 10,000 documents would take far longer. Extra storage is needed to hold the index, but this pays off in the time saved when processing search queries.
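      One common shape for such an index is an inverted index: each term maps to the list of documents containing it, so a lookup avoids scanning every document. A minimal sketch, with invented document contents:

      ```python
      # Build an inverted index: term -> list of document ids containing it.
      documents = {
          1: "search engines store pages in an index",
          2: "an index maps terms to documents",
          3: "crawlers fetch pages from the web",
      }

      inverted_index = {}
      for doc_id, text in documents.items():
          for term in set(text.lower().split()):
              inverted_index.setdefault(term, []).append(doc_id)

      # A lookup is a single dictionary access per term, independent of
      # the total number of documents stored.
      print(sorted(inverted_index["index"]))  # [1, 2]
      print(sorted(inverted_index["pages"]))  # [1, 3]
      ```

      This is why the index costs extra storage but repays it at query time: the per-query work depends on the terms looked up, not on the size of the whole collection.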
    3. Search System
      The search system interacts directly with users, delivering the search results they want. When a user visits a search engine and enters a query, usually as several keywords, the search system looks up the data in the index database; the matching documents are then displayed, usually with a short summary containing the document's title and sometimes parts of the text.
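      The query step described above can be sketched as follows: split the query into keywords, intersect the index entries for each keyword, and return each hit with its title and a short snippet of text. The documents and titles here are invented for illustration.

      ```python
      # Toy document store and index (built the same way as in the
      # indexing sketch above).
      docs = {
          "d1": {"title": "Intro to Search",
                 "text": "search engines answer queries from an index"},
          "d2": {"title": "Web Crawling",
                 "text": "crawlers gather pages for the index"},
      }

      index = {}
      for doc_id, doc in docs.items():
          for term in set(doc["text"].lower().split()):
              index.setdefault(term, set()).add(doc_id)

      def search(query, snippet_len=40):
          """Return (title, snippet) for documents matching every keyword."""
          hits = set(docs)
          for term in query.lower().split():
              hits &= index.get(term, set())
          return [(docs[d]["title"], docs[d]["text"][:snippet_len])
                  for d in sorted(hits)]

      print(search("index"))           # both documents mention "index"
      print(search("crawlers pages"))  # only "Web Crawling" matches
      ```

      Real engines additionally rank the matches by relevance before display; this sketch only finds them.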


      Source (Indonesian):

               http://belajar-web-ku.blogspot.com/2008/08/pengertian-search-engine-optimization.html

 