Friday, November 14, 2008

WEB BOT PROJECT

In the early 1990s a technology was created in which a series of "spiders," or pre-programmed search bots, scanned the Internet for key words. These bots scanned the web at incredible speeds, largely targeting blogs, forums and other social sites. When a bot found a key word, it took a sample of the data surrounding it and returned that "snapshot" to a central filter, where the snapshots were used to make predictions about marketing trends and other economic shifts.
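The project's own software has never been published; the following minimal Python sketch only illustrates the described workflow (scan text for watch-list words and keep a snapshot of the surrounding context), with a purely hypothetical keyword list and window size:

import re

KEYWORDS = {"flood", "shortage", "collapse"}   # hypothetical watch list
WINDOW = 5                                      # words kept on each side of a hit

def snapshots(text, keywords=KEYWORDS, window=WINDOW):
    """Return a list of (keyword, context) pairs found in the text."""
    words = re.findall(r"\w+", text.lower())
    hits = []
    for i, word in enumerate(words):
        if word in keywords:
            context = " ".join(words[max(0, i - window): i + window + 1])
            hits.append((word, context))
    return hits

print(snapshots("Residents fear a flood of the river after heavy rain."))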
Recently, this technology, widely referred to as "The Web Bot Project," has been used to make a great many predictions of debatable accuracy about human society as a whole.
Some predictions the Web Bot has made include:
Predictions of a military background for the Washington, D.C. sniper spree.
Loose references to the Space Shuttle Columbia tragedy.
References to Vice President Dick Cheney being involved in a gunshot wounding.
A major event predicted in the period of days around the 9/11 attacks.
McCain would end his campaign in September 2008 for health reasons (which turned out to be true for one week, but for the health of the economy rather than his own).
Louisiana would experience an "electrifying" water event that would drive 300,000 people away from the shore (Hurricane Rita did this; the post-event comments noted how "shockingly fast" the water rose in the city).
As the last two predictions show, the Web Bots are good at identifying events, but the words do not always match what actually occurs, so the output requires careful interpretation.

WEB DOCUMENTARY

A web documentary is a documentary production that differs from the more traditional forms—video, audio, photographic—by applying a full complement of multimedia tools. The interactive multimedia capability of the Internet provides documentarians with a unique medium to create non-linear productions that combine photography, text, audio, video, animation and infographics.
How web documentaries differ from film documentaries
The web documentary differs from film documentaries through the integration of a combination of multimedia assets (photos, text, audio, animation, graphic design, etc.) and the requirement that the viewer interact with, or navigate through, the story.
Compared to a linear narrative where the destination of the story is pre-determined by the filmmaker, a web documentary provides a viewer with the experience of moving through the story via clusters of information. The integration of information architecture, graphic design, imagery, titles and sub-titles all play a role in providing visual clues to the viewer as to the sequence through which they should move through the web documentary. But from that point, it becomes the viewer's discretion to poke their heads into the nooks and crannies of the project, exploring the components of the story that interest them the most.

WEB CAM

Webcams are video capturing devices connected to computers or computer networks, often using USB or, if they connect to networks, ethernet or Wi-Fi. They are well-known for their low manufacturing costs and flexible applications.
Web-accessible cameras
A network camera such as an Axis model can be connected directly to a network or the Internet via an RJ45 connector on its rear, and users can access the picture by connecting to its onboard web server. In addition to use for personal videoconferencing, it was quickly realised that World Wide Web users enjoyed viewing images from cameras set up by others elsewhere in the world. While the term "webcam" refers to the technology generally, the first part of the term ("web-") is often replaced with a word describing what can be viewed with the camera, such as netcam or streetcam. Educators can use webcams to take their students on virtual field trips.
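As a rough illustration of pulling a still image from such a camera's onboard web server, here is a minimal Python sketch; the camera address and snapshot path are assumptions, since real cameras document their own endpoints:

import urllib.request

# Assumed address and path; consult the camera's documentation for the real endpoint.
CAMERA_URL = "http://192.168.1.90/snapshot.jpg"

with urllib.request.urlopen(CAMERA_URL, timeout=10) as response:
    image_bytes = response.read()

with open("snapshot.jpg", "wb") as f:
    f.write(image_bytes)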
Today there are millions of webcams that provide views into homes, offices and other buildings as well as providing panoramic views of cities (Metrocams) and the countryside. Webcams are used to monitor traffic with TraffiCams, the weather with WeatherCams and even volcanoes with VolcanoCams. Webcam aggregators allow viewers to search for specific webcams based on geography or other criteria.

Thursday, November 13, 2008

WEB FORUM

An Internet forum, or message board, is an online discussion site.[1] It is the modern equivalent of a traditional bulletin board and a technological evolution of the dial-up bulletin board system. From a technological standpoint, forums[note 1] or boards are web applications managing user-generated content. Forums consist of a group of contributors who usually must be registered with the system, becoming known as members. The members submit topics for discussion (known as threads) and communicate with each other using publicly visible messages (referred to as posts) or private messaging. Forums usually allow anonymous visitors only to view the posted content.
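The structure described above (members, threads and publicly visible posts) can be sketched as a toy data model; the field names below are illustrative and not taken from any particular forum package:

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Member:
    username: str

@dataclass
class Post:
    author: Member
    body: str
    created: datetime = field(default_factory=datetime.utcnow)

@dataclass
class Thread:
    title: str
    starter: Member
    posts: list = field(default_factory=list)

    def reply(self, author: Member, body: str) -> Post:
        # Posts are publicly visible and appended to the thread in order.
        post = Post(author, body)
        self.posts.append(post)
        return post

alice = Member("alice")
thread = Thread("Favourite text editors?", alice)
thread.reply(alice, "Starting the discussion: what does everyone use?")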
People participating in an Internet forum will usually build bonds with each other, and interest groups easily form around a topic's discussion, the subjects dealt with, or the sections of the forum. The term community refers to the segment of the online community participating in the activities of the web site; it is also used to refer to the group interested in the topic across the Internet, rather than just on that site.

WEB FEED

A web feed (or news feed) is a data format used for providing users with frequently updated content. Content distributors syndicate a web feed, thereby allowing users to subscribe to it. Making a collection of web feeds accessible in one spot is known as aggregation, which is performed by an Internet aggregator. A web feed is also sometimes referred to as a syndicated feed.
In the typical scenario of using web feeds, a content provider publishes a feed link on their site which end users can register with an aggregator program (also called a feed reader or a news reader) running on their own machines; doing this is usually as simple as dragging the link from the web browser to the aggregator. When instructed, the aggregator asks all the servers in its feed list if they have new content; if so, the aggregator either makes a note of the new content or downloads it. Aggregators can be scheduled to check for new content periodically. Web feeds are an example of pull technology, although they may appear to push content to the user.
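A minimal aggregator along these lines might look like the following Python sketch, which uses the third-party feedparser package; the subscription URLs are placeholders, and a real reader would persist the set of seen items between runs:

import feedparser

subscriptions = [
    "https://example.com/news.rss",
    "https://example.org/blog/atom.xml",
]
seen = set()   # a real aggregator would store this between polling runs

for url in subscriptions:
    feed = feedparser.parse(url)          # fetch and parse the feed
    for entry in feed.entries:
        key = entry.get("id") or entry.get("link")
        if key and key not in seen:
            seen.add(key)
            print(f"New item from {url}: {entry.get('title', '(no title)')}")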
The kinds of content delivered by a web feed are typically HTML (webpage content) or links to webpages and other kinds of digital media. Often when websites provide web feeds to notify users of content updates, they only include summaries in the web feed rather than the full content itself.
Web feeds are operated by many news websites, weblogs, schools, and podcasters.
Benefits
Web feeds have some advantages compared to receiving frequently published content via email:
When subscribing to a feed, users do not disclose their email address, so they do not increase their exposure to threats associated with email: spam, viruses, phishing, and identity theft.
If users want to stop receiving news, they do not have to send an "unsubscribe" request; they can simply remove the feed from their aggregator.
The feed items are automatically "sorted" in the sense that each feed URL has its own set of entries (unlike an email inbox, where all messages are in one big pile and email programs have to resort to complicated rules and pattern matching).
A feed reader is required for using web feeds. This tool works like an automated e-mail program, but no e-mail address is needed: the user subscribes to a particular web feed and thereafter receives updated content every time an update takes place. Feed readers may be online (like a webmail account) or offline; recently a number of mobile readers have arrived on the market. An offline web feed is downloaded to the user's system.
Feed readers are used in personalized home page services such as iGoogle, My Yahoo or My MSN to make content such as news, weather and stock quotes appear on the user's personal page. Content from other sites can also be added to that personalized page, again using feeds.
Organizations can use a web feed server behind their firewall to distribute, manage and track the use of internal and external web feeds by users and groups.
Other web-based tools are primarily dedicated to feed reading; one of the most popular web-based feed readers at this point is Bloglines, which is also free.
Opera, Safari, Firefox, Internet Explorer 7.0 and many other web browsers allow receipt of feeds from the toolbar, using Live Bookmarks, Favorites and other techniques to integrate feed reading into the browser.
Finally, there are desktop-based feed readers, e.g. FeedDemon, NetNewsWire, Outlook 2007, Thunderbird and AggBot.
Scraping
Usually a web feed is made available by the same entity that created the content. Typically the feed comes from the same place as the website. However not all websites provide a feed. Sometimes third parties will read the website and create a feed for it by scraping it. Scraping is controversial since it distributes the content in a manner that was not chosen by the content owner.
Technical definition
A web feed is a document (often XML-based) which contains content items with web links to longer versions. News websites and blogs are common sources for web feeds, but feeds are also used to deliver structured information ranging from weather data to "top ten" lists of hit tunes to search results. The two main web feed formats are RSS and Atom.
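To make the "document with content items" definition concrete, here is a minimal, invented RSS 2.0 feed parsed with Python's standard library:

import xml.etree.ElementTree as ET

# An invented feed used only for illustration.
RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Site</title>
    <link>https://example.com/</link>
    <item>
      <title>First post</title>
      <link>https://example.com/posts/1</link>
      <description>Short summary of the first post.</description>
    </item>
  </channel>
</rss>"""

channel = ET.fromstring(RSS).find("channel")
for item in channel.findall("item"):
    # Each item carries a title and a link to the longer version.
    print(item.findtext("title"), "->", item.findtext("link"))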
"Publishing a feed" and "syndication" are two of the more common terms used to describe making available a feed for an information source, such as a blog. Like syndicated print newspaper features or broadcast programs, web feed content may be shared and republished by other websites. (For that reason, one popular definition of RSS is Really Simple Syndication.)
More often, feeds are subscribed to directly by users with aggregators or feed readers, which combine the contents of multiple web feeds for display on a single screen or series of screens. Some modern web browsers incorporate aggregator features. Depending on the aggregator, users typically subscribe to a feed by manually entering the URL of a feed or clicking a link in a web browser.
Web feeds are designed to be machine-readable rather than human-readable, which tends to be a source of confusion when people first encounter web feeds. This means that web feeds can also be used to automatically transfer information from one website to another, without any human intervention.

Monday, November 10, 2008

WEB CRAWLER

A web crawler (also known as a web spider, web robot, or—especially in the FOAF community—web scutter) is a program or automated script that browses the World Wide Web in a methodical, automated manner. Other less frequently used names for web crawlers are ants, automatic indexers, bots, and worms.
This process is called web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a website, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for spam).
A web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
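A toy version of this loop, using only the Python standard library and deliberately omitting the politeness, robots.txt and re-visit logic a real crawler needs, might look like this:

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href values of anchor tags in a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_pages=20):
    frontier = deque(seeds)          # URLs still to visit (the crawl frontier)
    visited = set()
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", "replace")
        except Exception:
            continue                 # skip pages that fail to download
        visited.add(url)
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            if absolute.startswith("http") and absolute not in visited:
                frontier.append(absolute)
    return visited

print(crawl(["https://example.com/"]))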

Crawling policies



There are three important characteristics of the Web that make crawling it very difficult: its large volume, its fast rate of change, and dynamic page generation. These combine to produce a wide variety of possible crawlable URLs.
The large volume implies that the crawler can only download a fraction of the web pages within a given time, so it needs to prioritize its downloads. The high rate of change implies that by the time the crawler is downloading the last pages from a site, it is very likely that new pages have been added to the site, or that pages have already been updated or even deleted.
The recent increase in the number of pages being generated by server-side scripting languages has also created difficulty in that endless combinations of HTTP GET parameters exist, only a small selection of which will actually return unique content. For example, a simple online photo gallery may offer three options to users, as specified through HTTP GET parameters. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided contents, then that same set of content can be accessed with forty-eight different URLs, all of which will be present on the site. This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
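The arithmetic in the gallery example can be checked directly; the parameter names and values below are invented, but the count comes out to 4 × 3 × 2 × 2 = 48:

from itertools import product
from urllib.parse import urlencode

options = {
    "sort": ["name", "date", "size", "rating"],     # four sort orders
    "thumb": ["small", "medium", "large"],          # three thumbnail sizes
    "format": ["jpg", "png"],                       # two file formats
    "user_content": ["on", "off"],                  # user content on or off
}

urls = [
    "http://example.com/gallery?" + urlencode(dict(zip(options, combo)))
    for combo in product(*options.values())
]
print(len(urls))   # 48 distinct URLs for the same underlying content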
As Edwards et al. noted, "Given that the bandwidth for conducting crawls is neither infinite nor free, it is becoming essential to crawl the Web in not only a scalable, but efficient way, if some reasonable measure of quality or freshness is to be maintained." A crawler must carefully choose at each step which pages to visit next.
The behavior of a web crawler is the outcome of a combination of policies:
A selection policy that states which pages to download.
A re-visit policy that states when to check for changes to the pages.
A politeness policy that states how to avoid overloading websites.
A parallelization policy that states how to coordinate distributed web crawlers.
Selection policy
Given the current size of the Web, even large search engines cover only a portion of the publicly available Internet; a study by Lawrence and Giles (Lawrence and Giles, 2000) showed that no search engine indexes more than 16% of the Web. As a crawler always downloads just a fraction of the Web pages, it is highly desirable that the downloaded fraction contain the most relevant pages, and not just a random sample of the Web.
This requires a metric of importance for prioritizing Web pages. The importance of a page is a function of its intrinsic quality, its popularity in terms of links or visits, and even of its URL (the latter is the case of vertical search engines restricted to a single top-level domain, or search engines restricted to a fixed Web site). Designing a good selection policy has an added difficulty: it must work with partial information, as the complete set of Web pages is not known during crawling.
Cho et al. (Cho et al., 1998) made the first study on policies for crawl scheduling. Their data set was a 180,000-page crawl from the stanford.edu domain, in which a crawling simulation was done with different strategies. The ordering metrics tested were breadth-first, backlink-count and partial Pagerank calculations. One of the conclusions was that if the crawler wants to download pages with high Pagerank early during the crawling process, then the partial Pagerank strategy is better, followed by breadth-first and backlink-count. However, these results are for just a single domain.
Najork and Wiener [4] performed an actual crawl on 328 million pages, using breadth-first ordering. They found that a breadth-first crawl captures pages with high Pagerank early in the crawl (but they did not compare this strategy against other strategies). The explanation given by the authors for this result is that "the most important pages have many links to them from numerous hosts, and those links will be found early, regardless of on which host or page the crawl originates".
Abiteboul (Abiteboul et al., 2003) designed a crawling strategy based on an algorithm called OPIC (On-line Page Importance Computation). In OPIC, each page is given an initial sum of "cash" that is distributed equally among the pages it points to. It is similar to a Pagerank computation, but it is faster and is only done in one step. An OPIC-driven crawler downloads first the pages in the crawling frontier with higher amounts of "cash". Experiments were carried in a 100,000-pages synthetic graph with a power-law distribution of in-links. However, there was no comparison with other strategies nor experiments in the real Web.
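A rough sketch of the OPIC idea (not the authors' implementation) on a hard-coded toy link graph: every known page holds some "cash", fetching a page distributes its cash equally among its out-links, and the frontier is always popped at the richest page:

# Toy link graph standing in for real fetching and link extraction.
GRAPH = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A", "D"],
    "D": [],
}

def opic_crawl(seeds, steps=6):
    cash = {page: 1.0 for page in seeds}        # initial cash on the seeds
    order = []
    for _ in range(steps):
        if not cash:
            break
        page = max(cash, key=cash.get)          # richest frontier page first
        amount = cash.pop(page)
        order.append(page)                      # pages may recur as cash flows back
        links = GRAPH.get(page, [])
        for target in links:                    # give the cash away equally
            cash[target] = cash.get(target, 0.0) + amount / len(links)
    return order

print(opic_crawl(["A"]))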
Boldi et al. (Boldi et al., 2004) used simulation on subsets of the Web of 40 million pages from the .it domain and 100 million pages from the WebBase crawl, testing breadth-first against depth-first, random ordering and an omniscient strategy. The comparison was based on how well PageRank computed on a partial crawl approximates the true PageRank value. Surprisingly, some visits that accumulate PageRank very quickly (most notably, breadth-first and the omniscient visit) provide very poor progressive approximations.
Baeza-Yates et al. [5] used simulation on two subsets of the Web of 3 million pages from the .gr and .cl domains, testing several crawling strategies. They showed that both the OPIC strategy and a strategy that uses the length of the per-site queues are better than breadth-first crawling, and that it is also very effective to use a previous crawl, when it is available, to guide the current one.
Daneshpajouh et al. [6] designed a community-based algorithm for discovering good seeds. Their method crawls web pages with high PageRank from different communities in fewer iterations than a crawl starting from random seeds. Good seeds can be extracted from a previously crawled web graph using this method, and a new crawl using these seeds can be very effective.
Restricting followed links
A crawler may only want to seek out HTML pages and avoid all other MIME types. In order to request only HTML resources, a crawler may make an HTTP HEAD request to determine a Web resource's MIME type before requesting the entire resource with a GET request. To avoid making numerous HEAD requests, a crawler may alternatively examine the URL and only request the resource if the URL ends with .html, .htm or a slash. This strategy may cause numerous HTML Web resources to be unintentionally skipped. A similar strategy compares the extension of the web resource to a list of known HTML-page types: .html, .htm, .asp, .aspx, .php, and a slash.
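Both tactics can be sketched in a few lines of Python; the extension list mirrors the one above, and the HEAD check inspects the Content-Type header before any full GET is issued:

import urllib.request
from urllib.parse import urlparse

HTML_EXTENSIONS = (".html", ".htm", ".asp", ".aspx", ".php", "/")

def looks_like_html(url):
    # Cheap check: does the URL path end with a known HTML-page extension or a slash?
    path = urlparse(url).path or "/"
    return path.endswith(HTML_EXTENSIONS)

def is_html_by_head(url):
    # More expensive check: issue a HEAD request and inspect the Content-Type.
    request = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.headers.get_content_type() == "text/html"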
Some crawlers may also avoid requesting any resources that have a "?" in them (are dynamically produced) in order to avoid spider traps that may cause the crawler to download an infinite number of URLs from a Web site.
Path-ascending crawling
Some crawlers intend to download as many resources as possible from a particular Web site. Cothey (Cothey, 2004) introduced a path-ascending crawler that would ascend to every path in each URL that it intends to crawl. For example, when given a seed URL of http://llama.org/hamster/monkey/page.html, it will attempt to crawl /hamster/monkey/, /hamster/, and /. Cothey found that a path-ascending crawler was very effective in finding isolated resources, or resources for which no inbound link would have been found in regular crawling.
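A small helper reproducing this behaviour on the llama.org example might look like this:

from urllib.parse import urlparse, urlunparse

def ancestor_urls(url):
    """Yield every ancestor path of a seed URL, from deepest to the root."""
    parts = urlparse(url)
    segments = [s for s in parts.path.split("/") if s]
    for depth in range(len(segments) - 1, -1, -1):
        path = "/" + "/".join(segments[:depth])
        path = path if path.endswith("/") else path + "/"
        yield urlunparse((parts.scheme, parts.netloc, path, "", "", ""))

for u in ancestor_urls("http://llama.org/hamster/monkey/page.html"):
    print(u)
# http://llama.org/hamster/monkey/
# http://llama.org/hamster/
# http://llama.org/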
Many path-ascending crawlers are also known as harvester software, because they are used to "harvest" or collect all the content (perhaps the collection of photos in a gallery) from a specific page or host.
Focused crawling
The importance of a page for a crawler can also be expressed as a function of the similarity of the page to a given query. Web crawlers that attempt to download pages that are similar to each other are called focused crawlers or topical crawlers. The concepts of topical and focused crawling were first introduced by Menczer [7][8] and by Chakrabarti et al.
The main problem in focused crawling is that, in the context of a web crawler, we would like to be able to predict the similarity of the text of a given page to the query before actually downloading the page. A possible predictor is the anchor text of links; this was the approach taken by Pinkerton in a crawler developed in the early days of the Web. Diligenti et al. [11] propose using the complete content of the pages already visited to infer the similarity between the driving query and the pages that have not been visited yet. The performance of focused crawling depends mostly on the richness of links within the specific topic being searched, and focused crawling usually relies on a general Web search engine to provide starting points.
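A very small sketch of the anchor-text heuristic, with an invented link list: score each outgoing link by how many of the driving query's terms appear in its anchor text and crawl the highest-scoring links first:

def score_anchor(anchor_text, query):
    # Count how many query terms appear in the link's anchor text.
    query_terms = set(query.lower().split())
    anchor_terms = set(anchor_text.lower().split())
    return len(query_terms & anchor_terms)

links = [
    ("https://example.com/loans", "cheap car loans and financing"),
    ("https://example.com/cats", "pictures of cats"),
]
query = "car financing rates"
ranked = sorted(links, key=lambda link: score_anchor(link[1], query), reverse=True)
print(ranked[0][0])   # the loans page would be fetched first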

WEBSITE DEFACEMENT

A website defacement is an attack on a website that changes the visual appearance of the site. These are typically the work of system crackers, who break into a web server and replace the hosted website with one of their own.
A message is often left on the webpage stating the attacker's pseudonym and the output of the "uname -a" and "id" commands, along with "shout outs" to his or her friends. Sometimes the defacer makes fun of the system administrator for failing to maintain server security. Most of the time the defacement is harmless; however, it can sometimes be used as a distraction to cover up more sinister actions such as uploading malware.
A high-profile website defacement was carried out on the website of the company SCO Group following its assertion that Linux contained stolen code. The title of the page was changed from "Red Hat vs SCO" to "SCO vs World," with various satirical content following.

Monday, November 3, 2008

WEB COLORS

Web colors are colors used in designing web pages, and the methods for describing and specifying those colors.
Authors of web pages have a variety of options available for specifying colors for elements of web documents. Colors may be specified as an RGB triplet in hexadecimal format (a hex triplet); they may also be specified according to their common English names in some cases. Often a color tool or other graphics software is used to generate color values.
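For example, a hex triplet is simply three two-digit hexadecimal values for red, green and blue; a small helper can convert both ways:

def hex_to_rgb(triplet):
    # "#FF8000" -> (255, 128, 0)
    triplet = triplet.lstrip("#")
    return tuple(int(triplet[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(r, g, b):
    # (255, 128, 0) -> "#FF8000"
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

print(hex_to_rgb("#FF8000"))    # (255, 128, 0)
print(rgb_to_hex(255, 128, 0))  # #FF8000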
The first versions of Mosaic and Netscape Navigator used the X11 color names as the basis for their color lists, as both started as X Window System applications.
Web colors have an unambiguous colorimetric definition, sRGB, which relates the chromaticities of a particular phosphor set, a given transfer curve, adaptive whitepoint, and viewing conditions. These have been chosen to be similar to many real-world monitors and viewing conditions, so that even without color management rendering is fairly close to the specified values. However, user agents vary in the fidelity with which they represent the specified colors. More advanced user agents use color management to provide better color fidelity; this is particularly important for Web-to-print applications.

WEB HOST MANAGER

WebHost Manager (WHM) is a web-based tool used by server administrators and resellers to manage hosting accounts on a web server. WHM listens on ports 2086 and 2087 by default.
As well as being accessible by the root administrator, WHM is also accessible to users with reseller privileges. Reseller users of cPanel have a smaller set of features than the root user, generally limited by the server administrator to features which they determine will affect their customers' accounts rather than the server as a whole. From WHM, the server administrator can perform maintenance operations such as compiling Apache and upgrading RPMs installed on the system.

WEB SCIENCE

The Web Science Research Initiative (WSRI) is a joint effort of MIT and the University of Southampton to bridge and formalize the social and technical aspects of collaborative applications running on large-scale networks like the Web. It was announced on November 2, 2006 at MIT. Tim Berners-Lee is leading the program, which also aims to attract government and private funds and eventually produce undergraduate and graduate programs. This is very similar to the iSchool movement.
Some initial areas of interest are:
Trust and privacy
Social Networks
Collaboration

Sunday, November 2, 2008

EMAIL HOSTING SERVICES

An email hosting service is an Internet hosting service that runs email servers.
Email hosting services usually offer premium email at a cost, as opposed to advertising-supported free email or free webmail. Email hosting services thus differ from typical end-user email providers such as webmail sites. They cater mostly to demanding email users and small and mid-size (SME) businesses, while larger enterprises usually run their own email hosting service. Email hosting providers offer premium email services along with custom configurations and large numbers of accounts. In addition, hosting providers manage the user's own domain name, including any email authentication scheme that the domain owner wishes to enforce so that use of the specific domain name identifies and qualifies email senders.
Most email hosting providers offer advanced premium email solutions hosted on dedicated custom email platforms. The technology and offerings of different email hosting providers can therefore vary with different needs. Email offered by most webhosting companies is usually more basic, standardized POP3-based email and webmail based on open source webmail applications like Horde or SquirrelMail. Almost all webhosting providers offer standard basic email, while not all email hosting providers offer webhosting.
Implementation
For a technical overview of how email hosting services are engineered you can read about email hubs.

WEB HOSTING SERVICES

An example of "rack mounted" servers.A web hosting service is a type of Internet hosting service that allows individuals and organizations to provide their own website accessible via the World Wide Web. Web hosts are companies that provide space on a server they own for use by their clients as well as providing Internet connectivity, typically in a data center. Web hosts can also provide data center space and connectivity to the Internet for servers they do not own to be located in their data center, called colocation.
Service scope
The scope of hosting services varies widely. The most basic is web page and small-scale file hosting, where files can be uploaded via File Transfer Protocol (FTP) or a Web interface. The files are usually delivered to the Web "as is" or with little processing. Many Internet service providers (ISPs) offer this service free to their subscribers. People can also obtain Web page hosting from other, alternative service providers. Personal web site hosting is typically free, advertisement-sponsored, or cheap. Business web site hosting often has a higher expense.
Single page hosting is generally sufficient only for personal web pages. A complex site calls for a more comprehensive package that provides database support and application development platforms (e.g. PHP, Java, Ruby on Rails, ColdFusion, and ASP.NET). These facilities allow the customers to write or install scripts for applications like forums and content management. For e-commerce, SSL is also highly recommended.
The host may also provide an interface or control panel for managing the Web server and installing scripts as well as other services like e-mail. Some hosts specialize in certain software or services (e.g. e-commerce). They are commonly used by larger companies to outsource network infrastructure to a hosting company. To find a web hosting company, searchable directories can be used. One must be extremely careful when searching for a new company because many of the people promoting service providers are actually affiliates and the reviews are biased.
Hosting reliability and uptime
Hosting uptime refers to the percentage of time the host is accessible via the Internet. Many providers state that they aim for 99.9% uptime, but there may be server restarts and planned (or unplanned) maintenance in any hosting environment.
A common claim from popular hosting providers is "99% or 99.9% server uptime", but this often refers only to the server being powered on and does not account for network downtime; real downtime can therefore exceed what the guaranteed percentage suggests. Many providers tie uptime and accessibility into their own service level agreement (SLA). SLAs sometimes include refunds or reduced costs if performance goals are not met.
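For reference, the downtime these advertised percentages actually allow per year can be computed directly:

HOURS_PER_YEAR = 365 * 24   # 8760 hours in a non-leap year

for uptime in (0.99, 0.999, 0.9999):
    downtime_hours = HOURS_PER_YEAR * (1 - uptime)
    print(f"{uptime:.2%} uptime allows about {downtime_hours:.1f} hours of downtime per year")

# 99.00% uptime allows about 87.6 hours of downtime per year
# 99.90% uptime allows about 8.8 hours of downtime per year
# 99.99% uptime allows about 0.9 hours of downtime per year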

USAGE SHARE OF WEB BROWSER

Usage share, in web browser statistics, is the percentage of visitors to a group of web sites that use a particular browser. For example, when it is said that Internet Explorer has 74% usage share, it means Internet Explorer is used by 74%[2] of visitors to a given set of sites. Typically, the user agent string is used to identify which browser a visitor is using. The concept of browser percentages for the Web audience in general is sometimes called browser penetration.
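In practice this means tallying the user agent strings recorded in a server's logs; the matching rules and sample strings in this sketch are simplified placeholders:

from collections import Counter

def classify(user_agent):
    # Very rough matching rules; real measurement tools use much finer ones.
    ua = user_agent.lower()
    if "firefox" in ua:
        return "Firefox"
    if "opera" in ua:
        return "Opera"
    if "msie" in ua:
        return "Internet Explorer"
    return "Other"

log_user_agents = [
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; rv:1.9) Gecko/2008 Firefox/3.0",
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)",
]

counts = Counter(classify(ua) for ua in log_user_agents)
total = sum(counts.values())
for browser, count in counts.most_common():
    print(f"{browser}: {count / total:.0%}")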

INTERNET POLICE


Internet police is a generic term for police and secret police departments and other organizations in charge of policing the Internet in a number of countries. The major purposes of Internet police, depending on the state, are fighting cybercrime, as well as censorship, propaganda, and monitoring and manipulating online public opinion.
Mainland China
It has been reported that in 2005, departments of provincial and municipal governments in mainland China began creating teams of Internet commentators from propaganda and police departments and offering them classes in Marxism, propaganda techniques, and the Internet. They are reported to guide discussion on public bulletin boards away from politically sensitive topics by posting opinions anonymously or under false names. "They are actually hiring staff to curse online", said Liu Di, a Chinese student who was arrested for posting her comments in blogs.
Chinese Internet police also erase anti-Communist comments and post pro-government messages. Chinese Communist Party leader Hu Jintao has declared the party's intent to strengthen administration of the online environment and maintain the initiative in online opinion.
See also: Jingjing and Chacha
India
Cyber Crime Investigation Cell is a wing of Mumbai Police, India, to deal with Cyber crimes, and to enforce provisions of India's Information Technology Law, namely, Information Technology Act 2000, and various cyber crime related provisions of criminal laws, including the Indian Penal Code. Cyber Crime Investigation Cell is a part of Crime Branch, Criminal Investigation Department of the Mumbai Police.
Netherlands
The Dutch police were reported to have set up an Internet Brigade to fight cybercrime. It will be allowed to infiltrate Internet newsgroups and discussion forums for intelligence gathering, to make pseudo-purchases and to provide services.[5]
Russia
It has been alleged in the press that Russian security services operate secret teams, called web brigades, created to manipulate online public opinion.
Thailand
After the 2006 coup in Thailand, the Thai police have been active in monitoring and silencing dissidents online. Censorship of the Internet is carried out by the Ministry of Information and Communications Technology of Thailand and the Royal Thai Police, in collaboration with the Communications Authority of Thailand and the Telecommunication Authority of Thailand.
United Kingdom
The Internet Watch Foundation (IWF) is the only recognised organisation in the United Kingdom operating an internet ‘Hotline’ for the public and IT professionals to report their exposure to potentially illegal content online. It works in partnership with the police, Government, the public, Internet service providers and the wider online industry.

WEB BRIGADES

The web brigades (Russian: Веб-бригады)[1] are allegedly, in the view of some Russian liberal intellectuals (see below), online teams of commentators linked to security services that participate in political blogs and Internet forums to promote disinformation and prevent free discussion of undesirable subjects. Allegations of the existence of web brigades were made in a 2003 article, "The Virtual Eye of the Big Brother".[1]
An article "Conspiracy theory" published in Russian journal in 2003 criticized theory of web brigades as attempts of creating myths by Russian liberal thinkers in a response for massive sobering up of Russian people. A point was made that observed behaviour of forum participants may be explained without a theory of FSB-affiliated brigades.[2]
As mentioned in 2007 sociological research on large groups in Russian society by the RIO-Center, the idea of the existence of web brigades is a widespread point of view on RuNet. The authors say "it's difficult to say whether the hypothesis of the existence of web brigades corresponds to reality", but acknowledge that users professing the views and methods ascribed to members of web brigades may be found on all opposition forums of RuNet.[3]
The expression "red web-brigades" (Красные веб-бригады) used by Anna Polyanskaya as a title to her article is a pun with "Red Brigades".
Web brigades in Russia
Polyanskaya's article
This alleged phenomenon on RuNet was described in 2003 by journalist Anna Polyanskaya (a former assistant to assassinated Russian politician Galina Starovoitova[4]), historian Andrey Krivov and political activist Ivan Lomako. They described organized and professional "brigades", composed of ideologically and methodologically identical personalities, working on practically every popular liberal and pro-democracy Internet forum and Internet newspaper of RuNet.
The activity of these Internet teams appeared in 1999 and was organized by the Russian state security service, according to Polyanskaya.[5][1] According to the authors, about 70% of the audience of the Russian Internet were people of generally liberal views prior to 1998–1999; however, a sudden surge (to about 60–80%) of "antidemocratic" posts occurred on many Russian forums in 2000.
According to Polyanskaya and her colleagues, the behavior of people from the web brigades has distinct features, some of which are the following:[1]
Any change in Moscow's agenda leads to immediate changes in the brigade's opinions.
Boundless loyalty to Vladimir Putin and his circle.
Respect and admiration for the KGB and FSB.
Nostalgia for the Soviet Union and propaganda of Communist ideology, and constant attempts to present the entire history of Russia and the Soviet Union in a positive light, minimizing the number of people who died in repressions.[1]
Anti-liberal, anti-American, anti-Chechen, anti-Semitic and anti-Western opinions.
Xenophobia, racism, approval of skinheads and pogroms.[1]
Accusations of Russophobia against everyone who disagrees with them.
Hatred of dissidents, human rights organizations and activists, political prisoners and journalists, especially Anna Politkovskaya, Sergei Kovalev, Elena Bonner, Grigory Pasko, Victor Shenderovich, and Valeria Novodvorskaya.
Emigrants are accused of being traitors to the motherland. Some members claim that they live in a Western country and tell stories about how much better life is in Putin's Russia.
Before the Iraq War, the brigades' anti-U.S. operations reached an unseen scale. The original publication describes: "it sometimes seemed that the U.S. was not liberating the Iraqi people from Saddam Hussein, but at a minimum had actually launched an attack on Russia and was marching on the Kremlin." However, this fell silent suddenly after Putin announced that Russia was not opposed to the victory of the coalition forces in Iraq.[1]
Polyanskaya's article[1] also describes the "tactics" of the alleged web brigades:
Frequent changes of pseudonyms.
Round-the-clock presence on forums. At least one member of the team can be found online at all times, always ready to repulse any "attack" by a liberal.[1]
Intentional diversion of pointed discussions. For instance, the brigade may claim that Pol Pot never had any connection with Communism or that not a single person was killed by Soviet tanks in the Warsaw Pact invasion of Czechoslovakia in 1968.
Individual work on opponents. "As soon as an opposition-minded liberal arrives on a forum, expressing a position that makes them a clear 'ideological enemy', he is immediately cornered and subjected to 'active measures' by the unified web-brigade. Without provocation, the opponent is piled on with abuse or vicious 'arguments' of the sort that the average person cannot adequately react to. As a result, the liberal either answers sharply, causing a scandal and getting himself labeled a 'boor' by the rest of the brigade, or else he starts to make arguments against the obvious absurdities, to which his opponents pay no attention, but simply ridicule him and put forth other similar arguments."[1]
Accusations that opponents are working for "enemies". The opponents are accused of taking money from Berezovskiy, the CIA, the MOSSAD, Saudi Arabia, the Zionists, or the Chechen rebels.
Making personally offensive comments.
A tendency to accuse their opponents of being insane during arguments.
A remarkable ability to reveal personal information about their opponents and quotes from their old postings, sometimes more than a year old.
Teamwork. "They unwaveringly support each other in discussions, ask each other leading questions, put fine points on each other's answers, and even pretend not to know each other. If an opponent starts to be hounded, this hounding invariably becomes a team effort, involving all of the three to twenty nicknames that invariably are present on any political forum 24 hours a day."[1]
Appealing to the administration. The members of the teams often "write mass collective complaints about their opponents to the editors, site administrators, or the electronic 'complaints book', demanding that one or another posting or whole discussion thread they don't like be removed, or calling for the banning of individuals they find problematic."[1]
Destruction of inconvenient forums. For example, on the site of the Moscow News, all critics of Putin and the FSB "were suddenly and without any explanation banned from all discussions, despite their having broken none of the site's rules of conduct. All the postings of this group of readers, going back a year and a half, were erased by the site administrator."[1]
Criticism
Alexander Yusupovskiy, head of the analytical department of the Federation Council of Russia (the Russian Parliament), published in 2003 an article, "Conspiracy theory", in the Russian Journal criticizing the theory of web brigades.[2]
Yusupovskiy's points included:
According to Yusupovskiy, an active forum participant himself, it is not the first time he has faced the unfair polemical method in which a person with "liberal democratic views" accuses an opponent of being an FSB agent as a final argument.
Yusupovskiy himself did not take the web brigades theory seriously, "naively" considering that officers of the GRU or FSB have more topical problems than "comparing virtual penises" with liberals and emigrants. His own experience on forums also gave him no reason to believe the theory.
Yusupovskiy considered Polyanskaya's article an interesting opportunity to draw a line of demarcation between analysis and its imitation. According to Yusupovskiy, the authors of the article are obsessed with "a single but strong affection": to find a "Big Brother" behind any phenomenon that does not fit their mindset. Yusupovskiy called the article a classic illustration of an inverted "masonic conspiracy".
Although Yusupovskiy himself has a list of claims against the Russian security services and their presence in the virtual world (as "according to statements of media every security service is busy in the Internet tracking terrorism, extremism, narcotic traffic, human trafficking and child pornography"), his claims are of a different nature than those of Polyanskaya.
Criticising Polyanskaya's point that Russian forums after 9/11 showed an "outstanding level of malice and hatred of the USA, gloat, slander and inhumanity" as an "undifferentiated assessment bordering on lie and slander", Yusupovskiy noted that there is a difference between "dislike of the hegemonic policy of the United States" on Russian forums and a "quite friendly attitude towards ordinary Americans". Aggression and xenophobia do not characterize one side but are a commonplace of discussion (as Yusupovskiy suggested, the illusion of anonymity and the absence of censorship allow material to surface from the subconscious that an internal censor would otherwise not let be spoken aloud). According to Yusupovskiy: "There's no lack of gloating of other kinds (e.g. over Russian losses in Chechnya) or manifestations of brutal malice against 'commies', 'under men', Russians, Russia in posts of some of our former compatriots from Israel, the USA and other countries. And in a discussion of Palestinians or Arabs, 'beasts', 'not people', etc. are perhaps the most decent definitions given by many (not all) Western participants of forums. It is especially touching to observe 'briefings of hatred' (such things happen too), when Russian, Israeli and American patriots unanimously blame 'Chechen-Palestinian-Islamic' terrorists..."[2]
Commenting on the change of attitude of the virtual masses in 1998–1999, the authors evade any mention of the 1998 Russian financial collapse which "crowned the liberal decade", preferring to blame "mysterious bad guys or Big Brother" for that change.
"About 80% of authors at all web forums very aggressively and uniformly blame the USA," the authors note, concluding at the same time that "at a moment the amount of totalitarian opinions at Russian forums became 60%-80%". Yusupovskiy invites the reader to feel the semantics of this "extremal journalism" mindset and its logic of antitheses: either apology for Bush's America while spitting on one's own country, or totalitarian agentry.
To illustrate the "protective totalitarian" mindset, the authors quote several malicious posts from the mass of forum chatter: "Security services existed in all times, all democratic states of the West had, have and will be having them." Or: "FSB is the same security service like FBI in the USA or Mossad in Israel or MI6 in Great Britain." Yusupovskiy comments: "I understand that I risk being called 'totalitarian', but quite honestly I have difficulty recognizing signs of totalitarianism in the above quotes."
As the authors continue, "there are quite less real people with totalitarian views than one may consider after having a casual look on posts in any forum". Here, Yusupovskiy sighs, they might look at VCIOM or FOM opinion poll results, at how Stalin's popularity does not diminish and even rises, and at how the meaning and emotional connotations of the word "democrat" have changed (from positive to negative), and seriously consider these tendencies in the development of social consciousness.[2]
The authors exclude from their interpretation of events all other hypotheses, such as the Internet activity of a group of "skinheads", nazbols or simply illiberal students, or hackers able to get the IP addresses of their opponents. According to Yusupovskiy, the authors treat "independence of public opinion" in a spirit of irreconcilable antagonism with a "positive image of Russia".[2] Yusupovskiy finally commented on Polyanskaya's article:
"We would never make our country's military organizations and security services work under the rule of law and legal control, if won't learn to recognize rationally and objectively their necessity and usefulness for the country, state, society and citizens. Sweeping defamation and intentional discreditation with the help of "arguments", which are obviously false, only contribute to the extrusion of security services outside of rule of law and instigates them to chaos".[2]
Discussion on control over the Internet
In a 2006 radio talk show hosted by Yevgenia Albats on the topic "Control over the Internet: How does that happen?", Russian journalist Andrei Soldatov made the following points:[6]
There are countries with greater or lesser control over the Internet, but there is control over the Internet in Russia.
During the US invasion of Iraq, a group of people calling themselves GRU officers published allegedly internal GRU information on American losses in Iraq; this information was presented against the background of anti-American hysteria and was well consumed. Later it turned out this information was not credible, but this effectively did not change the result.
After the 2005 Nalchik raid, the Russian Ministry of Foreign Affairs issued a statement that the Kavkaz Center "is a very bad resource", and within two days two teams calling themselves hackers appeared to arrange attacks against the Kavkaz Center.
Soldatov does not think web brigades are fiction. He had related issues with his own site, especially during events like the Moscow theater hostage crisis.
One of the structures with Internet-related business is signals intelligence, which is currently a part of the FSB and was formerly a part of the 16th KGB department.
There is a related agency in the Russian Ministry of Internal Affairs with competent people who can do such things.[6]
The other participant in the talk show, Russian political scientist Marat Gelman, made the following points:
There are countries with control over the Internet, but there is none in Russia; there may be control understood as observation, but there is no tool to forbid any particular resource.
The Internet is good as a space where the authorities and the opposition are placed in absolutely equal conditions and have to actually struggle and convince people. It is impossible to actually prohibit anything on the Internet; one needs to win the game.
Professional activity has long existed on the Internet, as many sites are professional media structures with a team and owners, much as a newspaper is, and coordinated work of these resources is possible. Commenting on the possibility that besides open structures there are closed ones imitating the activity of youths, Gelman said he had the distinct feeling it is fake.
Answering Albats' question about control over the Internet as a means to influence young people, Gelman asserted that the authorities, the opposition and America are all equal players in the question of control and attempts at influence. Unlike, for example, television or newspapers, all players on the Internet have equal possibilities, and every player tries to do its own sort of work.
Answering Albats' question "How is control over the Internet technically organized?", Gelman noted that there are two major approaches: either the information is filtered before a user may access it ("premoderation"), or it is reviewed afterwards ("postmoderation"). While the former is the case in China, where access to certain types of resources is physically blocked, Gelman considers it a bad practice and absolutely unacceptable for Russia. Gelman thinks there must be control over the Internet in Russia, but only in the form of an agency searching for criminals on the Internet, tracking their IPs to obtain personal information, together with a mechanism to impose penalties on such people.[6]
"LiveJournal fighters"A member of National Bolshevik Party Roman Sadykhov claimed that he secretly infiltrated pro-Kremlin organizations of "LiveJournal fighters", allegedly directed and paid from the Kremlin and instructions given to them by Vladislav Surkov, a close aide of Vladimir Putin [7] Surkov allegedly called Livejournal "a very important sector of work" [8] and said that people's brains must be "nationalized".

