Compare Products
Features (Product 1)
* Search Engine Harvester - Harvest thousands of URLs from over 30 search engines, such as Google, Yahoo and Bing, in seconds with the powerful and trainable URL harvester.
* Keyword Harvester - Extensive keyword harvester that produces thousands of long-tail keywords from a single base keyword.
* Proxy Harvester - Powerful proxy harvester and tester to keep your work private through the use of thousands of free proxies.
* Comment Poster - Use the fast, trainable multi-threaded poster to leave comments on dozens of platforms with your backlink and desired anchor text.
* Link Checker - Quickly scan thousands of pages with the fast multi-threaded backlink checker to verify that your backlinks exist and that the anchor text is correct.
* Numerous Tools - Download videos, create RSS feeds or sitemaps, find unregistered domains, extract emails, check indexed pages, and dozens more time-saving features.
Features (Product 2: Scrapy)
* Fast and powerful - Write the rules to extract the data and let Scrapy do the rest.
* Easily extensible - Extensible by design; plug in new functionality without touching the core.
* Portable - Written in Python; runs on Linux, Windows, Mac and BSD.
* Built-in support for selecting and extracting data from HTML/XML sources using extended CSS selectors and XPath expressions, with helper methods for extracting via regular expressions.
* An interactive shell console (IPython-aware) for trying out CSS and XPath expressions, very useful when writing or debugging your spiders.
* Built-in support for generating feed exports in multiple formats (JSON, CSV, XML) and storing them in multiple backends (FTP, S3, local filesystem).
* Robust encoding support and auto-detection for dealing with foreign, non-standard and broken encoding declarations.
* Strong extensibility, allowing you to plug in your own functionality using signals and a well-defined API (middlewares, extensions and pipelines).
* A wide range of built-in extensions and middlewares for handling cookies and sessions, and HTTP features such as compression, authentication, caching, user-agent spoofing, robots.txt and crawl-depth restriction.
* A Telnet console for hooking into a Python console running inside your Scrapy process, to introspect and debug your crawler.
* Plus other goodies, such as reusable spiders for crawling sites from sitemaps and XML/CSV feeds, a media pipeline for automatically downloading images (or any other media) associated with scraped items, a caching DNS resolver, and much more.
             | Product 1   | Product 2 (Scrapy)
Languages    | Other       | Python
Source Type  | Closed      | Open
License Type | Proprietary | BSD
OS Type      |             |
Pricing      |             |