Scrapy is a powerful and versatile web scraping framework used by developers all over the world. Working with a qualified Scrapy Developer can give your project an efficient web scraping and crawling solution. Scrapy uses Python to automate web data extraction, saving companies time and money, and a Scrapy Developer can customize solutions to scrape any website or page and collect exactly the data you need.

Here are some projects our expert Scrapy Developers have made real:

  • Extracting product feed from an API
  • Automating data scraping from websites
  • Generating crawled information from multiple dynamic websites
  • Crawling data from Facebook pages for login requests
  • Collecting event information for a WordPress plugin

Our best Scrapy Developers ensure that web scraping and crawling solutions integrate smoothly into your applications and operations. Get accurate and reliable scraped data quickly and efficiently with the help of Freelancer.com's talented certified experts, and avoid the tedious task of collecting data manually with Freelancer's affordably priced Scrapy Developers.

Take advantage of our experienced Scrapy Developers today and post your project on Freelancer.com now to hire an expert quickly, conveniently, and cost-effectively!

From 25,374 reviews, clients rate our Scrapy Developers 4.9 out of 5 stars.
Hire Scrapy Developers

12 jobs found

    # Python Developer (Scrapy / FastAPI / Docker) for Crawling Project

    We are looking for an experienced Python developer to build a structured web crawling system. This is not a large corporate project. It is a technically clean, well-defined system with a long-term perspective. We value clean architecture, stability, and understandable code. The system will process approximately 30,000 profiles per week and will be connected to an existing Laravel-based admin dashboard.

    ## Responsibilities

    * Develop a crawler using **Scrapy**
    * Build a small **FastAPI interface** (start / stop / status / statistics)
    * Integrate with **MariaDB**
    * Implement a media pipeline: download → local cache → upload to Wasabi (S3)
    * Implement hash-based change detection
    * Implement soft-delete logi...

    $16 / hr Average bid
    48 bids

    I am looking for an experienced Web Scraping and Automation Developer to build an automated daily workflow for my business. I respond to over 100 government tenders a year and need to automate the discovery and document retrieval process. The Goal: Scrape 9 Australian Government tender websites daily for newly published tenders in two specific categories (UNSPSC 43000000 - IT, and 81000000 - Engineering/Research). Extract key details (Title, Agency, Closing Date, URL) and insert them into a centralized Google Sheet. Automatically download the associated Tender Documents and save them into uniquely named folders in my Google Drive. Periodically monitor these specific tenders for any newly published Addendums, and automatically download them to the respective Google Drive folder. The Website...

    $1251 Average bid
    57 bids

    I have a list of roughly 1500 URLs—each coming from the same automotive website—that together cover the top 100 makes, models, grades and variants sold in Australia. I need every data point the site makes available for each of those vehicles, from the obvious specs such as year, make, model and variant right through to driveway prices, engine type, transmission, drive configuration, warranty details, fuel-economy figures, in-car technology features, seating layouts and any other attributes exposed on the page. The end goal is a clean, analysis-ready Excel workbook that lets me run market-wide comparisons, so consistency is critical: headings must be standardised, units normalised and categorical values written the same way across the entire sheet. I am happy for you to use P...

    $462 Average bid
    118 bids

    I need a reliable script that can pull live pricing details for car rentals from both the rental companies’ own sites and the big aggregator platforms in one pass. The goal is to feed it pickup / drop-off locations, dates, and driver age, then receive a clean CSV or JSON that lists vehicle class, daily and total price, currency, taxes & fees, and the URL it was scraped from. The scraper has to: • Navigate each target site automatically, including date pickers and location selectors. • Rotate user-agents / proxies or apply any other anti-bot tactics necessary to stay undetected. • Capture and log errors so a failed request never silently drops a row. • Be easy for me to rerun on demand—command-line or small web UI is fine, as long as setup is s...
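User-agent rotation, one of the anti-bot tactics this post asks for, can be as simple as cycling through a pool of browser strings and attaching one to each request's headers. The pool below is a hypothetical example; proxies would be rotated the same way:

```python
import itertools

# Illustrative pool; in production you would load real, current UA strings.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:126.0) Gecko/20100101 Firefox/126.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 Chrome/125.0 Safari/537.36",
]
_ua_cycle = itertools.cycle(USER_AGENTS)


def request_headers() -> dict:
    """Headers for the next request, rotating the User-Agent on each call."""
    return {
        "User-Agent": next(_ua_cycle),
        "Accept-Language": "en-AU,en;q=0.9",
    }
```

Pairing this with logged, per-row error capture satisfies the "no silently dropped rows" requirement: a failed request still emits a row with an error status.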

    $97 Average bid
    98 bids

    I need a clean, well-documented script that automatically gathers publicly available biographies of football players directly from their official websites and exports everything into a single CSV file. The focus is strictly on biographical details—no match statistics or transfer news for now—so the crawler should identify, parse, and normalise information such as full name, date of birth, nationality, position, current club, height/weight (where listed), and any notable career highlights that appear on the player’s own site. Because these pages vary in structure, the code should be resilient: graceful error handling, user-agent rotation, and clear selectors or XPath rules that are easy for me to extend later. I’m comfortable running Python, so libraries like Reques...

    $778 Average bid
    104 bids

    I want to build a clean, well-structured dataset of restaurants throughout Portugal by scraping publicly available information on Google (Search / Maps). The core of the job is to capture three things for every venue you find: • complete contact details (name, address, phone, email or website if listed) • ratings plus review count pulled exactly as Google displays them • whether the place offers a fixed-price menu (“menu do dia”) and, if so, its price and what is included (possible 5 stages: Entrance, Plate, Drink, Dessert, Coffee) but it can be only 2 stages out of 5 (example: Plate+Drink) Please also tag each record with its district and municipality so the file can be filtered regionally. Deliverables 1. A single CSV or Excel file containing one row ...

    $247 Average bid
    145 bids

    I need you to collect 1,000 product images, titles, and links from Temu. Just these three items. I think this should be pretty straightforward. If you do a good job, I’ll hire you on a long-term basis, as I’ll need someone to help me collect product information from Temu on an ongoing basis.

    $44 Average bid
    85 bids

    I need a reliable developer to build a fully-automated system that gathers fresh, publicly listed email addresses from Google search results (pulled through a SERP API or an equivalent method) every single day, verifies them for validity, and delivers a clean CSV ready for my marketing campaigns. Here’s what I’m after: • Workflow 1. Submit a set of keywords or niche phrases. 2. Crawl the top Google result pages returned by a SERP service, extract any visible email addresses, and capture the source URL and page title. 3. Run each address through an SMTP-level verifier (ZeroBounce, NeverBounce, or an in-house Python verifier—whichever you prefer, as long as it returns status codes for valid, invalid, catch-all, disposable, and role accounts). 4. O...
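Step 2 of this workflow, extracting visible addresses from crawled pages, is typically a regular expression plus ordered de-duplication, and role-account detection (which the SMTP verifier in step 3 also reports) can start from a simple prefix list. A sketch with illustrative patterns:

```python
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
ROLE_PREFIXES = {"admin", "info", "support", "sales", "noreply", "contact"}


def extract_emails(text: str) -> list[str]:
    """Pull unique addresses from page text, preserving first-seen order."""
    return list(dict.fromkeys(EMAIL_RE.findall(text)))


def is_role_account(email: str) -> bool:
    """Flag generic mailbox names; an SMTP-level verifier refines this further."""
    return email.split("@", 1)[0].lower() in ROLE_PREFIXES
```

Regex extraction only finds addresses visible in the page source; obfuscated or JavaScript-rendered addresses need the verifier stage or a headless browser.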

    $220 Average bid
    35 bids

    I'm looking for a skilled web scraper to extract product images and descriptions from Yupoo. The scraped data should be organized and delivered in a CSV file. Requirements: - Experience with web scraping tools and technologies - Ability to handle dynamic content on Yupoo - Attention to detail to ensure data accuracy - Proficient in data organization and CSV formatting Ideal Skills: - Python or similar programming languages - Familiarity with libraries like BeautifulSoup or Scrapy - Previous experience scraping ecommerce websites

    $186 Average bid
    76 bids
    Expert Data Scraper Required
    2 days left
    Verified

    Hi, I need a data scraper who can scrape data from the provided sources. Core Technical Skills: the freelancer should know Python (the most common scraping language) with libraries like Scrapy, BeautifulSoup, or Playwright. They should also be comfortable with browser automation tools like Selenium or Puppeteer (JavaScript-based), since sites like TipRanks are JavaScript-heavy and need a real browser to render. Anti-Bot Bypass Experience: this is the most critical skill for your specific case. Look for someone experienced with handling CAPTCHAs (2Captcha, Anti-Captcha services), rotating proxies and residential IPs, spoofing browser headers and fingerprints, and bypassing Cloudflare or similar bot protection. TipRanks specifically uses these protections, so this experience is non-...

    $38 Average bid
    40 bids

    I am looking to hire an experienced web scraper (not a script) who can reliably extract data from a German business directory website. Scope of Work: You will crawl the entire website and extract only the full Address field from every business listing. The address must be captured exactly as it appears on the site (raw format, no modifications). Requirements: • Cover the entire site including all categories and pages • Ensure no duplicates in the final dataset • Capture new or updated listings if the crawl runs again • Handle pagination and deep crawling properly • Maintain high accuracy and completeness Deliverables: • A single Excel (.xlsx) file containing all extracted addresses • Clean, structured data (one address per row) • No...

    $339 Average bid
    44 bids
    Crawler for 500 auction sites
    2 hours left
    Verified

    I need an application capable of crawling roughly 500 auctioneer websites and extracting, every day, the prices and offers they publish. My goal is to track changes in value and conditions in near real time, producing a consolidated file that I can later process in other analysis tools. I will provide a base spreadsheet containing: • The complete list of domains to be monitored • The extra fields I want captured on each page (item description, auction date and time, seller name, among others) What I expect from the delivery: 1. A working crawler/web scraper that visits every address and collects the specified information. 2. ...

    $737 Average bid
    45 bids
