Web Scraping is the process of extracting data or information from an online source such as a website, database, or application. Web Scraping Specialists have the skills to help people collect valuable digital data and quickly find the useful information they need from websites, mobile apps, and APIs. These experts usually use web scraping tools and advanced technologies to collect large amounts of targeted data without any manual work for the client.
With web scraping, tasks that otherwise may require a lot of time can be automated and done faster. Our experienced Web Scraping Specialists use their expertise to develop scripts that continuously target structured and unstructured data sources.
Here are some projects that our expert Web Scraping Specialists have made real:
Web Scraping Specialists are skilled professionals who know how to help businesses optimize processes while collecting the rich, structured data they need for their specific purposes. Our experts speed up the process and return accurate results in less time, so customers can make better decisions more quickly without any manual labour. If you are looking for a talented professional to build a web scraping project for you, you have come to the right place. Here on Freelancer.com you can find talented professionals who will get the job done with top-quality results! Post your project now and see what our Web Scraping professionals can do for you!
From 361,691 reviews, clients rate our Web Scraping Specialists 4.9 out of 5 stars.
Our Pain Point: We currently receive industry update emails (DIARY'S LATEST NEWS - DIARY directory) that contain information important to our business outreach strategy, and we want to organise and use that information for client outreach. Our current process is extremely manual: an employee sets aside time to review the emails we receive, clicks through to the linked articles, pulls the relevant information from each article, and then tries to find the person mentioned or referenced in the article on LinkedIn and connect with them there. Reading and scanning all of these emails on a regular basis is very time-consuming and not possible for our team to do weekly. Hence, our goal would be...
I have to pull many thousands of PDF files from a publicly available but poorly structured online database. The pages are slow, there are no clear download links, and navigation relies on clunky JavaScript forms, so a straightforward “save as” approach will take far too long. You will receive a text file that contains the exact filenames for every document I need. Those filenames appear in the HTML once the record is loaded, so they can be used as reliable anchors for the scrape. The order in which the files arrive does not matter; accuracy and completeness do. I expect an automated approach—Python with Selenium, Playwright, Scrapy, or any comparable tool is fine—as long as it can work around the site’s fragile structure and occasional timeouts. If headles...
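Since the brief says the needed filenames appear in the loaded HTML, the filename-matching step can be kept separate from the browser automation itself. Here is a minimal sketch of that anchor-matching logic; the sample HTML and URL layout are hypothetical, and in practice the `html` argument would come from Selenium/Playwright after the record finishes loading:

```python
import re

def find_pdf_links(html: str, wanted: set[str]) -> dict[str, str]:
    """Map each wanted filename to the first href on the page that contains it."""
    links = re.findall(r'href="([^"]+)"', html)
    found = {}
    for name in wanted:
        for url in links:
            if name in url and name not in found:
                found[name] = url
    return found

# Hypothetical record page once the JavaScript form has loaded:
sample = ('<a href="/docs/view?file=report_2021.pdf">open</a>'
          '<a href="/docs/view?file=minutes_03.pdf">open</a>')
hits = find_pdf_links(sample, {"report_2021.pdf", "missing.pdf"})
```

Filenames absent from the page simply do not appear in the result, which gives a natural completeness check at the end of the run.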
I have a video that contains strings of text I need captured word-for-word and transferred into an Excel spreadsheet. As you watch, please record each distinct piece of text in its own row and populate a dedicated column labelled “Subdomain” exactly as it appears or is implied in the footage. If the same text appears multiple times, enter each occurrence separately so that the sheet mirrors the sequence of the video. Deliverables • A clean Google Sheet file containing all extracted text with the right columns • No spelling mistakes or missing entries (I will cross-check against the footage)
I need an Excel spreadsheet that contains a list of ads that I sell. Some are sold by a date range and some by quantity. On a single booking sheet I enter the company name, select an ad type, and enter either a date range or the quantity purchased, depending on which type of ad is selected. An output or summary sheet will show me the following at a glance: Ads by date range: company name, ad type selected, start date, finish date, status. When an ad is within 7 days of expiry, the finish-date text turns orange; once it reaches the finish date, it turns red. No two customers can have the same date-range ad type. Ads by quantity: company name, ad type selected, quantity purchased, ads completed, ads remaining. If 10 ads are purchased then 10 tick boxes or radio bu...
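In Excel the orange/red rule would be conditional formatting, but the thresholds it encodes are just a date comparison. A sketch of that rule in Python to make it unambiguous (the function name and the "ok" label are my own, not from the brief):

```python
import datetime

def expiry_status(finish: datetime.date, today: datetime.date) -> str:
    """Orange within 7 days of the finish date, red once it is reached."""
    days_left = (finish - today).days
    if days_left <= 0:
        return "red"
    if days_left <= 7:
        return "orange"
    return "ok"

d = datetime.date
print(expiry_status(d(2024, 1, 10), d(2024, 1, 4)))  # 6 days left -> orange
```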
I need a Python-based trading bot that executes a clean trend-following strategy and feeds its output to a lightweight web dashboard. The trading logic should automatically detect and ride upward or downward trends, handle position sizing, manage risk with configurable stop-loss / take-profit rules, and run with as few external dependencies as practical (NumPy, Pandas, TA-Lib are fine). Exchange connectivity is flexible: as long as live orders and historical price data can flow reliably, I’m happy to integrate through CCXT or a direct API of a major venue such as Binance, Coinbase, or Kraken—let me know which you can implement fastest. The dashboard is just as important as the core bot. Through it I want to see: • Real-time performance metrics (open PnL, equity curve, ...
I’m building an automated, end-to-end pipeline that pulls results from a fast-changing election website every few minutes, cleans and enriches the feed with AI, then pushes out clear charts and graphs that highlight vote counts per party, regional voting trends, overall voter turnout and a concise statistical summary of the outcomes. There is an existing Google Data Studio (Looker) template available. The scope breaks down into three tightly-linked steps: • Data capture – a headless, resilient scraper (Selenium, Playwright or a similar tool) must track roughly 3 500 individual race entries across 17 political parties in six regions, coping smoothly with AJAX calls, pagination and any CAPTCHA or session refreshes. • AI-powered processing – once ingested, th...
Thumbnail Design Specialist • High-CTR Designs: Expert in creating eye-catching, high-click-through-rate (CTR) thumbnails that drive views and grow channels. • Visual Storytelling: Skilled at distilling complex video topics into a single, compelling image using vibrant colors and bold typography. • A/B Testing Knowledge: Deep understanding of YouTube/social media trends, ensuring your content stands out against competitors. • Quick Turnaround: Ability to deliver high-quality, professional assets within tight deadlines (often under 24 hours). Data Entry & Administrative Expert • 99% Accuracy Rate: Committed to delivering error-free data management, lead generation, and spreadsheet organization. • Software Proficiency: Advanced skills in Microsoft Excel (VLO...
I need a reliable script that pulls fresh product details and current prices from eBay every 24 hours and drops the results into a clean Excel workbook. The data points I absolutely need are: • Full product title • Item ID / listing URL • Current price (and currency) • Shipping cost if shown • Seller name and feedback score • Listing time-stamp so I can track changes day-to-day WORKFLOW OVERVIEW: You will receive: Images from Singapore card vendors showing buyback prices Use AI tools to: Identify the card from the image Search eBay: For recent SOLD listings of the same PSA card Extract: Actual accepted Best Offer prices Calculate: Total landed cost in SGD Compare: Vendor Buyback Price vs Total Cost Flag: Arbitrage opportunity if pro...
Google Search Trends Analysis (Python)

## Project Overview
This project analyzes **Google Search Trends data** using Python to understand how a keyword's popularity changes over time and across regions. The analysis covers **15 countries**, compares **time-wise interest**, and explores **related keywords** to uncover search behavior patterns.

## Objectives
* Analyze time-wise search interest of a keyword
* Compare keyword popularity across 15 countries
* Identify and analyze related search keywords
* Visualize trends for better insights

## Tools & Technologies
* Python
* Pytrends (Google Trends API Wrapper)
* Pandas
* NumPy
* Matplotlib
* Seaborn
* Jupyter Notebook

## Project Structure
```
├── google data analysis # Main notebook
├── # Proj...
```
1. Add basic Captcha to - Will you be using a WordPress plug-in or some other application? If so, which plug-in will you use and how will it work? How long will it take to install it? 2. Stop website scraping. What application do you propose using that will stop website scraping? How effective is it? Will it stop all illegal bot scraping? Will it still allow legitimate search bots such as Google, Yahoo, etc. to search the website? 3. Create a new page titled 142-page Private Placement Memorandum example document. This Private Placement consists of 5 distinct documents that make up the Private Placement Memorandum (PPM). Offering Document – 60 Pages This document is given to the investor and does not need to be returned. Business Plan – 39 pages Addendum - A This doc...
We need a Python expert to build an end-to-end automated web-scraping and data-processing system for German B2B directories. The goal is to eliminate all manual steps — scraping, cleaning, deduplication, enrichment, and export — within 15 days. You’ll design and implement a high-speed, fault-tolerant pipeline that scrapes ~10 directories, normalizes data (including Umlauts and formatting), removes duplicates, detects updates, and exports clean structured data automatically. ⸻ Core Responsibilities • Develop asynchronous scraping system (Python 3.11+, aiohttp/httpx/Scrapy/Playwright). • Build deduplication & change-detection logic using hash comparison and timestamps. • Design and connect central database (PostgreSQL + SQLite) to store unique compa...
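The deduplication and change-detection step described above can be sketched independently of the scraping layer: hash a canonical form of each record, key it by a normalised company name, and classify each incoming record. This is a minimal sketch under assumed field names; the real pipeline would persist `seen` in PostgreSQL/SQLite rather than a dict:

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Stable digest of a record; canonical JSON makes key order irrelevant."""
    canon = json.dumps(record, sort_keys=True, ensure_ascii=False)
    return hashlib.sha256(canon.encode("utf-8")).hexdigest()

seen: dict[str, str] = {}

def classify(record: dict) -> str:
    """'new', 'updated', or 'duplicate' relative to what was stored before."""
    key = record["name"].strip().lower()  # normalise the key; Umlauts preserved
    digest = record_hash(record)
    if key not in seen:
        seen[key] = digest
        return "new"
    if seen[key] != digest:
        seen[key] = digest
        return "updated"
    return "duplicate"

print(classify({"name": "Müller GmbH", "city": "Köln"}))    # new
print(classify({"name": "Müller GmbH", "city": "Berlin"}))  # updated
print(classify({"name": "Müller GmbH", "city": "Berlin"}))  # duplicate
```

Timestamps would be attached at insert time so "updated" rows carry both first-seen and last-changed dates.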
This project is a Python scraper that scrapes dynamically changing prices from different websites every day, automatically, through a cron job. A couple of the websites use tokens that have to be initiated and that keep changing. All of the sites are being scraped except one, which has a problem; you will fix that. This scraper is also inputting the incorrect price into our database, and you will need to fix that as well. Here's how the system works: prices are scraped from several theme park websites, then run through a calculation algorithm to determine what our price should be. All the sites are inputting the correct prices except Universal; you will fix that (currently it's putting the scraped price in our database and not the calculated price, as it should). All the calculations are ther...
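The Universal bug described above, persisting the scraped price instead of the calculated one, usually comes down to writing the wrong variable at the database step. A hedged sketch of the pattern; the markup/fee rule and field names here are placeholders, not the project's real algorithm:

```python
def our_price(scraped: float, markup: float = 0.10, fee: float = 2.50) -> float:
    """Placeholder pricing rule: percentage markup plus flat fee, rounded to cents."""
    return round(scraped * (1 + markup) + fee, 2)

scraped = 109.99
calculated = our_price(scraped)

# Buggy pattern: db_row["price"] = scraped        (raw value persisted)
# Fixed pattern: persist the calculated value instead.
db_row = {"site": "universal", "price": calculated}
```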
Data Collection – Forex Cash Rates for 154 City-Destination Pairs

1. Objective
We need to gather pricing data for Foreign Currency Notes (Cash) from the top 10 forex vendors in various Indian cities. You will simulate a transaction on their websites to capture the final landing cost (including hidden fees, taxes, and service charges).

Scope:
Product: Foreign Currency Notes (CASH only). Do NOT collect data for Forex Cards.
Transaction Value: Approximately ₹40,000 INR per transaction.
Total Pairs: 14 Indian Cities × 11 Foreign Destinations = 154 Pairs.
Volume: Top 10 vendors per pair (approx. 1,500+ data points).

2. Input Data
A. The 14 Indian Cities (Source Locations): Bangalore, Delhi, Mumbai, Chennai, Hyderabad, Pune, Kolkata, Ahmedabad, Kochi, Lucknow, Jaipur, Surat, Indore, Ludhiana
B. T...
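For sizing, the pair count follows directly from the stated scope: 14 cities × 11 destinations = 154 pairs, and at 10 vendors per pair roughly 1,540 rows. A quick enumeration sketch; the destination list is truncated in the brief, so placeholders stand in for the 11 destinations:

```python
from itertools import product

cities = ["Bangalore", "Delhi", "Mumbai", "Chennai", "Hyderabad", "Pune",
          "Kolkata", "Ahmedabad", "Kochi", "Lucknow", "Jaipur", "Surat",
          "Indore", "Ludhiana"]
destinations = [f"DEST_{i}" for i in range(1, 12)]  # 11 placeholder destinations

pairs = list(product(cities, destinations))
print(len(pairs))       # 154 city-destination pairs
print(len(pairs) * 10)  # 1540 rows at 10 vendors per pair
```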
Hello, I have an Excel file with 5000 rows. This file contains the names and websites of 5000 companies. What I need are the personal email addresses for the import, export departments. You can use apollo, rocketreach, hunter etc. Can you help me? Thank you.
Looking for a Google Maps Google My Business Lead Generator, 50,000 leads per order. This is for immediate work and opens doors to regular assignments with us. We are looking to work with a GMB expert scraper who knows how to manage similar keywords and area keywords to extract all the property and construction related businesses in England, UK for ongoing marketing campaigns. England only. Must use maximum granularity in category descriptors and area descriptors. Scope of work - Generate 50,000 leads from Google Maps. - Utilize Google My Business for lead generation. - Provide leads in specified format or system. - Data columns: Keyword Name Full_Address Street_Address City Zip Municipality State Country Timezone Phone_1 Phone_Standard_format Email Address Website Domain First_category ...
I already have a Zapier automation that hands a submitted Tally-form URL to a ChatGPT step, but right now GPT is not able to see everything on the website. I need the workflow upgraded so the ChatGPT step can “see” absolutely everything on that URL—headers, body copy, layout structure, CTAs, images, and any other visual element—before it starts writing the internal report that follows. Here is what I’m after: once a new Tally response triggers the Zap, an additional step (or steps) should grab the complete HTML of the provided link, chunk it so it fits within GPT’s token limits, and then pass the full context into my existing ChatGPT action. The end result is richer, more accurate analysis that I’ll feed straight into our internal reports. Feel...
Experienced Data Scraper Needed – C-Level Contacts in Jewelry Retail (KSA & Qatar) Description: We are looking for a highly experienced and reliable data scraping specialist to collect accurate contact details of C-level executives and key decision makers from the jewelry retail industry in Saudi Arabia (KSA) and Qatar. If you only know basic scraping, this job is not for you. We need someone who understands data accuracy, validation, and structured delivery. Scope of Work: • Identify jewelry retail companies in KSA and Qatar • Extract verified details of: • CEO / Founder / Owner • Managing Director / General Manager • Head of Operations / Procurement / Retail • Provide the following data fields: • Full Name • Job Title • C...
I have a collection of websites that hold the textual information I need consolidated into a single, well-structured dataset. Rather than copying the material manually, I want the process handled through reliable web-scraping tools so the capture is fast, consistent, and repeatable. Your task is straightforward: • Build (or adapt) a scraper that targets the pages I specify, pulls only the relevant text, and skips ads, navigation links, and other noise. • Deliver the harvested content in a clean CSV or Excel file with clear column headings; if you prefer a database export, let me know and we can adjust. • Include the finished script or notebook so I can rerun the extraction later. Accuracy and formatting matter more to me than sheer speed, so please allow time for basic...
I’m ready to turn an idea into a working Chrome extension that makes it easier to land roles listed on Amazon Jobs. At this stage I know the core requirement: the add-on must run smoothly in Google Chrome and integrate directly with the Amazon Jobs site. Beyond that, I’m open to building in whichever features will give candidates the biggest advantage—whether that ends up being live job alerts, streamlined application auto-fill, advanced filtering or a thoughtful mix of all three. Here’s what I’m looking for you to do: • Help define the exact feature set and workflow after a quick discovery chat. • Design a clean, intuitive UI that feels native inside Chrome. • Build the extension with best-practice front-end tooling (Manifest V3, JavaScript/T...
I’ve created a Google Sheet that will act as a central “lead collector,” and I need it filled with fresh, accurate contact data pulled from publicly available sites. The focus is simple yet crucial: for every company you find, capture the homepage URL and a working email address. (Ask for details in the sheet.) A completely ethical approach is non-negotiable—no gated content, no third-party lists, and no automated harvesting that violates site terms. I’m happy for you to use tools you’re comfortable with (Python, Scrapy, BeautifulSoup, Selenium, Google Apps Script, etc.) as long as you respect rate limits. Email addresses must appear in plain text within the sheet; please avoid hyperlinks or HTML encoding. Deliverables • A Google Sheet ...
Project Title: Automated Web Scraper & AI Summarizer for Legal Documents (Italian Administrative Justice Portal) Project Overview: I need an automated workflow to monitor judicial rulings from the Italian Administrative Justice website (). The system should perform weekly searches based on dynamic keywords (e.g., "appalti"). Requirements: Web Scraping: Create a scraper (using Python/Playwright or Browse AI) that can bypass anti-bot protections, input keywords, and extract PDF links and metadata for rulings published in the last 7 days. Cloud Integration: Use Make (Integromat) to organize the results. For each keyword, create/update a specific Google Drive folder and upload the new PDFs. AI Processing: Integrate the Google Gemini API to: Analyze the rulings within each folder. ...
I’m assembling a nationwide database of barbers and beauty-salon owners in the United States and need your help compiling it. Every record must be fully verified and include three data points: • Owner’s full name with current phone number and email address • Business name with complete street address (city, state, ZIP) • Key service specialties the shop promotes (e.g., fades, coloring, micro-blading, etc.) Accuracy is critical, so I will spot-check contact details and discard any duplicates or outdated entries. Please provide the finished list in a clean spreadsheet or CSV that I can filter and sort easily. If you already have a recent, reliable dataset, let me know; otherwise, outline how you’ll source and verify each contact before I award the p...
PROJECT OVERVIEW I am looking for a detail-oriented researcher to perform a competitive analysis of the Restoration & Mitigation industry (Water, Fire, and Environmental services) in specific geographic regions. SCOPE OF WORK 1) Target Identification: Identify active service providers/contractors currently utilizing Google Local Service Ads (LSAs) within our target markets. 2) Data Extraction: For each qualifying business, compile a clean spreadsheet including: A - Company Name, Physical Address, website if applicable B - Primary Office Line. C - Key Decision Maker (Owner/Principal): Identify the primary point of contact. D - Direct Contact Information: Provide verified Direct Dials or business contact lines for the identified decision-maker. Requiremen...
Hello, I am building a small data collection tool for a medical AI research project. The goal is to collect doctor-patient role-play audio data and prepare it for AI training. I need a simple system with the following features: 1) Web-based interface (works on mobile and desktop) 2) Upload audio files (WAV/MP3) 3) Automatic speech-to-text using Whisper 4) Editable transcript (dialect correction) 5) Field for medical Arabic normalization 6) Term notes input 7) Automatic folder/package generation per session 8) Cloud storage (Google Drive / S3 or similar) This is Phase 1: Data collection only (no diagnosis, no reports). Tech preference: - Frontend: Web (React or simple HTML) - Backend: Python (Flask/FastAPI) - ASR: Whisper - Storage: Cloud + Local Deliverable: A working MVP that allow...
I need investment performance data scraped from a research website and organized into an Excel file. It needs to be set up for scraping on a regular basis. Requirements: - Experience with web scraping tools and techniques - Ability to extract and format data accurately - Proficiency in Excel - Attention to detail and reliability Ideal Skills: - Familiarity with Python, BeautifulSoup, or similar scraping libraries - Prior experience with financial data - Strong data handling and manipulation skills Please provide a sample of your previous work related to web scraping.
I have a collection of websites that contain the text I need organised in a single Excel spreadsheet. Your task is straightforward: visit each assigned site, copy the required text exactly as it appears, and paste it into the correct columns and rows of the workbook I supply. Accuracy and consistency matter more than speed. Please keep original spellings, line breaks, and capitalisation, and double-check that no unseen characters or extra spaces slip in. The spreadsheet already has headers; all you do is populate the empty cells beneath them. I will share: • The list of URLs • A short field-by-field guide so you know which snippet of text belongs in which column • The blank .xlsx file You return: • The completed Excel file, ready for me to import into our system ...
I need a compact Python crawler that pulls public content from Twitter, Instagram and LinkedIn, covering text, image and video posts for any handle I feed it. Here’s the flow I have in mind. The script collects the raw post data (caption, hashtags, basic engagement numbers and, where accessible, image/video URLs) through whichever mix of libraries makes sense—Tweepy or Twitter API v2 for Twitter, Instaloader or Selenium for Instagram, and the official or unofficial LinkedIn API for LinkedIn. After normalising everything into a common JSON schema, the crawler should pass that dataset to an LLM endpoint (OpenAI or similar) and receive back a concise, structured report that includes: • Brand sentiment (positive / neutral / negative trends) • Key thematic buckets t...
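Normalising three platforms into one JSON schema is mostly a field-mapping table applied per platform before anything is sent to the LLM. A minimal, hedged sketch; the raw field names mimic typical API payloads but are assumptions, and the real schema would also carry hashtags, media URLs, and timestamps:

```python
def normalise(platform: str, raw: dict) -> dict:
    """Map platform-specific payloads onto one common schema."""
    mappers = {
        "twitter":   lambda r: {"text": r.get("full_text", ""),
                                "likes": r.get("favorite_count", 0)},
        "instagram": lambda r: {"text": r.get("caption", ""),
                                "likes": r.get("like_count", 0)},
        "linkedin":  lambda r: {"text": r.get("commentary", ""),
                                "likes": r.get("num_likes", 0)},
    }
    return {"platform": platform, **mappers[platform](raw)}

post = normalise("twitter", {"full_text": "Launch day!", "favorite_count": 42})
```

Keeping the mapping in one place means adding a fourth platform later touches a single dict, not the LLM-report code.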
Scope of Work: Automated Data Extraction and Alert System from Etimad Tenders Portal Project Title: Automated Data Scraping, Google Sheets Integration, and Email Alerts for Etimad Pre-Planning Opportunities 1. Background The Etimad platform ( ) publishes pre-planning and upcoming tenders from Saudi government entities. The goal is to develop a fully automated system within two days that extracts tender data, stores it in a structured Google Sheet, and sends email alerts when new opportunities appear that match specific keywords (to be provided). 2. Objectives Automatically extract all relevant tender data from the Etimad Pre-Planning portal. Store and update the extracted data in Google Sheets on a scheduled basis. Detect and highlight new tenders. Send email alerts for new tenders...
Scrape Google For Emails For Cheap
I want a straightforward, low-cost way to pull every unique email address from well over 5,000 messages in my Gmail account. No names, phone numbers, or other fields—just the raw sender and recipient addresses that appear in the header or body of each email. You are free to tackle this with the Gmail API, IMAP, Google Apps Script, Python’s gmail-api-client, or any other method you trust, as long as it stays within Google’s usage limits and my account remains secure. Speed is important but data accuracy matters more; I need a clean list with duplicates removed and obvious bounces, “no-reply”, and Google system addresses filtered out. Deliverable • CSV or XLSX file containing every unique, valid email address extracted from the full mailbox. Acceptance...
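However the mailbox is read (Gmail API, IMAP, or Apps Script), the dedupe-and-filter stage is the same. A sketch of that stage operating on raw message text; the noise filter here is deliberately rough and the exact exclusion list would need agreeing with the client:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
NOISE = re.compile(r"^(no-?reply|mailer-daemon|postmaster)@", re.I)

def unique_addresses(messages):
    """Lower-case, dedupe, and drop obvious system/no-reply senders."""
    seen = set()
    for text in messages:
        for addr in EMAIL.findall(text):
            addr = addr.lower()
            if not NOISE.search(addr):
                seen.add(addr)
    return sorted(seen)

msgs = ["From: Jane <jane@example.com>",
        "no-reply@shop.example says hello",
        "Cc: JANE@example.com, bob@example.org"]
print(unique_addresses(msgs))  # ['bob@example.org', 'jane@example.com']
```

Bounce detection would need a second pass over delivery-status messages; it is not captured by a sender-pattern filter like this one.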
Transferring this project from another freelancer. First thing I will need is my files uploaded to GitHub and connected to the current hosting build on the Railway server. Here is what I will need next: Show discovery (looking for host) 1. Search overhaul. The database currently has around 250 shows pulled through the YouTube API. Right now I am in the process of getting a higher quota; however, I want to optimize the search by using different keywords based on the user's profile through AI, so it does not try to analyze channels already searched. This section is for finding YouTube channels AND Spotify shows that interview guests. 2. Open API integration to find the email contact of each channel. Users can also manually put an email in if there is none. If there is a website with a form, ...
CONTACT ENRICHMENT PROJECT: We need contact information (CEO/President, Plant Manager, Purchasing Manager) for automotive supplier companies. The "To Enrich" sheet contains companies that need contact data. The "Already Done" sheet shows completed examples. WHAT TO FILL IN (Yellow columns in "To Enrich" sheet): Column G: CEO/President Name Full name of CEO, President, Managing Director, or General Manager Column H: CEO/President Title Their exact title (e.g., CEO, President, Managing Director) Column I: CEO/President Email Professional email if available (not required) Column J: Purchasing Contact Name Name of Purchasing Manager, Procurement Director, or Head of Purchasing Column K: Purchasing Contact Title Their exact title Column L: Purchasing Contact ...
I need help extracting text data from CSV files and inputting it into a specified format or system. Requirements: - Experience with data extraction and manipulation - Proficiency in handling CSV files - Attention to detail to ensure accuracy - Ability to work within the specified budget Ideal Skills: - Data entry - Familiarity with spreadsheet software (e.g., Excel) - Basic understanding of data organization principles Looking for freelancers who can complete the task efficiently and accurately. This is the full requirement for the project. Your Operational Requirements (Summary) 1) Quote & Job Intake: Create quotes (Q-files) and job packs (J-files) as you do today; save them into SharePoint folders. The system automatically reads them and creates: Quote records, Assets, Tasks per as...
I need a robust scraping solution that continuously pulls price, full product descriptions, and customer reviews for electronics across roughly ten different e-commerce sites. Each time the script runs, the results should append to a master Excel workbook so that every entry is stored chronologically—allowing me to compare today’s prices with yesterday’s and build long-term trend charts without any manual work. Key expectations • The scraper must visit all target URLs, handle pagination or lazy-loaded content where it exists, and respect each site’s structure. • Collected fields: date/time stamp, site name, product name, current price, full description text, average rating, review count, and a link to the product page. • Excel output: one sheet...
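The append-per-run pattern is straightforward once each row carries its own timestamp. Here it is sketched with a CSV file standing in for the Excel workbook (swapping in openpyxl or pandas `ExcelWriter` is the obvious next step); the column set mirrors the fields listed above:

```python
import csv
import datetime
import os
import tempfile

HEADER = ["timestamp", "site", "product", "price", "rating", "reviews", "url"]

def append_snapshot(path: str, rows: list) -> None:
    """Append one scrape run; create the file with a header on first use."""
    first = not os.path.exists(path)
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if first:
            writer.writerow(HEADER)
        for row in rows:
            writer.writerow([stamp, *row])

path = os.path.join(tempfile.mkdtemp(), "master.csv")
append_snapshot(path, [("siteA", "Headphones X", 79.99, 4.5, 321, "https://example.com/p1")])
append_snapshot(path, [("siteA", "Headphones X", 74.99, 4.5, 330, "https://example.com/p1")])

with open(path, encoding="utf-8") as f:
    row_count = sum(1 for _ in f)  # header + one row per run
```

Because every run appends rather than overwrites, day-over-day price comparison is a simple group-by on product URL.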
I already have a curated list of LinkedIn profile URLs and need the key networking details moved into a single Google Sheet. For every profile, capture each person’s stated interests and list the five types of people they say they want to meet. Those “meet-up” types should be tagged under the three clear categories I care about—Industry experts, Potential clients and Collaborators—so that I can later filter the sheet by networking goal. Please place one profile per row in the Google Sheet and create separate columns for: • Profile URL • Name (as it appears) • Interests (comma-separated) • Type 1 through Type 5 (verbatim wording) • Category tag (Industry experts / Potential clients / Collaborators) Accuracy of the text you...
I need a small utility that sits in the background, pings a single public web page every second, and alerts me the moment it detects any difference in either the visible text or the images. The page updates unpredictably, so true real-time tracking is essential; a one-minute polling interval is already too slow for my use-case. A lightweight approach that respects the site’s bandwidth and avoids triggering blocks or captchas will be valued. I am open to whatever stack you favour—Python with BeautifulSoup or Selenium, Node.js with Puppeteer, or a compiled solution—so long as it is stable on a Windows environment and easy for me to tweak the target URL later. Notification method is flexible: an email is fine, but if you have a smarter suggestion (desktop toast, webhoo...
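Polling every second stays cheap if each fetch is reduced to a fingerprint before comparison. A sketch of the comparison core; the fetch itself is left out and would be requests, Selenium, or Puppeteer depending on the chosen stack:

```python
import hashlib

def fingerprint(visible_text: str, image_urls: list) -> str:
    """One digest for the watched parts of the page; any change flips it."""
    payload = visible_text + "\n" + "\n".join(sorted(image_urls))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

before = fingerprint("Tickets on sale soon", ["/img/banner_v1.png"])
after  = fingerprint("Tickets on sale NOW",  ["/img/banner_v1.png"])
changed = before != after  # True -> fire the email/webhook alert
```

Hashing image URLs (rather than image bytes) keeps each poll to one lightweight request, at the cost of missing an image whose content changes behind an unchanged URL.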
I have an Excel document of roughly 15,000 B2B companies operating worldwide. For every company on that list, I need one working, business-grade email address. Preferred research channels are clear: • Company websites • LinkedIn profiles • Reputable public databases (Apollo) Accuracy is more important than speed. If an address cannot be found after reasonable effort, note “No email found” in the sheet so I can audit the attempt. Deliverables: 1. Completed CSV or Excel file containing – Company name (as provided) – Discovered email address – Source link or brief source note 2. A short summary of any recurring issues you encountered (e.g., companies with only web forms). Please keep formatting consistent and a...
I need an end-to-end AI agent that automatically scouts freelancing websites, general job boards, and specialised training platforms for roles or courses that involve artificial-intelligence work. The agent must: • Crawl and scrape the relevant pages in real time or on a frequent schedule. • Apply NLP or other classification techniques to decide whether a posting is truly AI-related, then tag it by sub-domain (e.g. vision, NLP, MLOps, prompt-engineering). • Deliver concise, deduplicated listings to me through an in-app notification feed—no email or SMS required. For the deployment side I’m open to Python (Scrapy, BeautifulSoup, Selenium), Node, or any stack you are comfortable with so long as it is containerised and can run unattended on a small cloud insta...
I need a single Google Sheet that lets me type a Home Depot or Lowe’s model number in one column and, without any extra clicks, instantly fills the rest of the row with: • Brand • Product title • Full description • Current price on Home Depot • Current price on Lowe’s • Main image URL I’d like the sheet to refresh these fields automatically once every day so I always see up-to-date pricing. I’m happy to use the HASData API (or another service if you can show a better option), and I’ll cover the subscription cost myself; I just need you to wire everything up in Apps Script or another reliable method so the calls stay within API limits and don’t break if the catalog grows. Deliverables • A Google Sheet t...
Title: Senior Python Developer for US Data Pipeline and iOS Verification System (Phase 1) Project Description Suggestion: Overview: > We are looking for a senior Python developer to build an automated data scraping and iOS verification pipeline based in the US. The goal for Phase 1 is to acquire over 10,000 verified leads per day. Core Tasks: 1. Data Scraping: Extract data (name, phone number, age, gender, carrier) from US people search websites. 2. Anti-detection: Must integrate the API and set render=true and super=true. 3. Data Filtering: Implement automatic filtering by wireless/phone number and age range (50-90 years old). 4. Data Verification: Integrate the LoopLookup API to verify iMessage activation status. 5. Data Export: Automatically sort and export data to tagged .t...
Hey! I’m looking to hire an experienced developer to build a universal product-detail scraping pipeline that takes a product URL (any website) and returns a complete structured product record. This is not a “simple HTML parse.” Many target sites are React/Next/Vue, load content via XHR/GraphQL, hide details behind tabs/accordions/modals, and lazy-load images/PDFs. The solution needs to reliably extract everything a human can see on the page, plus the underlying data used to render it. What the scraper must do (high level) Given a product URL, the pipeline should: Load the page like a real user (handle cookies/overlays). Capture all content from multiple sources (DOM + network + interactions). Use GPT API strategically to increase accuracy (field mapping, variant ext...
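One of the cheapest of the "multiple sources" mentioned above is structured data: many React/Next/Vue storefronts still embed a JSON-LD block even when the visible DOM is client-rendered, so it is worth checking before driving a headless browser or calling the GPT API. A minimal sketch (the function name is illustrative):

```python
import json
import re

# Matches <script type="application/ld+json"> blocks in raw HTML.
LDJSON_RE = re.compile(
    r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def extract_product_jsonld(html):
    """Return the first JSON-LD item whose @type is Product, else None."""
    for match in LDJSON_RE.finditer(html):
        try:
            data = json.loads(match.group(1))
        except json.JSONDecodeError:
            continue  # malformed block; fall through to other sources
        items = data if isinstance(data, list) else [data]
        for item in items:
            if item.get("@type") == "Product":
                return item
    return None
```

Fields missing from the structured data (tab content, variant matrices) would then come from the DOM/network capture the brief describes.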
I need help collecting a clean, well-structured list of Twitter accounts that consistently post about AI, ideally tagged by AI category (open source, ML, general AI). Instead of handing you a fixed list, I’ll define the selection rules (for example: minimum follower count, specific AI-related keywords, recent activity, etc.) - minimum follower count of 5,000 and at least several posts with 100+ likes/retweets. Once those criteria are agreed on, you’ll locate the matching profiles and extract two data points per account: • the public profile bio • the direct profile link (around 1M+ profiles) Please return everything in a single CSV file, one row per influencer. Feel free to use Python, Tweepy, Twitter API v2, ScraperAPI, or another reliable method—as long...
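Whatever tooling is used to pull profiles, the selection rules above reduce to a small predicate that can be agreed on up front. A sketch of those rules as stated (the thresholds mirror the brief; the record shape is an assumption):

```python
def matches_criteria(profile, min_followers=5000, min_engagement=100, min_hits=2):
    """Apply the selection rules: a follower floor plus multiple posts
    with 100+ likes or retweets.

    `profile` is assumed to carry a follower count and a sample of
    recent posts with their engagement numbers.
    """
    if profile["followers"] < min_followers:
        return False
    strong_posts = [
        p for p in profile["posts"]
        if p["likes"] >= min_engagement or p["retweets"] >= min_engagement
    ]
    return len(strong_posts) >= min_hits
```

Profiles passing the check would contribute one CSV row each: bio plus profile link.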
AI Automation for Finance Analytics AI / Machine Learning DO NOT BID IF BIDDING FOR A 40-HOUR WORK WEEK. WE ARE LOOKING FOR A CONSULTANT / BUILDER / TUTOR TO WORK WITH OUR TEAM 3-5 HOURS A WEEK TO BUILD THE SYSTEM JOINTLY. DO NOT BID FOR LONGER THAN THOSE HOURS. DO NOT BID FOR FULL-TIME WORK. DETAILS OF WHAT I NEED HELP WITH: I run a real estate private equity and hotel development platform. We want to replace manual analysis and reporting with a practical AI workflow. This is about extracting, comparing, and interpreting data. Excel and PowerPoint remain the source of truth. What we need: - Compare PowerPoint vs Excel and flag mismatches - Explain underwriting models and trace outputs - Compare legal/term sheets vs financial assumptions - Track document versions and changes - Summarize deal ...
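The first item on the list, flagging PowerPoint-vs-Excel mismatches, is essentially a numeric diff once the figures have been extracted from both files. A hedged sketch of that comparison step, assuming values have already been pulled into dicts keyed by metric name (the extraction itself, via e.g. openpyxl and python-pptx, is not shown):

```python
def flag_mismatches(excel_values, ppt_values, tolerance=0.01):
    """Compare metrics present in both the Excel model and the deck;
    return the keys whose values disagree beyond a relative tolerance.

    A small tolerance absorbs rounding differences between the two
    documents instead of flagging every formatting artifact.
    """
    mismatches = []
    for key in excel_values.keys() & ppt_values.keys():
        a, b = excel_values[key], ppt_values[key]
        if abs(a - b) > tolerance * max(abs(a), abs(b), 1):
            mismatches.append(key)
    return sorted(mismatches)
```

Metrics appearing in only one document would be reported separately rather than silently ignored.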
I have a single Instagram Reel that was publicly available for roughly a year before being removed or placed in archive. I saved every trace I could—direct links, full-length screen recordings, and the search-engine cache hits that still reference the post. What I now need is a technical reconstruction of its viewership data. Your objective is to extract and corroborate: • Number of views over time (ideally plotted or tabled) • Any available demographic clues about who watched it • Engagement rates the Reel achieved while live Because the original URL now returns a 404, I expect most of the intel will come from open-source techniques: exploring web archives (Wayback Machine snapshots, Google cache), digging into any residual JSON, and cross-referencing with Ins...
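The Wayback Machine part of that open-source work has a public availability endpoint that reports the snapshot closest to a given date. A small sketch of querying it for the dead URL; the network call itself is left out, so only the query construction and response parsing are shown:

```python
import json
from urllib.parse import urlencode

WAYBACK_API = "https://archive.org/wayback/available"

def availability_url(target_url, timestamp=None):
    """Build a Wayback Machine availability-API query for a dead URL."""
    params = {"url": target_url}
    if timestamp:
        params["timestamp"] = timestamp  # YYYYMMDD: ask for the nearest snapshot
    return WAYBACK_API + "?" + urlencode(params)

def closest_snapshot(response_text):
    """Pull the closest archived snapshot, if any, out of the API response."""
    data = json.loads(response_text)
    return data.get("archived_snapshots", {}).get("closest")
```

Each recovered snapshot would give a view count at a known timestamp, which is what makes the views-over-time table or plot possible.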
I need an experienced Python trading-bot developer to optimize and refactor a live async trading bot connected to REST & WebSocket APIs, which currently slows under load and misses ticks/orders. The task includes profiling bottlenecks, improving async/WebSocket performance, optimizing pandas & SQLite usage, and ensuring real-time execution. Goal: <200 ms tick-to-order latency, zero missed ticks, clean refactored code, tests, and one-command VPS setup.
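A common first refactor for a bot that misses ticks under load is to decouple the WebSocket reader from order placement with a bounded `asyncio.Queue`, so slow order logic backpressures instead of blocking the socket. A minimal sketch of that pattern with the latency measurement the goal mentions; the tick shape and `place_order` stub are assumptions:

```python
import asyncio
import time

async def consume_ticks(queue, place_order):
    """Drain ticks from a bounded queue, hand each to `place_order`,
    and record tick-to-order latency for profiling against the 200 ms goal."""
    latencies = []
    while True:
        tick = await queue.get()
        if tick is None:  # sentinel: shut down cleanly
            break
        await place_order(tick)
        latencies.append(time.monotonic() - tick["received"])
        queue.task_done()
    return latencies

async def demo():
    queue = asyncio.Queue(maxsize=1000)  # bound it so backpressure is visible

    async def place_order(tick):
        pass  # stub; the real bot submits via its REST/WebSocket API

    # In production a separate reader task would feed the queue from the socket.
    for price in (101.0, 101.5):
        await queue.put({"price": price, "received": time.monotonic()})
    await queue.put(None)
    return await consume_ticks(queue, place_order)
```

The recorded latencies then feed directly into the profiling work, alongside moving pandas/SQLite writes off the hot path.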
I need every product on Furnx copied into my existing WooCommerce shop so the catalog mirrors theirs one-to-one. That means grabbing each item’s title, plain-text description and all associated images, then pushing them into WordPress with the correct categories, colour swatches and size variations exactly as they appear on Furnx. Only product details are required right now—reviews and live stock counts can wait for a later phase—so the job is focused on clean data capture and a flawless import workflow. Descriptions must remain in plain text; no extra HTML markup. Images should arrive attached to the right variation, including separate gallery shots where available, and the colour options need to show as clickable swatches in WooCommerce, not just text labels. I’m...
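On the import side, each scraped item maps to a variable-product payload for the WooCommerce REST API. A hedged sketch of that mapping, assuming the wc/v3 products schema; the clickable-swatch rendering itself comes from a variation-swatches plugin on the WordPress side, which this code assumes rather than configures:

```python
def build_variable_product(title, description, colours, sizes, image_urls):
    """Assemble a WooCommerce-style payload for one variable product.

    Per the brief, the description stays plain text (no HTML markup),
    and colour/size are variation attributes so swatches can attach.
    """
    return {
        "name": title,
        "type": "variable",
        "description": description,  # plain text only
        "attributes": [
            {"name": "Color", "visible": True, "variation": True, "options": colours},
            {"name": "Size", "visible": True, "variation": True, "options": sizes},
        ],
        "images": [{"src": url} for url in image_urls],
    }
```

The per-variation image attachments would follow as a second pass, one variation request per colour/size combination.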
I’m looking for an experienced AutoHotkey (AHK) developer to build a clean, reliable script that automates the repetitive navigation and clicking I perform every day inside my web application. Here’s the core scenario: the macro will launch a browser tab, step through a predictable series of pages, click specific buttons or links, wait for elements to load, and continue until the end of the workflow—no data scraping or form filling is required, just fast, accurate page-to-page movement and element selection. I’ll provide: • A screen-recording that shows the exact click path and timing cues • XPaths, CSS selectors, or unique element IDs where available • Any login credentials needed for testing (in a secure manner) You’ll deliver: ...
I need a verified list of 100 LinkedIn contacts who sit at mid-management level—think Managers, Directors and Vice Presidents—inside grocery, retail and beverage store chains across the country. I will give you the exact companies and job titles to hunt for, so your job is purely about smart searching and clean data capture. Once you locate a match, record the person’s full name, current title, company, LinkedIn profile URL and any publicly available work email you can source. Please keep everything tidy in a spreadsheet (Excel or Google Sheets is fine) and make sure there are no duplicates, no consultants, and nobody outside the United States. If you use tools like LinkedIn Sales Navigator, Apollo, Hunter, etc., mention that when you bid so I k...
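The "no duplicates, nobody outside the United States" requirement is easy to enforce mechanically before delivery. A small sketch, assuming each spreadsheet row has been read into a dict with `linkedin_url` and `country` fields (those names are illustrative):

```python
def clean_contacts(rows):
    """Drop duplicate LinkedIn URLs and non-US rows, preserving order.

    URLs are normalised (trailing slash, case) so the same profile
    captured twice under slightly different links still deduplicates.
    """
    seen = set()
    cleaned = []
    for row in rows:
        url = row["linkedin_url"].rstrip("/").lower()
        if url in seen or row.get("country") != "United States":
            continue
        seen.add(url)
        cleaned.append(row)
    return cleaned
```

The consultant/title exclusions would be a manual review pass on top, since titles rarely normalise cleanly.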
I have two source spreadsheets that I need merged and enriched through automated scraping: • “File 1” – 170 k Spanish local businesses with emails • “File 2” – 65 k additional businesses with websites only Phase 1 – Email extraction Using a Python script and well-known libraries (requests, BeautifulSoup, Scrapy or similar), scan every site listed in File 2, capture all working email addresses you can locate, then append them to the corresponding rows so I can produce a unified “File 3”. Phase 2 – Offer harvesting Next, visit each live site in File 3. Where an offer, deal or promotion is publicly displayed, record the details in a fresh Excel sheet with these exact columns: Business ID | Business Name | Offer...
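The Phase 1 email capture reduces, at its core, to scanning fetched HTML for address patterns. A minimal sketch of that extraction step using only the standard library (the fetching loop over File 2's sites, via requests or Scrapy, is not shown); obfuscated addresses like "info [at] example.com" are out of scope for this pattern:

```python
import re

# Conventional pragmatic email pattern; not a full RFC 5322 validator.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(html):
    """Return unique, lowercased email addresses found in raw HTML."""
    found = {m.group(0).lower() for m in EMAIL_RE.finditer(html)}
    # Drop common false positives such as retina-image filenames (logo@2x.png).
    return sorted(e for e in found if not e.endswith((".png", ".jpg", ".gif")))
```

Each site's results would then be appended to the matching File 2 row to produce the unified File 3 before the Phase 2 offer harvesting begins.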