    498 HBase jobs found

    My Ubuntu server already has Docker running both of these repos: archival-rpc and ingestor-rpc. I now need a quick, clean setup of the supporting services so everything talks to each other: 1. Spin up HBase (Docker is available) and create the tables required by archival-rpc and ingestor-rpc. 2. Spin up Kafka, link it to HBase, and confirm messages flow from ingestor-rpc into the HBase tables. 3. Prove the full chain works by querying archival-rpc and showing live data being served. Docker is already installed and running; you may add docker-compose or individual containers—whichever is fastest. A concise README or set of commands that reproduces your setup on a fresh box will complete the job. I’m looking for
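
    A minimal docker-compose sketch for this kind of setup might look like the following; the image names, ports, and environment variables are assumptions for illustration, not taken from the post:

        # docker-compose.yml (illustrative; pin images to versions you verify)
        version: "3"
        services:
          zookeeper:
            image: zookeeper:3.8
            ports: ["2181:2181"]
          kafka:
            image: bitnami/kafka:3.4
            depends_on: [zookeeper]
            environment:
              - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
              - ALLOW_PLAINTEXT_LISTENER=yes
            ports: ["9092:9092"]
          hbase:
            image: harisekhon/hbase    # standalone HBase with its own embedded ZooKeeper
            ports: ["16010:16010"]     # HBase master UI

    The tables themselves would then be created in the HBase shell; their names depend on what archival-rpc and ingestor-rpc expect.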

    $29 Avg Bid
    7 bids

    ...data from multiple sources: Logs, transactions, social media, IoT devices, sensors, clickstreams, etc. Tools: Apache Flume, Kafka, Sqoop (for importing from databases). --- 2. Data Storage Big Data needs distributed, fault-tolerant storage (not just normal databases). Options: HDFS (Hadoop Distributed File System) – stores data across many machines. NoSQL Databases – MongoDB, Cassandra, HBase. Cloud Storage – AWS S3, Google Cloud Storage, Azure Data Lake. --- 3. Data Processing Once stored, data must be processed (batch or real-time). Batch Processing (large chunks at once): Hadoop MapReduce Apache Spark (faster, in-memory processing) Stream Processing (real-time, continuous): Apache Kafka + Spark Streaming Apache Flink / Storm --- ...
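
    As a concrete taste of the stream-ingestion side described above, the stock Kafka console tools demonstrate the produce/consume loop; the topic name here is an arbitrary example:

        # create a topic, then push and pull a test message (Kafka 2.2+ CLI flags)
        kafka-topics.sh --bootstrap-server localhost:9092 --create \
          --topic clickstream --partitions 3 --replication-factor 1
        kafka-console-producer.sh --bootstrap-server localhost:9092 --topic clickstream
        kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic clickstream --from-beginning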

    $34 / hr Avg Bid
    5 bids

    ...learning model, optimize large-scale data pipelines, or query massive datasets — I can deliver solutions tailored to your needs. What I Can Do for You: AI & Machine Learning Deep Learning (CNN, LSTM, Transformer) Image & Text Processing (NLP, OCR, Captioning) Model training, tuning, and evaluation TensorFlow, PyTorch, scikit-learn Big Data Engineering ETL pipelines with Apache Spark Hive, HBase, Hadoop ecosystem Data warehouse optimization Stream & batch data processing Databases SQL (PostgreSQL, MySQL) NoSQL (MongoDB, Neo4j, Cassandra) Graph databases & complex queries Development Tools Python scripting & automation REST APIs, FastAPI, Flask Git, Docker, Linux, Jupyter...

    $29 / hr Avg Bid
    114 bids

    I have a CSV file containing sales data, with a size between 1-10 GB. I need a skilled data engineer to ingest this data into RDS, clean it, and load it into HBase using Apache Sqoop. Ultimately, the cleaned data will be analyzed using MapReduce on HBase. Key tasks: - Ingest CSV data into RDS - Clean the data - Load cleaned data into HBase using Apache Sqoop - Conduct analysis using MapReduce on HBase The data cleaning process will involve: - Removing duplicates - Handling missing values - Fixing formatting inconsistencies The ideal freelancer for this project should have: - Proficiency in data engineering and management - Experience with RDS, HBase, Apache Sqoop and MapReduce - Strong ski...
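
    For the ingest step, one plausible approach is MySQL's bulk loader against the RDS endpoint; the table and column names below are hypothetical, and local_infile must be enabled on the RDS instance:

        -- load the CSV into an RDS MySQL table (illustrative schema)
        LOAD DATA LOCAL INFILE 'sales.csv'
        INTO TABLE sales
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        IGNORE 1 LINES
        (sale_id, store_id, sale_date, amount);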

    $61 / hr Avg Bid
    67 bids

    I need someone to handle the processing and analysis of my sales team performance data. The data is currently stored in CSV files and I need it loaded into Amazon RDS, specifically a MySQL instance. Tasks include: - Loading the CSV data into Amazon RDS - Cleaning the data by removing duplicates, handling missing/null values and standardizing formats - Loading the cleaned data into HBase using Apache Sqoop - Performing analysis using MapReduce Ideal skills for this project are: - Proficient in MySQL - Experienced in data cleaning and processing - Familiar with HBase and Apache Sqoop - Competent in using MapReduce for data analysis I am looking for a professional who can deliver high-quality work and has a kee...
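
    The cleaning steps translate naturally into SQL once the data is in MySQL; a sketch against a hypothetical sales table (column names invented):

        -- drop duplicates, keeping the lowest id per natural key
        DELETE s1 FROM sales s1
        JOIN sales s2
          ON s1.store_id = s2.store_id AND s1.sale_date = s2.sale_date
         AND s1.amount = s2.amount AND s1.sale_id > s2.sale_id;

        -- handle missing/null values and standardize a text field
        UPDATE sales SET amount = 0 WHERE amount IS NULL;
        UPDATE sales SET region = UPPER(TRIM(region)) WHERE region IS NOT NULL;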

    $63 / hr Avg Bid
    26 bids

    I'm looking for a professional who can help me transfer relational data from my Amazon RDS MySQL database to HBase using Apache Sqoop. This is a one-time project, so I'm looking for someone who can complete this task efficiently and effectively. Ideal skills for this job include: - Proficiency in Apache Sqoop - Experience with MySQL databases - Knowledge of HBase - Ability to handle relational data Please provide evidence of your skills in your bid. Thank you.
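
    A one-time transfer like this usually reduces to a single Sqoop invocation; a sketch with placeholder connection details and names:

        sqoop import \
          --connect jdbc:mysql://<rds-endpoint>:3306/salesdb \
          --username admin -P \
          --table sales \
          --hbase-table sales \
          --column-family cf \
          --hbase-row-key sale_id \
          --hbase-create-table \
          -m 4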

    $47 / hr Avg Bid
    33 bids

    I have a sales data CSV file that needs to be ingested into AWS RDS. The data will need to be cleaned (including removing duplicates, handling missing values, and normalizing the data) before it is loaded into HBase using Apache Sqoop. Finally, I need some analysis to be performed on the data using MapReduce. Ideal Skills and Experience: - Proficiency in AWS and RDS - Experience with data cleaning and processing - Familiarity with HBase and Apache Sqoop - Strong understanding of MapReduce - Past experience with sales data is a plus Please include in your application a brief description of your relevant experience.
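
    For the final analysis stage, the canonical pattern is HBase's TableMapper. A trimmed sketch of a count-per-store job follows; the table, column family, and qualifier names are assumptions, not from the post:

        // SalesByStore.java - hypothetical row-count-per-store MapReduce over HBase
        import java.io.IOException;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.hbase.HBaseConfiguration;
        import org.apache.hadoop.hbase.client.Result;
        import org.apache.hadoop.hbase.client.Scan;
        import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
        import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
        import org.apache.hadoop.hbase.mapreduce.TableMapper;
        import org.apache.hadoop.hbase.util.Bytes;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
        import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

        public class SalesByStore {
          static class StoreMapper extends TableMapper<Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            @Override
            protected void map(ImmutableBytesWritable row, Result value, Context ctx)
                throws IOException, InterruptedException {
              // emit (store, 1) for every sales row that has a store cell
              byte[] store = value.getValue(Bytes.toBytes("cf"), Bytes.toBytes("store"));
              if (store != null) ctx.write(new Text(Bytes.toString(store)), ONE);
            }
          }

          public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            Job job = Job.getInstance(conf, "sales-by-store");
            job.setJarByClass(SalesByStore.class);
            Scan scan = new Scan();
            scan.setCaching(500);        // larger scanner batches for MR throughput
            scan.setCacheBlocks(false);  // don't pollute the region server block cache
            TableMapReduceUtil.initTableMapperJob(
                "sales", scan, StoreMapper.class, Text.class, IntWritable.class, job);
            job.setReducerClass(IntSumReducer.class);  // sums the 1s per store
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileOutputFormat.setOutputPath(job, new Path(args[0]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
          }
        }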

    $196 Avg Bid
    13 bids

    I need assistance with managing a product dataset contained in a CSV file. The tasks include: - Uploading the CSV dataset into AWS RDS using a provided schema. - Data cleaning, which involves removing null values and duplicate entries. - Transferring the cleaned data to HBase using Apache Sqoop. - Conducting trend analysis on the dataset using MapReduce. Ideal candidates for this project should have strong experience with AWS RDS, data cleaning, Apache Sqoop, HBase, and MapReduce. Please note that no additional data cleaning is required beyond the mentioned tasks.

    $153 Avg Bid
    12 bids

    I'm looking for a skilled data engineer to assist with my dataset. Key Tasks: - Upload and structure a CSV dataset (1GB to 10GB) in AWS RDS - Move the data into HBase using Apache Sqoop - Clean the data, which involves handling missing values, removing duplicates, and standardizing formats - Use MapReduce to process and analyze sales trends, store and vendor performance, and category breakdowns - Deliver clear insights to guide sales, marketing, and vendor strategies Ideal Skills and Experience: - Proficiency in AWS services, specifically RDS - Experience with Apache Sqoop and HBase - Strong data cleaning and preparation skills - Familiarity with MapReduce for data analysis - Ability to interpret data and deliver actionable insights

    Featured
    $206 Avg Bid
    14 bids

    Data Setup & Migration Import the liquor sales data into AWS RDS, using a provided data dictionary to guide the schema. If needed, split large files into smaller parts to keep things efficient. Use Apache Sqoop to move the data from RDS to HBase, and design a clean, scalable schema for HBase. Make sure the data is accurate and consistent in both systems. Cleaning Things Up Get rid of missing, incomplete, or broken records. Standardize formats across fields (especially dates, categories, etc.). Deduplicate entries to keep the data tidy and usable. Batch Processing & Analysis Use MapReduce to process and analyze data in bulk. Key insights we’re looking for: Revenue breakdown by store, county, and liquor category. Top performers – best-selling categ...

    $155 Avg Bid
    20 bids

    ...(2012–2020). Below is a breakdown of the work involved: 1. Data Ingestion & Preparation Import the "Liquor Sales" dataset into AWS RDS, setting up the schema based on a provided data dictionary. If needed, split large files into manageable chunks to optimize performance. Use Apache Sqoop to migrate the data from RDS to HBase, designing a suitable schema with appropriate column families and row keys. Ensure data consistency and validate the integrity of the data in both RDS and HBase. 2. Data Cleaning Clean the dataset to improve quality and ensure accurate analysis: Remove incomplete or missing data. Fix formatting issues and invalid entries. Standardize categorical data and normalize field formats. Deduplicate records to maintain clean dataset...

    $186 Avg Bid
    6 bids

    ...liquor sales dataset spanning from 2012 to 2020. This involves uploading the data into AWS RDS, transferring it to HBase, and ensuring its integrity before analysis. Key Tasks: - Data Ingestion: Upload the dataset into AWS RDS, defining an appropriate schema based on the provided data dictionary. Utilize Apache Sqoop to transfer data from AWS RDS to HBase, creating a tall schema with suitable column families and row keys. - Data Integrity: Validate the integrity of the ingested data in both systems. - Data Cleaning: Remove incomplete values, handle inconsistent data, standardise categories, eliminate duplicates. Ideal Candidate: - Proficient in using AWS RDS, Apache Sqoop, and HBase. - Experienced in data cleaning and preparation for analysis. - Detail-oriented fo...
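
    A sketch of what the "tall" schema step could look like in the HBase shell; the table name, families, and composite row key layout are assumptions, not from the post:

        create 'liquor_sales', {NAME => 'sale', VERSIONS => 1}, {NAME => 'store'}
        # tall design: one row per sale, row key = <store_id>#<date>#<invoice>
        put 'liquor_sales', '2633#2015-11-04#INV-069891', 'sale:amount', '81.00'
        put 'liquor_sales', '2633#2015-11-04#INV-069891', 'store:county', 'Polk'
        # prefix scans then answer "all sales for one store" cheaply
        scan 'liquor_sales', {STARTROW => '2633#', STOPROW => '2633$'}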

    $175 Avg Bid
    7 bids

    ...for querying S3 data AWS EMR (Hadoop/Spark) – for big data processing AWS Step Functions – for workflow orchestration AWS Data Pipeline – for scheduling data workflows AWS CloudWatch – for monitoring AWS CloudFormation/Terraform – for infrastructure as code IAM Roles & Policies – for security and access control Data Storage Management-MySQL, MongoDB, SQL Server, Cassandra, Snowflake, PostgreSQL, HBase, Teradata, Hive DevOps Infrastructure: Jenkins, Azure DevOps, Docker, Kubernetes, Terraform, Ansible Apache Spark & Kafka Python (Pandas, NumPy) for scripting and transformation Tableau / Power BI for visualization If you’re someone with solid AWS Data Engineering experience and love teaching others practically, I’d love to le...

    $23 / hr Avg Bid
    18 bids

    I'm in need of an experienced AWS EMR & Hadoop Administrator. Your primary focus will be managing EMR clusters and optimizing big data workflows. Key Responsibilities: - Setting up and maintaining EMR clusters - Performance monitoring and troubleshooting issues - Constructing an application server based on provided instructions - Strong expertise required in Hive, Spark, HBase, IAM security, and cost optimization Skills & Experience: - Proficient with Unix as the technology stack for the application server - Prior experience building application servers is essential - Familiarity with automation tools like Terraform, AWS CLI, or CloudFormation is a plus - Freelance/Remote opportunity – Please apply with your relevant experience.
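
    For orientation, standing up such a cluster from the CLI looks roughly like the following; the release label, instance sizing, and key name are placeholders:

        aws emr create-cluster \
          --name "hadoop-analytics" \
          --release-label emr-6.10.0 \
          --applications Name=Hadoop Name=Hive Name=Spark Name=HBase \
          --instance-type m5.xlarge --instance-count 3 \
          --use-default-roles \
          --ec2-attributes KeyName=my-key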

    $34 / hr Avg Bid
    33 bids

    ...and optimize servers running CentOS, Ubuntu, and Debian. -Work with distributed systems, including HBase, Hadoop, and Storm. -Configure and maintain a high-availability ArangoDB cluster with redundancy features. -Conduct comprehensive benchmarking and high-availability testing to evaluate system performance and scalability under various conditions. -Set up and integrate monitoring interfaces such as Prometheus to ensure system health and performance monitoring. -Independently review and test the effectiveness of DevOps work and provide constructive feedback. Required Skills & Expertise: -Strong knowledge of server configuration for CentOS, Ubuntu, and Debian. -Hands-on experience with HBase, Hadoop, and Storm. -Proficiency in configuring and managing high-availabilit...

    $57 / hr Avg Bid
    33 bids

    #Your code goes here
    import 'org.apache.hadoop.hbase.client.HTable'
    import 'org.apache.hadoop.hbase.client.Put'

    # convert every argument to a Java byte array, as the HBase client API expects
    def jbytes(*args)
      args.map { |arg| arg.to_s.to_java_bytes }
    end

    # write several column values to one row in a single Put
    def put_many(table_name, row, column_values)
      table = HTable.new(@hbase.configuration, table_name)
      p = Put.new(*jbytes(row))
      column_values.each do |column, value|
        family, qualifier = column.split(':')
        p.add(*jbytes(family, qualifier, value))  # addColumn on HBase 2.x
      end
      table.put(p)
    end

    # Call put_many function with sample data
    put_many 'wiki', 'DevOps', {
      "text:" => "What DevOps IaC do you use?",
      "revision:author" => "Frayad Gebrehana",
      "revision:comment" => "Terraform"
    }

    # Get data from the 'wiki' table
    get 'wiki', 'DevOps'

    # Do not remove the exit call below
    exit

    $92 Avg Bid
    7 bids

    ...Visualization of JanusGraph with Elasticsearch Integration for Relationship Analysis in Banking" Requirements Analysis: a. Conduct stakeholder interviews to gather system requirements b. Document use cases and user stories c. Define data schema and relationship mapping for JanusGraph d. Assess technical constraints and system integrations Planning and Design: a. Select the datastore (HBase or Cassandra) after analysing performance and scalability b. Define the JanusGraph schema, data model, and query patterns c. Plan data migration strategy and sequence from Elasticsearch to JanusGraph d. Design the algorithm for relationship creation between Main party and Other party e. Evaluate visualization libraries and choose the most appropriate for the Link Analysis cha...

    $460 Avg Bid
    1 bid

    ...Visualization of JanusGraph with Elasticsearch Integration for Relationship Analysis in Banking" Requirements Analysis: a. Conduct stakeholder interviews to gather system requirements b. Document use cases and user stories c. Define data schema and relationship mapping for JanusGraph d. Assess technical constraints and system integrations Planning and Design: a. Select the datastore (HBase or Cassandra) after analysing performance and scalability b. Define the JanusGraph schema, data model, and query patterns c. Plan data migration strategy and sequence from Elasticsearch to JanusGraph d. Design the algorithm for relationship creation between Main party and Other party e. Evaluate visualization libraries and choose the most appropriate for the Link Analysis cha...
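
    Whichever datastore wins the analysis, pointing JanusGraph at it is a few lines of configuration. A sketch for the HBase-plus-Elasticsearch pairing; the hostnames and table name are placeholders:

        # janusgraph.properties (illustrative)
        storage.backend=hbase
        storage.hostname=zk1.example.com,zk2.example.com
        storage.hbase.table=janusgraph
        index.search.backend=elasticsearch
        index.search.hostname=es1.example.com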

    $129 Avg Bid
    1 bid

    ...looking for an advanced Hadoop trainer for an online training program. I have some specific topics to be covered as part of the program, and it is essential that the trainer can provide in-depth knowledge and expertise in Hadoop. The topics to be discussed include Big Data technologies, Hadoop administration, Data warehousing, MapReduce, HDFS Architecture, Cluster Management, Real Time Processing, HBase, Apache Sqoop, and Flume. Of course, the trainer should also have good working knowledge about other Big Data topics and techniques. In addition to the topics mentioned, the successful candidate must also demonstrate the ability to tailor the course to meet the learner’s individual needs, making sure that the classes are engaging and fun. The trainer must also possess out...

    $21 / hr Avg Bid
    1 bid

    I am looking for a freelancer who c...through WebEx meetings. Here are the project requirements: Specific Azure topics: - Azure Networking Assistance type: - Virtual Assistance Preferred meeting type: - WebEx Meeting and AZURE Azure Data Factory (ADE), Azure DataBricks, Azure Data Lake Services (ADLS), Azure Blob Services, Azure SQL DB, Azure Active Directory (AAD), Azure Dev Ops. Languages: Scala, Core Java, Python Databases Hive, Hbase Data Ingestion: Sqoop, Kafka, Spark Streaming Data Visualization:Table and AZURE:ADF Databricks Azure Skills and experience: - Strong understanding of Azure Networking - Experience in providing virtual assistance - Proficiency in conducting WebEx meetings If you have the required skills and experience, please bid on this summary

    $739 Avg Bid
    4 bids

    ...Support. Location: Austin, TX. Duration: 12 Months. Job Description: We are looking for someone with strong experience in production support, administration, and development with Hadoop technologies. • Minimum experience: 8 years • Must have hands-on experience managing multiple Hortonworks clusters; troubleshooting, maintaining, and monitoring are the key responsibilities here. • Must be conversant in handling HBase, OpenBSD & Grafana related issues in order to ensure the data flow is smooth and consistent. • Experience with Kafka for stream processing of data • Experience in deployment of new services, patching of hosts, etc. • Good hands-on experience on the Linux (preferably Red Hat) server platform • Should have knowledge in at least one o...

    $435 Avg Bid
    15 bids

    ...topics like AWS Azure GCP DigitalOcean Heroku Alibaba Linux Unix Windows Server (Active Directory) MySQL PostgreSQL SQL Server Oracle MongoDB Apache Cassandra Couchbase Neo4J DynamoDB Amazon Redshift Azure Synapse Google BigQuery Snowflake SQL Data Modelling ETL tools (Informatica, SSIS, Talend, Azure Data Factory, etc.) Data Pipelines Hadoop framework services (e.g. HDFS, Sqoop, Pig, Hive, Impala, Hbase, Flume, Zookeeper, etc.) Spark (EMR, Databricks etc.) Tableau PowerBI Artificial Intelligence Machine Learning Natural Language Processing Python C++ C# Java Ruby Golang Node.js JavaScript .NET Swift Android Shell scripting Powershell HTML5 AngularJS ReactJS VueJS Django Flask Git CI/CD (Jenkins, Bamboo, TeamCity, Octopus Deploy) Puppet/Ansible/Chef Docker Kubernetes ECS/EKS Test...

    $52 Avg Bid
    23 bids

    .../ Define the problem. Create Tables with constraints Design a Schema based on tables and explain the schema. Create primary keys, foreign keys. Create Procedures. Create functions. Create Views Create Index Use of the following Clauses: Example : order by, between, group by, having, order by, AND, OR, with Use Aggregate Functions Use of nested queries, Scalar Subquery. Part 2 has to be done in HBASE Create Tables – 4 tables with Column family and columns Column family - 5 column families: Make sure have different parameter. Ex: versions Minimum 4 Columns in each Column family Insert records Delete records Perform basic queries like your assignment1 Try to extract data using timestamp Insert partial data in a row Describe table. Check table status – enabled or disable...

    $222 Avg Bid
    33 bids

    .../ Define the problem. Create Tables with constraints Design a Schema based on tables and explain the schema. Create primary keys, foreign keys. Create Procedures. Create functions. Create Views Create Index Use of the following Clauses: Example : order by, between, group by, having, order by, AND, OR, with Use Aggregate Functions Use of nested queries, Scalar Subquery. Part 2 has to be done in HBASE Create Tables – 4 tables with Column family and columns Column family - 5 column families: Make sure have different parameter. Ex: versions Minimum 4 Columns in each Column family Insert records Delete records Perform basic queries like your assignment1 Try to extract data using timestamp Insert partial data in a row Describe table. Check table status – enabled or disable...
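
    For Part 2, the requirements map one-to-one onto HBase shell commands. A small illustrative slice, with invented table and family names:

        create 'orders', {NAME => 'cust', VERSIONS => 3}, {NAME => 'item', VERSIONS => 5},
                         {NAME => 'pay'}, {NAME => 'ship'}, {NAME => 'meta'}
        put 'orders', 'r1', 'cust:name', 'Alice'
        put 'orders', 'r1', 'cust:name', 'Alice B.'   # a second version of the same cell
        get 'orders', 'r1', {COLUMN => 'cust:name', VERSIONS => 3}
        get 'orders', 'r1', {COLUMN => 'cust:name', TIMERANGE => [0, 1700000000000]}
        delete 'orders', 'r1', 'cust:name'
        describe 'orders'
        is_enabled 'orders'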

    $69 Avg Bid
    10 bids

    Looking for a Flutter (Dart) developer for long-term project work with a fixed monthly payment. Required skills: 1: At least 2 apps published 2: Dart 3: API 4: SQLite, HBase 5: In-app purchase integration experience 6: Bluetooth experience

    $468 Avg Bid
    74 bids

    ...oriented discussion. Must Have: ● At least 6+ years of total IT experience ● At least 4+ years of experience in design and development using the Hadoop technology stack and programming languages ● Hands-on experience in 2 or more areas: o Hadoop, HDFS, MR o Spark Streaming, Spark SQL, Spark ML o Kafka/Flume o Apache NiFi o Worked with Hortonworks Data Platform o Hive / Pig / Sqoop o NoSQL Databases HBase/Cassandra/Neo4j/MongoDB o Visualisation & Reporting frameworks like D3.js, Zeppelin, Grafana, Kibana, Tableau, Pentaho o Scrapy for crawling websites o Good to have knowledge of Elasticsearch o Good to have understanding of Google Analytics data streaming o Data security (Kerberos/Open LDAP/Knox/Ranger) ● Should have a very good overview of the current landscape and ability to...

    $43 / hr Avg Bid
    6 bids

    Data Engineers 6+ yrs: At least 6+ years of total IT experience ● At least 4+ years of experience in design and development using the Hadoop technology stack and programming languages ● Hands-on experience in 2 or more areas: o Hadoop, HDFS, MR o Spark Streaming, Spark SQL, Spark ML o Kafka/Flume o Apache NiFi o Worked with Hortonworks Data Platform o Hive / Pig / Sqoop o NoSQL Databases HBase/Cassandra/Neo4j/MongoDB o Visualisation & Reporting frameworks like D3.js, Zeppelin, Grafana, Kibana, Tableau, Pentaho o Scrapy for crawling websites o Good to have knowledge of Elasticsearch o Good to have understanding of Google Analytics data streaming o Data security (Kerberos/Open LDAP/Knox/Ranger) ● Should have a very good overview of the current landscape and ability t...

    $2939 Avg Bid
    2 bids

    1. LDAP service on Oracle Linux 7.3 with RPM packages. 2. LDAP for HDP-2.5.3.0. 3. Install and configure Ranger HDP service security on HBase and Solr tables and collections, and validate the security with 5 users.

    $235 Avg Bid
    1 bid

    ...taking advantage of the CI/CD pipelines when possible - Help with troubleshooting and configuration fine-tuning on several platforms (Apache, Hadoop, HBase, etc.) - Build and maintain a local testing environment replica for developers. - Help plan for "non hyper cloud" deployments. OpenStack, ProxMox, Kubernetes. All are on the table but the most "appropriate" one must be selected considering the architecture and CI/CD capabilities. - Build and maintain "on prem" alternatives of the AWS structure. This will include hardware planning (server) but also deployment of several VMs (or containers at some point) with techs including php+nginx, Hadoop with HBase (and Phoenix), SQL database (probably MySQL) and CEPH object storage. - Be the technical cha...

    $26 / hr Avg Bid
    17 bids

    The purpose of this project is to develop a working prototype of a network monitoring and reporting platform that receives network health and status, traffic data from several network infrastructure monitoring sources, and produces an aggregate of network status data for processing by a data analytics engine. This prototype will be known as NetWatch. The NetWatch solution will utilize data processing and analytics services via the Hadoop infrastructure, and data reporting features of the HBase or MySQL/Datameer tool. The prototype will be used by the Network A&E team to determine its viability as a working engine for network status ...

    $11 - $28
    0 bids

    Please have a look at the below stack. 1. Bash Scripting. 2. Hive 3. Scala Spark 4. HBase and other regular big data technologies.

    Local
    $782 Avg Bid
    16 bids

    - Backup HBase database on internal infrastructure
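
    Depending on the HBase version, two common routes for this are the built-in backup tool (HBase 2.x, once enabled in hbase-site.xml) or snapshot export; the paths and table names below are placeholders:

        # HBase 2.x backup tool (needs hbase.backup.enable=true and related config)
        hbase backup create full hdfs://nn:8020/backups -t sales,customers

        # alternative: snapshot, then copy it off-cluster
        echo "snapshot 'sales', 'sales_snap'" | hbase shell
        hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
          -snapshot sales_snap -copy-to hdfs://backup-nn:8020/hbase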

    $25 / hr Avg Bid
    3 bids

    We are looking for a machine learning engineer who must have the following experience: 1. Python coding: 7+ years of experience 2. Machine Learning: 5+ years of experience (Scikit-Learn, TensorFlow, Caffe, MXNet, Keras, XGBoost) 3. AI/Deep Learning: 5+ years of experience 4. Cloud computing: AWS, S3, EC2, EMR, SageMaker, ECS, Lambda, IAM 5. Distributed computing technology: Hadoop, Spark, HBase, Hive / Impala, or any similar technology Should be an independent developer, NO CONSULTING COMPANY. There will be a series of technical interviews about Python coding, machine learning, AI, and cloud computing. The candidate must have excellent Python coding skills and be able to answer challenging Python questions during the interview.

    $84 / hr Avg Bid
    13 bids

    Design, code, and test: Hive, Sqoop, HBase, YARN, UNIX shell scripting. Spark and Scala are mandatory. You should have working experience from previous projects, not beginner-level projects, so please be ready to design, develop, and fix bugs. Working hours and the rest we can decide over chat.

    $84 / hr Avg Bid
    4 bids

    I am trying to run the hbase backup command and got the error below: root@machine:~/hbase-2.4.12# hbase backup Error: Could not find or load main class backup Caused by: java.lang.ClassNotFoundException: backup. I need this fixed; some tips below. HBase is installed; just enable the configuration in the XML file, start HBase, and confirm it is working well. Running HBase on Ubuntu Linux; some help below:
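
    For reference, the usual cause of "Could not find or load main class backup" is that the backup feature is disabled; per the HBase backup-and-restore documentation, enabling it involves properties along these lines in hbase-site.xml (a sketch; verify the exact set against the docs for your 2.4.x release):

        <property>
          <name>hbase.backup.enable</name>
          <value>true</value>
        </property>
        <property>
          <name>hbase.master.logcleaner.plugins</name>
          <!-- appended to the existing plugin list -->
          <value>org.apache.hadoop.hbase.backup.master.BackupLogCleaner</value>
        </property>
        <property>
          <name>hbase.procedure.master.classes</name>
          <value>org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager</value>
        </property>
        <property>
          <name>hbase.procedure.regionserver.classes</name>
          <value>org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager</value>
        </property>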

    $20 / hr Avg Bid
    3 bids

    Moving data from WKC to Atlas. There is an issue in one of the category relationship mappings.

    $130 Avg Bid
    3 bids

    Roles And R...high-performance web services for data tracking. High-speed querying. Managing and deploying HBase. Being a part of a POC effort to help build new Hadoop clusters. Test prototypes and oversee handover to operational teams. Propose best practices/standards. Skills Required: Good knowledge in back-end programming, specifically Java, JS, Node.js and OOAD. Good knowledge of database structures, theories, principles, and practices. Ability to write Pig Latin scripts. Hands-on experience in HiveQL. Familiarity with data loading tools like Flume, Sqoop. Knowledge of workflow/schedulers like Oozie. Analytical and problem-solving skills, applied to the Big Data domain. Proven understanding of Hadoop, HBase, Hive, and Pig. Good aptitude in multi-threading and...

    $17 / hr Avg Bid
    1 bid

    Hi Tapasi K., I noticed your profile and would like to offer you my project. Write a spark-submit job that accesses data in a Hive table in one Hadoop/Spark cluster, accesses data in an HBase table in another Hadoop cluster, combines (does some aggregation on) this data, and saves the result in both Hive and HBase. P.S. Hive is in a different Hadoop cluster than HBase (both in the same network / VPC subnet).
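
    A sketch of such a job; the table names, the "id" join key, and the hbase-spark connector options are assumptions, and the remote cluster's hbase-site.xml would need to be on the driver/executor classpath:

        // HiveHBaseAggregate.java - join Hive data with HBase data, write back to both
        import org.apache.spark.sql.Dataset;
        import org.apache.spark.sql.Row;
        import org.apache.spark.sql.SparkSession;
        import static org.apache.spark.sql.functions.sum;

        public class HiveHBaseAggregate {
          public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                .appName("hive-hbase-aggregate")
                .enableHiveSupport()   // Hive metastore of the cluster we submit on
                .getOrCreate();

            Dataset<Row> sales = spark.sql("SELECT id, amount FROM sales_db.sales");

            Dataset<Row> customers = spark.read()
                .format("org.apache.hadoop.hbase.spark")
                .option("hbase.table", "customers")
                .option("hbase.columns.mapping", "id STRING :key, region STRING cf:region")
                .load();

            Dataset<Row> agg = sales.join(customers, "id")
                .groupBy("region").agg(sum("amount").alias("total"));

            agg.write().mode("overwrite").saveAsTable("sales_db.region_totals");  // Hive side
            agg.write()
                .format("org.apache.hadoop.hbase.spark")                          // HBase side
                .option("hbase.table", "region_totals")
                .option("hbase.columns.mapping", "region STRING :key, total DOUBLE cf:total")
                .save();
          }
        }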

    $153 Avg Bid
    1 bid

    We need to hire a Hadoop and Spark expert. Tasks to be done: - Properly configure the Hadoop cluster in HA mode - Properly configure the Spark cluster in HA mode - Install and configure HBase - Install and configure Oozie - Install and configure SSL for all the tools mentioned above. - Configure authentication for all the tools mentioned above. Installation will be done in an on-premise environment. A Linux-based OS (CentOS 9) will be used. All the Hadoop and Spark software will be the full open source version. We are not using Cloudera, Hortonworks, MapR or similar. The project will be paid at an hourly rate for the amount of time it takes to finish the tasks mentioned above. Only tech folks with experience will be considered! :)
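
    For context, the heart of the HBase piece is a few hbase-site.xml properties once HDFS and ZooKeeper HA are in place; the hostnames here are illustrative:

        <property><name>hbase.cluster.distributed</name><value>true</value></property>
        <property><name>hbase.rootdir</name><value>hdfs://mycluster/hbase</value></property>
        <property><name>hbase.zookeeper.quorum</name><value>zk1,zk2,zk3</value></property>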

    $100 / hr Avg Bid
    1 bid

    - Existing infrastructure needs to be backed up with Ansible - Should have knowledge of the following technologies - Ansible - Terraform - Docker - Kubernetes - Postgres - HBase - Gitlab

    $17 / hr Avg Bid
    4 bids

    ...full product life-cycles • Coding skills in JavaScript with a strong base in object-oriented design and functional programming • Strong experience in Node.js and React.js web frameworks • Understanding of basic data structures & algorithms • Experienced with relational databases (MySQL, Postgres, etc.), good working knowledge of SQL • Experience with non-relational databases (MongoDB, Cassandra, HBase, DynamoDB), designing schemas • Experience in API design and best practices • Experience in building microservices-based architectures • Strong experience with any of frameworks such as Express, Koa, Sails, StrongLoop etc. • Web fundamentals like HTML5 and CSS3 • Good design and prototyping skills • Ability to technically l...

    $2334 Avg Bid
    9 bids

    Design, code, and test: Hive, Sqoop, HBase, YARN, UNIX shell scripting. Spark and Scala are mandatory. You should have experience from previous projects, not beginner-level projects, so please be ready to design, develop, and fix bugs. Working hours and the rest we can decide over chat.

    $20 / hr Avg Bid
    3 bids

    ...Docker, Kubernetes, CI/CD GitLab, GitHub, JFrog Artifactory, Docker, Kubernetes, RESTful API, HEAT, Tosca, YAML Validation: Jenkins, Gherkin, Cucumber, Ruby, HP Quality Center Cloud, Python, Docker Developer: (Must have): Python, Pandas, Pytest, CI/CD, Jira, Confluence, GitHub, (Good to have): Python Flask, Dash, DevOps, Big Data architecture, SCRUM, SonarQube (Good to have): Docker, Hadoop, HBase, Kafka, NiFi, Camunda, Databricks, Cloudera, Kubernetes. Engineer capable of designing solutions, writing code, testing code, automating testing and deployment Proven skills, knowledge and experience with statistical computer languages. (Python, etc.) and associated ecosystem (jupyterlab, jupyter notebook, ...), Good knowledge of CI/CD and automated testing using Python Cloud, P...

    $2330 Avg Bid
    10 bids

    Need someone who has experience with Big Data technology: 1. Spark 2. Hadoop 3. HBase 4. Kafka 5. Zookeeper

    $26 Avg Bid
    7 bids

    Need someone who has experience with Big Data technology: 1. Spark 2. Hadoop 3. HBase 4. Kafka 5. Zookeeper

    $34 Avg Bid
    3 bids

    Column family databases are best known because of Google’s BigTable implementation. They are very similar on the surface to relational databases, but they have critical conceptual differences. You will not be able to apply the same sort of solutions that you used in a relational database to a column database.
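
    A quick illustration of one such difference: rows in a column-family store don't share a fixed column set, so the schema fixes only the families. In the HBase shell (table and values invented):

        create 'users', 'profile'
        put 'users', 'u1', 'profile:email', 'a@example.com'
        put 'users', 'u2', 'profile:phone', '555-0100'   # a column row u1 simply doesn't have
        scan 'users'    # each row returns only the cells it actually stores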

    $15 - $46
    0 bids

    ...Cross Region Replication 8. DynamoDB Performance and Partition Key Selection 9. Snowball and AWS Big Data 10. AWS DMS 11. AWS Aurora in Big Data 12. Key Takeaways Module 5 - AWS Big Data Processing Services 1. Learning Objective 2. Amazon EMR 3. Apache Hadoop 4. EMR Architecture 5. EMR Releases and Cluster 6. Choosing Instance and Monitoring 7. Demo - Advance EMR Setting Options 8. Hive on EMR 9. HBase with EMR 10. Presto with EMR 11. Spark with EMR 12. EMR File Storage 13. AWS Lambda 14. Key Takeaways Module 6 - Analysis 1. Learning Objective 2. Redshift Intro and Use cases 3. Redshift Architecture 4. MPP and Redshift in AWS Eco-System 5. Columnar Databases 6. Redshift Table Design - Part 2 7. Demo - Redshift Maintenance and Operations 8. Machine Learning Introduction 9. Machine...

    $328 Avg Bid
    3 bids

    ...Cross Region Replication 8. DynamoDB Performance and Partition Key Selection 9. Snowball and AWS Big Data 10. AWS DMS 11. AWS Aurora in Big Data 12. Key Takeaways Module 5 - AWS Big Data Processing Services 1. Learning Objective 2. Amazon EMR 3. Apache Hadoop 4. EMR Architecture 5. EMR Releases and Cluster 6. Choosing Instance and Monitoring 7. Demo - Advance EMR Setting Options 8. Hive on EMR 9. HBase with EMR 10. Presto with EMR 11. Spark with EMR 12. EMR File Storage 13. AWS Lambda 14. Key Takeaways Module 6 - Analysis 1. Learning Objective 2. Redshift Intro and Use cases 3. Redshift Architecture 4. MPP and Redshift in AWS Eco-System 5. Columnar Databases 6. Redshift Table Design - Part 2 7. Demo - Redshift Maintenance and Operations 8. Machine Learning Introduction 9. Machine...

    $216 Avg Bid
    6 bids

    ...like GCP, AWS. • Experience developing/deploying ML solutions on a public cloud platform (AWS/Azure/Google Cloud) • Has a Bachelor's degree or equivalent real-world experience, preferably in a related field (Engineering, Computer Science, Statistics, Applied Math) • Is always willing to learn and apply new techniques where appropriate • Distributed computing platforms, such as Hadoop (Hive, HBase, Pig), Spark, GraphLab • Databases (traditional and NoSQL) • Bonus Points if you have: • Experience with a mass-market consumer-facing product • Familiarity with auto-differentiation libraries (e.g., TensorFlow, PyTorch, etc.) • Proficiency with the pydata stack (NumPy, scipy, pandas) • Familiarity with scheduling and orch...

    $3613 Avg Bid
    25 bids

    • At least 5 years of consu...Catalog, Cosmos DB, ML Studio, AI/ML, Azure Functions, ARM Templates, Azure DevOps, CI/CD etc. • Cloud migration methodologies and processes including tools like Azure Data Factory, Event Hub, etc. • Familiarity with the technology stack available in the industry for data management, data ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, MongoDB, PostgreSQL etc. • Bachelor's or higher degree in Computer Science or a related discipline. • Experience in the clinical domain is preferred. Budget is $550 to $750 a month. Work is 10 hours a day, 6 days a week; 1 day will be off. The contract will be 3 months and then it will get r...

    $932 Avg Bid
    3 bids