Redshift hadoop jobs

Filter

My recent searches
Filter by:
Budget
Type
Skills
Languages
    Project State
    2,000 redshift hadoop jobs found, pricing in USD

    I have encountered a problem with my Hadoop project and need assistance. My system is showing the error "HADOOP_HOME and hadoop.home.dir are unset", and I am not certain whether I have set the HADOOP_HOME and hadoop.home.dir variables correctly. This happens when creating a pipeline release in DevOps. For this project, I am looking for someone who: - Has extensive knowledge of Hadoop and its environment variables - Can determine whether I have set HADOOP_HOME and hadoop.home.dir correctly and resolve any related issues - Can figure out the version of Hadoop installed on my system and solve compatibility issues, if any. I will pay for the solution immediately.
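    A quick way the hired developer might verify the variable before touching the pipeline: a minimal Python sketch of the check behind that error message. The /opt/hadoop path is a hypothetical example, not taken from the posting.

```python
import os

def check_hadoop_home(env=os.environ):
    # Mirrors the check behind Hadoop's "HADOOP_HOME and hadoop.home.dir
    # are unset" error: the variable must be set and non-empty.
    home = env.get("HADOOP_HOME")
    if not home:
        return "HADOOP_HOME is unset"
    return f"HADOOP_HOME={home}"

# "/opt/hadoop" is a hypothetical install path, not taken from the posting.
print(check_hadoop_home({"HADOOP_HOME": "/opt/hadoop"}))  # HADOOP_HOME=/opt/hadoop
print(check_hadoop_home({}))                              # HADOOP_HOME is unset
```

    In a DevOps pipeline release, the variable must be exported in the same step (or agent scope) that runs the Hadoop-dependent task, or child processes will not see it.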

    $22 / hr (Avg Bid)
    14 bids

    *Title: Freelance Data Engineer* *Description:* We are seeking a talented freelance data engineer to join our team on a project basis. The ideal candidate will have a strong background in data engineering, with expertise in designing, implementing, and maintaining data pipelines and infrastructure. You will work closely with our data scientists and analysts to ensure the smooth flow of data from various sources to our data warehouse, and to support the development of analytics and machine learning solutions. This is a remote position with flexible hours. *Responsibilities:* - Design, build, and maintain scalable and efficient data pipelines to collect, process, and store large volumes of data from diverse sources. - Collaborate with data scientists and analysts to understand data require...

    $84 (Avg Bid)
    3 bids

    As a beginner, I am seeking a knowledgeable developer who can guide me on effectively using Google Cloud for Hadoop, Spark, Hive, Pig, and MapReduce (MR). The main goal is data processing and analysis. Key Knowledge Areas Needed: - Google Cloud usage for big data management - Relevant functionalities of Hadoop, Spark, Hive, Pig, and MR - Best practices for data storage, retrieval, and workflow streamlining Ideal Skills: - Extensive Google Cloud experience - Proficiency in Hadoop, Spark, Hive, Pig, and MR for data processing - Strong teaching ability for beginners - Demonstrated experience in data processing and analysis.

    $171 (Avg Bid)
    14 bids

    Explain the concepts: Amazon RDS, Amazon Aurora, Amazon DynamoDB, Amazon Neptune, Amazon MemoryDB for Redis, Amazon DocumentDB, Amazon QLDB, Athena overview, Redshift overview, EMR overview.

    $6 / hr (Avg Bid)
    7 bids

    As a beginner, I am seeking a knowledgeable developer who can guide me on effectively using Google Cloud for Hadoop, Spark, Hive, Pig, and MapReduce (MR). The main goal is data processing and analysis. Key Knowledge Areas Needed: - Google Cloud usage for big data management - Relevant functionalities of Hadoop, Spark, Hive, Pig, and MR - Best practices for data storage, retrieval, and workflow streamlining Ideal Skills: - Extensive Google Cloud experience - Proficiency in Hadoop, Spark, Hive, Pig, and MR for data processing - Strong teaching ability for beginners - Demonstrated experience in data processing and analysis.

    $184 (Avg Bid)
    26 bids

    Budget: $5 I...png. I am also attaching some logos below, but feel free to browse the internet and find the PNG logos. All the logos provided should be standard sized, with transparent background, in PNG format, with one color and one grayscale export. I need the following logos: SAP, SAP Business Objects, SAP Analytics Cloud, Microsoft, Microsoft Power BI, Microsoft Power Apps, Microsoft Azure, Microsoft Fabric, AWS, AWS Redshift, Google, Google Cloud Platform, Google BigQuery, Tableau, Qlik, Salesforce, Zendesk. For your application to be considered, please: - Be experienced in creating and editing logos - Include at least one example of a logo so I can see you can edit this properly. Attention to detail and a quick turnaround time are critical for this project. I look forward to your speedy ...

    $5 / hr (Avg Bid)
    32 bids

    As a beginner, I am seeking a knowledgeable developer who can guide me on effectively using Google Cloud for Hadoop, Spark, Hive, Pig, and MapReduce (MR). The main goal is data processing and analysis. Key Knowledge Areas Needed: - Google Cloud usage for big data management - Relevant functionalities of Hadoop, Spark, Hive, Pig, and MR - Best practices for data storage, retrieval, and workflow streamlining Ideal Skills: - Extensive Google Cloud experience - Proficiency in Hadoop, Spark, Hive, Pig, and MR for data processing - Strong teaching ability for beginners - Demonstrated experience in data processing and analysis.

    $21 (Avg Bid)
    11 bids

    ...commonly used packages, especially with GCP. Hands-on experience with data migration and data processing on the Google Cloud stack, specifically: BigQuery, Cloud Dataflow, Cloud Dataproc, Cloud Storage, Cloud Dataprep, Cloud Pub/Sub, Cloud Composer & Airflow. Experience designing and deploying large-scale distributed data processing systems with technologies such as PostgreSQL or equivalent databases, SQL, Hadoop, Spark, Tableau. Hands-on experience with Python JSON nested data operations. Exposure to or knowledge of API design and REST, including versioning, isolation, and micro-services. Proven ability to define and build architecturally sound solution designs. Demonstrated ability to rapidly build relationships with key stakeholders. Experience of automated unit testing, automated integra...

    $13 / hr (Avg Bid)
    11 bids

    I am looking for a skilled professional who can efficiently set up a big data cluster. REQUIREMENTS: • Proficiency in Elasticsearch, Hadoop, Spark, and Cassandra • Experience working with large-scale data storage (10+ terabytes) • Ability to structure data effectively. SPECIFIC TASKS INCLUDE: - Setting up the Elasticsearch/Hadoop/Spark/Cassandra big data cluster. - Ensuring the data to be stored is structured. - Preparing for the ability to handle more than 10 terabytes of data. The ideal candidate will have substantial experience with large data structures and a deep understanding of big data database technology. I encourage experts in big data management and those well-versed in big data best practices to bid for this project.

    $30 / hr (Avg Bid)
    3 bids

    ...tasks that need to be performed to ensure that our data structures run smoothly and effectively. Specifically: - One of your main responsibilities would be to add 5 new calculated columns in our Redshift table. To do this, you will need to update existing Glue jobs that are written in Python. Test this in DEV, UAT and Production. - Furthermore, you will need to create or update views and stored procedures in Redshift to create a tableau extract. The core competences that candidates need to possess are: - Extensive experience with AWS data stack including Glue (used with Python), S3, Redshift etc. - Strong understanding of data structures and databases. - Solid knowledge of Python. - A proven track record working on similar projects would be adva...

    $1000 (Avg Bid)
    18 bids

    I'm looking to have a SQL view query written in Redshift. With no specific tables indicated, the freelancer should be adept enough to identify the relevant tables during the development process. Requirement: - Expertise in SQL and Amazon Redshift. What To Include In Your Proposal: - Please include any past work related to SQL queries, especially if you've previously worked with Redshift. Deliverable: - The use of this SQL view query is for reporting purposes, so it needs to be efficient and reliable.
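    Since the posting names no tables, here is only a shape-of-the-deliverable sketch: a reporting view over hypothetical table and column names, demonstrated on SQLite because the basic CREATE VIEW syntax carries over to Redshift (Redshift-specific tuning, such as distribution and sort keys on the underlying tables, would differ).

```python
import sqlite3

# Hypothetical table/columns; the posting names none.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES (1, 'east', 10.0), (2, 'east', 5.0), (3, 'west', 7.5);

    -- The deliverable shape: a reporting view that encodes the
    -- aggregation once and can be queried repeatedly.
    CREATE VIEW region_sales AS
    SELECT region, COUNT(*) AS n_orders, SUM(amount) AS total
    FROM orders
    GROUP BY region;
""")
rows = conn.execute("SELECT * FROM region_sales ORDER BY region").fetchall()
print(rows)  # [('east', 2, 15.0), ('west', 1, 7.5)]
```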

    $78 (Avg Bid)
    6 bids

    We are looking for an Informatica BDM developer with 7+ years of experience who can support us for 8 hours a day, Monday to Friday. Title: Informatica BDM Developer Experience: 5+ yrs 100% Remote Contract: Long term Timings: 10:30 am - 07:30 pm IST Required Skills: Informatica Data Engineering, DIS and MAS • Databricks, Hadoop • Relational SQL and NoSQL databases, including some of the following: Azure Synapse/SQL DW and SQL Database, SQL Server and Oracle • Core cloud services from at least one of the major providers in the market (Azure, AWS, Google) • Agile methodologies, such as SCRUM • Task tracking tools, such as TFS and JIRA

    $1227 (Avg Bid)
    3 bids

    I am seeking a skilled professional proficient in managing big data tasks with Hadoop, Hive, and PySpark. The primary aim of this project is processing and analyzing structured data. Key Tasks: - Implementing Hadoop, Hive, and PySpark for my project to analyze large volumes of structured data. - Using Hive and PySpark for sophisticated data analysis and processing techniques. Ideal Skills: - Proficiency in the Hadoop ecosystem - Experience with Hive and PySpark - Strong background in working with structured data - Expertise in big data processing and data analysis - Excellent problem-solving and communication skills Deliverables: - Converting raw data into useful information using Hive and visualizing query results as graphical representations. - C...

    $17 / hr (Avg Bid)
    15 bids
    Data Engineer Ended

    ...custom scripts as needed. ● Ensure the efficient extraction, transformation, and loading of data from diverse sources into our data warehouse. Data Warehousing: ● Design and maintain data warehouse solutions on AWS, with a focus on scalability, performance, and reliability. ● Implement and optimize data models for efficient storage and retrieval in AWS Redshift. AWS Service Utilization: ● Leverage AWS services such as S3, Lambda, Glue, Redshift, and others to build end-to-end data solutions. ● Stay abreast of AWS developments and recommend the adoption of new services to enhance our data architecture. SQL Expertise: ● Craft complex SQL queries to support data analysis, reporting, and business intelligence requirements. ● Optimize SQL code for performance and efficiency, ...

    $2347 (Avg Bid)
    8 bids

    ...currently seeking a Hadoop Professional with strong expertise in Pyspark for a multi-faceted project. Your responsibilities will extend to but not limited to: - Data analysis: You'll be working with diverse datasets including customer data, sales data and sensor data. Your role will involve deciphering this data, identifying key patterns and drawing out impactful insights. - Data processing: A major part of this role will be processing the mentioned datasets, and preparing them effectively for analysis. - Performance optimization: The ultimate aim is to enhance our customer targeting, boost sales revenue and identify patterns in sensor data. Utilizing your skills to optimize performance in these sectors will be highly appreciated. The ideal candidate will be skilled in ...

    $463 (Avg Bid)
    25 bids

    ...R), and other BI essentials, join us for global projects. What We're Looking For: Business Intelligence Experts with Training Skills: Data analysis, visualization, and SQL Programming (Python, R) Business acumen and problem-solving Effective communication and domain expertise Data warehousing and modeling ETL processes and OLAP Statistical analysis and machine learning Big data technologies (Hadoop, Spark) Agile methodologies and data-driven decision-making Cloud technologies (AWS, Azure) and data security NoSQL databases and web scraping Natural Language Processing (NLP) and sentiment analysis API integration and data architecture Why Work With Us: Global Opportunities: Collaborate worldwide across diverse industries. Impactful Work: Empower businesses through data-drive...

    $21 / hr (Avg Bid)
    24 bids
    AWS Expert Ended

    ...understanding of cloud and cloud native principles and practices · Hands-on experience with cloud environments - AWS, GCP, Azure · AWS Services:EC2, EC2 container service, Lambda, Elastic beanstalk, S3, EFS, Storage gateway, Glacier, VPC, Direct connect, Transit Gateway, ELB, Auto Scaling, ACM, Cloud Front, Cloud Formation, Cloud Watch, Cloud Trail, SNS, SES, SQS, SWF, IAM, RDS, DynamoDB, Elasticache, Redshift, AWS Backup · Operating Systems: UNIX, Redhat LINUX, Windows · Networking & Protocols: TCP/IP, Telnet, HTTP, HTTPS, FTP, SNMP, LDAP, DNS, DHCP, ARP, SSL, IDM 6.0 and 7.0 · DevOps Tools: Puppet, Chef, Subversion (SVN), GIT, Jenkins, Hudson, Puppet, Ansible, Docker and Kubernetes · Scripting Languages: UNIX Shell Scripting (Bour...

    $8 / hr (Avg Bid)
    1 bid

    I'm launching an extensive project that needs a proficient expert in Google Cloud Platform (including BigQuery, GCS, Airflow/Composer), Hadoop, Java, Python, and Splunk. The selected candidate should display exemplary skills in these tools, and offer long-term support. Key Responsibilities: - Data analysis and reporting - Application development - Log monitoring and analysis Skills Requirements: - Google Cloud Platform (BigQuery, GCS, Airflow/Composer) - Hadoop - Java - Python - Splunk The data size is unknown at the moment, but proficiency in managing large datasets will be advantageous. Please place your bid taking into account all these factors. Your prior experience handling similar projects will be a plus. I look forward to working with a dedicated and know...

    $488 (Avg Bid)
    54 bids

    I need an experienced AWS Data Engineer to develop a robust data warehouse, moving data from SQL Server to S3 using AWS DMS, and then from S3 to Redshift. The data, currently in CSV files, will need to be efficiently imported into AWS. The ultimate aim is to enable seamless analytics and reporting from the data warehouse. Ideal Skills and Experience: - Proficiency in AWS data warehousing - Strong experience with CSV files - Prior experience with sales data highly preferred - Clear understanding of data warehouse design and architecture - Proficiency in AWS DMS and AWS S3 - AWS email notification.
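    For the S3-to-Redshift leg, the usual bulk path is Redshift's COPY command. A hedged sketch that only assembles the statement (the table name, bucket path, and IAM role ARN below are hypothetical placeholders, not values from the posting):

```python
def redshift_copy_sql(table, s3_path, iam_role):
    # Redshift's COPY is the standard bulk-load path for CSV files landed
    # in S3 (e.g. after AWS DMS writes them there). IGNOREHEADER 1 skips
    # the CSV header row.
    return (
        f"COPY {table} FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS CSV IGNOREHEADER 1;"
    )

# All identifiers below are hypothetical placeholders.
print(redshift_copy_sql(
    "sales",
    "s3://my-bucket/dms-output/sales/",
    "arn:aws:iam::123456789012:role/redshift-copy",
))
```

    The statement would then be run against the cluster with any Redshift-capable SQL client; the COPY options shown are the common minimal set for header-bearing CSV, not a full tuning recipe.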

    $122 (Avg Bid)
    7 bids

    ...specifically Cassandra, BigQuery, Snowflake, and Redshift. Key Responsibilities: - Research, understand, and articulate the distinct approaches of the specified databases - Translate complex concepts into clear, concise, and reader-friendly articles Ideal Candidate Should Have: - Very deep expertise in databases and distributed systems. - Ideally, a Ph.D. or deep research writing experience; publications in top conferences are a plus. - An understanding of database architectures - Prior experience writing technical articles for a technical audience - The ability to explain complex topics in an easy-to-understand manner - Knowledge of Cassandra, BigQuery, Snowflake, and Redshift will be a big plus. In the scope of this ...

    $19 / hr (Avg Bid)
    21 bids

    ...commonly used packages, especially with GCP. Hands-on experience with data migration and data processing on the Google Cloud stack, specifically: BigQuery, Cloud Dataflow, Cloud Dataproc, Cloud Storage, Cloud Dataprep, Cloud Pub/Sub, Cloud Composer & Airflow. Experience designing and deploying large-scale distributed data processing systems with technologies such as PostgreSQL or equivalent databases, SQL, Hadoop, Spark, Tableau. Hands-on experience with Python JSON nested data operations. Exposure to or knowledge of API design and REST, including versioning, isolation, and micro-services. Proven ability to define and build architecturally sound solution designs. Demonstrated ability to rapidly build relationships with key stakeholders. Experience of automated unit testing, automated integra...

    $14 / hr (Avg Bid)
    6 bids

    As part of a critical project, I'm looking for an AWS data engineer who has substantial experience in the field spanning 8-10 years. Key Responsibilities: - AWS data engineering - Handling data migration tasks - Working exclusively with structured data. Required Skills and Experience: - Deep knowledge of AWS services such as AWS Glue, Amazon Redshift, and Amazon Athena is mandatory. Prior experience working with these services is a prerequisite. - The candidate should have extensive experience in managing structured data - A minimum of 8-10 years experience in data engineering is required, specifically within the AWS environment - A strong background in data migration tasks is paramount for effective execution of duties. Given the nature of the data involved in this pr...

    $13 / hr (Avg Bid)
    11 bids

    I'm seeking a talented 3D artist to create a high-quality, realistic model of a bottle. My goal is to use this model for static product visualizations. Key Project Elements: - Craft a detailed 3D bottle model - Ensure realistic textures and materials - Optimize for static product visualization renders - Tailor specifically for rendering with Redshift in Cinema 4D Ideal Skills: - Proficiency in Cinema 4D and Redshift - Experience with product visualization - Strong portfolio of realistic 3D models - Ability to deliver detailed and accurate work The final output should be a meticulously detailed, photo-realistic 3D model that can be rendered in high-quality static images. I need an artist who can not only capture the intricate details of the bottle but also present it...

    $380 (Avg Bid)
    52 bids

    As an ecommerce platform looking to optimize our data management, I require assistance with several key aspects of my AWS big data project, including: - Data lake setup and configuration - Development of AWS Glue jobs - Deployment of Hadoop and Spark clusters - Kafka data streaming The freelancer hired for this project must possess expertise in AWS, Kafka, and Hadoop. Strong experience with AWS Glue is essential given the heavy utilization planned for the tool throughout the project. Your suggestions and recommendations regarding these tools and technologies will be heartily welcomed, but keep in mind specific tools are needed to successfully complete this project.

    $844 (Avg Bid)
    20 bids

    ...spearhead the operation. I plan to utilise Amazon S3, Amazon Redshift, and Amazon EMR for this project. I need your insight and hands-on experience with these AWS services to effectively manage and shape the course of this project. The project will handle medium-sized data, approximately between 1TB-10TB. This indicates a large-scale job that demands acute attention and expertise. In regards to data security and privacy, additional encryption measures will be required on top of basic security measures. If you are well-versed in data encryption methods and can ensure the reliability and security of sensitive information, then you are who I am looking for. Skills and Experience: - Comprehensive knowledge of Amazon S3, Amazon Redshift, Amazon EMR - Experience with medium t...

    $888 (Avg Bid)
    27 bids

    I am seeking an experienced AWS data architect capable of designing an optimized data architecture for my project. This would involve the integral use of multiple AWS services including Amazon S3, Amazon Redshift, Amazon Athena, AWS Glue, AWS Lambda, and AWS DynamoDB. It's crucial for the project that the freelancer has expertise in integrating AWS Aurora into this data structure. Key responsibilities: - Comprehensive understanding of Amazon S3, Redshift, Athena, Glue, Lambda, and DynamoDB. - Experience integrating AWS Aurora into data structures. - Design of highly optimized and scalable data architectures. - Familiarity with data migration and management strategies. - Robust problem-solving abilities with a keen attention to detail. Our ideal candidate is analytical...

    $23 / hr (Avg Bid)
    41 bids

    ...Queries: Write a SQL query to find the second highest salary. Design a database schema for a given problem statement. Optimize a given SQL query. Solution Design: Design a parking lot system using object-oriented principles. Propose a data model for an e-commerce platform. Outline an approach to scale a given algorithm for large datasets. Big Data Technologies (if applicable): Basic questions on Hadoop, Spark, or other big data tools. How to handle large datasets efficiently. Writing map-reduce jobs (if relevant to the role). Statistical Analysis and Data Processing: Write a program to calculate statistical measures like mean, median, mode. Implement data normalization or standardization techniques. Process and analyze large datasets using Python libraries like Pandas. Rememb...
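    Two of the interview exercises named above, sketched in Python for illustration (data values are made up):

```python
import statistics

# Statistical measures named in the posting (illustrative data).
data = [2, 3, 3, 5, 7, 10]
print(statistics.mean(data), statistics.median(data), statistics.mode(data))

# "Second highest salary" in plain Python; the SQL analogue would use
# SELECT DISTINCT salary ... ORDER BY salary DESC LIMIT 1 OFFSET 1.
def second_highest(salaries):
    distinct = sorted(set(salaries), reverse=True)
    return distinct[1] if len(distinct) > 1 else None

print(second_highest([100, 200, 200, 300]))  # 200
```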

    $8 / hr (Avg Bid)
    36 bids

    ...customer-centric software products · Analyze existing software implementations to identify areas of improvement and provide deadline estimates for implementing new features · Develop software applications using technologies that include and not limited to core Java (11+ ), Kafka or messaging system, Web Frameworks like Struts / Spring, relational (Oracle) and non-relational databases (SQL, MongoDB, Hadoop, etc), with RESTful microservice architecture · Implement security and data protection features · Update and maintain documentation for team processes, best practices, and software runbooks · Collaborating with git in a multi-developer team · Appreciation for clean and well documented code · Contribution to database design ...

    $1406 (Avg Bid)
    51 bids

    A data analysis/data engineering project involving big data needs to be done. The candidate must have command of big data solutions such as Hadoop.

    $11 / hr (Avg Bid)
    8 bids

    Project Title: Advanced Hadoop Administrator Description: - We are seeking an advanced Hadoop administrator for an inhouse Hadoop setup project. - The ideal candidate should have extensive experience and expertise in Hadoop administration. - The main tasks of the Hadoop administrator will include data processing, data storage, and data analysis. - The project is expected to be completed in less than a month. - The Hadoop administrator will be responsible for ensuring the smooth functioning of the Hadoop system and optimizing its performance. - The candidate should have a deep understanding of Hadoop architecture, configuration, and troubleshooting. - Experience in managing large-scale data processing and storage environments is requi...

    $310 (Avg Bid)
    3 bids

    ...pipe, streams, Stored procedure, Task, Hashing, Row Level Security, Time Travel etc. Proficiency in SQL, data structures, and database design principles. Strong experience in ETL or ELT Data Pipelines and various aspects, terminologies with Pure SQL like SCD Dimensions, Delta Processing etc. 3+ years of Experience of working with AWS cloud services- S3, Lambda, Glue, Athena, IAM, CloudWatch, Redshift etc. 5+ years of proven expertise in creating pipelines for real time and near real time integration working with different data sources - flat files, XML, JSON, Avro files and databases Excellent communication skills, including the ability to explain complex technical concepts clearly. Prior experience in a client-facing or consulting role is advantageous. Ability to manage p...

    $9022 (Avg Bid)
    2 bids

    I am looking for a freelancer to help me with a Proof of Concept (POC) project focusing on Hadoop. Requirement: we drop a file in HDFS, it is pushed to Spark or Kafka, and the final output/results are pushed into a database. The objective is to show we can handle millions of records as input and land them in the destination. The POC should be completed within 3-4 days and should have a simple level of complexity. Skills and experience required: - Strong knowledge and experience with Hadoop - Familiarity with HDFS and Kafka/Spark - Ability to quickly understand and implement a simple POC project - Good problem-solving skills and attention to detail
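    The described flow can be sketched end-to-end in plain Python with stand-ins for each stage; this is only an illustration of the POC shape, not the actual Hadoop/Kafka/Spark stack:

```python
import csv
import io
import sqlite3

# Stand-ins for the POC stages (not the real stack): an in-memory CSV for
# the file dropped in HDFS, a generator for the Spark/Kafka step, and
# SQLite for the destination database.
dropped_file = io.StringIO("id,value\n1,10\n2,20\n3,30\n")

def transform(rows):
    # Streaming transform: rows are processed one at a time, so millions
    # of records never need to sit in memory at once.
    for row in rows:
        yield int(row["id"]), int(row["value"]) * 2

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE results (id INTEGER, value INTEGER)")
db.executemany("INSERT INTO results VALUES (?, ?)",
               transform(csv.DictReader(dropped_file)))
summary = db.execute("SELECT COUNT(*), SUM(value) FROM results").fetchone()
print(summary)  # (3, 120)
```

    The real POC would swap each stand-in for its counterpart (HDFS client, Kafka producer/consumer or a Spark job, and the target database driver) while keeping the same streaming shape.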

    $169 (Avg Bid)
    9 bids

    ...performance and efficiency. PySpark, SQL, Python, CDK TypeScript, AWS Glue, EMR, and Andes. Currently migrating from Teradata to AWS. Responsibilities: - Migrate data from another cloud provider to AWS, ensuring a smooth transition and minimal downtime - Design and develop applications that utilize AWS Glue and Athena for data processing and analysis - Optimize data storage and retrieval using AWS S3 and Redshift, as well as other relevant AWS services - Collaborate with other team members and stakeholders to ensure project success and meet client requirements If you have a strong background in AWS migration, expertise in working with structured data, and proficiency in utilizing AWS Glue and Athena, then this project is perfect for you. Apply now and join our team to hel...

    $9 / hr (Avg Bid)
    14 bids

    I am seeking assistance with a research project focused on data warehouse implementation, specifically in the area of cloud-based data warehouses. Skills and experience required for this project include: - Strong knowledge of data warehousing concepts and principles - Experience with cloud-based data warehousing platforms, such as Amazon Redshift or Google BigQuery - Proficiency in data modeling and designing data warehouse schemas - Understanding of ETL (Extract, Transform, Load) processes and tools - Ability to analyze and integrate data from multiple sources - Familiarity with SQL and other programming languages for data manipulation and analysis The deliverable for this project is a comprehensive report that summarizes the research findings and provides recommendations for i...

    $27 (Avg Bid)
    4 bids

    ...of DataNode 3: Mike Set the last two digits of the IP address of each DataNode: IP address of DataNode 1: IP address of DataNode 2: IP address of DataNode 3: Submission Requirements: Submit the following screenshots: Use commands to create three directories on HDFS, named after the first name of each team member. Use commands to upload the Hadoop package to HDFS. Use commands to show the IP addresses of all DataNodes. Provide detailed information (ls -l) of the blocks on each DataNode. Provide detailed information (ls -l) of the fsimage file and edit log file. Include screenshots of the Overview module, Startup Process module, DataNodes module, and Browse Directory module on the Web UI of HDFS. MapReduce Temperature Analysis You are

    $15 (Avg Bid)
    2 bids

    ...Spark, is also crucial. In addition to these core skills, we require expertise in AWS cloud services, particularly AWS Glue and Amazon Kinesis. Experience with AWS Glue will be vital for ETL operations and data integration tasks, while familiarity with Amazon Kinesis is important for real-time data processing applications. Furthermore, the candidate should have a solid understanding of Amazon Redshift, our data warehousing solution, to manage and analyze large datasets efficiently. The Senior Data Engineer will be responsible for designing and implementing scalable data pipelines, ensuring data quality and integrity, and optimizing data processing workflows. The role involves close collaboration with data scientists, analysts, and other stakeholders to support data-driven decis...

    $11 / hr (Avg Bid)
    11 bids

    I have an AWS RDS MySQL database that is 1TB (1000GB) in size. I'm looking for a freelancer to migrate it to AWS Redshift. I have ~10 MySQL queries that I want re-written for Redshift. I will share all the details. Start your proposal with the "Redshift" keyword to be considered.
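    Most of the ~10 query rewrites will be manual and semantic (for example, MySQL's GROUP_CONCAT becomes LISTAGG in Redshift), but one mechanical piece can be automated: MySQL's backtick-quoted identifiers become standard double-quoted identifiers. A small illustrative helper, which assumes backticks never appear inside string literals:

```python
import re

def backticks_to_quotes(mysql_sql):
    # One mechanical rewrite among many: Redshift uses standard SQL
    # double-quoted identifiers where MySQL uses backticks.
    # Assumption: no backticks occur inside string literals.
    return re.sub(r"`([^`]*)`", r'"\1"', mysql_sql)

print(backticks_to_quotes("SELECT `user`.`id` FROM `user`"))
# SELECT "user"."id" FROM "user"
```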

    $1241 (Avg Bid)
    20 bids

    I am looking for a freelancer to create a document that compares the performance and cost of Redshift Serverless and Redshift Provisioned. The purpose of this document is to help me decide between the two options. I want the comparison in terms of query time, pricing, and warm-up time between Redshift Provisioned and Redshift Serverless. Specific dimensions I would like to compare include speed, cost, and scalability. The target audience for this document is technical experts. Ideal skills and experience for this project include: - Strong knowledge of Redshift Serverless and Redshift Provisioned - Experience in conducting performance and cost comparisons - Ability to present complex technical information in a clear and ...
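    A minimal timing harness for such a comparison might separate the first (cold) run from warm runs, since a serverless endpoint typically pays a resume/warm-up cost on the first query that a provisioned cluster does not. The sketch below uses a stand-in workload in place of real Redshift queries:

```python
import statistics
import time

def time_runs(run_query, n=5):
    # Measure the first (cold) run separately from later warm runs:
    # serverless endpoints often pay a resume cost on the first query
    # that provisioned clusters do not.
    timings = []
    for _ in range(n):
        t0 = time.perf_counter()
        run_query()
        timings.append(time.perf_counter() - t0)
    return {"first": timings[0], "warm_median": statistics.median(timings[1:])}

# Stand-in workload; the real comparison would execute the benchmark SQL
# against each Redshift endpoint here.
result = time_runs(lambda: sum(range(100_000)))
print(result["first"] >= 0, result["warm_median"] >= 0)  # True True
```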

    $29 (Avg Bid)
    4 bids

    Big data project in Java needs to be done in 24 hrs. The person needs to be experienced in Spark and Hadoop.

    $132 (Avg Bid)
    10 bids

    I am looking for a freelancer who can write some fairly simple SQL queries for data analysis. The ideal candidate should have experience with MySQL and be comfortable working with intermediate queries involving JOINs and GROUP BY. This project requires the following skills and experience: - Strong proficiency in SQL - Experience with MySQL or Redshift SQL - Ability to write intermediate queries involving JOINs and GROUP BY If you have the necessary skills and experience, please submit your proposal.
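    For reference, the kind of intermediate JOIN plus GROUP BY query the posting describes, demonstrated on SQLite with a hypothetical schema (the same SQL runs on MySQL or Redshift with minor dialect changes):

```python
import sqlite3

# Hypothetical schema to demonstrate the JOIN + GROUP BY pattern.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, name TEXT);
    CREATE TABLE orders (user_id INTEGER, amount REAL);
    INSERT INTO users VALUES (1, 'ana'), (2, 'bo');
    INSERT INTO orders VALUES (1, 10.0), (1, 2.5), (2, 4.0);
""")
rows = conn.execute("""
    SELECT u.name, COUNT(*) AS n_orders, SUM(o.amount) AS total
    FROM users u
    JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
    ORDER BY u.name
""").fetchall()
print(rows)  # [('ana', 2, 12.5), ('bo', 1, 4.0)]
```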

    $130 (Avg Bid)
    42 bids

    ...Timings : 12pm - 9pm IST M - F AWS Data Engineer Requirements • Collaborate with business analysts to understand and gather requirements for existing or new ETL pipelines. • Connect with stakeholders daily to discuss project progress and updates. • Work within an Agile process to deliver projects in a timely and efficient manner. • Have worked extensively on Redshift and understands performance tuning techniques and management of Redshift data workloads. • Design and develop Airflow DAGs to schedule and manage ETL workflows. • Implement best practices for data engineering, including data modeling, data warehousing, and data pipeline architecture. • Monitor and troubleshoot ETL pipelines to ensure smooth operation. • Custom Python f...

    $3294 (Avg Bid)
    16 bids

    Looking for a Hadoop specialist to design a query optimisation solution. Currently, the search freezes when the user tries to run more than one search at a time. Need to implement a solution. This is a remote project. Share your idea first if you have done any such work. The UI is in React and the backend is in Node.js.

    $16 / hr (Avg Bid)
    38 bids

    # Your code goes here
    import 'org.apache.hadoop.hbase.client.HTable'
    import 'org.apache.hadoop.hbase.client.Put'

    def jbytes(*args)
      args.map { |arg| arg.to_s.to_java_bytes }
    end

    def put_many(table_name, row, column_values)
      table = HTable.new(@hbase.configuration, table_name)
      p = Put.new(*jbytes(row))
      column_values.each do |column, value|
        family, qualifier = column.split(':')
        p.add(*jbytes(family, qualifier, value))
      end
      table.put(p)
    end

    # Call the put_many function with sample data
    put_many 'wiki', 'DevOps', {
      "text:" => "What DevOps IaC do you use?",
      "revision:author" => "Frayad Gebrehana",
      "revision:comment" => "Terraform"
    }

    # Get data from the 'wiki' table
    get 'wiki', 'DevOps'

    # Do not remove the exit call below
    exit

    $60 (Avg Bid)
    7 bids

    I am looking for a freelancer who can help me with a project that involves reading data from a Redshift Serverless DB using AWS Glue. The project requirements are as follows: Output format preference: - The extracted data should be in JSON format. Data transformation and cleaning: - No significant cleaning or transformation is required. - I just need the raw data extracted from the Redshift Serverless DB. Ideal skills and experience: - Experience working with Redshift Serverless and AWS Glue. - Experience in Python and TypeScript. If you have the necessary skills and experience, please submit your proposal outlining your approach to this project.

    $175 (Avg Bid)
    22 bids

    I am in need of assistance with Hadoop for the installation and setup of the platform. Skills and experience required: - Proficiency in Hadoop installation and setup - Knowledge of different versions of Hadoop (Hadoop 1.x and Hadoop 2.x) - Ability to work within a tight timeline (project needs to be completed within 7 hours) Please note that there is no specific preference for the version of Hadoop to be used.

    $13 (Avg Bid)
    2 bids

    I am a professional with advanced knowledge and experience in Redshift. I am interested in furthering my understanding of the subject by delving deeper into the topics of data warehousing and query optimization. I am seeking individual training to best fit my needs.

    $75 (Avg Bid)
    1 bid

    ...following AWS services: EC2, S3, Redshift, and Glue. The main objective of this project is to efficiently store, analyze, and migrate data using AWS services. I expect the project to be completed within 1-3 months. Responsibilities: The chosen candidate will be responsible for the following tasks: - Utilize EC2 for scalable computing power and efficient data processing. - Utilize S3 for secure and reliable data storage. - Utilize Redshift for high-performance data warehousing and analytics. - Utilize Glue for ETL (Extract, Transform, Load) processes to efficiently transform and migrate data. Requirements: The ideal candidate for this project should have the following skills and experience: - Strong expertise in AWS services, specifically EC2, S3, Redshift, and G...

    $4 / hr (Avg Bid)
    8 bids

    ...Observatory Data Repository, etc. You are free to use any visual tool that is free for research purposes; some suggestions include Tableau Public, Google Data Studio, Microsoft Power BI, etc. You may also consider the following tools for data extraction, transformation, and loading: Talend Open Studio, MarkLogic, Oracle Data Integrator, Amazon Redshift. I will need assistance with data extraction, transformation, and loading, and with performing exploratory data analysis to understand patterns and trends in the data and creating meaningful visualizations to represent the findings. My desired end result is an Excel spreadsheet or some other file for the data presentation. If you are able to execute this task effectively and timely, I would be very eager to

    $12 (Avg Bid)
    3 bids

    Hello, I need someone who can provide support for my AWS Glue job project. It mainly involves: - creating Glue job scripts - configuring connections with Redshift, pgAdmin DB, and S3 - creating triggers and crawlers - transforming and enriching data - Python knowledge is required - testing and debugging experience - creating Glue workflows is a plus. Please reach out if you are interested in providing support. Regards, Sanch Adane

    $21 / hr (Avg Bid)
    22 bids

    WordPress black theme. Design in photo. Images can be taken from Udemy. Content: Coupon Code: 90OFFOCT23 (subscribe by 7 Oct '23 or till stock lasts) Data Engineering Career Path: Big Data Hadoop and Spark with Scala: Scala Programming In-Depth: Apache Spark In-Depth (Spark with Scala): DP-900: Microsoft Azure Data Fundamentals: Data Science Career Path: Data Analysis In-Depth (With Python): https://www

    $7 (Avg Bid)
    Guaranteed
    $7
    4 entries