
100 AWS Redshift Jobs - Page 2

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You will be responsible for leading testing efforts for data conversion, migration, and ETL projects, ensuring the quality of tests across all project phases. Key responsibilities include leading testing activities across multiple source and target systems, analyzing data mapping and transformation rules, defining test strategies and plans for data migration validation, collaborating with different teams, validating data processes, developing SQL queries for data validation, coordinating with cross-functional teams, tracking defects, and mentoring junior team members. To succeed in this role, you should have proven experience as a Test Lead for data migration, conversion, and ETL testing projects; hands-on experience with ETL tools such as Informatica, Talend, or DataStage; strong SQL skills; experience handling large volumes of data; familiarity with data warehousing concepts; proficiency in test management tools such as JIRA; strong analytical and problem-solving abilities; and excellent communication and coordination skills. Nice-to-have skills include experience with cloud-based data platforms, exposure to automation frameworks for data validation, knowledge of industry-specific data models, and testing certifications such as ISTQB. Ideally, you should hold a Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
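For illustration, a minimal sketch of the kind of source-to-target validation such a role describes, comparing row counts, null keys, and duplicates between a staging table and its warehouse target; the table names, key column, and connection details are hypothetical.

```python
import psycopg2  # any PostgreSQL-compatible driver works against Redshift

# Hypothetical schema/table names and connection details; adjust to the project.
SOURCE_TABLE = "staging.customer_src"
TARGET_TABLE = "warehouse.customer_dim"

CHECKS = {
    "row_count": "SELECT COUNT(*) FROM {table}",
    "null_business_keys": "SELECT COUNT(*) FROM {table} WHERE customer_id IS NULL",
    "duplicate_business_keys": (
        "SELECT COUNT(*) FROM (SELECT customer_id FROM {table} "
        "GROUP BY customer_id HAVING COUNT(*) > 1) d"
    ),
}

def run_check(cursor, sql_template, table):
    cursor.execute(sql_template.format(table=table))
    return cursor.fetchone()[0]

def main():
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="qa_user", password="***",
    )
    cur = conn.cursor()
    for name, sql in CHECKS.items():
        src = run_check(cur, sql, SOURCE_TABLE)
        tgt = run_check(cur, sql, TARGET_TABLE)
        print(f"{name}: source={src} target={tgt} -> {'PASS' if src == tgt else 'FAIL'}")
    conn.close()

if __name__ == "__main__":
    main()
```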

Posted 2 weeks ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Design, build, and maintain data pipelines on the AWS platform. Work with AWS services like S3, Glue, EMR, and Redshift. Process and analyze large datasets to support business insights. Ensure data quality, integrity, and security in the data lake. Location - Pan India.
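As a rough sketch of the kind of pipeline described above, the following PySpark job (runnable on EMR or Glue) reads raw CSV from S3, applies basic cleansing, and writes partitioned Parquet back to the lake; bucket paths and column names are placeholders.

```python
from pyspark.sql import SparkSession, functions as F

# Bucket paths and column names below are placeholders.
spark = SparkSession.builder.appName("orders-daily-load").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("s3://example-raw-bucket/orders/2025-01-01/"))

cleaned = (raw
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .dropDuplicates(["order_id"])
           .filter(F.col("amount") > 0))

# Write curated, partitioned Parquet back to the data lake; Redshift Spectrum
# or a COPY job can then expose it for analytics.
(cleaned.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-curated-bucket/orders/"))
```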

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Testing Engineer with over 8 years of experience, you will be responsible for developing, maintaining, and executing test cases to validate the accuracy, completeness, and consistency of data across different layers of the data warehouse. Your primary focus will be on testing ETL processes to ensure that data is correctly extracted, transformed, and loaded from source to target systems while adhering to business rules. You will also be required to perform source-to-target data validation to ensure data integrity and identify any discrepancies or data quality issues that may arise. Additionally, you will need to develop automated data validation scripts using SQL, Python, or testing frameworks to streamline and scale testing efforts effectively. In this role, you will conduct testing in cloud-based data platforms such as AWS Redshift, Google BigQuery, and Snowflake to ensure performance and scalability. It is essential to have familiarity with ETL testing tools and frameworks like Informatica, Talend, and dbt. Experience with scripting languages to automate data testing processes will be beneficial for this position. Moreover, familiarity with data visualization tools like Tableau, Power BI, or Looker will further enhance your ability to analyze and present data effectively. This role offers a hybrid work location in Hyderabad and Gurgaon, with an immediate to 15 days notice period requirement. If you are passionate about ensuring data quality and integrity through rigorous testing processes, this opportunity is perfect for you.,
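A minimal sketch of an automated data validation script of the sort mentioned above, using pandas to compare a source extract against a warehouse extract; the file names and key column are hypothetical.

```python
import pandas as pd

# Hypothetical extracts; in practice these frames would come from SQL queries
# against the source system and the warehouse (Redshift/BigQuery/Snowflake).
source = pd.read_csv("source_extract.csv")
target = pd.read_csv("warehouse_extract.csv")

def validate(source_df: pd.DataFrame, target_df: pd.DataFrame, key: str) -> list:
    issues = []
    if len(source_df) != len(target_df):
        issues.append(f"row count mismatch: {len(source_df)} vs {len(target_df)}")
    missing = set(source_df[key]) - set(target_df[key])
    if missing:
        issues.append(f"{len(missing)} source keys missing from target")
    dupes = int(target_df[key].duplicated().sum())
    if dupes:
        issues.append(f"{dupes} duplicate keys in target")
    return issues

problems = validate(source, target, key="order_id")
print("PASS" if not problems else "FAIL: " + "; ".join(problems))
```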

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Kolkata, West Bengal

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself and a better working world for all. EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services leveraging deep industry experience with strong functional and technical capabilities and product knowledge. EY's financial services practice offers integrated Consulting services to financial institutions and other capital markets participants. Within EY's Consulting Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way, we help create a compelling business case for embedding the right analytical practice at the heart of clients' decision-making.

**The Opportunity**

As a Senior Designer and Developer working with Informatica Intelligent Cloud Services (IICS), you will play a crucial role in designing, developing, and managing complex data integration workflows involving multiple sources such as files and tables. Your responsibilities will span various data sources to ensure seamless data movement and transformation for analytics and business intelligence purposes.

**Key Roles and Responsibilities of an IICS Senior Designer and Developer:**

- **Designing and Developing Data Integration Solutions:**
  - Develop ETL (Extract, Transform, Load) mappings and workflows using Informatica Cloud IICS to integrate data from various sources such as files, multiple database tables, cloud storage, and APIs.
  - Configure synchronization tasks involving multiple database tables to ensure efficient data extraction and loading.
  - Build reusable mapping templates for different data loads, including full, incremental, and CDC loads.
- **Handling Multiple Data Sources:**
  - Work with structured, semi-structured, and unstructured data sources including Oracle, SQL Server, Azure Data Lake, Azure Blob Storage, and more.
  - Manage file ingestion tasks to load large datasets from on-premises systems to cloud data lakes or warehouses.
  - Use various cloud connectors and transformations to process and transform data efficiently.
- **Data Quality, Governance, and Documentation:**
  - Implement data quality and governance policies to ensure data accuracy, integrity, and security.
  - Create detailed documentation such as source-to-target mappings, ETL design specifications, and data migration strategies.
  - Develop audit frameworks to track data loads and support compliance requirements.
- **Project Planning and Coordination:**
  - Plan and monitor ETL development projects, coordinate with cross-functional teams, and communicate effectively across organizational levels.
  - Report progress, troubleshoot issues, and coordinate deployments.
- **Performance Tuning and Troubleshooting:**
  - Optimize ETL workflows and mappings for performance.
  - Troubleshoot issues using IICS frameworks and collaborate with support teams as needed.
- **Leadership and Mentoring (Senior Role Specific):**
  - Oversee design and development efforts, review the work of junior developers, and ensure adherence to best practices.
  - Lead the creation of ETL standards and methodologies to promote consistency across projects.

**Summary of Skills and Tools Commonly Used:**

- Informatica Intelligent Cloud Services (IICS), Informatica Cloud Data Integration (CDI)
- SQL, PL/SQL, API integrations (REST V2), ODBC connections, Flat Files, ADLS, Salesforce Netzero
- Cloud platforms: Azure Data Lake, Azure Synapse, Snowflake, AWS Redshift
- Data modeling and warehousing concepts
- Data quality tools and scripting languages
- Project management and documentation tools

In essence, a Senior IICS Designer and Developer role requires technical expertise in data integration across multiple sources, project leadership, and ensuring high-quality data pipelines to support enterprise BI and analytics initiatives.

**What We Look For:** We are seeking a team of individuals with commercial acumen, technical experience, and enthusiasm to learn new things in a fast-moving environment. Join a market-leading, multi-disciplinary team of professionals and work with leading businesses across various industries.

**What Working at EY Offers:** At EY, you will work on inspiring and meaningful projects, receive support, coaching, and feedback from engaging colleagues, and have opportunities for skill development and career progression. You will have the freedom and flexibility to handle your role in a way that suits you best. Join EY in building a better working world, creating long-term value for clients, people, and society, and fostering trust in the capital markets through data-driven solutions and diverse global teams.
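For context, a small Python sketch of the watermark-based incremental/CDC load pattern referenced in the responsibilities above; the SQL, table names, and control-table layout are illustrative assumptions, not an IICS artifact.

```python
from datetime import datetime

# Watermark-driven incremental load pattern; IICS mapping tasks express the same
# idea with in/out parameters. Table, column, and job names are hypothetical.
def build_incremental_statements(last_loaded: datetime) -> list:
    watermark = last_loaded.strftime("%Y-%m-%d %H:%M:%S")
    return [
        # 1. Stage only the rows changed since the last successful run.
        f"INSERT INTO stg.orders_delta "
        f"SELECT * FROM src.orders WHERE updated_at > TIMESTAMP '{watermark}';",
        # 2. Upsert the delta into the target (MERGE is available on Redshift and Snowflake).
        "MERGE INTO dw.orders AS t "
        "USING stg.orders_delta AS s ON t.order_id = s.order_id "
        "WHEN MATCHED THEN UPDATE SET status = s.status, updated_at = s.updated_at "
        "WHEN NOT MATCHED THEN INSERT (order_id, status, updated_at) "
        "VALUES (s.order_id, s.status, s.updated_at);",
        # 3. Advance the watermark in the audit/control table.
        "UPDATE etl.load_control SET last_loaded = CURRENT_TIMESTAMP "
        "WHERE job_name = 'orders';",
    ]

for statement in build_incremental_statements(datetime(2025, 1, 1)):
    print(statement)
```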

Posted 2 weeks ago

Apply

8.0 - 12.0 years

10 - 15 Lacs

Hyderabad

Work from Office

Role & responsibilities: The Data Engineer, Specialist is responsible for building and maintaining scalable data pipelines to ensure seamless integration and efficient data flow across various platforms. This role involves designing, developing, and optimizing ETL (Extract, Transform, Load) processes, managing databases and leveraging big data technologies to support analytics and business intelligence initiatives. Preferred candidate profile: Design, develop and maintain scalable ETL (Extract, Transform, Load) processes to efficiently extract data from various structured and unstructured sources, ensuring accuracy, consistency and performance optimization. Architect and manage database systems to support large-scale data storage and retrieval, ensuring high availability, security and efficiency in handling complex datasets. Integrate and transform data from multiple sources including APIs, on-premises databases and cloud storage, creating unified datasets to support data-driven decision-making across the organization. Collaborate with business intelligence analysts, data scientists and other stakeholders to understand specific data needs, ensuring the delivery of high-quality, business-relevant datasets. Monitor, troubleshoot and optimize data pipelines and workflows to resolve performance bottlenecks, improve processing efficiency and ensure data integrity. Develop automation frameworks for data ingestion, transformation and reporting to streamline data operations and reduce manual effort. Work with cloud-based data platforms and technologies such as AWS (Redshift, Glue, S3), Google Cloud (Big Query, Dataflow), or Azure (Synapse, Data Factory) to build scalable data solutions. Optimize data storage, indexing, and query performance to support real-time analytics and reporting, ensuring cost-effective and high-performing data solutions. Lead or contribute to special projects involving data architecture improvements, migration to modern data platforms, or advanced analytics initiatives.
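As an illustrative sketch of cloud-based ingestion into Redshift, the snippet below submits a COPY from S3 through the Redshift Data API (boto3); the cluster, role ARN, schema, and bucket names are placeholders.

```python
import boto3

# Cluster, IAM role ARN, schema, and bucket names below are placeholders.
client = boto3.client("redshift-data", region_name="us-east-1")

copy_sql = """
    COPY analytics.page_views
    FROM 's3://example-landing-bucket/page_views/2025/01/01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
    FORMAT AS PARQUET;
"""

response = client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=copy_sql,
)

# The Data API is asynchronous; in a real pipeline you would poll or subscribe
# to an event until the statement reaches FINISHED or FAILED.
print(client.describe_statement(Id=response["Id"])["Status"])
```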

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

The role at DigitalX Real World Insight Enabler pod involves working closely with team members to support access, questions, and governance related issues to real world data. You will be responsible for ensuring accuracy, compliance with policies, and maintaining effective communication with internal stakeholders and vendors. Additionally, you will support operational activities for multiple analytics and technology groups in a dynamic team environment. As an Administrator of RWD BoldX Freshservice agile ticketing system, your responsibilities will include developing catalogue forms, automation, enhancements, and handling user requests for RWD services. You will manage the RWD ticketing system demand queue by reviewing requests, routing tickets, and requests based on defined parameters to appropriate RWD Operation groups through Fresh Service. This will involve gathering and reporting information, problem-solving, maintaining records and reports, and providing support to internal RWD team members. Your role will also entail performing liaison activities with contractors, managers, and the RWD team to obtain appropriate ticket forms, documents, approvals, training, third party agreements, and reviews. You will develop, update, and provide RWD training modules using BrainShark to ensure compliance of dataset and environment use. Regularly reviewing user access to ensure various levels of access for RWD managed data and tools are appropriate and current will be part of your responsibilities. You will enhance, develop, and maintain detailed written RWD procedures for new and existing processes. Furthermore, you will assist the Privacy team in processing GDPR Data Subject Requests inside AWS RWD Environment and perform RWD AWS monthly audits to review contractor access. Collaborating with the Vendor Management team around TPA needs for contractor RWD dataset requests and performing RWD EU/US Redshift role assessments are also key aspects of the role. Additionally, you will be responsible for sending and recording vendor communications and responses for BoldX PDS RFI Project Management platform. The required qualifications for this position include experience using help desk tools, basic knowledge of AWS Redshift/Data Lake, proficiency in Microsoft Office products, excellent customer service skills, attention to detail, ability to work in a global organization, strong communication and collaboration skills, and a minimum of 5 years of experience with a BA/BS or equivalent training. Hybrid work arrangements (on-site/work from home) from certain locations may be permitted in accordance with Astellas Responsible Flexibility Guidelines. Astellas is committed to equality of opportunity in all aspects of employment, including Disability/Protected Veterans.,
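A minimal sketch of the kind of periodic access review mentioned above, listing Redshift database users via the standard pg_user catalog view; the connection details are hypothetical.

```python
import psycopg2

# pg_user is a standard Redshift/PostgreSQL catalog view; connection details are placeholders.
ACCESS_REVIEW_SQL = """
    SELECT usename, usesuper, usecreatedb
    FROM pg_user
    ORDER BY usename;
"""

conn = psycopg2.connect(
    host="example-rwd-cluster.abc123.eu-west-1.redshift.amazonaws.com",
    port=5439, dbname="rwd", user="auditor", password="***",
)
cur = conn.cursor()
cur.execute(ACCESS_REVIEW_SQL)
for username, is_superuser, can_create_db in cur.fetchall():
    print(f"{username}\tsuperuser={is_superuser}\tcreate_db={can_create_db}")
conn.close()
```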

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a professional services firm affiliated with KPMG International Limited, KPMG entities in India have been serving national and international clients since August 1993. With offices located across India in cities like Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara, and Vijayawada, our team of professionals leverages a global network of firms while maintaining expertise in local laws, regulations, markets, and competition. We are committed to delivering rapid, performance-based, industry-focused, and technology-enabled services that showcase our extensive knowledge of global and local industries, as well as our deep understanding of the Indian business environment. In order to achieve this, we are looking for individuals with expertise in the following technologies and tools: Python, SQL, AWS Lambda, AWS Glue, AWS RDS, AWS S3, AWS Athena, AWS Redshift, AWS EventBridge, PySpark, Snowflake, GIT, Azure DevOps, JIRA, Cloud Computing, Agile methodologies, Automation, and Talend. If you are passionate about working in a dynamic environment that values equal employment opportunities and embraces diverse perspectives, we invite you to join our team at KPMG in India.,
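For illustration, a small sketch of how several of the listed services fit together: an EventBridge-triggered Lambda that starts a Glue job via boto3; the job name and argument are assumptions.

```python
import boto3

glue = boto3.client("glue")

# Hypothetical job and argument names; an EventBridge schedule invokes this Lambda,
# which starts a Glue ETL job and returns the run id for traceability.
def lambda_handler(event, context):
    run = glue.start_job_run(
        JobName="example-daily-sales-etl",
        Arguments={"--run_date": event.get("time", "")},
    )
    return {"glue_job_run_id": run["JobRunId"]}
```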

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant-Data Engineer, AWS+Python, Spark, Kafka for ETL! Responsibilities Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka. Integrate structured and unstructured data from various data sources into data lakes and data warehouses. Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step, Redshift) Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness. Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms. Migrate the application data from legacy databases to Cloud based solutions (Redshift, DynamoDB, etc) for high availability with low cost Develop application programs using Big Data technologies like Apache Hadoop, Apache Spark, etc with appropriate cloud-based services like Amazon AWS, etc. Build data pipelines by building ETL processes (Extract-Transform-Load) Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data. Responsible for analysing business and functional requirements which involves a review of existing system configurations and operating methodologies as well as understanding evolving business needs Analyse requirements/User stories at the business meetings and strategize the impact of requirements on different platforms/applications, convert the business requirements into technical requirements Participating in design reviews to provide input on functional requirements, product designs, schedules and/or potential problems Understand current application infrastructure and suggest Cloud based solutions which reduces operational cost, requires minimal maintenance but provides high availability with improved security Perform unit testing on the modified software to ensure that the new functionality is working as expected while existing functionalities continue to work in the same way Coordinate with release management, other supporting teams to deploy changes in production environment Qualifications we seek in you! Minimum Qualifications Experience in designing, implementing data pipelines, build data applications, data migration on AWS Strong experience of implementing data lake using AWS services like Glue, Lambda, Step, Redshift Experience of Databricks will be added advantage Strong experience in Python and SQL Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift. Advanced programming skills in Python for data processing and automation. Hands-on experience with Apache Spark for large-scale data processing. Experience with Apache Kafka for real-time data streaming and event processing. Proficiency in SQL for data querying and transformation. Strong understanding of security principles and best practices for cloud-based environments. 
Experience with monitoring tools and implementing proactive measures to ensure system availability and performance. Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment. Strong communication and collaboration skills to work effectively with cross-functional teams. Preferred Qualifications/ Skills Master’s Degree-Computer Science, Electronics, Electrical. AWS Data Engineering & Cloud certifications, Databricks certifications Experience with multiple data integration technologies and cloud platforms Knowledge of Change & Incident Management process Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
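A rough sketch of the streaming portion of such a pipeline: Spark Structured Streaming reading from Kafka and landing Parquet on S3 (assumes the spark-sql-kafka connector is available); the broker, topic, schema, and paths are placeholders.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-orders-stream").getOrCreate()

# Hypothetical broker, topic, and event schema.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", StringType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker-1.example.internal:9092")
       .option("subscribe", "orders")
       .load())

parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("r"))
          .select("r.*"))

query = (parsed.writeStream
         .format("parquet")
         .option("path", "s3://example-data-lake/orders/")
         .option("checkpointLocation", "s3://example-data-lake/_checkpoints/orders/")
         .trigger(processingTime="1 minute")
         .start())
query.awaitTermination()
```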

Posted 3 weeks ago

Apply

4.0 - 6.0 years

6 - 9 Lacs

Gurugram

Work from Office

Role Description : As a Senior Cloud Data Platform (AWS) Specialist at Incedo, you will be responsible for designing, deploying and maintaining cloud-based data platforms on the AWS platform. You will work with data engineers, data scientists and business analysts to understand business requirements and design scalable, reliable and cost-effective solutions that meet those requirements. Roles & Responsibilities: Designing, developing and deploying cloud-based data platforms using Amazon Web Services (AWS) Integrating and processing large amounts of structured and unstructured data from various sources Implementing and optimizing ETL processes and data pipelines Developing and maintaining security and access controls Collaborating with other teams to ensure the consistency and integrity of data Troubleshooting and resolving data platform issues Technical Skills Skills Requirements: In-depth knowledge of AWS services and tools such as AWS Glue, AWS Redshift, and AWS Lambda Experience in building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies Familiarity with cloud-based infrastructure and deployment, specifically on AWS Strong knowledge of programming languages such as Python, Java, and SQL Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks, and promoting a positive work environment that fosters collaboration and productivity, taking responsibility of the whole team. Nice-to-have skills Qualifications 4-6 years of work experience in relevant field B.Tech/B.E/M.Tech or MCA degree from a reputed university. Computer science background is preferred
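As a minimal sketch of an AWS Glue job of the kind described, the skeleton below reads a catalog table, performs light cleanup, and writes curated Parquet; the catalog database, table, and S3 path are hypothetical.

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Minimal Glue job skeleton; catalog database, table, and S3 path are hypothetical.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

source = glue_context.create_dynamic_frame.from_catalog(
    database="example_raw_db", table_name="transactions")

# Convert to a Spark DataFrame for column-level cleanup, then write curated Parquet.
curated = (source.toDF()
           .dropna(subset=["transaction_id"])
           .dropDuplicates(["transaction_id"]))
curated.write.mode("overwrite").parquet("s3://example-curated-bucket/transactions/")

job.commit()
```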

Posted 3 weeks ago

Apply

4.0 - 6.0 years

8 - 13 Lacs

Gurugram

Work from Office

Role Description: As a Senior Cloud Data Platform (AWS) Specialist at Incedo, you will be responsible for designing, deploying and maintaining cloud-based data platforms on the AWS platform. You will work with data engineers, data scientists and business analysts to understand business requirements and design scalable, reliable and cost-effective solutions that meet those requirements. Roles & Responsibilities: Designing, developing and deploying cloud-based data platforms using Amazon Web Services (AWS) Integrating and processing large amounts of structured and unstructured data from various sources Implementing and optimizing ETL processes and data pipelines Developing and maintaining security and access controls Collaborating with other teams to ensure the consistency and integrity of data Troubleshooting and resolving data platform issues Technical Skills Skills Requirements: In-depth knowledge of AWS services and tools such as AWS Glue, AWS Redshift, and AWS Lambda Experience in building scalable and reliable data pipelines using AWS services, Apache Spark, and related big data technologies Familiarity with cloud-based infrastructure and deployment, specifically on AWS Strong knowledge of programming languages such as Python, Java, and SQL Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner. Must understand the company's long-term vision and align with it. Provide leadership, guidance, and support to team members, ensuring the successful completion of tasks, and promoting a positive work environment that fosters collaboration and productivity, taking responsibility of the whole team. Nice-to-have skills Qualifications 4-6 years of work experience in relevant field B.Tech/B.E/M.Tech or MCA degree from a reputed university. Computer science background is preferred

Posted 3 weeks ago

Apply

2.0 - 6.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Role & responsibilities: Develop and maintain scalable ETL/ELT pipelines using Databricks (PySpark, Delta Lake). Design and optimize data models in AWS Redshift for performance and scalability. Manage Redshift clusters and EC2-based deployments, ensuring reliability and cost efficiency. Integrate data from diverse sources (structured/unstructured) into centralized data platforms. Implement data quality checks, monitoring, and logging across pipelines. Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality datasets. Required Skills & Experience: 3-6 years of experience in data engineering. Strong expertise in Databricks (Spark, Delta Lake, notebooks, job orchestration). Hands-on experience with AWS Redshift (cluster management, performance tuning, workload optimization). Proficiency with AWS EC2, S3, and related AWS services. Strong SQL and Python skills. Experience with CI/CD and version control (Git). Preferred candidate profile: We are seeking a skilled Data Engineer with hands-on experience in Databricks and AWS Redshift (including EC2 deployments) to design, build, and optimize data pipelines that support analytics and business intelligence initiatives.
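For illustration, a short Databricks-style sketch of the Delta Lake upsert pattern implied above; the paths and merge key are assumptions.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` already exists

# Hypothetical paths; an incremental batch is merged into a Delta table, which
# can then be unloaded or synced to Redshift for BI workloads.
updates = spark.read.parquet("s3://example-landing/customers_delta/")

target = DeltaTable.forPath(spark, "s3://example-lakehouse/silver/customers/")
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```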

Posted 3 weeks ago

Apply


8.0 - 12.0 years

0 Lacs

Karnataka

On-site

At PwC, the focus in data and analytics engineering is on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. You play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will concentrate on designing and building data infrastructure and systems to enable efficient data processing and analysis. Your responsibilities include developing and implementing data pipelines, data integration, and data transformation solutions. As an AWS Architect / Manager at PwC - AC, you will interact with Offshore Manager/Onsite Business Analyst to understand the requirements and will be responsible for end-to-end implementation of Cloud data engineering solutions like Enterprise Data Lake and Data hub in AWS. Strong experience in AWS cloud technology is required, along with planning and organization skills. You will work as a cloud Architect/lead on an agile team and provide automated cloud solutions, monitoring the systems routinely to ensure that all business goals are met as per the Business requirements. **Position Requirements:** **Must Have:** - Experience in architecting and delivering highly scalable, distributed, cloud-based enterprise data solutions - Strong expertise in the end-to-end implementation of Cloud data engineering solutions like Enterprise Data Lake, Data hub in AWS - Hands-on experience with Snowflake utilities, SnowSQL, SnowPipe, ETL data Pipelines, Big Data model techniques using Python / Java - Design scalable data architectures with Snowflake, integrating cloud technologies (AWS, Azure, GCP) and ETL/ELT tools such as DBT - Guide teams in proper data modeling (star, snowflake schemas), transformation, security, and performance optimization - Experience in load from disparate data sets and translating complex functional and technical requirements into detailed design - Deploying Snowflake features such as data sharing, events, and lake-house patterns - Experience with data security and data access controls and design - Understanding of relational as well as NoSQL data stores, methods, and approaches (star and snowflake, dimensional modeling) - Good knowledge of AWS, Azure, or GCP data storage and management technologies such as S3, Blob/ADLS, and Google Cloud Storage - Proficient in Lambda and Kappa Architectures - Strong AWS hands-on expertise with a programming background preferably Python/Scala - Knowledge of Big Data frameworks and related technologies with experience in Hadoop and Spark - Strong experience in AWS compute services like AWS EMR, Glue, and Sagemaker and storage services like S3, Redshift & Dynamodb - Experience with AWS Streaming Services like AWS Kinesis, AWS SQS, and AWS MSK - Troubleshooting and Performance tuning experience in Spark framework - Spark core, Sql, and Spark Streaming - Experience in flow tools like Airflow, Nifi, or Luigi - Knowledge of Application DevOps tools (Git, CI/CD Frameworks) - Experience in Jenkins or Gitlab with rich experience in source code management like Code Pipeline, Code Build, and Code Commit - Experience with AWS CloudWatch, AWS Cloud Trail, AWS Account Config, AWS Config Rules - Understanding of Cloud data migration processes, methods, and project lifecycle - Business/domain knowledge in Financial Services/Healthcare/Consumer Market/Industrial Products/Telecommunication, Media and Technology/Deal advisory along with technical expertise - Experience in leading technical 
teams, guiding and mentoring team members - Analytical & problem-solving skills - Communication and presentation skills - Understanding of Data Modeling and Data Architecture **Desired Knowledge/Skills:** - Experience in building stream-processing systems using solutions such as Storm or Spark-Streaming - Experience in Big Data ML toolkits like Mahout, SparkML, or H2O - Knowledge in Python - Certification on AWS Architecture desirable - Worked in Offshore/Onsite Engagements - Experience in AWS services like STEP & Lambda - Project Management skills with consulting experience in Complex Program Delivery **Professional And Educational Background:** BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA **Minimum Years Experience Required:** Candidates with 8-12 years of hands-on experience.
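A hedged sketch of star-schema data modeling on Redshift, showing distribution and sort key choices of the kind referenced above; the tables and columns are illustrative only.

```python
# Star-schema DDL sketch for Redshift illustrating distribution and sort keys;
# table and column names are illustrative, not a client design.
DDL_STATEMENTS = [
    """
    CREATE TABLE IF NOT EXISTS dw.dim_customer (
        customer_key BIGINT       NOT NULL,
        customer_id  VARCHAR(64)  NOT NULL,
        segment      VARCHAR(32),
        country      VARCHAR(64)
    )
    DISTSTYLE ALL            -- small dimension: replicate to every node
    SORTKEY (customer_id);
    """,
    """
    CREATE TABLE IF NOT EXISTS dw.fact_sales (
        sale_id      BIGINT        NOT NULL,
        customer_key BIGINT        NOT NULL,
        sale_date    DATE          NOT NULL,
        amount       DECIMAL(12,2)
    )
    DISTKEY (customer_key)   -- co-locate fact rows with the dimension join key
    SORTKEY (sale_date);     -- supports range-restricted scans on date filters
    """,
]

for statement in DDL_STATEMENTS:
    print(statement)  # execute via psycopg2, the Redshift Data API, or any SQL client
```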

Posted 4 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As an AWS Developer at PwC's Advisory Acceleration Center, you will collaborate with the Offshore Manager and Onsite Business Analyst to comprehend requirements and take charge of implementing Cloud data engineering solutions on AWS, such as Enterprise Data Lake and Data hub. With a focus on architecting and delivering scalable cloud-based enterprise data solutions, you will bring your expertise in end-to-end implementation of Cloud data engineering solutions using tools like Snowflake utilities, SnowSQL, SnowPipe, ETL data Pipelines, and Big Data model techniques using Python/Java. Your responsibilities will include loading disparate data sets, translating complex requirements into detailed designs, and deploying Snowflake features like data sharing, events, and lake-house patterns. You are expected to possess a deep understanding of relational and NoSQL data stores, including star and snowflake dimensional modeling, and demonstrate strong hands-on expertise in AWS services such as EMR, Glue, Sagemaker, S3, Redshift, Dynamodb, and AWS Streaming Services like Kinesis, SQS, and MSK. Troubleshooting and performance tuning experience in Spark framework, familiarity with flow tools like Airflow, Nifi, or Luigi, and proficiency in Application DevOps tools like Git, CI/CD frameworks, Jenkins, and Gitlab are essential for this role. Desired skills include experience in building stream-processing systems using solutions like Storm or Spark-Streaming, knowledge in Big Data ML toolkits such as Mahout, SparkML, or H2O, proficiency in Python, and exposure to Offshore/Onsite Engagements and AWS services like STEP & Lambda. Candidates with 2-4 years of hands-on experience in Cloud data engineering solutions, a professional background in BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA, and a passion for problem-solving and effective communication are encouraged to apply to be part of PwC's dynamic and inclusive work culture, where learning, growth, and excellence are at the core of our values. Join us at PwC, where you can make a difference today and shape the future tomorrow!,

Posted 1 month ago

Apply

0.0 - 3.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

We are seeking a Data Engineer Intern or Trainee with the following key skills: - Proficient in SQL, database tuning, and performance optimization - Experience with Airflow implementation using Python or Scala - Strong knowledge of Python and PySpark - Familiarity with AWS Redshift, Snowflake, or Databricks for data warehousing - Ability to work with ETL services in AWS such as EMR, Glue, S3, Redshift, or similar services in GCP or Azure. This opportunity is open to both freshers and individuals with up to 1 year of experience. Comprehensive on-the-job training will be provided for freshers. Candidates with a B.Tech background and no prior IT experience are also encouraged to apply. Job Types: Full-time, Permanent, Fresher Benefits: - Paid sick time - Performance bonus Schedule: - Monday to Friday Experience: - Total work: 1 year (Preferred) Work Location: In person Expected Start Date: 04/08/2025
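As a minimal sketch of the Airflow skill mentioned above, a small DAG wiring extract, transform, and load tasks (assumes Airflow 2.x); the task logic is stubbed and the names are hypothetical.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical task names; each callable would hold the real pipeline logic.
def extract(): print("pull data from the source system")
def transform(): print("clean and reshape with pandas/PySpark")
def load(): print("load into the warehouse (Redshift/Snowflake/Databricks)")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```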

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As an AWS Developer at PwC's Acceleration Center in Bangalore, you will be responsible for the end-to-end implementation of Cloud data engineering solutions like Enterprise Data Lake and Data hub in AWS. You will collaborate with Offshore Manager/Onsite Business Analyst to understand requirements and architect scalable, distributed, cloud-based enterprise data solutions. Your role will involve hands-on experience with Snowflake utilities, SnowSQL, SnowPipe, ETL data Pipelines, and Big Data model techniques using Python/Java. You must have a deep understanding of relational and NoSQL data stores, methods, and approaches such as star and snowflake dimensional modeling. Strong expertise in AWS services like EMR, Glue, Sagemaker, S3, Redshift, Dynamodb, and streaming services like Kinesis, SQS, and MSK is essential. Troubleshooting and performance tuning experience in Spark framework, along with knowledge of flow tools like Airflow, Nifi, or Luigi, is required. Experience with Application DevOps tools like Git, CI/CD Frameworks, Jenkins, or Gitlab is preferred. Familiarity with AWS CloudWatch, Cloud Trail, Account Config, Config Rules, and Cloud data migration processes is expected. Good analytical, problem-solving, communication, and presentation skills are essential for this role. Desired skills include building stream-processing systems using Storm or Spark-Streaming, experience in Big Data ML toolkits like Mahout, SparkML, or H2O, and knowledge of Python. Exposure to Offshore/Onsite Engagements and AWS services like STEP and Lambda would be a plus. Candidates with 2-4 years of hands-on experience in cloud data engineering solutions and a background in BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA are encouraged to apply. Travel to client locations may be required based on project needs. This position falls under the Advisory line of service and the Technology Consulting horizontal, with the designation of Associate based in Bangalore, India. If you are passionate about working in a high-performance culture that values diversity, inclusion, and professional development, PwC could be the ideal place for you to grow and excel in your career. Apply now to be part of a global team dedicated to solving important problems and making a positive impact on the world.,

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Business Intelligence Analyst in our team, you will collaborate with product managers, engineers, and business stakeholders to establish key performance indicators (KPIs) and success metrics for Creator Success. Your role involves creating detailed dashboards and self-service analytics tools utilizing platforms like QuickSight, Tableau, or similar Business Intelligence (BI) tools. You will conduct in-depth analysis on customer behavior, content performance, and livestream engagement patterns. Developing and maintaining robust ETL/ELT pipelines to handle large volumes of streaming and batch data from the Creator Success platform is a key responsibility. Additionally, you will be involved in designing and optimizing data warehouses, data lakes, and real-time analytics systems using AWS services such as Redshift, S3, Kinesis, EMR, and Glue. Ensuring data accuracy and reliability is crucial, and you will implement data quality frameworks and monitoring systems. Your qualifications should include a Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or a related quantitative field. With at least 3 years of experience in business intelligence or analytic roles, you should have proficiency in SQL, Python, and/or Scala. Expertise in AWS cloud services like Redshift, S3, EMR, Glue, Lambda, and Kinesis is required. You should have a strong background in building and optimizing ETL pipelines, data warehousing solutions, and big data technologies like Spark and Hadoop. Familiarity with distributed computing frameworks, business intelligence tools (QuickSight, Tableau, Looker), and data visualization best practices is essential. Your proficiency in SQL and Python is highly valued, along with skills in AWS Lambda, QuickSight, Power BI, AWS S3, AWS Kinesis, ETL, Scala, AWS EMR, Hadoop, Spark, AWS Glue, and data warehousing. If you are passionate about leveraging data to drive business decisions and have a strong analytical mindset, we welcome you to join our team and make a significant impact in the field of Business Intelligence.,

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

About McDonalds: McDonalds Corporation is one of the world's largest employers with a presence in over 100 countries. In Hyderabad, corporate opportunities are available at our global offices, which serve as innovative and operational hubs. The purpose of these offices is to enhance McDonald's global talent pool and in-house expertise. The new office in Hyderabad aims to bring together knowledge in areas like business, technology, analytics, and AI, thereby accelerating our ability to provide impactful solutions for the business and customers worldwide. Position Summary: As an Associate Technical Product Analyst in the Global Technology Enterprise Products & Platforms (EPP) Team, you will be focusing on data management and operations within the Global Data & Analytics Platform (GDAP). The vision of this platform is to always be people-led, product-centric, forward-thinking, and a trusted technology partner. Reporting to the Technical Product Manager, the Senior Technical Analyst will support technical product management leadership, provide guidance to developers, manage squad output, and participate in roadmap and backlog preparation. Responsibilities & Accountabilities: Product roadmap and backlog preparation: - Collaborate with the Technical Product Manager to prioritize roadmap and backlog items - Analyze existing processes to identify inefficiencies and opportunities for improvement - Create detailed requirements documents, user stories, and acceptance criteria - Lead agile ceremonies and act as a leader for Software Development Engineers Technical solutioning and feature development/releases: - Work on designing, developing, and documenting Talend ETL processes - Administer Talend software for data integration and quality - Collaborate with business users and product teams on various management activities - Analyze patches, review defects, and ensure high standards for product delivery Qualifications: Basic Qualifications: - Bachelor's degree in computer science or engineering - 3+ years of experience with AWS RedShift and Talend - Experience in data warehousing is a plus - Knowledge of Agile software development and working collaboratively with business partners - Strong communication skills and ability to translate technical concepts into business requirements Preferred Qualifications: - Hands-on experience with AWS RedShift, Talend, and other AWS services - Proficiency in SQL, data integration tools, and scripting languages - Understanding of DevOps practices and tools - Experience with JIRA, Confluence, and product-centric organizations - Knowledge of cloud architecture, cybersecurity, and IT General Controls (ITGC) Work location: Hyderabad, India Work pattern: Full-time role Work mode: Hybrid Additional Information: - Any additional information specific to the job or work environment will be communicated as needed.,

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The Position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate with 6-8 years of experience will possess strong technical skills, an eagerness to learn, and a keen interest in the three key pillars that our team supports, i.e. Financial Crime, Financial Risk, and Compliance technology transformation. You should also be able to work collaboratively in a fast-paced environment and have an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: - Ingestion and provisioning of raw datasets, enriched tables, and/or curated, re-usable data assets to enable a variety of use cases. - Driving improvements in the reliability and frequency of data ingestion including increasing real-time coverage. - Support and enhancement of data ingestion infrastructure and pipelines. - Designing and implementing data pipelines that will collect data from disparate sources across the enterprise, and from external sources and deliver it to our data platform. - Building Extract, Transform, and Load (ETL) workflows, using both advanced data manipulation tools and programmatically manipulating data throughout our data flows, ensuring data is available at each stage in the data flow, and in the form needed for each system, service, and customer along said data flow. - Identifying and onboarding data sources using existing schemas and where required, conducting exploratory data analysis to investigate and provide solutions. - Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities. Core/Must-Have Skills: - 3-8 years of expertise in designing and implementing data warehouses, data lakes using Oracle Tech Stack (ETL: ODI, SSIS, DB: PLSQL and AWS Redshift). - At least 4+ years of experience in managing data extraction, transformation, and loading various sources using Oracle Data Integrator with exposure to other tools like SSIS. - At least 4+ years of experience in Database Design and Dimension modeling using Oracle PLSQL, Microsoft SQL Server. - Experience in developing ETL processes, including ETL control tables, error logging, auditing, data quality, etc. Should be able to implement reusability, parameterization workflow design, etc. - Advanced working SQL knowledge and experience working with relational and NoSQL databases as well as working familiarity with a variety of databases (Oracle, SQL Server, Neo4J). - Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems. - Expertise in data modeling and DB Design with skills in performance tuning. - Experience with OLAP, OLTP databases, and data structuring/modeling with an understanding of key data points.
- Experience building and optimizing data pipelines on Azure Databricks or AWS glue or Oracle cloud. - Create and Support ETL Pipelines and table schemas to facilitate the accommodation of new and existing data sources for the Lakehouse. - Experience with data visualization (Power BI/Tableau) and SSRS. Good to Have: - Experience working in Financial Crime, Financial Risk, and Compliance technology transformation domains. - Certification in any cloud tech stack preferred Microsoft Azure. - In-depth knowledge and hands-on experience with data engineering, Data Warehousing, and Delta Lake on-prem (Oracle RDBMS, Microsoft SQL Server) and cloud (Azure or AWS or Oracle cloud). - Ability to script (Bash, Azure CLI), Code (Python, C#), query (SQL, PLSQL, T-SQL) coupled with software versioning control systems (e.g. GitHub) AND ci/cd systems. - Design and development of systems for the maintenance of the Azure/AWS Lakehouse, ETL process, business Intelligence, and data ingestion pipelines for AI/ML use cases.,

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our client's challenges of today and tomorrow, informed and validated by science and data, superpowered by creativity and design, all underpinned by technology created with purpose. Your role involves having IT experience with a minimum of 5+ years in creating data warehouses, data lakes, ETL/ELT, data pipelines on cloud. You should have experience in data pipeline implementation with cloud providers such as AWS, Azure, GCP, preferably in the Life Sciences Domain. Experience with cloud storage, cloud database, cloud Data Warehousing, and Data Lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, S3 is essential. You should also be familiar with cloud data integration services for structured, semi-structured, and unstructured data like Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, Dataproc. Good knowledge of Infra capacity sizing, costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs performance and scaling is required. Your profile should demonstrate the ability to contribute to making architectural choices using various cloud services and solution methodologies. Expertise in programming using Python is a must. Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud is essential. Understanding networking, security, design principles, and best practices in the cloud is expected. Knowledge of IoT and real-time streaming would be an added advantage. You will be leading architectural/technical discussions with clients and should possess excellent communication and presentation skills. At Capgemini, we recognize the significance of flexible work arrangements to provide support. Whether it's remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. Our mission is centered on your career growth, offering an array of career growth programs and diverse professions crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI. Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. With a responsible and diverse group of over 340,000 team members in more than 50 countries, Capgemini has a strong heritage of over 55 years. Clients trust Capgemini to unlock the value of technology to address the entire breadth of their business needs, delivering end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by market-leading capabilities in AI, Generative AI, cloud, and data, combined with deep industry expertise and a partner ecosystem.,

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You have 5 to 7 years of experience and are skilled in Python, PySpark, and SQL. As a Data Engineer at AceNet Consulting, you will design, develop, and maintain data pipelines using these technologies. You should have hands-on experience with cloud data platforms and data modeling. Proficiency in tools like Git and GitHub is required, along with strong knowledge of query performance tuning. The ideal candidate for this role has at least 5 years of experience in a complex data ecosystem, working in an agile environment. Experience with cloud data platforms such as AWS Redshift and Databricks is a plus. Problem-solving and communication skills are essential for this position. Joining AceNet Consulting offers you the opportunity to work on transformative projects with leading global firms, continuous investment in your professional development, competitive compensation and benefits, ESOPs, and international assignments. The company values a supportive environment, work-life balance, and employee well-being. AceNet Consulting fosters an open culture that encourages diverse perspectives, transparent communication, and rewards contributions. If you meet the qualifications mentioned above and are passionate about technology, thrive in a fast-paced environment, and want to be part of a dynamic team, submit your resume to apply for this Data Engineer position at AceNet Consulting.,

Posted 1 month ago

Apply

10.0 - 14.0 years

0 Lacs

Vadodara, Gujarat

On-site

As a Lead Data Engineer at Rearc, you will play a crucial role in establishing and maintaining technical excellence within our data engineering team. Your extensive experience in data architecture, ETL processes, and data modeling will be key in optimizing data workflows for efficiency, scalability, and reliability. Collaborating closely with cross-functional teams, you will design and implement robust data solutions that align with business objectives and adhere to best practices in data management. Building strong partnerships with technical teams and stakeholders is essential as you drive data-driven initiatives and ensure their successful implementation. With over 10 years of experience in data engineering or related fields, you bring a wealth of expertise in managing and optimizing data pipelines and architectures. Your proficiency in Java and/or Python, along with experience in data pipeline orchestration using platforms like Airflow, Databricks, DBT, or AWS Glue, will be invaluable. Hands-on experience with data analysis tools and libraries such as Pyspark, NumPy, Pandas, or Dask is required, while proficiency with Spark and Databricks is highly desirable. Your proven track record of leading complex data engineering projects, coupled with hands-on experience in ETL processes, data warehousing, and data modeling tools, enables you to deliver efficient and robust data pipelines. You possess in-depth knowledge of data integration tools and best practices, as well as a strong understanding of cloud-based data services and technologies like AWS Redshift, Azure Synapse Analytics, and Google BigQuery. Your strategic and analytical skills will enable you to solve intricate data challenges and drive data-driven decision-making. In this role, you will collaborate with stakeholders to understand data requirements and challenges, implement data solutions with a DataOps mindset using modern tools and frameworks, lead data engineering projects, mentor junior team members, and promote knowledge sharing through technical blogs and articles. Your exceptional communication and interpersonal skills will facilitate collaboration with cross-functional teams and effective stakeholder engagement at all levels. At Rearc, we empower engineers to build innovative products and experiences by providing them with the best tools possible. If you are a cloud professional with a passion for problem-solving and a desire to make a difference, join us in our mission to solve problems and drive innovation in the field of data engineering.,

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

We are seeking a skilled and experienced Data Engineer with a minimum of 5 years of expertise in data engineering and data migration projects. The ideal candidate has strong proficiency in SQL, Python, data modeling, data warehousing, and ETL pipeline development, along with hands-on experience with big data tools such as Hadoop and Spark and familiarity with AWS services including Redshift, S3, Glue, EMR, and Lambda. This position offers an opportunity to contribute to large-scale data solutions that drive data-informed decision-making and operational efficiency.

As a Data Engineer, your responsibilities will include designing, building, and maintaining scalable data pipelines and ETL processes; developing and optimizing data models and data warehouse architectures; and implementing and managing big data technologies and cloud-based data solutions. You will perform data migration, transformation, and integration from multiple sources, collaborate with cross-functional teams to understand data requirements, and ensure data quality, consistency, and security across all data pipelines and storage systems. You will also be responsible for optimizing performance and managing cost-efficient AWS cloud resources.

Basic qualifications include a Master's degree in Computer Science, Engineering, Analytics, Mathematics, Statistics, IT, or a related field, and a minimum of 5 years of hands-on experience in data engineering and data migration projects. Proficiency in SQL and Python for data processing and analysis is required, along with a strong background in data modeling, data warehousing, and building data pipelines. The ideal candidate has practical experience with big data technologies such as Hadoop and Spark, expertise in AWS services such as Redshift, S3, Glue, EMR, Kinesis, Firehose, Lambda, and IAM, and an understanding of ETL development best practices.

Preferred qualifications include knowledge of data security and privacy best practices, experience with DevOps and CI/CD practices for data workflows, familiarity with data lake architectures and real-time data streaming, strong problem-solving abilities, attention to detail, excellent verbal and written communication skills, and the ability to work both independently and collaboratively. Desirable skills include experience with orchestration tools such as Airflow or Step Functions, exposure to BI/visualization tools such as QuickSight, Tableau, or Power BI, and an understanding of data governance and compliance standards.
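
A minimal sketch of one migration step of the kind described above: loading staged Parquet files from S3 into Redshift with a COPY command via psycopg2. The connection details, schema, table, S3 path, and IAM role ARN are hypothetical:

    # Illustrative S3-to-Redshift load: COPY staged Parquet files into a target table.
    # All identifiers below are placeholders for a real migration configuration.
    import psycopg2

    COPY_SQL = """
        COPY analytics.orders
        FROM 's3://example-bucket/curated/daily_orders/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
        FORMAT AS PARQUET;
    """

    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="***",
    )
    try:
        # The connection context manager commits the COPY on successful exit.
        with conn, conn.cursor() as cur:
            cur.execute(COPY_SQL)
    finally:
        conn.close()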

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

As a Data Engineer based in Pune with 6-9 years of experience, you should have a minimum of 5 years of experience as a Data Engineer, along with hands-on expertise in Star/Snowflake schema design, data modeling, data pipelining, and MLOps. Proficiency in data warehouse technologies such as Snowflake and AWS Redshift, and in AWS data pipeline services (Lambda, AWS Glue, Step Functions, etc.), is crucial, as are strong skills in SQL and at least one major programming language (Python or Java). You should be experienced with data analysis tools such as Looker or Tableau and familiar with Pandas, NumPy, scikit-learn, and Jupyter notebooks. Knowledge of Git, GitHub, and JIRA is preferred. The ability to identify and resolve data quality issues, provide end-to-end data platform support, and work effectively as an individual contributor is essential.

You will need strong analytical and problem-solving skills with meticulous attention to detail. A positive, can-do attitude and a focus on simplifying tasks and building reusable components will be highly valued. You should be able to assess the suitability of new technologies for solving business problems and build strong relationships with stakeholders.

Your responsibilities will involve designing, developing, and maintaining an accurate, secure, available, and fast data platform. You will engineer efficient, adaptable, and scalable data pipelines, integrate various data sources, create standardized datasets, and ensure product changes align well with the data platform. Collaborating with cross-functional teams, understanding their challenges, and providing data-driven solutions will be key aspects of the role. Overall, your expertise in data engineering, schema design, data modeling, and data warehousing will be vital to the success of the data platform and the organization's goals.
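
A minimal sketch of the kind of data quality check mentioned above, using Pandas; the input file and column names (orders.parquet, order_id, amount) are hypothetical:

    # Illustrative data-quality gate: fail fast if a dataset has duplicates, nulls,
    # or invalid values before it is published to the data platform.
    import pandas as pd

    df = pd.read_parquet("orders.parquet")

    checks = {
        "duplicate_order_ids": int(df["order_id"].duplicated().sum()),
        "null_amounts": int(df["amount"].isna().sum()),
        "negative_amounts": int((df["amount"] < 0).sum()),
    }

    failed = {name: count for name, count in checks.items() if count > 0}
    if failed:
        raise ValueError(f"Data quality checks failed: {failed}")
    print("All data quality checks passed.")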

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

You should have strong knowledge of AWS services including S3, AWS DMS (Database Migration Service), and AWS Redshift Serverless. Experience in setting up and managing data pipelines with AWS DMS is required, and proficiency in creating and managing data storage solutions on AWS S3 is a key aspect of this role. You should also be proficient in working with relational databases, particularly PostgreSQL, Microsoft SQL Server, and Oracle, and have experience setting up and managing data warehouses, particularly AWS Redshift Serverless.

Your responsibilities will include applying analytical and problem-solving skills to analyze and interpret complex data sets. You should have experience identifying and resolving data integration issues such as inconsistencies or discrepancies, and strong troubleshooting skills to resolve data integration and migration problems effectively.

Soft skills are also essential. You should work collaboratively with database administrators and other stakeholders to ensure integration solutions meet business requirements, and communicate clearly when documenting data integration processes, including data source definitions, data flow diagrams, and system interactions. You are expected to participate in design reviews, provide input on data integration plans, and stay current with data integration tools and technologies, recommending upgrades when necessary.

Knowledge of data security and privacy regulations is crucial, including experience ensuring adherence to data security and privacy standards during data integration processes. AWS certifications such as AWS Certified Solutions Architect or AWS Certified Database - Specialty are a plus.
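
A minimal sketch of the AWS DMS and Redshift Serverless workflow described above, using boto3: starting an existing replication task and then submitting a validation query through the Redshift Data API. The task ARN, region, workgroup, database, and table names are hypothetical:

    # Illustrative migration/validation step: kick off a pre-configured DMS task,
    # then run a row-count check against Redshift Serverless via the Data API.
    import boto3

    dms = boto3.client("dms", region_name="us-east-1")
    dms.start_replication_task(
        ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK",
        StartReplicationTaskType="start-replication",
    )

    redshift_data = boto3.client("redshift-data", region_name="us-east-1")
    response = redshift_data.execute_statement(
        WorkgroupName="example-serverless-workgroup",
        Database="analytics",
        Sql="SELECT COUNT(*) FROM public.orders;",
    )
    print("Validation query submitted, statement id:", response["Id"])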

Posted 1 month ago

Apply