
567 Glue Jobs - Page 8

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

13.0 - 17.0 years

0 Lacs

Pune, Maharashtra

On-site

You are an experienced professional with over 13 years of experience engaging with clients and translating their business needs into technical solutions, with a proven track record of working with cloud services on platforms like AWS, Azure, or GCP. Your expertise lies in utilizing AWS data services such as Redshift, Glue, Athena, and SageMaker. Additionally, you have a strong background in generative AI frameworks like GANs and VAEs and possess advanced skills in Python, including libraries like Pandas, NumPy, Scikit-learn, and TensorFlow. Your role involves designing and implementing advanced AI solutions, focusing on areas like NLP and innovative ML algorithms. You are proficient in developing and deploying NLP models and have experience in enhancing machine learning algorithms. Your knowledge extends to MLOps principles, best practices, and the development and maintenance of CI/CD pipelines. Your problem-solving skills enable you to analyze complex data sets and derive actionable insights, and your excellent communication skills allow you to convey technical concepts to non-technical stakeholders effectively.

In this role, you will be responsible for understanding clients' business use cases and technical requirements and translating them into technical designs that elegantly meet their needs. You will be instrumental in mapping decisions to requirements, identifying optimal solutions, and setting guidelines for NFR (non-functional requirement) considerations during project implementation. Your tasks will include writing and reviewing design documents, reviewing architecture and design aspects, and ensuring adherence to best practices. Conducting POCs to validate suggested designs and technologies will also be part of your responsibilities.

To excel in this position, you should hold a bachelor's or master's degree in Computer Science, Information Technology, or a related field. Relevant certifications in AI, cloud technologies, or related areas would be advantageous. Your ability to innovate, design, and implement cutting-edge solutions will be crucial in this role, as will your skill in technology integration and problem resolution through systematic analysis.

Posted 3 weeks ago

Apply

10.0 - 17.0 years

20 - 27 Lacs

Hyderabad

Work from Office

Required Skills and Qualifications: Extensive experience in data migration is a must (Teradata to Redshift preferred). Extensive testing experience with SQL and Unix/Linux scripting is a must. Extensive experience testing cloud and on-prem ETL tools (e.g., Ab Initio, Informatica, SSIS, DataStage, Alteryx, Glue). Extensive experience with DBMSs such as Oracle, Teradata, SQL Server, DB2, Redshift, Postgres, and Sybase. Extensive experience with Python scripting and AWS/cloud technologies, including Athena, EMR, and Redshift. Experienced in large-scale application testing across cloud and on-prem data warehouses, data lakes, and data science platforms, including multi-year, large-scale projects. Expert technical skills with hands-on testing experience using SQL queries, covering both data migration and data transformation testing. API/REST Assured automation, building reusable frameworks, and strong technical expertise/acumen. Java/JavaScript: core Java, integration, and API work. Functional/UI/Selenium: BDD/Cucumber, SpecFlow, data validation, Kafka, Big Data, plus automation experience using Cypress. AWS/Cloud: Jenkins, GitLab, EC2, S3, building Jenkins CI/CD pipelines, Sauce Labs.
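A row-count reconciliation like the sketch below is a typical first validation step in the Teradata-to-Redshift migration testing this role describes. This is a minimal sketch, not the employer's actual framework: the hosts, credentials, schema, and table list are hypothetical, and the `teradatasql` (official Teradata driver) and `psycopg2` (Redshift speaks the PostgreSQL protocol) client choices are assumptions.

```python
# Minimal migration-validation sketch: compare per-table row counts between
# a Teradata source and a Redshift target. All connection details and the
# table list are hypothetical placeholders.
import psycopg2      # Redshift is PostgreSQL-wire-compatible
import teradatasql   # official Teradata Python driver

TABLES = ["customers", "orders", "payments"]  # hypothetical

src = teradatasql.connect(host="td-prod.example.com", user="qa", password="***")
tgt = psycopg2.connect(host="dw.example.com", port=5439, dbname="dw",
                       user="qa", password="***")

for table in TABLES:
    src_cur, tgt_cur = src.cursor(), tgt.cursor()
    src_cur.execute(f"SELECT COUNT(*) FROM staging.{table}")
    tgt_cur.execute(f"SELECT COUNT(*) FROM staging.{table}")
    src_n, tgt_n = src_cur.fetchone()[0], tgt_cur.fetchone()[0]
    print(f"{table}: source={src_n} target={tgt_n} "
          f"{'OK' if src_n == tgt_n else 'MISMATCH'}")

src.close()
tgt.close()
```

In practice this first pass is usually followed by checksum or column-level comparisons, which the same loop structure can accommodate.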

Posted 3 weeks ago

Apply

5.0 - 10.0 years

14 - 24 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Greetings! We have multiple openings for AWS Databricks and AWS Glue. Interested candidates can fill in the relevant form: AWS Databricks: https://forms.office.com/r/VmydKh6H8R | AWS Glue: https://forms.office.com/r/afGKQhARkm

AWS Glue: Primary skills: AWS Glue, Python, PySpark, Airflow. Secondary skills: RDS, Redshift, Snowflake. Snowflake: understanding of its architecture, data ingestion, and query optimization (existing data warehouse). AWS services: extensive experience with AWS Glue, AWS Lambda, Amazon EMR, Amazon S3, and Apache Airflow for building data pipelines. Python & SQL: strong programming skills for data transformation and querying. Data warehousing: experience in managing existing Snowflake data warehouses and optimizing performance.

AWS Databricks: Primarily looking for a Data Engineer with expertise in processing data pipelines using Databricks, PySpark, and SQL on cloud distributions like AWS. Must have: AWS, Databricks. Good to have: PySpark, Snowflake, Talend. Requirements: expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, or Cloudera; very proficient in large-scale data operations using Databricks and overall very comfortable using Python; familiarity with AWS compute, storage, and IAM concepts; experience working with S3 data lakes as the storage tier; any ETL background (Talend, AWS Glue, etc.) is a plus but not required; cloud warehouse experience (Snowflake, etc.) is a huge plus. Carefully evaluates alternative risks and solutions before taking action and optimizes the use of all available resources; develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit. Skills: hands-on experience with Databricks, Spark SQL, and the AWS cloud platform, especially S3, EMR, Databricks, and Cloudera; experience with shell scripting; exceptionally strong analytical and problem-solving skills; relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses; strong experience with relational databases and data access methods, especially SQL; excellent collaboration and cross-functional leadership skills; excellent written and verbal communication skills; ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment; ability to leverage data assets to respond to complex questions that require timely answers; working knowledge of migrating relational and dimensional databases to the AWS cloud platform. Mandatory skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.
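For context on the primary skill set, the boilerplate of an AWS Glue PySpark job looks like the sketch below. The catalog database, table name, and output path are hypothetical placeholders; the `awsglue` imports and `Job` lifecycle calls are the standard Glue job scaffolding.

```python
# Skeleton of an AWS Glue PySpark job. Database/table names and the output
# path are hypothetical placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw", table_name="orders"
)

# Transform with plain Spark, then write back to S3 as Parquet
df = orders.toDF().filter("order_status = 'COMPLETE'")
df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```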

Posted 3 weeks ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Kochi, Bengaluru

Work from Office

Job Summary: We are seeking a highly skilled and motivated Machine Learning Engineer with a strong foundation in programming and machine learning, hands-on experience with AWS Machine Learning services (especially SageMaker), and a solid understanding of Data Engineering and MLOps practices. You will be responsible for designing, developing, deploying, and maintaining scalable ML solutions in a cloud-native environment. Key Responsibilities: • Design and implement machine learning models and pipelines using AWS SageMaker and related services. • Develop and maintain robust data pipelines for training and inference workflows. • Collaborate with data scientists, engineers, and product teams to translate business requirements into ML solutions. • Implement MLOps best practices including CI/CD for ML, model versioning, monitoring, and retraining strategies. • Optimize model performance and ensure scalability and reliability in production environments. • Monitor deployed models for drift, performance degradation, and anomalies. • Document processes, architectures, and workflows for reproducibility and compliance. Required Skills & Qualifications: • Strong programming skills in Python and familiarity with ML libraries (e.g., scikit-learn, TensorFlow, PyTorch). • Solid understanding of machine learning algorithms, model evaluation, and tuning. • Hands-on experience with AWS ML services, especially SageMaker, S3, Lambda, Step Functions, and CloudWatch. • Experience with data engineering tools (e.g., Apache Airflow, Spark, Glue) and workflow orchestration. • Proficiency in MLOps tools and practices (e.g., MLflow, Kubeflow, CI/CD pipelines, Docker, Kubernetes). • Familiarity with monitoring tools and logging frameworks for ML systems. • Excellent problem-solving and communication skills. Preferred Qualifications: • AWS Certification (e.g., AWS Certified Machine Learning - Specialty). • Experience with real-time inference and streaming data. • Knowledge of data governance, security, and compliance in ML systems.
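As an illustration of the SageMaker workflow this role centres on, the sketch below trains and deploys a scikit-learn model with the SageMaker Python SDK. It is a minimal sketch, not the employer's pipeline: the IAM role ARN, `train.py` entry-point script, S3 URIs, and instance types are all assumptions.

```python
# Hypothetical train-and-deploy flow with the SageMaker Python SDK.
# Role ARN, entry-point script, and S3 URIs are placeholders.
import sagemaker
from sagemaker.sklearn import SKLearn

session = sagemaker.Session()
estimator = SKLearn(
    entry_point="train.py",               # your training script (assumed)
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_type="ml.m5.xlarge",
    framework_version="1.2-1",
    sagemaker_session=session,
)
estimator.fit({"train": "s3://example-bucket/train/"})

# Deploy a real-time endpoint and invoke it once
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.m5.large")
print(predictor.predict([[5.1, 3.5, 1.4, 0.2]]))
predictor.delete_endpoint()  # clean up to stop billing
```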

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Noida

Hybrid

Position Title: Specialist - Data Science. Business Title: Engineer II - Data Science. We are seeking a highly motivated and enthusiastic Senior Data Scientist with over 8 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation. Key Responsibilities: Develop and implement machine learning models and algorithms. Work closely with project stakeholders to understand requirements and translate them into deliverables. Utilize statistical and machine learning techniques to analyze and interpret complex data sets. Stay updated with the latest advancements in AI/ML technologies and methodologies. Collaborate with cross-functional teams to support various AI/ML initiatives. Qualifications: Bachelor's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field. Strong understanding of machine learning, deep learning, and Generative AI concepts. Preferred Skills: Experience in machine learning techniques such as regression, classification, predictive modeling, clustering, computer vision (YOLO), the deep learning stack, and NLP using Python. Strong knowledge of and experience in Generative AI/LLM-based development. Strong experience working with key LLM model APIs (e.g., AWS Bedrock or Azure OpenAI/OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex, or RAG). Experience with cloud infrastructure for AI/Generative AI/ML on AWS or Azure. Expertise in building enterprise-grade, secure data ingestion pipelines (ETL Glue jobs, QuickSight) for unstructured data, including indexing, search, and advanced retrieval patterns. Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets. Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval. Experience with RAG concepts and fundamentals (vector DBs, AWS OpenSearch, semantic search, etc.) and expertise in implementing RAG systems that combine knowledge bases with Generative AI models. Knowledge of training and fine-tuning foundation models (Anthropic Claude, Mistral, etc.), including multimodal inputs and outputs. Proficiency in Python, TypeScript, NodeJS, ReactJS (and equivalents) and frameworks (e.g., pandas, NumPy, scikit-learn), plus Glue crawlers and ETL. Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight). Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch). Experience with version control systems (e.g., Git, CodeCommit). Good-to-have skills: knowledge of and experience in building knowledge graphs in production; understanding of multi-agent systems and their applications in complex problem-solving scenarios. ** Title/designation of role is as per global team job profiling. Local titles are adjusted for understanding of regional candidates applying for jobs in India. Designations are dependent on years of relevant work experience and performance during interviews.
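To make the chunking-and-embedding requirement concrete, here is a minimal, library-agnostic sketch of semantic search over chunked text. The `embed` function is a toy stand-in so the example runs end to end; in the stack described above it would be replaced by a real embedding model call (Bedrock, OpenAI, etc.).

```python
# Minimal semantic-search sketch: chunk a document, embed the chunks, and
# rank them against a query by cosine similarity.
import numpy as np

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Fixed-size character chunking with overlap, a common baseline."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def embed(texts: list[str]) -> np.ndarray:
    """Toy stand-in: hash words into a fixed-size bag-of-words vector.
    Replace with a real embedding model for production use."""
    vecs = np.zeros((len(texts), 256))
    for i, t in enumerate(texts):
        for w in t.lower().split():
            vecs[i, hash(w) % 256] += 1.0
    return vecs

def search(query: str, chunks: list[str], k: int = 3) -> list[str]:
    vecs = embed(chunks)                  # (n_chunks, dim)
    q = embed([query])[0]                 # (dim,)
    # Cosine similarity of the query against every chunk
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

doc = "Glue crawlers catalog data. Embeddings power semantic search over chunks."
print(search("semantic search", chunk(doc, size=40, overlap=10)))
```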

Posted 3 weeks ago

Apply

6.0 - 7.0 years

27 - 35 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role & responsibilities: Provide technical leadership and mentorship to data engineering teams. Architect, design, and deploy scalable, secure, and high-performance data pipelines. Collaborate with stakeholders, clients, and cross-functional teams to deliver end-to-end data solutions. Drive technical strategy and implementation plans in alignment with business needs. Oversee project execution using tools like JIRA, ensuring timely delivery and adherence to best practices. Implement and maintain CI/CD pipelines and automation tools to streamline development workflows. Promote best practices in data engineering and AWS implementations across the team. Preferred candidate profile: Strong hands-on expertise in Python, PySpark, and Spark architecture, including performance tuning and optimization. Advanced proficiency in SQL and experience in writing optimized stored procedures. In-depth knowledge of the AWS data engineering stack, including AWS Glue, Lambda, API Gateway, EMR, S3, Redshift, and Athena. Experience with Infrastructure as Code (IaC) using CloudFormation and Terraform. Familiarity with Unix/Linux scripting and system administration is a plus. Proven ability to design and deploy robust, production-grade data solutions.
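Typical of the Spark performance-tuning work this profile calls for is replacing a shuffle-heavy join with a broadcast join. A minimal sketch with hypothetical table paths:

```python
# Spark tuning sketch: broadcast a small dimension table so the large fact
# table is not shuffled across the cluster. Paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

facts = spark.read.parquet("s3://example-bucket/facts/")  # large table
dims = spark.read.parquet("s3://example-bucket/dims/")    # small lookup table

# Broadcast join: ships `dims` to every executor instead of shuffling `facts`
joined = facts.join(broadcast(dims), on="dim_id")

# Control partition count (and hence output file sizes) before writing
joined.repartition(200).write.mode("overwrite") \
      .parquet("s3://example-bucket/joined/")
```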

Posted 3 weeks ago

Apply

3.0 - 5.0 years

10 - 15 Lacs

Mumbai, Aurangabad

Work from Office

Preference for immediate joiners. Job Summary: We are seeking a skilled and motivated Data Developer with 3 to 5 years of hands-on experience in designing, developing, and maintaining scalable data solutions. The ideal candidate will work closely with data architects, data analysts, and application developers to build efficient data pipelines, transform data, and support data integration across various platforms. Key Responsibilities: Design, develop, and maintain ETL/ELT pipelines to ingest, transform, and load data from various sources (structured and unstructured). Develop and optimize SQL queries, stored procedures, views, and functions for data analysis and reporting. Work with data warehousing technologies (e.g., Snowflake, Redshift, BigQuery, Azure Synapse) to support business intelligence solutions. Collaborate with data engineers and analysts to implement robust data models and schemas for analytics. Ensure data quality, consistency, and accuracy through data validation, testing, and monitoring. Implement data security, compliance, and governance protocols in alignment with organizational policies. Maintain documentation related to data sources, data flows, and business rules. Participate in code reviews, sprint planning, and agile development practices. Technical Skills Required: Languages & Tools: SQL (advanced proficiency required), Python or Scala for data processing, shell scripting (Bash, PowerShell). ETL Tools / Data Integration: Apache NiFi, Talend, Informatica, Azure Data Factory, SSIS, or equivalent. Data Warehousing & Databases: Snowflake, Amazon Redshift, Google BigQuery, Azure Synapse; SQL Server, PostgreSQL, Oracle, or MySQL. Cloud Platforms (at least one): AWS (Glue, S3, Redshift, Lambda), Azure (ADF, Blob Storage, Synapse), GCP (Dataflow, BigQuery, Cloud Storage). Big Data & Streaming (nice to have): Apache Spark, Databricks, Kafka, Hadoop ecosystem. Version Control & DevOps: Git, Bitbucket, CI/CD pipelines (Jenkins, GitHub Actions). Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. 3-5 years of professional experience as a Data Developer or Data Engineer. Strong problem-solving skills and the ability to work both independently and in a team environment. Experience working in Agile/Scrum teams is a plus. Excellent communication and documentation skills. Preferred Certifications (optional): Microsoft Certified: Azure Data Engineer Associate; AWS Certified Data Analytics - Specialty; Google Cloud Professional Data Engineer.
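One load pattern behind the ETL/ELT responsibilities listed above is the staging-table upsert used on warehouses such as Redshift, which historically lacked MERGE. A hedged sketch, with hypothetical connection details and table names:

```python
# Classic staging-table upsert for Redshift-style warehouses: delete rows
# that will be replaced, then insert the fresh staging data, inside one
# explicit transaction. All names and credentials are placeholders.
import psycopg2

UPSERT_SQL = """
BEGIN;
DELETE FROM analytics.orders
USING staging.orders s
WHERE analytics.orders.order_id = s.order_id;
INSERT INTO analytics.orders SELECT * FROM staging.orders;
COMMIT;
"""

conn = psycopg2.connect(host="dw.example.com", port=5439, dbname="dw",
                        user="etl", password="***")
conn.autocommit = True  # let the script's own BEGIN/COMMIT drive the txn
with conn.cursor() as cur:
    cur.execute(UPSERT_SQL)
conn.close()
```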

Posted 3 weeks ago

Apply

10.0 - 15.0 years

22 - 37 Lacs

Bengaluru

Work from Office

Who We Are: At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role: Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As an AWS Data Engineer at Kyndryl, you will be responsible for designing, building, and maintaining scalable, secure, and high-performing data pipelines using AWS cloud-native services. This role requires extensive hands-on experience with both real-time and batch data processing, expertise in cloud-based ETL/ELT architectures, and a commitment to delivering clean, reliable, and well-modeled datasets.

Key Responsibilities: Design and develop scalable, secure, and fault-tolerant data pipelines utilizing AWS services such as Glue, Lambda, Kinesis, S3, EMR, Step Functions, and Athena. Create and maintain ETL/ELT workflows to support both structured and unstructured data ingestion from various sources, including RDBMS, APIs, SFTP, and streaming. Optimize data pipelines for performance, scalability, and cost-efficiency. Develop and manage data models, data lakes, and data warehouses on AWS platforms (e.g., Redshift, Lake Formation). Collaborate with DevOps teams to implement CI/CD and infrastructure as code (IaC) for data pipelines using CloudFormation or Terraform. Ensure data quality, validation, lineage, and governance through tools such as AWS Glue Data Catalog and AWS Lake Formation. Work in concert with data scientists, analysts, and application teams to deliver data-driven solutions. Monitor, troubleshoot, and resolve issues in production pipelines. Stay abreast of AWS advancements and recommend improvements where applicable.

Your Future at Kyndryl: Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are: You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Skills and Experience: Bachelor's or master's degree in Computer Science, Engineering, or a related field. Over 8 years of experience in data engineering. More than 3 years of experience with the AWS data ecosystem. Strong experience with Java, PySpark, SQL, and Python. Proficiency in AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, CloudWatch, Athena, Step Functions. Familiarity with data modelling concepts, dimensional models, and data lake architectures. Experience with CI/CD, GitHub Actions, CloudFormation/Terraform. Understanding of data governance, privacy, and security best practices. Strong problem-solving and communication skills.

Preferred Skills and Experience: Experience working as a Data Engineer and/or in cloud modernization. Experience with AWS Lake Formation and Data Catalog for metadata management. Knowledge of Databricks, Snowflake, or BigQuery for data analytics. AWS Certified Data Engineer or AWS Certified Solutions Architect is a plus. Strong problem-solving and analytical thinking. Excellent communication and collaboration abilities. Ability to work independently and in agile teams. A proactive approach to identifying and addressing challenges in data workflows.

Being You: Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect: With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred! If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
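For a taste of the streaming side of this stack, here is a minimal boto3 sketch that publishes records to a Kinesis stream. The stream name, region, and payload are hypothetical placeholders; `put_record` is the standard Kinesis Data Streams API.

```python
# Minimal Kinesis producer sketch using boto3. Stream name, region, and
# payload are hypothetical placeholders.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")

event = {"order_id": 42, "status": "SHIPPED"}
kinesis.put_record(
    StreamName="orders-stream",           # hypothetical stream
    Data=json.dumps(event).encode(),
    PartitionKey=str(event["order_id"]),  # controls shard routing
)
```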

Posted 3 weeks ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Kolkata, Hyderabad, Bengaluru

Work from Office

Responsibilities: Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka. Integrate structured and unstructured data from various data sources into data lakes and data warehouses. Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift). Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness. Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms. Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost. Develop application programs using Big Data technologies like Apache Hadoop and Apache Spark with appropriate cloud-based services like Amazon AWS. Build data pipelines by building ETL (Extract-Transform-Load) processes. Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data. Responsible for analysing business and functional requirements, which involves a review of existing system configurations and operating methodologies as well as understanding evolving business needs. Analyse requirements/user stories at business meetings, strategize the impact of requirements on different platforms/applications, and convert business requirements into technical requirements. Participate in design reviews to provide input on functional requirements, product designs, schedules, and/or potential problems. Understand current application infrastructure and suggest cloud-based solutions that reduce operational cost and require minimal maintenance while providing high availability with improved security. Perform unit testing on modified software to ensure that new functionality works as expected while existing functionalities continue to work in the same way. Coordinate with release management and other supporting teams to deploy changes in the production environment. Qualifications we seek in you! Minimum Qualifications: Experience in designing and implementing data pipelines, building data applications, and data migration on AWS. Strong experience implementing data lakes using AWS services like Glue, Lambda, Step Functions, and Redshift. Experience with Databricks is an added advantage. Strong experience in Python and SQL. Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift. Advanced programming skills in Python for data processing and automation. Hands-on experience with Apache Spark for large-scale data processing. Experience with Apache Kafka for real-time data streaming and event processing. Proficiency in SQL for data querying and transformation. Strong understanding of security principles and best practices for cloud-based environments. Experience with monitoring tools and implementing proactive measures to ensure system availability and performance. Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment. Strong communication and collaboration skills to work effectively with cross-functional teams. Preferred Qualifications/Skills: Master's degree in Computer Science, Electronics, or Electrical Engineering. AWS Data Engineering & Cloud certifications, Databricks certifications. Experience with multiple data integration technologies and cloud platforms. Knowledge of Change & Incident Management processes.
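As a sketch of the Spark-plus-Kafka pipeline work described above, here is a minimal Structured Streaming job that reads a Kafka topic and lands it on S3 as Parquet. The broker address, topic, and S3 paths are assumptions, and the Spark-Kafka connector package must be on the classpath.

```python
# Spark Structured Streaming sketch: read from Kafka, land on S3 as Parquet.
# Broker, topic, and S3 paths are hypothetical placeholders; requires the
# spark-sql-kafka connector on the cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-s3").getOrCreate()

stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load())

# Kafka values arrive as bytes; cast to string for downstream parsing
events = stream.selectExpr("CAST(value AS STRING) AS payload")

query = (events.writeStream.format("parquet")
         .option("path", "s3://example-bucket/raw/orders/")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
         .start())
query.awaitTermination()
```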

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

Tezo is a new-generation Digital & AI solutions provider with a history of creating remarkable outcomes for our customers. We bring exceptional experiences using cutting-edge analytics, data proficiency, technology, and digital excellence. Job Overview: The AWS Architect with Data Engineering skills will be responsible for designing, implementing, and managing scalable, robust, and secure cloud infrastructure and data solutions on AWS. This role requires a deep understanding of AWS services, data engineering best practices, and the ability to translate business requirements into effective technical solutions. Key Responsibilities: Architecture Design: Design and architect scalable, reliable, and secure AWS cloud infrastructure; develop and maintain architecture diagrams, documentation, and standards. Data Engineering: Design and implement ETL pipelines using AWS services such as Glue, Lambda, and Step Functions; build and manage data lakes and data warehouses using AWS services like S3, Redshift, and Athena; ensure data quality, data governance, and data security across all data platforms. AWS Services Management: Utilize a wide range of AWS services (EC2, S3, RDS, Lambda, DynamoDB, etc.) to support various workloads and applications; implement and manage CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy; monitor and optimize the performance, cost, and security of AWS resources. Collaboration and Communication: Work closely with cross-functional teams including software developers, data scientists, and business stakeholders; provide technical guidance and mentorship to team members on best practices in AWS and data engineering. Security and Compliance: Ensure that all cloud solutions follow security best practices and comply with industry standards and regulations; implement and manage IAM policies, roles, and access controls. Innovation and Improvement: Stay up to date with the latest AWS services, features, and best practices; continuously evaluate and improve existing systems, processes, and architectures.
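On the IAM side of this role, a minimal sketch of creating a least-privilege policy with boto3. The policy and bucket names are hypothetical placeholders; `create_policy` is the standard IAM API call.

```python
# Create a least-privilege S3 read-only policy with boto3. Policy and
# bucket names are hypothetical placeholders.
import json
import boto3

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-data-lake",    # bucket itself (ListBucket)
            "arn:aws:s3:::example-data-lake/*",  # objects within it
        ],
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="DataLakeReadOnly",
    PolicyDocument=json.dumps(policy_doc),
)
```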

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You will be joining Coders Brain Technology Pvt. Ltd., a global leader in services, digital, and business solutions. At Coders Brain, we partner with our clients to simplify, strengthen, and transform their businesses. We are committed to providing the highest levels of certainty and satisfaction through our comprehensive industry expertise and global network of innovation and delivery centers. As a Data Engineer with a minimum of 5 years of experience, you will be working remotely. Your role will involve collaborating with other developers to define and refine solutions and working closely with the business to deliver data and analytics projects. Your responsibilities will include data integration with tools such as Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda in the AWS Cloud environment. You should have strong real-life experience in Python development, especially in PySpark on AWS. Designing, developing, testing, deploying, maintaining, and improving data integration pipelines will be a key part of your role. Additionally, you should have experience with Python and its common libraries, Perl, Unix scripts, and analytical skills with databases. Proficiency in source control systems like Git and Bitbucket and continuous integration tools like Jenkins is required. Experience with continuous deployment (CI/CD), Databricks, Airflow, and Apache Spark will be beneficial. Knowledge of databases such as Oracle, SQL Server, PostgreSQL, Redshift, MySQL, or similar is essential, and exposure to ETL tools including Informatica is preferred. A degree in Computer Science, Computer Engineering, or Electrical Engineering is desired. If you are interested in this opportunity, click the apply button, or send your resume to prerna.jain@codersbrain.com or pooja.gupta@codersbrain.com.
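For the Airflow part of this stack, a minimal DAG sketch follows. The DAG id, schedule, and task body are assumptions; the imports and operator usage reflect the Airflow 2.x API.

```python
# Minimal Airflow 2.x DAG sketch: a daily pipeline with one Python task.
# DAG id, schedule, and task logic are hypothetical placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    print("pull from source, push to data lake")  # real logic goes here

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # `schedule_interval` on Airflow versions before 2.4
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load",
                   python_callable=extract_and_load)
```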

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Maharashtra

On-site

If you are a software engineering leader ready to take the reins and drive impact, we've got an opportunity just for you. As a Director of Software Engineering at JPMorgan Chase within the Asset and Wealth Management LOB, you lead a data technology area and drive impact within teams, technologies, and deliveries. Utilize your in-depth knowledge of software, applications, technical processes, and product management to drive multiple complex initiatives, while serving as a primary decision maker for your teams and a driver of engineering innovation and solution delivery. The current role focuses on delivering data solutions for some of the Wealth Management businesses. Job responsibilities: Leads engineering and delivery of data and analytics solutions. Makes decisions that influence teams' resources, budget, tactical operations, and the execution and implementation of processes and procedures. Carries governance accountability for coding decisions, control obligations, and measures of success such as cost of ownership and maintainability. Delivers technical solutions that can be leveraged across multiple businesses and domains. Influences and collaborates with peer leaders and senior stakeholders across the business, product, and technology teams. Champions the firm's culture of diversity, equity, inclusion, and respect. Required qualifications, capabilities, and skills: Experience managing data solutions across a large, global consumer community in the Financial Services domain. Experience hiring, developing, and leading cross-functional teams of technologists. Experience handling multiple, global stakeholders across business, technology, and product. Appreciation of the data product: modeling, sourcing, quality, lineage, discoverability, access management, visibility, purging, etc. Experience researching and upgrading to the latest technologies in the continuously evolving data ecosystem. Practical hybrid cloud-native experience, preferably AWS. Experience using current technologies such as GraphQL, Glue, Spark, Snowflake, SNS, SQS, Kinesis, Lambda, ECS, EventBridge, and QlikSense. Experience with Java and/or Python programming languages. Expertise in Computer Science, Computer Engineering, Mathematics, or a related technical field. Preferred qualifications, capabilities, and skills: Comfortable being hands-on as required to drive solutions and solve challenges for the team. Exposure to and appreciation of the continuously evolving data science space. Exposure to the Wealth Management business.

Posted 3 weeks ago

Apply

10.0 - 17.0 years

0 Lacs

Hyderabad, Telangana

On-site

We have an exciting opportunity for an ETL Data Architect position with an AI/ML-driven SaaS solution product company in Hyderabad. As an ETL Data Architect, you will play a crucial role in designing and implementing a robust data access layer to provide consistent data access over the underlying heterogeneous storage layer. You will also be responsible for developing and enforcing data governance policies to ensure data security, quality, and compliance across all systems. In this role, you will lead the architecture and design of data solutions that leverage the latest tech stack and AWS cloud services. Collaboration with product managers, tech leads, and cross-functional teams will be essential to align data strategy with business objectives. Additionally, you will oversee data performance optimization, scalability, and reliability of data systems while guiding and mentoring team members on data architecture, design, and problem-solving. The ideal candidate should have at least 10 years of experience in data-related roles, with a minimum of 5 years in a senior leadership position overseeing data architecture and infrastructure. A deep background in designing and implementing enterprise-level data infrastructure, preferably in a SaaS environment, is required. Extensive knowledge of data architecture principles, data governance frameworks, security protocols, and performance optimization techniques is essential. Hands-on experience with AWS services such as RDS, Redshift, S3, Glue, and DocumentDB, as well as other technologies like MongoDB and Snowflake, is highly desirable. Familiarity with big data technologies (e.g., Hadoop, Spark) and modern data warehousing solutions is a plus. Proficiency in at least one programming language (e.g., Node.js, Java, Golang, Python) is a must. Excellent communication skills are crucial in this role, with the ability to translate complex technical concepts for non-technical stakeholders. Proven leadership experience, including team management and cross-functional collaboration, is also required. A Bachelor's degree in Computer Science, Information Systems, or a related field is necessary, with a Master's degree preferred. Preferred qualifications include experience with Generative AI and Large Language Models (LLMs) and their applications in data solutions, as well as familiarity with financial back-office operations and the FinTech domain. Stay updated on emerging trends in data technology, particularly in AI/ML applications for finance. Industry: IT Services and IT Consulting.

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions, from large-scale models onward, tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Inviting applications for the role of Consultant - AWS! Responsibilities: Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift). Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness. Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms. Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost. Develop application programs using Big Data technologies like Apache Hadoop and Apache Spark with appropriate cloud-based services like Amazon AWS. Build data pipelines by building ETL (Extract-Transform-Load) processes. Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data. Responsible for analysing business and functional requirements, which involves a review of existing system configurations and operating methodologies as well as understanding evolving business needs. Analyse requirements/user stories at business meetings, strategize the impact of requirements on different platforms/applications, and convert business requirements into technical requirements. Participate in design reviews to provide input on functional requirements, product designs, schedules, and/or potential problems. Understand current application infrastructure and suggest cloud-based solutions that reduce operational cost and require minimal maintenance while providing high availability with improved security. Perform unit testing on modified software to ensure that new functionality works as expected while existing functionalities continue to work in the same way. Coordinate with release management and other supporting teams to deploy changes in the production environment. Qualifications we seek in you! Minimum Qualifications: Experience in designing and implementing data pipelines, building data applications, and data migration on AWS. Strong experience implementing data lakes using AWS services like Glue, Lambda, Step Functions, and Redshift. Experience with Databricks is an added advantage. Strong experience in Python and SQL. Strong understanding of security principles and best practices for cloud-based environments.
Experience with monitoring tools and implementing proactive measures to ensure system availability and performance. Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment. Strong communication and collaboration skills to work effectively with cross-functional teams. Preferred Qualifications/Skills: Master's degree in Computer Science, Electronics, or Electrical Engineering. AWS Data Engineering & Cloud certifications, Databricks certifications. Experience working with Oracle ERP. Experience with multiple data integration technologies and cloud platforms. Knowledge of Change & Incident Management processes. Why join Genpact? Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation. Make an impact: drive change for global enterprises and solve business challenges that matter. Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities. Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
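One common Glue-plus-Lambda pattern implied by the responsibilities above is an S3-triggered Lambda that kicks off a Glue job. A hedged sketch, with a hypothetical job name and argument key:

```python
# Hypothetical Lambda handler: when a file lands in S3, start a Glue job
# with the object's location as a job argument. The Glue job name and the
# "--input_path" argument key are placeholders.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Standard S3 event notification shape
    record = event["Records"][0]["s3"]
    path = f"s3://{record['bucket']['name']}/{record['object']['key']}"
    response = glue.start_job_run(
        JobName="curate-orders",              # hypothetical Glue job
        Arguments={"--input_path": path},
    )
    return {"JobRunId": response["JobRunId"]}
```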

Posted 3 weeks ago

Apply

2.0 - 7.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Your Role: Experience in Enterprise Data Management Consolidation (EDMCS), Enterprise Profitability & Cost Management Cloud Services (EPCM), and Oracle Integration Cloud (OIC), across full-lifecycle Oracle EPM Cloud implementations. Experience in creating forms, OIC integrations, and complex business rules. Understand dependencies and interrelationships between various components of Oracle EPM Cloud. Keep abreast of the Oracle EPM roadmap and key functionality to identify opportunities where it will enhance the current process within the entire Financials ecosystem. Your Profile: Proven ability to collaborate with internal clients in an agile manner, leveraging design-thinking approaches. Collaborate with FP&A to facilitate the planning, forecasting, and reporting process for the organization. Create and maintain system documentation, both functional and technical. Experience with Python and AWS Cloud (Lambda, Step Functions, EventBridge, etc.) is preferred. What you'll love about Capgemini: You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group. You will also get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, or new-parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications. We're committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging at Capgemini. You are valued for who you are, and you can bring your original self to work. Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band. Also get to participate in internal sports events, yoga challenges, or marathons. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges. About Capgemini

Posted 3 weeks ago

Apply

10.0 - 15.0 years

25 - 40 Lacs

Noida

Remote

Job Summary: We are seeking a seasoned Confluent & Oracle EBS Cloud Engineer with over 10 years of experience to lead the design and implementation of scalable, cloud-native data solutions. This role focuses on modernizing enterprise data infrastructure, driving real-time data streaming initiatives, and migrating legacy ERP systems to AWS-based platforms. Key Responsibilities: • Architect and implement cloud-based data platforms using AWS services including Redshift, Glue, DMS, and Data Lake solutions. • Lead the migration of Oracle E-Business Suite or similar ERP systems to AWS, ensuring data integrity and performance. • Design and drive the implementation of Confluent Kafka for real-time data streaming across enterprise systems. • Define and enforce data architecture standards, governance policies, and best practices. • Collaborate with engineering, data, and business teams to align architecture with strategic goals. • Optimize data pipelines and storage for scalability, reliability, and cost-efficiency. Required Qualifications: • 10+ years of experience in data architecture, cloud engineering, or enterprise systems design. • Deep expertise in AWS services including Redshift, Glue, DMS, and Data Lake architectures. • Proven experience with Confluent Kafka for real-time data streaming and event-driven architectures. • Hands-on experience migrating large-scale ERP systems (e.g., Oracle EBS) to cloud platforms. • Strong understanding of data governance, security, and compliance in cloud environments. • Proficiency in designing scalable, fault-tolerant data systems. Preferred Qualifications: • Experience with data modeling, metadata management, and lineage tracking. • Familiarity with infrastructure-as-code and CI/CD practices. • Strong communication and leadership skills to guide cross-functional teams.
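To ground the Confluent Kafka requirement, here is a minimal producer sketch with the `confluent-kafka` Python client. The broker address, topic, and payload are hypothetical placeholders.

```python
# Minimal Confluent Kafka producer sketch. Broker address, topic, and
# payload are hypothetical placeholders.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "broker:9092"})

def on_delivery(err, msg):
    # Called asynchronously once the broker acknowledges (or rejects) the message
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [partition {msg.partition()}]")

event = {"invoice_id": 1001, "source": "oracle_ebs"}
producer.produce("erp-events", value=json.dumps(event), callback=on_delivery)
producer.flush()  # block until all queued messages are delivered
```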

Posted 3 weeks ago

Apply

3.0 - 5.0 years

12 - 16 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and cloud computing. Required education: Bachelor's degree. Preferred education: Master's degree. Required technical and professional expertise: Total 3-5+ years of experience in data management (DW, DL, data platform, lakehouse) and data engineering skills. Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on cloud data platforms on AWS. Experience with AWS EMR, AWS Glue, Databricks, AWS Redshift, and DynamoDB. Good to excellent SQL skills. Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark certified developers.
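A minimal sketch of the file-to-lake ingestion this role describes, using plain PySpark; the paths, columns, and partition key are assumptions.

```python
# Batch ingestion sketch: read raw CSV files, deduplicate and clean, then
# write partitioned Parquet to the data lake. Paths and column names are
# hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import to_date, col

spark = SparkSession.builder.appName("ingest").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-bucket/landing/sales/")

cleaned = (raw.dropDuplicates(["sale_id"])
              .withColumn("sale_date", to_date(col("sale_date"))))

# Partitioning by date keeps downstream queries pruned and fast
(cleaned.write.mode("append")
        .partitionBy("sale_date")
        .parquet("s3://example-bucket/curated/sales/"))
```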

Posted 3 weeks ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and cloud computing. Required education: Bachelor's degree. Preferred education: Master's degree. Required technical and professional expertise: Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on cloud data platforms on AWS. Experience with AWS EMR, AWS Glue, Databricks, AWS Redshift, and DynamoDB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka. Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark certified developers.

Posted 3 weeks ago

Apply

6.0 - 11.0 years

20 - 35 Lacs

Hyderabad

Hybrid

Greetings from Astrosoft Technologies! We are currently seeking a highly skilled and talented Senior AWS Data Engineer for our Hyderabad office. AstroSoft Technologies (https://www.astrosofttech.com/) is a global award-winning leader in Data, Cloud, AI/ML, and Digital Innovation, founded in 2004, headquartered in Florida, USA, with its India corporate office in Hyderabad. We are looking for highly skilled professionals with strong cloud engineering and data pipeline expertise to be part of our fast-growing IT team in Hyderabad. If you're passionate about delivering scalable, real-time data solutions and have a strong foundation in AWS and big data technologies, we want to hear from you. Apply here: karthik.jangam@astrosofttech.com. Role: Senior AWS Data Engineer. Location: Gachibowli, Hyderabad (Vasavi Sky City). Work Mode: Hybrid (work from office Tue to Thu, WFH Mon & Fri). Job Type: Full-Time. Shift: 12:30 PM to 9:30 PM IST. Experience Required: 7+ years. Key Responsibilities: Design and develop scalable data pipelines using Kafka, Kinesis, Spark, and Flink. Strong experience in AWS services: S3, Glue, EMR, DMS, SNS, SQS, MWAA (Airflow). Proficiency in Python, Java, or Scala (Python preferred). Infrastructure automation with Terraform. Experience with ETL tools (ODI is a plus). Work with Oracle, Redshift, advanced SQL tuning, and physical DB optimization. Implement monitoring tools (CloudWatch, Splunk, Datadog) and SRE best practices. Collaborate with cross-functional teams to translate business needs into technical solutions. Desired Candidate Profile: 7+ years total experience; 4+ years in AWS data engineering. Hands-on, solution-driven mindset with strong critical thinking. AWS certification preferred. Excellent communication and stakeholder engagement skills. Immediate joiners preferred. Why Join Astrosoft? H1B sponsorship (based on performance/project), daily lunch and dinner provided, group health insurance, skill certifications and learning support, a competitive leave policy, and a collaborative, innovation-driven environment. Thanks & Regards, Karthik Kumar, HR TAG Lead - India, Astrosoft Technologies, Unit 1810, Level 18, Vasavi Sky City, Gachibowli, Hyderabad, Telangana 500081. Contact: +91-8712229084. Email: karthik.jangam@astrosofttech.com. Winner, Telangana Best Employer Brand Award 2024.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad

Hybrid

Job Description: We are looking for an experienced Senior Data Engineer with a strong foundation in Python, SQL, and Spark, and hands-on expertise in AWS and Databricks. In this role, you will build and maintain scalable data pipelines and architecture to support analytics, data science, and business intelligence initiatives. You'll work closely with cross-functional teams to drive data reliability, quality, and performance. Responsibilities: Design, develop, and optimize scalable data pipelines using Databricks in AWS, including Glue, S3, Lambda, EMR, and Databricks notebooks, workflows, and jobs. Build a data lake in AWS Databricks. Build and maintain robust ETL/ELT workflows using Python and SQL to handle structured and semi-structured data. Develop distributed data processing solutions using Apache Spark or PySpark. Partner with data scientists and analysts to provide high-quality, accessible, and well-structured data. Ensure data quality, governance, security, and compliance across pipelines and data stores. Monitor, troubleshoot, and improve the performance of data systems and pipelines. Participate in code reviews and help establish engineering best practices. Mentor junior data engineers and support their technical development. Requirements: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 5+ years of hands-on experience in data engineering, with at least 2 years working with AWS Databricks. Strong programming skills in Python for data processing and automation. Advanced proficiency in SQL for querying and transforming large datasets. Deep experience with Apache Spark/PySpark in a distributed computing environment. Solid understanding of data modelling, warehousing, and performance optimization techniques. Proficiency with AWS services such as Glue, S3, Lambda, and EMR. Experience with version control (Git or CodeCommit). Experience with workflow orchestration such as Airflow or AWS Step Functions is a plus.
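On the Databricks side of the lake-building work above, a minimal Delta Lake sketch: upserting incoming records into a Delta table with MERGE. The table name and source path are assumptions, and `spark` is the session a Databricks notebook provides.

```python
# Databricks/Delta Lake sketch: upsert incoming records into a Delta table
# with MERGE. Table name and source path are hypothetical; `spark` is the
# session Databricks supplies in a notebook.
from delta.tables import DeltaTable

updates = spark.read.parquet("s3://example-bucket/incoming/customers/")

target = DeltaTable.forName(spark, "lake.customers")
(target.alias("t")
       .merge(updates.alias("u"), "t.customer_id = u.customer_id")
       .whenMatchedUpdateAll()      # refresh rows that already exist
       .whenNotMatchedInsertAll()   # append genuinely new rows
       .execute())
```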

Posted 3 weeks ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Pune, Gurugram

Hybrid

Python, PySpark, SQL/queries, AWS Elastic MapReduce (EMR), Amazon Managed Workflows for Apache Airflow (MWAA), AWS CDK, CloudFormation, Lambda, Step Functions, Athena, Redshift, Glue Catalog, S3, CI/CD (GitHub Actions)

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions, from large-scale models onward, tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Inviting applications for the role of Consultant - AWS Developer! We are looking for candidates who have a passion for cloud, with knowledge of different cloud environments. Ideal candidates should have technical experience in AWS platform services: IAM roles and policies, Glue, Lambda, EC2, S3, SNS, SQS, EKS, KMS, etc. This key role demands a highly motivated individual with a strong background in Computer Science/Software Engineering. You are meticulous, thorough, and possess excellent communication skills to engage with all levels of our stakeholders. A self-starter, you are up to speed with the latest developments in the tech world. Responsibilities: Hands-on experience and good skills with AWS platform services: IAM roles and policies, Glue, Lambda, EC2, S3, SNS, SQS, EKS, KMS, etc. Must have good working knowledge of Kubernetes and Docker. Utilize AWS services such as AWS Glue, Amazon S3, AWS Lambda, and others to optimize performance, reliability, and cost-effectiveness. Develop scripts, utilities, and automation tools to facilitate the migration process and ensure compatibility with AWS services. Implement best practices for security, scalability, and fault tolerance in AWS-based solutions. Experience in AWS cost analysis and a thorough understanding of how to optimize AWS cost. Must have good working knowledge of deployment templates like Terraform/CloudFormation. Ability to multi-task and manage various project elements simultaneously. Qualifications we seek in you! Minimum Qualifications/Skills: Bachelor's degree with experience in Information Technology. Must have experience in AWS platform services. Preferred Qualifications/Skills: Very good written and presentation/verbal communication skills with experience in a customer-facing role. In-depth requirement-understanding skills with good analytical and problem-solving ability, interpersonal efficiency, and a positive attitude.
Experience in ML/AI. Experience in the telecommunications industry. Experience with cloud providers (e.g., AWS, GCP). Why join Genpact? Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation. Make an impact: drive change for global enterprises and solve business challenges that matter. Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities. Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

22 - 37 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid

Inviting applications for the role of Senior Principal Consultant-Data Engineer, AWS! Locations: Bangalore, Hyderabad, Kolkata.

Responsibilities
Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka (see the sketch after this posting).
Integrate structured and unstructured data from various data sources into data lakes and data warehouses.
Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step Functions, Redshift).
Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
Develop application programs using Big Data technologies such as Apache Hadoop and Apache Spark, together with appropriate cloud services such as AWS.
Build data pipelines by implementing ETL (Extract-Transform-Load) processes.
Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
Analyze business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs.
Analyze requirements/user stories in business meetings, assess the impact of requirements on different platforms/applications, and convert business requirements into technical requirements.
Participate in design reviews to provide input on functional requirements, product designs, schedules, and potential problems.
Understand the current application infrastructure and suggest cloud-based solutions that reduce operational cost and require minimal maintenance while providing high availability and improved security.
Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work as before.
Coordinate with release management and other supporting teams to deploy changes to the production environment.

Qualifications we seek in you!
Minimum Qualifications
Experience in designing and implementing data pipelines, building data applications, and performing data migrations on AWS.
Strong experience implementing data lakes using AWS services such as Glue, Lambda, Step Functions, and Redshift.
Experience with Databricks is an added advantage.
Strong experience in Python and SQL.
Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.
Advanced programming skills in Python for data processing and automation.
Hands-on experience with Apache Spark for large-scale data processing.
Experience with Apache Kafka for real-time data streaming and event processing.
Proficiency in SQL for data querying and transformation.
Strong understanding of security principles and best practices for cloud-based environments.
Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
Excellent problem-solving skills and the ability to troubleshoot complex issues in a distributed, cloud-based environment.
Strong communication and collaboration skills to work effectively with cross-functional teams.

Preferred Qualifications/ Skills
Master's Degree in Computer Science, Electronics, or Electrical Engineering.
AWS Data Engineering and cloud certifications; Databricks certifications.
Experience with multiple data integration technologies and cloud platforms.
Knowledge of Change & Incident Management processes.
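As an illustration of the Glue-based pipeline work described above, here is a minimal sketch of a Glue job script in Python. It assumes a hypothetical catalogued database (raw_db), table (orders), and output bucket (s3://example-curated); all of these names are placeholders, not anything specified by the posting.

import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read a (hypothetical) source table registered in the Glue Data Catalog
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Transform: drop incomplete records using plain Spark on the underlying DataFrame
completed = orders.toDF().filter("order_status = 'COMPLETE'")

# Load: write curated Parquet back to S3 for Redshift Spectrum or downstream jobs
glue_context.write_dynamic_frame.from_options(
    frame=DynamicFrame.fromDF(completed, glue_context, "completed"),
    connection_type="s3",
    connection_options={"path": "s3://example-curated/orders/"},
    format="parquet",
)
job.commit()

A real pipeline would typically add job bookmarks, partitioning, and a write path into Redshift, but this shows the extract-transform-load shape the role centers on.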

Posted 3 weeks ago

Apply

4.0 - 7.0 years

15 - 17 Lacs

Hyderabad, Bengaluru

Work from Office

Design, develop, and implement data solutions using AWS Data Stack components such as Glue and Redshift. Write and optimize advanced SQL queries for data extraction, transformation, and analysis. Develop data processing workflows and ETL processes using Python and PySpark.
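A minimal sketch of the kind of PySpark ETL workflow this posting describes, assuming hypothetical S3 paths and columns (orders with order_ts, quantity, and unit_price); none of these names come from the posting itself.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV landed in S3 (hypothetical path)
orders = spark.read.option("header", True).csv("s3://example-raw/orders/")

# Transform: cast types, derive a revenue column, and aggregate per day
daily = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn(
        "revenue",
        F.col("quantity").cast("int") * F.col("unit_price").cast("double"),
    )
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("revenue").alias("daily_revenue"))
)

# Load: write partitioned-friendly Parquet for Redshift Spectrum or a Glue catalog
daily.write.mode("overwrite").parquet("s3://example-curated/daily_revenue/")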

Posted 3 weeks ago

Apply

4.0 - 9.0 years

6 - 16 Lacs

Indore, Gurugram, Bengaluru

Work from Office

Job Description

Required Skills and Experience:
Bachelor's degree in Computer Science, Engineering, or a related field.
Proven experience in supporting and maintaining ETL pipelines.
Strong proficiency in SQL and experience with relational databases (e.g., Redshift).
Solid understanding of distributed computing concepts and experience with Apache Spark.
Hands-on experience with AWS Glue and other AWS data services (e.g., S3, Lambda).
Experience with data warehousing concepts and best practices.
Excellent problem-solving and analytical skills, and strong communication and collaboration skills.
Ability to work independently and as part of a team.

Preferred Skills and Experience:
Experience with other ETL tools and technologies.
Experience with scripting languages (e.g., Python).
Familiarity with Agile development methodologies.
Experience with data visualization tools (e.g., Tableau, Power BI).

Roles & Responsibilities
Monitor and maintain existing ETL pipelines, ensuring data quality and availability (a minimal monitoring sketch follows this posting).
Identify and resolve pipeline issues and data errors.
Troubleshoot data integration processes and, if needed, collaborate with data engineers and other stakeholders to resolve complex issues.
Develop and maintain the necessary documentation for ETL processes and pipelines.
Participate in the on-call rotation for production support.
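Much of the pipeline-support work above comes down to checking job health. Below is a minimal sketch of such a check using boto3's Glue client; the job names are hypothetical examples, and a production version would page an on-call engineer rather than print.

import boto3

glue = boto3.client("glue")  # region and credentials come from the environment

def latest_run_state(job_name: str) -> str:
    """Return the state of the most recent run of a Glue job."""
    runs = glue.get_job_runs(JobName=job_name, MaxResults=1)["JobRuns"]
    return runs[0]["JobRunState"] if runs else "NO_RUNS"

# Hypothetical pipeline jobs to check during a support shift
for job in ["orders-ingest", "orders-curate"]:
    state = latest_run_state(job)
    if state in ("FAILED", "ERROR", "TIMEOUT"):
        print(f"ALERT: {job} last run ended in {state}")
    else:
        print(f"{job}: {state}")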

Posted 3 weeks ago

Apply