
267 AWS Glue Jobs - Page 3

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 10.0 years

22 - 37 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Naukri logo

Experience: 5-8 years (Lead, 23 LPA), 8-10 years (Senior Lead, 35 LPA), 10+ years (Architect, up to 42 LPA max)
Location: Bangalore as first preference; Hyderabad, Chennai, Pune, and Gurgaon are also possible
Notice: Immediate to a maximum of 15 days
Mode of Work: Hybrid

Job Description: Athena, Step Functions, Spark (PySpark), ETL fundamentals, SQL (basic + advanced), Glue, Python, Lambda, data warehousing, EBS/EFS, AWS EC2, Lake Formation, Aurora, S3, modern data platform fundamentals, PL/SQL, CloudFront.

We are looking for an experienced AWS Data Engineer to design, build, and manage robust, scalable, and high-performance data pipelines and data platforms on AWS. The ideal candidate will have a strong foundation in ETL fundamentals, data modeling, and modern data architecture, with hands-on expertise across a broad spectrum of AWS services including Athena, Glue, Step Functions, Lambda, S3, and Lake Formation.

Key Responsibilities: Design and implement scalable ETL/ELT pipelines using AWS Glue, Spark (PySpark), and Step Functions. Work with structured and semi-structured data using Athena, S3, and Lake Formation to enable efficient querying and access control. Develop and deploy serverless data processing solutions using AWS Lambda and integrate them into pipeline orchestration. Perform advanced SQL and PL/SQL development for data transformation, analysis, and performance tuning. Build data lakes and data warehouses using S3, Aurora, and Athena. Implement data governance, security, and access control strategies using AWS tools including Lake Formation, CloudFront, EBS/EFS, and IAM. Develop and maintain metadata, lineage, and data cataloging capabilities. Participate in data modeling exercises for both OLTP and OLAP environments. Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights. Monitor, debug, and optimize data pipelines for reliability and performance.

Required Skills & Experience: Strong experience with AWS data services: Glue, Athena, Step Functions, Lambda, Lake Formation, S3, EC2, Aurora, EBS/EFS, CloudFront. Proficient in PySpark, Python, SQL (basic and advanced), and PL/SQL. Solid understanding of ETL/ELT processes and data warehousing concepts. Familiarity with modern data platform fundamentals and distributed data processing. Experience in data modeling (conceptual, logical, physical) for analytical and operational use cases. Experience with orchestration and workflow management tools within AWS. Strong debugging and performance tuning skills across the data stack.
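For orientation, a minimal sketch of the kind of Glue PySpark job such pipelines are built from; the catalog database, table, column names, and S3 bucket are illustrative assumptions, not details from the posting.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw table registered in the Glue Data Catalog (e.g. by a crawler)
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db",        # assumed catalog database
    table_name="orders_raw",  # assumed table
)

# Basic cleanup with the Spark DataFrame API
df = raw.toDF().dropDuplicates(["order_id"]).filter("order_total > 0")

# Write curated output as partitioned Parquet to S3, queryable from Athena
(df.write.mode("overwrite")
   .partitionBy("order_date")
   .parquet("s3://example-curated-bucket/orders/"))

job.commit()
```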

Posted 4 days ago

Apply

3.0 - 5.0 years

22 - 25 Lacs

Bengaluru

Work from Office

Naukri logo

Job Description: We are looking for an energetic, self-motivated, and exceptional Data Engineer to work on extraordinary enterprise products based on AI and Big Data engineering, leveraging the AWS/Databricks tech stack. You will work with a star team of Architects, Data Scientists/AI Specialists, Data Engineers, and Integration specialists.

Skills and Qualifications: 5+ years of experience in the DWH/ETL domain on the Databricks/AWS tech stack. 2+ years of experience building data pipelines with Databricks/PySpark/SQL. Experience writing and interpreting SQL queries and designing data models and data standards. Experience with SQL Server, Oracle, and/or cloud databases. Experience in data warehousing and data marts, Star and Snowflake models. Experience loading data into databases from databases and files. Experience analyzing and drawing design conclusions from data profiling results. Understanding of business processes and the relationships between systems and applications. Must be comfortable conversing with end users. Must be able to manage multiple projects/clients simultaneously. Excellent analytical, verbal, and communication skills.

Role and Responsibilities: Work with business stakeholders to build data solutions that address analytical and reporting requirements. Work with application developers and business analysts to implement and optimise Databricks/AWS-based implementations meeting data requirements. Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow. Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation. Develop and optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases. Conduct root cause analysis and resolve production problems and data issues. Create and maintain up-to-date documentation of the data model, data flows, and field-level mappings. Provide support for production problems and daily batch processing. Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta Tables, Parquet), and views to ensure data integrity and performance.
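As a rough illustration of the Databricks/Delta Lake work described above, a hedged sketch of an incremental upsert into a Delta table; the landing path, table name, and key column are assumptions for the example.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Incoming batch of records landed by an upstream ingestion step (assumed path)
updates = spark.read.parquet("/mnt/landing/customers/")

# Curated Delta table maintained by the pipeline (assumed name)
target = DeltaTable.forName(spark, "analytics.customers")

# Merge (upsert) new and changed rows on the business key
(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```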

Posted 4 days ago

Apply

5.0 - 10.0 years

22 - 37 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Naukri logo

We are looking for "AWS Data Engineer (With GCP, BigQuery)" with Minimum 5 years experience Contact- Yashra (95001 81847) Required Candidate profile Athena,Step Functions, Spark - Pyspark, ETL Fundamentals, SQL(Basic + Advanced), Glue, Python, Lambda, Data Warehousing, EBS /EFS, AWS EC2,Lake Formation, Aurora, S3, Modern Data Platform Fundamentals

Posted 4 days ago

Apply

5.0 - 15.0 years

0 - 28 Lacs

Bengaluru

Work from Office

Naukri logo

Key Skills: Python, PySpark, AWS Glue, Redshift, and Spark Streaming.

Job Description: 6+ years of experience in data engineering, specifically in cloud environments like AWS. Proficiency in PySpark for distributed data processing and transformation. Solid experience with AWS Glue for ETL jobs and managing data workflows. Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration. Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2. Candidates should have broad knowledge of Python, PySpark, Glue jobs, Lambda, Step Functions, and SQL.

Client expectations from the candidate: 1. Process these events and save data in trusted and refined bucket schemas. 2. Bring six tables of historical data to the raw bucket, and populate historical data in trusted and refined bucket schemas. 3. Publish raw, trusted, and refined bucket data from #2 and #3 to the corresponding buckets in the CCB data lake, and develop an analytics pipeline to publish data to Snowflake. 4. Integrate TDQ/BDQ in the Glue pipeline. 5. Develop observability dashboards for these jobs. 6. Implement reliability wherever needed to prevent data loss. 7. Configure data archival policies and periodic cleanup. 8. Perform end-to-end testing of the implementation. 9. Implement all of the above in production. 10. Reconcile data across SORs, the Auth data lake, and the CCB data lake. 11. Success criteria: all 50 Kafka events are ingested into the CCB data lake and the existing 16 Tableau dashboards are populated using this data.
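Purely for illustration of the Kafka-to-raw-bucket ingestion step described above, a hedged Structured Streaming sketch; the broker address, topic, and bucket paths are assumptions, and the spark-sql-kafka connector must be available on the cluster.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-raw").getOrCreate()

# Read the event stream from Kafka (broker and topic are assumed)
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "example-events")
          .load()
          .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp"))

# Land the raw events in S3 as Parquet, with a checkpoint for recovery
query = (events.writeStream.format("parquet")
         .option("path", "s3://example-raw-bucket/events/")
         .option("checkpointLocation", "s3://example-raw-bucket/_checkpoints/events/")
         .trigger(processingTime="1 minute")
         .start())

query.awaitTermination()
```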

Posted 4 days ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Hyderabad

Hybrid

Naukri logo

Job Title: Lead Data Engineer

Job Summary: The Lead Data Engineer will provide technical expertise in the analysis, design, development, rollout, and maintenance of data integration initiatives. This role contributes to implementation methodologies and best practices, and works on project teams to analyse, design, develop, and deploy business intelligence / data integration solutions supporting a variety of customer needs. The position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings, and initiatives through mentoring and coaching. Provides technical expertise in needs identification, data modelling, data movement, and transformation mapping (source to target), automation, and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective, whilst leveraging best-fit technologies (e.g., cloud, Hadoop, NoSQL) to address business and environmental challenges. Works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports, and business intelligence best practices. Responsible for repeatable, lean, and maintainable enterprise BI design across organizations. Effectively partners with the client team. We expect leadership not only in the conventional sense but also within the team: candidates should exhibit qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, knowledge sharing, and approachability.

Responsibilities: Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc. Create functional and technical documentation, e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, and data testing plans. Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow and the conceptual, logical, and physical data models based on those needs. Perform data analysis to validate data models and to confirm the ability to meet business needs. May serve as project or DI lead, overseeing multiple consultants from various competencies. Stays current with emerging and changing technologies to best recommend and implement beneficial technologies and approaches for data integration. Ensures proper execution/creation of methodology, training, templates, resource plans, and engagement review processes. Coach team members to ensure understanding of projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities where appropriate. Coordinate and consult with the project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else data-related at the project or business unit level. Architect, design, develop, and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations, and best-practice standards; toolsets include but are not limited to SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, and Qlik. Work with the report team to identify, design, and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices.

Required Qualifications: 10 years of industry implementation experience with data integration tools such as AWS services (Redshift, Athena, Lambda, Glue, S3), ETL, etc. 5-8 years of management experience required; 5-8 years of consulting experience preferred. Minimum of 5 years of data architecture, data modelling, or similar experience. Bachelor's degree or equivalent experience; Master's degree preferred. Strong data warehousing, OLTP systems, data integration, and SDLC experience. Strong orchestration experience, with working experience in cloud-native or third-party ETL data load orchestration (e.g. Data Factory, HDInsight, Data Pipeline, Cloud Composer, or similar). Understanding and experience with major data architecture philosophies (Dimensional, ODS, Data Vault, etc.). Understanding of modern data warehouse capabilities and technologies such as real-time, cloud, and Big Data. Understanding of on-premises and cloud infrastructure architectures (e.g. Azure, AWS, GCP). Strong experience with Agile processes (Scrum cadences, roles, deliverables) and working experience in Azure DevOps, JIRA, or similar, with experience in CI/CD using one or more code management platforms. Strong Databricks experience required, including creating notebooks in PySpark. Experience using major data modelling tools (e.g. ERwin, ER/Studio, PowerDesigner). Experience with major database platforms (e.g. SQL Server, Oracle, Azure Data Lake, Hadoop, Azure Synapse/SQL Data Warehouse, Snowflake, Redshift). 3-5 years of development experience in decision support / business intelligence environments utilizing tools such as SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, Looker, etc.

Preferred Skills & Experience: Knowledge and working experience with data integration processes, such as data warehousing, EAI, etc. Experience providing estimates for data integration projects, including testing, documentation, and implementation. Ability to analyse business requirements as they relate to data movement and transformation processes, and to research, evaluate, and recommend alternative solutions. Ability to provide technical direction to other team members, including contractors and employees. Ability to contribute to conceptual data modelling sessions to accurately define business processes independently of data structures, and then combine the two. Proven experience leading team members, directly or indirectly, in completing high-quality major deliverables with superior results. Demonstrated ability to serve as a trusted advisor who builds influence with client management beyond simply EDM. Can create documentation and presentations such that they stand on their own. Can advise sales on evaluation of data integration efforts for new or existing client work. Can contribute to internal/external data integration proofs of concept. Demonstrates the ability to create new and innovative solutions to problems that have not previously been encountered. Ability to work independently on projects as well as collaborate effectively across teams. Must excel in a fast-paced, agile environment where critical thinking and strong problem-solving skills are required for success. Strong team building, interpersonal, analytical, and problem identification and resolution skills. Experience working with multi-level business communities. Can effectively utilise SQL and/or available BI tools to validate and elaborate business rules. Demonstrates an understanding of EDM architectures and applies this knowledge in collaborating with the team to design effective solutions to business problems and issues. Effectively influences and, at times, oversees business and data analysis activities to ensure sufficient understanding and quality of data. Demonstrates a complete understanding of, and utilises, DSC methodology documents to efficiently complete assigned roles and associated tasks. Deals effectively with all team members and builds strong working relationships and rapport with them. Understands and leverages a multi-layer semantic model to ensure scalability, durability, and supportability of the analytic solution. Understands modern data warehouse concepts (real-time, cloud, Big Data) and how to enable such capabilities from a reporting and analytics standpoint.

Posted 4 days ago

Apply

5.0 - 8.0 years

15 - 22 Lacs

Ahmedabad

Work from Office

Naukri logo

Strong proficiency in SQL and database experience (Snowflake preferred). Expertise with Python, especially pandas, is a must. Experience with Tableau and similar BI tools (Power BI, etc.) is a must.

Required Candidate Profile: Must have 4+ years of experience with Tableau, SQL, AWS, and Python. Must be from Ahmedabad or open to relocating to Ahmedabad. Experience with data modelling. Experience in AWS environments.

Posted 5 days ago

Apply

10.0 - 15.0 years

25 - 40 Lacs

Bengaluru

Work from Office

Naukri logo

About Client: Hiring for one of the most prestigious multinational corporations!

Job Title: AWS Solution Architect
Qualification: Any graduate or above
Relevant Experience: 10-15 years
Required Technical Skill Set: Data lakes, data warehouses, AWS Glue, Aurora with Postgres, MySQL, and DynamoDB
Location: Bangalore
CTC Range: 25-40 LPA
Notice Period: Any
Shift Timing: N/A
Mode of Interview: Virtual
Mode of Work: WFO (Work From Office)

Pooja Singh KS, IT Staffing Analyst
Black and White Business Solutions Pvt Ltd, Bangalore, Karnataka, India
pooja.singh@blackwhite.in | www.blackwhite.in

Posted 6 days ago

Apply

12.0 - 15.0 years

16 - 18 Lacs

Bengaluru

Hybrid

Naukri logo

iSource Services is hiring for one of their clients for an AWS position. The role requires AWS experience (not Azure or GCP), 12-15 years of experience, and hands-on expertise in design and implementation. Responsibilities include designing and developing data solutions and implementing efficient data processing pipelines using AWS services such as AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift. Candidates should possess exceptional communication skills to engage effectively with US clients. The ideal candidate must be hands-on with significant practical experience. Availability to work overlapping US hours is essential. The contract duration is 6 months.

Posted 6 days ago

Apply

8.0 - 12.0 years

16 - 27 Lacs

Chennai, Bengaluru

Work from Office

Naukri logo

Role & responsibilities Design, develop, and optimize scalable ETL pipelines using PySpark and AWS data services Work with structured and semi-structured data from various sources and formats (CSV, JSON, Parquet) Build reusable data transformations using Spark DataFrames, RDDs, and Spark SQL Implement data validation, quality checks, and ensure schema evolution across data sources Manage deployment and monitoring of Spark jobs using AWS EMR, Glue, Lambda, and CloudWatch Collaborate with product owners, architects, and data scientists to deliver robust data workflows Tune job performance, manage partitioning strategies, and reduce job latency/cost Contribute to version control, CI/CD processes, and production support Preferred candidate profile Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 5+ years of experience in PySpark, Spark SQL, RDDs, UDFs, and Spark optimization Strong experience in building ETL workflows for large-scale data processing Solid understanding of AWS cloud ecosystem, especially S3, EMR, Glue, Lambda, Athena Proficiency in Python, SQL, and shell scripting Experience with data lakes, partitioning strategies, and file formats (e.g., Parquet, ORC) Familiarity with Git, Jenkins, and automated testing frameworks (e.g., PyTest) Experience with Redshift, Snowflake, or other DW platforms Exposure to data governance, cataloging, or DQ frameworks Terraform or infrastructure-as-code experience Understanding of Spark internals, DAGs, and caching strategies

Posted 1 week ago

Apply

6.0 - 9.0 years

25 - 30 Lacs

Gurugram

Work from Office

Naukri logo

Experience: 6 to 9 years. Notice period: Immediate to 15 days. Location: Gurugram only. WFO: 4 days a week. Working shift: 1 PM to 10 PM. Band: 4A.

Posted 1 week ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Gurugram

Work from Office

Naukri logo

Experienced with AWS, with a strong understanding of cloud services and infrastructure. Knowledgeable in Big Data concepts and experienced with AWS Glue, including setting up jobs, data cataloging, and managing crawlers. Proficient in using and maintaining Apache Airflow for workflow management and Terraform for infrastructure automation. Skilled in Python for scripting and automation tasks. Independent and proactive in solving problems and troubleshooting issues.

Posted 1 week ago

Apply

3.0 - 5.0 years

22 - 25 Lacs

Bengaluru

Work from Office

Naukri logo

We are looking for an energetic, self-motivated, and exceptional Data Engineer to work on extraordinary enterprise products based on AI and Big Data engineering, leveraging the AWS/Databricks tech stack. You will work with a star team of Architects, Data Scientists/AI Specialists, Data Engineers, and Integration specialists.

Skills and Qualifications: 5+ years of experience in the DWH/ETL domain on the Databricks/AWS tech stack. 2+ years of experience building data pipelines with Databricks/PySpark/SQL. Experience writing and interpreting SQL queries and designing data models and data standards. Experience with SQL Server, Oracle, and/or cloud databases. Experience in data warehousing and data marts, Star and Snowflake models. Experience loading data into databases from databases and files. Experience analyzing and drawing design conclusions from data profiling results. Understanding of business processes and the relationships between systems and applications. Must be comfortable conversing with end users. Must be able to manage multiple projects/clients simultaneously. Excellent analytical, verbal, and communication skills.

Role and Responsibilities: Work with business stakeholders to build data solutions that address analytical and reporting requirements. Work with application developers and business analysts to implement and optimise Databricks/AWS-based implementations meeting data requirements. Design, develop, and optimize data pipelines using Databricks (Delta Lake, Spark SQL, PySpark), AWS Glue, and Apache Airflow. Implement and manage ETL workflows using Databricks notebooks, PySpark, and AWS Glue for efficient data transformation. Develop and optimize SQL scripts, queries, views, and stored procedures to enhance data models and improve query performance on managed databases. Conduct root cause analysis and resolve production problems and data issues. Create and maintain up-to-date documentation of the data model, data flows, and field-level mappings. Provide support for production problems and daily batch processing. Provide ongoing maintenance and optimization of database schemas, data lake structures (Delta Tables, Parquet), and views to ensure data integrity and performance.

Posted 1 week ago

Apply

3.0 - 6.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Naukri logo

Join us as a Data Engineer II in Bengaluru! Build scalable data pipelines using Python, SQL, AWS, Airflow, and Kafka. Drive real-time and batch data systems across analytics, ML, and product teams. A hybrid work option is available.

Required Candidate Profile: 3+ years in data engineering with strong Python, SQL, AWS, Airflow, Spark, Kafka, Debezium, Redshift, ETL, and CDC experience. Must know data lakes, warehousing, and orchestration tools.

Posted 1 week ago

Apply

8.0 - 13.0 years

12 - 22 Lacs

Pune

Hybrid

Naukri logo

- Experience developing applications using Python, Glue (ETL), Lambda, and Step Functions services alongside AWS EKS, S3, EMR, RDS data stores, CloudFront, and API Gateway.
- Experience with AWS services such as Amazon Elastic Compute Cloud (EC2), Glue, Amazon S3, EKS, and Lambda.

Required Candidate Profile:
- 10+ years of experience in software development and technical leadership, preferably with strong financial knowledge in building complex trading applications.
- 5+ years of people management experience.
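As a hedged sketch of how Lambda, Glue, and Step Functions are commonly wired together in this kind of stack; the job name and argument key are assumptions for illustration.

```python
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # Kick off a Glue ETL job; a Step Functions state machine can poll the
    # returned run id (job name and argument key are assumed for the example)
    response = glue.start_job_run(
        JobName="example-etl-job",
        Arguments={"--source_prefix": event.get("source_prefix", "incoming/")},
    )
    return {"JobRunId": response["JobRunId"]}
```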

Posted 1 week ago

Apply

6.0 - 10.0 years

9 - 18 Lacs

Hyderabad

Hybrid

Naukri logo

Primary Skills (mandatory): AWS working experience, AWS Glue or equivalent product experience, Lambda functions, Python programming, Kubernetes knowledge.

Roles and Responsibilities: Develop code, deployment, testing, bug fixing.

Number of interview rounds: 2 (a face-to-face round is mandatory).

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Kochi

Work from Office

Naukri logo

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data / cloud technologies such as Apache Spark, Kafka, and cloud computing.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years of experience on Cloud Data Platforms on AWS; experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB; good to excellent SQL skills; exposure to streaming solutions and message brokers like Kafka.

Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark certified developer.

Posted 1 week ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Gurugram

Work from Office

Naukri logo

Experienced with AWS, with a strong understanding of cloud services and infrastructure. Knowledgeable in Big Data concepts and experienced with AWS Glue, including setting up jobs, data cataloging, and managing crawlers. Proficient in using and maintaining Apache Airflow for workflow management and Terraform for infrastructure automation. Skilled in Python for scripting and automation tasks. Independent and proactive in solving problems and troubleshooting issues.

Posted 1 week ago

Apply

4.0 - 7.0 years

14 - 19 Lacs

Pune

Work from Office

Naukri logo

Calling all innovators: find your future at Fiserv. We're Fiserv, a global leader in fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day, quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Tech Lead, Software Development Engineering

What does a successful Back-End Developer do at Fiserv? Work in an Agile environment; analyse and understand product requirements by participating in requirement gathering and analysis; build self-contained, reusable, and testable APIs using the Spring Boot framework; collaborate with UI web developers and programmers to improve usability; identify areas for improvement in existing interface structures; develop functional documentation and guidelines for other team members; follow back-end architecture guidelines and best practices; ensure all designs and specifications are rendered properly; and conduct testing of completed features and software to assess user experience.

What You Will Do: Understand the business domain and implement the requirements. Work as part of a scrum team along with UI engineers, testing engineers, and product managers, and take part in different agile ceremonies. Design and develop REST/SOAP APIs using Spring Boot. Collaborate with the UI developer in building and integrating RESTful APIs. Document and run unit tests to enhance product performance. Work on bug fixing to improve product performance.

What You Will Need To Have: Bachelor's degree required. 8-11 years of Java development experience. Experience with back-end server-side development. Experience with any cloud (GCP/Azure/AWS), Kubernetes, or Docker. Experience with MySQL or similar relational databases. Experience with API design. Experienced in the day-to-day practicalities of software development lifecycles such as Scrum.

What Would Be Great To Have: Experience integrating and implementing AWS Glue jobs using Python. Experience with AWS Step Functions. Experience with CI/CD tools. Experience with Java 11 and above.

Thank you for considering employment with Fiserv. Please apply using your legal name, complete the step-by-step profile, and attach your resume (either is acceptable, both are preferable).

Our Commitment To Diversity And Inclusion: Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note To Agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning About Fake Job Posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.

Posted 1 week ago

Apply

5.0 - 10.0 years

9 - 18 Lacs

Hyderabad

Work from Office

Naukri logo

Must-have skills: Amazon Web Services (AWS). Good-to-have skills: Oracle Procedural Language Extensions to SQL (PL/SQL), Python. A minimum of 5 years of experience is required. Educational Qualification: Any graduation. Share CV at neha.mandal@mounttalent.com

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements, with Amazon Web Services (AWS) as your primary skill. Your typical day will involve working with AWS, Python, and Oracle PL/SQL to develop and maintain applications that meet business needs.

Responsibilities: 1. Write clean, scalable code using AWS Glue with Apache Spark/Python. 2. Remain up to date with the terminology, concepts, and best practices for coding. 3. Develop technical interfaces, specifications, and architecture. 4. Create and test software prototypes. 5. Develop client displays and user interfaces. 6. Handle project-related work and other requirements. 7. Working knowledge of AWS Glue, Elastic Compute Cloud (EC2) infrastructure for computational tasks, and Simple Storage Service (S3) as the storage mechanism.

Technical experience: 1. Good hands-on knowledge of AWS Glue (additionally AWS S3, Lambda, Aurora DB). 2. Knowledge of data warehousing concepts and strong data analysis skills. 3. Ability to independently analyze, solve, and correct issues in real time, providing end-to-end problem resolution. 4. Good communication skills. 5. Knowledge of the Agile process.

The resource should be willing to work in B shift. This position is based at our Hyderabad office.

Posted 1 week ago

Apply

15.0 - 24.0 years

45 - 100 Lacs

Chennai

Remote

Naukri logo

What can you expect in a Director of Data Engineering role with TaskUs?

Key Responsibilities: Manage a geographically diverse team of Managers/Senior Managers of Data Engineering responsible for the ETL that processes, transforms, and derives attributes from all operational data, sourced from various transactional systems, for reporting and analytics use. Set and enforce BI standards and architecture, and align BI architecture with enterprise architecture. Partner with business leaders, technology leaders, and other stakeholders to champion the strategy, design, development, launch, and management of cloud data engineering projects and initiatives that can scale and rapidly meet strategic and business objectives. Define the cloud data engineering strategy, roadmap, and strategic execution steps. Collaborate with business leadership and technology partners to leverage data to support and optimize efficiencies. Define, design, and implement processes for data integration and data management on cloud data platforms, primarily AWS. Accountable for managing project prioritization, progress, and workload across cloud data engineering staff to ensure on-time delivery. Review and manage the ticketing queue to ensure timely assignment and progression of support tickets. Work directly with the IT application teams and other IT areas to understand and assess requirements and prioritize a backlog of cloud services needed to enable transformation. Conduct comprehensive needs assessments to create and implement a modernized serverless data architecture plan that supports business analytics and reporting needs. Establish IT data and analysis standards, practices, and security measures to ensure effective and consistent information processing and consistent data quality and accessibility. Help architect cloud data engineering source-to-target auditing and alerting solutions to ensure data quality. Responsible for the data architecture, ETL, backup, and security of the new AWS-based data lake framework. Conduct data quality initiatives to rid the system of old, unused, or duplicate data. Oversee complex data modeling and advanced project metadata development. Ensure that business rules are consistently applied across different user interfaces to limit the possibility of inconsistent results. Manage and architect the migration of the on-premises DW SQL Server star schema to Redshift. Design specifications and standards for semantic layers and multidimensional models for complex BI projects, across all environments. Consult on training and usage for the business community by selecting appropriate BI tools, features, and techniques.

Required Qualifications: A people leader with strong stakeholder management experience. Strong knowledge of data warehousing concepts, with an understanding of traditional and MPP database designs, star and snowflake schemas, and database migration experience, plus 10 years of experience in data modeling. At least 8 years of hands-on development experience using ETL tools such as Pentaho, AWS Glue, Talend, or Airflow. Knowledge of the architecture, design, and implementation of MPP databases such as Teradata, Snowflake, or Redshift. 5 years of experience in development using cloud-based analytics solutions, preferably AWS. Knowledge of designing and implementing streaming pipelines using Apache Kafka, Apache Spark, and Fivetran/Segment. At least 5 years of experience using Python in a cloud-based environment is a plus. Knowledge of NoSQL databases such as MongoDB is not required but preferred. Structured thinker and effective communicator.

Education / Certifications: Bachelor's degree in Computer Science, Information Technology, or related fields (MBA or MS degree is a plus), or 15 to 20 years of relevant experience in lieu of a degree.

Work Location / Work Schedule / Travel: Remote (Global).

How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs.

DEI: At TaskUs we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know. We invite you to explore all TaskUs career opportunities and apply through the provided URL https://www.taskus.com/careers/ .

Posted 1 week ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Pune

Hybrid

Naukri logo

Job Duties and Responsibilities: We are looking for a self-starter to join our Data Engineering team. You will work in a fast-paced environment where you will get the opportunity to build and contribute to the full lifecycle development and maintenance of the data engineering platform. With the Data Engineering team you will get the opportunity to: design and implement data engineering solutions that are scalable, reliable, and secure in the cloud environment; understand and translate business needs into data engineering solutions; build large-scale data pipelines that can handle big data sets using distributed data processing techniques, supporting the efforts of the data science and data application teams; partner with cross-functional stakeholders including product managers, architects, data quality engineers, and application and quantitative science end users to deliver engineering solutions; and contribute to defining data governance across the data platform.

Basic Requirements: A minimum of a BS degree in computer science, software engineering, or a related scientific discipline is desired. 3+ years of work experience building scalable and robust data engineering solutions. Strong understanding of object-oriented programming and proficiency in Python (TDD) and PySpark to build scalable algorithms. 3+ years of experience in distributed computing and big data processing using the Apache Spark framework, including Spark optimization techniques. 2+ years of experience with Databricks, Delta tables, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), and incremental data processing. Experience with Delta Lake and Unity Catalog. Advanced SQL coding and query optimization experience, including the ability to write analytical and nested queries. 3+ years of experience building scalable ETL/ELT data pipelines on Databricks and AWS (EMR). 2+ years of experience orchestrating data pipelines using Apache Airflow/MWAA (see the sketch below). Understanding and experience of AWS services including ADX, EC2, and S3. 3+ years of experience with data modeling techniques for structured/unstructured datasets. Experience with relational/columnar databases (Redshift, RDS) and interactive querying services (Athena/Redshift Spectrum). Passion for healthcare and improving patient outcomes. Demonstrated analytical thinking with strong problem-solving skills. Stays on top of emerging technologies and possesses a willingness to learn.

Bonus Experience (optional): Experience with an Agile environment. Experience operating in a CI/CD environment. Experience building HTTP/REST APIs using popular frameworks. Healthcare experience.
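A hedged sketch of the Airflow/MWAA orchestration mentioned above; the DAG id, schedule, and task bodies are illustrative assumptions rather than details from the posting.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # e.g. land raw files from a source system into S3
    print("extracting")

def transform():
    # e.g. trigger a Databricks or EMR job that builds Delta tables
    print("transforming")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```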

Posted 1 week ago

Apply

9.0 - 13.0 years

32 - 40 Lacs

Ahmedabad

Remote

Naukri logo

About the Role: We are looking for a hands-on AWS Data Architect or Lead Engineer to design and implement scalable, secure, and high-performing data solutions. This is an individual contributor role where you will work closely with data engineers, analysts, and stakeholders to build modern, cloud-native data architectures across real-time and batch pipelines.

Experience: 7-15 years. Location: Fully remote. Company: Armakuni India.

Key Responsibilities: Data Architecture Design: develop and maintain a comprehensive data architecture strategy that aligns with business objectives and the technology landscape. Data Modeling: create and manage logical, physical, and conceptual data models to support various business applications and analytics. Database Design: design and implement database solutions, including data warehouses, data lakes, and operational databases. Data Integration: oversee the integration of data from disparate sources into unified, accessible systems using ETL/ELT processes. Data Governance: implement and enforce data governance policies and procedures to ensure data quality, consistency, and security. Technology Evaluation: evaluate and recommend data management tools, technologies, and best practices to improve data infrastructure and processes. Collaboration: work closely with data engineers, data scientists, business analysts, and other stakeholders to understand data requirements and deliver effective solutions. Documentation: create and maintain documentation related to data architecture, data flows, data dictionaries, and system interfaces. Performance Tuning: optimize database performance through tuning, indexing, and query optimization. Security: ensure data security and privacy by implementing best practices for data encryption, access controls, and compliance with relevant regulations (e.g., GDPR, CCPA).

Required Skills: Helping project teams with solution architecture, troubleshooting, and technical implementation assistance. Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, Oracle, SQL Server). Minimum 7 to 15 years of experience in data architecture or related roles. Experience with big data technologies (e.g., Hadoop, Spark, Kafka, Airflow). Expertise with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Knowledge of data integration tools (e.g., Informatica, Talend, Fivetran, Meltano). Understanding of data warehousing concepts and tools (e.g., Snowflake, Redshift, Synapse, BigQuery). Experience with data governance frameworks and tools.

Posted 1 week ago

Apply

6.0 - 11.0 years

30 - 35 Lacs

Hyderabad, Delhi / NCR

Hybrid

Naukri logo

Support enhancements to the MDM and performance platform. Track system performance. Troubleshoot issues and resolve production issues.

Required Candidate Profile: 5+ years in Python and advanced SQL, including profiling and refactoring. Experience with REST APIs and hands-on AWS Glue, EMR, etc. Experience with Markit EDM, Semarchy, or MDM will be a plus.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 - 1 Lacs

Pune

Work from Office

Naukri logo

As Lead Data Engineer, you'll design and manage scalable ETL pipelines and clean, structured data flows for real-time retail analytics. You'll work closely with ML engineers and business teams to deliver high-quality, ML-ready datasets.

Responsibilities: Develop and optimize large-scale ETL pipelines. Design schema-aware data flows and dashboard-ready datasets. Manage data pipelines on AWS (S3, Glue, Redshift). Work with transactional and retail data for real-time insights.

Posted 1 week ago

Apply

4.0 - 7.0 years

25 - 27 Lacs

Bengaluru

Remote

Naukri logo

4+ years of experience as a Data Engineer/Scientist, with hands-on experience in data warehousing, data ingestion, data processing, and data lakes. Must have strong development experience using Python and SQL, and an understanding of data orchestration tools like Airflow.

Required Candidate Profile: Experience with data extraction techniques (CDC, batch-based, Debezium, Kafka Connect, AWS DMS), queuing/messaging systems (SQS, RabbitMQ, Kinesis), and AWS data/ML services (AWS Glue, MWAA, Athena, Redshift).

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies