
149 ETL Pipelines Jobs

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 5.0 years

9 - 13 Lacs

Pune, Bengaluru

Work from Office


At Allstate, great things happen when our people work together to protect families and their belongings from life's uncertainties. For more than 90 years, our innovative drive has kept us a step ahead of our customers' evolving needs, from advocating for seat belts, air bags, and graduated driving laws, to being an industry leader in pricing sophistication, telematics, and, more recently, device and identity protection. The Business Intelligence Consultant II is responsible for leveraging data and analytics to answer complex questions and influence business strategy through communication of findings to stakeholders. The ideal candidate will have a strong background in SQL, data modeling, report development, and data visualization. You will work closely with stakeholders to understand business requirements and translate them into interactive reports and analytical solutions.
Responsibilities:
- Design, develop, and maintain Power BI dashboards and reports that provide actionable insights
- Write efficient, optimized, and advanced SQL queries to extract and manipulate data from relational databases (e.g., Oracle, Dremio)
- Develop and maintain data models (star/snowflake schema) and Power BI datasets
- Collaborate with business stakeholders to gather and analyze reporting requirements
- Ensure data accuracy, consistency, and performance of reports and dashboards
- Implement row-level security (RLS) and data refresh schedules in Power BI Service
- Optimize and tune SQL queries and Power BI performance (DAX, visuals)
- Work with data engineers and analysts to streamline ETL processes as needed
- Document solutions, definitions, and business rules used in BI reports
- Stay current with Power BI and SQL advancements, proposing improvements as appropriate
- Monitor and evaluate business initiatives against key performance indicators and communicate results and recommendations to management
- Lead data governance efforts, including standardization of KPIs, data definitions, and documentation
- Provide mentorship and guidance to junior BI analysts, fostering a culture of continuous learning and knowledge sharing
- Identify and recommend opportunities to automate reporting and streamline data operations
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field
- 4-6 years of professional experience in BI development using SQL and Power BI
- Expertise in writing complex SQL queries, views, and functions
- Proficiency in DAX, Power Query (M language), and Power BI Dataflows
- Strong understanding of data warehousing concepts, data modeling, and ETL pipelines
- Experience with Power BI Service: workspaces, publishing, RLS, and refresh scheduling
- Good understanding of database systems such as Oracle and Dremio
- Experience with Microsoft Fabric or equivalent unified data platforms (OneLake, Synapse, Data Factory)
- Ability to work independently and manage multiple projects with minimal supervision
- Excellent communication, writing, interpretation, and documentation skills
Primary Skills: Analytical Thinking, Business Intelligence (BI) Solutions, Data Analysis, Data-Driven Decision Making, User Acceptance Testing (UAT)
Shift Time: Shift B (India)
Recruiter Info: Annapurna Jhaajhat@allstate.com
About Allstate: The Allstate Corporation is one of the largest publicly held insurance providers in the United States. Ranked No. 84 in the 2023 Fortune 500 list of the largest United States corporations by total revenue, The Allstate Corporation owns and operates 18 companies in the United States, Canada, Northern Ireland, and India. Allstate India Private Limited, also known as Allstate India, is a subsidiary of The Allstate Corporation. The India talent center was set up in 2012 and operates under the corporation's Good Hands promise. As it innovates operations and technology, Allstate India has evolved beyond its technology functions to become the critical strategic business services arm of the corporation. With offices in Bengaluru and Pune, the company offers expertise to the parent organization's business areas, including technology and innovation, accounting and imaging services, policy administration, transformation solution design and support services, transformation of property liability service design, global operations and integration, and training and transition. Learn more about Allstate India here.
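For illustration only, here is a minimal, hedged sketch of the SQL-extraction side of such a role: running a star-schema aggregate from Python and handing the result to a reporting layer. The connection string, table, and column names are invented assumptions, not details from the posting.

```python
# Hedged sketch: pull a star-schema aggregate that could back a dashboard dataset.
# The DSN, schema, and column names are illustrative assumptions only.
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("oracle+oracledb://user:password@example-host:1521/?service_name=DWH")

CLAIMS_BY_REGION = text("""
    SELECT d.region,
           t.fiscal_month,
           SUM(f.claim_amount) AS total_claims
    FROM   fact_claims f
    JOIN   dim_agency d ON d.agency_key = f.agency_key
    JOIN   dim_date   t ON t.date_key   = f.date_key
    GROUP  BY d.region, t.fiscal_month
""")

with engine.connect() as conn:
    claims = pd.read_sql(CLAIMS_BY_REGION, conn)

# The resulting frame could feed a Power BI dataset or a validation check.
print(claims.head())
```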

Posted Just now

Apply

2.0 - 5.0 years

14 - 17 Lacs

Navi Mumbai

Work from Office


As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working within an agile framework.
- Discover and implement the latest technology trends to maximize and build creative solutions.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big Data Technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data Engineering Skills: strong understanding of ETL pipelines, data modelling, and data warehousing concepts.
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
- Data Processing Frameworks: knowledge of data processing libraries such as Pandas and NumPy.
- SQL Proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud Platforms: experience working with cloud platforms like AWS, Azure, or GCP, including cloud storage systems.
Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: detection and prevention tools for Company products and Platform and customer-facing
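As a rough illustration of the kind of distributed ETL work described, here is a minimal PySpark sketch; the bucket paths, column names, and business rule are assumptions for the example, not details of the role.

```python
# Minimal PySpark ETL sketch: extract raw CSV, apply a simple transform, load curated Parquet.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw CSV files from object storage.
orders = spark.read.option("header", True).csv("s3a://raw-bucket/orders/*.csv")

# Transform: keep completed orders and aggregate daily totals.
daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date")
    .agg(F.sum(F.col("amount").cast("double")).alias("total_amount"))
)

# Load: partitioned Parquet for downstream analytics.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet("s3a://curated-bucket/daily_totals/")
```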

Posted 1 hour ago

Apply

5.0 - 10.0 years

9 - 15 Lacs

Gurugram

Work from Office


Experience: 5+ years. Location: Remote. Budget: 15 LPA. Contact: 9916086641, anamika@makevisionsoutsourcing.in

Posted 5 hours ago

Apply

6.0 - 10.0 years

15 - 20 Lacs

Hyderabad

Work from Office


Develop, optimize, and maintain scalable data pipelines using Python and PySpark. Design and implement data processing workflows leveraging GCP services such as BigQuery, Dataflow, Cloud Functions, and Cloud Storage.
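As a hedged illustration of a Dataflow-style workflow, here is a small Apache Beam pipeline reading from Cloud Storage and appending to BigQuery; the bucket, project, dataset, and field names are invented, and the sketch assumes the target table already exists.

```python
# Illustrative Apache Beam pipeline of the kind Dataflow runs; all resource names are placeholders.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_line(line):
    # Assumed CSV layout: user_id,amount
    user_id, amount = line.split(",")
    return {"user_id": user_id, "amount": float(amount)}

options = PipelineOptions()  # add --runner=DataflowRunner, --project, --region, etc. to run on GCP

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/events/*.csv")
        | "Parse" >> beam.Map(parse_line)
        | "Write" >> beam.io.WriteToBigQuery(
            "example-project:analytics.events",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,  # assumes table exists
        )
    )
```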

Posted 6 hours ago

Apply

3.0 - 6.0 years

11 - 15 Lacs

Bengaluru

Work from Office


Project description: A DevOps Support Engineer will perform tasks related to data pipeline work, monitoring and support of job execution and data movement, and on-call support. In addition, deployed pipeline implementations will be tested for production validation.
Responsibilities: Provide production support, including first-tier, after-hours, and on-call support. The candidate will eventually move into more data engineering work within the Network Operations team, learning the telecommunications domain while also developing data engineering skills.
Skills
Must have: ETL pipelines, data engineering, data movement/monitoring; Azure Databricks; Watchtower; automation tools; testing
Nice to have: Data Engineering
Other: Languages: English C2 Proficient. Seniority: Regular
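To make the job-monitoring side of such a support role concrete, here is a hedged sketch that polls Databricks job runs over the Jobs REST API; the workspace host, token, and job ID are placeholders, and this is only one possible way to check run status.

```python
# Hedged sketch: list recent Databricks job runs and print their states for monitoring/on-call triage.
# Host, token, and job_id are placeholder values.
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://adb-....azuredatabricks.net
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.get(
    f"{host}/api/2.1/jobs/runs/list",
    headers={"Authorization": f"Bearer {token}"},
    params={"job_id": 123, "limit": 25},
    timeout=30,
)
resp.raise_for_status()

for run in resp.json().get("runs", []):
    state = run.get("state", {})
    print(run["run_id"], state.get("life_cycle_state"), state.get("result_state"))
```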

Posted 1 day ago

Apply

3.0 - 7.0 years

11 - 16 Lacs

Gurugram

Work from Office


Project description: We are looking for a star Python Developer who is not afraid of hard work and challenges. As a partner to a well-known financial institution, we are assembling a team of professionals with a wide range of skills to successfully deliver business value to the client.
Responsibilities:
- Analyse existing SAS DI pipelines and SQL-based transformations.
- Translate and optimize SAS SQL logic into Python code using frameworks such as PySpark.
- Develop and maintain scalable ETL pipelines using Python on AWS EMR.
- Implement data transformation, cleansing, and aggregation logic to support business requirements.
- Design modular and reusable code for distributed data processing tasks on EMR clusters.
- Integrate EMR jobs with upstream and downstream systems, including AWS S3, Snowflake, and Tableau.
- Develop Tableau reports for business reporting.
Skills
Must have:
- 6+ years of experience in ETL development, with at least 5 years working with AWS EMR.
- Bachelor's degree in Computer Science, Data Science, Statistics, or a related field.
- Proficiency in Python for data processing and scripting.
- Proficiency in SQL and experience with one or more ETL tools (e.g., SAS DI, Informatica).
- Hands-on experience with AWS services: EMR, S3, IAM, VPC, and Glue.
- Familiarity with data storage systems such as Snowflake or RDS.
- Excellent communication skills and ability to work collaboratively in a team environment.
- Strong problem-solving skills and ability to work independently.
Nice to have: N/A
Other: Languages: English B2 Upper Intermediate. Seniority: Senior
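Since the core of the role is re-expressing SAS SQL logic in PySpark on EMR, here is a hedged sketch of that migration pattern; the table, columns, and S3 paths are illustrative assumptions.

```python
# Hedged sketch: run legacy SQL as-is via a temp view, then re-express it with the DataFrame API.
# Table, column, and path names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sas-sql-to-pyspark").getOrCreate()

txns = spark.read.parquet("s3://example-bucket/raw/transactions/")
txns.createOrReplaceTempView("transactions")

# Step 1: the original SQL logic can run unchanged during an incremental migration.
sql_result = spark.sql("""
    SELECT customer_id, SUM(amount) AS total_amount
    FROM transactions
    WHERE txn_date >= '2024-01-01'
    GROUP BY customer_id
""")

# Step 2: the same logic re-expressed with DataFrame operations for maintainability.
df_result = (
    txns.filter(F.col("txn_date") >= "2024-01-01")
        .groupBy("customer_id")
        .agg(F.sum("amount").alias("total_amount"))
)

df_result.write.mode("overwrite").parquet("s3://example-bucket/curated/customer_totals/")
```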

Posted 1 day ago

Apply

5.0 - 10.0 years

9 - 15 Lacs

Pune

Remote


Experience: 5+ years. Location: Remote. Budget: 15 LPA. Contact: 9916086641, anamika@makevisionsoutsourcing.in

Posted 1 day ago

Apply

7.0 - 12.0 years

25 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


Must have: Scala, Spark, Azure Databricks, Kubernetes. Note: Quantexa certification is a must.
Good to have: Python, PySpark, Elastic, RESTful APIs.
ROLE PURPOSE: The purpose of the Data Engineer role is to design, build, and unit test data pipelines and jobs for projects and programmes on the Azure platform. This role supports the Quantexa Fraud platform programme; a Quantexa-certified engineer is preferred.
KEY ACCOUNTABILITIES:
- Analyse business requirements and support and maintain the Quantexa platform.
- Build and deploy new and changed data mappings, sessions, and workflows in the Azure Cloud Platform; the key focus area is the Quantexa platform on Azure.
- Develop both batch (using Azure Databricks) and real-time (Kafka and Kubernetes) pipelines and jobs to extract, transform, and load data to the platform.
- Perform ETL routine performance tuning, troubleshooting, support, and capacity estimation.
- Conduct thorough testing of ETL code changes to ensure quality deliverables.
- Provide day-to-day support and mentoring to end users who are interacting with the data.
- Profile and understand large amounts of available source data, including structured and semi-structured/web activity data.
- Analyse defects and provide fixes.
- Provide release notes for deployments and support release activities.
- Bring a problem-solving attitude.
- Keep up to date with new skills and develop technology skills in other areas of the platform.
FUNCTIONAL / TECHNICAL SKILLS:
- Exposure to fraud, financial crime, customer insights, or compliance-based projects that utilize detection and prediction models.
- Experience with ETL tools such as Databricks (Spark) and data projects.
- Experience with Kubernetes to deliver real-time data ingestion and transformation using Scala; Scala knowledge is highly desirable, and Python knowledge is a plus.
- Strong knowledge of SQL.
- Strong analytical skills.
- Azure DevOps knowledge.
- Experience with local IDEs, design documentation, and unit testing.
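For a sense of the real-time ingestion pattern mentioned (Kafka into the platform), here is a hedged sketch in PySpark Structured Streaming; the posting's stack is Scala-based, so this only illustrates the idea, and the broker, topic, and storage paths are invented.

```python
# Hedged sketch: stream events from Kafka and land them as Parquet with checkpointing.
# Broker address, topic, and ADLS paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .load()
)

parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "abfss://curated@example.dfs.core.windows.net/transactions/")
    .option("checkpointLocation", "abfss://curated@example.dfs.core.windows.net/_checkpoints/transactions/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```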

Posted 1 day ago

Apply

5.0 - 8.0 years

10 - 17 Lacs

Chennai

Work from Office


Data Engineer, Chennai, India.
About the job: The Data Engineer is a cornerstone of Vendasta's R&D team, driving the efficient processing, organization, and delivery of clean, structured data in support of business intelligence and decision-making. By developing and maintaining scalable ELT pipelines, they ensure data reliability and scalability, adhering to Vendasta's commitment to delivering data solutions aligned with evolving business needs.
Your Impact:
- Design, implement, and maintain scalable ELT pipelines within a Kimball Architecture data warehouse.
- Ensure robustness against failures and data entry errors, managing data conformation, de-duplication, survivorship, and coercion.
- Manage historical and hierarchical data structures, ensuring usability for the Business Intelligence (BI) team and scalability for future growth.
- Partner with BI teams to prioritize and deliver data solutions while maintaining alignment with business objectives.
- Work closely with source system owners to extract, clean, and integrate data into the data warehouse, and advocate for and influence improvements in source data integrity.
- Champion best practices in data engineering, including governance, lineage tracking, and quality assurance.
- Collaborate with Site Reliability Engineering (SRE) teams to optimize cloud infrastructure usage.
- Operate within an Agile framework, contributing to team backlogs via Kanban or Scrum processes as appropriate.
- Balance short-term deliverables with long-term technical investments in collaboration with BI and engineering management.
What you bring to the table:
- 5-8 years of proficiency in ETL and SQL, and experience with cloud-based platforms like Google Cloud (BigQuery, DBT, Looker).
- In-depth understanding of Kimball data warehousing principles, including the 34 subsystems of ETL.
- Strong problem-solving skills for diagnosing and resolving data quality issues.
- Ability to engage with BI teams and source system owners to prioritize and deliver data solutions effectively.
- Eagerness to advocate for data integrity improvements while respecting the boundaries of data mesh principles.
- Ability to balance immediate needs with long-term technical investments.
- Understanding of cloud infrastructure for effective resource management in partnership with SRE teams.
About Vendasta: So what do we actually do? Vendasta is a SaaS company composed of a group of global brands, including MatchCraft, Yesware, and Broadly, that builds and sells software and services to help small businesses operate more efficiently as a team, meet more client needs, and provide incredible client experiences. We have offices in Saskatoon, Saskatchewan; Boston and Boca Raton, Florida; and Chennai, India.
Perks: health insurance benefits; paid time off; training and career development (professional development plans, leadership workshops, mentorship programs, and more); free snacks, hot beverages, and catered lunches on Fridays; a culture built on our core values of Drive, Innovation, Respect, and Agility; night shift premium; Provident Fund.

Posted 2 days ago

Apply

6.0 - 10.0 years

1 - 1 Lacs

Bengaluru

Remote


We are looking for a highly skilled Senior ETL Consultant with strong expertise in Informatica Intelligent Data Management Cloud (IDMC) components such as IICS, CDI, CDQ, IDQ, CAI, along with proven experience in Databricks.

Posted 2 days ago

Apply

2.0 - 6.0 years

5 - 9 Lacs

Pune

Work from Office


Data Engineer 1. Common skills: SQL, GCP BigQuery, ETL pipelines using Python/Airflow, experience with Spark/Hive/HDFS, and data modeling for data conversion. Resources needed: 4. Prior experience on a conversion/migration HR project is an additional requirement along with the skills above. The Data Engineer should have HR domain knowledge; all other requirements come from the functional area specified by the customer. Customer name: Uber.
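As a hedged illustration of the Python/Airflow-to-BigQuery pipeline work mentioned, here is a minimal Airflow 2.x DAG skeleton; the DAG ID, schedule, and load function are invented placeholders, not project details.

```python
# Hedged Airflow DAG sketch for a daily extract-and-load step; all names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder: pull from the source HR system and load into a BigQuery staging table,
    # e.g. with the google-cloud-bigquery client.
    print("extract + load step")

with DAG(
    dag_id="hr_conversion_daily",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    load = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```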

Posted 2 days ago

Apply

8.0 - 13.0 years

14 - 24 Lacs

Hyderabad

Hybrid


Key Responsibilities:
• Designing and building a scalable data warehouse using Azure Data Factory (ADF), Azure Synapse Pipelines, and SSIS.
• Creating visually appealing BI dashboards using Power BI and other reporting tools to deliver data-driven insights.
• Collaborating with cross-functional teams, communicating complex concepts, and ensuring data governance and quality standards.
Basic Qualifications:
• 9-12 years of strong Business Intelligence/Business Analytics experience or equivalent is preferred.
• B.Tech/B.E. (any specialization) or M.E/M.Tech (any specialization).
• Strong proficiency in SQL and experience with database technologies (e.g., SQL Server).
• Solid understanding of data modeling, data warehousing, and ETL concepts.
• Excellent analytical and problem-solving skills, with the ability to translate complex business requirements into practical solutions.
• Strong communication and collaboration skills, with the ability to effectively interact with stakeholders at all levels of the organization.
• Proven ability to work independently and manage multiple priorities in a fast-paced environment.
• Must have worked on ingesting data into an Enterprise Data Warehouse.
• Good experience in Business Intelligence and Reporting, including but not limited to on-premises and cloud technologies.
• Must have exposure to the complete MSBI stack, including Power BI, and be able to deliver end-to-end BI solutions independently.
• Must have technical expertise in creating data pipelines and data integration strategies using SSIS, ADF, or Synapse Pipelines.
Preferred Qualifications:
• Hands-on experience with DBT and Fabric is preferred.
• Proficiency in programming languages such as Python or R is a plus.

Posted 2 days ago

Apply

3.0 - 8.0 years

5 - 11 Lacs

Pune, Mumbai (All Areas)

Hybrid


Overview: TresVista is looking to hire an Associate in its Data Intelligence Group, who will be primarily responsible for managing clients as well as monitoring and executing projects for both clients and internal teams. The Associate may directly manage a team of up to 3-4 Data Engineers and Analysts across multiple data engineering efforts for our clients with varied technologies. They would be joining the current team of 70+ members, which is a mix of Data Engineers, Data Visualization Experts, and Data Scientists.
Roles and Responsibilities:
- Interacting with clients (internal or external) to understand their problems and working on solutions that address their needs
- Driving projects and working closely with a team of individuals to ensure proper requirements are identified, useful user stories are created, and work is planned logically and efficiently to deliver solutions that support changing business requirements
- Managing the various activities within the team, strategizing how to approach tasks, creating timelines and goals, and distributing information and tasks to the various team members
- Conducting meetings, documenting, and communicating findings effectively to clients, management, and cross-functional teams
- Creating ad-hoc reports for multiple internal requests across departments
- Automating processes using data transformation tools
Prerequisites:
- Strong analytical, problem-solving, interpersonal, and communication skills
- Advanced knowledge of DBMS and data modelling, along with advanced querying capabilities using SQL
- Working experience in cloud technologies (GCP/AWS/Azure/Snowflake)
- Prior experience in building and deploying ETL/ELT pipelines using CI/CD and orchestration tools such as Apache Airflow, GCP Workflows, etc.
- Proficiency in Python for building ETL/ELT processes and data modeling
- Proficiency in reporting and dashboard creation using Power BI/Tableau
- Knowledge of building ML models and leveraging Gen AI for modern architectures
- Experience working with version control platforms like GitHub
- Familiarity with IaC tools like Terraform and Ansible is good to have
- Stakeholder management and client communication experience would be preferred
- Experience in the Financial Services domain will be an added plus
- Experience with Machine Learning tools and techniques will be good to have
Experience: 3-7 years
Education: BTech/MTech/BE/ME/MBA in Analytics
Compensation: The compensation structure will be as per industry standards

Posted 2 days ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Bengaluru

Work from Office


Role: Senior Data Engineer. Location: Bangalore (Hybrid). Experience: 10+ years.
Job Requirements:
- ETL & Data Pipelines: experience building and maintaining ETL pipelines with large data sets using AWS Glue, EMR, Kinesis, Kafka, and CloudWatch
- Programming & Data Processing: strong Python development experience with proficiency in Spark or PySpark; experience in using APIs
- Database Management: strong skills in writing SQL queries and performance tuning in AWS Redshift; proficient with other industry-leading RDBMS such as MS SQL Server and PostgreSQL
- AWS Services: proficient in working with AWS services including AWS Lambda, EventBridge, Step Functions, SNS, SQS, S3, and MI models
Interested candidates can share their resume at Neesha1@damcogroup.com
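For illustration of the AWS Glue side of this work, here is a hedged PySpark Glue job skeleton; the catalog database, table, and output bucket are placeholder names, not taken from the posting.

```python
# Hedged AWS Glue (PySpark) job skeleton: read from the Glue Data Catalog, clean, write curated Parquet.
# Database, table, and S3 path names are hypothetical.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw table, drop rows missing the key, and write curated output to S3.
dyf = glue_context.create_dynamic_frame.from_catalog(database="raw_db", table_name="orders")
df = dyf.toDF().dropna(subset=["order_id"])

df.write.mode("overwrite").parquet("s3://example-curated/orders/")
job.commit()
```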

Posted 2 days ago

Apply

2.0 - 4.0 years

10 - 18 Lacs

Bengaluru

Work from Office


Role & responsibilities:
- Design and Build Data Infrastructure: develop scalable data pipelines and data lake/warehouse solutions for real-time and batch data using cloud and open-source tools.
- Develop & Automate Data Workflows: create Python-based ETL/ELT processes for data ingestion, validation, integration, and transformation across multiple sources.
- Ensure Data Quality & Governance: implement monitoring systems, resolve data quality issues, and enforce data governance and security best practices.
- Collaborate & Mentor: work with cross-functional teams to deliver data solutions, and mentor junior engineers as the team grows.
- Explore New Tech: research and implement emerging tools and technologies to improve system performance and scalability.
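As a hedged example of the kind of automatable data-quality checks such a role involves, here is a small Python sketch; the rules and column names are invented for illustration.

```python
# Hedged sketch: simple, reusable data-quality checks that a pipeline step could run before loading.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return human-readable data-quality failures; an empty list means the batch is clean."""
    failures = []
    if df["customer_id"].isna().any():
        failures.append("customer_id contains nulls")
    if df["customer_id"].duplicated().any():
        failures.append("customer_id contains duplicates")
    if (df["amount"] < 0).any():
        failures.append("amount contains negative values")
    return failures

if __name__ == "__main__":
    sample = pd.DataFrame({"customer_id": [1, 2, 2], "amount": [10.0, -5.0, 3.5]})
    for issue in run_quality_checks(sample):
        print("FAILED:", issue)
```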

Posted 3 days ago

Apply

4.0 - 9.0 years

10 - 14 Lacs

Pune

Work from Office


Job Title: Sales Excellence - Client Success - Data Engineering Specialist - CF
Management Level: ML9
Location: Open
Must-have skills: GCP, SQL, Data Engineering, Python
Good-to-have skills: managing ETL pipelines
Job Summary:
We are: Sales Excellence. Sales Excellence at Accenture empowers our people to compete, win, and grow. We provide everything they need to grow their client portfolios, optimize their deals, and enable their sales talent, all driven by sales intelligence. The team is aligned to Client Success, a new function supporting Accenture's approach to putting client value and client experience at the heart of everything we do to foster client love. Our ambition is that every client loves working with Accenture and believes we're the ideal partner to help them create and realize their vision for the future, beyond their expectations.
You are: A builder at heart, curious about new tools and their usefulness, eager to create prototypes, and adaptable to changing paths. You enjoy sharing your experiments with a small team and are responsive to the needs of your clients.
The work: The Center of Excellence (COE) enables Sales Excellence to deliver best-in-class service offerings to Accenture leaders, practitioners, and sales teams. As a member of the COE Analytics Tools & Reporting team, you will help build and enhance the data foundation for reporting and analytics tools to provide insights on underlying trends and key drivers of the business.
Roles & Responsibilities:
- Collaborate with the Client Success, Analytics COE, and CIO Engineering/DevOps teams and stakeholders to build and enhance the Client Success data lake.
- Write complex SQL scripts to transform data for the creation of dashboards or reports, and validate the accuracy and completeness of the data.
- Build automated solutions to support business operations and data transfers.
- Document and build efficient data models for reporting and analytics use cases.
- Assure data lake data accuracy, consistency, and timeliness while ensuring user acceptance and satisfaction.
- Work with the Client Success and Sales Excellence COE members, the CIO Engineering/DevOps team, and Analytics Leads to standardize data in the data lake.
Professional & Technical Skills:
- Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field.
- At least 4 years of professional experience in developing and managing ETL pipelines.
- A minimum of 2 years of GCP experience.
- Ability to write complex SQL and prepare data for dashboarding.
- Experience in managing and documenting data models.
- Understanding of data governance and policies.
- Proficiency in Python and SQL scripting.
- Ability to translate business requirements into technical specifications for the engineering team.
- Curiosity, creativity, a collaborative attitude, and attention to detail.
- Ability to explain technical information to technical as well as non-technical users.
- Ability to work remotely with minimal supervision in a global environment.
- Proficiency with Microsoft Office tools.
Additional Information:
- Master's degree in analytics or a similar field.
- Data visualization or reporting using text data as well as sales, pricing, and finance data.
- Ability to prioritize workload and manage downstream stakeholders.
About Our Company | Accenture
Qualification and Experience: Minimum 5+ years of experience is required. Educational Qualification: Bachelor's degree or equivalent experience in Data Engineering, analytics, or a similar field.

Posted 3 days ago

Apply

3.0 - 7.0 years

10 - 18 Lacs

Pune

Work from Office


Roles and Responsibilities:
- Design, develop, and maintain automated testing frameworks using AWS services such as Glue, Lambda, Step Functions, etc.
- Develop data pipelines using Delta Lake and ETL processes to extract insights from large datasets.
- Collaborate with cross-functional teams to identify requirements for test cases and create comprehensive test plans.
- Ensure high-quality deliverables by executing thorough testing procedures and reporting defects.
Desired Candidate Profile:
- 3-7 years of experience in QA Automation with a focus on the AWS native stack (Glue, Lambda).
- Strong understanding of SQL concepts and ability to write complex queries.
- Experience working with big data technologies like Hadoop/Hive/PySpark is an added advantage.
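To illustrate the test-automation angle, here is a hedged PyTest-style sketch that validates a small transformation step before it runs inside a Glue or Lambda job; the transform_orders helper is hypothetical and only stands in for real pipeline logic.

```python
# Hedged sketch: unit tests for a pipeline transformation, runnable with `pytest`.
# transform_orders is an invented helper used only to show the testing pattern.
import pandas as pd
import pytest

def transform_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical transformation under test: keep completed orders and add a tax column."""
    out = df[df["status"] == "COMPLETED"].copy()
    out["amount_with_tax"] = out["amount"] * 1.18
    return out

def test_only_completed_orders_survive():
    raw = pd.DataFrame({"status": ["COMPLETED", "CANCELLED"], "amount": [100.0, 50.0]})
    result = transform_orders(raw)
    assert set(result["status"]) == {"COMPLETED"}

def test_tax_is_applied():
    raw = pd.DataFrame({"status": ["COMPLETED"], "amount": [100.0]})
    result = transform_orders(raw)
    assert result["amount_with_tax"].iloc[0] == pytest.approx(118.0)
```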

Posted 3 days ago

Apply

4.0 - 8.0 years

10 - 18 Lacs

Hyderabad

Hybrid


About the Role: We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.
Key Responsibilities:
- Design, develop, test, and maintain scalable ETL data pipelines using Python.
- Work extensively with Google Cloud Platform (GCP) services such as: Dataflow for real-time and batch data processing; Cloud Functions for lightweight serverless compute; BigQuery for data warehousing and analytics; Cloud Composer for orchestration of data workflows (based on Apache Airflow); Google Cloud Storage (GCS) for managing data at scale; IAM for access control and security; and Cloud Run for containerized applications.
- Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
- Implement and enforce data quality checks, validation rules, and monitoring.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
- Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
- Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
- Document pipeline designs, data flow diagrams, and operational support procedures.
Required Skills:
- 4-6 years of hands-on experience in Python for backend or data engineering projects.
- Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
- Solid understanding of data pipeline architecture, data integration, and transformation techniques.
- Experience working with version control systems like GitHub and knowledge of CI/CD practices.
- Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).
Good to Have (Optional Skills):
- Experience working with the Snowflake cloud data platform.
- Hands-on knowledge of Databricks for big data processing and analytics.
- Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools.
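As a hedged example of the "data quality checks" responsibility, here is a small validation step using the official BigQuery Python client; the project, dataset, and table names are placeholders.

```python
# Hedged sketch: a data-quality gate that fails a pipeline run when NULL keys appear in a staging table.
# Project, dataset, and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT COUNT(*) AS null_ids
    FROM `example-project.staging.customers`
    WHERE customer_id IS NULL
"""
row = next(iter(client.query(query).result()))
if row.null_ids > 0:
    raise ValueError(f"{row.null_ids} rows have NULL customer_id")
print("customer_id completeness check passed")
```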

Posted 3 days ago

Apply

8.0 - 12.0 years

5 - 10 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office


Teradata to Snowflake and Databricks migration on Azure Cloud; experience with data migration projects, including complex migrations to Databricks; strong expertise in ETL pipeline design and optimization, particularly for cloud environments and large-scale data migration.

Posted 4 days ago

Apply

2.0 - 5.0 years

6 - 10 Lacs

Chennai

Work from Office


Job Title: Junior AI Engineer / Data Engineer
Location: Chennai
Reports To: Senior AI Engineer / Data Architect
Job Summary: This role is ideal for an early-career engineer eager to develop robust data pipelines and support the development of AI/ML models. The Junior AI Engineer will primarily focus on data preparation, transformation, and infrastructure to support scalable AI systems.
Key Responsibilities:
- Build and maintain ETL pipelines for AI applications.
- Assist in data wrangling, cleaning, and feature engineering.
- Support data scientists and AI engineers with curated, high-quality datasets.
- Contribute to data governance and documentation.
- Collaborate on proof-of-concepts and prototypes of AI solutions.
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 2+ years of experience in data engineering.
- Proficient in Python and SQL; exposure to the Azure platform is a plus.
- Basic understanding of AI/ML concepts.
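For a feel of the data wrangling and feature-engineering work described, here is a hedged pandas sketch; the input file, columns, and derived features are invented for illustration.

```python
# Hedged sketch: clean raw events and derive simple per-user features for a downstream model.
# File name and column names are hypothetical.
import pandas as pd

raw = pd.read_csv("events.csv", parse_dates=["signup_date", "last_active"])

features = (
    raw
    .dropna(subset=["user_id"])
    .assign(
        tenure_days=lambda d: (d["last_active"] - d["signup_date"]).dt.days,
        is_mobile=lambda d: (d["device"] == "mobile").astype(int),
    )
    .groupby("user_id", as_index=False)
    .agg(tenure_days=("tenure_days", "max"), mobile_sessions=("is_mobile", "sum"))
)

features.to_parquet("features.parquet", index=False)
```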

Posted 4 days ago

Apply

6.0 - 8.0 years

6 - 8 Lacs

Pune, Maharashtra, India

On-site


Role Description: You will be joining the Anti-Financial Crime (AFC) Technology team and will work as part of a multi-skilled agile squad, specializing in designing, developing, and testing engineering solutions, as well as troubleshooting and resolving technical issues, to enable the Transaction Monitoring (TM) systems to identify money laundering or terrorism financing. You will have the opportunity to work on challenging problems with large, complex datasets and play a crucial role in managing and optimizing the data flows within Transaction Monitoring. You will work across Cloud and Big Data technologies, optimizing the performance of existing data pipelines as well as designing and creating new ETL frameworks and solutions, building high-performance systems to process large volumes of data using the latest technologies.
Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance, and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel.
You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion, and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those aged 35 years and above
Your key responsibilities: As a Vice President, your role will include management and leadership responsibilities, such as:
- Leading by example, creating efficient ETL workflows to extract data from multiple sources, transform it according to business requirements, and load it into the TM systems.
- Implementing data validation and cleansing techniques to maintain high data quality, and detective controls to ensure the integrity and completeness of data prepared through our data pipelines.
- Working closely with other developers and architects to design and implement solutions that meet business needs while ensuring that solutions are scalable, supportable, and sustainable.
- Ensuring that all engineering work complies with industry and DB standards, regulations, and best practices.
Your skills and experience:
- Good analytical and problem-solving capabilities with excellent written and oral communication skills, enabling authoring of documents that support a technical team in performing development work.
- Experience in Google Cloud Platform is preferred, but other cloud solutions such as AWS would be considered.
- 5+ years' experience in Oracle, Control-M, Linux, and Agile methodology, and prior experience working in an environment using internally engineered components (database, operating system, etc.).
- 5+ years' experience in Hadoop, Hive, Oracle, Control-M, and Java development is required, while experience in OpenShift and PySpark is preferred.
- Strong understanding of designing and delivering complex ETL pipelines in a regulatory space.
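To make the "detective controls" idea concrete, here is a hedged PySpark sketch of a completeness and reconciliation check on an incoming feed; the paths, column names, and expected count are assumptions, not details of the actual TM systems.

```python
# Hedged sketch: fail fast when a daily feed is incomplete or contains null business keys.
# Paths, column names, and the expected row count are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tm-feed-controls").getOrCreate()

feed = spark.read.parquet("hdfs:///data/tm/transactions/2024-06-01/")

total = feed.count()
missing_ref = feed.filter(F.col("transaction_ref").isNull()).count()

# Reconcile against the count reported by the source system (placeholder value).
expected_from_source = 1_000_000
if total < expected_from_source or missing_ref > 0:
    raise RuntimeError(
        f"Feed failed controls: rows={total}, expected>={expected_from_source}, null refs={missing_ref}"
    )
print("Feed passed completeness and integrity controls")
```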

Posted 4 days ago

Apply

3.0 - 5.0 years

10 - 15 Lacs

Ahmedabad

Work from Office


Design, deliver, and maintain the appropriate data solutions to provide the correct data for analytical development to address key issues within the organization. Gather detailed data requirements with a cross-functional team to deliver quality results.
Required Candidate Profile: Strong experience with cloud services within Azure, AWS, or GCP platforms (preferably Azure). Strong experience with analytical tools (preferably SQL, dbt, Snowflake, BigQuery, Tableau).

Posted 6 days ago

Apply

5.0 - 7.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
Inviting applications for the role of Principal Consultant - AWS Developer. We are seeking an experienced developer with expertise in AWS-based big data solutions, particularly leveraging Apache Spark on AWS EMR, along with strong backend development skills in Java and Spring. The ideal candidate will also possess a solid background in data warehousing, ETL pipelines, and large-scale data processing systems.
Responsibilities:
- Design and implement scalable data processing solutions using Apache Spark on AWS EMR.
- Develop microservices and backend components using Java and the Spring framework.
- Build, optimize, and maintain ETL pipelines for structured and unstructured data.
- Integrate data pipelines with AWS services such as S3, Lambda, Glue, Redshift, and Athena.
- Collaborate with data architects, analysts, and DevOps teams to support data warehousing initiatives.
- Write efficient, reusable, and reliable code following best practices.
- Ensure data quality, governance, and lineage across the architecture.
- Troubleshoot and optimize Spark jobs and cloud-based processing workflows.
- Participate in code reviews, testing, and deployments in Agile environments.
Qualifications we seek in you!
Minimum Qualifications:
- Bachelor's degree
Preferred Qualifications/Skills:
- Strong experience with Apache Spark and AWS EMR in production environments.
- Solid understanding of the AWS ecosystem, including services like S3, Lambda, Glue, Redshift, and CloudWatch.
- Proven experience in designing and managing large-scale data warehousing systems.
- Expertise in building and maintaining ETL pipelines and data transformation workflows.
- Strong SQL skills and familiarity with performance tuning for analytical queries.
- Experience working in Agile development environments using tools such as Git, JIRA, and CI/CD pipelines.
- Familiarity with data modeling concepts and tools (e.g., Star Schema, Snowflake Schema).
- Knowledge of data governance tools and metadata management.
- Experience with containerization (Docker, Kubernetes) and serverless architectures.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 6 days ago

Apply

8.0 - 12.0 years

16 - 27 Lacs

Chennai, Bengaluru

Work from Office


Role & responsibilities:
- Design, develop, and optimize scalable ETL pipelines using PySpark and AWS data services.
- Work with structured and semi-structured data from various sources and formats (CSV, JSON, Parquet).
- Build reusable data transformations using Spark DataFrames, RDDs, and Spark SQL.
- Implement data validation and quality checks, and ensure schema evolution across data sources.
- Manage deployment and monitoring of Spark jobs using AWS EMR, Glue, Lambda, and CloudWatch.
- Collaborate with product owners, architects, and data scientists to deliver robust data workflows.
- Tune job performance, manage partitioning strategies, and reduce job latency and cost.
- Contribute to version control, CI/CD processes, and production support.
Preferred candidate profile:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in PySpark, Spark SQL, RDDs, UDFs, and Spark optimization.
- Strong experience in building ETL workflows for large-scale data processing.
- Solid understanding of the AWS cloud ecosystem, especially S3, EMR, Glue, Lambda, and Athena.
- Proficiency in Python, SQL, and shell scripting.
- Experience with data lakes, partitioning strategies, and file formats (e.g., Parquet, ORC).
- Familiarity with Git, Jenkins, and automated testing frameworks (e.g., PyTest).
- Experience with Redshift, Snowflake, or other DW platforms.
- Exposure to data governance, cataloging, or DQ frameworks.
- Terraform or infrastructure-as-code experience.
- Understanding of Spark internals, DAGs, and caching strategies.
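As a hedged illustration of the schema-validation and partitioning responsibilities above, here is a short PySpark sketch enforcing an explicit schema on raw JSON and writing partitioned Parquet; the schema, paths, and partition column are assumptions.

```python
# Hedged sketch: enforce an explicit schema on raw JSON and write region-partitioned Parquet.
# Schema fields and S3 paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, DateType

spark = SparkSession.builder.appName("schema-enforced-etl").getOrCreate()

schema = StructType([
    StructField("order_id", StringType(), nullable=False),
    StructField("region", StringType(), nullable=True),
    StructField("amount", DoubleType(), nullable=True),
    StructField("order_date", DateType(), nullable=True),
])

orders = spark.read.schema(schema).json("s3a://raw-bucket/orders/")

# Partitioning by region keeps downstream Athena/Glue scans cheap and supports pruning.
orders.write.mode("overwrite").partitionBy("region").parquet("s3a://curated-bucket/orders/")
```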

Posted 6 days ago

Apply

3.0 - 6.0 years

10 - 17 Lacs

Pune

Hybrid


Software Engineer - Baner, Pune, Maharashtra
Department: Software & Automation
Employee Type: Permanent
Experience Range: 3 - 6 years
Qualification: Bachelor's or master's degree in computer science, IT, or a related field.
Roles & Responsibilities:
- Facilitate Agile ceremonies and lead Scrum practices.
- Support the Product Owner in backlog management and team organization.
- Promote Agile best practices (Scrum, SAFe) and continuous delivery improvements.
- Develop and maintain scalable data pipelines using AWS and Databricks (secondary focus).
- Collaborate with architects and contribute to solution design (support role).
- Occasionally travel for global team collaboration.
Must Have:
- Scrum Master or Agile team facilitation experience.
- Familiarity with Python and Databricks (PySpark, SQL).
- Good AWS cloud exposure (S3, EC2 basics).
Good to Have:
- Certified Scrum Master (CSM) or equivalent.
- Experience with ETL pipelines or data engineering concepts.
- Multi-cultural team collaboration experience.
Software Skills: JIRA, Confluence, Python (basic to intermediate), Databricks (basic)

Posted 1 week ago

Apply