
458 ETL Pipelines Jobs - Page 3

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

You should have at least 3 years of experience working as a Full Stack Developer. Your primary focus will be on backend development using Python, specifically Django, FastAPI, or Flask. In addition, you should have strong proficiency in frontend development using React.js and JavaScript/TypeScript. It is essential that you have experience with AI scripting, ML model integration, or working with AI APIs and frameworks such as OpenAI, TensorFlow, or PyTorch. You should also possess knowledge of RESTful API design and implementation, as well as database management with both SQL and NoSQL databases. Familiarity with Docker, Kubernetes, and CI/CD pipelines is required for this role. A strong understanding of software development principles, security best practices, and performance optimization is also necessary.

Preferred skills for this position include knowledge of GraphQL for API development, experience with serverless architecture and cloud functions, and an understanding of natural language processing (NLP) and AI automation. Experience in data engineering, ETL pipelines, or big data frameworks would be a plus.

Posted 1 week ago

Apply

6.0 - 11.0 years

10 - 17 Lacs

Hyderabad, Pune

Hybrid

Job Description: Data Engineer, 6+ years, hybrid mode of work in Hyderabad/Pune. Common Skills: SQL, GCP BigQuery, ETL pipelines using Python/Airflow, experience with Spark/Hive/HDFS, and data modeling for data conversion. Prior experience working on a conversion/migration HR project is an additional skill needed along with the skills mentioned above. Refer and earn rewards up to 50,000 INR. Please visit and log in to our job portal www.iitjobs.com for better future opportunities. Download our job portal app from the Android and iOS app stores.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Senior Data Engineer (Azure) at our organization, you will be responsible for managing large-scale data efficiently on Azure. Your passion for overcoming challenges and delivering exceptional results will be key in this role. We are seeking an individual with expertise in Azure data services, ETL pipelines, and database management. If you are a proactive problem solver with a solid technical background and a positive team player, we are excited to hear from you.

Your main responsibilities will include developing, monitoring, and maintaining end-to-end data pipelines on Azure. You will have the opportunity to work with various Azure services such as Azure Data Factory (ADF), Azure Synapse, Databricks, and Azure Blob Storage for data ingestion, processing, and analytics. Additionally, you will design and optimize data workflows for structured and unstructured data, ensuring efficient data models for optimal performance.

To excel in this role, you should possess a Bachelor's/Master's degree in Computer Science, Data Engineering, or a related field, along with at least 4 years of hands-on experience in data engineering, ETL development, and data pipeline implementation. Proficiency in Python and SQL is crucial for data processing and analysis. Strong expertise in Azure data services, big data processing frameworks, and data warehouse solutions is highly desirable.

Collaboration with cross-functional teams, including data scientists and software engineers, will be a key aspect of your role. You will be responsible for ensuring data quality, integrity, and compliance with security best practices. Your ability to work independently, mentor peers, and meet tight deadlines will be essential for success in this position.

In addition, we offer a flat-hierarchical, friendly, engineering-oriented, and growth-focused culture with flexible work timing, leave for life events, and work-from-home options. You will have access to free health insurance and various office facilities such as a fully equipped game zone, an in-office kitchen with an affordable lunch service, and free snacks. We also provide sponsorship for certifications/events and a library service to support your professional development.

If you are looking to join a dynamic team where you can contribute your expertise in data engineering and Azure services, we encourage you to apply. Your experience with cloud-based data architecture, machine learning model deployment, and automated data pipeline deployment using CI/CD workflows will be valuable assets to our team. Join us in delivering innovative data solutions and making a meaningful impact in the field of data engineering.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be responsible for developing and optimizing ETL pipelines using PySpark and Databricks. Your role will involve working with large-scale datasets and building distributed computing solutions. You will design and implement data ingestion, transformation, and processing workflows, as well as write efficient and scalable Python code for data processing. Collaboration with data engineers, data scientists, and business teams to deliver insights is a key aspect of this role. Additionally, you will be expected to optimize performance and cost efficiency for big data solutions and implement best practices for CI/CD, testing, and automation in a cloud environment. Monitoring job performance, troubleshooting failures, and tuning queries will also be part of your responsibilities.
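For orientation, here is a minimal sketch of the kind of PySpark ETL step this role describes, assuming a Databricks environment where Delta Lake is available; the storage paths, table, and column names are hypothetical placeholders, not details from the posting.

```python
# Minimal PySpark ETL sketch: read raw CSV, clean it, and write a Delta table.
# Paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest: raw CSV files landed in cloud storage
raw = spark.read.option("header", True).csv("/mnt/raw/orders/")

# Transform: type casting, de-duplication, and a derived partition column
cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write as Delta, partitioned by date for efficient downstream queries
(cleaned.write.format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .save("/mnt/curated/orders/"))
```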

Posted 1 week ago

Apply

6.0 - 9.0 years

16 - 20 Lacs

Hyderabad

Work from Office

Job Description Summary

As Technical Product Manager for our Data Products, you will join our GridOS Data Fabric product management team, which is delivering solutions designed to accelerate decarbonization by managing DERs at scale and proactively managing disruptions from climate change. Specifically, you will be accountable for managing technical product lifecycle activities around our core Data Products, in partnership with our own and partner development teams, to build trusted data products for our GridOS ADMS applications.

Roles and Responsibilities

Technical product management, responsible for delivering Data Products in partnership with both GE Vernova and partner development teams; includes all activities related to sprint planning, backlog grooming, testing, and release management. Collaborate with data engineers, data scientists, analysts, and business stakeholders to prioritize product epics and features. Ensure data products are reliable, scalable, secure, and aligned with regulatory and compliance standards. Advocate for data governance, data quality, and metadata management as part of product development. Evangelize the use of data products across the organization to drive data you can trust to fuel AI/ML predictive workflows. Take accountability for functional, business, and broad company objectives. Integrate and develop processes that meet business needs across the organization, be involved in long-term planning, manage complex issues within your functional area of expertise, and contribute to the overall business strategy. Develop specialized knowledge of the latest commercial developments in your own area and the communication skills to influence others. Contribute towards strategy and policy development and ensure delivery within your area of responsibility. Have in-depth knowledge of best practices and how your own area integrates with others, and a working knowledge of the competition and the factors that differentiate them in the market. Bring the right balance of tactical momentum and strategic focus and alignment, use engineering team processes such as scrums and daily stand-ups, and do not shy away from explaining deep technical requirements. Use judgment to make decisions or solve moderately complex tasks or problems within projects, product lines, markets, sales processes, campaigns, or customers. Take a new perspective on existing solutions. Use technical experience and expertise in data analysis to support recommendations. Use multiple internal and limited external sources outside of your own function to arrive at decisions. Act as a resource for colleagues with less experience. May lead small projects with moderate risks and resource requirements. Explain difficult or sensitive information and work to build consensus. Develop the persuasion skills required to influence others on topics within the field.

Required Qualifications

This role requires significant experience in product management and digital product management. Knowledge level is comparable to a Master's degree from an accredited university or college. Bachelor's degree in Computer Science, Data Science, Engineering, or a related field.

Desired Characteristics

Strong oral and written communication skills. Strong interpersonal and leadership skills. Demonstrated ability to analyze and resolve problems. Demonstrated ability to lead programs and projects. Ability to document, plan, market, and execute programs. Strong understanding of data infrastructure, data modeling, ETL pipelines, APIs, and cloud technologies (e.g., AWS, Azure). Experience with iterative product development and program management techniques, including Agile, SAFe, Scrum, and DevOps. Familiarity with data privacy and security practices (e.g., GDPR, CCPA, HIPAA), and an understanding of metadata, lineage, and data quality management. Knowledge of and experience with electric utility industry practices.

Posted 1 week ago

Apply

3.0 - 6.0 years

3 - 4 Lacs

Hyderabad

Work from Office

Responsibilities: * Design, develop & maintain data pipelines using Snowflake, DBT, SQL, and ETL tools. * Collaborate with cross-functional teams on project requirements & deliverables. Benefits: Office cab/shuttle.

Posted 1 week ago

Apply

5.0 - 10.0 years

6 - 12 Lacs

Chennai

Work from Office

Responsibilities: * Design, develop & optimize databases using Azure Synapse Analytics, PostgreSQL & ETL pipelines. * Collaborate with cross-functional teams on data warehousing projects. Benefits: Health insurance, free meals, accidental insurance.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Punjab

On-site

As a Database Administrator (DBA) at our company, you will be responsible for designing, managing, and optimizing databases to ensure performance and scalability for our SaaS-based CRM Platform in the Study Abroad & EdTech industry. Your role will be pivotal in maintaining the data backbone of our platform.

Your key responsibilities will include designing, implementing, and maintaining SQL and NoSQL databases such as MySQL, PostgreSQL, and MongoDB. You will optimize database performance, query efficiency, and data storage while ensuring database security, backup, recovery, and disaster recovery strategies are in place. Collaboration with development teams to enhance database queries and structures will be essential, along with monitoring and tuning database systems for high availability and performance. Database migrations, upgrades, and patches will be part of your routine tasks, along with utilizing Azure Boards for task management and sprint planning. You will collaborate using Git for version control when working with database scripts and ensure database scalability to meet growing business demands. Additionally, you will assist developers in designing database schemas aligned with a Microservices architecture.

The ideal candidate should possess strong expertise in SQL databases like MySQL and PostgreSQL, along with experience in NoSQL databases such as MongoDB. Proficiency in database design, performance tuning, query optimization, backup, recovery, high availability solutions, and database security & access control is required. Familiarity with cloud database solutions like AWS RDS, DynamoDB, or Azure SQL is a plus, as is knowledge of Microservices architecture and integration with Node.js/Angular development. Experience with Azure Boards for task tracking and sprint management, and with Git version control for branching, merging, and pull requests related to database changes, is essential. Strong problem-solving, analytical, and troubleshooting skills are highly valued in this role.

Candidates with experience in database replication & clustering, knowledge of ETL pipelines and data warehousing, and hands-on experience with AWS Database Services are encouraged to apply. Familiarity with CI/CD processes for database deployments will be an added advantage. Join us in this exciting opportunity to contribute to the growth and success of our CRM Platform.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a highly skilled Solution Architect at Quest Global, your primary responsibility will be to drive innovation and deliver impactful, scalable solutions tailored to our customers' needs. In this pivotal role, you will develop proof of concepts and design and lead solution implementations that enhance our offerings. Upon successful completion of the POCs, you will also mentor and guide engineers in developing production-ready solutions. Your role will involve building architectures, whether new or re-architecting existing products, and creating POCs to validate new enterprise architectures, with a focus on cloud technologies such as AI, Gen AI, and data pipeline enhancements. You will collaborate closely with client architects, Product Managers, and Engineering leaders to align with the product vision and help build corresponding responses from Quest Global, in collaboration with the VBU and project teams.

In terms of work experience, you should have a strong background in designing and implementing AI/ML solutions using key libraries and frameworks, guiding end-to-end deployment from initial POC to production. Additionally, you should be adept at developing POCs for new solutions, demonstrating their business value to clients and leading their evolution into full-scale, production-ready solutions. Your expertise in Large Language Models should be showcased through practical examples and case studies, while you also mentor and coach engineers throughout the implementation process to ensure robust, production-ready solutions with efficient monitoring and maintenance. Moreover, you should exhibit technical expertise in a variety of API technologies, including Python libraries, REST APIs, and Java frameworks, and demonstrate capabilities to clients. Applying advanced knowledge in cloud engineering, Docker, and Kubernetes will be essential to ensure optimized, scalable, and resilient solution architectures.

Your soft skills will be crucial as you engage as a trusted technical advisor to clients, understanding their business needs and identifying technical value-add opportunities. Collaborating with sales and delivery teams to identify solution opportunities, support the creation of proposals, and present solutions will also be part of your responsibilities. Establishing and monitoring key performance indicators for solutions, driving continuous improvements, and implementing remediation plans as needed will be key aspects of your role.

With at least 10 years of experience in technology, including 5+ years architecting distributed systems and developing technology roadmaps for global enterprises, you should have a deep understanding of AI/ML and experience in implementing solutions. Familiarity with LLMs and data engineering, and proficiency with Python, REST, Java, Docker, Kubernetes, and cloud platforms, will be essential to support containerization and cloud-native development. Demonstrated ability to lead end-to-end POC design, solution development, and client presentations, along with creating scalable, seamless integrations with existing systems, is also expected. Your strong interpersonal skills, proven ability to communicate complex technical concepts to both technical and non-technical audiences, skills in prioritization, and ability to manage client relationships to ensure satisfaction and long-term partnership will set you up for success in this role.

Posted 1 week ago

Apply

2.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Engineer, you will be responsible for designing and implementing next-generation data pipelines, analytics, and Business Intelligence solutions. You will manage various AWS resources such as RDS, EMR, MWAA, and Lambda. Your primary focus will be on developing high-quality data architecture and pipelines to support business analysts and data scientists. Collaborating with other technology teams, you will extract, transform, and load data from diverse sources. Additionally, you will work towards enhancing reporting and analysis processes by automating self-service support for customers.

You should have a solid background with 7-10 years of data engineering experience, including expertise in data modeling, warehousing, and building ETL pipelines. Proficiency in at least one modern programming language, preferably Python, is required. The role entails working independently on end-to-end projects and requires a good understanding of distributed systems related to data storage and computing. At least 2 years of experience analyzing and interpreting data using tools like Postgres and NoSQL databases is essential. Hands-on experience with big data frameworks such as Apache Spark, EMR, Glue, Data Lake, and BI tools like Tableau is necessary. Experience with geospatial and time series data is a plus.

Desirable skills include collaborating with cross-functional teams to design and deliver data initiatives effectively. You will build fault-tolerant and scalable data solutions using cutting-edge technologies like Spark, EMR, Python, Airflow, Glue, and S3. The role requires a proactive approach to continuously evaluate and enhance the strategy, architecture, tooling, and codebase to optimize performance, scalability, and availability.
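As an illustration of the Airflow-based orchestration this role mentions (MWAA is managed Airflow), here is a bare-bones DAG sketch, assuming Airflow 2.4 or later; the DAG name and the empty task bodies are hypothetical and would be replaced by real extract, transform, and load logic.

```python
# Illustrative Airflow DAG: a daily extract -> transform -> load flow.
# Task contents and names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull source data (e.g., from an API or RDS) and stage it in S3
    print("extracting source data")


def transform():
    # Clean and reshape the staged data (e.g., with pandas or a Spark job)
    print("transforming staged data")


def load():
    # Load curated data into the warehouse or data lake table
    print("loading curated data")


with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```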

Posted 1 week ago

Apply

7.0 - 10.0 years

22 - 27 Lacs

Noida, Pune, Gurugram

Hybrid

We are seeking a talented individual to join our IT team at Marsh McLennan. This role will be based in Pune/Noida/GGN/Mumbai. This is a hybrid role that requires working at least three days a week in the office. As a Data Engineer, you will be responsible for designing and implementing scalable data pipelines using Databricks and the Medallion Architecture. You will handle end-to-end ETL/ELT processes, manage large datasets, and work with tools like Python, PySpark, and AWS S3 to ensure data is transformed and optimized for analytical use. Additionally, knowledge of Informatica, Unix, and Oracle databases is good to have.

We will count on you to: Develop and maintain data pipelines using Databricks and the Medallion Architecture (Bronze, Silver, Gold layers). Design and optimize ETL/ELT workflows to ensure data quality and efficiency. Write data transformation scripts using Python and PySpark. Store and manage data in AWS S3 and integrate with other cloud-based services. Use SQL to query, clean, and manipulate large datasets. Collaborate with cross-functional teams to ensure data is accessible for business intelligence and analytics. Monitor and troubleshoot data pipelines for performance and reliability. Document data processes and follow best practices for scalability and maintainability. Ingest data from disparate sources using AWS DMS.

What you need to have: A B.Tech/BE or M.Tech degree. 7-10 years of experience. Experience with Databricks and the Medallion Architecture. Proficiency in Python, PySpark, and SQL for data processing and transformation. Solid experience with ETL/ELT pipelines and cloud-based data solutions, specifically AWS S3. Familiarity with version control using Git. Understanding of modern data architecture principles and best practices.

Preferred Qualifications: Experience with Delta Lake and other Databricks technologies. Knowledge of additional AWS services (e.g., Redshift, Glue, Lambda, S3, DMS).

What makes you stand out: Excellent verbal and written communication skills and comfort interfacing with business users. Good troubleshooting and technical skills. Able to work independently.

Why join our team: We help you be your best through professional development opportunities, interesting work, and supportive leaders. We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have an impact on colleagues, clients, and communities. Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

The GCP Architect is responsible for designing and implementing scalable, robust, and secure cloud solutions on Google Cloud Platform. You need to have a deep understanding of cloud architecture and services, strong problem-solving skills, and the ability to lead cross-functional teams in delivering complex cloud projects. You should possess excellent leadership, communication, and interpersonal skills. Strong problem-solving abilities, attention to detail, and the capability to work in a fast-paced, dynamic environment while managing multiple priorities are essential.

In terms of technical skills, you must have a strong understanding of GCP services such as Compute Engine, App Engine, Kubernetes Engine, Cloud Functions, and BigQuery. Proficiency in infrastructure-as-code tools like Terraform, as well as configuration tools such as Chef, Puppet, Ansible, and Salt, is required. Experience in deploying and managing applications with Kubernetes (GKE) and Docker, as well as serverless architectures, is crucial. Knowledge of API management, CI/CD pipelines, DevOps practices, networking, security, and database management in a cloud environment is necessary. You should also have experience in building ETL pipelines using Dataflow, Dataproc, and BigQuery, as well as using Pub/Sub, Dataflow, and other real-time data processing services. Experience in implementing backup solutions and disaster recovery plans, designing and deploying applications with high availability and fault tolerance, and designing solutions that span multiple cloud providers and on-premises infrastructure is expected.

Key Responsibilities:

Architectural Leadership: - Lead the design and development of cloud solutions on GCP. - Define and maintain the architectural vision to ensure alignment with business objectives. - Evaluate and recommend tools, technologies, and processes for the highest quality solutions.

Solution Design: - Design scalable, secure, and cost-effective cloud architectures. - Develop proof-of-concept projects to validate proposed solutions. - Collaborate with stakeholders to understand business requirements and translate them into technical solutions.

Implementation and Migration: - Oversee the implementation of multi-cloud solutions while meeting performance and reliability targets. - Lead heterogeneous cloud migration projects, ensuring minimal downtime and seamless transition with cloud-agnostic tools as well as third-party toolsets. - Provide guidance and best practices for deploying and managing applications in GCP.

Team Leadership and Collaboration: - Ensure no customer escalations. - Mentor and guide technical teams, fostering a culture of innovation and continuous improvement. - Collaborate with DevOps, Security, and Development teams to integrate cloud solutions. - Conduct training sessions and workshops to upskill teams on GCP services and best practices.

Security and Compliance: - Ensure cloud solutions comply with security and regulatory requirements. - Implement and maintain security best practices, including identity and access management, data protection, and network security.

Continuous Improvement: - Stay updated with the latest GCP services, features, and industry trends. - Continuously evaluate and improve cloud processes and architectures to enhance performance, reliability, and cost-efficiency.
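To make the BigQuery ingestion work mentioned above a little more concrete, here is a hedged Python sketch using the google-cloud-bigquery client to load a file from Cloud Storage into a table; the project, dataset, table, and bucket names are placeholders, not values from the posting.

```python
# Illustrative load of a CSV file from Cloud Storage into BigQuery.
# Project, dataset, and bucket names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,           # skip the header row
    autodetect=True,               # infer the schema from the file
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

table_id = "example-project.analytics.orders"
load_job = client.load_table_from_uri(
    "gs://example-bucket/raw/orders.csv",
    table_id,
    job_config=job_config,
)
load_job.result()  # block until the load job completes

print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")
```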

Posted 1 week ago

Apply

8.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We help the world run better. At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from.

What you'll do: You will take ownership of designing and building core integration frameworks that enable real-time, event-driven data flows between distributed SAP systems. As a senior contributor, you will work closely with architects to drive end-to-end development of services and pipelines supporting distributed data processing, data transformations, and intelligent automation. This is a unique opportunity to contribute to SAP's evolving data platform initiatives with hands-on involvement in Java, Python, Kafka, DevOps, real-time analytics, intelligent monitoring, BTP, and Hyperscaler ecosystems.

Responsibilities: Design and develop microservices using Java, RESTful APIs, and messaging frameworks such as Apache Kafka. Designing and developing UIs based on SAP UI5/Fiori is a plus. Design and develop an observability framework for customer insights. Build and maintain scalable data processing and ETL pipelines that support real-time and batch data flows; experience with Databricks is an advantage. Accelerate the App2App integration roadmap by identifying reusable patterns, driving platform automation, and establishing best practices. Collaborate with cross-functional teams to enable secure, reliable, and performant communication across SAP applications. Build and maintain distributed data processing pipelines supporting large-scale data ingestion, transformation, and routing. Work closely with DevOps to define and improve CI/CD pipelines, monitoring, and deployment strategies using modern GitOps practices. Guide cloud-native, secure deployment of services on SAP BTP and major Hyperscalers (AWS, Azure, GCP). Collaborate with SAP's broader data platform efforts, including Datasphere, SAP Analytics Cloud, and the BDC runtime architecture. Ensure adherence to best practices in microservices architecture, including service discovery, load balancing, and fault tolerance. Stay updated with the latest industry trends and technologies to continuously improve the development process.

What you bring: 8+ years of hands-on experience in backend development using Java, with strong object-oriented design and integration patterns. Hands-on experience building ETL pipelines and working with large-scale data processing frameworks. Exposure to log aggregation tools like Splunk, ELK, etc. Experience or experimentation with tools such as Databricks, Apache Spark, or other cloud-native data platforms is highly advantageous. Familiarity with SAP Business Technology Platform (BTP), SAP Datasphere, SAP Analytics Cloud, or HANA is highly desirable. Experience designing CI/CD pipelines and working with containerization (Docker), Kubernetes, and DevOps best practices. Working knowledge of Hyperscaler environments such as AWS, Azure, or GCP. Passion for clean code, automated testing, performance tuning, and continuous improvement. Strong communication skills and the ability to collaborate with global teams across time zones.
Meet your Team SAP is the market leader in enterprise application software, helping companies of all sizes and industries run at their best. As part of the Business Data Cloud (BDC) organization, the Foundation Services team is pivotal to SAP's Data & AI strategy, delivering next-generation data experiences that power intelligence across the enterprise. Located in Bangalore, India, our team drives cutting-edge engineering efforts in a collaborative, inclusive and high-impact environment, enabling innovation and integration across SAP's data platform #SAPInternalT3 Bring out your best SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best. We win with inclusion SAP's culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone - regardless of background - feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to Recruiting Operations Team: [HIDDEN TEXT] For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the . Specific conditions may apply for roles in Vocational Training. EOE AA M/F/Vet/Disability: Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, age, gender (including pregnancy, childbirth, et al), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor. Requisition ID: 430230 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: #LI-Hybrid.

Posted 1 week ago

Apply

10.0 - 20.0 years

15 - 30 Lacs

Noida, Gurugram, Delhi/NCR

Hybrid

Role & responsibilities

Skill: Data Engineer (Python, AWS). Experience: 10+ years. Location: Gurugram. Notice period: only immediate joiners.

Preferred candidate profile: We are seeking an experienced Lead Data Engineer with strong expertise in Python, AWS cloud services, ETL pipelines, and system integrations. The ideal candidate will lead the design, development, and optimization of scalable data solutions and ensure seamless API and data integrations across systems. You will collaborate with cross-functional teams to implement robust DataOps and CI/CD pipelines.

Key Responsibilities: Responsible for the implementation of scalable, secure, and high-performance data pipelines. Design and develop ETL processes using AWS services (Lambda, S3, Glue, Step Functions, etc.). Own and enhance API design and integrations for internal and external data systems. Work closely with data scientists, analysts, and software engineers to understand data needs and deliver solutions. Drive DataOps practices for automation, monitoring, logging, testing, and continuous deployment. Develop CI/CD pipelines for automated deployment of data solutions. Conduct code reviews and mentor junior engineers in best practices for data engineering and cloud development. Ensure compliance with data governance, security, and privacy policies.

Required Skills & Experience: 10+ years of experience in data engineering, software development, or related fields. Strong programming skills in Python for building robust data applications. Expert knowledge of AWS services, particularly Lambda, S3, Glue, CloudWatch, and Step Functions. Proven experience designing and managing ETL pipelines for large-scale data processing. Experience with API design, RESTful services, and API integration workflows. Deep understanding of DataOps practices and principles. Hands-on experience implementing CI/CD pipelines (e.g., using CodePipeline, Jenkins, GitHub Actions). Familiarity with containerization tools like Docker and orchestration tools like ECS/EKS (optional but preferred). Strong understanding of data modeling, data warehousing concepts, and performance optimization.
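As a rough illustration of the Lambda-plus-Glue pattern referenced above, the following hypothetical handler starts a Glue job when a new object lands in S3; the Glue job name and the argument key are invented for the example and would differ in a real deployment.

```python
# Hypothetical AWS Lambda handler: trigger a Glue ETL job for a new S3 object.
import boto3

glue = boto3.client("glue")


def lambda_handler(event, context):
    # Extract the S3 object that triggered this invocation
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Start the downstream Glue job, passing the new object as a job argument
    response = glue.start_job_run(
        JobName="orders-etl-job",                      # placeholder job name
        Arguments={"--source_path": f"s3://{bucket}/{key}"},
    )
    return {"JobRunId": response["JobRunId"]}
```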

Posted 1 week ago

Apply

12.0 - 14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Tech@Lilly Organization: Tech@Lilly builds and maintains capabilities using cutting-edge technologies like the most prominent tech companies. What differentiates Tech@Lilly is that we create new possibilities through tech to advance our purpose - creating medicines that make life better for people around the world, like data-driven drug discovery and connected clinical trials. We hire the best technology professionals from a variety of backgrounds, so they can bring an assortment of knowledge, skills, and diverse thinking to deliver innovative solutions in every area of the enterprise.

About the Business Function: Tech@Lilly Business Units is a global organization strategically positioned so that, through information and technology leadership and solutions, we create meaningful connections and remarkable experiences, so people feel genuinely cared for. The Business Unit organization is accountable for designing, developing, and supporting commercial or customer engagement services and capabilities that span multiple Business Units (Bio-Medicines, Diabetes, Oncology, International), functions, geographies, and digital channels. The areas supported by Business Units include: Customer Operations, Marketing and Commercial Operations, Medical Affairs, Market Research, Pricing, Reimbursement and Access, Customer Support Programs, Digital Production and Distribution, Global Patient Outcomes, and Real-World Evidence.

Job Title: BU Master Data Associate Director - Ops

The Associate Director - Data Operations will lead a global team responsible for round-the-clock (24x7) data platform operations, ensuring uptime, performance, and reliability across regions and time zones. This role demands strong leadership in managing incident resolution, service requests, and proactive monitoring to meet SLAs and deliver uninterrupted business continuity. The ideal candidate will drive the transformation of operations through intelligent automation and AI, implementing AIOps, anomaly detection, root cause prediction, and self-healing mechanisms to reduce manual interventions and accelerate issue resolution. This role involves close collaboration with engineering, infrastructure, and business teams, as well as transparent reporting of KPIs and operational metrics to senior leadership. The candidate will foster a culture of continuous improvement, team accountability, and operational excellence. Success in this role requires deep expertise in managing complex data environments, people leadership, process maturity, and a passion for using technology to solve operational challenges.

What you'll be doing: Lead and manage a global Data Operations team providing 24x7 support for data platforms, pipelines, and critical business services. Oversee incident and problem management, ensuring timely resolution of tickets within defined SLAs and proactive reduction of recurring issues. Drive operational efficiency through the adoption of AIOps capabilities such as anomaly detection, automated alerting, root cause analysis, and self-healing workflows. Collaborate with engineering, DevOps, and infrastructure teams to ensure stable and scalable data environments across cloud and on-prem platforms. Define and monitor KPIs and operational dashboards, sharing insights and performance metrics with senior leadership on a regular cadence. Develop and standardize processes for change management, deployment governance, capacity planning, and business continuity. Lead continuous improvement initiatives, including playbook automation, runbook maturity, and knowledge base development. Foster a high-performance, agile team culture with strong accountability, cross-training, and career growth pathways. Ensure adherence to security, compliance, and audit requirements in daily operations. Act as the escalation point for critical incidents and interface with business stakeholders for timely communication and impact management.

How you will succeed: Lead with accountability: set clear expectations, inspire high performance, and build a culture of ownership and continuous improvement within your operations team. Champion AI and automation: identify opportunities to implement AIOps and automation that reduce manual intervention, improve incident response, and enhance system reliability. Ensure operational excellence: monitor KPIs, SLAs, and system health metrics to proactively manage risks, maintain uptime, and deliver superior service levels globally. Communicate with impact: translate operational performance, root cause trends, and insights into clear updates for senior leadership and cross-functional stakeholders. Drive cross-functional collaboration: work closely with engineering, infrastructure, and business teams to streamline issue resolution, enable platform scalability, and improve user experience. Enable continuous learning: promote knowledge sharing, playbook development, and skill advancement across the team to improve response quality and reduce ticket turnaround time. Navigate complexity: manage high-pressure situations across time zones, ensuring smooth handoffs, consistent service, and clear escalation paths for critical incidents.

What you should bring: Proven expertise in managing 24x7 operational support, incident management, and ensuring adherence to SLAs in data engineering or analytics environments. Strong understanding of data platforms, ETL pipelines, and cloud ecosystems (AWS, Azure, or GCP). Experience implementing or working with AIOps platforms to automate alerting, resolution workflows, and root cause analysis. Demonstrated ability to lead large-scale operations teams, manage shift-based models, and drive process standardization and improvement. Strong interpersonal and stakeholder engagement skills, with the ability to communicate effectively with engineering, product, and senior leadership. Excellent problem-solving skills with a data-driven mindset and comfort handling ambiguity in high-pressure environments. Familiarity with tools like ServiceNow, Jira, or similar for operational tracking and reporting.

Basic Qualifications and Experience Requirements: Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related field. 12+ years of total experience in IT/data operations, with at least 3 years in a senior leadership or operations lead role. Proven experience managing mission-critical data operations in a 24x7 environment, preferably in a global, matrixed organization. Strong background in ETL workflows, orchestration tools (e.g., Apache Airflow, Control-M), and cloud-based data pipelines. Exposure to incident, change, and problem management processes aligned with ITIL or SRE frameworks. Familiarity with AIOps tools and practices to drive intelligent automation and issue resolution. Experience managing onshore/offshore teams and working across multiple time zones. Demonstrated ability to define and report on KPIs/SLAs for operational performance and team productivity.

Additional Skills/Preferences: Domain experience in healthcare, pharmaceutical (Customer Master, Product Master, Alignment Master, Activity, Consent, etc.), or regulated industries is a plus. Partner with and influence vendor resources on solution development to ensure understanding of data and technical direction for solutions as well as delivery. AWS Certified Data Engineer - Associate. Databricks Certified Data Engineer (Associate or Professional). AWS Certified Architect (Associate or Professional). Familiarity with AI/ML workflows and integrating machine learning models into data pipelines.

Additional Information: N/A

Lilly is dedicated to helping individuals with disabilities to actively engage in the workforce, ensuring equal opportunities when vying for positions. If you require accommodation to submit a resume for a position at Lilly, please complete the accommodation request form () for further assistance. Please note this is for individuals to request an accommodation as part of the application process, and any other correspondence will not receive a response. Lilly does not discriminate on the basis of age, race, color, religion, gender, sexual orientation, gender identity, gender expression, national origin, protected veteran status, disability, or any other legally protected status. #WeAreLilly

Posted 1 week ago

Apply

3.0 - 7.0 years

5 - 15 Lacs

Hyderabad

Work from Office

Roles and Responsibilities: Design, develop, test, deploy, and maintain large-scale data pipelines using Snowflake as the primary database engine. Collaborate with cross-functional teams to gather requirements and design solutions that meet business needs. Develop complex SQL queries to extract insights from large datasets stored in Snowflake tables. Troubleshoot issues related to data quality, performance tuning, and security compliance. Participate in code reviews to ensure adherence to coding standards and best practices.

Desired Candidate Profile: 3-7 years of experience working with Snowflake as a Data Engineer or in a similar role. Strong understanding of the SQL programming language, with the ability to write efficient queries for large datasets. Proficiency in the Python scripting language, with experience using popular libraries such as Pandas, NumPy, etc.
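For context, here is a small sketch of the Snowflake-plus-Python combination this posting asks for: running an analytical query and pulling the result into pandas via the Snowflake connector. The connection parameters, warehouse, database, and table names are placeholders, and fetch_pandas_all requires the connector's optional pandas extras.

```python
# Illustrative Snowflake query pulled into a pandas DataFrame.
# All connection details and identifiers are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="********",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)

query = """
    SELECT region,
           DATE_TRUNC('month', order_ts) AS month,
           SUM(amount) AS revenue
    FROM orders
    GROUP BY region, month
    ORDER BY month
"""

cur = conn.cursor()
cur.execute(query)
df = cur.fetch_pandas_all()  # needs snowflake-connector-python[pandas]
print(df.head())

cur.close()
conn.close()
```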

Posted 1 week ago

Apply

3.0 - 6.0 years

3 - 6 Lacs

Noida, Uttar Pradesh, India

On-site

Databricks Snowflake Data Engineer: Design, develop, and maintain scalable ETL pipelines using Databricks to process, transform, and load large datasets into Snowflake or other data stores. Collaborate with cross-functional teams, including data architects, analysts, and business stakeholders, to gather data requirements and deliver efficient data solutions. Exposure to the Snowflake warehouse and SnowSQL. Experience in loading data into the Snowflake data model using PySpark pipelines. Implementing data validation and cleansing procedures will ensure the quality, integrity, and dependability of the data. Improve the scalability, efficiency, and cost-effectiveness of data pipelines. Monitoring and resolving data pipeline problems will guarantee consistency and availability of the data.

Key Skill Sets Required: Experience in designing and hands-on development of cloud-based analytics solutions. Expert-level understanding of Databricks, Snowflake, SnowSQL, and PySpark is required. Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is desirable. Experience developing security models.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

You are a skilled Data Engineer with over 4 years of experience, seeking a role in Trivandrum (hybrid) for a contract duration of 6+ months on an IST shift. Your main responsibility will be to design and develop data warehouse solutions utilizing Azure Synapse Analytics, ADLS, ADF, Databricks, and Power BI. You will be involved in building and optimizing data pipelines, working with large datasets, and implementing automation using DevOps/CI/CD frameworks. Your expertise should lie in Azure, AWS, Terraform, ETL, Python, and data lifecycle management. You will collaborate on architecture frameworks and best practices while working on data warehouse design and implementation.

In this role, you will be expected to design and develop data warehouse solutions using Azure Synapse Analytics, ADLS, ADF, Databricks, Power BI, and Azure Analysis Services. This will involve developing and optimizing SQL queries, working with SSIS for ETL processes, and handling challenging scenarios effectively in an onshore-offshore model. You will build and optimize data pipelines for ETL workloads, work with large datasets, and implement data transformation processes. You will utilize DevOps/CI/CD frameworks for automation, leveraging Infrastructure as Code (IaC) with Terraform and configuration management tools. Participation in architecture framework discussions and best practices, and the implementation and maintenance of Azure Data Factory pipelines for ETL projects, will also be part of your responsibilities. Ensuring effective data ingestion, transformation, loading, validation, and performance tuning are key aspects of the role.

Your skills should include expertise in Azure Synapse Analytics, ADLS, ADF, Databricks, Power BI, and Azure Analysis Services. Strong experience in SQL, SSIS, and query optimization is required. Hands-on experience with ETL pipelines, data warehousing, and analytics, as well as proficiency in Azure, AWS, and DevOps/CI/CD frameworks, is essential. Experience with Terraform, Jenkins, and Infrastructure as Code (IaC), and strong knowledge of Python for data processing and transformation, are also necessary. The ability to work in an ambiguous environment and translate vague requirements into concrete deliverables is crucial for this role.

Posted 1 week ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Bhubaneswar, Kolkata, Nagpur

Hybrid

Experience: 6 to 11 years. Location: Kolkata, Bhubaneshwar, Nagpur.

Key Responsibilities: Design and build scalable ELT pipelines in Snowflake using DBT/SQL. Develop efficient, well-tested DBT models (staging, intermediate, and marts layers). Implement data quality, testing, and monitoring frameworks to ensure data reliability and accuracy. Optimize Snowflake queries, storage, and compute resources for performance and cost-efficiency. Collaborate with cross-functional teams to gather data requirements and deliver data solutions.

Required Qualifications: 6+ years of experience as a Data Engineer, with at least 5 years working with Snowflake. Proficient with DBT (Data Build Tool), including Jinja templating, macros, and model dependency management. Strong understanding of ELT patterns and modern data stack principles. Advanced SQL skills and experience with performance tuning in Snowflake.

Interested candidates, share your CV at himani.girnar@alikethoughts.com with the details below:
Candidate's name
Email and alternate email ID
Contact and alternate contact no.
Total exp
Relevant experience
Current org
Notice period
CCTC
ECTC
Current location
Preferred location
PAN card no.

Posted 1 week ago

Apply

5.0 - 10.0 years

16 - 25 Lacs

Bengaluru

Work from Office

Roles and Responsibilities: Lead end-to-end delivery of data engineering and analytics projects, ensuring timelines, quality, and stakeholder satisfaction. Collaborate with data engineers, data analysts, cloud architects, and business stakeholders to define project scope and objectives. Manage project planning, resource allocation, budgeting, and risk assessment for data-focused initiatives. Translate business requirements into technical specifications for engineering and analytics teams. Coordinate and track progress across cross-functional teams (engineering, analytics, QA, DevOps, etc.). Monitor and report project health, risks, and dependencies using Agile or hybrid methodologies. Ensure data governance, privacy, and security requirements are adhered to throughout the project lifecycle. Work with cloud teams to ensure efficient use of cloud services (AWS, Azure, GCP) for data platforms. Drive technical discussions, review architecture, and validate the feasibility of proposed solutions. Continuously improve project delivery processes and contribute to the project management best practices repository.

Requirements: 5+ years of experience managing technical projects, with at least 3 years in data engineering or analytics domains. Proven experience delivering cloud-based data projects (e.g., data lakes, pipelines, warehouses). Solid understanding of cloud platforms (AWS, Azure, or GCP), especially services like S3, Redshift, BigQuery, Azure Data Factory, etc. Strong understanding of data engineering concepts: ETL/ELT, data modeling, data quality, streaming/batch processing. Familiarity with analytics workflows, BI tools, and KPIs used in business intelligence. Proficiency in Agile/Scrum methodologies and tools (Jira, Confluence, Trello, etc.). Excellent communication, stakeholder management, and leadership skills. A technical background (development or architecture) is a strong plus. Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field (MBA or PMP is a plus).

Posted 1 week ago

Apply

3.0 - 5.0 years

10 - 20 Lacs

Hyderabad

Hybrid

Job Title: Snowflake Data Engineer. Key Skills: Snowflake, Snowpipe, Data Warehousing, Advanced SQL.

Job Description: Design, develop, and maintain robust ELT/ETL data pipelines to load structured and semi-structured data into Snowflake. Implement data ingestion workflows using tools like Azure Data Factory, Informatica, DBT, or custom Python/SQL scripts. Write and optimize complex SQL queries, stored procedures, views, and UDFs within Snowflake. Use Snowpipe for continuous data ingestion and manage tasks, streams, and file formats for near real-time processing. Optimize query performance using techniques like clustering keys, result caching, materialized views, and pruning strategies. Monitor and tune warehouse sizing and usage to balance cost and performance. Design and implement data models (star, snowflake, normalized, or denormalized) suitable for analytical workloads. Create logical and physical data models for reporting and analytics use cases.

Posted 1 week ago

Apply

8.0 - 12.0 years

7 - 10 Lacs

Pune

Work from Office

Only immediate joiners preferred. Experience: 8-12 years. Responsibilities: Design, develop & maintain ETL pipelines using Python, PySpark & SQL on AWS/Azure/GCP. Collaborate with cross-functional teams for data transformation & quality assurance.

Posted 1 week ago

Apply

3.0 - 8.0 years

2 - 6 Lacs

Bengaluru

Work from Office

- Analyzing solutions through the lens of client requirement metrics as well as the business domain.
- Conceptual thinking and the ability to find innovative ways to solve analytical problems.
- Engage with leadership and diversified stakeholder groups to understand their analytical needs and recommend business intelligence solutions.
- Own the design, development, and maintenance of ongoing performance metrics, reports, analyses, dashboards, etc. to drive key business decisions.
- Work with data engineering, machine learning, and software development teams to enable the appropriate capture and storage of key data points.
- Conduct written and verbal presentations to share insights and recommendations with audiences of varying levels of technical sophistication.
- Work closely with the business/product teams to enable data-driven decision making.
- Execute quantitative analysis that translates data into actionable insights.
- Influence new opportunities for the business based on internal/external data.
- Drive the team to adopt new and innovative ways of solving business problems.
- Responsible for leading requirement and functional discussions/workshops with the customer.
- Responsible for documentation of requirements and review of QA test cases/functional testing.
- Review UX designs to verify conformance to requirements.
- Coordinate with different teams to obtain planning inputs in time and ensure plans are aligned/agreed with stakeholders before set timelines.
- Support business decisions by formulating hypotheses, structuring the problem, analyzing relevant data to verify the hypotheses, and preparing documentation to communicate the results.
- Use statistical methods to analyze data and generate useful business reports.
- Own the development and maintenance of new and existing artifacts focused on analysis of requirements, metrics, and reporting dashboards.
- Partner with operations/business teams to consult on, develop, and implement KPIs, automated reporting/process solutions, and process improvements to meet business needs.
- Prepare and deliver business requirements reviews to the senior management team regarding progress and roadblocks.
- Participate in strategic and tactical planning discussions.
- Design, develop, and maintain scaled, automated, user-friendly systems, reports, dashboards, etc. that will support our business needs.
- Dive deep to fully understand the ETL pipelines, report architecture, and metric definitions.
- Excellent writing skills, to create artifacts/presentations for business and tech partners.
- Understand the customer's business requirements and articulate them in the form of user stories and business requirement documents.
- Act as a liaison between the business client and the technical team by planning, conducting, and directing the analysis of complex business problems.
- Support the client's organizational transformation goals by understanding the current operational state, future state, and business values; help to define and drive business solutions to meet the transformation goals.
- Analyze and provide input on the methodologies and frameworks to be used during the requirements and execution phases of projects.
- Lead requirements elicitation sessions to understand business problems.
- Determine the gaps between requirements and product functionalities and define workable solutions to bridge the gaps.
- Be part of a scrum team and help the Product Owner in sprint prioritization.
- Work directly with all levels through to senior management across the organization and be considered a high-level expert.
- Engage at the pre-sales stage to ensure the business drivers are understood and that the solutions proposed are targeted to help meet the business value KPIs indicated.
- Experience leading a business requirements process with both technical and business partners.
- Drive and help execute strategy by understanding the best mix of technologies and platforms.
- Ensure successful solution platform adoption by working closely with enterprise customers.
- Partner with fellow colleagues on projects, pitch preparation, business development, and knowledge management.
- Develop resourceful ways to gather insights on the industry and various business domains.
- Be a thought partner and advise client teams on segment, market, and institutional knowledge, and on industry and talent trends, for strategic pitches and assignments.

Essential Skills
- Bachelor's in Engineering, Computer Science, Math, Statistics, or a related discipline from a reputed institute.
- MBA or Master's degree in Computer Science, Engineering, Statistics, Mathematics, or a related field.
- Expertise in writing complex calculations and measures and implementing security models using Excel.
- Working SQL knowledge and experience with relational databases, and working familiarity with a variety of databases.
- Should follow best practices in terms of development, unit testing, and continuous integration.
- Extensive experience in a BA role with a technology company.
- Experience working in very large data warehouse environments.
- Ability to break down any problem into components that can be collaboratively solved.
- Ability to negotiate the scope of the problem through business understanding.
- Experience in analyzing very large, complex, multi-dimensional data sets.
- Ability to find innovative solutions to mitigate dependencies and risks for deliverables.
- Ability to independently plan and execute deliveries.
- Exceptional written and verbal communication skills.
- Ability to multitask and work on a diverse range of requirements.
- Result-oriented and ownership-driven; can own a process/project end to end with strong attention to detail.
- Team player, with excellent interpersonal and influencing skills, and the ability to manage stakeholders across functions and disciplines.
- Superior project management abilities and the initiative to lead assignments with minimal guidance.
- Highly motivated problem-solvers who possess intellectual curiosity, a readiness to tackle challenges, and qualitative analytical skills.

Technical capabilities:
- Expertise in SQL, advanced Excel, PowerPoint, and other scripting languages (R, Python, etc.).
- Working experience with BI tools (Power BI, Tableau, QlikView, etc.).
- 3+ years of relevant work experience in a role requiring application of analytic skills as a business analyst, data analyst, or statistical analyst.
- 3+ years of experience developing requirements and creating requirements documents and process flows.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Maharashtra

On-site

The job involves taking on the role of a Gen AI Developer + Architect, with specific responsibilities related to Azure AI services. Your main focus will be on developing and architecting solutions using Azure OpenAI Service and other AI technologies. You should have a deep understanding of Azure AI services, especially Azure OpenAI Service, and be proficient in large language models (LLMs) and their architectures. Additionally, you should possess expertise in Azure Machine Learning and Azure Cognitive Services, along with strong knowledge of Azure cloud architecture and best practices. In this role, you will be expected to work on prompt engineering and fine-tuning LLMs, while also ensuring compliance with AI ethics and responsible AI principles. Familiarity with Azure security and compliance standards is essential, as is experience in developing chatbots, data integration, prompting, and AI source interaction.

Your technical skills should include strong programming abilities, particularly in Python, and expertise in Azure AI services, including Azure OpenAI Service. Proficiency in machine learning frameworks such as PyTorch and TensorFlow is necessary, along with experience in Azure DevOps and CI/CD pipelines. Knowledge of distributed computing and scalable AI systems on Azure, as well as familiarity with Azure Kubernetes Service (AKS) for AI workloads, will be beneficial. An understanding of data processing and ETL pipelines in Azure is also required to excel in this role.
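As an indicative example of calling Azure OpenAI Service from Python, which this role centres on, here is a short hedged sketch using the openai (v1+) SDK; the endpoint, API version, deployment name, and prompts are placeholders rather than anything specified by the posting.

```python
# Illustrative Azure OpenAI chat completion call.
# Endpoint, key, API version, and deployment name are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4o-deployment",  # name of the Azure deployment, not the base model
    messages=[
        {"role": "system", "content": "You answer questions about internal documents."},
        {"role": "user", "content": "Summarize last quarter's sales highlights."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```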

Posted 1 week ago

Apply

2.0 - 4.0 years

9 - 12 Lacs

Noida

Work from Office

Job Responsibilities: Design, build & maintain scalable data pipelines for ingestion, processing & storage. Collaborate with data scientists, analysts, and product teams to deliver high-quality data solutions. Optimize data systems for performance, reliability, scalability, and cost-efficiency. Implement data quality checks ensuring accuracy, completeness, and consistency. Work with structured & unstructured data from diverse sources. Develop & maintain data models, metadata, and documentation. Automate & monitor workflows using tools like Apache Airflow (or similar). Ensure data governance & security best practices are followed.

Required Skills & Qualifications: Bachelor's/Master's degree in Computer Science, Engineering, or a related field. 3-5 years of experience in data engineering, ETL development, or backend data systems. Proficiency in SQL & Python/Scala. Experience with big data tools (Spark, Hadoop, Kafka, etc.). Hands-on experience with cloud data platforms (AWS Redshift, GCP BigQuery, Azure Data Lake). Familiarity with orchestration tools (Airflow, Luigi, etc.). Experience with data warehousing & data modeling. Strong analytical & problem-solving skills; ability to work independently & in teams.

Preferred Qualifications: Experience with containerization (Docker, Kubernetes). Knowledge of CI/CD processes & Git version control. Understanding of data privacy regulations (GDPR, CCPA, etc.). Exposure to machine learning pipelines / MLOps is a plus.
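To illustrate the data quality checks this posting mentions, below is a small pandas-based sketch; the column names, rules, and sample batch are hypothetical and stand in for whatever a real pipeline would validate.

```python
# Minimal data quality check sketch: completeness, consistency, and accuracy
# rules over a hypothetical batch of order records.
import pandas as pd


def run_quality_checks(df: pd.DataFrame) -> list:
    """Return a list of failed checks for a batch of records."""
    failures = []

    # Completeness: key columns must not contain nulls
    for col in ("order_id", "customer_id", "amount"):
        if df[col].isna().any():
            failures.append(f"nulls found in {col}")

    # Consistency: the primary key must be unique
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")

    # Accuracy: amounts must be non-negative
    if (df["amount"] < 0).any():
        failures.append("negative amounts")

    return failures


batch = pd.DataFrame(
    {"order_id": [1, 2, 2], "customer_id": [10, 11, 12], "amount": [99.5, -5.0, 20.0]}
)
print(run_quality_checks(batch))  # ['duplicate order_id values', 'negative amounts']
```

In a pipeline, a helper like this would typically run as a validation step (for example, inside an Airflow task) and fail the run or raise an alert when the returned list is non-empty.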

Posted 1 week ago

Apply