
458 ETL Pipelines Jobs - Page 2

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 5.0 years

0 Lacs

India

On-site

DESCRIPTION

Have you ever wondered how Amazon shipped your order so fast? Wondered where it came from, or how much it cost us? To help describe some of our challenges, we created a short video about Supply Chain Optimization at Amazon - http://bit.ly/amazon-scot

We are seeking a Data Engineer to join our team. Amazon has a culture of data-driven decision-making and demands business intelligence that is timely, accurate, and actionable. Your work will have an immediate influence on day-to-day decision-making at Amazon.com. As an Amazon Data Engineer, you will be working in one of the world's largest and most complex data warehouse environments. We maintain one of the largest data marts in Amazon and work on Business Intelligence reporting and dashboarding solutions used by thousands of users worldwide. Our team is responsible for the timely delivery of mission-critical analytical reports and metrics that are viewed at the highest levels of the organization.

You should have deep expertise in the design, creation, management, and business use of extremely large datasets, along with excellent business and communication skills, so you can work with business owners to develop and define key business questions and build data sets that answer them. You should be an expert at designing, implementing, and operating stable, scalable, low-cost solutions that flow data from production systems into the data warehouse and into end-user-facing applications, and you should be able to work with business customers in a fast-paced environment, understanding their requirements and implementing reporting solutions. Above all, you should be passionate about working with huge data sets and love bringing datasets together to answer business questions and drive change.

Key job responsibilities

This role requires an Engineer with 4+ years of experience in building data solutions, combined with both consulting and hands-on expertise. The position involves helping to build new and maintain existing data warehouse implementations, developing tools to facilitate data integration, identifying and architecting appropriate storage technologies, executing projects to deliver high-quality data pipelines on time, defining continuous improvement processes, driving technology direction, and effectively leading the data engineering team. You will work with multiple internal teams who need support in managing backend data solutions. Using your deep technical expertise, strong relationship-building skills, and documentation abilities, you will create technical content, provide consultation to customers, and gather feedback to drive the AWS analytics support offering. As the voice of the customer, you will work closely with data product managers and engineering teams to help design and deliver new features and product improvements that address critical customer challenges.

A day in the life

A typical day on our team involves collaborating with other engineers to deploy new data solutions through our large automated systems while providing operational support for your newly deployed software. You will seek out innovative approaches to automate fixes for operational issues, leverage AWS services to solve design problems, and engage with other internal teams to integrate your applications with theirs. You'll be part of a world-class team in an inclusive environment that maintains the entrepreneurial feel of a startup.
This is an opportunity to operate and engineer systems on a massive scale and to gain top-notch experience in database storage technologies.

BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience programming in at least one modern language such as C++, C#, Java, Python, Golang, PowerShell, or Ruby
- Knowledge of batch and streaming data architectures such as Kafka, Kinesis, Flink, Storm, and Beam
- Knowledge of distributed systems as they pertain to data storage and computing

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
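Not part of the posting itself, but as a concrete illustration of the S3-to-warehouse flow this role describes, here is a minimal sketch of loading a partition into Redshift with a COPY statement. The table, bucket, and IAM role names are hypothetical placeholders, and credentials are assumed to come from the environment.

```python
# Minimal sketch: load one day's partition from S3 into Redshift with COPY.
# Table, bucket, and IAM role names are hypothetical placeholders.
import os
import psycopg2

COPY_SQL = """
COPY analytics.orders_staging
FROM 's3://example-bucket/orders/dt=2024-01-01/'
IAM_ROLE '{role}'
FORMAT AS PARQUET;
"""

def load_partition() -> None:
    conn = psycopg2.connect(
        host=os.environ["REDSHIFT_HOST"],
        port=5439,
        dbname=os.environ["REDSHIFT_DB"],
        user=os.environ["REDSHIFT_USER"],
        password=os.environ["REDSHIFT_PASSWORD"],
    )
    try:
        with conn, conn.cursor() as cur:
            cur.execute(COPY_SQL.format(role=os.environ["REDSHIFT_IAM_ROLE"]))
    finally:
        conn.close()

if __name__ == "__main__":
    load_partition()
```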

Posted 5 days ago

Apply

2.0 - 5.0 years

8 - 9 Lacs

Chennai

Work from Office

Job Title: Hadoop Administrator
Location: Chennai, India
Experience: 5 years of experience in IT, with at least 2+ years of experience in cloud and system administration and at least 3 years of experience with, and a strong understanding of, big data technologies in the Hadoop ecosystem (Hive, HDFS, MapReduce, Flume, Pig, Cloudera, HBase, Sqoop, Spark, etc.).

Company: Smartavya Analytica Private Limited is a niche Data and AI company. Based in Pune, we are pioneers in data-driven innovation, transforming enterprise data into strategic insights. Established in 2017, our team has experience handling large datasets of up to 20 PB in a single implementation and has delivered many successful data and AI projects across major industries, including retail, finance, telecom, manufacturing, insurance, and capital markets. We are leaders in Big Data, Cloud, and Analytics projects with super-specialization in very large data platforms. https://smart-analytica.com

Smartavya Analytica is a leader in Big Data, Data Warehouse and Data Lake solutions, Data Migration services, and Machine Learning/Data Science projects across all flavours, namely on-prem, cloud, and migration both ways, across platforms such as traditional DWH/DL platforms, Big Data solutions on Hadoop, Public Cloud, and Private Cloud. Empowering your digital transformation with data modernization and AI.

Job Overview: Smartavya Analytica Private Limited is seeking an experienced Hadoop Administrator to manage and support our Hadoop ecosystem. The ideal candidate will have strong expertise in Hadoop cluster administration, excellent troubleshooting skills, and a proven track record of maintaining and optimizing Hadoop environments.

Key Responsibilities:
- Install, configure, and manage Hadoop clusters, including HDFS, YARN, Hive, HBase, and other ecosystem components.
- Monitor and manage Hadoop cluster performance, capacity, and security.
- Perform routine maintenance tasks such as upgrades, patching, and backups.
- Implement and maintain data ingestion processes using tools like Sqoop, Flume, and Kafka.
- Ensure high availability and disaster recovery of Hadoop clusters.
- Collaborate with development teams to understand requirements and provide appropriate Hadoop solutions.
- Troubleshoot and resolve issues related to the Hadoop ecosystem.
- Maintain documentation of Hadoop environment configurations, processes, and procedures.

Requirements:
- Experience installing, configuring, and tuning Hadoop distributions; hands-on experience with Cloudera.
- Understanding of Hadoop design principles and the factors that affect distributed system performance, including hardware and network considerations.
- Provide infrastructure recommendations, capacity planning, and workload management.
- Develop utilities to better monitor clusters (Ganglia, Nagios, etc.).
- Manage large clusters with huge volumes of data.
- Perform cluster maintenance tasks: creation and removal of nodes, cluster monitoring, and troubleshooting.
- Manage and review Hadoop log files.
- Install and implement security for Hadoop clusters.
- Install Hadoop updates, patches, and version upgrades, and automate these through scripts.
- Act as the point of contact for vendor escalation; work with Hortonworks to resolve issues.
- Conceptual/working knowledge of basic data management concepts such as ETL, reference/master data, data quality, and RDBMS.
- Working knowledge of a scripting language such as Shell, Python, or Perl.
- Experience with orchestration and deployment tools.
Academic Qualification: BE / B.Tech in Computer Science or equivalent along with hands-on experience in dealing with large data sets and distributed computing in data warehousing and business intelligence systems using Hadoop.
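As a hedged illustration of the routine cluster-monitoring work described above (not from the posting), here is a small Python probe that shells out to the standard `hdfs dfsadmin -report` command and counts dead DataNodes; the parsing assumes the report's usual "Dead datanodes (N):" line.

```python
# Minimal sketch: a cluster health probe of the kind a Hadoop admin might
# script. Calls the standard `hdfs dfsadmin -report` CLI and flags dead nodes.
import subprocess

def dead_datanodes() -> int:
    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout
    # The report contains a line such as "Dead datanodes (2):"
    for line in report.splitlines():
        if line.startswith("Dead datanodes"):
            return int(line.split("(")[1].rstrip("):"))
    return 0

if __name__ == "__main__":
    print(f"dead datanodes: {dead_datanodes()}")
```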

Posted 5 days ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Bengaluru

Remote

Key Responsibilities
- Design, build, and maintain ETL pipelines using Azure Data Factory (preferably Fabric Data Factory) and SQL.
- Write and optimize complex SQL logic to ensure performance and scalability across large datasets.
- Ensure data quality, monitoring, and observability, with restartability, idempotency, and debugging principles in mind (see the sketch after this listing).
- Enhance ETL processes with Python scripting where applicable.
- Collaborate with business unit partners to translate requirements into effective data solutions.
- Document workflows, standards, and best practices; mentor junior team members.
- Implement version control (GitHub) and CI/CD practices across SQL and ETL processes.
- Work with Azure components such as Blob Storage and integrate with orchestration tools.
- Apply troubleshooting and performance-tuning techniques to improve data pipelines.

Required Skills & Experience
- Strong hands-on SQL development with a focus on integration, optimization, and performance tuning.
- Proven experience with Azure Data Factory (ADF), with a preference for Fabric Data Factory.
- Exposure to ETL/orchestration tools such as Matillion (preferred but not mandatory).
- Proficiency in Python for ETL enhancements and automation.
- Understanding of cloud platforms, particularly Microsoft Azure services.
- Familiarity with version control (GitHub) and CI/CD in data environments.
- Excellent communication and technical writing skills to engage with stakeholders.
- Advanced Azure certifications would be a plus.

Technology & Skill Areas
- Core: Azure Data Factory / Fabric Data Factory, SQL, Python
- Secondary: Matillion, Azure Blob Storage
- Skill Areas: Data Integration, Data Quality, Performance Optimization, Cloud Data Engineering
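The restartability and idempotency principles named above can be made concrete. Below is a minimal, hedged sketch (not from the posting) of a watermark-driven incremental load whose keyed MERGE makes re-runs safe; the tables, watermark bookkeeping, and connection string are all hypothetical.

```python
# Minimal sketch of an idempotent, restartable incremental load:
# a watermark plus a keyed MERGE, so re-running after a failure
# cannot double-insert rows. Table and connection details are hypothetical.
import pyodbc

MERGE_SQL = """
MERGE dbo.orders AS tgt
USING (
    SELECT order_id, amount, updated_at
    FROM staging.orders
    WHERE updated_at > ?
) AS src
ON tgt.order_id = src.order_id
WHEN MATCHED THEN UPDATE SET amount = src.amount, updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (order_id, amount, updated_at)
     VALUES (src.order_id, src.amount, src.updated_at);
"""

def incremental_load(conn_str: str) -> None:
    with pyodbc.connect(conn_str) as conn:
        cur = conn.cursor()
        # Read the last successful watermark; re-runs start from the same point.
        watermark = cur.execute(
            "SELECT last_loaded_at FROM etl.watermarks WHERE pipeline = 'orders'"
        ).fetchone()[0]
        cur.execute(MERGE_SQL, watermark)
        # Advance the watermark only after the MERGE succeeds.
        cur.execute(
            "UPDATE etl.watermarks SET last_loaded_at = SYSUTCDATETIME() "
            "WHERE pipeline = 'orders'"
        )
        conn.commit()
```

Because the MERGE is keyed on order_id and the watermark only advances after success, restarting a failed run repeats work but cannot duplicate rows.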

Posted 5 days ago

Apply

5.0 - 10.0 years

5 - 15 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Key Responsibilities
1. Lead a team of developers in the design, implementation, and maintenance of data processing applications using Apache Spark, Scala, and Python.
2. Collaborate with cross-functional teams to gather requirements, analyze data, and develop scalable solutions.
3. Troubleshoot and resolve technical issues related to data processing applications.
4. Implement best practices in coding, testing, and deployment processes.
5. Stay updated on the latest trends and advancements in Apache Spark, Scala, and Python technologies.

Skill Requirements
1. Proficiency in the Apache Spark, Scala, and Python programming languages.
2. Strong understanding of data processing concepts and distributed computing.
3. Experience in leading and mentoring technical teams.
4. Excellent problem-solving skills and attention to detail.
5. Ability to work effectively in a fast-paced and dynamic environment.
6. Good communication and interpersonal skills.
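For orientation (not from the posting), here is a minimal sketch of the kind of Spark data processing this role involves: a small PySpark aggregation over event data. The paths and column names are illustrative only.

```python
# Minimal sketch: a PySpark aggregation computing daily revenue from
# purchase events. Paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

events = spark.read.parquet("s3a://example-bucket/events/")

daily_revenue = (
    events
    .where(F.col("event_type") == "purchase")
    .groupBy(F.to_date("event_ts").alias("day"))
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("user_id").alias("buyers"),
    )
    .orderBy("day")
)

daily_revenue.write.mode("overwrite").parquet(
    "s3a://example-bucket/marts/daily_revenue/"
)
```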

Posted 5 days ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Job Description: You will be responsible for leading the configuration, maintenance, and optimization of the organization's network and security infrastructure to ensure high performance and reliability. Your key responsibilities will include:
- Architecting and integrating networks across multiple business units and diverse cloud environments, ensuring seamless connectivity and scalability.
- Championing the use of Infrastructure as Code processes to automate and maintain infrastructure consistency, scalability, and up-to-date configurations.
- Overseeing the configuration, deployment, and management of F5 Local Traffic Managers (LTMs) and Advanced Security Modules (ASMs) to ensure seamless application delivery and security.
- Creating and implementing strategies for integrating cloud Virtual Private Clouds (VPCs), interconnects, and direct connects, ensuring efficient and secure data flow between on-premises and cloud environments.
- Proactively identifying opportunities to enhance the network's scalability and resilience, ensuring it can handle growing business demands and traffic loads.
- Working closely with infrastructure, application, and security teams to ensure network designs meet cross-functional requirements and adhere to best practices.

Qualifications Required:
- Minimum of 8 years of relevant work experience and a Bachelor's degree or equivalent experience.

Company Details: Our beliefs are the foundation for how we conduct business every day. We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work together as one global team with our customers at the center of everything we do, and they push us to ensure we take care of ourselves, each other, and our communities. If you want to learn more about our culture and community, you can visit https://about.pypl.com/who-we-are/default.aspx.

Posted 5 days ago

Apply

3.0 - 5.0 years

0 Lacs

India

On-site

We are seeking a Senior Engineer to join our Data Ingestion team, primarily supporting OCI cloud migration activities, system architecture, and technical enablement. This role is pivotal to ensuring seamless migration, maintaining robust architecture, and ensuring the stability and scalability of our data ingestion pipelines.

Responsibilities:
- Lead and support OCI cloud migration initiatives for data ingestion services.
- Design, implement, and optimize scalable data ingestion architectures.
- Develop and maintain ETL pipelines and distributed data systems.
- Provide technical guidance and mentorship within the engineering team.
- Collaborate with product and strategy teams to align technical execution with business goals.
- Troubleshoot and resolve complex issues in large-scale distributed systems.
- Ensure data integrity, availability, and performance across ingestion pipelines.
- Apply strong programming skills in Java, Python, or similar languages to build and optimize solutions.
- Utilize Kubernetes, Docker, CI/CD pipelines, and monitoring tools for system reliability and deployment automation.
- Contribute to system architecture design and drive best practices in cloud-native environments.
- Work with cloud platforms (preferably OCI, AWS) to deliver highly available solutions.
- Demonstrate excellent problem-solving, collaboration, and communication skills in cross-functional teams.

Career Level - IC4

Posted 5 days ago

Apply

8.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

The Company

PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy.

We operate a global, two-sided network at scale that connects hundreds of millions of merchants and consumers. We help merchants and consumers connect, transact, and complete payments, whether they are online or in person. PayPal is more than a connection to third-party payment networks. We provide proprietary payment solutions accepted by merchants that enable the completion of payments on our platform on behalf of our customers. We offer our customers the flexibility to use their accounts to purchase and receive payments for goods and services, as well as the ability to transfer and withdraw funds. We enable consumers to exchange funds more safely with merchants using a variety of funding sources, which may include a bank account, a PayPal or Venmo account balance, PayPal and Venmo branded credit products, a credit card, a debit card, certain cryptocurrencies, or other stored value products such as gift cards, and eligible credit card rewards. Our PayPal, Venmo, and Xoom products also make it safer and simpler for friends and family to transfer funds to each other. We offer merchants an end-to-end payments solution that provides authorization and settlement capabilities, as well as instant access to funds and payouts. We also help merchants connect with their customers, process exchanges and returns, and manage risk. We enable consumers to engage in cross-border shopping and merchants to extend their global reach while reducing the complexity and friction involved in enabling cross-border trade.

Our beliefs are the foundation for how we conduct business every day. We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work together as one global team with our customers at the center of everything we do, and they push us to ensure we take care of ourselves, each other, and our communities.

Job Summary:
Design, develop, and deploy machine learning models using GCP services such as AI Platform, BigQuery ML, Vertex AI, TensorFlow, and AutoML. Build and optimize scalable pipelines for data collection, preprocessing, and feature engineering. Collaborate with data engineers to design ETL pipelines and manage large datasets on GCP. Develop custom machine learning models tailored to business needs, including supervised, unsupervised, and reinforcement learning algorithms. Integrate AI/ML models into cloud applications and ensure seamless operation in production environments. Work closely with software engineers to embed AI/ML models into applications and products, ensuring best practices for CI/CD in ML. Conduct research to stay up to date with the latest AI/ML techniques and apply them to business use cases.

Job Description:

Essential Responsibilities:
- Lead the configuration, maintenance, and optimization of the organization's network and security infrastructure to ensure high performance and reliability.
- Architect and integrate networks across multiple business units and diverse cloud environments, ensuring seamless connectivity and scalability.
- Champion the use of Infrastructure as Code processes to automate and maintain infrastructure consistency, scalability, and up-to-date configurations.
- Oversee the configuration, deployment, and management of F5 Local Traffic Managers (LTMs) and Advanced Security Modules (ASMs) to ensure seamless application delivery and security.
- Create and implement strategies for integrating cloud Virtual Private Clouds (VPCs), interconnects, and direct connects, ensuring efficient and secure data flow between on-premises and cloud environments.
- Proactively identify opportunities to enhance the network's scalability and resilience, ensuring it can handle growing business demands and traffic loads.
- Work closely with infrastructure, application, and security teams to ensure network designs meet cross-functional requirements and adhere to best practices.

Minimum Qualifications:
- Minimum of 8 years of relevant work experience and a Bachelor's degree or equivalent experience.

Preferred Qualifications:
- Master's degree or higher in Computer Science, Mathematics, or a related field, with a keen interest in Machine Learning and AI
- Proven experience in developing and implementing solutions in machine learning and AI-related spaces
- Strong programming skills in languages such as Python, Java, or C++
- In-depth knowledge of machine learning frameworks and libraries for analytics and text processing (e.g., TensorFlow, PyTorch)
- Experience with cloud services related to machine learning (Vertex AI, etc.)
- Excellent problem-solving skills and the ability to work in a fast-paced environment
- Strong communication skills to effectively collaborate with team members and stakeholders
- Strong knowledge of algorithms, statistics, data structures, distributed systems, and software engineering best practices
- Proven experience leading and delivering complex ML projects at production scale
- Experience integrating ML solutions into cloud environments (e.g., AWS, Azure, GCP) is highly desirable
- Perform model tuning, evaluation, and monitoring for performance and accuracy improvements
- Provide mentorship and guidance to junior developers and contribute to code reviews, technical design sessions, and project planning
- Ensure security, scalability, and performance of ML models deployed in production

Subsidiary: PayPal

Travel Percent: 0

PayPal does not charge candidates any fees for courses, applications, resume reviews, interviews, background checks, or onboarding. Any such request is a red flag and likely part of a scam. To learn more about how to identify and avoid recruitment fraud, please visit our website.

For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations.

Our Benefits: At PayPal, we're committed to building an equitable and inclusive global economy. And we can't do this without our most important asset: you. That's why we offer benefits to help you thrive in every stage of life. We champion your financial, physical, and mental health by offering valuable benefits and resources to help you care for the whole you. We have great benefits, including a flexible work environment, employee share options, and health and life insurance. To learn more about our benefits, please visit our website.

Who We Are: Visit our website to learn more about our culture and community.
Commitment to Diversity and Inclusion

PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us.

Belonging at PayPal:

Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal. For general requests for consideration of your skills, please contact us.

We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don't hesitate to apply.
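As an aside to the GCP-focused Job Summary above (not part of the posting), here is a hedged sketch of training and evaluating a model with BigQuery ML, one of the services named. The project, dataset, and column names are hypothetical.

```python
# Minimal sketch: train a logistic regression in BigQuery ML and print
# its evaluation metrics. Dataset, table, and column names are hypothetical.
from google.cloud import bigquery

TRAIN_SQL = """
CREATE OR REPLACE MODEL `my_project.analytics.churn_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT tenure_months, monthly_spend, support_tickets, churned
FROM `my_project.analytics.customer_features`;
"""

client = bigquery.Client()
client.query(TRAIN_SQL).result()  # blocks until training finishes

# Evaluate the trained model.
eval_rows = client.query(
    "SELECT * FROM ML.EVALUATE(MODEL `my_project.analytics.churn_model`)"
).result()
for row in eval_rows:
    print(dict(row.items()))
```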

Posted 5 days ago

Apply

4.0 - 7.0 years

5 - 11 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Job Title: Engineering (IN) - Software Engineer 2

Job Description:

What You'll Do
The Marketing Engineering organization in Strategy, Planning & Operations is looking for a Data Engineer eager to provide the engineering muscle for core engineering work around data architecture, data pipeline development, and deploying production workflows, to keep improving our data & reporting capabilities for Marketing in an AI future. You will be responsible for:
• Working with a technical leader to lead major engineering initiatives to support decision-making by the business
• Helping manage production data workflows, ensuring timely updates are made through a standard SDLC process
• Helping design and develop new architectures and workflows to support the growing business needs of Cisco Marketing

Who You'll Work With
• You will report to a manager within Advanced Analytics and work directly with a Team Leader in the Engineering Center of Excellence
• You will work closely with Modelers, Business Stakeholders, and Analytics Translators, as well as IT

Who you are: You like working with data and love getting the business to adopt data insights. Minimum qualifications are:
• Experience with data modeling, data warehousing, and building ETL pipelines
• Experience in SQL, Python, and Unix/Bash scripting
• Experience working with Docker, Git, and various cloud platforms such as Google Cloud and Azure
• Writing complex, optimized SQL queries across large data sets
• Experience in communicating complex technical concepts to a broad variety of audiences
• Proven success in communicating with users, other technical teams, and senior management to collect requirements and describe data modeling decisions and data engineering strategy

Preferred Qualifications:
• BS/MS in Computer Science/Engineering
• 3+ years of experience in a Data Engineering or similar role
• Knowledge of software engineering standard methodologies across development lifecycles, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations

Note: Looking for immediate candidates who can join within 15 days.

Posted 5 days ago

Apply

6.0 - 11.0 years

25 - 37 Lacs

Gurugram, Bengaluru

Work from Office

Key Responsibilities
- Lead and mentor a team of data engineers, providing technical guidance, performance feedback, and career development.
- Architect, develop, and maintain scalable ETL/ELT pipelines using AWS services, PySpark, SQL, and Python (see the sketch after this listing).
- Drive the design and implementation of robust data models and data warehouses/lakes to support analytics and reporting.
- Ensure data quality, security, and governance across all stages of the data lifecycle.
- Collaborate with product and engineering teams to integrate data solutions into production environments.
- Optimize data workflows and performance across large, complex datasets.
- Manage stakeholder relationships and translate business needs into technical solutions.
- Stay up to date with the latest technologies and recommend best practices for data engineering.

Technical Skills & Qualifications
- 7+ years of experience in Data Engineering or related roles.
- 2+ years in a leadership or managerial capacity.
- Deep expertise in the AWS ecosystem, including services like S3, Glue, Redshift, Lambda, EMR, Athena, DynamoDB, and IAM.
- Proficient in PySpark, SQL, and Python for large-scale data processing and transformation.
- Strong understanding of ETL/ELT development, data modeling (star/snowflake schemas), and data architecture.
- Experience managing data infrastructure, CI/CD pipelines, and workflow orchestration tools (e.g., Airflow, Step Functions).
- Knowledge of data governance, security, and compliance best practices.
- Excellent communication and leadership skills.

Preferred Qualifications
- AWS Certified Data Analytics, Solutions Architect, or equivalent certifications.
- Experience working in Agile environments.
- Familiarity with BI tools like QuickSight, Tableau, or Power BI.
- Exposure to machine learning workflows or MLOps is a plus.

Why Join Decision Point?
At Decision Point, we don't just build data solutions, we build decision intelligence. Joining us means being part of a fast-growing, innovation-driven company where your contributions directly impact business outcomes for global clients. We foster a culture of learning, ownership, and growth. As a Data Engineering Manager, you'll have the autonomy to drive strategic projects while working with some of the brightest minds in data and analytics. If you're looking to scale your career in a dynamic environment that values deep technical expertise and leadership, Decision Point is the place for you.
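As one hedged illustration of the AWS orchestration work named above (not from the posting), the snippet below triggers a Glue ETL job with boto3 and polls until it finishes. The job name is a hypothetical placeholder.

```python
# Minimal sketch: trigger and monitor an AWS Glue ETL job with boto3.
# The job name is a hypothetical placeholder.
import time
import boto3

glue = boto3.client("glue")

run_id = glue.start_job_run(JobName="orders-nightly-etl")["JobRunId"]

while True:
    state = glue.get_job_run(
        JobName="orders-nightly-etl", RunId=run_id
    )["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)  # poll every 30 seconds

if state != "SUCCEEDED":
    raise RuntimeError(f"Glue job ended in state {state}")
```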

Posted 5 days ago

Apply

10.0 - 17.0 years

10 - 20 Lacs

Pune, Chennai, Bengaluru

Work from Office

Interested candidates can directly WhatsApp or call me at 6369973379.

Job Title: Lead Data Engineer - Data Integrator
Experience: 10.1 - 15 Years

Job Description
We are seeking a Lead Data Engineer with strong expertise in Informatica, Python, and Kubernetes. The role involves leading data integration initiatives, designing scalable ETL pipelines, and ensuring seamless data orchestration across platforms.

Key Responsibilities
- Lead the design, development, and deployment of data integration solutions using Informatica and Python
- Manage large-scale ETL pipelines and optimize data workflows
- Deploy and orchestrate workloads on Kubernetes
- Collaborate with architects, analysts, and business stakeholders to deliver reliable data solutions
- Ensure data quality, security, and governance within integration processes
- Mentor and guide junior engineers in the team

Primary Skills
- Hands-on expertise in Informatica (PowerCenter, IICS, or equivalent)
- Strong Python programming skills for automation and data processing
- Experience with Kubernetes for container orchestration
- Proficiency in SQL for data querying, transformation, and optimization

Good to Have
- Exposure to cloud platforms (Azure, AWS, or GCP)
- Experience with PySpark or other big data frameworks

Posted 6 days ago

Apply

3.0 - 8.0 years

6 - 16 Lacs

Jaipur

Work from Office

Gen AI Full Stack Engineer - We are seeking a Full Stack Engineer with experience in building web applications integrated with Generative AI solutions. The role involves developing scalable front-end and back-end systems, implementing LLM-based features, and deploying AI-powered applications on the cloud.

Data Engineer - We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and architectures. The role involves working closely with data scientists, analysts, and business stakeholders to ensure seamless data flow, quality, and availability across systems.

Data Architect - We are looking for a Data Architect to design and implement scalable, secure, and high-performing data architectures. The role involves defining data models, integration strategies, and governance frameworks to enable analytics and business insights.

Data Scientist - We are seeking a Data Scientist to analyze complex datasets, build predictive models, and develop AI/ML solutions that drive business decisions. You will work with stakeholders to design data-driven strategies and deploy scalable models.

Posted 6 days ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Noida, Pune, Bengaluru

Hybrid

Experience: 6 to 11 years
Location: Coimbatore

Key Responsibilities:
- Design and build scalable ELT pipelines in Snowflake using DBT/SQL.
- Develop efficient, well-tested DBT models (staging, intermediate, and marts layers).
- Implement data quality, testing, and monitoring frameworks to ensure data reliability and accuracy (see the sketch after this listing).
- Optimize Snowflake queries, storage, and compute resources for performance and cost-efficiency.
- Collaborate with cross-functional teams to gather data requirements and deliver data solutions.

Required Qualifications:
- 6+ years of experience as a Data Engineer, with at least 5 years working with Snowflake.
- Proficient with DBT (Data Build Tool), including Jinja templating, macros, and model dependency management.
- Strong understanding of ELT patterns and modern data stack principles.
- Advanced SQL skills and experience with performance tuning in Snowflake.

Interested candidates, share your CV at himani.girnar@alikethoughts.com with the details below:
- Candidate's name
- Email and alternate email ID
- Contact and alternate contact no.
- Total exp
- Relevant experience
- Current org
- Notice period
- CCTC
- ECTC
- Current location
- Preferred location
- Pancard no.
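Outside the posting itself, here is a minimal sketch of the kind of data-quality check a DBT test encodes (unique, not-null keys), run directly against Snowflake with the Python connector. The account, warehouse, and table names are hypothetical.

```python
# Minimal sketch: a DBT-style uniqueness/not-null key test run directly
# against Snowflake. Account and table names are hypothetical.
import os
import snowflake.connector

CHECK_SQL = """
SELECT COUNT(order_id) - COUNT(DISTINCT order_id) AS duplicate_keys,
       SUM(IFF(order_id IS NULL, 1, 0))            AS null_keys
FROM analytics.marts.fct_orders;
"""

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORMING",  # hypothetical warehouse name
)
try:
    dupes, nulls = conn.cursor().execute(CHECK_SQL).fetchone()
    assert dupes == 0 and nulls == 0, f"key test failed: {dupes} dupes, {nulls} nulls"
finally:
    conn.close()
```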

Posted 6 days ago

Apply

1.0 - 3.0 years

0 - 1 Lacs

India

On-site

Job Description

Role & responsibilities
- Analyze large, complex datasets to identify patterns and trends
- Build and deploy predictive models and machine learning algorithms
- Collaborate with data engineers and business stakeholders to define problem statements and deliver solutions
- Create clear visualizations and dashboards to present data insights
- Implement data cleaning, validation, and transformation techniques
- Support automation of reporting processes and recurring data analysis tasks
- Stay up to date with the latest developments in data science and AI

Preferred candidate profile
- 1 to 3 years of experience in Data Science or a related field
- Proficiency in Python (including libraries such as Pandas, SQLAlchemy, NumPy, Scikit-learn, etc.)
- Strong SQL skills and experience working with relational databases
- Hands-on experience with ETL pipelines and data processing
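For orientation (not from the posting), here is a minimal sketch of the day-to-day loop this role describes: clean a dataset with Pandas, then fit and evaluate a simple Scikit-learn model. The CSV and column names are illustrative only.

```python
# Minimal sketch: clean data with Pandas, then train and evaluate a
# Scikit-learn classifier. File and column names are illustrative.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")
df = df.dropna(subset=["age", "monthly_spend", "tenure_months", "churned"])
df["tenure_years"] = df["tenure_months"] / 12  # simple derived feature

X = df[["age", "monthly_spend", "tenure_years"]]
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC: {auc:.3f}")
```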

Posted 6 days ago

Apply

10.0 - 12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description

Overview
As a leading global aerospace company, Boeing develops, manufactures and services commercial airplanes, defense products and space systems for customers in more than 150 countries. As a top U.S. exporter, the company leverages the talents of a global supplier base to advance economic opportunity, sustainability and community impact. Boeing's team is committed to innovating for the future, leading with sustainability, and cultivating a culture based on the company's core values of safety, quality and integrity.

Technology for today and tomorrow
The Boeing India Engineering & Technology Center (BIETC) is a 5500+ engineering workforce that contributes to global aerospace growth. Our engineers deliver cutting-edge R&D, innovation, and high-quality engineering work in global markets, and leverage new-age technologies such as AI/ML, IIoT, Cloud, Model-Based Engineering, and Additive Manufacturing, shaping the future of aerospace.

People-driven culture
At Boeing, we believe creativity and innovation thrive when every employee is trusted, empowered, and has the flexibility to choose, grow, learn, and explore. We offer variable arrangements depending upon business and customer needs, and professional pursuits that offer greater flexibility in the way our people work. We also believe that collaboration, frequent team engagements, and face-to-face meetings bring together different perspectives and thoughts, enabling every voice to be heard and every perspective to be respected. No matter where or how our teammates work, we are committed to positively shaping people's careers and being thoughtful about employee wellbeing. With us, you can create and contribute to what matters most in your career, community, country, and world. Join us in powering the progress of global aerospace.

Enterprise AI & Data is looking for an experienced Cloud Solution Architect to help build cloud-based data products. This position will focus on supporting the Boeing Information Digital Technology & Security (IDT&S) goals of developing cloud-native applications and adopting a multi-cloud strategy.

Position Responsibilities:
- Hands-on experience in understanding aerospace domain-specific data
- Execute the strategy to build highly reliable and scalable data and analytics platforms to ensure business requirements are met
- Build quality checks across the data lineage and take responsibility for designing and implementing different data patterns
- Create and implement repeatable and reusable frameworks for delivering Data and AI solutions
- Establish architecture best practices, along with comprehensive standards and guidelines, to ensure adherence across the organization
- Experience in building self-service capabilities for users
- Experience in building impactful or outcome-based solutions/products
- Clear understanding of defining data products and monetizing them
- Conduct hands-on Proof of Concepts (PoCs) and Minimum Viable Products (MVPs) to address critical capability needs, facilitating effective decision-making processes
- Own all communication and collaboration channels pertaining to assigned projects, including regular stakeholder review meetings and cross-team alignments
- Demonstrated ability to establish working relationships with vendors (technology and consulting), partners, and cross teams, and to hold them accountable
- Stay up to date with the latest trends and best practices in cloud and data architecture tools and technologies
Experience designing and developing end-to-end ETL pipelines using Python and SQL to process structured and unstructured data from various sources, including APIs, S3, and SFTP (a hedged sketch of this pattern follows this listing).

Basic Qualifications (Required Skills/Experience):
- Bachelor's degree or higher
- 10+ years of experience as a Data Engineer
- Strong understanding of data warehouse concepts, data lakes, and data mesh
- Familiarity with ETL tools and data ingestion patterns
- Hands-on experience building data pipelines
- Hands-on experience writing complex SQL (NoSQL is a big plus)
- Hands-on experience with data pipeline orchestration tools
- Hands-on experience with data modelling
- Experience working with global teams with a global mindset
- Experience working on Agile projects and with Agile methodology in general
- Excellent problem-solving, communication, interpersonal, and leadership skills
- Exceptional presentation, visualization, and analysis skills
- Ability to understand and comprehend complex environments and systems
- Inquisitive by nature and keen to figure out how things work

Typical Education & Experience:
- Bachelor's degree or higher with 10+ years of relevant experience (or an equivalent combination of education and experience)

Applications for this position will be accepted until Sept. 13, 2025.

Export Control Requirements: This is not an Export Control position.
Education: Bachelor's Degree or Equivalent Required
Relocation: Relocation assistance is not a negotiable benefit for this position.
Visa Sponsorship: Employer will not sponsor applicants for employment visa status.
Shift: Not a Shift Worker (India)

Equal Opportunity Employer: We are an equal opportunity employer. We do not accept unlawful discrimination in our recruitment or employment practices on any grounds including but not limited to race, color, ethnicity, religion, national origin, gender, sexual orientation, gender identity, age, physical or mental disability, genetic factors, military and veteran status, or other characteristics covered by applicable law. We have teams in more than 65 countries, and each person plays a role in helping us become one of the world's most innovative, diverse and inclusive companies. We are proud members of the and welcome applications from candidates with disabilities. Applicants are encouraged to share with our recruitment team any accommodations required during the recruitment process. Accommodations may include but are not limited to: conducting interviews in accessible locations that accommodate mobility needs, encouraging candidates to bring and use any existing assistive technology such as screen readers, and offering flexible interview formats such as virtual or phone interviews.
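As promised above, here is a minimal, hedged sketch of one ingestion path the posting names (API to S3). The API URL, bucket, and key layout are hypothetical placeholders; credentials are assumed to come from the standard AWS environment.

```python
# Minimal sketch: pull a JSON payload from an API and land it in S3.
# The API URL, bucket, and key layout are hypothetical placeholders.
import datetime
import json

import boto3
import requests

API_URL = "https://api.example.com/v1/flights"  # hypothetical source
BUCKET = "example-raw-zone"                     # hypothetical landing bucket

def ingest() -> str:
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    key = f"flights/dt={datetime.date.today():%Y-%m-%d}/payload.json"
    boto3.client("s3").put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(resp.json()).encode("utf-8"),
    )
    return key

if __name__ == "__main__":
    print("landed:", ingest())
```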

Posted 6 days ago

Apply

2.0 - 6.0 years

0 Lacs

Bangalore, Karnataka

On-site

As a Python + Gen AI Developer with 3-6 years of experience, your core responsibilities will include developing and maintaining backend services using Python frameworks such as FastAPI and Flask. You will also be responsible for building and integrating RESTful APIs with cloud-native architectures and implementing CI/CD pipelines and containerized deployments. Additionally, you will transition to designing LLM-powered applications, RAG pipelines, and Agentic AI systems.

To be successful in this role, you must have a strong foundation in Python fundamentals, with at least 2 years of experience in object-oriented programming, data structures, and clean code practices. Proficiency in at least one web framework such as FastAPI, Flask, or Django is essential. Hands-on experience with cloud platforms like AWS, Azure, or GCP, particularly in compute, storage, and databases, is required. You should also have experience in database management, including RDBMS, PostgreSQL, MySQL, or cloud-native databases, as well as version control using Git workflows and collaborative development. API development skills, unit testing with pytest/unittest, and a problem-solving mindset are crucial for this role.

In addition to the mandatory skills, it would be beneficial to have experience in data engineering with ETL pipelines and data processing frameworks like Pandas and NumPy. Knowledge of containerization tools such as Docker and Kubernetes, and of DevOps practices including CI/CD tools and infrastructure as code, as well as exposure to basic ML/AI concepts using TensorFlow, PyTorch, and scikit-learn, would be advantageous. Familiarity with message queues like Redis, RabbitMQ, or cloud messaging services, with monitoring tools for application observability and logging frameworks, and with GenAI frameworks/concepts such as LangChain, LangGraph, CrewAI, AutoGen, and RAG is considered a good-to-have for this role.
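For orientation (not from the posting), here is a minimal sketch of the backend skill set described above: a FastAPI service with a typed request model, the shape an LLM/RAG endpoint usually takes. The answer logic is a stub, not a real model call.

```python
# Minimal sketch: a typed FastAPI endpoint in the shape of a RAG "ask"
# service. The retrieval and LLM call are stubbed out in a comment.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AskRequest(BaseModel):
    question: str
    top_k: int = 3

class AskResponse(BaseModel):
    answer: str
    sources: list[str]

@app.post("/ask", response_model=AskResponse)
def ask(req: AskRequest) -> AskResponse:
    # In a real RAG pipeline: embed req.question, retrieve top_k chunks,
    # then prompt an LLM with the retrieved context.
    return AskResponse(answer=f"stub answer to: {req.question}", sources=[])
```

Run locally with `uvicorn main:app --reload`, assuming the file is named main.py.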

Posted 6 days ago

Apply

3.0 - 7.0 years

5 - 15 Lacs

Hyderabad

Work from Office

Position: Snowflake Data Engineer
Work Location: Hyderabad (WFO)
Experience: 3+ years

About the opportunity:
We are seeking a highly skilled and experienced Snowflake Developer with a strong background in SQL and Python and a minimum of 3 years of hands-on experience with Snowflake. The ideal candidate will be Snowflake certified, with a proven track record in data warehousing, data modelling, and implementing ETL/ELT pipelines using industry-standard tools.

Primary Roles & Responsibilities
• Design, develop, and optimize data pipelines and ETL/ELT processes in Snowflake.
• Develop and optimize complex SQL queries for data extraction, transformation, and reporting.
• Write robust Python scripts for automation, orchestration, and data transformations.
• Migrate data from legacy systems to Snowflake and integrate various data sources.
• Implement Snowflake best practices for performance tuning, security, and cost management.
• Collaborate with cross-functional teams to implement end-to-end data warehouse solutions.

Required Skills
• Minimum 3 years of hands-on experience with Snowflake.
• Strong expertise in SQL development and optimization.
• Proficient in Python for scripting and data engineering tasks.
• Experience in data warehouse architecture and data modeling (star/snowflake schema).
• Hands-on experience with ETL/ELT tools like Informatica, Matillion, dbt, Talend, or equivalent.
• Experience with cloud platforms (AWS, Azure, or GCP) and associated services.
• Solid understanding of performance tuning, data governance, and security concepts.
• Excellent problem-solving and communication skills.

Posted 1 week ago

Apply

5.0 - 8.0 years

6 - 14 Lacs

Chennai

Work from Office

Responsibilities:
* Design, develop & maintain data pipelines using ETL, SQL, Python & AWS.
* Optimize data warehousing solutions for performance & scalability.

Posted 1 week ago

Apply

7.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Experience: 8+ years of experience in the data field, with a proven track record of success in data engineering, data analysis, and data science.

Education: Bachelor's or Master's degree in Computer Science, Engineering, Statistics, or a related field.

Technical Skills: Expert proficiency in SQL and at least one programming language such as Python or R. Extensive experience with cloud platforms (e.g., AWS, GCP, Azure). Hands-on experience with big data technologies (e.g., Spark, Hadoop). Strong understanding of database technologies, including both relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB, Cassandra) databases.

Posted 1 week ago

Apply

5.0 - 9.0 years

5 - 9 Lacs

Pune, Maharashtra, India

On-site

Job Summary
We are looking for an experienced Data Product Manager to lead the strategy, development, and execution of data-driven products. The ideal candidate will possess deep expertise in data platforms, analytics, and AI-driven solutions, ensuring business needs are effectively translated into innovative data products. This role requires strong stakeholder management, technical knowledge, and a results-driven mindset.

Key Responsibilities
- Define and execute the data product roadmap, ensuring alignment with business objectives.
- Collaborate with cross-functional teams, including data engineering, analytics, and business stakeholders, to develop scalable data products.
- Drive data strategy by identifying opportunities for innovation, automation, and advanced analytics integration.
- Manage the entire lifecycle of data products, from inception to execution, ensuring continuous improvement.
- Leverage AI/ML models to optimize product offerings and enhance decision-making.
- Ensure data governance, compliance, and security across all data-related initiatives.
- Monitor key performance metrics to evaluate product success and drive improvements.
- Advocate for best practices in data management, architecture, and visualization.
- Stay ahead of industry trends, emerging technologies, and market developments related to data and analytics.
- Lead stakeholder discussions, effectively communicating technical concepts in business-friendly language.

Required Skills and Expertise
- Strong knowledge of data platforms, big data technologies, and cloud solutions (AWS, Azure, GCP).
- Experience with SQL, Python, Spark, and ETL pipelines for data management and transformation.
- Proven expertise in product lifecycle management, including agile methodologies.
- Familiarity with BI tools (Tableau, Power BI, Looker) and data visualization techniques.
- Understanding of AI/ML concepts, predictive analytics, and data monetization strategies.
- Excellent problem-solving skills, stakeholder engagement, and leadership capabilities.

Educational Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, Business Analytics, or a related field.
- Certifications in AI/ML, data engineering, or cloud technologies are a plus.

Additional Requirements
- Proven track record of managing data-driven products with high impact.
- Ability to drive innovation in data infrastructure and AI-powered solutions.
- Strong ability to influence and collaborate across business and technical teams.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Senior Lead - Data Operations at Aristocrat, you will play a crucial role in supporting the company's mission of bringing happiness to life through the power of play. Your responsibilities will include owning the end-to-end monitoring and maintenance of production data pipelines, batch processing workflows, and ETL/ELT jobs. You will serve as the point of contact for complex data incidents, conduct advanced troubleshooting on stored procedures, SQL data flows, and BI solutions, and lead Root Cause Analysis (RCA) efforts to deliver balanced solutions to recurring production issues.

Furthermore, you will drive operational documentation standards by creating robust runbooks, SOPs, and support wikis. You will define and implement release governance frameworks, version control protocols, and change management policies for data products. Collaborating with Data Engineering, BI, and Architecture teams, you will facilitate seamless releases, improvements, and system migrations. Your role will also involve developing and maintaining detailed release calendars, change logs, deployment documentation, and rollback procedures.

In addition, you will monitor release performance metrics, proactively address issues, and drive continuous improvement of the release process. Championing automation initiatives to eliminate manual interventions, reduce support overhead, and improve operational resilience will be a key focus. You will design and implement automated monitoring and alerting frameworks to ensure early detection of data quality or performance issues (a minimal sketch of such a check follows this listing). You will also provide leadership and direction for on-call rotation and weekend support coverage across global time zones, including Australia and the U.S.

To be successful in this role, you should have a Bachelor's degree in information technology or a related field, or the equivalent experience necessary to carry out the job responsibilities effectively. You should possess 6+ years of hands-on experience in Data Operations, Production Support, or Data Engineering, with a strong emphasis on production environment reliability. Experience in crafting, developing, validating, and deploying ETL pipelines is essential. Deep knowledge of SQL, stored procedures, and performance tuning in RDBMS environments is required. Familiarity with Snowflake, Google Cloud Platform (GCP), Git, and Python scripting is a plus.

Moreover, proficiency in system automation, orchestration tools, and incident management frameworks is desired. Proven success in contributing to a team-oriented environment; strong analytical, problem-solving, and decision-making capabilities; excellent written and oral communication skills; and strong interpersonal skills are necessary attributes for this role. Being dedicated and highly organized, with a proactive approach to operational excellence, will contribute to your success at Aristocrat.

Join Aristocrat and be part of an innovative, inclusive, and world-class team where individual differences are valued and all employees have the opportunity to realize their potential. Aristocrat offers a robust benefits package, global career opportunities, and a work environment based on shared values and an inspiring mission to bring joy to life through the power of play. Aristocrat is committed to creating an inclusive environment and welcomes applications from all individuals regardless of age, gender, race, ethnicity, cultural background, disability status, or LGBTQ+ identity.
Aristocrat is an equal opportunity employer (EEO M/F/D/V) that values diversity and inclusion in the workplace.
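As promised above, here is a minimal, hedged sketch of an automated freshness check of the kind a Data Operations team might run on a schedule: it alerts when a table's latest load is too old. The connection details, table, and webhook URL are hypothetical.

```python
# Minimal sketch: alert when a warehouse table's latest load is stale.
# Connection string, table, and webhook URL are hypothetical.
import datetime

import psycopg2
import requests

MAX_LAG = datetime.timedelta(hours=6)
WEBHOOK = "https://hooks.example.com/data-ops"  # hypothetical alert channel

def check_freshness(conn_str: str) -> None:
    with psycopg2.connect(conn_str) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT MAX(loaded_at) FROM warehouse.fact_sales")
            latest = cur.fetchone()[0]
    lag = datetime.datetime.utcnow() - latest
    if lag > MAX_LAG:
        requests.post(
            WEBHOOK,
            json={"text": f"fact_sales is stale by {lag}"},
            timeout=10,
        )
```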

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Senior Technical Lead for gaming content at Aristocrat, your primary responsibility will be to lead data validation and testing as part of the QA strategy for data engineering. You will establish and lead the QA strategy for the data engineering stack, which includes pipelines, transformations, reporting, and data quality checks. Your role will involve designing and implementing test strategies for ETL pipelines, data transformations, and BI dashboards.

You will be responsible for conducting both manual and automated testing for data pipelines and reports, as well as validating Looker dashboards and reports for data correctness, layout integrity, and performance. Automation of data validations using SQL and Python will be a key aspect of your duties (see the sketch after this listing). You will own and develop the data QA roadmap, transitioning from manual testing practices to full automation and CI/CD integration. It will be essential for you to maintain test documentation, encompassing test plans, cases, and defect logs, for instance in Jira.

We are looking for candidates with a minimum of 5 years of experience in QA roles, focusing on data engineering and reporting environments. Proficiency in SQL for querying and validating large datasets, as well as in Python, is required. Experience in testing Looker reports and dashboards (or similar tools like Tableau/Power BI) is highly desirable. Strong problem-solving skills, attention to detail, effective communication, and the ability to work collaboratively in a team environment are essential qualities.

Joining Aristocrat means being part of a team dedicated to excellence, innovation, and spreading happiness worldwide. As a world leader in gaming content and technology, we offer a robust benefits package, global career opportunities, and a workplace where individual differences are valued.

Please note that, depending on the nature of your role, you may be required to register with the Nevada Gaming Control Board (NGCB) and/or other gaming jurisdictions. Additionally, at this time we are unable to sponsor work visas for this position, and candidates must be authorized to work in the job posting location on a full-time basis without the need for current or future visa sponsorship.
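As promised above, here is a minimal, hedged sketch of automating such data validations with pytest: compare a source row count to its mart and enforce a not-null key rule. The DSNs and table names are hypothetical.

```python
# Minimal sketch: pytest-driven data validation comparing a source table
# to its mart and enforcing a not-null key. DSNs and tables are hypothetical.
import psycopg2

SOURCE_DSN = "dbname=src"   # hypothetical source connection
MART_DSN = "dbname=mart"    # hypothetical mart connection

def scalar(dsn: str, sql: str):
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(sql)
        return cur.fetchone()[0]

def test_row_counts_match():
    src = scalar(SOURCE_DSN, "SELECT COUNT(*) FROM raw.orders")
    tgt = scalar(MART_DSN, "SELECT COUNT(*) FROM marts.fct_orders")
    assert src == tgt, f"row count drift: source={src} mart={tgt}"

def test_no_null_keys():
    nulls = scalar(
        MART_DSN,
        "SELECT COUNT(*) FROM marts.fct_orders WHERE order_id IS NULL",
    )
    assert nulls == 0
```

Checks like these slot directly into a CI pipeline, which is the manual-to-automated transition the posting describes.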

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Jaipur, Rajasthan

On-site

You are a highly skilled and motivated AI Engineer joining our team to develop reusable AI-powered tools and components that drive automation, scalability, and efficiency across our technical delivery ecosystem. Your primary focus will be on leveraging large language models (LLMs) to develop intelligent, context-aware automation tools, starting with a flagship tool that automates SQL script generation for data migrations and transformations (a hedged sketch of this idea follows this listing). You will collaborate closely with solution architects and data engineers to build generative AI assets that can be integrated into repeatable client delivery workstreams, especially for non-profit and education clients using platforms like Salesforce Non-profit Cloud, Education Cloud, Salesforce NPSP, Raisers Edge, and Ellucian Banner.

Your key responsibilities will include designing and developing AI-powered tools: building generative AI tools and services, such as automated SQL generation engines, that reduce manual coding effort and increase accuracy. You will also be responsible for prompt engineering and tuning, integrating LLMs into business workflows, data contextualization and reasoning, leveraging LLMs for contextual understanding, collaborating with cross-functional teams, ensuring model safety and performance, driving reusability and knowledge capture, and contributing to the betterment of Cloud for Good.

To be successful in this role, you should have proven experience building with LLMs or other generative AI models, a strong background in Python with hands-on experience using AI/ML libraries, a solid understanding of SQL and data engineering workflows, experience working with structured data systems like SQL, Salesforce, Raisers Edge, and Banner, and the ability to build context-aware, prompt-engineered solutions that integrate with APIs or internal systems. Familiarity with MLOps or DevOps practices for deploying and monitoring AI applications is a plus, as are excellent communication skills and the ability to translate complex AI functionality into real-world business value.

Preferred skills include exposure to Salesforce ecosystem tools and data models, experience contributing to internal toolsets or reusable IP in a professional services or consulting environment, and prior experience working with data migration frameworks or ETL pipelines.

If you are looking to make an impact, thrive in a collaborative and innovative environment, value professional development opportunities and mentorship, and enjoy benefits such as a competitive salary, health/wellness packages, and flexible work options, you may be a great fit for this role.
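As promised above, here is a minimal, hedged sketch of the flagship idea: prompting an LLM to draft a migration SQL script from schema context. It uses the OpenAI client as one possible backend; the model choice and schemas are hypothetical, and real output would still need review and testing.

```python
# Minimal sketch: ask an LLM to draft a migration script from two schema
# descriptions. Model choice and schemas are hypothetical; review output.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You generate ANSI SQL migration scripts. "
    "Only reference columns present in the provided schemas."
)

def draft_migration(source_schema: str, target_schema: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content":
                f"Source schema:\n{source_schema}\n\n"
                f"Target schema:\n{target_schema}\n\n"
                "Write an INSERT ... SELECT script mapping source to target."},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content
```

Constraining the prompt to the supplied schemas is one simple guard against the model inventing columns, in the spirit of the "context-aware" tooling the posting describes.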

Posted 1 week ago

Apply

5.0 - 8.0 years

12 - 22 Lacs

Hyderabad

Hybrid

The Senior Consultant is responsible for supporting and enhancing the existing on-premise BI environment and developing the cloud BI stack. Tasks range from database maintenance and support to general BI development, including ETL and visualizations. Must have the ability to work on end-to-end BI solution design and delivery.

Responsibilities
- Design and implement scalable data warehouse and BI solutions on Azure.
- Drive the development of Power BI dashboards and analytics solutions aligned with business KPIs and executive reporting needs.
- Collaborate with stakeholders to define reporting requirements and translate them into efficient BI solutions.
- Oversee and optimize ETL/ELT pipelines for performance, scalability, and cost-efficiency.
- Drive the use of scripting and automation to streamline data workflows and monitoring.
- Collaborate with teams to integrate Big Data technologies into BI/DWH solutions.
- Ensure data quality, governance, and compliance across all reporting layers.
- Proactively identify opportunities to improve reporting processes and data warehouse architecture.
- Act as a trusted advisor for BI strategy, suggesting best practices and emerging tools in the Azure ecosystem.

Essential Skills - Technical
- Data Warehousing (DWH) design, architecture, and implementation
- Azure Synapse Analytics / Azure Data Warehouse
- Power BI: data modeling, DAX, dashboard/report design, optimization
- SQL: advanced querying, optimization, stored procedures, performance tuning
- Database knowledge: relational models, indexing, schema design
- ETL/ELT tools & pipelines: data integration and orchestration
- Data modeling: star schema, snowflake schema, SCDs, fact/dimension design
- Data governance & security: role-based access, compliance best practices
- Cloud ecosystem (Azure): storage, compute, and analytics services
- Scripting languages such as Python, Shell, or PowerShell for automation and data handling
- Exposure to Big Data technologies (e.g., Spark, Databricks, Hadoop ecosystem)

Essential Skills - Personal
- Stakeholder management: engage with business users to gather and translate requirements into BI solutions
- Communication skills: ability to explain technical concepts clearly to non-technical stakeholders
- Analytical thinking: strong problem-solving mindset with a focus on business outcomes
- Collaboration: work effectively with cross-functional teams (Business, IT, Data Engineering)
- Project ownership: ability to manage end-to-end delivery with accountability for timelines and quality
- Adaptability: stay updated with emerging BI and cloud technologies, driving innovation in solutions

Preferred Skills - Job
- Knowledge of Oracle modules/tables: Financials, HR, Projects
- Knowledge of OTBI Subject Areas/BI Publisher
- Understanding of financial concepts is a plus

Preferred Skills - Personal
- Demonstrates proactive thinking
- Strong interpersonal relations and business acumen
- Negotiation and persuasion skills, required to work with partners and implement changes
- A very inquisitive mind that can factor in the several variables acting on a situation

Posted 1 week ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

mumbai

Work from Office

As a Data Engineer at IBM, you'll play a vital role in development and application design, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:

Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
Strive for continuous improvement by testing the built solution and working within an agile framework.
Discover and implement the latest technology trends to build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing (see the sketch after this listing).
Big Data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
Data engineering skills: strong understanding of ETL pipelines, data modelling, and data warehousing concepts.
Python: strong proficiency in Python programming, with a focus on data processing and manipulation.
Data processing frameworks: knowledge of libraries such as Pandas and NumPy.
SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
Cloud platforms: experience working with AWS, Azure, or GCP, including cloud storage systems.

Preferred technical and professional experience
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
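As a concrete example of the PySpark work this role describes, here is a minimal, self-contained batch ETL sketch covering extract, transform, and load. The paths, column names, and aggregation are placeholders, not IBM's actual pipeline.

```python
# Illustrative PySpark batch ETL sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw order events from a landing zone (hypothetical path).
orders = spark.read.json("s3a://landing/orders/")

# Transform: basic cleansing plus a daily revenue aggregate.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("revenue"),
        F.count("*").alias("order_count"),
    )
)

# Load: write partitioned Parquet for downstream warehouse consumption.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://curated/daily_revenue/")

spark.stop()
```

Partitioning the output by the grouping key keeps downstream SQL scans cheap, which is usually the point of a curated layer like this.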

Posted 1 week ago

Apply

5.0 - 7.0 years

0 Lacs

hyderabad, telangana, india

Remote

Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible.

Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.

At Bristol Myers Squibb, we are inspired by a single vision: transforming patients' lives through science. In oncology, hematology, immunology, and cardiovascular disease, with one of the most diverse and promising pipelines in the industry, each of our passionate colleagues contributes to innovations that drive meaningful change. We bring a human touch to every treatment we pioneer. Join us and make a difference.

Position Summary
At BMS, digital innovation and Information Technology are central to our vision of transforming patients' lives through science. To accelerate our ability to serve patients around the world, we must unleash the power of technology. We are committed to being at the forefront of transforming the way medicine is made and delivered by harnessing the power of computer and data science, artificial intelligence, and other technologies to promote scientific discovery, faster decision making, and enhanced patient care. If you want an exciting and rewarding career that is meaningful, consider joining our diverse team!

As a Data Engineer based out of BMS Hyderabad, you are part of the Data Platform team, supporting the larger Data Engineering community that delivers data and analytics capabilities across different IT functional domains. The ideal candidate will have a strong background in data engineering, DataOps, and cloud-native services, and will be comfortable working with both structured and unstructured data.

Key Responsibilities
The Data Engineer will be responsible for designing, building, and maintaining ETL pipelines and data products, evolving those data products, and applying the data architecture best suited to our organization's needs.
Deliver high-quality data products and analytics-ready data solutions.
Work with an end-to-end ownership mindset; innovate and drive initiatives through completion.
Develop and maintain data models to support our reporting and analysis needs.
Optimize data storage and retrieval to ensure efficient performance and scalability.
Collaborate with data architects, data analysts, and data scientists to understand their data needs and ensure that the data infrastructure supports their requirements.
Ensure data quality and integrity through data validation and testing.
Implement and maintain security protocols to protect sensitive data.
Stay up to date with emerging trends and technologies in data engineering and analytics.
Closely partner with the Enterprise Data and Analytics Platform team, other functional data teams, and the Data Community lead to shape and adopt data and technology strategy.
Serve as the Subject Matter Expert on data and analytics solutions.
Stay knowledgeable about evolving trends in data platforms and product-based implementation.
Bring an end-to-end ownership mindset to driving initiatives through completion.
Be comfortable working in a fast-paced environment with minimal oversight.
Mentor other team members effectively to unlock their full potential.
Prior experience working in an Agile/product-based environment.

Qualifications & Experience
5+ years of hands-on experience implementing and operating data capabilities and cutting-edge data solutions, preferably in a cloud environment.
Breadth of experience in technology capabilities spanning the full life cycle of data management, including data lakehouses, master/reference data management, data quality, and analytics/AI-ML.
In-depth knowledge of and hands-on experience with AWS Glue and the AWS data engineering ecosystem (a skeleton Glue job follows this listing).
Hands-on experience developing and delivering data and ETL solutions with technologies such as AWS data services (Redshift, Athena, Lake Formation, etc.); experience with Cloudera Data Platform and Tableau is a plus.
5+ years of experience in data engineering or software development.
Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Strong programming skills in languages and libraries such as Python, R, PyTorch, PySpark, Pandas, and Scala.
Experience with SQL and database technologies such as MySQL, PostgreSQL, Presto, etc.
Experience with cloud-based data technologies such as AWS, Azure, or Google Cloud Platform.
Strong analytical and problem-solving skills.
Excellent communication and collaboration skills.
Functional knowledge of, or prior experience in, the life sciences research and development domain is a plus.
Experience and expertise in establishing agile, product-oriented teams that work effectively with teams in the US and other global BMS sites.
Initiates challenging opportunities that build strong capabilities for self and team.
Demonstrates a focus on improving processes, structures, and knowledge within the team.
Leads in analyzing current states, delivers strong recommendations grounded in an understanding of the environment's complexity, and executes to bring complex solutions to completion.

Why You Should Apply
Around the world, we are passionate about making an impact on the lives of patients with serious diseases. Empowered to apply our individual talents and diverse perspectives in an inclusive culture, our shared values of passion, innovation, urgency, accountability, inclusion, and integrity bring out the highest potential of each of our colleagues.

Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Our company is committed to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace adjustments and ongoing support in their roles.
Applicants can request an accommodation prior to accepting a job offer. If you require reasonable accommodation in completing this application, or any part of the recruitment process, direct your inquiries to [HIDDEN TEXT]. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.

If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.

Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as "Transforming patients' lives through science", every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role. Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility; for these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.

BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to [HIDDEN TEXT]. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.

BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters.

BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area.

If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/

Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
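To ground the AWS Glue experience named in the qualifications, here is a skeleton Glue PySpark job showing the catalog-read, mapping, and S3-write pattern. The catalog database, table name, column mappings, and bucket path are invented for illustration; this is not BMS's actual pipeline.

```python
# Skeleton AWS Glue PySpark job; database, table, and bucket names are hypothetical.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from a Glue Data Catalog table (e.g., one governed via Lake Formation).
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="events")

# Rename and cast columns on the way into the curated layer.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("event_ts", "string", "event_time", "timestamp"),
        ("payload.amount", "double", "amount", "double"),
    ],
)

# Write Parquet to S3 for querying with Athena or loading into Redshift.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://curated-bucket/events/"},
    format="parquet",
)
job.commit()
```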

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies