
147 Apache Airflow Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

5 - 15 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

We are seeking an experienced Apache Airflow Subject Matter Expert (SME) (contract, remote - India) to join our Data Engineering team. You will be responsible for optimizing Airflow environments, building scalable orchestration frameworks, and supporting enterprise-scale data pipelines while collaborating with cross-functional teams.

Responsibilities:
- Optimize and fine-tune existing Apache Airflow environments, addressing performance and reliability.
- Design and develop scalable, modular, and reusable Airflow DAGs for complex data workflows.
- Integrate Airflow with cloud-native services such as data factories, compute platforms, storage, and analytics.
- Develop and maintain CI/CD pipelines for DAG deployment, testing, and release automation.
- Implement monitoring, alerting, and logging standards to ensure operational excellence.
- Provide architectural guidance and hands-on support for new data pipeline development.
- Document Airflow configurations, deployment processes, and operational procedures.
- Mentor engineers and lead knowledge-sharing on orchestration best practices.

Skills:
- Expertise in Airflow internals, including schedulers, executors (Celery, Kubernetes), and plugins.
- Experience with autoscaling solutions (KEDA) and Celery for distributed task execution.
- Strong hands-on skills in Python programming and modular code development.
- Proficiency with cloud services (Azure, AWS, or GCP), including data pipelines, compute, and storage.
- Solid experience with CI/CD tools such as Azure DevOps, Jenkins, or GitHub Actions.
- Familiarity with Docker, Kubernetes, and related deployment technologies.
- Strong background in monitoring tools (Prometheus, Grafana) and log aggregation (ELK, Log Analytics).
- Excellent problem-solving, communication, and collaboration skills.

Interested? Please send your updated CV to jobs.india@pixelcodetech.com and a member of our resource team will be in touch.
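To illustrate the kind of modular, reusable DAG development this role calls for, here is a minimal sketch using the Airflow 2.x TaskFlow API (the `schedule` argument assumes Airflow 2.4 or later); the DAG id, schedule, and task logic are illustrative placeholders, not anything specified by the employer.

```python
# Minimal sketch of a modular Airflow 2.x DAG using the TaskFlow API.
# DAG id, schedule, and task bodies are illustrative placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(
    dag_id="example_orchestration_pipeline",
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
    default_args={"retries": 2},
    tags=["example"],
)
def example_orchestration_pipeline():
    @task
    def extract() -> list[dict]:
        # Placeholder for pulling records from a source system.
        return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Reusable transformation step; real logic would live in a shared module.
        return [{**r, "value": r["value"] * 2} for r in records]

    @task
    def load(records: list[dict]) -> None:
        # Placeholder for writing to a warehouse or lake.
        print(f"loaded {len(records)} records")

    load(transform(extract()))


example_orchestration_pipeline()
```

Keeping the transformation logic in plain Python functions (or a shared package) is what makes DAGs like this reusable across pipelines; the DAG file itself stays a thin orchestration layer.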

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Haryana

On-site

At American Express, our culture is built on a 175-year history of innovation, shared values, and Leadership Behaviors, with an unwavering commitment to supporting our customers, communities, and colleagues. As a member of Team Amex, you will receive comprehensive support for your holistic well-being and numerous opportunities to enhance your skills, develop leadership qualities, and advance your career. Your voice and ideas hold significance here, making a tangible impact as we collectively shape the future of American Express.

Enterprise Architecture, situated within the Chief Technology Office at American Express, plays a crucial role as a key enabler of the company's technology strategy. This organization focuses on four primary pillars:
- Architecture as Code: Responsible for managing foundational technologies utilized by engineering teams across the enterprise.
- Architecture as Design: Involves solution and technical design for transformation programs and critical projects requiring architectural guidance.
- Governance: Defines technical standards and develops innovative tools to automate controls for ensuring compliance.
- Colleague Enablement: Concentrates on colleague development, recognition, training, and enterprise outreach.

As part of the team, your responsibilities will include:
- Designing, developing, and ensuring the scalability, security, and resilience of applications and data pipelines.
- Providing architectural guidance and documentation to support regulatory audits when necessary.
- Contributing to enterprise architecture initiatives, domain reviews, and solution architecture.
- Promoting innovation by exploring new tools, frameworks, and design methodologies.

To qualify for this role, we are seeking candidates with the following qualifications:
- Ideally possess a BS or MS degree in computer science, computer engineering, or a related technical discipline.
- Minimum of 6 years of software engineering experience with strong proficiency in Java and Node.js.
- Experience with Python and workflow orchestration tools like Apache Airflow is highly desirable.
- Demonstrated expertise in designing and implementing distributed systems and APIs.
- Familiarity with cloud platforms such as GCP, AWS, and modern CI/CD pipelines.
- Ability to articulate clear architectural documentation and present ideas concisely.
- Proven success working collaboratively in a cross-functional, matrixed environment.
- Passion for innovation, problem-solving, and driving technology modernization.
- Preferred: experience with microservices architectures and event-driven architecture.

American Express provides benefits that cater to your holistic well-being, ensuring you can perform at your best. These benefits include competitive base salaries, bonus incentives, support for financial well-being and retirement, comprehensive medical, dental, vision, life insurance, and disability benefits, flexible working models, generous paid parental leave policies, access to global on-site wellness centers, confidential counseling support through the Healthy Minds program, and career development and training opportunities.

Please note that an offer of employment with American Express is subject to the successful completion of a background verification check, as per applicable laws and regulations.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Data Specialist, you will be responsible for utilizing your expertise in ETL fundamentals, SQL, BigQuery, Dataproc, Python, Data Catalog, data warehousing, and various other tools to contribute to the successful implementation of data projects. Your role will involve working with technologies such as Cloud Trace, Cloud Logging, Cloud Storage, and Data Fusion to build and maintain a modern data platform.

To excel in this position, you should possess a minimum of 5 years of experience in the data engineering field, with a focus on the GCP cloud data implementation suite, including BigQuery, Pub/Sub, Dataflow/Apache Beam, Airflow/Composer, and Cloud Storage. Your strong understanding of very large-scale data architecture and hands-on experience with data warehouses, data lakes, and analytics platforms will be crucial for the success of our projects.

Key Requirements:
- Minimum 5 years of experience in data engineering
- Hands-on experience with the GCP cloud data implementation suite
- Strong expertise in GBQ Query, Python, Apache Airflow, and SQL (BigQuery preferred)
- Extensive hands-on experience with SQL and Python for working with data

If you are passionate about data and have a proven track record of delivering results in a fast-paced environment, we invite you to apply for this exciting opportunity to be a part of our dynamic team.
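As one hedged example of the Composer/Airflow-plus-BigQuery work described above, the sketch below schedules a BigQuery SQL aggregation with the Google provider's `BigQueryInsertJobOperator`; the project, dataset, table, and query are hypothetical, and the task assumes the `apache-airflow-providers-google` package and a configured GCP connection.

```python
# Illustrative sketch: a daily BigQuery aggregation scheduled from Airflow/Composer.
# Project, dataset, and table names are made up for the example.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="bq_daily_aggregate",
    schedule="@daily",          # Airflow 2.4+; older versions use schedule_interval
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    aggregate_events = BigQueryInsertJobOperator(
        task_id="aggregate_events",
        configuration={
            "query": {
                "query": """
                    SELECT event_date, COUNT(*) AS events
                    FROM `my_project.analytics.raw_events`
                    GROUP BY event_date
                """,
                "useLegacySql": False,
            }
        },
        location="US",
    )
```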

Posted 1 week ago

Apply

2.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

You are an experienced Data Engineer with expertise in PySpark, Snowflake, and AWS, and you will be responsible for designing, developing, and optimizing data pipelines and workflows in a cloud-based environment. Your main focus will be leveraging AWS services, PySpark, and Snowflake for data processing and analytics.

Your key responsibilities will include:
- Designing and implementing scalable ETL pipelines using PySpark on AWS
- Developing and optimizing data workflows for Snowflake integration
- Managing and configuring AWS services such as S3, Lambda, Glue, EMR, and Redshift
- Collaborating with data analysts and business teams to understand requirements and deliver solutions
- Ensuring data security and compliance with best practices in AWS and Snowflake environments
- Monitoring and troubleshooting data pipelines and workflows for performance and reliability
- Writing efficient, reusable, and maintainable code for data processing and transformation

Required skills:
- Strong experience with AWS services (S3, Lambda, Glue, MSK, etc.)
- Proficiency in PySpark for large-scale data processing
- Hands-on experience with Snowflake for data warehousing and analytics
- Solid understanding of SQL and database optimization techniques
- Knowledge of data lake and data warehouse architectures
- Familiarity with CI/CD pipelines and version control systems (e.g., Git)
- Strong problem-solving and debugging skills
- Experience with Terraform or CloudFormation for infrastructure as code
- Knowledge of Python for scripting and automation
- Familiarity with Apache Airflow for workflow orchestration
- Understanding of data governance and security best practices
- Certification in AWS or Snowflake is a plus

For education and experience, a Bachelor's degree in Computer Science, Engineering, or a related field with 6 to 10 years of experience is required, along with 5+ years of experience in AWS cloud engineering and 2+ years of experience with PySpark and Snowflake. This position falls under the Technology Job Family Group and the Digital Software Engineering Job Family, and it is a full-time role.
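As a rough illustration of the PySpark-on-AWS ETL work this posting describes, the sketch below reads raw JSON from S3, applies a simple transformation, and writes partitioned Parquet; the bucket names, columns, and aggregation are hypothetical, and a Snowflake load would instead go through the Spark-Snowflake connector.

```python
# Illustrative PySpark ETL sketch; paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# On EMR the s3:// scheme works out of the box; open-source Spark with
# hadoop-aws typically uses s3a:// paths instead.
raw = spark.read.json("s3://example-bucket/raw/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)

daily = cleaned.groupBy("order_date").agg(F.sum("amount").alias("daily_revenue"))

# Writing curated output as partitioned Parquet; a Snowflake load would use
# the Spark-Snowflake connector ("snowflake" format) with connection options.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_revenue/"
)
```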

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be responsible for fetching and transforming data from various systems, conducting in-depth analyses to identify gaps, opportunities, and insights, and providing recommendations that support strategic business decisions. Your key responsibilities will include data extraction and transformation, data analysis and insight generation, visualization and reporting, collaboration with cross-functional teams, and building strong working relationships with external stakeholders. You will report to the VP Business Growth and work closely with clients.

To excel in this role, you should have proficiency in SQL for data querying and Python for data manipulation and transformation. Experience with data engineering tools such as Spark and Kafka, as well as orchestration tools like Apache NiFi and Apache Airflow, will be essential for ETL processes and workflow automation. Expertise in data visualization tools such as Tableau and Power BI, along with strong analytical skills including statistical techniques, will be crucial.

In addition to technical skills, you should possess soft skills such as flexibility, excellent communication skills, business acumen, and the ability to work independently as well as within a team. Your academic qualifications should include a Bachelor's or Master's degree in Applied Mathematics, Management Science, Data Science, Statistics, Econometrics, or Engineering. Extensive experience in Data Lake architecture, building data pipelines using AWS services, proficiency in Python and SQL, and experience in the banking domain will be advantageous. Overall, you should demonstrate high motivation, a good work ethic, maturity, personal initiative, and strong oral and written communication skills to succeed in this role.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Kochi, Kerala

On-site

The ideal candidate ready to join immediately can share their details via email for quick processing at nitin.patil@ust.com. Act swiftly for immediate attention!

With over 5 years of experience, the successful candidate will have the following roles and responsibilities:
- Designing, developing, and maintaining scalable data pipelines using Spark (PySpark or Spark with Scala).
- Constructing data ingestion and transformation frameworks for both structured and unstructured data sources.
- Collaborating with data analysts, data scientists, and business stakeholders to comprehend requirements and deliver reliable data solutions.
- Handling large volumes of data while ensuring quality, integrity, and consistency.
- Optimizing data workflows for enhanced performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP.
- Implementing data quality checks and automation for ETL/ELT pipelines.
- Monitoring and troubleshooting data issues in production environments and conducting root cause analysis.
- Documenting technical processes, system designs, and operational procedures.

Key Skills Required:
- Minimum 3 years of experience as a Data Engineer or in a similar role.
- Proficiency with PySpark or Spark using Scala.
- Strong grasp of SQL for data querying and transformation purposes.
- Previous experience working with any cloud platform (AWS, Azure, or GCP).
- Sound understanding of data warehousing concepts and big data architecture.
- Familiarity with version control systems like Git.

Desired Skills:
- Exposure to data orchestration tools such as Apache Airflow, Databricks Workflows, or equivalent.
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools like Docker/Kubernetes.
- Experience with CI/CD practices and familiarity with DevOps principles.
- Understanding of data governance, security, and compliance standards.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer II in our team, you will be responsible for managing the deprecation of migrated workflows and ensuring the seamless migration of workflows into new systems. Your expertise in building and maintaining scalable data pipelines, both on-premises and on the cloud, will be crucial. You should have a deep understanding of input and output data sources, upstream/downstream dependencies, and data quality assurance. Proficiency in tools like Git, Apache Airflow, Apache Spark, SQL, data migration, and data validation is essential for this role.

Your key responsibilities will include:

Workflow Deprecation:
- Evaluate current workflows' dependencies and consumption for deprecation.
- Identify, mark, and communicate deprecated workflows using tools and best practices.

Data Migration:
- Plan and execute data migration tasks ensuring accuracy and completeness.
- Implement strategies for accelerating data migration pace and ensuring data readiness.

Data Validation:
- Define and implement data validation rules for accuracy and reliability.
- Monitor data quality using validation solutions and anomaly detection methods.

Workflow Management:
- Schedule, monitor, and automate data workflows using Apache Airflow.
- Develop and manage Directed Acyclic Graphs (DAGs) in Airflow for complex data processing tasks.

Data Processing:
- Develop and maintain data processing scripts using SQL and Apache Spark.
- Optimize data processing for performance and efficiency.

Version Control:
- Collaborate using Git for version control and manage the codebase effectively.
- Ensure code quality and repository management best practices.

Continuous Improvement:
- Stay updated with the latest data engineering technologies.
- Enhance performance and reliability by improving and refactoring data pipelines and processes.

Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proficient in Git, SQL, and database technologies.
- Experience in Apache Airflow and Apache Spark for data processing.
- Knowledge of data migration, validation techniques, governance, and security.
- Strong problem-solving skills and ability to work independently and in a team.
- Excellent communication skills to collaborate with a global team in a high-performing environment.
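For the data-validation side of this role, a minimal sketch of a source-versus-target check in PySpark might look like the following; the metrics and key-based comparison are assumptions for illustration, not the team's actual validation framework.

```python
# Illustrative migration check: compare a migrated table against its source
# on row count and key coverage. Column/key names are hypothetical.
from pyspark.sql import DataFrame


def validate_migration(source: DataFrame, target: DataFrame, key: str) -> dict:
    """Return simple validation metrics for a source-to-target migration."""
    source_count = source.count()
    target_count = target.count()

    # Keys present in the source but absent from the target (left anti join).
    missing_keys = (
        source.select(key).distinct()
        .join(target.select(key).distinct(), on=key, how="left_anti")
        .count()
    )

    return {
        "source_rows": source_count,
        "target_rows": target_count,
        "row_count_match": source_count == target_count,
        "keys_missing_in_target": missing_keys,
    }
```

A check like this could run as a downstream Airflow task after each migration batch, failing the DAG run when the returned metrics fall outside agreed thresholds.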

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You will be responsible for designing and building scalable and efficient data warehouses to support analytics and reporting needs. Additionally, you will develop and optimize ETL pipelines by writing complex SQL queries and automating data pipelines using tools like Apache Airflow. Your role will also involve query optimization, performance tuning, and database management with MySQL, PostgreSQL, and Spark for structured and semi-structured data. Ensuring data quality and governance will be a key part of your responsibilities, where you will validate, monitor, and enforce practices to maintain data accuracy, consistency, and completeness. You will implement data governance best practices, define data standards, access controls, and policies to uphold a well-governed data ecosystem. Data modeling, ETL best practices, BI dashboarding, and proposing/implementing solutions to improve existing systems will also be part of your day-to-day tasks. Collaboration and problem-solving are essential in this role as you will work independently, collaborate with cross-functional teams, and proactively troubleshoot data challenges. Experience with dbt for data transformations is considered a bonus for this position. To qualify for this role, you should have 5-7 years of experience in the data domain, with expertise in data engineering and BI. Strong SQL skills, hands-on experience with data warehouse concepts, ETL best practices, and proficiency in MySQL, PostgreSQL, and Spark are required. Experience with Apache Airflow, data modeling techniques, BI tools like Power BI, Tableau, Apache Superset, data quality frameworks, and governance policies are also essential. The ability to work independently, identify problems, and propose effective solutions is crucial. If you are looking to join a dynamic team at Zenda and have the required experience and skills, we encourage you to apply for this position.,

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

The Digital Success Engineering team is looking for an experienced Marketing Cloud Engineer (SMTS) to join the Customer Engagement Engineering team. In this role, you will be responsible for providing product support to customers and business users by leveraging your strong Salesforce Marketing Cloud platform knowledge, technical expertise, and exceptional business-facing skills. You will work closely with various teams to understand, troubleshoot, and coordinate operational issues within the Customer Engagement Ecosystem. Your responsibilities will include conducting technical requirements gathering, designing and implementing robust solutions for Salesforce Marketing Cloud projects, and ensuring seamless integration with internal and external systems. You will also be expected to triage and troubleshoot issues, demonstrate analytical and problem-solving expertise, and maintain technical and domain expertise in your assigned areas. To succeed in this role, you must have a Bachelor's degree in Computer Science, Software Engineering, or a related field, along with at least 8 years of hands-on experience in Salesforce Marketing Cloud and other Salesforce Core products. You should be a self-starter, able to work under pressure, and possess advanced knowledge in systems integrations, APIs, marketing compliance, and security protocols. Proficiency in various technical tools and languages such as AMPScript, HTML, CSS, JavaScript, SQL, Python, and Rest API is required. Additionally, you must have excellent communication skills to effectively collaborate with cross-functional teams, provide thought leadership, and drive successful digital program execution. Your project management skills should be top-notch, enabling you to organize, prioritize, and simplify engineering work across different technical domains. If you are a problem solver with a passion for technology, possess exceptional communication skills, and thrive in a fast-paced environment, we encourage you to apply for this role and be a key player in our Digital Success Engineering team.,

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You will be responsible for designing and building data warehouses to support analytics and reporting needs. This includes architecting scalable and efficient data warehouses. Additionally, you will be developing and optimizing ETL pipelines by writing complex SQL queries and utilizing tools like Apache Airflow for automation. Query optimization and performance tuning will be a key aspect of your role, where you will focus on writing efficient SQL queries for both ETL jobs and dashboards in BI tools. Database management is another crucial responsibility, involving working with MySQL, PostgreSQL, and Spark to manage structured and semi-structured data. You will also ensure data quality and governance by validating, monitoring, and implementing governance practices to maintain data accuracy, consistency, and completeness. Implementing data governance best practices, defining data standards, access controls, and policies will be essential to maintain a well-governed data ecosystem. Your role will also include focusing on data modeling and ETL best practices to ensure robust data modeling and the application of best practices for ETL development. Working with BI tools such as Power BI, Tableau, and Apache Superset to create insightful dashboards and reports will also be part of your responsibilities. You will identify and propose improvements to existing systems and take ownership of designing and developing new data solutions. Collaboration and problem-solving are integral to this role, as you will work independently, collaborate with cross-functional teams, and proactively troubleshoot data challenges. Experience with dbt for data transformations is a bonus. Requirements for this role include 5-7 years of experience in the data domain encompassing data engineering and BI. Strong SQL skills with expertise in writing efficient and complex queries are essential. Hands-on experience with data warehouse concepts and ETL best practices, proficiency in MySQL, PostgreSQL, and Spark, and experience using Apache Airflow for workflow orchestration are also required. A strong understanding of data modeling techniques for analytical workloads, experience with Power BI, Tableau, and Apache Superset for reporting and dashboarding, familiarity with data quality frameworks, data validation techniques, and governance policies are prerequisites. The ability to work independently, identify problems, and propose effective solutions is crucial. Experience with dbt for data transformations is a bonus. This job opportunity was posted by Meenal Sharma from Zenda.,

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services, and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI's Fiscal 2024 reported revenue is CA$14.68 billion, and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Position: Senior Software Engineer - AI/ML Backend Developer
Experience: 4-6 years
Category: Software Development/Engineering
Location: Bangalore/Hyderabad/Chennai/Pune/Mumbai
Shift Timing: General Shift
Position ID: J0725-0150
Employment Type: Full Time
Education Qualification: Bachelor's degree in computer science or a related field (or higher) with a minimum of 4 years of relevant experience.

We are seeking an experienced AI/ML Backend Developer to join our dynamic technology team. The ideal candidate will have a strong background in developing and deploying machine learning models, implementing AI algorithms, and managing backend systems and integrations. You will play a key role in shaping the future of our technology by integrating cutting-edge AI/ML techniques into scalable backend solutions.

Your future duties and responsibilities:
- Develop, optimize, and maintain backend services for AI/ML applications.
- Implement and deploy machine learning models to production environments.
- Collaborate closely with data scientists and frontend engineers to ensure seamless integration of backend APIs and services.
- Monitor and improve the performance, reliability, and scalability of existing AI/ML services.
- Design and implement robust data pipelines and data processing workflows.
- Identify and solve performance bottlenecks and optimize AI/ML algorithms for production.
- Stay current with emerging AI/ML technologies and frameworks to recommend and implement improvements.

Required qualifications to be successful in this role:

Must-have Skills:
- Python, TensorFlow, PyTorch, scikit-learn
- Machine learning frameworks: TensorFlow, PyTorch, scikit-learn
- Backend development frameworks: Flask, Django, FastAPI
- Cloud technologies: AWS, Azure, Google Cloud Platform (GCP)
- Containerization and orchestration: Docker, Kubernetes
- Data management and pipeline tools: Apache Kafka, Apache Airflow, Spark
- Database technologies: SQL databases (PostgreSQL, MySQL), NoSQL databases (MongoDB, Cassandra)
- Vector databases: Pinecone, Milvus, Weaviate
- Version control: Git
- Continuous Integration/Continuous Deployment (CI/CD) pipelines: Jenkins, GitHub Actions, GitLab CI/CD

Minimum of 4 years of experience developing backend systems, specifically in AI/ML contexts. Proven experience in deploying machine learning models and AI-driven applications in production. Solid understanding of machine learning concepts, algorithms, and deep learning techniques. Proficiency in writing efficient, maintainable, and scalable backend code. Experience working with cloud platforms (AWS, Azure, Google Cloud). Strong analytical and problem-solving skills. Excellent communication and teamwork abilities.

Good-to-have Skills:
- Java (preferred), Scala (optional)

Together, as owners, let's turn meaningful insights into action.

Life at CGI is rooted in ownership, teamwork, respect, and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Maharashtra

On-site

Whether you're at the start of your career or looking to discover your next adventure, your story begins here. At Citi, you'll have the opportunity to expand your skills and make a difference at one of the world's most global banks. We're fully committed to supporting your growth and development from the start with extensive on-the-job training and exposure to senior leaders, as well as more traditional learning. You'll also have the chance to give back and make a positive impact where we live and work through volunteerism.

The Product Developer is a strategic professional who stays abreast of developments within their field and contributes to directional strategy by considering their application in their job and the business. Recognized as a technical authority for an area within the business, this role requires basic commercial awareness. Developed communication and diplomacy skills are necessary to guide, influence, and convince others, particularly colleagues in other areas and occasional external customers. The impact of the work is significant on the area through complex deliverables, providing advice and counsel related to the technology or operations of the business. The work impacts an entire area, which eventually affects the overall performance and effectiveness of the sub-function/job family.

In this role, you're expected to:
- Develop reporting and analytical solutions using various technologies like Python, relational and non-relational databases, Business Intelligence tools, and code orchestrations
- Identify solutions ranging across data analytics, reporting, CRM, reference data, workflows, and trade processing
- Design compelling dashboards and reports using business intelligence tools like QlikView, Tableau, Pixel-perfect, etc.
- Perform data investigations with a high degree of accuracy under tight timelines
- Develop plans, prioritize, coordinate design and delivery of products or features to product release, and serve as a product ambassador within the user community
- Mentor junior colleagues on technical topics relating to data analytics and software development and conduct code reviews
- Follow market, industry, and client trends in your own field and adapt them for application to Citi's products and solutions platforms
- Work in close coordination with Technology, Business Managers, and other stakeholders to fulfill the delivery objectives
- Partner with senior team members and leaders and a widely distributed global user community to define and implement solutions
- Appropriately assess risk when making business decisions, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets by driving compliance with applicable laws, rules, and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency

As a successful candidate, you'd ideally have the following skills and exposure:
- 8-12 years of experience using tools for statistical modeling of large data sets and proficient knowledge of data modeling and databases, such as Microsoft SQL Server, Oracle, and Impala
- Advanced knowledge of analytical and business intelligence tools including Tableau Desktop, Tableau Prep, TabPy, and Access
- Familiarity with product development methodologies
- Proficient knowledge of programming languages and frameworks such as Python, Visual Basic, and/or R, Apache Airflow, Streamlit and/or Flask, and Starburst
- Well versed with code versioning tools like GitHub, Bitbucket, etc.
- Ability to create business analysis, troubleshoot data quality issues, and conduct exploratory and descriptive analysis of business datasets
- Ability to structure and break down problems, develop solutions, and drive results
- Project management skills with experience leading large technological initiatives

Education:
- Bachelor's/University degree; Master's degree preferred

Take the next step in your career, apply for this role at Citi today.

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

As a Developer contracted by Luxoft to support customer initiatives, your main task will involve developing solutions based on client requirements within the Telecom/network environment. You will be responsible for utilizing technologies such as Databricks on Azure, Apache Spark, Python, SQL, and Apache Airflow to create and manage Databricks clusters for ETL processes. Integration with ADLS and Blob Storage, and efficient data ingestion from various sources including on-premises databases, cloud storage, APIs, and streaming data, will also be part of your role. Moreover, you will work on handling secrets using Azure Key Vault, interacting with APIs, and gaining hands-on experience with Kafka/Azure Event Hubs streaming. Your expertise in Databricks Delta APIs, Unity Catalog, and version control tools like GitHub will be crucial. Additionally, you will be involved in data analytics, supporting ML frameworks, and integrating with Databricks for model training.

Proficiency in Python, Apache Airflow, Microsoft Azure, Databricks, SQL, ADLS, Blob Storage, Kafka/Azure Event Hubs, and various other related skills is a must. The ideal candidate should hold a Bachelor's degree in Computer Science or a related field and possess at least 7 years of experience in development. Problem-solving skills, effective communication abilities, teamwork, and a commitment to continuous learning are essential traits for this role. Desirable skills include exposure to Snowflake, PostgreSQL, Redis, GenAI, and a good understanding of RBAC.

Proficiency in English at C2 level is required for this Senior-level position based in Bengaluru, India. This opportunity falls under the Big Data Development category within Cross Industry Solutions and is expected to be effective from 06/05/2025.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

You are a highly skilled and experienced Senior Backend Developer with a focus on Python and backend development. In this role, you will be responsible for designing, developing, and maintaining backend applications using Python. Collaborating with cross-functional teams, you will implement RESTful APIs and web services to ensure high-performance and scalable backend systems. Your key responsibilities will include optimizing database performance, working with relational databases such as MySQL and PostgreSQL, and GraphDB like Neo4j. You will also develop and manage orchestration workflows using tools like Apache Airflow, as well as implementing and maintaining CI/CD pipelines for smooth deployments. Collaboration with DevOps teams for infrastructure management will be essential, along with maintaining high-quality documentation and following version control practices. To excel in this role, you must have a minimum of 5-8 years of backend development experience with Python. It would be advantageous to have experience with backend frameworks like Node.js/Typescript and a strong understanding of relational databases with a focus on query optimization. Hands-on experience with GraphDB and familiarity with RESTful APIs, web service design principles, version control tools like Git, CI/CD pipelines, and DevOps practices are also required. Your problem-solving and analytical skills will be put to the test in this role, along with your excellent communication and collaboration abilities for working effectively with cross-functional teams. Adaptability to new technologies and a fast-paced work environment is crucial, and a Bachelor's degree in Computer Science, Engineering, or a related field is preferred. Familiarity with modern frameworks and libraries in Python or Node.js will be beneficial for success in this position. If you believe you are a perfect fit for this role, please send your CV, references, and cover letter to career@e2eresearch.com.,

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

Zeta Global is looking for a visionary backend developer to join the Data Cloud team and lead the evolution with Generative AI. By leveraging this cutting-edge technology, you will be responsible for developing next-generation data products that provide innovative solutions to client challenges. This role presents an exciting opportunity to immerse yourself in the marketing tech landscape, work on advanced AI projects, and pioneer their application in marketing.

In this role, you will be expected to fulfill two main responsibilities. Firstly, as a Backend Developer, you will conduct data analysis and generate outputs for Gen AI tasks while also supporting standard data analysis functions. Secondly, as a Gen AI expert, you will be tasked with understanding and translating business requirements into Gen AI-powered outputs independently.

Key Responsibilities:
- Analyze data from various sources to generate insights for Gen AI and non-Gen AI tasks.
- Collaborate effectively with cross-functional teams to ensure alignment and understanding.
- Utilize AI assistants to solve business problems and enhance user experience.
- Support product development teams in creating APIs that interact with Gen AI backend data.
- Create data flow diagrams and materials for coordination with DevOps teams.
- Manage deployment of APIs in relevant spaces like Kubernetes.
- Provide technical guidance on Gen AI best practices to UI development and data analyst teams.
- Stay updated on advancements in Gen AI and suggest implementation ideas to maximize its potential.

Requirements:
- Proven experience as a data analyst with a track record of delivering impactful insights.
- Proficiency in Gen AI platforms such as OpenAI and Gemini, and experience in creating and optimizing AI models.
- Familiarity with API deployment, data pipelines, and workflow automation.
- Strong critical thinking skills and a proactive business mindset.
- Excellent communication and collaboration abilities.

Technical Skills:
- Python
- SQL
- AWS services (Lambda, EKS)
- Apache Airflow
- CI/CD (Serverless Framework)
- Git
- Jira / Trello

Preferred Qualifications:
- Understanding of the marketing/advertising industry.
- Previous involvement in at least one Gen AI project.
- Strong programming skills in Python or similar languages.
- Background in DevOps or closely related experience.
- Proficiency in data management.

What We Offer:
- Opportunity to work on AI-powered data analysis and drive real impact.
- Agile product development timelines for immediate visibility of your work.
- Supportive work environment for continuous learning and growth.
- Competitive salary and benefits package.

Zeta Global is a leading marketing technology company known for its innovative solutions and industry leadership. Established in 2007, the company combines a vast proprietary data set with Artificial Intelligence to personalize experiences, understand consumer intent, and fuel business growth. Zeta Global's technology, powered by the Zeta Marketing Platform, enables end-to-end marketing programs for renowned brands across digital channels. With expertise in Email, Display, Social, Search, and Mobile marketing, Zeta delivers scalable and sustainable acquisition and engagement programs that drive results.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

At Lilly, the focus is on uniting caring with discovery to enhance the lives of people worldwide. As a global healthcare leader headquartered in Indianapolis, Indiana, we are committed to developing and providing life-changing medicines, advancing disease management, and contributing to communities through philanthropy and volunteerism. Our dedicated team of 35,000 employees collaborates to prioritize people and strive towards making a positive impact globally. The Enterprise Data organization at Lilly has pioneered an integrated data and analytics platform designed to facilitate the efficient processing and analysis of data sets across various environments. As part of this team, you will play a crucial role in managing, monitoring, and optimizing the flow of high-quality data to support data sharing and analytics initiatives. Your responsibilities will include monitoring data pipelines to ensure smooth data flow, managing incidents to maintain data integrity, communicating effectively with stakeholders regarding data issues, conducting root cause analysis to enhance processes, optimizing data pipeline performance, and implementing measures to ensure data accuracy and reliability. Additionally, you will be involved in cloud cost optimization, data lifecycle management, security compliance, automation, documentation, and collaboration with various stakeholders to improve pipeline performance. To excel in this role, you should possess a Bachelor's Degree in Information Technology or a related field, along with at least 5 years of work experience in Information Technology. Strong analytical, collaboration, and communication skills are essential, along with the ability to adapt to new technologies and methodologies. Proficiency in ETL processes, SQL, AWS services, CI/CD, Apache Airflow, and ITIL practices is required. Certification in AWS and experience with agile frameworks are preferred. If you are passionate about leveraging technology to drive innovation in the pharmaceutical industry and are committed to ensuring data integrity and security, we invite you to join our team in Hyderabad, India. Embrace the opportunity to contribute to Lilly's mission of making life better for individuals worldwide.,

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Analyst with 1+ years of experience in AdTech, you will be an integral part of our analytics team. Your primary role will involve analyzing large-scale advertising and digital media datasets to support business decisions. You will work with various AdTech data such as ads.txt, programmatic delivery, campaign performance, and revenue metrics. Your responsibilities will include designing, developing, and maintaining scalable data pipelines using GCP-native tools like Cloud Functions, Dataflow, and Composer. You will be required to write and optimize complex SQL queries in BigQuery for data extraction and transformation. Additionally, you will build and maintain dashboards and reports in Looker Studio to visualize key performance indicators (KPIs) and campaign performance. Collaboration with cross-functional teams including engineering, operations, product, and client teams will be crucial as you gather requirements and deliver analytics solutions. Monitoring data integrity, identifying anomalies, and working on data quality improvements will also be a part of your role. To be successful in this role, you should have a minimum of 1 year of experience in a data analytics or business intelligence role. Hands-on experience with AdTech datasets, strong proficiency in SQL (especially with Google BigQuery), and experience with building data pipelines using Google Cloud Platform (GCP) tools are essential. Proficiency in Looker Studio, problem-solving skills, attention to detail, and excellent communication skills are also required. Preferred qualifications include experience with additional visualization tools such as Tableau, Power BI, or Looker (BI), exposure to data orchestration tools like Apache Airflow (via Cloud Composer), familiarity with Python for scripting or automation, and understanding of cloud data architecture and AdTech integrations (e.g., DV360, Ad Manager, Google Ads).,
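As a hedged illustration of the BigQuery analysis work this role involves, the snippet below runs an aggregation with the `google-cloud-bigquery` Python client; the project, dataset, and table names are made up for the example, and credentials are assumed to come from application-default authentication.

```python
# Illustrative BigQuery query from Python; table and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

sql = """
    SELECT campaign_id, SUM(revenue) AS total_revenue
    FROM `my_project.adtech.campaign_performance`
    WHERE delivery_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY campaign_id
    ORDER BY total_revenue DESC
    LIMIT 10
"""

for row in client.query(sql).result():
    print(row.campaign_id, row.total_revenue)
```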

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines using Spark, specifically PySpark or Spark with Scala. Your role will involve building data ingestion and transformation frameworks for various structured and unstructured data sources. Collaboration with data analysts, data scientists, and business stakeholders is essential to understand requirements and deliver reliable data solutions. Working with large volumes of data, you will ensure quality, integrity, and consistency while optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP. Implementation of data quality checks and automation for ETL/ELT pipelines is a critical aspect of this role. Monitoring and troubleshooting data issues in production, along with performing root cause analysis, will be part of your responsibilities. Additionally, documenting technical processes, system designs, and operational procedures will be necessary. The ideal candidate for this position should have at least 3+ years of experience as a Data Engineer or in a similar role. Hands-on experience with PySpark or Spark using Scala is required, along with a strong knowledge of SQL for data querying and transformation. Experience working with any cloud platform (AWS, Azure, or GCP) and a solid understanding of data warehousing concepts and big data architecture are essential. Familiarity with version control systems like Git is also a must-have skill. In addition to the must-have skills, it would be beneficial to have experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar. Knowledge of Delta Lake, HDFS, or Kafka, familiarity with containerization tools (Docker/Kubernetes), exposure to CI/CD practices and DevOps principles, and an understanding of data governance, security, and compliance standards are considered good-to-have skills. If you meet the above requirements and are ready to join immediately, please share your details via email to nitin.patil@ust.com for quick processing. Act fast for immediate attention!,

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

Imagine what you could do here. At Apple, new ideas have a way of becoming phenomenal products, services, and customer experiences very quickly. Every single day, people do amazing things at Apple. Do you want to impact the future of Manufacturing here at Apple through cutting-edge ML techniques? This position involves a wide variety of skills and innovation, and is a rare opportunity to work on groundbreaking new applications of machine learning, research, and implementation. Ultimately, your work would have a huge impact on billions of users across the globe. You can help inspire change by using your skills to influence the supply chain of globally recognized products.

The goal of Apple's Manufacturing & Operations team is to take a vision of a product and turn it into a reality. Through the use of statistics, the scientific process, and machine learning, the team recommends and implements solutions to the most challenging problems. We're looking for experienced machine learning professionals to help us revolutionize how we manufacture Apple's amazing products. Put your experience to work in this highly visible role.

The Operations Advanced Analytics team is looking for creative and motivated hands-on individual contributors who thrive in a dynamic environment and enjoy working with multi-functional teams. As a member of our team, you will apply machine-learning algorithms to problems involving classification, regression, clustering, optimization, and other related techniques to impact and optimize Apple's supply chain and manufacturing processes. As part of this role, you will work with the team to build end-to-end machine learning systems and modules and deploy the models to our factories. You'll be collaborating with Software Engineers, Machine Learning Engineers, Operations, and Hardware Engineering teams across the company.

Minimum Qualifications:
- 3+ years of experience in machine learning algorithms, software engineering, and data mining models, with an emphasis on large language models (LLMs) or large multimodal models (LMMs).
- Master's in Machine Learning, Artificial Intelligence, Computer Science, Statistics, Operations Research, Physics, Mechanical Engineering, Electrical Engineering, or a related field.

Preferred Qualifications:
- Proven experience in LLM and LMM development, fine-tuning, and application building. Experience with agents and agentic workflows is a major plus.
- Experience with modern LLM serving and inference frameworks, including vLLM for efficient model inference and serving.
- Hands-on experience with LangChain and LlamaIndex, enabling RAG applications and LLM orchestration.
- Strong software development skills with proficiency in Python. Experienced user of ML and data science libraries such as PyTorch, TensorFlow, Hugging Face Transformers, and scikit-learn.
- Familiarity with distributed computing, cloud infrastructure, and orchestration tools such as Kubernetes, Apache Airflow (DAGs), Docker, Conductor, and Ray for LLM training and inference at scale is a plus.
- Deep understanding of transformer-based architectures (e.g., BERT, GPT, LLaMA) and their optimization for low-latency inference.
- Ability to meaningfully present results of analyses in a clear and impactful manner, breaking down complex ML/LLM concepts for non-technical audiences.
- Experience applying ML techniques in manufacturing, testing, or hardware optimization is a major plus.
- Proven experience in leading and mentoring teams is a plus.

Submit CV

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Kolkata, West Bengal

On-site

Candidates who are ready to join immediately can share their details via email for quick processing to nitin.patil@ust.com. Act fast for immediate attention!

With over 5 years of experience, the ideal candidate will be responsible for designing, developing, and maintaining scalable data pipelines using Spark, either PySpark or Spark with Scala. They will also be tasked with building data ingestion and transformation frameworks for structured and unstructured data sources. Collaboration with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions is a key aspect of the role. The candidate will work with large volumes of data to ensure quality, integrity, and consistency, optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP. Implementation of data quality checks and automation for ETL/ELT pipelines, as well as monitoring and troubleshooting data issues in production, are also part of the responsibilities. Documentation of technical processes, system designs, and operational procedures will be essential.

Must-Have Skills:
- At least 3 years of experience as a Data Engineer or in a similar role.
- Hands-on experience with PySpark or Spark using Scala.
- Strong knowledge of SQL for data querying and transformation.
- Experience working with any cloud platform (AWS, Azure, or GCP).
- Solid understanding of data warehousing concepts and big data architecture.
- Experience with version control systems like Git.

Good-to-Have Skills:
- Experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar.
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools (Docker/Kubernetes).
- Exposure to CI/CD practices and DevOps principles.
- Understanding of data governance, security, and compliance standards.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Data Engineer, you will be responsible for designing and building efficient data pipelines using Azure Databricks (PySpark). You will implement business logic for data transformation and enrichment at scale, as well as manage and optimize Delta Lake storage solutions. Additionally, you will develop REST APIs using FastAPI to expose processed data and deploy them on Azure Functions for scalable and serverless data access. You will play a key role in data orchestration by developing and managing Airflow DAGs to orchestrate ETL processes. This includes ingesting and processing data from various internal and external sources on a scheduled basis. Database management will also be part of your responsibilities, involving handling data storage and access using PostgreSQL and MongoDB, and writing optimized SQL queries to support downstream applications and analytics. Collaboration is essential in this role, as you will work cross-functionally with teams to deliver reliable, high-performance data solutions. It is important to follow best practices in code quality, version control, and documentation to ensure the success of data projects. To excel in this position, you must have at least 5 years of hands-on experience as a Data Engineer and strong expertise in Azure Cloud services. Proficiency in Azure Databricks, PySpark, and Delta Lake is required, along with solid experience in Python and FastAPI for API development. Experience with Azure Functions for serverless API deployments, managing ETL pipelines using Apache Airflow, and working with PostgreSQL and MongoDB is also necessary. Strong SQL skills and experience handling large datasets are essential. Preferred qualifications include familiarity with data ingestion from APIs or third-party data providers, experience optimizing data pipelines for performance and scalability, and a working knowledge of Azure services like Azure Data Lake and Azure Storage.,
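To sketch how processed data might be exposed through FastAPI as this posting describes, here is a minimal, self-contained example; in practice the handler would query PostgreSQL, MongoDB, or a Delta table rather than the in-memory stand-in used here, and the endpoint and metric names are illustrative.

```python
# Minimal FastAPI sketch for serving processed data; the in-memory store
# stands in for PostgreSQL/MongoDB/Delta Lake, and names are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Processed data API")


class Metric(BaseModel):
    name: str
    value: float


_STORE = {"daily_active_users": Metric(name="daily_active_users", value=1234.0)}


@app.get("/metrics/{name}", response_model=Metric)
def read_metric(name: str) -> Metric:
    # Look up a precomputed metric; a real handler would run a database query.
    if name not in _STORE:
        raise HTTPException(status_code=404, detail="metric not found")
    return _STORE[name]
```

Run locally with `uvicorn module_name:app --reload`; packaging the same app for Azure Functions would add the Functions ASGI adapter on top, without changing the route definitions.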

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

The ideal candidate ready to join immediately can share their details via email for quick processing at nitin.patil@ust.com. Act fast for immediate attention!

With over 5 years of experience, you will be responsible for designing, developing, and maintaining scalable data pipelines using Spark (PySpark or Spark with Scala). You will also build data ingestion and transformation frameworks for structured and unstructured data sources. Collaboration with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions will be a key aspect of the role. Working with large volumes of data, ensuring quality, integrity, and consistency, and optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms (AWS, Azure, or GCP) are essential responsibilities. Additionally, implementing data quality checks and automation for ETL/ELT pipelines, monitoring and troubleshooting data issues in production, and performing root cause analysis will be part of your duties. You will also be expected to document technical processes, system designs, and operational procedures.

Must-Have Skills:
- Minimum 3 years of experience as a Data Engineer or in a similar role.
- Hands-on experience with PySpark or Spark using Scala.
- Strong knowledge of SQL for data querying and transformation.
- Experience working with any cloud platform (AWS, Azure, or GCP).
- Solid understanding of data warehousing concepts and big data architecture.
- Familiarity with version control systems like Git.

Good-to-Have Skills:
- Experience with data orchestration tools such as Apache Airflow, Databricks Workflows, or similar.
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools like Docker/Kubernetes.
- Exposure to CI/CD practices and DevOps principles.
- Understanding of data governance, security, and compliance standards.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

The ideal candidate for this position should have at least 5 years of experience and must be ready to join immediately. In this role, you will be responsible for designing, developing, and maintaining scalable data pipelines using Spark, specifically PySpark or Spark with Scala. You will also be tasked with building data ingestion and transformation frameworks for structured and unstructured data sources. Collaboration with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions is a key aspect of this role. Working with large volumes of data to ensure quality, integrity, and consistency is crucial. Additionally, optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP is a significant part of the responsibilities. Implementing data quality checks and automation for ETL/ELT pipelines, monitoring and troubleshooting data issues in production, and performing root cause analysis are also essential tasks. Documentation of technical processes, system designs, and operational procedures is expected. The must-have skills for this role include at least 3 years of experience as a Data Engineer or in a similar role, hands-on experience with PySpark or Spark using Scala, strong knowledge of SQL for data querying and transformation, experience working with any cloud platform (AWS, Azure, or GCP), a solid understanding of data warehousing concepts and big data architecture, and experience with version control systems like Git. Good-to-have skills for this position include experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar, knowledge of Delta Lake, HDFS, or Kafka, familiarity with containerization tools such as Docker/Kubernetes, exposure to CI/CD practices and DevOps principles, and an understanding of data governance, security, and compliance standards. If you meet the qualifications and are interested in this exciting opportunity, please share your details via email at nitin.patil@ust.com for quick processing. Act fast to grab this immediate attention!,

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As an AI and Machine Learning Engineer at Dailoqa, you will play a crucial role in shaping the future of Financial Services clients and the company as a whole. Working directly with the founding team, you will have the opportunity to apply the latest AI techniques to address real-world problems encountered by Financial Services clients. Your responsibilities will include designing, constructing, and enhancing datasets to assess and continually improve our solutions, as well as engaging in strategy and product ideation sessions to influence our product and solution roadmap.

Key Responsibilities:
- Agentic AI Development: Build scalable multi-modal Large Language Model (LLM) based AI agents using frameworks such as LangGraph, Microsoft AutoGen, or CrewAI.
- AI Research and Innovation: Research and develop innovative solutions for relevant AI challenges such as Retrieval-Augmented Generation (RAG), semantic search, knowledge representation, tool usage, fine-tuning, and reasoning in LLMs.
- Technical Expertise: Demonstrate proficiency in a technology stack comprising Python, LlamaIndex/LangChain, PyTorch, Hugging Face, FastAPI, Postgres, SQLAlchemy, Alembic, OpenAI, Docker, Azure, TypeScript, and React.
- LLM and NLP Experience: Hands-on experience with LLMs, RAG architectures, Natural Language Processing (NLP), or applying Machine Learning to solve real-world problems.
- Dataset Development: Establish a strong track record of constructing datasets for training and/or evaluating machine learning models.
- Customer Focus: Dive deep into the domain, comprehend the problem, and concentrate on delivering value to the customer.
- Adaptability: Thrive in a fast-paced environment and demonstrate enthusiasm for joining an early-stage venture.
- Model Deployment and Management: Automate model deployment, monitoring, and retraining processes.
- Collaboration and Optimization: Collaborate with data scientists to review, refactor, and optimize machine learning code.
- Version Control and Governance: Implement version control and governance for models and data.

Required Qualifications:
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- 4-8 years of experience in MLOps, DevOps, or similar roles.
- Strong programming experience and familiarity with Python-based deep learning frameworks like PyTorch, JAX, and TensorFlow.
- Proficiency in cloud platforms (AWS, Azure, or GCP) and infrastructure-as-code tools like Terraform.

Desired Skills:
- Experience with experiment tracking and model versioning tools.
- Proficiency with the technology stack: Python, LlamaIndex/LangChain, PyTorch, Hugging Face, FastAPI, Postgres, SQLAlchemy, Alembic, OpenAI, Docker, Azure, TypeScript, React.
- Knowledge of data pipeline orchestration tools like Apache Airflow or Prefect.
- Familiarity with software testing and test automation practices.
- Understanding of ethical considerations in machine learning deployments.
- Strong problem-solving skills and ability to work in a fast-paced environment.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Maharashtra

On-site

The role of Staff Engineer - Data in SonyLIV's Digital Business is to lead the data engineering strategy, architect scalable data infrastructure, drive innovation in data processing, ensure operational excellence, and build a high-performance team to enable data-driven insights for OTT content and user engagement. This position is based in Mumbai and requires a minimum of 8 years of experience in the field. Responsibilities include defining the technical vision for scalable data infrastructure using modern technologies like Spark, Kafka, Snowflake, and cloud services, leading innovation in data processing and architecture through real-time data processing and streaming analytics, ensuring operational excellence in data systems by setting and enforcing standards for data reliability and privacy, building and mentoring a high-caliber data engineering team, collaborating with cross-functional teams, and driving data quality and business insights through automated quality frameworks and BI dashboards. The successful candidate should have 8+ years of experience in data engineering, business intelligence, and data warehousing, with expertise in high-volume, real-time data environments. They should possess a proven track record in building and managing large data engineering teams, designing and implementing scalable data architectures, proficiency in SQL, experience with object-oriented programming languages, and knowledge of A/B testing methodologies and statistical analysis. Preferred qualifications include a degree in a related technical field, experience managing the end-to-end data engineering lifecycle, working with large-scale infrastructure, familiarity with automated data lineage and auditing tools, expertise with BI and visualization tools, and advanced processing frameworks. Joining SonyLIV offers the opportunity to drive the future of data-driven entertainment by collaborating with industry professionals, working with comprehensive data sets, leveraging cutting-edge technology, and making a tangible impact on product delivery and user engagement. The ideal candidate will bring a strong foundation in data infrastructure, experience in leading and scaling data teams, and a focus on operational excellence to enhance efficiency.,
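As an illustrative sketch of the real-time processing stack this role mentions (Spark plus Kafka), the snippet below aggregates a stream of playback events with Spark Structured Streaming; the topic, broker address, and event schema are assumptions, and running it requires the `spark-sql-kafka` package on the classpath.

```python
# Illustrative Spark Structured Streaming job over a Kafka topic of playback
# events; broker, topic, and field names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ott-engagement-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "playback-events")
    .load()
)

# Kafka delivers the payload as bytes in the `value` column; parse and aggregate.
counts = (
    events.selectExpr("CAST(value AS STRING) AS json")
    .select(F.get_json_object("json", "$.content_id").alias("content_id"))
    .groupBy("content_id")
    .count()
)

query = (
    counts.writeStream.outputMode("complete")
    .format("console")          # a production sink would be a warehouse or lake table
    .option("truncate", "false")
    .start()
)
query.awaitTermination()
```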

Posted 2 weeks ago

Apply