
2514 Airflow Jobs - Page 38

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

LinkedIn

· Design, develop, and deploy machine learning models for real-world applications.
· Build and maintain scalable ML pipelines using tools like Airflow, MLflow, or Kubeflow.
· Collaborate with data scientists, data engineers, and product teams to understand business needs and translate them into ML solutions.
· Perform data preprocessing, feature engineering, and model evaluation.
· Optimize model performance and ensure robustness, fairness, and explainability.
· Monitor and maintain models in production, including retraining and performance tracking.
· Contribute to the development of internal ML tools and frameworks.
· Stay up to date with the latest research and best practices in machine learning and MLOps.
Requirements
· Strong working experience in Python.
· Hands-on experience with machine learning platforms, frameworks, and libraries.
· Strong understanding of deep learning concepts; able to conduct experiments and analyze results to optimize model performance and accuracy.
· Strong understanding of software engineering concepts to build robust and scalable AI systems.
· Ability to conceive, design, and develop NLP solutions (text representation, semantic extraction techniques, and modeling).
· Knowledge of building web apps/UI/reporting using Python packages such as Plotly Dash, Streamlit, and Panel, with a user-centric design approach, is an advantage.
· Familiarity with cloud-based AI/ML services.
Benefits
· Competitive salary and performance-based bonuses.
· Comprehensive insurance plans.
· Collaborative and supportive work environment.
· Chance to learn and grow with a talented team.
· A positive and fun work environment.
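For illustration, a minimal sketch of the kind of experiment tracking this role involves with MLflow; the dataset, experiment name, and logged values are assumptions made for the example, not details from the posting.

```python
# Minimal MLflow tracking sketch; names and data are illustrative only.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-classifier")  # hypothetical experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_param("n_estimators", 100)      # record hyperparameters
    mlflow.log_metric("accuracy", accuracy)    # record evaluation metrics
    mlflow.sklearn.log_model(model, "model")   # store the trained artifact
```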

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

LinkedIn

Company Description
Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com
Job Description
We are looking for an experienced Senior Data Engineer with a strong foundation in Python, SQL, and Spark, and hands-on expertise in AWS and Databricks. In this role, you will build and maintain scalable data pipelines and architecture to support analytics, data science, and business intelligence initiatives. You’ll work closely with cross-functional teams to drive data reliability, quality, and performance.
Responsibilities
Design, develop, and optimize scalable data pipelines using Databricks and AWS services such as Glue, S3, Lambda, and EMR, along with Databricks notebooks, workflows, and jobs. Build data lakes in AWS Databricks. Build and maintain robust ETL/ELT workflows using Python and SQL to handle structured and semi-structured data. Develop distributed data processing solutions using Apache Spark or PySpark. Partner with data scientists and analysts to provide high-quality, accessible, and well-structured data. Ensure data quality, governance, security, and compliance across pipelines and data stores. Monitor, troubleshoot, and improve the performance of data systems and pipelines. Participate in code reviews and help establish engineering best practices. Mentor junior data engineers and support their technical development.
Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 5+ years of hands-on experience in data engineering, with at least 2 years working with AWS Databricks. Strong programming skills in Python for data processing and automation. Advanced proficiency in SQL for querying and transforming large datasets. Deep experience with Apache Spark/PySpark in a distributed computing environment. Solid understanding of data modelling, warehousing, and performance optimization techniques. Proficiency with AWS services such as Glue, S3, Lambda, and EMR. Experience with version control using Git or CodeCommit. Experience with a workflow orchestration tool such as Airflow or AWS Step Functions is a plus.
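As an illustration of the PySpark-on-AWS work this role describes, here is a small hedged sketch of an ETL step; the S3 paths, column names, and table layout are assumptions made for the example.

```python
# Illustrative PySpark ETL step; paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read semi-structured JSON landed in S3 (hypothetical bucket and prefix).
raw = spark.read.json("s3://example-lake/raw/orders/")

# Basic cleansing and typing before loading into the curated zone.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Write partitioned Parquet for downstream analytics; on Databricks a Delta
# table would be the usual target.
(curated.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-lake/curated/orders/"))
```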

Posted 1 week ago

Apply

9.0 - 14.0 years

0 Lacs

Pune, Maharashtra, India

On-site

LinkedIn

The Applications Development Senior Programmer Analyst (data engineering senior programmer analyst) is an intermediate-level position responsible for participation in the establishment and implementation of new or revised data platform ecosystems and programs in coordination with the Technology team. The overall objective of this role is to contribute to the data engineering scrum team and implement the business requirements. Responsibilities: Build and maintain batch or real-time data pipelines in the data platform. Maintain and optimize the data infrastructure required for accurate extraction, transformation, and loading of data from a wide variety of data sources. Develop ETL (extract, transform, load) processes to help extract and manipulate data from multiple sources. Monitor and control all phases of the development process, including analysis, design, construction, testing, and implementation, and provide user and operational support on applications to business users. Automate data workflows such as data ingestion, aggregation, and ETL processing. Prepare raw data in Data Warehouses into a consumable dataset for both technical and non-technical stakeholders. Build, maintain, and deploy data products for analytics and data science teams on the data platform. Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures. Monitor data systems performance and implement optimization solutions. Has the ability to operate with a limited level of direct supervision. Can exercise independence of judgement and autonomy. Acts as SME to senior stakeholders and/or other team members. Serve as advisor or coach to new or lower-level analysts. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. Qualifications: 9 to 14 years of relevant experience in a data engineering role. Advanced SQL/RDBMS skills and experience with relational databases and database design. Strong proficiency in object-oriented languages; Python and PySpark are a must. Experience working with big data technologies: Hive/Impala/S3/HDFS. Experience working with data ingestion tools such as Talend or Ab Initio. Experience with data lakehouse architectures such as AWS Cloud/Airflow/Starburst/Iceberg is nice to have. Strong proficiency in scripting languages like Bash and UNIX shell scripting. Strong proficiency in data pipeline and workflow management tools. Strong project management and organizational skills. Excellent problem-solving, communication, and organizational skills. Proven ability to work independently and with a team. Experience in managing and implementing successful projects. Ability to adjust priorities quickly as circumstances dictate. Consistently demonstrates clear and concise written and verbal communication. Education: Bachelor’s degree/University degree or equivalent experience ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above.
------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Gurugram, Haryana, India

On-site

LinkedIn

What impact will you make? Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration and high performance. As the undisputed leader in professional services, Deloitte is where you will find unrivaled opportunities to succeed and realize your full potential. The Team Deloitte’s practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management and next-generation analytics and technologies, including big data, cloud, cognitive and machine learning. Learn more about Analytics and Information Management Practice. As a Data Engineer, you will bring extensive expertise in data handling and curation to the team. You’ll be responsible for building intelligent domains using market-leading tools, ultimately improving the way we work in Marketing. Experience It is expected that the role holder will most likely have the following qualifications and experience: 5-9 years in Data Engineering and software development, such as ELT/ETL, data extraction and manipulation in Data Lake/Data Warehouse environments. Expert-level hands-on experience with the following: Python, SQL; PySpark; DBT and Apache Airflow; Postgres/other RDBMS; DevOps, Jenkins, CI/CD; Data Governance and Data Quality frameworks; Data Lakes, Data Warehouses; AWS services including S3, SNS, SQS, Lambda, EMR, Glue, Athena, EC2, VPC etc.; Source code control - GitHub, VSTS etc. Key Tasks, Accountabilities and Challenges of this role: Design, develop, test, deploy, maintain and improve software. Preparing and maintaining systems and program documentation. Assisting in the analysis and development of applications programs and databases. Modifying and troubleshooting applications programs. Coaching, mentoring, and guiding junior developers and engineers. Provide key fail-and-fix support for assigned applications. Undertake complex testing activities in relation to software solutions. Our purpose Deloitte is led by a purpose: To make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the Communities in which we live and work—always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world.
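To ground the DBT-plus-Airflow combination listed above, here is a hedged sketch of an orchestration DAG; the DAG id, schedule, extraction script, and dbt project path are invented for the example and assume Airflow 2.x.

```python
# Sketch of an Airflow DAG that runs a dbt project after extraction
# (hypothetical paths and commands; a real project would pin profiles).
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",        # Airflow 2.4+ keyword; earlier: schedule_interval
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_raw",
        bash_command="python /opt/pipelines/extract_orders.py",  # hypothetical script
    )
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/marketing",  # hypothetical project
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/marketing",
    )
    extract >> dbt_run >> dbt_test   # explicit task dependencies
```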

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

LinkedIn

About Us Zelis is modernizing the healthcare financial experience in the United States (U.S.) by providing a connected platform that bridges the gaps and aligns interests across payers, providers, and healthcare consumers. This platform serves more than 750 payers, including the top 5 health plans, BCBS insurers, regional health plans, TPAs and self-insured employers, and millions of healthcare providers and consumers in the U.S. Zelis sees across the system to identify, optimize, and solve problems holistically with technology built by healthcare experts—driving real, measurable results for clients. Why We Do What We Do In the U.S., consumers, payers, and providers face significant challenges throughout the healthcare financial journey. Zelis helps streamline the process by offering solutions that improve transparency, efficiency, and communication among all parties involved. By addressing the obstacles that patients face in accessing care, navigating the intricacies of insurance claims, and the logistical challenges healthcare providers encounter with processing payments, Zelis aims to create a more seamless and effective healthcare financial system. Zelis India plays a crucial role in this mission by supporting various initiatives that enhance the healthcare financial experience. The local team contributes to the development and implementation of innovative solutions, ensuring that technology and processes are optimized for efficiency and effectiveness. Beyond operational expertise, Zelis India cultivates a collaborative work culture, leadership development, and global exposure, creating a dynamic environment for professional growth. With hybrid work flexibility, comprehensive healthcare benefits, financial wellness programs, and cultural celebrations, we foster a holistic workplace experience. Additionally, the team plays a vital role in maintaining high standards of service delivery and contributes to Zelis’ award-winning culture. Position Overview About Zelis Zelis is a leading payments company in healthcare, guiding, pricing, explaining, and paying for care on behalf of insurers and their members. We align the interests of payers, providers, and consumers to deliver a better financial experience and more affordable, transparent care for all. Partnering with 700+ payers, supporting 4 million+ providers and 100 million members across the healthcare industry. About ZDI Zelis Data Intelligence (ZDI) is a centralized data team that partners across Zelis business units to unlock the value of data through intelligence and AI solutions. Our mission is to transform data into a strategic and competitive asset by fostering collaboration and innovation. Enable the democratization and productization of data assets to drive insights and decision-making. Develop new data and product capabilities through advanced analytics and AI-driven solutions. Collaborate closely with business units and enterprise functions to maximize the impact of data. Leverage intelligence solutions to unlock efficiency, transparency, and value across the organization. Key Responsibilities Product Expertise & Collaboration Become an expert in product areas, acting as the go-to person for stakeholders before engaging with technical data and data engineering teams. Lead the creation of clear user stories and tasks in collaboration with Engineering teams to track ongoing and upcoming work. Design, build, and own repeatable processes for implementing projects. 
Collaborate with software engineers, data engineers, data scientists, and other product teams to scope new or refine existing product features and data capabilities that increase business value, adoption, and user engagement. Understand how the product area aligns with the wider company roadmap and educate internal teams on the organization’s vision. Requirements Management & Communication Ensure consistent updates of tickets and timelines, following up with technical teams on status and roadblocks. Draft clear and concise business requirements and technical product documentation. Understand the Zelis healthcare ecosystem (e.g., claims, payments, provider and member data) and educate the company on requirements and guidelines for accessing, sharing, and requesting information to inform advanced analytics, feature enhancements, and new product innovation. Communicate with technical audiences to identify requirements, gaps, and barriers, translating needs into product features. Track key performance indicators to evaluate product performance. Qualifications Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field. 4+ years of technical experience (business analyst, data analyst, technical product, engineering, etc.) with a demonstrated ability to deliver alongside technical teams. 4+ years of direct experience with Agile methodologies and frameworks and product tools such as Jira and Confluence to author user stories, acceptance criteria, etc. Technical depth that enables you to collaborate with software engineers, data engineers and data scientists and drive technical discussions about the design of data visualizations, data models, ETLs, and deployment of data infrastructure. Understanding of Data Management, Data Engineering, API development, Cloud Engineering, Advanced Analytics, Data Science, or Product Analytics concepts, or other data/product tools such as SQL, Python, R, Spark, AWS, Azure, Airflow, Snowflake, and Power BI. Preferred Qualifications Strong communication skills, with clear verbal communication as well as explicit and mindful written communication to work with technical teams. B2B or B2C experience helpful. Familiarity with the US healthcare system. Hands-on experience with Snowflake or other cloud platforms, including data pipeline architecture, cloud-based systems, BI/analytics, and deploying data infrastructure solutions.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

LinkedIn

What impact will you make? Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration and high performance. As the undisputed leader in professional services, Deloitte is where you will find unrivaled opportunities to succeed and realize your full potential. The Team Deloitte’s practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management and next-generation analytics and technologies, including big data, cloud, cognitive and machine learning. Learn more about Analytics and Information Management Practice. Role Purpose Our purpose in the B&PB Data Engineering team is to develop and deliver best-in-class data assets and manage data domains for our Business and Private Banking customers and colleagues—seamlessly and reliably every time. We are passionate about simplicity and meeting the needs of our stakeholders, while continuously innovating to drive value. As a Data Engineer, you will bring strong expertise in data handling and curation to the team. You will be responsible for building reusable datasets and improving the way we work within BPB. This role requires a strong team player mindset and a focus on contributing to the team’s overall success. Job Title: Analyst Data Engineer Division: Business and Private Banking (BPB) Team Name: Data and Analytics, Data Analytics & Strategy Execution Reporting to (People Leader Position): Manager, Data Engineering Location: Gurgaon Core Responsibilities Manage ETL jobs and ensure data requirements from BPB reporting and business teams are met. Assist with operational data loads and support ongoing data ingestion processes. Translate business requirements into technical specifications. Work alongside senior data engineers to deliver scalable, efficient data solutions. Key Role Responsibilities Actively participate in the development, testing, deployment, monitoring, and refinement of data services. Manage and resolve incidents/problems; apply fixes and resolve systematic issues. Collaborate with stakeholders to triage issues and implement solutions that restore productivity. Risk Proactively manage risk in accordance with all policy and compliance requirements. Perform appropriate controls and adhere to all relevant processes and procedures. Promptly escalate any events, issues, or breaches as they are identified. Understand and own the risk responsibilities associated with the role. Accountabilities Build effective working relationships with BPB teams to ensure alignment with the overall Data Analytics Strategy. Deliver ETL pipelines that meet business reporting and data needs. Orchestrate and automate data workflows to ensure timely and reliable dataset delivery. Translate business goals into technical data engineering requirements. People Accountability People Accountability: Individual Contributor Number of Direct Reports: 0 Essential Capabilities Individuals with a minimum of 1–2 years of experience in a similar data engineering or technical role.
A tertiary qualification in Computer Science or a related discipline. Critical thinkers who use networks, knowledge, and data to drive better outcomes for the business and customers. Continuous improvers who challenge the status quo and advocate for better solutions. Team players who value diverse skills and perspectives. Customer-focused individuals who define problems and develop solutions based on stakeholder needs. Required Technical Skills: Experience with design, build, and implementation of data engineering pipelines using SQL, Python, Airflow, and Databricks (or Snowflake). Experience with cloud-based data solutions (preferably AWS). Familiarity with on-premises data environments such as Oracle. Strong development and performance tuning skills with RDBMS platforms including Oracle, Teradata, Snowflake, and Redshift. Our purpose Deloitte is led by a purpose: To make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the Communities in which we live and work—always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world.

Posted 1 week ago

Apply

4.0 - 9.0 years

4 - 8 Lacs

Pune

Work from Office

Naukri

Role & responsibilities
Complex SQL joins, group by, window functions
Spark execution & tuning – tasks, joins/agg, partitionBy, AQE, pruning
Airflow – job scheduling, task dependencies, debugging
DBT + data modeling – staging, intermediate, mart layers, SQL tests
Cost/performance optimization – repartition, broadcast joins, caching
Tooling basics – Git (merge conflicts), Linux, basic Python/Java, Docker
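A short, hedged PySpark sketch touching several of the topics above (broadcast joins, window functions, repartitioning, AQE); the table paths and columns are made up for illustration.

```python
# Illustrative PySpark tuning sketch; data paths and columns are hypothetical.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("tuning-demo")
    .config("spark.sql.adaptive.enabled", "true")   # Adaptive Query Execution
    .getOrCreate()
)

orders = spark.read.parquet("/data/orders")          # hypothetical large fact table
countries = spark.read.parquet("/data/countries")    # hypothetical small dimension

# Broadcast the small dimension to avoid a shuffle-heavy sort-merge join.
enriched = orders.join(F.broadcast(countries), on="country_code", how="left")

# Window function: rank each customer's orders by amount.
w = Window.partitionBy("customer_id").orderBy(F.col("amount").desc())
ranked = enriched.withColumn("order_rank", F.row_number().over(w))

# Repartition by the write key before persisting to keep file sizes sane.
(ranked.repartition("order_date")
       .write.mode("overwrite")
       .partitionBy("order_date")
       .parquet("/data/marts/orders_ranked"))
```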

Posted 1 week ago

Apply

4.0 - 8.0 years

4 - 8 Lacs

Pune

Work from Office

Naukri

JD
Complex SQL joins, group by, window functions
Spark execution & tuning – tasks, joins/agg, partitionBy, AQE, pruning
Airflow – job scheduling, task dependencies, debugging
DBT + data modeling – staging, intermediate, mart layers, SQL tests
Cost/performance optimization – repartition, broadcast joins, caching
Tooling basics – Git (merge conflicts), Linux, basic Python/Java, Docker
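For the Airflow scheduling and task-dependency skills listed above, a minimal hedged DAG sketch; the task ids, schedule, and retry policy are illustrative assumptions (Airflow 2.x assumed).

```python
# Minimal Airflow DAG illustrating scheduling, retries, and task dependencies.
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull data from source")       # placeholder work

def transform(**_):
    print("clean and aggregate")          # placeholder work

def load(**_):
    print("write to the mart layer")      # placeholder work

default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="example_elt",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",                 # run daily at 02:00
    catchup=False,
    default_args=default_args,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load    # explicit task dependencies
```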

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

LinkedIn

Job Overview: We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.
Key Responsibilities: Design, develop, test, and maintain scalable ETL data pipelines using Python. Work extensively on Google Cloud Platform (GCP) services such as: Dataflow for real-time and batch data processing; Cloud Functions for lightweight serverless compute; BigQuery for data warehousing and analytics; Cloud Composer for orchestration of data workflows (based on Apache Airflow); Google Cloud Storage (GCS) for managing data at scale; IAM for access control and security; Cloud Run for containerized applications. Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery. Implement and enforce data quality checks, validation rules, and monitoring. Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions. Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects. Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL. Document pipeline designs, data flow diagrams, and operational support procedures.
Required Skills: 4–6 years of hands-on experience in Python for backend or data engineering projects. Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.). Solid understanding of data pipeline architecture, data integration, and transformation techniques. Experience working with version control systems like GitHub and knowledge of CI/CD practices. Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).
Good to Have (Optional Skills): Experience working with the Snowflake cloud data platform. Hands-on knowledge of Databricks for big data processing and analytics. Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools.
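As a hedged illustration of the Cloud Composer and BigQuery workflow this posting describes, a small DAG that loads files from GCS into BigQuery and then runs a transformation query; the bucket, dataset, and table names are assumptions made for the example.

```python
# Cloud Composer (Airflow) sketch: GCS load into BigQuery, then a transform.
from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="gcs_to_bq_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw",
        bucket="example-landing-bucket",                       # hypothetical bucket
        source_objects=["orders/*.csv"],
        destination_project_dataset_table="example_ds.raw_orders",
        source_format="CSV",
        autodetect=True,
        write_disposition="WRITE_TRUNCATE",
    )
    transform = BigQueryInsertJobOperator(
        task_id="transform",
        configuration={
            "query": {
                "query": "SELECT order_id, SUM(amount) AS total "
                         "FROM example_ds.raw_orders GROUP BY order_id",
                "useLegacySql": False,
            }
        },
    )
    load_raw >> transform
```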

Posted 1 week ago

Apply

8.0 - 13.0 years

30 - 32 Lacs

Hyderabad

Work from Office

Naukri

Required: Bachelor's degree in computer science or engineering. 7+ years of experience with data analytics, data modeling, and database design. 5+ years of experience with Vertica. 2+ years of coding and scripting (Python, Java, Scala) and design experience. 2+ years of experience with Airflow. Experience with ELT methodologies and tools. Experience with GitHub. Expertise in tuning and troubleshooting SQL. Strong data integrity, analytical and multitasking skills. Excellent communication, problem-solving, organizational and analytical skills. Able to work independently. Additional/preferred skills: Familiar with agile project delivery processes. Knowledge of SQL and its use in data access and analysis. Ability to manage diverse projects impacting multiple roles and processes. Able to troubleshoot problem areas and identify data gaps and issues. Ability to adapt to a fast-changing environment. Experience designing and implementing automated ETL processes. Experience with the MicroStrategy reporting tool.

Posted 1 week ago

Apply

5.0 - 8.0 years

10 - 16 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Naukri

Hi, hope you are doing great! Rockline Tech Champ Ind Private Limited is a leading IT consulting and technology solutions provider for global mid-to-large enterprises. The company offers a dynamic, innovation-driven environment and is looking for experienced professionals who thrive on challenges and aim to make a real impact in a fast-paced, quality-focused team.
Requirement: AI Ops Engineer
Location: Pune and Bangalore - hybrid at office
Experience: 5-8 years
Skill Set: CI/CD pipeline orchestration with Airflow, Azure DevOps; designing and implementing cloud solutions; building MLOps on cloud (AWS, Azure); Data Science, Python, Bash, PyTorch, TensorFlow, Kubeflow, MLflow, Docker, Kubernetes, OpenShift, GitHub
Mode: Contract to Hire (initially 6-8 months on Rockline Tech Champ Ind Private Limited payroll; after that you will be permanent with the client)
Note: We need a resource who can join immediately or within 15 days. PF deduction of Rs. 3600 is mandatory.
Kindly share the details below:
1) PAN card no.:
2) DOB:
3) First name:
4) Last name:
5) Contact number:
6) Emergency contact number (relationship):
7) Current CTC:
8) Expected CTC:
9) Notice period:
10) Current location:
11) Preferred location:
12) Total / relevant experience (in years):
13) Relevant experience (in years):
14) Can join (client is looking for someone who can join immediately / within 15 days):
15) Reason for interest in a Contract to Hire role:
16) Current company (if on any payroll, please mention):
17) PF deduction mandatory: please share your UAN number

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

LinkedIn

Job Description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role, you will: Evaluate and provide technical solutions to a variety of complex and interdependent processes. Ensure data quality and accuracy by implementing data quality checks, data contracts and data governance processes. Collaborate with software teams and business analysts to understand their data requirements and deliver quality, fit-for-purpose data solutions. Lead the team to deliver the end-to-end solution. Requirements To be successful in this role, you should meet the following requirements: Lead the design, development, and maintenance of complex applications. Good knowledge of data pipelines and workflows using Apache Airflow on GCP. Integrate and manage data from various sources into BigQuery and GCS. Write and optimize complex SQL queries for data extraction, transformation, and loading (ETL/ELT) processes. Ensure data quality and consistency across all data pipelines. Troubleshoot and resolve issues related to data pipelines and workflows. Create and maintain technical documentation for data pipelines and workflows. Mentor and guide junior data engineers, providing technical leadership and support. Lead project planning, execution, and delivery, ensuring timely and successful completion of data engineering projects. Stay up-to-date with the latest industry trends and technologies in data engineering and cloud computing. Familiarity with containerization technologies like Docker and Kubernetes. Knowledge of data governance and security best practices. Experience working with Agile methodology. Experience with CI/CD pipelines and DevOps practices. You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI
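To illustrate the kind of data-quality checks the role mentions, a hedged sketch using the BigQuery client library; the project, dataset, table, and the rules themselves are assumptions made for the example.

```python
# Simple data-quality checks against BigQuery tables (illustrative only).
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

CHECKS = {
    "no_null_keys": """
        SELECT COUNT(*) AS bad_rows
        FROM `example_project.example_ds.orders`
        WHERE order_id IS NULL
    """,
    "no_negative_amounts": """
        SELECT COUNT(*) AS bad_rows
        FROM `example_project.example_ds.orders`
        WHERE amount < 0
    """,
}

for name, sql in CHECKS.items():
    # Run each rule and fail loudly if any offending rows are found.
    bad_rows = list(client.query(sql).result())[0]["bad_rows"]
    if bad_rows:
        raise ValueError(f"data-quality check '{name}' failed: {bad_rows} bad rows")
    print(f"check '{name}' passed")
```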

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

LinkedIn

Bachelor’s degree in computer science, engineering, or a related field; Master’s degree preferred.
Data: 6+ years of experience with data analytics and data warehousing. Sound knowledge of data warehousing concepts.
SQL: 6+ years of hands-on experience with SQL and query optimization for data pipelines.
ELT/ETL: 6+ years of experience in Informatica; 3+ years of experience in IICS/IDMC.
Migration Experience: Experience with Informatica on-prem to IICS/IDMC migration.
Cloud: 5+ years’ experience working in an AWS cloud environment.
Python: 5+ years of hands-on development experience with Python.
Workflow: 4+ years of experience with orchestration and scheduling tools (e.g. Apache Airflow).
Advanced Data Processing: Experience using data processing technologies such as Apache Spark or Kafka.
Troubleshooting: Experience with troubleshooting and root cause analysis to determine and remediate potential issues.
Communication: Excellent communication, problem-solving, organizational and analytical skills. Able to work independently and to provide leadership to small teams of developers.
Reporting: Experience with data reporting (e.g. MicroStrategy, Tableau, Looker) and data cataloging tools (e.g. Alation).
Experience in the design and implementation of ETL solutions with effective design and optimized performance, and ETL development with industry-standard recommendations for job recovery, failover, logging, and alerting mechanisms.

Posted 1 week ago

Apply

8.0 - 12.0 years

16 - 30 Lacs

Pune, Bengaluru, Delhi / NCR

Work from Office

Naukri

We are looking for an experienced Senior Software Engineer with deep expertise in Spark SQL / SQL development to lead the design, development, and optimization of complex database systems. As a Senior Spark SQL/SQL Developer, you will play a key role in creating and maintaining high-performance, scalable database solutions that meet business requirements and support critical applications. You will collaborate with engineering teams, mentor junior developers, and drive improvements in database architecture and performance. Key Responsibilities: Design, develop, and optimize complex Spark SQL / SQL queries, stored procedures, views, and triggers for high-performance systems. Lead the design and implementation of scalable database architectures to meet business needs. Perform advanced query optimization and troubleshooting to ensure database performance, efficiency, and reliability. Mentor junior developers and provide guidance on best practices for SQL development, performance tuning, and database design. Collaborate with cross-functional teams, including software engineers, product managers, and system architects, to understand requirements and deliver robust database solutions. Conduct code reviews to ensure code quality, performance standards, and compliance with database design principles. Develop and implement strategies for data security, backup, disaster recovery, and high availability. Monitor and maintain database performance, ensuring minimal downtime and optimal resource utilization. Contribute to long-term technical strategies for database management and integration with other systems. Write and maintain comprehensive documentation on database systems, queries, and architecture. Required Skills & Qualifications: Experience: 7+ years of hands-on experience in SQL development / data engineering or a related field. Expert-level proficiency in Spark SQL and extensive experience with Big Data (Hive), MPP (Teradata), and relational databases such as SQL Server, MySQL, or Oracle. Strong experience in database design, optimization, and troubleshooting. Deep knowledge of query optimization, indexing, and performance tuning techniques. Strong understanding of database architecture, scalability, and high-availability strategies. Experience with large-scale, high-transaction databases and data warehousing. Strong problem-solving skills with the ability to analyze complex data issues and provide effective solutions. Experience with data testing and data reconciliation. Ability to mentor and guide junior developers and promote best practices in SQL development. Proficiency in database migration, version control, and integration with applications. Excellent communication and collaboration skills, with the ability to interact with both technical and non-technical stakeholders. Preferred Qualifications: Experience with NoSQL databases (e.g., MongoDB, Cassandra) and cloud-based databases (e.g., AWS RDS, Azure SQL Database). Familiarity with data analytics, ETL processes, and data pipelines. Experience in automation tools, CI/CD pipelines, and agile methodologies. Familiarity with programming languages such as Python, Java, or C#. Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
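A brief, hedged Spark SQL sketch in the spirit of the optimization topics above: a broadcast join hint and a quick look at the physical plan; the temp views and columns are invented for illustration.

```python
# Illustrative Spark SQL tuning snippet; view and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparksql-tuning").getOrCreate()

spark.read.parquet("/data/transactions").createOrReplaceTempView("transactions")
spark.read.parquet("/data/merchants").createOrReplaceTempView("merchants")

query = """
    SELECT /*+ BROADCAST(m) */
           t.merchant_id,
           m.merchant_name,
           SUM(t.amount) AS total_amount
    FROM transactions t
    JOIN merchants m
      ON t.merchant_id = m.merchant_id
    GROUP BY t.merchant_id, m.merchant_name
"""

result = spark.sql(query)
result.explain()   # inspect the physical plan (expect a BroadcastHashJoin)
result.show(10)
```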

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

LinkedIn

Position Description Core Skills: Bachelor's in Computer Science, Computer Engineering or a related field. 5+ yrs. development experience with Spark (PySpark), Python and SQL. Extensive knowledge of building data pipelines. Hands-on experience with Databricks development. Strong experience developing on Linux OS. Experience with scheduling and orchestration (e.g. Databricks Workflows, Airflow, Prefect, Control-M). Solid understanding of distributed systems, data structures, design principles. Agile development methodologies (e.g. SAFe, Kanban, Scrum). Comfortable communicating with teams via showcases/demos. Play a key role in establishing and implementing migration patterns for the Data Lake Modernization project. Actively migrate use cases from our on-premises Data Lake to Databricks on GCP. Collaborate with Product Management and business partners to understand use case requirements and reporting. Adhere to internal development best practices/lifecycle (e.g. testing, code reviews, CI/CD, documentation). Document and showcase feature designs/workflows. Participate in team meetings and discussions around product development. Stay up to date on the latest industry trends and design patterns. Secondary Skills: 3+ years experience with Git. 3+ years experience with CI/CD (e.g. Azure Pipelines). Experience with streaming technologies, such as Kafka and Spark. Experience building applications on Docker and Kubernetes. Cloud experience (e.g. Azure, Google). Your future duties and responsibilities Required Qualifications To Be Successful In This Role Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.

Posted 1 week ago

Apply

3.0 - 8.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Naukri

As a Senior People Data Ops Product Manager, you will own enterprise data products and assets, such as curated data sets, semantic layers and foundational data pipelines of the HR pyramid. You will drive product strategy, discovery, delivery, and evolution of these data products to ensure they meet the analytical, operational, and compliance needs of Target's diverse user base. About the Role As a Senior People Data Ops Product Manager, you will work in Target's product model and partner closely with engineers, data scientists, UX designers, governance and privacy experts, and business stakeholders to build and scale data products that deliver measurable outcomes. You will be accountable for understanding customer needs and business objectives, and translating them into a clear roadmap of capabilities that drive adoption and impact. You will: Define the vision, strategy, and roadmap for one or more data products, aligning with enterprise data and business priorities. Deeply understand your users (analysts, data scientists, engineers, and business leaders) and their data needs. Translate complex requirements into clear user stories, acceptance criteria, and product specifications. Drive decisions about data sourcing, quality, access, and governance in partnership with engineering, privacy, and legal teams. Prioritize work in a unified backlog across discovery, design, data modeling, engineering, and testing. Ensure high-quality, reliable, and trusted data is accessible and usable for a variety of analytical and operational use cases. Evangelize the value of your data product across the enterprise and support enablement and adoption efforts. Use data to make decisions about your product's performance, identify improvements, and evaluate new opportunities. About You Must have a minimum 3-year college degree in computer science or information technology. A total of 9+ years of experience, of which 5+ years is product management experience, ideally with a focus on data products, platforms, or analytical tooling. Deep understanding of data concepts: data modeling, governance, quality, privacy, and lifecycle management. Experience delivering products in agile environments (e.g., user stories, iterative development, scrum teams). Ability to translate business needs into technical requirements and communicate effectively across roles. Demonstrated success in building products that support data consumers like analysts, engineers, and business users. Experience working with modern data technologies (e.g., Snowflake, Hadoop, Airflow, GCP, etc.) is a plus. Strategic thinker with strong analytical and problem-solving skills. Strong leadership, collaboration, and communication skills. Willing to coach and mentor team members.

Posted 1 week ago

Apply

6.0 - 11.0 years

18 - 25 Lacs

Hyderabad

Work from Office

Naukri

SUMMARY Data Modeling Professional. Location: Hyderabad/Pune. Experience: The ideal candidate should possess at least 6 years of relevant experience in data modeling with proficiency in SQL, Python, PySpark, Hive, ETL, Unix, and Control-M (or similar scheduling tools), along with GCP. Key Responsibilities: Develop and configure data pipelines across various platforms and technologies. Write complex SQL queries for data analysis on databases such as SQL Server, Oracle, and Hive. Create solutions to support AI/ML models and generative AI. Work independently on specialized assignments within project deliverables. Provide solutions and tools to enhance engineering efficiencies. Design processes, systems, and operational models for end-to-end execution of data pipelines. Preferred Skills: Experience with GCP, particularly Airflow, Dataproc, and BigQuery, is advantageous. Requirements: Strong problem-solving and analytical abilities. Excellent communication and presentation skills. Ability to deliver high-quality materials against tight deadlines. Effective under pressure with rapidly changing priorities. Note: The ability to communicate efficiently at a global level is paramount. --- Minimum 6 years of experience in data modeling with SQL, Python, PySpark, Hive, ETL, Unix, Control-M (or similar scheduling tools). Proficiency in writing complex SQL queries for data analysis. Experience with GCP, particularly Airflow, Dataproc, and BigQuery, is an advantage. Strong problem-solving and analytical abilities. Excellent communication and presentation skills. Ability to work effectively under pressure with rapidly changing priorities.

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

LinkedIn

We are looking for a Data Engineer to join our Data Management team. If you are passionate about data, keep up to date with new technologies, and have a good eye for detail, this is the role for you. This role will form a major part of our Data and Technology department: you will be responsible for expanding and optimizing our data and data pipeline architecture for reporting as well as ML and measurement models, ensuring that our team has the data available to drive our clients’ business forward. You will support our GCP Engineers in using Google Cloud Platform to build rich databases, transforming and moulding data from a variety of sources, such as Google Analytics, Campaign Manager, Search Ads 360, and client data. This data will build the foundations for us to provide industry-leading solutions to transform our clients’ marketing and media activity. You will also support our Data Scientist in answering more ad hoc queries, building statistical and machine learning models, and designing data visualisations based on client-specific needs. This is an exciting data-driven role, and we’re looking for someone ready to take their first step into data and cloud solutions. Solving complex problems is something you do for fun. You possess great attention to detail and a precise eye for presenting data in strong, visually striking ways. You will have the determination to advance our product offering and look to create the new and the different to consistently push our business forward, constantly coming up with new ideas and helping us to add even more value to our clients. This is a rare and exciting opportunity to become part of one of the most innovative and progressive media agencies in the world! As a core member of the team you will have the opportunity to help define our product, work with our industry-leading clients, and the chance to build your career and your future the way you want it! 3 Best Things About The Job: You will learn a lot of different skills and be exposed to lots of new tools. A supportive working environment that promotes a healthy work-life harmony. Continuous mentorship and learning to support both career and personal development. Measures Of Success: In 3 months, you would have: A good understanding of our internal/client GCP architecture, current tools and processes, and the projects the wider team are working on. In 6 months, you would have: Started a learning path towards a GCP certification. Started developing a new internal product/tool. In 12 months, you will be: Supporting senior members of the team with projects. Taking ownership of a few key internal processes. Helping to develop the next generation of products and services. Running internal training sessions to help upskill junior members of the team. Responsibilities Of The Role: Aid in the establishment, configuration, and upkeep of Google Cloud Platform (GCP) and other cloud services. Monitor the cloud infrastructure for performance and security issues, taking proactive measures to address any concerns. Work in conjunction with senior engineers to design and deploy scalable and dependable cloud solutions. Troubleshoot and rectify technical problems related to cloud services and applications. Assist in the application of security best practices to safeguard cloud resources and data. Participate in regular backup and disaster recovery planning and testing. Stay abreast of the latest trends and advancements in cloud technologies. Document procedures and configurations to facilitate knowledge sharing within the team.
Provide technical support and assistance to stakeholders. What You Will Need: Understanding of cloud computing concepts and technologies, with a keen interest in Google Cloud Platform (GCP). Familiarity with cloud service providers, such as Microsoft Azure, AWS, or GCP, is advantageous. Proficient programming skills in one or more languages (preferably Python). Familiarity with containerisation tools such as Docker. Experience using batch data workflow tools such as Airflow. Comfortable working in an agile environment. Understanding of networking fundamentals, including TCP/IP, DNS, and VPN. Strong problem-solving abilities and the capacity to work effectively in a team. Excellent communication and interpersonal skills. Eagerness to learn and adapt to new technologies. Ability to hit the ground running and quickly adapt to new tasks. Requisition ID: 39846

Posted 1 week ago

Apply

4.0 - 9.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Naukri

Understanding of design; configuring infrastructure based on a provided design; managing GCP infrastructure using Terraform. Automate the provisioning, configuration, and management of GCP resources, including Compute Engine, Cloud Storage, Cloud SQL, Spanner, Kubernetes Engine (GKE), and serverless offerings like Cloud Functions and Cloud Run. Manage and configure GCP service accounts, IAM roles, and permissions to ensure secure access to resources. Implement and manage load balancers (HTTP(S), TCP/UDP) for high availability and scalability. Develop and maintain CI/CD pipelines using Cloud Build, GitHub Actions or similar tools. Monitor and optimize the performance and availability of our GCP infrastructure. Primary Skills: Terraform, CI/CD pipelines, IaC, Docker, Kubernetes. Secondary Skills: AWS, Azure, GitHub.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

LinkedIn

About Us Innovation. Sustainability. Productivity. This is how we are Breaking New Ground in our mission to sustainably advance the noble work of farmers and builders everywhere. With a growing global population and increased demands on resources, our products are instrumental to feeding and sheltering the world. From developing products that run on alternative power to productivity-enhancing precision tech, we are delivering solutions that benefit people – and they are possible thanks to people like you. If the opportunity to build your skills as part of a collaborative, global team excites you, you’re in the right place. Grow a Career. Build a Future! Be part of this company at the forefront of agriculture and construction, that passionately innovates to drive customer efficiency and success. And we know innovation can’t happen without collaboration. So, everything we do at CNH Industrial is about reaching new heights as one team, always delivering for the good of our customers. Job Purpose The CFD Analysis Engineer will be responsible for providing fluid/thermal analysis for agricultural (tractors, combines, harvesters, sprayers) and construction machines (excavators, wheel loaders, loader backhoes). As a member of the CFD Team, he will be supporting the design of components and subsystems like: A/C & HVAC systems Engine cooling packages Hydraulics Transmissions Engine air intakes & exhausts Key Responsibilities Develops virtual simulation models using CFD (Computational Fluid Dynamics) for the evaluation of engineering designs of agricultural and construction machinery. Makes recommendations to peers and direct manager based on sound engineering principles, practices and judgment pertaining to thermal/fluid problems as a contribution to the overall engineering and manufacturing objectives. Utilizes Star CCM+, Ensight, ANSYS Fluent, GT-Power, Actran Creo, TeamCenter and relevant software to develop and simulate designs for cooling packages, exhaust systems, engine air intakes, HVAC systems, transmissions, and other relevant components being developed and/or improved. Performs engineering calculations for emissions, chemical reactions, sprays, thermal, airflow, hydraulic, aero-acoustic, particle flows, and refrigeration problems to determine the size and performance of assemblies and parts and to solve design problems. Incorporates engineering standards, methodologies and global product development processes into daily work tasks. Experience Required MS Degree in Engineering or comparable program, with 8 years of professional industry experience. Good knowledge of the Computational Fluid Dynamics field. Some knowledge in the areas of underhood engine cooling, two-phase flows, and climatization. Knowledge of exhaust after treatment analysis; familiarity with SCR (Selective Catalytic Reactors), DPF (Diesel Particulate Filters), or DOC (Diesel Oxidation Catalysts). Some basic knowledge and understanding of aero-acoustics and fan noise Preferred Qualifications Master’s degree in mechanical engineering from reputed institute Doctoral degree (Ph.D.) is a plus What We Offer We offer dynamic career opportunities across an international landscape. As an equal opportunity employer, we are committed to delivering value for all our employees and fostering a culture of respect. Benefits At CNH, we understand that the best solutions come from the diverse experiences and skills of our people. Here, you will be empowered to grow your career, to follow your passion, and help build a better future. 
To support our employees, we offer regional comprehensive benefits, including: flexible work arrangements, savings & retirement benefits, tuition reimbursement, parental leave, adoption assistance, fertility & family building support, Employee Assistance Programs, charitable contribution matching, and Volunteer Time Off.

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

LinkedIn

We are seeking a highly skilled and motivated MLOps Site Reliability Engineer (SRE) to join our team. In this role, you will be responsible for ensuring the reliability, scalability, and performance of our machine learning infrastructure. You will work closely with data scientists, machine learning engineers, and software developers to build and maintain robust and efficient systems that support our machine learning workflows. This position offers an exciting opportunity to work on cutting-edge technologies and make a significant impact on our organization's success. Design, implement, and maintain scalable and reliable machine learning infrastructure. Collaborate with data scientists and machine learning engineers to deploy and manage machine learning models in production. Develop and maintain CI/CD pipelines for machine learning workflows. Monitor and optimize the performance of machine learning systems and infrastructure. Implement and manage automated testing and validation processes for machine learning models. Ensure the security and compliance of machine learning systems and data. Troubleshoot and resolve issues related to machine learning infrastructure and workflows. Document processes, procedures, and best practices for machine learning operations. Stay up-to-date with the latest developments in MLOps and related technologies. Qualifications Required: Bachelor's degree in Computer Science, Engineering, or a related field. Proven experience as a Site Reliability Engineer (SRE) or in a similar role. Strong knowledge of machine learning concepts and workflows. Proficiency in programming languages such as Python, Java, or Go. Experience with cloud platforms such as AWS, Azure, or Google Cloud. Familiarity with containerization technologies like Docker and Kubernetes. Experience with CI/CD tools such as Jenkins, GitLab CI, or CircleCI. Strong problem-solving skills and the ability to troubleshoot complex issues. Excellent communication and collaboration skills. Preferred: Master's degree in Computer Science, Engineering, or a related field. Experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn. Knowledge of data engineering and data pipeline tools such as Apache Spark, Apache Kafka, or Airflow. Experience with monitoring and logging tools such as Prometheus, Grafana, or the ELK stack. Familiarity with infrastructure as code (IaC) tools like Terraform or Ansible. Experience with automated testing frameworks for machine learning models. Knowledge of security best practices for machine learning systems and data.
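As a hedged example of the monitoring side of this role, a small sketch that exposes model-serving metrics with the Prometheus client library; the metric names and the stand-in predict function are assumptions made for the example.

```python
# Expose model-serving metrics for Prometheus scraping (illustrative only).
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("model_predictions_total", "Number of predictions served")
LATENCY = Histogram("model_prediction_latency_seconds", "Prediction latency")

def predict(features):
    time.sleep(random.uniform(0.01, 0.05))   # stand-in for real model inference
    return sum(features)

@LATENCY.time()                              # record latency of each request
def handle_request(features):
    PREDICTIONS.inc()                        # count every prediction served
    return predict(features)

if __name__ == "__main__":
    start_http_server(8000)                  # metrics exposed at :8000/metrics
    while True:
        handle_request([random.random() for _ in range(4)])
```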

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


By clicking the “Apply” button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description:

The Future Begins Here
At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, the city which is India’s epicenter of innovation, has been selected to be home to Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda’s ICC We Unite in Diversity
Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for their backgrounds and the abilities they bring to our company. We are continuously improving our collaborators’ journey in Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

The Opportunity
As a Data Engineer you will build and maintain data systems and construct datasets that are easy to analyse and that support Business Intelligence requirements as well as downstream systems.

Responsibilities:
- Develops and maintains scalable data pipelines and builds out new integrations using AWS native technologies to support continuing increases in data source, volume, and complexity.
- Automates and manages job scheduling for ETL processes across various applications and platforms within the enterprise.
- Designs, implements, and maintains robust CI/CD pipelines to automate software deployments and data pipelines.
- Implements processes and systems to drive data reconciliation and monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes that depend on it.
- Performs the data analysis required to troubleshoot data-related issues and assists in their resolution.
- Works closely with enterprise teams including Enterprise Architecture, Security, and Enterprise Data Backbone Engineering to design and develop data integration patterns/solutions along with proper data models supporting different data and analytics use cases.

Skills and Qualifications:
- Bachelor’s degree, from an accredited institution, in Engineering, Computer Science, or a related field.
- 5+ years of total experience with data integration tools, workflow automation, and DevOps.
- 3+ years of experience with Tidal Automation and Tidal Repository for scheduling jobs and managing batch processes across different environments.
- Proficiency in Amazon Managed Workflows for Apache Airflow (MWAA) to automate workflows and data pipelines.
- Strong experience with CI/CD tools such as GitHub, GitLab, or JFrog.
- Strong programming and scripting skills in Python.
- Focus on AWS native services and on optimizing the data landscape through the adoption of these services.
- Experience with Agile development methodologies.
- Curiosity and adaptability to learn new technologies and improve existing processes.
- Excellent written and verbal communication skills, including the ability to interact effectively with multifunctional teams.

BENEFITS:
It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career. Amongst our benefits are:
- Competitive Salary + Performance Annual Bonus
- Flexible work environment, including hybrid working
- Comprehensive Healthcare Insurance Plans for self, spouse, and children
- Group Term Life Insurance and Group Accident Insurance programs
- Health & Wellness programs including annual health screening and weekly health sessions for employees
- Employee Assistance Program
- 3 days of leave every year for Voluntary Service, in addition to Humanitarian Leaves
- Broad variety of learning platforms
- Diversity, Equity, and Inclusion Programs
- Reimbursements – Home Internet & Mobile Phone
- Employee Referral Program
- Leaves – Paternity Leave (4 weeks), Maternity Leave (up to 26 weeks), Bereavement Leave (5 calendar days)

ABOUT ICC IN TAKEDA:
Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Description

- Fair knowledge of programming languages like Python/SQL and advanced SQL
- Good business knowledge of areas such as the sales cycle, inventory, and supply applications
- Fair knowledge of Snowflake, GitHub, DBT, Sigma/DOMO, Airflow, HVR, and SAP
- Ability to identify, analyze, and resolve problems logically
- Ability to troubleshoot and identify root cause

Responsibilities

- Maintain production systems reliability through correct utilization of IT standards and governance processes
- Collaborate with business/functional teams; develop detailed plans and accurate estimates for completion of build, system testing, and implementation of projects
- Review program code and suggest corrections in case of any errors
- Conduct performance tuning to improve system performance over multiple business processes
- Monitor and set up jobs in test/sandbox systems for testing
- Make sure the correct test data is arranged for checking reports
- Work with the business to check on testing and sign-off of the reports

Qualifications

- Snowflake: SQL, DBT, Airflow, GitHub
- SAP and HVR: SAP integration with HVR
- Sigma & DOMO for reporting
- Proficiency in Python
- Understanding of Airflow architecture and concepts
- Experience with SQL and database design

About Us

Bristlecone is the leading provider of AI-powered application transformation services for the connected supply chain. We empower our customers with speed, visibility, automation, and resiliency – to thrive on change. Our transformative solutions in Digital Logistics, Cognitive Manufacturing, Autonomous Planning, Smart Procurement and Digitalization are positioned around key industry pillars and delivered through a comprehensive portfolio of services spanning digital strategy, design and build, and implementation across a range of technology platforms. Bristlecone is ranked among the top ten leaders in supply chain services by Gartner. We are headquartered in San Jose, California, with locations across North America, Europe and Asia, and over 2,500 consultants. Bristlecone is part of the $19.4 billion Mahindra Group.

Equal Opportunity Employer

Bristlecone is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Information Security Responsibilities

- Understand and adhere to Information Security policies, guidelines and procedures, and practice them for the protection of organizational data and information systems.
- Take part in information security training and act accordingly while handling information.
- Report all suspected security and policy breaches to the InfoSec team or the appropriate authority (CISO).
- Understand and adhere to the additional information security responsibilities that are part of the assigned job role.

Posted 1 week ago

Apply

7.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Job Description

The leader must demonstrate an ability to anticipate, understand, and act on evolving customer needs, both stated and unstated. Through this, the candidate must create a customer-centric organization and use innovative thinking frameworks to foster value-added relations. With the right balance of bold initiatives, continuous improvement, and governance, the leader must adhere to the delivery standards set by the client and eClerx by leveraging the knowledge of market drivers and competition to effectively anticipate trends and opportunities. Besides, the leader must demonstrate a capacity to transform, align, and energize organization resources, and take appropriate risks to lead the organization in a new direction. As a leader, the candidate must build engaged and high-impact direct, virtual, and cross-functional teams, take the lead in raising the performance bar, build capability, and bring out the best in their teams. By collaborating and forging partnerships both within and outside the functional area, the leader must work towards a shared vision and achieve positive business outcomes.

Associate Program Manager – Roles and Responsibilities

- Understand the client’s requirements and provide effective and efficient solutions in Snowflake.
- Understand data transformation and translation requirements and which tools to leverage to get the job done.
- Ability to do Proofs of Concept (POCs) in areas that need R&D on cloud technologies.
- Understand data pipelines and modern ways of automating data pipelines using cloud-based tools.
- Test and clearly document implementations, so others can easily understand the requirements, implementation, and test conditions.
- Perform data quality testing and assurance as part of designing, building, and implementing scalable data solutions in SQL.

Technical and Functional Skills

- Master’s/Bachelor’s degree in Engineering, Analytics, or a related field.
- Total 7+ years of experience, with ~4+ years of relevant hands-on experience with Snowflake utilities – SnowSQL, Snowpipe, Time Travel, Replication, Zero-Copy Cloning.
- Strong working knowledge of Python.
- Understanding of data transformation and translation requirements and which tools to leverage to get the job done.
- Understanding of data pipelines and modern ways of automating data pipelines using cloud-based tools; testing and clearly documenting implementations, so others can easily understand the requirements, implementation, and test conditions.
- In-depth understanding of data warehouses and ETL tools.
- Perform data quality testing and assurance as part of designing, building, and implementing scalable data solutions in SQL.
- Experience with Snowflake APIs is mandatory.
- Strong knowledge of scheduling and monitoring using Airflow DAGs.
- Strong experience in writing SQL queries, joins, stored procedures, and user-defined functions.
- Sound knowledge of data architecture and design.
- Hands-on experience in developing Python scripts for data manipulation.
- SnowPro Core certification.
- Experience developing scripts using Unix, Python, etc.

About Us

At eClerx, we serve some of the largest global companies – 50 of the Fortune 500 clients. Our clients call upon us to solve their most complex problems and deliver transformative insights. Across roles and levels, you get the opportunity to build expertise, challenge the status quo, think bolder, and help our clients seize value.

About The Team

eClerx is a global leader in productized services, bringing together people, technology and domain expertise to amplify business results. Our mission is to set the benchmark for client service and success in our industry. Our vision is to be the innovation partner of choice for technology, data analytics and process management services. Since our inception in 2000, we've partnered with top companies across various industries, including financial services, telecommunications, retail, and high-tech. Our innovative solutions and domain expertise help businesses optimize operations, improve efficiency, and drive growth. With over 18,000 employees worldwide, eClerx is dedicated to delivering excellence through smart automation and data-driven insights. At eClerx, we believe in nurturing talent and providing hands-on experience.

eClerx is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability or protected veteran status, or any other legally protected basis, per applicable law.

Posted 1 week ago

Apply

5.0 - 15.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Dear Associate,

Greetings from Tata Consultancy Services!!

Thank you for expressing your interest in exploring a career possibility with the TCS Family. We have a job opportunity for AWS Data Science with DevOps and Databricks at Tata Consultancy Services on 14th June 2025.

Hiring For: AWS Data Science with DevOps and Databricks
Mandatory Skills: SQL, PySpark/Python, AWS, Databricks, Snowflake, S3, EMR, EC2, Airflow, Lambda
Experience: 5-15 years
Mode of interview: in-person walk-in drive
Date of interview: 14 June 2025
Venue: Zone 3 Auditorium, Tata Consultancy Services, Sahyadri Park, Rajiv Gandhi Infotech Park, Hinjewadi Phase 3, Pune – 411057

If you are interested in this exciting opportunity, please share your updated resume on jeena.james1@tcs.com along with the additional information mentioned below:
Name:
Preferred Location:
Contact No:
Email id:
Highest Qualification:
Current Organization:
Total Experience:
Relevant Experience in DevOps with Databricks:
Current CTC:
Expected CTC:
Notice Period:
Gap Duration:
Gap Details:
Attended interview with TCS in past (details):
Please share your iBegin portal EP ID if already registered:
Willing to attend walk-in on 14th June: (Yes/No)

Note: Only eligible candidates with relevant experience will be contacted further.

Thanks & Regards,
Jeena James
Website: http://www.tcs.com
Email: jeena.james1@tcs.com
Human Resource - Talent Acquisition Group, Tata Consultancy Services

Posted 1 week ago

Apply

Exploring Airflow Jobs in India

The airflow job market in India is rapidly growing as more companies are adopting data pipelines and workflow automation. Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with expertise in airflow can find lucrative opportunities in various industries such as technology, e-commerce, finance, and more.
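For job seekers new to the tool, it helps to see what "orchestrating a workflow" looks like in practice. The sketch below is a minimal, illustrative Airflow DAG with two dependent tasks; the DAG id, task ids, and the extract/load functions are hypothetical placeholders, and the schedule argument assumes Airflow 2.4 or newer (older releases use schedule_interval instead).

```python
# Minimal illustrative DAG: extract data, then load it, once per day.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull data from a source system.
    return [1, 2, 3]


def load(**context):
    # Placeholder: read the upstream result from XCom and report it.
    rows = context["ti"].xcom_pull(task_ids="extract")
    print(f"Loaded {len(rows)} rows")


with DAG(
    dag_id="example_daily_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # run once per day
    catchup=False,                     # do not backfill past runs
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # load runs only after extract succeeds
```

Even this small example touches several interview staples: scheduling, task dependencies, and passing data between tasks via XCom.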

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Gurgaon

Average Salary Range

The average salary range for airflow professionals in India varies based on experience levels:
- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum

Career Path

In the field of airflow, a typical career path may progress as follows:
- Junior Airflow Developer
- Airflow Developer
- Senior Airflow Developer
- Airflow Tech Lead

Related Skills

In addition to airflow expertise, professionals in this field are often expected to have or develop skills in:
- Python programming
- ETL concepts
- Database management (SQL)
- Cloud platforms (AWS, GCP)
- Data warehousing

Interview Questions

  • What is Apache Airflow? (basic)
  • Explain the key components of Airflow. (basic)
  • How do you schedule a DAG in Airflow? (basic)
  • What are the different operators in Airflow? (medium)
  • How do you monitor and troubleshoot DAGs in Airflow? (medium)
  • What is the difference between Airflow and other workflow management tools? (medium)
  • Explain the concept of XCom in Airflow. (medium)
  • How do you handle dependencies between tasks in Airflow? (medium)
  • What are the different types of sensors in Airflow? (medium)
  • What is a Celery Executor in Airflow? (advanced)
  • How do you scale Airflow for a high volume of tasks? (advanced)
  • Explain the concept of SubDAGs in Airflow. (advanced)
  • How do you handle task failures in Airflow? (advanced)
  • What is the purpose of a TriggerDagRun operator in Airflow? (advanced)
  • How do you secure Airflow connections and variables? (advanced)
  • Explain how to create a custom Airflow operator. (advanced; see the sketch after this list)
  • How do you optimize the performance of Airflow DAGs? (advanced)
  • What are the best practices for version controlling Airflow DAGs? (advanced)
  • Describe a complex data pipeline you have built using Airflow. (advanced)
  • How do you handle backfilling in Airflow? (advanced)
  • Explain the concept of DAG serialization in Airflow. (advanced)
  • What are some common pitfalls to avoid when working with Airflow? (advanced)
  • How do you integrate Airflow with external systems or tools? (advanced)
  • Describe a challenging problem you faced while working with Airflow and how you resolved it. (advanced)
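A few of the advanced questions above (custom operators, XCom) are easier to answer with a concrete snippet in hand. The following is a hedged sketch rather than code from any real project: the operator name and the fetch_rows callable are invented for illustration, but the overall pattern (subclass BaseOperator, implement execute(), and let the return value flow to XCom) is the standard way custom operators are written.

```python
# Illustrative custom operator; names are hypothetical.
from airflow.models.baseoperator import BaseOperator


class RowCountOperator(BaseOperator):
    """Counts the rows produced by a callable and pushes the count to XCom."""

    def __init__(self, fetch_rows, **kwargs):
        super().__init__(**kwargs)
        self.fetch_rows = fetch_rows  # any callable returning an iterable

    def execute(self, context):
        rows = list(self.fetch_rows())
        self.log.info("Fetched %d rows", len(rows))
        # The value returned from execute() is pushed to XCom by default,
        # so a downstream task can read it with xcom_pull(task_ids=...).
        return len(rows)
```

Inside a DAG, such an operator would be instantiated like any built-in one, for example RowCountOperator(task_id="count_rows", fetch_rows=some_query_function), where some_query_function is again a placeholder.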

Closing Remark

As you explore job opportunities in the airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay updated with the latest trends in airflow, and demonstrate your problem-solving abilities to stand out in the competitive job market. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
