Home
Jobs

2646 Airflow Jobs - Page 29

Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
Filter
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

What you’ll be doing: Build and deploy the infrastructure for ingesting high-volume support data from consumer interactions, devices, and apps. Design and implement the processes that turn data into insights. Model and mine the data to describe the system's behaviour and to predict future actions. Enable data-driven change: build effective visualizations and reports presenting data insights to all stakeholders, internal (corporate) and external (our SaaS customers). Develop and maintain the data-related code in an automated CI/CD build/test/deploy environment. Generate specific reports needed across tenants to allow insight into agent performance and business effectiveness. Research individually and in collaboration with other teams on how to solve problems.

What we seek in you: Bachelor's degree in Computer Science, Information Systems, or a related field. Minimum of 5 years of relevant working experience in data engineering. Experience working with cloud data warehouse solutions and AWS cloud-based solutions. Must have strong experience with AWS Glue, DMS, and Snowflake. Advanced SQL skills and experience with relational databases and database design. Experience working on large data sets and with distributed computing such as Spark. Strong proficiency in data pipeline and workflow management tools (Airflow). Excellent problem-solving, communication, and organizational skills. Proven ability to work independently and with a team. A self-starter with a bias for action and strong stakeholder communication skills. Follows agile methodology to work, collaborate, and deliver in a global team setup. Ability to learn and adapt quickly.

Life at Next: At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.

Perks of working with us: Clear objectives to ensure alignment with our mission, fostering your meaningful contribution. Abundant opportunities for engagement with customers, product managers, and leadership. You'll be guided by progressive paths while receiving insightful guidance from managers through ongoing feedforward sessions. Cultivate and leverage robust connections within diverse communities of interest. Choose your mentor to navigate your current endeavors and steer your future trajectory. Embrace continuous learning and upskilling opportunities through Nexversity. Enjoy the flexibility to explore various functions, develop new skills, and adapt to emerging technologies. Embrace a hybrid work model promoting work-life balance. Access comprehensive family health insurance coverage, prioritizing the well-being of your loved ones. Embark on accelerated career paths to actualize your professional aspirations.

Who we are: We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet the unique needs of our customers. Join our passionate team and tailor your growth with us!
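
The role above hinges on data pipeline and workflow management with Airflow. As a purely illustrative sketch (not code from the employer), here is a minimal DAG using the TaskFlow API of a recent Airflow 2.x release; the DAG name, schedule, and task bodies are hypothetical placeholders.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def support_data_ingestion():  # hypothetical DAG name
    @task
    def extract():
        # Placeholder: pull a batch of support interactions from a source system
        return [{"ticket_id": 1, "channel": "app"}]

    @task
    def transform(records):
        # Placeholder transformation: drop records without an id
        return [r for r in records if r.get("ticket_id") is not None]

    @task
    def load(records):
        # Placeholder load step; a real pipeline might write to Snowflake via a provider hook
        print(f"loading {len(records)} records")

    load(transform(extract()))


support_data_ingestion()
```

The TaskFlow API derives task dependencies from the function calls, which keeps small ingestion DAGs compact and readable.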

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Senior Data Engineer

Position Summary: The Senior Data Engineer leads complex data engineering projects, designing data architectures that align with business requirements. This role focuses on optimizing data workflows, managing data pipelines, and ensuring the smooth operation of data systems.

Minimum Qualifications: 8 years of overall IT experience, with a minimum of 5 years of work experience in the tech skills below.

Tech Skills: Strong experience in Python scripting and PySpark for data processing. Proficiency in SQL, dealing with big data over Informatica ETL. Proven experience in data quality and data optimization of a data lake in Iceberg format, with a strong understanding of the architecture. Experience in AWS Glue jobs. Experience in the AWS cloud platform and its data services: S3, Redshift, Lambda, EMR, Airflow, Postgres, SNS, EventBridge. Expertise in Bash shell scripting. Strong understanding of healthcare data systems and experience leading data engineering teams. Experience in Agile environments. Excellent problem-solving skills and attention to detail. Effective communication and collaboration skills.

Responsibilities: Leads development of data pipelines and architectures that handle large-scale data sets. Designs, constructs, and tests data architecture aligned with business requirements. Provides technical leadership for data projects, ensuring best practices and high-quality data solutions. Collaborates with product, finance, and other business units to ensure data pipelines meet business requirements. Works with DBT (Data Build Tool) for transforming raw data into actionable insights. Oversees development of data solutions that enable predictive and prescriptive analytics. Ensures the technical quality of solutions, managing data as it moves across environments. Aligns data architecture to Healthfirst solution architecture.
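
This posting asks for hands-on AWS Glue jobs written in PySpark. The following is a hedged sketch of the standard Glue job skeleton; the catalog database, table, key column, and S3 bucket are hypothetical and not taken from the listing.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue boilerplate: resolve job arguments and build the Glue/Spark contexts
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical source table registered in the Glue Data Catalog
claims = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="claims"
).toDF()

# Simple quality step: de-duplicate on a key column and write curated Parquet back to S3
claims.dropDuplicates(["claim_id"]).write.mode("overwrite").parquet(
    "s3://example-curated-bucket/claims/"
)

job.commit()
```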

Posted 1 week ago

Apply

5.0 years

5 - 8 Lacs

Hyderābād

On-site

Category: Software Development/Engineering Main location: India, Andhra Pradesh, Hyderabad Position ID: J0625-0219 Employment Type: Full Time

Position Description: Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Your future duties and responsibilities: Position: Senior Software Engineer Experience: 5-10 years Category: Software Development/Engineering Shift Timings: 1:00 pm to 10:00 pm Main location: Hyderabad Work Type: Work from office Skill: Spark (PySpark), Python and SQL Employment Type: Full Time Position ID: J0625-0219

Required qualifications to be successful in this role: Must-have skills: 5+ years of development experience with Spark (PySpark), Python and SQL. Extensive knowledge of building data pipelines. Hands-on experience with Databricks development. Strong experience developing on Linux OS. Experience with scheduling and orchestration (e.g., Databricks Workflows, Airflow, Prefect, Control-M). Good-to-have skills: Solid understanding of distributed systems, data structures, design principles. Agile development methodologies (e.g. SAFe, Kanban, Scrum). Comfortable communicating with teams via showcases/demos. Play a key role in establishing and implementing migration patterns for the Data Lake Modernization project. Actively migrate use cases from our on-premises Data Lake to Databricks on GCP. Collaborate with Product Management and business partners to understand use case requirements and reporting. Adhere to internal development best practices/lifecycle (e.g. Testing, Code Reviews, CI/CD, Documentation). Document and showcase feature designs/workflows. Participate in team meetings and discussions around product development. Stay up to date on the latest industry trends and design patterns. 3+ years of experience with Git. 3+ years of experience with CI/CD (e.g. Azure Pipelines). Experience with streaming technologies, such as Kafka, Spark. Experience building applications on Docker and Kubernetes. Cloud experience (e.g. Azure, Google). Skills: English, Python, SQLite

What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last.
You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
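
The CGI requirements above combine Spark (PySpark) with streaming technologies such as Kafka. Purely as an illustration, and assuming the Spark-Kafka connector package is available on the cluster, a minimal Structured Streaming job could look like the sketch below; the broker address and topic are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Read a stream of raw events from Kafka (broker and topic are placeholders)
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; cast to strings before downstream parsing
parsed = events.select(col("key").cast("string"), col("value").cast("string"))

# Console sink for demonstration; production jobs would target Delta tables or a warehouse
query = parsed.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```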

Posted 1 week ago

Apply

5.0 years

6 - 7 Lacs

Hyderābād

On-site

Job description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role you will: Design and Develop ETL Processes: Lead the design and implementation of ETL processes using all kinds of batch/streaming tools to extract, transform, and load data from various sources into GCP. Collaborate with stakeholders to gather requirements and ensure that ETL solutions meet business needs. Data Pipeline Optimization: Optimize data pipelines for performance, scalability, and reliability, ensuring efficient data processing workflows. Monitor and troubleshoot ETL processes, proactively addressing issues and bottlenecks. Data Integration and Management: Integrate data from diverse sources, including databases, APIs, and flat files, ensuring data quality and consistency. Manage and maintain data storage solutions in GCP (e.g., BigQuery, Cloud Storage) to support analytics and reporting. GCP Dataflow Development: Write Apache Beam-based Dataflow jobs for data extraction, transformation, and analysis, ensuring optimal performance and accuracy. Collaborate with data analysts and data scientists to prepare data for analysis and reporting. Automation and Monitoring: Implement automation for ETL workflows using tools like Apache Airflow or Cloud Composer, enhancing efficiency and reducing manual intervention. Set up monitoring and alerting mechanisms to ensure the health of data pipelines and compliance with SLAs. Data Governance and Security: Apply best practices for data governance, ensuring compliance with industry regulations (e.g., GDPR, HIPAA) and internal policies. Collaborate with security teams to implement data protection measures and address vulnerabilities. Documentation and Knowledge Sharing: Document ETL processes, data models, and architecture to facilitate knowledge sharing and onboarding of new team members. Conduct training sessions and workshops to share expertise and promote best practices within the team.

Requirements To be successful in this role, you should meet the following requirements: Education: Bachelor’s degree in Computer Science, Information Systems, or a related field. Experience: Minimum of 5 years of industry experience in data engineering or ETL development, with a strong focus on DataStage and GCP. Proven experience in designing and managing ETL solutions, including data modeling, data warehousing, and SQL development. Technical Skills: Strong knowledge of GCP services (e.g., BigQuery, Dataflow, Cloud Storage, Pub/Sub) and their application in data engineering. Experience with cloud-based solutions, especially in GCP; a cloud-certified candidate is preferred. Experience and knowledge of big data processing in batch and streaming modes, proficient in big data ecosystems, e.g. Hadoop, HBase, Hive, MapReduce, Kafka, Flink, Spark, etc.
Familiarity with Java & Python for data manipulation on cloud/big data platforms. Analytical Skills: Strong problem-solving skills with a keen attention to detail. Ability to analyze complex data sets and derive meaningful insights. Benefits: Competitive salary and comprehensive benefits package. Opportunity to work in a dynamic and collaborative environment on cutting-edge data projects. Professional development opportunities to enhance your skills and advance your career. If you are a passionate data engineer with expertise in ETL processes and a desire to make a significant impact within our organization, we encourage you to apply for this exciting opportunity! You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI
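
Since the role emphasizes writing Apache Beam-based Dataflow jobs, here is a minimal, hedged Beam pipeline sketch in Python. It uses the DirectRunner and hypothetical Cloud Storage paths rather than anything HSBC-specific; running it on Dataflow would additionally require project, region, and temp-location options.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# DirectRunner keeps the example self-contained; swap for "DataflowRunner" on GCP
options = PipelineOptions(runner="DirectRunner")

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/raw/events-*.json")  # hypothetical input
        | "Parse" >> beam.Map(json.loads)
        | "KeepValid" >> beam.Filter(lambda e: e.get("status") == "ok")
        | "Serialize" >> beam.Map(json.dumps)
        | "Write" >> beam.io.WriteToText("gs://example-bucket/clean/events")  # hypothetical output
    )
```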

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderābād

On-site

Job Summary: We are looking for an experienced Data Engineer with 4+ years of proven expertise in building scalable data pipelines, integrating complex datasets, and working with cloud-based and big data technologies. The ideal candidate should have hands-on experience with data modeling, ETL processes, and real-time data streaming.

Key Responsibilities: Design, develop, and maintain scalable and efficient data pipelines and ETL workflows. Work with large datasets from various sources, ensuring data quality and consistency. Collaborate with Data Scientists, Analysts, and Software Engineers to support data needs. Optimize data systems for performance, scalability, and reliability. Implement data governance and security best practices. Troubleshoot data issues and identify improvements in data processes. Automate data integration and reporting tasks.

Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or related field. 4+ years of experience in data engineering or similar roles. Strong programming skills in Python, SQL, and Shell scripting. Experience with ETL tools (e.g., Apache Airflow, Talend, AWS Glue). Proficiency in data modeling, data warehousing, and database design. Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services like S3, Redshift, BigQuery, Snowflake. Experience with big data technologies such as Spark, Hadoop, Kafka. Strong understanding of data structures, algorithms, and system design. Familiarity with CI/CD tools, version control (Git), and Agile methodologies.

Preferred Skills: Experience with real-time data streaming (Kafka, Spark Streaming). Knowledge of Docker, Kubernetes, and infrastructure-as-code tools like Terraform. Exposure to machine learning pipelines or data science workflows is a plus. Interested candidates can send their resume.

Job Type: Full-time Schedule: Day shift Work Location: In person

Posted 1 week ago

Apply

13.0 - 20.0 years

40 - 45 Lacs

Bengaluru

Work from Office

Principal Architect - Platform & Application Architect

Title: Principal Architect. Location: Onsite, Bangalore. Education: Bachelor's/Master's in CS, Engineering, or a related field. Experience: 15+ years in software & data platform architecture and technology strategy, including 5+ years in architectural leadership roles, with architecture and data platform expertise.

Role Overview: We are seeking a Platform & Application Architect to lead the design and implementation of a next-generation, multi-domain data platform and its ecosystem of applications. In this strategic and hands-on role, you will define the overall architecture, select and evolve the technology stack, and establish best practices for governance, scalability, and performance. Your responsibilities will span the full data lifecycle (ingestion, processing, storage, and analytics) while ensuring the platform is adaptable to diverse and evolving customer needs. This role requires close collaboration with product and business teams to translate strategy into actionable, high-impact platforms and products.

Key Responsibilities

1. Architecture & Strategy: Design the end-to-end architecture for an on-prem/hybrid data platform (data lake/lakehouse, data warehouse, streaming, and analytics components). Define and document data blueprints, data domain models, and architectural standards. Lead build-vs-buy evaluations for platform components and recommend best-fit tools and technologies.

2. Data Ingestion & Processing: Architect batch and real-time ingestion pipelines using tools like Kafka, Apache NiFi, Flink, or Airbyte. Oversee scalable ETL/ELT processes and orchestrators (Airflow, dbt, Dagster). Support diverse data sources: IoT, operational databases, APIs, flat files, unstructured data.

3. Storage & Modeling: Define strategies for data storage and partitioning (data lakes, warehouses, Delta Lake, Iceberg, or Hudi). Develop efficient data strategies for both OLAP and OLTP workloads. Guide schema evolution, data versioning, and performance tuning.

4. Governance, Security, and Compliance: Establish data governance, cataloging, and lineage tracking frameworks. Implement access controls, encryption, and audit trails to ensure compliance with DPDPA, GDPR, HIPAA, etc. Promote standardization and best practices across business units.

5. Platform Engineering & DevOps: Collaborate with infrastructure and DevOps teams to define CI/CD, monitoring, and DataOps pipelines. Ensure observability, reliability, and cost efficiency of the platform. Define SLAs, capacity planning, and disaster recovery plans.

6. Collaboration & Mentorship: Work closely with data engineers, scientists, analysts, and product owners to align platform capabilities with business goals. Mentor teams on architecture principles, technology choices, and operational excellence.

Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 12+ years of experience in software engineering, including 5+ years in architectural leadership roles. Proven expertise in designing and scaling distributed systems, microservices, APIs, and event-driven architectures using Java, Python, or Node.js. Strong hands-on experience building scalable data platforms in on-premise/hybrid/cloud environments. Deep knowledge of modern data lake and warehouse technologies (e.g., Snowflake, BigQuery, Redshift) and table formats like Delta Lake or Iceberg. Familiarity with data mesh, data fabric, and lakehouse paradigms.
Strong understanding of system reliability, observability, DevSecOps practices, and platform engineering principles. Demonstrated success in leading large-scale architectural initiatives across enterprise-grade or consumer-facing platforms. Excellent communication, documentation, and presentation skills, with the ability to simplify complex concepts and influence at executive levels. Certifications such as TOGAF or AWS Solutions Architect (Professional) and experience in regulated domains (e.g., finance, healthcare, aviation) are desirable.
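
Item 2 of the responsibilities above calls for batch and real-time ingestion pipelines built around tools like Kafka. As a loose illustration only (assuming the kafka-python client; the broker, topic, and payload are invented), a minimal producer pushing a reading onto an ingestion topic looks like this:

```python
import json

from kafka import KafkaProducer  # assumes the kafka-python package is installed

producer = KafkaProducer(
    bootstrap_servers="broker:9092",  # hypothetical broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Hypothetical IoT-style reading pushed onto an ingestion topic
producer.send("sensor-readings", {"sensor_id": "gate-12", "temp_c": 23.4})
producer.flush()
producer.close()
```

A stream processor such as Flink or Spark, or an orchestrated batch job, would then consume the topic into the lakehouse layer.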

Posted 1 week ago

Apply

5.0 years

15 - 17 Lacs

Hyderābād

Remote

Hi, Greetings from Warrior Tech Solutions. We are hiring for the below position: Role: Python Developer/Big Data Developer Location: Chennai/Hyderabad (Onsite – 5 Days) Experience: 5+ Years Notice: Immediate to 15 days

Required Skills: 5+ years of hands-on experience in Python development. Strong understanding of object-oriented programming (OOP) in Python. Experience with pip, virtual environments, and Python packaging. Proficiency in Git version control and working with remote repositories (GitHub/GitLab). Practical experience working with Google Cloud Platform (GCP). Experience with BigQuery or other SQL-based databases via Python integrations. Background in designing and building scalable data pipelines. Excellent communication skills and ability to work in distributed team settings.

Preferred / Nice-to-Have Skills: Experience with Apache Beam or Google Dataflow. Knowledge of CI/CD tools and DevOps pipelines in GCP. Familiarity with data engineering principles and ETL frameworks. Exposure to workflow orchestration tools like Apache Airflow is a plus.

If interested, kindly share your resume at bharathi@ewarriorstechsolutions.com or contact us at 8015568995. Job Type: Full-time Pay: ₹1,500,000.00 - ₹1,700,000.00 per year Schedule: Day shift Experience: Python: 5 years (Preferred) BigQuery: 3 years (Preferred) GCP: 5 years (Preferred) Data flow: 5 years (Preferred) Apache Beam: 5 years (Preferred) SQL: 5 years (Preferred) Work Location: In person
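
Because this role pairs Python with BigQuery integrations, here is a small illustrative sketch using the google-cloud-bigquery client; the project, dataset, and table names are hypothetical and the client assumes application-default credentials are configured.

```python
from google.cloud import bigquery

client = bigquery.Client()  # relies on application-default credentials

sql = """
    SELECT user_id, COUNT(*) AS events
    FROM `example_project.analytics.events`  -- hypothetical table
    GROUP BY user_id
    ORDER BY events DESC
    LIMIT 10
"""

# Submit the query and iterate over the result rows
for row in client.query(sql).result():
    print(row.user_id, row.events)
```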

Posted 1 week ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Description Data Engineer

Responsibilities: Deliver end-to-end data and analytics capabilities, including data ingestion, data transformation, data science, and data visualization in collaboration with Data and Analytics stakeholder groups. Design and deploy databases and data pipelines to support analytics projects. Develop scalable and fault-tolerant workflows. Clearly document issues, solutions, findings and recommendations to be shared internally & externally. Learn and apply tools and technologies proficiently, including: Languages: Python, PySpark, ANSI SQL, Python ML libraries. Frameworks/Platforms: Spark, Snowflake, Airflow, Hadoop, Kafka. Cloud Computing: AWS. Tools/Products: PyCharm, Jupyter, Tableau, PowerBI. Performance optimization for queries and dashboards. Develop and deliver clear, compelling briefings to internal and external stakeholders on findings, recommendations, and solutions. Analyze client data & systems to determine whether requirements can be met. Test and validate data pipelines, transformations, datasets, reports, and dashboards built by the team. Develop and communicate solution architectures and present solutions to both business and technical stakeholders. Provide end-user support to other data engineers and analysts.

Candidate Requirements Expert experience in the following (should have / good to have): SQL, Python, PySpark, Python ML libraries. Other programming languages (R, Scala, SAS, Java, etc.) are a plus. Data and analytics technologies including SQL/NoSQL/Graph databases, ETL, and BI. Knowledge of CI/CD and related tools such as GitLab, AWS CodeCommit, etc. AWS services including EMR, Glue, Athena, Batch, Lambda, CloudWatch, DynamoDB, EC2, CloudFormation, IAM and EDS. Exposure to Snowflake and Airflow. Solid scripting skills (e.g., bash/shell scripts, Python). Proven work experience in the following: Data streaming technologies. Big Data technologies including Hadoop, Spark, Hive, Teradata, etc. Linux command-line operations. Networking knowledge (OSI network layers, TCP/IP, virtualization). Candidate should be able to lead the team, communicate with business, and gather and interpret business requirements. Experience with agile delivery methodologies using Jira or similar tools. Experience working with remote teams. AWS Solutions Architect / Developer / Data Analytics Specialty certifications; professional certification is a plus. Bachelor's degree in Computer Science or a relevant field; a Master's degree is a plus.

Posted 1 week ago

Apply

4.0 years

4 - 8 Lacs

Gurgaon

On-site

We are seeking an experienced Python Developer with a strong background in Databricks to join our data engineering and analytics team. The ideal candidate will play a key role in building and maintaining scalable data pipelines and analytical platforms using Python and Databricks, with an emphasis on performance and cloud integration.

You will be responsible for:
· Design, develop, and maintain scalable Python applications for data processing and analytics.
· Build and manage ETL pipelines using Databricks on Azure/AWS cloud platforms.
· Collaborate with analysts and other developers to understand business requirements and implement data-driven solutions.
· Optimize and monitor existing data workflows to improve performance and scalability.
· Write clean, maintainable, and testable code following industry best practices.
· Participate in code reviews and provide constructive feedback.
· Maintain documentation and contribute to project planning and reporting.

What skills & experience you’ll bring to us
· Bachelor's degree in Computer Science, Engineering, or related field
· Prior experience as a Python Developer or similar role, with a strong portfolio showcasing your past projects.
· 4-6 years of Python experience
· Strong proficiency in Python programming.
· Hands-on experience with the Databricks platform (Notebooks, Delta Lake, Spark jobs, cluster configuration, etc.).
· Good knowledge of Apache Spark and its Python API (PySpark).
· Experience with cloud platforms (preferably Azure or AWS) and working with Databricks on the cloud.
· Familiarity with data pipeline orchestration tools (e.g., Airflow, Azure Data Factory, etc.).
· Strong understanding of database systems (SQL/NoSQL) and data modeling.
· Strong communication skills and ability to collaborate effectively with cross-functional teams

Want to apply? Get in touch today. We’re always excited to hear from passionate individuals ready to make a difference and join our team, and we’d love to connect. Reach out to us through our email: shubhangi.chandani@ahomtech.com and hr@ahomtech.com, and let’s start the conversation. *Only immediate joiners are preferred *Good communication skills are preferred

Job Type: Full-time Pay: ₹400,000.00 - ₹800,000.00 per year Benefits: Provident Fund Location Type: In-person Schedule: Day shift Application Question(s): We want to fill this position urgently. Are you an immediate joiner? Do you have hands-on experience with the Databricks platform (Notebooks, Delta Lake, Spark jobs, cluster configuration, etc.)? Do you have experience with cloud platforms (preferably Azure or AWS) and working with Databricks on the cloud? Experience: Python: 4 years (Required) Work Location: In person Speak with the employer +91 9198018443 Application Deadline: 14/06/2025 Expected Start Date: 17/06/2025
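
This listing and the similar one below both center on Databricks, Delta Lake, and PySpark. As an illustrative sketch only (the mount paths and columns are hypothetical, and the OPTIMIZE step assumes a Databricks/Delta runtime), a small curation job might look like:

```python
from pyspark.sql import SparkSession

# On Databricks a SparkSession named `spark` is already provided; this keeps the sketch self-contained
spark = SparkSession.builder.appName("orders-curation").getOrCreate()

# Hypothetical raw landing zone mounted into the workspace
raw_orders = spark.read.json("/mnt/raw/orders/")

# Basic cleanup: drop duplicate orders and obviously invalid amounts
curated = raw_orders.dropDuplicates(["order_id"]).filter("amount > 0")

# Write the curated table in Delta format so downstream jobs get ACID reads
curated.write.format("delta").mode("overwrite").save("/mnt/curated/orders")

# Optional compaction; OPTIMIZE assumes a Databricks/Delta runtime that supports it
spark.sql("OPTIMIZE delta.`/mnt/curated/orders`")
```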

Posted 1 week ago

Apply

4.0 - 7.0 years

2 - 11 Lacs

Gurgaon

On-site

Bachelor's degree in Computer Science, Engineering, or related field. 4-7 years as a Python Developer or similar role, with a strong portfolio showcasing your past projects. Hands-on experience with Databricks platform (Notebooks, Delta Lake, Spark jobs, cluster configuration, etc.). Good knowledge of Apache Spark and its Python API (PySpark). Experience with cloud platforms (preferably Azure or AWS) and working with Databricks on cloud. Familiarity with data pipeline orchestration tools (e.g., Airflow, Azure Data Factory, etc.). Strong understanding of database systems (SQL/NoSQL) and data modeling. Strong communication skills and ability to collaborate effectively with cross-functional teams. Job Type: Full-time Pay: ₹250,000.00 - ₹1,100,000.00 per year Work Location: In person

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Delhi

On-site

Delhi / Bangalore Engineering / Full Time / Hybrid

What is Findem: Findem is the only talent data platform that combines 3D data with AI. It automates and consolidates top-of-funnel activities across your entire talent ecosystem, bringing together sourcing, CRM, and analytics into one place. Only 3D data connects people and company data over time - making an individual’s entire career instantly accessible in a single click, removing the guesswork, and unlocking insights about the market and your competition no one else can. Powered by 3D data, Findem’s automated workflows across the talent lifecycle are the ultimate competitive advantage. Enabling talent teams to deliver continuous pipelines of top, diverse candidates while creating better talent experiences, Findem transforms the way companies plan, hire, and manage talent. Learn more at www.findem.ai

Experience - 5 - 9 years We are looking for an experienced Big Data Engineer, who will be responsible for building, deploying and managing various data pipelines, data lakes and big data processing solutions using big data and ETL technologies. Location- Delhi, India Hybrid- 3 days onsite

Responsibilities: Build data pipelines, big data processing solutions and data lake infrastructure using various big data and ETL technologies. Assemble and process large, complex data sets that meet functional and non-functional business requirements. ETL from a wide variety of sources like MongoDB, S3, Server-to-Server, Kafka, etc., and processing using SQL and big data technologies. Build analytical tools to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics. Build interactive and ad-hoc query self-serve tools for analytics use cases. Build data models and data schemas for performance, scalability and functional requirements. Build processes supporting data transformation, metadata, dependency and workflow management. Research, experiment and prototype new tools/technologies and make them successful.

Skill Requirements: Must have: Strong in Python/Scala. Must have experience in big data technologies like Spark, Hadoop, Athena/Presto, Redshift, Kafka, etc. Experience in various file formats like Parquet, JSON, Avro, ORC, etc. Experience in workflow management tools like Airflow. Experience with batch processing, streaming and message queues. Experience with any visualization tools like Redash, Tableau, Kibana, etc. Experience in working with structured and unstructured data sets. Strong problem-solving skills. Good to have: Exposure to NoSQL stores like MongoDB. Exposure to cloud platforms like AWS, GCP, etc. Exposure to microservices architecture. Exposure to machine learning techniques.

The role is full-time and comes with full benefits. We are globally headquartered in the San Francisco Bay Area with our India headquarters in Bengaluru. Equal Opportunity As an equal opportunity employer, we do not discriminate on the basis of race, color, religion, national origin, age, sex (including pregnancy), physical or mental disability, medical condition, genetic information, gender identity or expression, sexual orientation, marital status, protected veteran status or any other legally-protected characteristic.

Posted 1 week ago

Apply

0 years

0 - 0 Lacs

Delhi

On-site

Key Responsibilities:

Chiller Maintenance: Conduct daily/weekly checks on water-cooled or air-cooled chillers. Monitor suction/discharge pressures, temperatures, and refrigerant levels. Clean evaporator and condenser tubes, inspect for scaling or fouling. Perform oil and filter changes, refrigerant leak detection, and logbook entries.

AHU (Air Handling Unit): Inspect and clean coils, filters, blower fans, and dampers. Check motor alignment, belt tension, bearing lubrication, and vibration. Verify proper airflow, pressure drop, and temperature control operation.

Pumps: Maintain and troubleshoot chilled water and condenser water pumps. Monitor mechanical seals, bearings, couplings, and motor health. Record flow rates and differential pressures, and ensure proper operation.

VFDs (Variable Frequency Drives): Inspect VFD panels for cooling, dust buildup, and wiring issues. Monitor drive performance, fault logs, and communication with BMS. Coordinate with the controls team for parameter settings and motor tuning.

Cooling Towers: Perform nozzle cleaning, fan inspection, and drift eliminator checks. Maintain proper chemical treatment and water level control. Inspect and maintain the gearbox, fan motor, and louvers.

Job Types: Full-time, Permanent Pay: ₹15,000.00 - ₹20,000.00 per month Schedule: Day shift Evening shift Morning shift Work Location: In person

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role Overview We are seeking a skilled Data Analyst to support our platform powering operational intelligence across airports and similar sectors. The ideal candidate will have experience working with time-series datasets and operational information to uncover trends, anomalies, and actionable insights. This role will work closely with data engineers, ML teams, and domain experts to turn raw data into meaningful intelligence for business and operations stakeholders.

Key Responsibilities Analyze time-series and sensor data from various sources. Develop and maintain dashboards, reports, and visualizations to communicate key metrics and trends. Correlate data from multiple systems (vision, weather, flight schedules, etc.) to provide holistic insights. Collaborate with AI/ML teams to support model validation and interpret AI-driven alerts (e.g., anomalies, intrusion detection). Prepare and clean datasets for analysis and modeling; ensure data quality and consistency. Work with stakeholders to understand reporting needs and deliver business-oriented outputs.

Qualifications & Required Skills Bachelor’s or Master’s degree in Data Science, Statistics, Computer Science, Engineering, or a related field. 5+ years of experience in a data analyst role, ideally in a technical/industrial domain. Strong SQL skills and proficiency with BI/reporting tools (e.g., Power BI, Tableau, Grafana). Hands-on experience analyzing structured and semi-structured data (JSON, CSV, time-series). Proficiency in Python or R for data manipulation and exploratory analysis. Understanding of time-series databases or streaming data (e.g., InfluxDB, Kafka, Kinesis). Solid grasp of statistical analysis and anomaly detection methods. Experience working with data from industrial systems or large-scale physical infrastructure.

Good-to-Have Skills Domain experience in airports, smart infrastructure, transportation, or logistics. Familiarity with data platforms (Snowflake, BigQuery, custom-built using open source). Exposure to tools like Airflow, Jupyter Notebooks, and data quality frameworks. Basic understanding of AI/ML workflows and data preparation requirements.
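
Since the role centers on analyzing time-series sensor data and interpreting anomaly alerts, here is a small, purely illustrative example of flagging outliers with a rolling z-score in pandas; the readings, window size, and threshold are hypothetical and would be tuned per signal in practice.

```python
import pandas as pd

# Hypothetical minute-level sensor readings with one obvious spike
ts = pd.Series(
    [10.1, 10.3, 9.9, 10.2, 18.7, 10.0, 10.1],
    index=pd.date_range("2024-01-01", periods=7, freq="min"),
)

# Rolling statistics over the *previous* readings so a spike does not inflate its own baseline
baseline_mean = ts.shift(1).rolling(window=3, min_periods=2).mean()
baseline_std = ts.shift(1).rolling(window=3, min_periods=2).std()

z_scores = (ts - baseline_mean) / baseline_std
anomalies = ts[z_scores.abs() > 3]  # flag readings far outside the local baseline

print(anomalies)  # expected to flag the 18.7 spike
```

In a dashboarding setup the flagged points would typically feed a Power BI, Tableau, or Grafana panel alongside the raw series.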

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role Grade Level (for internal use): 03 Who We Are Kensho is a 120-person AI and machine learning company within S&P Global. With expertise in Machine Learning and data discovery, we develop and deploy novel solutions for S&P Global and its customers worldwide. Our solutions help businesses harness the power of data and Artificial Intelligence to innovate and drive progress. Kensho's solutions and research focus on speech recognition, entity linking, document extraction, automated database linking, text classification, natural language processing, and more. Are you looking to solve hard problems and enjoy working with teammates with diverse perspectives? If so, we would love to help you excel here at Kensho. About The Team Kensho’s Applications group develops the web apps and APIs that deliver Kensho’s AI capabilities to our customers. Our teams are small, product-focused, and intent on shipping high-quality code that best leverages our efforts. We’re collegial, humble, and inquisitive, and we delight in learning from teammates with backgrounds, skills, and interests different from our own. Kensho Link team, within the Applications Department, is a machine learning service that allows users to map entities in their datasets with unique entities drawn from S&P Global’s world-class company database with precision and speed. Link started as an internal Kensho project to help S&P Global Market Intelligence Team to integrate datasets more quickly into their platform. It uses ML based algorithms trained to return high quality links, even when the data inputs are incomplete or contain errors. In simple words, Kensho’s Link product helps in connecting the disconnected information about a company at one place – and it does so with scale. Link leverages a variety of NLP and ML techniques to process and link millions of company entities in hours. About The Role As a Senior Backend Engineer you will develop reliable, secure, and performant APIs that apply Kensho’s AI capabilities to specific customer workflows. You will collaborate with colleagues from Product, Machine Learning, Infrastructure, and Design, as well as with other engineers within Applications. You have a demonstrated capacity for depth, and are comfortable working with a broad range of technologies. Your verbal and written communication is proactive, efficient, and inclusive of your geographically-distributed colleagues. You are a thoughtful, deliberate technologist and share your knowledge generously. Equivalent to Grade 11 Role (Internal) You Will Design, develop, test, document, deploy, maintain, and improve software Manage individual project priorities, deadlines, and deliverables Work with key stakeholders to develop system architectures, API specifications, implementation requirements, and complexity estimates Test assumptions through instrumentation and prototyping Promote ongoing technical development through code reviews, knowledge sharing, and mentorship Optimize Application Scaling: Efficiently scale ML applications to maximize compute resource utilization and meet high customer demand. Address Technical Debt: Proactively identify and propose solutions to reduce technical debt within the tech stack. Enhance User Experiences: Collaborate with Product and Design teams to develop ML-based solutions that enhance user experiences and align with business goals. Ensure API security and data privacy by implementing best practices and compliance measures. 
Monitor and analyze API performance and reliability, making data-driven decisions to improve system health. Contribute to architectural discussions and decisions, ensuring scalability, maintainability, and performance of the backend systems. Qualifications At least 5+ years of direct experience developing customer-facing APIs within a team Thoughtful and efficient communication skills (both verbal and written) Experience developing RESTful APIs using a variety of tools Experience turning abstract business requirements into concrete technical plans Experience working across many stages of the software development lifecycle Sound reasoning about the behavior and performance of loosely-coupled systems Proficiency with algorithms (including time and space complexity analysis), data structures, and software architecture At least one domain of demonstrable technical depth Familiarity with CI/CD practices and tools to streamline deployment processes. Experience with containerization technologies (e.g., Docker, Kubernetes) for application deployment and orchestration. Technologies We Love Python, Django, FastAPI mypy, OpenAPI RabbitMQ, Celery, Kafka OpenSearch, PostgreSQL, Redis Git, Jsonnet, Jenkins, Docker, Kubernetes Airflow, AWS, Terraform Grafana, Prometheus ML Libraries: PyTorch, Scikit-learn, Pandas What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. 
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Inclusive Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering an inclusive workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and equal opportunity, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), BSMGMT203 - Entry Professional (EEO Job Group) Job ID: 312713 Posted On: 2025-04-15 Location: Hyderabad, Telangana, India
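
Kensho's listed stack includes FastAPI, and the role asks for experience building customer-facing APIs. The sketch below is a generic, hypothetical FastAPI endpoint shaped loosely like an entity-linking request/response; it is not Kensho's actual API, and the route, models, and matching logic are invented for illustration.

```python
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class LinkRequest(BaseModel):
    company_name: str
    country: Optional[str] = None


class LinkResponse(BaseModel):
    matched_id: Optional[str]
    confidence: float


@app.post("/v1/link", response_model=LinkResponse)
def link_entity(req: LinkRequest) -> LinkResponse:
    # Placeholder logic only; a real service would call an ML-based linker
    if req.company_name.strip().lower() == "example corp":
        return LinkResponse(matched_id="ENTITY-001", confidence=0.97)
    return LinkResponse(matched_id=None, confidence=0.0)
```

Run locally with uvicorn (for example, `uvicorn app:app --reload`) to exercise the endpoint.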

Posted 1 week ago

Apply

3.0 years

5 - 18 Lacs

Noida

On-site

Data Engineer (with SQL, Python, Airflow, Bash)

About the Role We are seeking a highly skilled and experienced Senior/Lead Data Engineer to join our growing Data Engineering Team. In this critical role, you will design, architect, and develop cutting-edge multi-tenant SaaS data solutions hosted on Azure Cloud. Your work will focus on delivering robust, scalable, and high-performance data pipelines and integrations that support our enterprise provider and payer data ecosystem. This role is ideal for someone with deep experience in ETL/ELT processes, data warehousing principles, and real-time and batch data integrations. As a senior member of the team, you will also be expected to mentor and guide junior engineers, help define best practices, and contribute to the overall data strategy. We are specifically looking for someone with strong hands-on experience in SQL, Python, and ideally Airflow and Bash scripting.

Key Responsibilities Architect and implement scalable data integration and data pipeline solutions using Azure cloud services. Design, develop, and maintain ETL/ELT processes, including data extraction, transformation, loading, and quality checks using tools like SQL, Python, and Airflow. Build and automate data workflows and orchestration pipelines; knowledge of Airflow or equivalent tools is a plus. Write and maintain Bash scripts for automating system tasks and managing data jobs. Collaborate with business and technical stakeholders to understand data requirements and translate them into technical solutions. Develop and manage data flows, data mappings, and data quality & validation rules across multiple tenants and systems. Implement best practices for data modeling, metadata management, and data governance. Configure, maintain, and monitor integration jobs to ensure high availability and performance. Lead code reviews, mentor data engineers, and help shape engineering culture and standards. Stay current with emerging technologies and recommend tools or processes to improve the team's effectiveness.

Required Qualifications Bachelor’s or Master’s degree in Computer Science, Information Systems, or related field. 3+ years of experience in data engineering, with a strong focus on Azure-based solutions. Proficiency in SQL and Python for data processing and pipeline development. Experience in developing and orchestrating pipelines using Airflow (preferred) and writing automation scripts using Bash. Proven experience in designing and implementing real-time and batch data integrations. Hands-on experience with Azure Data Factory, Azure Data Lake, Azure Synapse, Databricks, or similar technologies. Strong understanding of data warehousing principles, ETL/ELT methodologies, and data pipeline architecture. Familiarity with data quality, metadata management, and data validation frameworks. Strong problem-solving skills and the ability to communicate complex technical concepts clearly.

Preferred Qualifications Experience with multi-tenant SaaS data solutions. Background in healthcare data, especially provider and payer ecosystems. Familiarity with DevOps practices, CI/CD pipelines, and version control systems (e.g., Git). Experience mentoring and coaching other engineers in technical and architectural decision-making.
Job Type: Full-time Pay: ₹586,118.08 - ₹1,894,567.99 per year Benefits: Health insurance Schedule: Day shift Application Question(s): 3+ years of experience in data engineering, with a strong focus on Azure-based solutions (Mandatory) Hands-on experience with Azure Data Factory, Azure Data Lake, Azure Synapse, Databricks, or similar technologies (Min 3 years) Experience in developing and orchestrating pipelines using Airflow (preferred) and writing automation scripts using Bash (Min 3 years) Are you from the Delhi NCR location? (Mandatory) Experience: Airflow: 3 years (Required) Bash (Unix shell): 3 years (Required) SQL: 3 years (Required) Work Location: In person
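
Given that the role combines Airflow orchestration with Bash and Python scripting, here is a small illustrative DAG mixing a BashOperator and a PythonOperator on a recent Airflow 2.x release; the DAG id, schedule, and script path are hypothetical placeholders, not details from the employer.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def validate_load():
    # Placeholder validation step; a real task might compare row counts across systems
    print("row counts verified")


with DAG(
    dag_id="provider_feed_daily",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_feed",
        bash_command="python /opt/jobs/extract_feed.py --date {{ ds }}",  # hypothetical script
    )
    validate = PythonOperator(task_id="validate_load", python_callable=validate_load)

    extract >> validate
```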

Posted 1 week ago

Apply

0 years

8 - 10 Lacs

Udaipur

On-site

About the job Role Description This is a full-time on-site role for a Tech Lead (AI and Data) located in Bhopal. The Tech Lead will be responsible for managing and overseeing the technical execution of AI and data projects. Daily tasks involve troubleshooting, providing technical support, supervising IT-related activities, and ensuring the team is trained and well-supported. The Tech Lead will also collaborate with Kadel Labs to ensure successful product development and implementation. Tech Skills Here are six key technical skills an AI Tech Lead should possess: Machine Learning & Deep Learning – Strong grasp of algorithms (supervised, unsupervised, reinforcement) – Experience building and tuning neural networks (CNNs, RNNs, transformers) Data Engineering & Pipeline Architecture – Designing ETL/ELT workflows, data lakes, and feature stores – Proficiency with tools like Apache Spark, Kafka, Airflow, or Databricks Model Deployment & MLOps – Containerization (Docker) and orchestration (Kubernetes) for scalable inference – CI/CD for ML (e.g. MLflow, TFX, Kubeflow) and automated monitoring of model drift Cloud Platforms & Services – Hands-on with AWS (SageMaker, Lambda), Azure (ML Studio, Functions), or GCP (AI Platform) – Infrastructure-as-Code (Terraform, ARM templates) for reproducible environments Software Engineering Best Practices – Strong coding skills in Python (TensorFlow, PyTorch, scikit-learn) and familiarity with Java/Scala or Go – API design (REST/GraphQL), version control (Git), unit testing, and code reviews Data Security & Privacy in AI – Knowledge of PII handling, differential privacy, and secure data storage/encryption – Understanding of compliance standards (GDPR, HIPAA) and bias mitigation techniques Other Qualifications Troubleshooting and Technical Support skills Experience in Information Technology and Customer Service Ability to provide Training and guidance to team members Strong leadership and project management skills Excellent communication and collaboration abilities Experience in AI and data technologies is a plus Bachelor's or Master's degree in Computer Science, Information Technology, or a related field Job Types: Full-time, Permanent Pay: ₹875,652.61 - ₹1,016,396.45 per year Benefits: Health insurance Schedule: Day shift Monday to Friday Work Location: In person

Posted 1 week ago

Apply

6.0 years

0 Lacs

Andhra Pradesh

On-site

Minimum 6 years of hands-on experience in data engineering or big data development roles. Strong programming skills in Python and experience with Apache Spark (PySpark preferred). Proficient in writing and optimizing complex SQL queries. Hands-on experience with Apache Airflow for orchestration of data workflows. Deep understanding and practical experience with AWS services: Data Storage & Processing: S3, Glue, EMR, Athena Compute & Execution: Lambda, Step Functions Databases: RDS, DynamoDB Monitoring: CloudWatch Experience with distributed data processing, parallel computing, and performance tuning. Strong analytical and problem-solving skills. Familiarity with CI/CD pipelines and DevOps practices is a plus. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
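
The stack above leans on AWS services such as S3, Glue, Athena, and Lambda alongside Airflow. As a hedged sketch (the region, database, query, and results bucket are hypothetical), kicking off an Athena query from Python with boto3 looks like this:

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")  # hypothetical region

response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS cnt FROM events GROUP BY status",  # hypothetical table
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)

print("query execution id:", response["QueryExecutionId"])
```

An orchestrator like Airflow would typically poll get_query_execution until the query completes before reading the results from S3.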

Posted 1 week ago

Apply

5.0 - 9.0 years

7 - 17 Lacs

Pune

Work from Office

Job Overview: Diacto is seeking an experienced and highly skilled Data Architect to lead the design and development of scalable and efficient data solutions. The ideal candidate will have strong expertise in Azure Databricks, Snowflake (with DBT, GitHub, Airflow), and Google BigQuery. This is a full-time, on-site role based out of our Baner, Pune office.

Qualifications: B.E./B.Tech in Computer Science, IT, or related discipline; MCS/MCA or equivalent preferred.

Key Responsibilities: Design, build, and optimize robust data architecture frameworks for large-scale enterprise solutions. Architect and manage cloud-based data platforms using Azure Databricks, Snowflake, and BigQuery. Define and implement best practices for data modeling, integration, governance, and security. Collaborate with engineering and analytics teams to ensure data solutions meet business needs. Lead development using tools such as DBT, Airflow, and GitHub for orchestration and version control. Troubleshoot data issues and ensure system performance, reliability, and scalability. Guide and mentor junior data engineers and developers.

Experience and Skills Required: 5 to 12 years of experience in data architecture, engineering, or analytics roles. Hands-on expertise in Databricks, especially Azure Databricks. Proficient in Snowflake, with working knowledge of DBT, Airflow, and GitHub. Experience with Google BigQuery and cloud-native data processing workflows. Strong knowledge of modern data architecture, data lakes, warehousing, and ETL pipelines. Excellent problem-solving, communication, and analytical skills.

Nice to Have: Certifications in Azure, Snowflake, or GCP. Experience with containerization (Docker/Kubernetes). Exposure to real-time data streaming and event-driven architecture.

Why Join Diacto Technologies? Collaborate with experienced data professionals and work on high-impact projects. Exposure to a variety of industries and enterprise data ecosystems. Competitive compensation, learning opportunities, and an innovation-driven culture. Work from our collaborative office space in Baner, Pune.

How to Apply: Option 1 (Preferred): Copy and paste the following link into your browser and submit your application for the automated interview process: https://app.candidhr.ai/app/candidate/gAAAAABoRrTQoMsfqaoNwTxsE_qwWYcpcRyYJk7NzSUmO3LKb6rM-8FcU58CUPYQKc65n66feHor-TGdCEfyouj0NmKdgYcNbA==/ Option 2: 1. Please visit our website's career section at https://www.diacto.com/careers/ 2. Scroll down to the "Who are we looking for?" section 3. Find the listing for "Data Architect (Data Bricks)" and 4. Proceed with the virtual interview by clicking on "Apply Now."
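
The stack described above combines Snowflake with DBT, GitHub, and Airflow. Purely as an illustration (the connection parameters are placeholders; real credentials would come from a secrets manager or key-pair authentication), connecting to Snowflake from Python with the official connector looks like this:

```python
import snowflake.connector

# Hypothetical connection parameters; do not hard-code real credentials
conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="RAW",
    schema="SALES",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_VERSION()")  # simple connectivity check
    print(cur.fetchone())
finally:
    conn.close()
```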

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

On-site

Required Skills and Qualifications: 3–4 years of hands-on experience in Google Cloud Platform (GCP). 1–2 years of working experience with SAP BODS, particularly in data flow development, testing, and deployment. Strong understanding of SQL/NoSQL database systems. Hands-on experience with big data technologies (Hadoop, Spark, Kafka). Strong scripting and programming skills in Python, Java, Scala, or similar. Working knowledge of Linux systems. Hands-on experience with Databricks including Unity Catalog and performance optimization. Familiarity with DBT, Airflow, and other transformation/orchestration tools. Solid understanding of data pipeline architecture, workflow orchestration, and data engineering best practices. Exposure to containerization tools such as Docker/Kubernetes (preferred). Experience working with RESTful APIs or Data as a Service models (preferred). Excellent communication and collaboration skills. Nice to Have: Experience with Azure or AWS cloud services in addition to GCP. Background in working within agile teams or DevOps environments. Knowledge of data governance and security best practices in cloud data platforms.

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

On-site

Linkedin logo

About Us
BrightEdge is a leading enterprise SEO and content performance platform trusted by over 1,500 global brands including Microsoft, Adobe, and Marriott. What makes BrightEdge special is our innovative technology that transforms complex search data into actionable insights. We're not just another martech company – we literally pioneered the SEO platform category and continue to lead with AI-powered solutions. Working with us means joining a team that's solving fascinating technical and business challenges at scale. You'll be directly impacting how major brands connect with their audiences online. We're looking for a talented Big Data Engineer to join our Professional Services team to help us scale and optimize our data processing capabilities.

Role Overview
As a Big Data Engineer at BrightEdge, you will design, build, and maintain high-performance data pipelines that process terabytes of data. You'll work on optimizing our existing systems, identifying and resolving performance bottlenecks, and implementing solutions that improve the overall efficiency of our platform. This role is critical in ensuring our data infrastructure can handle increasing volumes of data while maintaining exceptional performance standards.

Key Responsibilities
Design and implement scalable batch processing systems using Python and big data technologies
Optimize database performance, focusing on slow-running queries and latency improvements
Use Python profilers and performance monitoring tools to identify bottlenecks
Reduce P95 and P99 latency metrics across our data platform
Build efficient ETL pipelines that can handle large-scale data processing
Collaborate with data scientists and product teams to understand data requirements
Monitor and troubleshoot data pipeline issues in production
Implement data quality checks and validation mechanisms
Document data architecture and engineering processes
Stay current with emerging big data technologies and best practices

Qualifications
Required
Bachelor's degree in Computer Science, Engineering, or related technical field
4+ years of experience in data engineering roles
Strong Python programming skills with focus on data processing libraries
Experience with big data technologies (Spark, Hadoop, etc.)
Proven experience optimizing database performance (SQL or NoSQL)
Knowledge of data pipeline orchestration tools (Airflow, Luigi, etc.)
Understanding of performance optimization techniques and profiling tools
Experience with cloud platforms (AWS, GCP, or Azure)
Ability to translate business requirements into technical specifications
Ability to work in UK timezone (12 PM IST to 9:30 PM IST)

Preferred
Bachelor's / Master's degree in Computer Science or related field
Experience with SEO data or web crawling systems
Experience with Clickhouse Database
Knowledge of distributed systems and microservices architecture
Familiarity with container orchestration (Kubernetes, Docker)
Experience with real-time data processing
Contributions to open-source projects
Experience with machine learning operations

Skills and Abilities
Strong analytical thinking and problem-solving skills
Excellent attention to detail and commitment to data quality
Ability to work effectively in a collaborative team environment
Good communication skills to explain complex technical concepts
Self-motivated with the ability to work independently
Passion for performance optimization and efficiency
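Roles like this lean on profiling before tuning P95/P99 latency. A minimal sketch of that workflow using Python's built-in cProfile (the transform function and its data are invented for illustration, not taken from any real codebase):

```python
import cProfile
import pstats


def transform_batch(rows):
    # Hypothetical stand-in for a real pipeline stage.
    return [
        {"url": r["url"].lower(), "ctr": r["clicks"] / max(r["impressions"], 1)}
        for r in rows
    ]


def profile_stage():
    # Synthetic input: 100k records with the fields the transform expects.
    rows = [
        {"url": f"HTTPS://EXAMPLE.COM/{i}", "clicks": i % 7, "impressions": i % 13 + 1}
        for i in range(100_000)
    ]
    profiler = cProfile.Profile()
    profiler.enable()
    transform_batch(rows)
    profiler.disable()
    # Print the ten most expensive calls by cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)


if __name__ == "__main__":
    profile_stage()
```

The same approach scales up: profile a representative slice of the workload, find the hot calls, and only then rewrite or parallelize them.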

Posted 1 week ago

Apply

9.0 - 10.0 years

9 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Foundit logo

Qualification
Total 9 years of experience, with a minimum of 5 years working as a DBT administrator.
DBT Core & Cloud: Manage DBT projects, models, tests, snapshots, and deployments in both DBT Core and DBT Cloud. Administer and manage DBT Cloud environments including users, permissions, job scheduling, and Git integration. Onboard and enable DBT users on the DBT Cloud platform, and work closely with users to support DBT adoption and usage.
SQL & Warehousing: Write optimized SQL and work with data warehouses like Snowflake, BigQuery, Redshift, or Databricks.
Cloud Platforms: Use AWS, GCP, or Azure for data storage (e.g., S3, GCS), compute, and resource management.
Orchestration Tools: Automate DBT runs using Airflow, Prefect, or DBT Cloud job scheduling.
Version Control & CI/CD: Integrate DBT with Git and manage CI/CD pipelines for model promotion and testing.
Monitoring & Logging: Track job performance and errors using tools like dbt-artifacts, Datadog, or cloud-native logging.
Access & Security: Configure IAM roles, secrets, and permissions for secure DBT and data warehouse access.
Documentation & Collaboration: Maintain model documentation, use dbt docs, and collaborate with data teams.
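Automating DBT runs from Airflow, as this role describes, is often just a small DAG that shells out to the dbt CLI. A minimal sketch, assuming Airflow 2.x with dbt installed on the worker; the project path and target name are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Placeholder path, not a value from this posting.
DBT_PROJECT_DIR = "/opt/dbt/analytics_project"

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older releases use schedule_interval
    catchup=False,
    tags=["dbt"],
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_PROJECT_DIR} && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_PROJECT_DIR} && dbt test --target prod",
    )
    dbt_run >> dbt_test  # build the models first, then test them
```

Keeping dbt run and dbt test as separate tasks makes failures visible per stage in the Airflow UI; DBT Cloud job scheduling or Prefect can play the same orchestration role.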

Posted 1 week ago

Apply

8.0 - 12.0 years

50 - 65 Lacs

Bengaluru

Work from Office

Naukri logo

Job Title: Staff Engineer Gen-AI
Experience: 8.0 to 10.0 years
CTC Salary: 50.00 LPA to 65.00 LPA
Location: Bengaluru/Bangalore

Job Description
Build Gen-AI native products: Architect, build, and ship platforms powered by LLMs, agents, and predictive AI.
Stay hands-on: Design systems, write code, debug, and drive product excellence.
Lead with depth: Mentor a high-caliber team of full stack engineers.
Speed to market: Rapidly ship and iterate on MVPs to maximize learning and feedback.
Own the full stack: From backend data pipelines to intuitive UIs, from Airflow to React, from BigQuery to embeddings.
Scale what works: Ensure scalability, security, and performance in multi-tenant, cloud-native environments (GCP).
Collaborate deeply: Work closely with product, growth, and leadership to align tech with business priorities.

What You Bring
8+ years of experience building and scaling full-stack, data-driven products
Proficiency in backend (Node.js, Python) and frontend (React), with solid GCP experience
Strong grasp of data pipelines, analytics, and real-time data processing
Familiarity with Gen-AI frameworks (LangChain, LlamaIndex, OpenAI APIs, vector databases)
Proven architectural leadership and technical ownership
Product mindset with a bias for execution and iteration

Our Tech Stack
Cloud: Google Cloud Platform
Backend: Node.js, Python, Airflow
Data: BigQuery, Cloud SQL
AI/ML: TensorFlow, OpenAI APIs, custom agents
Frontend: React.js

Interested professionals can share their resume at harshita.g@recex.co

Thanks & Regards
Harshita
Recex

Posted 1 week ago

Apply

6.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Linkedin logo

About Hakkoda
Hakkoda, an IBM Company, is a modern data consultancy that empowers data driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone's input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly-growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you!

We are looking for a skilled and motivated Data Analyst / Data Engineer to join our growing data team in Jaipur. The ideal candidate should have hands-on experience with SQL, Python, and Power BI; familiarity with Snowflake is a strong advantage. You will play a key role in building data pipelines, delivering analytical insights, and enabling data-driven decision-making across the organization.

Role Description
Develop and manage robust data pipelines and workflows for data integration, transformation, and loading.
Design, build, and maintain interactive Power BI dashboards and reports based on business needs.
Optimize existing Power BI reports for performance, usability, and scalability.
Write and optimize complex SQL queries for data analysis and reporting.
Use Python for data manipulation, automation, and advanced analytics where applicable.
Collaborate with business stakeholders to understand requirements and deliver actionable insights.
Ensure high data quality, integrity, and governance across all reporting and analytics layers.
Work closely with data engineers, analysts, and business teams to deliver scalable data solutions.
Leverage cloud data platforms like Snowflake for data warehousing and analytics (good to have).

Qualifications
3–6 years of professional experience in data analysis or data engineering.
Bachelor's degree in Computer Science, Engineering, Data Science, Information Technology, or a related field.
Strong proficiency in SQL with the ability to write complex queries and perform data modeling.
Hands-on experience with Power BI for data visualization and business intelligence reporting.
Programming knowledge in Python for data processing and analysis.
Good understanding of ETL/ELT, data warehousing concepts, and cloud-based data ecosystems.
Excellent problem-solving skills, attention to detail, and analytical thinking.
Strong communication and interpersonal skills to work effectively with cross-functional teams.

Preferred / Good To Have
Experience working with large datasets and cloud platforms like Snowflake, Redshift, or BigQuery.
Familiarity with workflow orchestration tools (e.g., Airflow) and version control systems (e.g., Git).
Power BI Certification (e.g., PL-300: Microsoft Power BI Data Analyst).
Exposure to Agile methodologies and end-to-end BI project life cycles.
Benefits
Health Insurance
Paid leave
Technical training and certifications
Robust learning and development opportunities
Incentive
Toastmasters
Food Program
Fitness Program
Referral Bonus Program

Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture. We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive.

Ready to take your career to the next level? 🚀 💻 Apply today 👇 and join a team that's shaping the future!

Hakkoda is an IBM subsidiary which has been acquired by IBM and will be integrated into the IBM organization. Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Linkedin logo

When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What you'll be doing...
Design, build and maintain robust, scalable data pipelines and ETL processes.
Ensure high data quality, accuracy and integrity across all systems.
Work with structured and unstructured data from multiple sources.
Optimize data workflows for performance, reliability, and cost efficiency.
Collaborate with analysts and data scientists to meet data needs.
Monitor, troubleshoot, and improve existing data systems and jobs.
Apply best practices in data governance, security and compliance.
Use tools like Spark, Kafka, Airflow, SQL, Python and cloud platforms.
Stay updated with emerging technologies and continuously improve data infrastructure.

What we're looking for…
You Will Need To Have:
Bachelor's degree or four or more years of work experience.
Expertise in AWS Data Stack – Strong hands-on experience with S3, Glue, EMR, Lambda, Kinesis, Redshift, Athena, and IAM security best practices.
Big Data & Distributed Computing – Deep understanding of Apache Spark (batch and streaming) for large-scale data processing and analytics.
Real-Time & Batch Data Processing – Proven experience designing, implementing, and optimizing event-driven and streaming data pipelines using Kafka and Kinesis.
ETL/ELT & Data Modeling – Strong experience in architecting and optimizing scalable ETL/ELT pipelines for structured and unstructured data.
Programming Skills – Proficiency in Scala and Java for data processing and automation.
Database & SQL Optimization – Strong understanding of SQL and experience with relational databases (PostgreSQL, MySQL). Expertise in SQL query tuning, data warehousing and working with Parquet, Avro, ORC formats.
Infrastructure as Code (IaC) & DevOps – Experience with CloudFormation, CDK, and CI/CD pipelines for automated deployments in AWS.
Monitoring, Logging & Observability – Familiarity with AWS CloudWatch, Prometheus, or similar monitoring tools.
API Integration – Ability to fetch and process data from external APIs and databases.
Architecture & Scalability Mindset – Ability to design and optimize data architectures for high-volume, high-velocity, and high-variety datasets.
Performance Optimization – Experience in optimizing data pipelines for cost and performance.
Cross-Team Collaboration – Work closely with Data Scientists, Analysts, DevOps, and Business Teams to deliver end-to-end data solutions.

Even better if you have one or more of the following:
Agile & CI/CD Practices – Comfortable working in Agile/Scrum environments, driving continuous integration and continuous deployment.

#TPDRNONCDIO

Where you'll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours
40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

Role Description
Job Title: Lead I - Software Testing (Automation, Python, AWS)
Experience Range: 5–8 Years
Hiring Locations: Trivandrum, Hyderabad, Kochi

Must-Have Skills
Strong experience in automation testing using Python
Testing in a Hadoop environment with HDFS
Proficiency in AWS services including S3, Lambdas, Managed Airflow (MWAA), and EMR Serverless
Strong analytical and problem-solving skills
Experience with agile methodologies and data testing
Excellent verbal and written communication skills
Strong organizational skills and ability to meet tight deadlines
High attention to detail and accuracy

Good-to-Have Skills
Educational background in Mathematics, Physics, Statistics, or related disciplines
Experience preparing detailed specifications and reports
Previous experience in a similar testing role

Job Description
We are seeking a highly skilled and detail-oriented Lead I - Software Testing professional with expertise in automation, Python, and AWS. The ideal candidate will have hands-on experience in testing applications within a big data ecosystem, particularly using PySpark in Hadoop/HDFS environments. You should be comfortable working with cloud-native services on AWS, including EMR Serverless, Lambdas, and MWAA. Strong communication, problem-solving, and organizational abilities are essential. This role demands a proactive mindset and the ability to thrive in a fast-paced, agile environment.

Skills: Automation Testing, Python, AWS, PySpark
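Automation testing of PySpark jobs of the kind described here is commonly done with pytest against a local SparkSession. A minimal sketch; the dedupe_events transformation and its columns are hypothetical, not taken from this posting:

```python
import pytest
from pyspark.sql import SparkSession


@pytest.fixture(scope="session")
def spark():
    # Local Spark session for tests; production jobs would run on EMR Serverless.
    session = SparkSession.builder.master("local[1]").appName("etl-tests").getOrCreate()
    yield session
    session.stop()


def dedupe_events(df):
    # Hypothetical transformation under test: keep one row per event_id.
    return df.dropDuplicates(["event_id"])


def test_dedupe_events_removes_duplicates(spark):
    rows = [("e1", "click"), ("e1", "click"), ("e2", "view")]
    df = spark.createDataFrame(rows, ["event_id", "event_type"])

    result = dedupe_events(df)

    assert result.count() == 2
    assert sorted(r.event_id for r in result.collect()) == ["e1", "e2"]
```

The same tests can then run in CI before the job is promoted to the cluster, which is where the agile/data-testing emphasis of this role usually shows up.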

Posted 1 week ago

Apply

Exploring Airflow Jobs in India

The Airflow job market in India is growing rapidly as more companies adopt data pipelines and workflow automation. Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with Airflow expertise can find lucrative opportunities in industries such as technology, e-commerce, finance, and more.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Gurgaon

Average Salary Range

The average salary range for Airflow professionals in India varies by experience level:

  • Entry-level: INR 6-8 lakhs per annum
  • Mid-level: INR 10-15 lakhs per annum
  • Experienced: INR 18-25 lakhs per annum

Career Path

In the field of Airflow, a typical career path may progress as follows:

  1. Junior Airflow Developer
  2. Airflow Developer
  3. Senior Airflow Developer
  4. Airflow Tech Lead

Related Skills

In addition to Airflow expertise, professionals in this field are often expected to have or develop skills in:

  • Python programming
  • ETL concepts
  • Database management (SQL)
  • Cloud platforms (AWS, GCP)
  • Data warehousing

Interview Questions

  • What is Apache Airflow? (basic)
  • Explain the key components of Airflow. (basic)
  • How do you schedule a DAG in Airflow? (basic)
  • What are the different operators in Airflow? (medium)
  • How do you monitor and troubleshoot DAGs in Airflow? (medium)
  • What is the difference between Airflow and other workflow management tools? (medium)
  • Explain the concept of XCom in Airflow. (medium)
  • How do you handle dependencies between tasks in Airflow? (medium)
  • What are the different types of sensors in Airflow? (medium)
  • What is a Celery Executor in Airflow? (advanced)
  • How do you scale Airflow for a high volume of tasks? (advanced)
  • Explain the concept of SubDAGs in Airflow. (advanced)
  • How do you handle task failures in Airflow? (advanced)
  • What is the purpose of a TriggerDagRun operator in Airflow? (advanced)
  • How do you secure Airflow connections and variables? (advanced)
  • Explain how to create a custom Airflow operator. (advanced)
  • How do you optimize the performance of Airflow DAGs? (advanced)
  • What are the best practices for version controlling Airflow DAGs? (advanced)
  • Describe a complex data pipeline you have built using Airflow. (advanced)
  • How do you handle backfilling in Airflow? (advanced)
  • Explain the concept of DAG serialization in Airflow. (advanced)
  • What are some common pitfalls to avoid when working with Airflow? (advanced)
  • How do you integrate Airflow with external systems or tools? (advanced)
  • Describe a challenging problem you faced while working with Airflow and how you resolved it. (advanced)
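
Several of the questions above (scheduling a DAG, XCom, task dependencies) can be illustrated with one small TaskFlow-style DAG. This is a minimal sketch assuming Airflow 2.x; the task logic is purely illustrative:

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(
    schedule="@hourly",            # Airflow 2.4+; older releases use schedule_interval
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=["example"],
)
def order_metrics():
    @task
    def extract():
        # The return value is passed to downstream tasks via XCom.
        return [120, 98, 143]

    @task
    def total(order_counts):
        return sum(order_counts)

    @task
    def report(total_orders):
        print(f"Orders in the last window: {total_orders}")

    # Dependencies are inferred from the data flow: extract -> total -> report.
    report(total(extract()))


order_metrics()
```

Here each @task return value travels between tasks through XCom, and the call chain report(total(extract())) defines the dependency order, the same concepts the basic and medium questions probe.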

Closing Remark

As you explore job opportunities in the Airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay updated with the latest trends in Airflow, and demonstrate your problem-solving abilities to stand out in the competitive job market. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies