
2873 Airflow Jobs - Page 10

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


As a Data Engineer, you are required to:
- Design, build, and maintain data pipelines that efficiently process and transport data from various sources to storage systems or processing environments, while ensuring data integrity, consistency, and accuracy across the entire pipeline.
- Integrate data from different systems, often involving data cleaning, transformation (ETL), and validation.
- Design the structure of databases and data storage systems, including schemas, tables, and relationships between datasets, to enable efficient querying.
- Work closely with data scientists, analysts, and other stakeholders to understand their data needs and ensure that data is structured in a way that makes it accessible and usable.
- Stay up to date with the latest trends and technologies in the data engineering space, such as new data storage solutions, processing frameworks, and cloud technologies; evaluate and implement new tools to improve data engineering processes.

Qualification: Bachelor's or Master's in Computer Science & Engineering, or equivalent. A professional degree in Data Science or Engineering is desirable.

Experience level: At least 3-5 years of hands-on experience in Data Engineering and ETL.

Desired knowledge & experience:
- Spark: Spark 3.x, RDD/DataFrames/SQL, batch/Structured Streaming (see the sketch after this listing); Spark internals (Catalyst/Tungsten/Photon)
- Databricks: Workflows, SQL Warehouses/Endpoints, DLT, Pipelines, Unity, Autoloader
- IDE & tooling: IntelliJ/PyCharm, Git, Azure DevOps, GitHub Copilot
- Testing: pytest, Great Expectations
- CI/CD: YAML Azure Pipelines, continuous delivery, acceptance testing
- Big data design: Lakehouse/Medallion architecture, Parquet/Delta, partitioning, distribution, data skew, compaction
- Languages: Python / functional programming (FP)
- SQL: T-SQL/Spark SQL/HiveQL
- Storage: data lake and big data storage design

Additionally, it is helpful to know the basics of:
- Data pipelines: ADF/Synapse Pipelines/Oozie/Airflow
- Languages: Scala, Java
- NoSQL: Cosmos, Mongo, Cassandra
- Cubes: SSAS (ROLAP, HOLAP, MOLAP), AAS, Tabular Model
- SQL Server: T-SQL, stored procedures
- Hadoop: HDInsight/MapReduce/HDFS/YARN/Oozie/Hive/HBase/Ambari/Ranger/Atlas/Kafka
- Data catalog: Azure Purview, Apache Atlas, Informatica

Required soft skills & other capabilities:
- Great attention to detail and good analytical abilities
- Good planning and organizational skills
- A collaborative approach to sharing ideas and finding solutions
- Ability to work independently as well as in a global team environment
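For candidates gauging the Structured Streaming and Delta skills listed above, here is a minimal PySpark sketch of the kind of pipeline implied; the paths, schema, and table layout are illustrative assumptions, not part of the posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_bronze").getOrCreate()

# Incrementally ingest raw JSON files as a stream (hypothetical path and schema).
raw = (
    spark.readStream.format("json")
    .schema("id STRING, ts TIMESTAMP, payload STRING")
    .load("/mnt/raw/events/")
)

# Derive a date column so the target table is partitioned sensibly,
# avoiding the data-skew and small-file issues the posting mentions.
bronze = raw.withColumn("event_date", F.to_date("ts"))

# Continuously append to a partitioned Delta table (requires the Delta Lake
# package); the checkpoint directory tracks progress across restarts.
(
    bronze.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events_bronze")
    .partitionBy("event_date")
    .start("/mnt/bronze/events")
)
```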

Posted 3 days ago

Apply

5.0 years

0 Lacs

Jaipur

On-site

ABOUT HAKKODA
Hakkoda, an IBM Company, is a modern data consultancy that empowers data-driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics, and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone's input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India, and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you!

We are seeking a skilled and collaborative Sr. Data/Python Engineer with experience developing production Python-based applications (such as Django, Flask, or FastAPI on AWS) to support our data platform initiatives and application development. This role will initially focus on building and optimizing Streamlit application development frameworks and CI/CD pipelines, ensuring code reliability through automated testing with Pytest, and enabling team members to deliver updates via CI/CD pipelines. Once the deployment framework is implemented, the Sr. Engineer will own and drive data transformation pipelines in dbt and implement a data quality framework.

Key Responsibilities:
- Lead application testing and productionalization of applications built on top of Snowflake, including implementation and execution of unit and integration testing. Automated test suites use Pytest and Streamlit App Tests (see the sketch after this listing) to ensure code quality, data accuracy, and system reliability.
- Develop and integrate CI/CD pipelines (e.g., GitHub Actions, Azure DevOps, or GitLab CI) for consistent deployments across dev, staging, and production environments.
- Develop and test AWS-based pipelines: AWS Glue, Airflow (MWAA), S3.
- Design, develop, and optimize data models and transformation pipelines in Snowflake using SQL and Python.
- Build Streamlit-based applications to enable internal stakeholders to explore and interact with data and models.
- Collaborate with team members and application developers to align requirements and ensure secure, scalable solutions.
- Monitor data pipelines and application performance, optimizing for speed, cost, and user experience.
- Create end-user technical documentation and contribute to knowledge sharing across engineering and analytics teams.
- Work in CST hours and collaborate with onshore and offshore teams.

Qualifications, Skills & Experience:
- 5+ years of experience in Data Engineering or Python-based application development on AWS (Flask, Django, FastAPI, Streamlit). Experience building data-intensive applications in Python as well as data pipelines on AWS is a must.
- Bachelor's degree in Computer Science, Information Systems, Data Engineering, or a related field (or equivalent experience).
- Proficient in SQL and Python for data manipulation and automation tasks.
- Experience developing and productionalizing applications built on Python-based frameworks such as FastAPI, Django, and Flask.
- Experience with application frameworks such as Streamlit, Angular, or React for rapid data app deployment.
- Solid understanding of software testing principles and experience using Pytest or similar Python frameworks.
- Experience configuring and maintaining CI/CD pipelines for automated testing and deployment.
- Familiarity with version control systems such as GitLab.
- Knowledge of data governance, security best practices, and role-based access control (RBAC) in Snowflake.

Preferred Qualifications:
- Experience with dbt (data build tool) for transformation modeling.
- Knowledge of Snowflake's advanced features (e.g., masking policies, external functions, Snowpark).
- Exposure to cloud platforms (e.g., AWS, Azure, GCP).
- Strong communication and documentation skills.

Benefits: Health insurance; paid leave; technical training and certifications; robust learning and development opportunities; incentives; Toastmasters; food program; fitness program; referral bonus program.

Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture. We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive. Ready to take your career to the next level? 🚀 💻 Apply today 👇 and join a team that's shaping the future!

Hakkoda is an IBM subsidiary which has been acquired by IBM and will be integrated into the IBM organization. Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.
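Since this posting leans on Pytest together with Streamlit's App Tests, here is a minimal sketch of that pattern; the app.py script and the "EMEA" option are hypothetical placeholders:

```python
from streamlit.testing.v1 import AppTest


def test_app_runs_without_errors():
    # Execute the Streamlit script headlessly and assert nothing raised.
    at = AppTest.from_file("app.py").run()
    assert not at.exception


def test_selectbox_interaction_rerun():
    # Simulate a user picking a value in the first selectbox, then re-run.
    at = AppTest.from_file("app.py").run()
    at.selectbox[0].select("EMEA").run()
    assert not at.exception
```

Run with `pytest` in CI (e.g., a GitHub Actions step), which matches the deployment flow the role describes.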

Posted 3 days ago

Apply

5.0 - 7.0 years

0 - 0 Lacs

Visakhapatnam

On-site

Job Description: Maintenance Incharge (Catering Industry - Multi-Kitchen Operations)

Position Title: Maintenance Incharge / Head of Maintenance Engineering
Department: Engineering & Maintenance
Reports To: Operations Manager / Asst. General Manager
Location: [All Location(s) - Multi-Outlet Facility]
Employment Type: Full-Time

Mission of the Role: To ensure the seamless, safe, and efficient operation of all kitchen equipment, utilities, and facility infrastructure across catering operations, minimizing downtime, ensuring compliance, and maximizing equipment lifespan through expert technical oversight, proactive maintenance planning, and hands-on leadership.

Core Responsibilities

Strategic Maintenance Leadership:
- Develop, implement, and oversee a comprehensive Preventive Maintenance (PM) program for all critical kitchen equipment (boilers, motors, grinders, exhausts, refrigeration) and facility systems across all designated kitchens.
- Create and manage the annual maintenance budget, prioritizing critical repairs and upgrades.
- Lead, mentor, and schedule the maintenance team (technicians, helpers), ensuring adequate coverage for all shifts and locations.
- Maintain detailed records (CMMS - Computerized Maintenance Management System preferred) of all maintenance activities, work orders, spare parts inventory, and equipment history.

Technical Expertise & Troubleshooting (Critical Systems):
- Boilers: In-depth knowledge of operation, maintenance (daily checks, water treatment, blowdowns), troubleshooting, safety protocols (including statutory compliance), and minor repairs of industrial catering boilers (steam/hot water). Understand pressure systems regulations.
- Motors & Drives: Expert in troubleshooting, repairing, and maintaining electric motors (specifically 2 HP and above, commonly found in mixers, grinders, exhaust fans, and pumps), including starters (DOL, Star-Delta), VFDs, bearings, alignment, and load testing.
- Exhaust Systems (Sukhad): Thorough understanding of commercial kitchen exhaust hoods, ductwork, fire suppression systems (Ansul), and extraction fans. Ensure optimal airflow, grease management, and compliance with fire safety regulations. Schedule and oversee deep cleaning.
- Refrigeration & Cold Rooms: Maintain optimal performance of walk-in cold rooms, freezers, chillers, refrigerators, and ice machines. Troubleshoot refrigerant issues (within permissible scope), compressors, condensers, evaporators, controls, and temperature monitoring systems. Understand HACCP implications of temperature failures.
- Grinders & Processing Equipment: Expertise in maintaining, troubleshooting, and repairing commercial meat grinders, vegetable cutters, mixers, blenders, and food processors, with a focus on safety interlocks, blade sharpening/replacement, gearboxes, and drive mechanisms.
- Other Key Equipment: Oversee maintenance of ovens (convection, deck, combi), fryers, cooking ranges, dishwashers (conveyor, flight type), pasta cookers, bain-maries, hot cupboards, and associated gas/electric/steam lines.

Operational Excellence & Compliance:
- Preventive Maintenance: Execute and supervise scheduled PM tasks rigorously to prevent breakdowns.
- Breakdown Management: Respond urgently to equipment failures in kitchens, diagnose faults accurately, perform repairs efficiently, or coordinate with external vendors when necessary to minimize disruption to food production.
- Spare Parts Management: Maintain optimal inventory levels of critical spare parts for key equipment. Source parts cost-effectively.
- Safety & Compliance: Ensure all work adheres to strict safety standards (LOTO, electrical safety, working at height, confined space if applicable), food safety regulations (preventing contamination during repairs), and local statutory requirements (boiler inspections, electrical certifications, fire safety).
- Vendor Management: Liaise with and oversee external contractors for specialized repairs, statutory inspections, and major overhauls. Ensure quality and cost control.
- Energy Efficiency: Identify and implement opportunities to improve the energy efficiency of equipment (e.g., optimizing boiler operation, motor efficiency, refrigeration settings).

Training & Communication:
- Train kitchen staff on the correct and safe basic operation and minor care (e.g., cleaning, reporting issues) of equipment. Train maintenance technicians on specific equipment and procedures.
- Communicate effectively with Kitchen Managers, Chefs, and Operations Management regarding maintenance schedules, downtime, and critical issues.
- Prepare regular reports on maintenance performance, downtime analysis, and cost tracking.

Mandatory Qualifications & Experience:
- Education: ITI (Electrical/Mechanical/Fitter) Diploma or equivalent. A Diploma/Degree in Mechanical/Electrical Engineering is highly preferred.
- Experience: Minimum 5-7 years of hands-on maintenance experience, with at least 3 years specifically in the hospitality/catering industry or a heavy industrial setting with similar equipment (FMCG, pharma plant kitchens). Proven experience leading a maintenance team is essential.

Technical Skills (Non-Negotiable):
- Deep practical knowledge: proven expertise in troubleshooting, repairing, and maintaining industrial boilers (operation, maintenance, safety); electric motors (2 HP and above: dismantling, rewinding/bearing replacement, alignment, starter circuits); commercial kitchen exhaust systems (Sukhad: hoods, ducts, fans, fire systems); refrigeration systems and walk-in cold rooms/freezers (compressors, controls, defrost, glycol systems); and heavy-duty grinders, mixers, cutters, and food processing machinery.
- Strong fundamentals: excellent understanding of mechanical systems (gearboxes, bearings, belts, chains, pneumatics), electrical systems (single- and three-phase power, controls, basic PLC understanding), and plumbing.
- Safety focus: thorough knowledge of relevant safety protocols (electrical, LOTO, pressure vessels, working at height).
- Tools: proficiency with hand tools, power tools, electrical testing equipment (multimeter, clamp meter, megger), and welding/gas cutting (advantageous).

Certifications (Highly Desirable): Boiler Operation Engineer (BOE) certificate or equivalent (mandatory in some jurisdictions); refrigeration handling certificate (type depending on local regulations); Certified Maintenance & Reliability Professional (CMRP) or similar; electrical license (if applicable locally).

Soft Skills: Strong leadership and team management abilities; excellent problem-solving and analytical skills under pressure; outstanding communication (verbal & written) and interpersonal skills; proactive, organized, and meticulous with documentation; ability to prioritize effectively in a fast-paced, 24/7 environment; basic computer literacy (MS Office, CMMS software).

Working Conditions: Primarily based in industrial kitchen/production environments (hot, humid, noisy). Requires frequent standing, walking, bending, lifting (up to 25 kg), and working in confined spaces. On-call availability for emergencies outside normal hours (nights, weekends, holidays) is essential. May require travel between multiple kitchen locations if applicable.

Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹30,000.00 per month
Benefits: Health insurance, leave encashment, Provident Fund
Schedule: Day shift, morning shift
Work Location: In person

Posted 3 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Title: Senior Software Engineer
Department: IDP

About Us
HG Insights is the global leader in technology intelligence, delivering actionable AI-driven insights through advanced data science and scalable big data solutions. Our Big Data Insights Platform processes billions of unstructured documents and powers a vast data lake, enabling enterprises to make strategic, data-driven decisions. Join our team to solve complex data challenges at scale and shape the future of B2B intelligence.

What You'll Do:
- Design, build, and optimize large-scale distributed data pipelines for processing billions of unstructured documents using Databricks, Apache Spark, and cloud-native big data tools.
- Architect and scale enterprise-grade big data systems, including data lakes, ETL/ELT workflows, and syndication platforms for customer-facing Insights-as-a-Service (InaaS) products.
- Collaborate with product teams to develop features across databases, backend services, and frontend UIs that expose actionable intelligence from complex datasets.
- Implement cutting-edge solutions for data ingestion, transformation, and analytics using Hadoop/Spark ecosystems, Elasticsearch, and cloud services (AWS EC2, S3, EMR).
- Drive system reliability through automation, CI/CD pipelines (Docker, Kubernetes, Terraform), and infrastructure-as-code practices.

What You'll Be Responsible For:
- Leading the development of our Big Data Insights Platform, ensuring scalability, performance, and cost-efficiency across distributed systems.
- Mentoring engineers, conducting code reviews, and establishing best practices for Spark optimization, data modeling, and cluster resource management.
- Building and troubleshooting complex data pipelines, including performance tuning of Spark jobs, query optimization, and data quality enforcement.
- Collaborating in agile workflows (daily stand-ups, sprint planning) to deliver features rapidly while maintaining system stability.
- Ensuring security and compliance across data workflows, including access controls, encryption, and governance policies.

What You'll Need:
- BS/MS/Ph.D. in Computer Science or a related field, with 5+ years of experience building production-grade big data systems.
- Expertise in Scala/Java for Spark development, including optimization of batch/streaming jobs and debugging distributed workflows.
- A proven track record with Databricks, Hadoop/Spark ecosystems, and SQL/NoSQL databases (MySQL, Elasticsearch); cloud platforms (AWS EC2, S3, EMR) and infrastructure-as-code tools (Terraform, Kubernetes); and RESTful APIs, microservices architectures, and CI/CD automation.
- Leadership experience as a technical lead, including mentoring engineers and driving architectural decisions.
- Strong understanding of agile practices, distributed computing principles, and data lake architectures.
- Airflow orchestration (DAGs, operators, sensors) and integration with Spark/Databricks (see the sketch below).
- 7+ years of designing, modeling, and building big data pipelines in an enterprise work setting.

Nice-to-Haves:
- Experience with machine learning pipelines (Spark MLlib, Databricks ML) for predictive analytics.
- Knowledge of data governance frameworks and compliance standards (GDPR, CCPA).
- Contributions to open-source big data projects or published technical blogs/papers.
- DevOps proficiency in monitoring tools (Prometheus, Grafana) and serverless architectures.
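As a rough illustration of the Airflow orchestration skills this posting names (DAGs, operators, dependencies), here is a minimal sketch; the dag_id, schedule, and task bodies are hypothetical, and a real pipeline would likely use a Databricks or Spark provider operator rather than bare PythonOperators:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_documents(**context):
    # Placeholder: pull a batch of unstructured documents from the source.
    ...


def run_spark_transform(**context):
    # Placeholder: in practice, submit a Spark/Databricks job here.
    ...


with DAG(
    dag_id="documents_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_documents)
    transform = PythonOperator(task_id="transform", python_callable=run_spark_transform)

    # transform runs only after extract succeeds
    extract >> transform
```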

Posted 3 days ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site


Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Preferred Education: Master's Degree

Required Technical and Professional Expertise:
- Experience in big data technologies like Hadoop, Apache Spark, and Hive.
- Practical experience in Core Java (1.8 preferred), Python, or Scala.
- Experience with AWS cloud services, including S3, Redshift, EMR, etc.
- Strong expertise in RDBMS and SQL.
- Good experience in Linux and shell scripting.
- Experience building data pipelines using Apache Airflow.

Preferred Technical and Professional Experience:
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
- Ability to communicate results to technical and non-technical audiences.

Posted 3 days ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote


HackerOne is a global leader in offensive security solutions. Our HackerOne Platform combines AI with the ingenuity of the largest community of security researchers to find and fix security, privacy, and AI vulnerabilities across the software development lifecycle. The platform offers bug bounty, vulnerability disclosure, pentesting, AI red teaming, and code security. We are trusted by industry leaders like Amazon, Anthropic, Crypto.com, General Motors, GitHub, Goldman Sachs, Uber, and the U.S. Department of Defense. HackerOne was named a Best Workplace for Innovators by Fast Company in 2023 and a Most Loved Workplace for Young Professionals in 2024.

HackerOne Values
HackerOne is dedicated to fostering a strong and inclusive culture. HackerOne is Customer Obsessed and prioritizes customer outcomes in our decisions and actions. We Default to Disclosure by operating with transparency and integrity, ensuring trust and accountability. Employees, researchers, customers, and partners Win Together by fostering empowerment, inclusion, respect, and accountability.

Senior Analytics Engineer, DataOne
Location: Pune, India
This role requires the candidate to be based in Pune and work from an office 4 or 5 days a week. Please only apply if you're okay with these requirements.

Position Summary
HackerOne is seeking a Senior Analytics Engineer to join our DataOne team. You will lead the discovery, architecture, and development of high-impact, high-performance, scalable source-of-truth data marts and data products. Joining our growing, distributed organization, you'll be instrumental in building the foundation that powers HackerOne's one source of truth. As a Senior Analytics Engineer, you'll lead challenging projects and foster collaboration across the company. Leveraging your extensive technological expertise, domain knowledge, and dedication to business objectives, you'll drive innovation to propel HackerOne forward.

DataOne democratizes source-of-truth information and insights to enable all Hackeronies to ask the right questions, tell cohesive stories, and make rigorous decisions, so that HackerOne can delight our customers and empower the world to build a safer internet. The future is one where every Hackeronie is a catalyst for positive change, driving data-informed innovation while fostering our culture of transparency, collaboration, integrity, excellence, and respect for all.

What You Will Do:
- Your first 30 days will focus on getting to know HackerOne. You will join your new squad and begin onboarding: learn our technology stack (Python, Airflow, Snowflake, DBT, Meltano, Fivetran, Looker, AWS) and meet our Hackeronies.
- Within 60 days, you will deliver impact at a company level with consistent contributions to high-impact, high-performance, scalable source-of-truth data marts and data products.
- Within 90 days, you will drive the continuous evolution and innovation of data at HackerOne, identifying and leading new initiatives; additionally, you will foster cross-departmental collaboration to enhance these efforts.
- Deliver impact by developing the roadmap for continuously and iteratively launching high-impact, high-performance, scalable source-of-truth data marts and data products, and by leading and delivering cross-functional product and technical initiatives.
- Be a technical paragon and cross-functional force multiplier, autonomously determining where to apply focus, contributing at all levels, elevating your squad, and designing solutions to ambiguous business challenges in a fast-paced, early-stage environment.
- Drive continuous evolution and innovation, the adoption of emerging technologies, and the implementation of industry best practices.
- Champion a higher bar for discoverability, usability, reliability, timeliness, consistency, validity, uniqueness, simplicity, completeness, integrity, security, and compliance of information and insights across the company.
- Provide technical leadership and mentorship, fostering a culture of continuous learning and growth.

Minimum Qualifications:
- 6+ years of experience as an Analytics Engineer, Business Intelligence Engineer, Data Engineer, or similar role, with a proven track record of launching source-of-truth data marts.
- 6+ years of experience building and optimizing data pipelines, products, and solutions.
- Must be flexible to align with occasional evening meetings in US timezones.
- Extensive experience working with data technologies and tools such as Airflow, Snowflake, Meltano, Fivetran, DBT, and AWS.
- Expert in SQL for data manipulation in a fast-paced work environment.
- Expert in creating compelling data stories using data visualization tools such as Looker, Tableau, Sigma, Domo, or PowerBI.
- Proven track record of having substantial impact across the company, as well as externally for the company, demonstrating your ability to drive positive change and achieve significant results.
- English fluency, excellent communication skills, and the ability to present data-driven narratives in verbal, presentation, and written formats.
- Passion for working backwards from the customer and empathy for business stakeholders.
- Experience shaping the strategic vision for data.
- Experience working with Agile and iterative development processes.

Preferred Qualifications:
- Strong proficiency in at least one data programming language such as Python or R.
- Experience working within, and with data from, business applications such as Salesforce, Clari, Gainsight, Workday, GitLab, Slack, or Freshservice.
- Proven track record of driving innovation, adopting emerging technologies, and implementing industry best practices.
- Thrive on solving ambiguous problem statements in an early-stage environment.
- Experience designing advanced data visualizations and data-rich interfaces in Figma or equivalent.

Compensation Bands: Pune, India: ₹3.7M - ₹4.6M. Offers equity.

Job Benefits:
- Health (medical, vision, dental), life, and disability insurance*
- Equity stock options
- Retirement plans
- Paid public holidays and unlimited PTO
- Paid maternity and parental leave
- Leaves of absence (including caregiver leave and leave under CO's Healthy Families and Workplaces Act)
- Employee Assistance Program
- Flexible Work Stipend
*Eligibility may differ by country

We're committed to building a global team! For certain roles outside the United States, U.K., and the Netherlands, we partner with Remote.com as our Employer of Record (EOR). Visa/work permit sponsorship is not available. Employment at HackerOne is contingent on a background check.
HackerOne is an Equal Opportunity Employer in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, pregnancy, disability or veteran status, or any other protected characteristic as outlined by international, federal, state, or local laws. This policy applies to all HackerOne employment practices, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. HackerOne makes hiring decisions based solely on qualifications, merit, and business needs at the time. For US based roles only: Pursuant to the San Francisco Fair Chance Ordinance, all qualified applicants with arrest and conviction records will be considered for the position.

Posted 3 days ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Description

Responsibilities:
- Design, develop, implement, test, and maintain automated test suites and frameworks for AI/ML pipelines.
- Collaborate closely with ML engineers and data scientists to understand model architectures and data workflows.
- Develop and execute test plans, test cases, and test scripts to identify software defects in AI/ML applications.
- Ensure end-to-end quality of AI/ML solutions, including data integrity, model performance, and system integration.
- Implement continuous integration and continuous deployment (CI/CD) processes for ML pipelines.
- Conduct performance and scalability testing for AI/ML systems.
- Document and track software defects using bug-tracking systems, and report issues to development teams.
- Participate in code reviews and provide feedback on testability and quality.
- Help foster a culture of quality and continuous improvement within the ML engineering group.
- Stay updated with the latest trends and best practices in AI/ML testing and quality assurance.

Must-Haves:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 2+ years of experience in quality assurance, specifically testing AI/ML applications.
- Strong programming skills in Python, with experience in libraries like PyTest or unittest (a minimal sketch follows this listing).
- Familiarity with machine learning frameworks (TensorFlow, PyTorch, or scikit-learn).
- Experience with test automation tools and frameworks.
- Knowledge of CI/CD tools (Jenkins, GitLab CI, or similar).
- Experience with containerization technologies like Docker and orchestration systems like Kubernetes.
- Proficiency in Linux operating systems.
- Familiarity with version control systems like Git.
- Strong understanding of software testing methodologies and best practices.
- Excellent analytical and problem-solving skills.
- Excellent communication and collaboration skills.

Bonus Attributes:
- Experience testing data pipelines and ETL processes.
- Cloud platform experience; GCP, AWS, or Azure are acceptable.
- Knowledge of big data technologies like Apache Spark, Kafka, or Airflow.
- Experience with performance testing tools.
- Understanding of data science concepts and statistical analysis.
- Certifications in software testing or cloud technologies.

Abilities:
- Ability to work with a high level of initiative, accuracy, and attention to detail.
- Ability to prioritize multiple assignments effectively.
- Ability to meet established deadlines.
- Ability to interact successfully, efficiently, and professionally with staff and customers.
- Excellent organization skills.
- Critical thinking ability ranging from moderately to highly complex.
- Flexibility in meeting the business needs of the customer and the company.
- Ability to work creatively and independently with latitude and minimal supervision.
- Ability to utilize experience and judgment in accomplishing assigned goals.
- Experience navigating organizational structure.
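To make the PyTest requirement concrete, here is a minimal, self-contained sketch of an ML model quality test; the synthetic dataset, LogisticRegression model, and 0.8 accuracy threshold are illustrative assumptions, not this team's actual suite:

```python
import numpy as np
import pytest
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


@pytest.fixture(scope="module")
def trained_model():
    # Synthetic stand-in for a real training set.
    X, y = make_classification(n_samples=1000, random_state=42)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return model, X_te, y_te


def test_accuracy_above_threshold(trained_model):
    # Guard against silent model-performance regressions.
    model, X_te, y_te = trained_model
    assert accuracy_score(y_te, model.predict(X_te)) >= 0.8


def test_probabilities_well_formed(trained_model):
    # Basic output-contract checks: shape and valid probability range.
    model, X_te, _ = trained_model
    proba = model.predict_proba(X_te)
    assert proba.shape == (len(X_te), 2)
    assert np.all((proba >= 0) & (proba <= 1))
```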

Posted 3 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


A Snapshot of Your Day
Siemens Energy's "Digital Products and Solutions" (DPS) team supports all SE Business Areas in growing their digital business! We consult on, design, and develop tailored solutions for the customer market, based on their specific technical and commercial constraints. Our second mission is to professionalize, automate, and standardize SE software development.

How You'll Make an Impact:
- Designing and implementing scalable, secure, and high-performance database/data warehouse-based platforms and solutions.
- Designing and specifying the overall database/data warehouse structure based on functional and technical requirements.
- Developing logical and physical data models.
- Developing strategies for data acquisition, archive recovery, and database implementation.
- Managing data structures, performance management and tuning, data ingest into the databases, system monitoring, capacity management, availability management, and backup and restore.
- Ensuring compliance with security and data protection policies.
- Implementing hands-on proofs of concept aimed at automation and improvement of the data platform.

What You Bring:
- Bachelor's degree or higher in Computer Science or a related technical field.
- 5+ years of experience, with at least 3 years as a Data Engineer in high-volume ETL/data warehousing projects.
- Strong RDBMS skills and experience with PostgreSQL and Snowflake.
- Knowledge of Python.
- Extensive experience in the development, operations, and administration of data warehouse solutions.
- Experience with ETL solutions (preferably Airflow) in a complex, high-volume data environment.
- Knowledge of DevOps concepts and experience with continuous integration and continuous delivery (CI/CD) tools.
- Strong data analysis skills.
- Knowledge of AWS services such as Lambda, SQS, and SNS.

About the Team
Our team belongs to the Digital Products and Solutions function within Siemens Energy. Our mission is to grow the digital software business and develop solutions and products for both internal and external customers. These solutions include edge computing and applications, on-site sensor technology integration, cloud-based platforms, and cloud-based software solutions and applications. The solutions, applications, and platforms we provide allow acquired data to be used to improve the operation and maintenance of power plants and industrial facilities of all sizes; this includes the development of digital twins, analytics platforms and agents, and artificial intelligence and machine learning applications and algorithms.

Who is Siemens Energy?
At Siemens Energy, we are more than just an energy technology company. With ~100,000 dedicated employees in more than 90 countries, we develop the energy systems of the future, ensuring that the growing energy demand of the global community is met reliably and sustainably. The technologies created in our research departments and factories drive the energy transition and provide the base for one sixth of the world's electricity generation. Our global team is committed to making sustainable, reliable, and affordable energy a reality by pushing the boundaries of what is possible. We uphold a 150-year legacy of innovation that encourages our search for people who will support our focus on decarbonization, new technologies, and energy transformation. Find out how you can make a difference at Siemens Energy: https://www.siemens-energy.com/employeevideo

Our Commitment to Diversity
Lucky for us, we are not all the same. Through diversity, we generate power. We run on inclusion, and our combined creative energy is fueled by over 130 nationalities. Siemens Energy celebrates character, no matter what ethnic background, gender, age, religion, identity, or disability. We energize society, all of society, and we do not discriminate based on our differences.

Rewards/Benefits:
- All employees are automatically covered under the medical insurance, with a company-paid, considerable family floater cover for the employee, spouse, and two dependent children up to 25 years of age.
- Siemens Energy provides an option to opt for a meal card, per the terms and conditions prescribed in company policy, as a part of CTC and a tax-saving measure.
- Flexi Pay empowers employees to customize the amount in some salary components within a defined range, thereby optimizing tax benefits. Accordingly, each employee is empowered to decide on the best possible net income out of the same fixed individual base pay on a monthly basis.

https://jobs.siemens-energy.com/jobs

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Experience: 5+ years
Location: Bangalore, Hyderabad, Chennai, Trivandrum, Cochin
Skills: Python, GCP, BigQuery, Java
Notice period: Immediate to 30 days

We are looking for skilled Data Engineers to join our team, specializing in data migration, integration, and pipeline development using Google Cloud Platform (GCP) and BigQuery.

Key Responsibilities:
- Migrate data from SQL Server and on-prem databases to Google BigQuery (see the sketch after this listing).
- Develop and optimize ETL pipelines using Apache Airflow, Python, or Spark.
- Analyze and refactor existing SSIS packages for GCP compatibility.
- Integrate data from diverse sources, including APIs and external databases.
- Write complex SQL queries, develop views, and optimize stored procedures in BigQuery.

Required Skills:
- Strong experience with Python, GCP, BigQuery, Java, SQL, and Spark.
- Hands-on experience in data warehousing, data migration, and pipeline development.
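A minimal sketch of the SQL Server-to-BigQuery migration step this listing describes, using the google-cloud-bigquery client; the project, dataset, and CSV export names are hypothetical placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

# Load a CSV previously exported from the on-prem SQL Server instance.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
with open("orders.csv", "rb") as f:
    load_job = client.load_table_from_file(
        f, "my-project.legacy.orders", job_config=job_config
    )
load_job.result()  # block until the load job finishes

# Validate the migrated row count with a standard SQL query.
rows = client.query("SELECT COUNT(*) AS n FROM `my-project.legacy.orders`").result()
print(next(iter(rows)).n)
```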

Posted 3 days ago

Apply

9.0 years

0 Lacs

Pune, Maharashtra, India

On-site


📈 Experience: 9+ Years
📍 Location: Pune
📢 Candidates available immediately or within 15 days are highly encouraged to apply!
🔧 Primary Skills: Data Engineer, Lead, Architect, Python, SQL, Apache Airflow, Apache Spark, AWS (S3, Lambda, Glue)

Job Overview
We are seeking a highly skilled Data Architect / Data Engineering Lead with over 9 years of experience to drive the architecture and execution of large-scale, cloud-native data solutions. This role demands deep expertise in Python, SQL, Apache Spark, and Apache Airflow, along with extensive hands-on experience with AWS services. You will lead a team of engineers, design robust data platforms, and ensure scalable, secure, and high-performance data pipelines in a cloud-first environment.

Key Responsibilities

Data Architecture & Strategy:
- Architect end-to-end data platforms on AWS using services such as S3, Redshift, Glue, EMR, Athena, Lambda, and Step Functions.
- Design scalable, secure, and reliable data pipelines and storage solutions.
- Establish data modeling standards, metadata practices, and data governance frameworks.

Leadership & Collaboration:
- Lead, mentor, and grow a team of data engineers, ensuring delivery of high-quality, well-documented code.
- Collaborate with stakeholders across engineering, analytics, and product to align data initiatives with business objectives.
- Champion best practices in data engineering, including reusability, scalability, and observability.

Pipeline & Platform Development:
- Develop and maintain scalable ETL/ELT pipelines using Apache Airflow, Apache Spark, and AWS Glue.
- Write high-performance data processing code using Python and SQL.
- Manage data workflows and orchestrate complex dependencies using Airflow and AWS Step Functions.

Monitoring, Security & Optimization:
- Ensure data reliability, accuracy, and security across all platforms.
- Implement monitoring, logging, and alerting for data pipelines using AWS-native and third-party tools.
- Optimize cost, performance, and scalability of data solutions on AWS.

Required Qualifications:
- 9+ years of experience in data engineering or related fields, with at least 2 years in a lead or architect role.
- Proven experience with Python and SQL for large-scale data processing; Apache Spark for batch and streaming data; Apache Airflow for workflow orchestration; and AWS cloud services including, but not limited to, S3, Redshift, EMR, Glue, Athena, Lambda, IAM, and CloudWatch.
- Strong understanding of data modeling, distributed systems, and modern data architecture patterns.
- Excellent leadership, communication, and stakeholder management skills.

Preferred Qualifications:
- Experience implementing data platforms using AWS Lakehouse architecture.
- Familiarity with Docker, Kubernetes, or similar container/orchestration systems.
- Knowledge of CI/CD and DevOps practices for data engineering.
- Understanding of data privacy and compliance standards (GDPR, HIPAA, etc.).

Posted 4 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


#Hiring #ProductManager
A Product Manager with GCP experience is mandatory.
Mode: Hybrid
Location: Chennai
Skills Preferred: JIRA, Python, GCP, GCP Cloud Run, Angular, Airflow, BigQuery, Terraform, LLM, Cycode, Dynatrace, Checkmarx, FOSSA

Job details: We are looking for a technically minded Product Manager in Chennai to drive strategy, execution, and compliance for our software development. You'll define roadmaps, manage backlogs in JIRA, collaborate with engineers, ensure technical quality, uphold compliance standards, and communicate effectively with stakeholders, all while focusing on delivering high-impact customer value and maintaining product health.

Posted 4 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: AWS Data Engineer
Location: Pan India
Experience: 6-8 Years
Job Type: Contract to Hire
Notice Period: Immediate Joiners
Mandatory Skills: AWS services (S3, Lambda, Redshift, Glue), Python, PySpark, SQL

Job Description:
At Storable, we're on a mission to power the future of storage. Our innovative platform helps businesses manage, track, and grow their self-storage operations, and we're looking for a Data Manager to join our data-driven team. Storable is committed to leveraging cutting-edge technologies to improve the efficiency, accessibility, and insights derived from data, empowering our team to make smarter decisions and foster impactful growth.

As a Data Manager, you will play a pivotal role in overseeing and shaping our data operations, ensuring that our data is organized, accessible, and effectively managed across the organization. You will lead a talented team, work closely with cross-functional teams, and drive the development of strategies to enhance data quality, availability, and security.

Key Responsibilities:
- Lead data management strategy: define and execute the data management vision, strategy, and best practices, ensuring alignment with Storable's business goals and objectives.
- Oversee data pipelines: design, implement, and maintain scalable data pipelines using industry-standard tools to efficiently process and manage large-scale datasets (a sketch of a typical Glue job appears after this listing).
- Ensure data quality & governance: implement data governance policies and frameworks to ensure data accuracy, consistency, and compliance across the organization.
- Manage cross-functional collaboration: partner with engineering, product, and business teams to make data accessible and actionable, and ensure it drives informed decision-making.
- Optimize data infrastructure: leverage modern data tools and platforms (AWS, Apache Airflow, Apache Iceberg) to create an efficient, reliable, and scalable data infrastructure.
- Monitor and improve performance; provide mentorship and leadership: lead and develop a team of data engineers and analysts, fostering a collaborative environment where innovation and continuous improvement are valued.

Qualifications:
- Proven expertise in data management: significant experience managing data infrastructure and data governance, and optimizing data pipelines at scale.
- Technical proficiency: strong hands-on experience with data tools and platforms such as Apache Airflow, Apache Iceberg, and AWS services (S3, Lambda, Redshift, Glue).
- Data pipeline mastery: familiarity with designing, implementing, and optimizing data pipelines and workflows in Python or other languages for data processing.
- Experience with data governance: solid understanding of data privacy, quality control, and governance best practices.
- Leadership skills: ability to lead and mentor teams, influence stakeholders, and drive data initiatives across the organization.
- Analytical mindset: strong problem-solving abilities and a data-driven approach to improving business operations.
- Excellent communication: ability to communicate complex data concepts to both technical and non-technical stakeholders effectively.

Bonus Points: Experience with visualization tools (Looker, Tableau) and reporting frameworks to provide actionable insights.
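For illustration, here is a minimal AWS Glue PySpark job of the kind the pipeline bullets imply; the bucket names and columns are hypothetical, and the awsglue modules are only available inside the Glue runtime:

```python
import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())
spark = glue.spark_session

# Read raw CSVs from a hypothetical landing bucket.
df = spark.read.option("header", "true").csv("s3://landing-bucket/facilities/")

# Light cleanup, then write partitioned Parquet to the curated zone.
(
    df.dropDuplicates(["facility_id"])
    .write.mode("overwrite")
    .partitionBy("state")
    .parquet("s3://curated-bucket/facilities/")
)
```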

Posted 4 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About Sanofi
We are an innovative global healthcare company, driven by one purpose: we chase the miracles of science to improve people's lives. Our team, across some 100 countries, is dedicated to transforming the practice of medicine by working to turn the impossible into the possible. We provide potentially life-changing treatment options and life-saving vaccine protection to millions of people globally, while putting sustainability and social responsibility at the center of our ambitions.

Sanofi has recently embarked on a vast and ambitious digital transformation program. A cornerstone of this roadmap is the acceleration of its data transformation and of the adoption of artificial intelligence (AI) and machine learning (ML) solutions that will accelerate Manufacturing & Supply performance and help bring drugs and vaccines to patients faster, to improve health and save lives.

Who You Are:
You are a dynamic Data Engineer interested in challenging the status quo to design and develop globally scalable solutions needed by Sanofi's advanced analytics, AI, and ML initiatives for the betterment of our global patients and customers. You are a valued influencer and leader who has contributed to making key datasets available to data scientists, analysts, and consumers throughout the enterprise to meet vital business needs. You have a keen eye for improvement opportunities while continuing to fully comply with all data quality, security, and governance standards.

Our Vision for Digital, Data Analytics and AI
Join us on our journey in enabling Sanofi's digital transformation through becoming an AI-first organization. This means:
- AI Factory, versatile teams operating in cross-functional pods: utilizing digital and data resources to develop AI products, bringing data management, AI, and product development skills to products, programs, and projects to create an agile, fulfilling, and meaningful work environment.
- Leading-edge tech stack: experience building products that will be deployed globally on a leading-edge tech stack.
- World-class mentorship and training: working with renowned leaders and academics in machine learning to further develop your skill sets.

There are multiple vacancies across our Digital profiles and the NA region. Further assessments will be completed to determine the specific function and level of hired candidates.

Job Highlights:
- Propose and establish technical designs to meet business and technical requirements.
- Develop and maintain data engineering solutions based on requirements and design specifications using appropriate tools and technologies.
- Create data pipelines / ETL pipelines and optimize performance.
- Test and validate developed solutions to ensure they meet requirements.
- Create design and development documentation based on standards for knowledge transfer, training, and maintenance.
- Work with business and product teams to understand requirements and translate them into technical needs.
- Adhere to, and promote, best practices and standards for code management, automated testing, and deployments.
- Leverage existing, or create new, standard data pipelines within Sanofi to bring value through business use cases.
- Develop automated tests for CI/CD pipelines.
- Gather and organize large and complex data assets, and perform relevant analysis.
- Conduct peer reviews for quality, consistency, and rigor for production-level solutions.
- Actively contribute to the Data Engineering community and define leading practices and frameworks.
- Communicate results and findings in a clear, structured manner to stakeholders.
- Remain up to date on the company's standards, industry practices, and emerging technologies.

Key Functional Requirements & Qualifications:
- Experience working with cross-functional teams to solve complex data architecture and engineering problems.
- Demonstrated ability to learn new data and software engineering technologies in a short amount of time.
- Good understanding of agile/scrum development processes and concepts.
- Able to work in a fast-paced, constantly evolving environment and manage multiple priorities.
- Strong technical analysis and problem-solving skills related to data and technology solutions.
- Excellent written, verbal, and interpersonal skills with the ability to communicate ideas, concepts, and solutions to peers and leaders.
- Pragmatic and capable of solving complex issues, with technical intuition and attention to detail.
- Service-oriented, flexible, and approachable team player.
- Fluent in English (other languages a plus).

Key Technical Requirements & Qualifications:
- Bachelor's degree or equivalent in Computer Science, Engineering, or a relevant field.
- 4 to 5+ years of experience in data engineering, integration, data warehousing, business intelligence, business analytics, or a comparable role with relevant technologies and tools, such as Spark/Scala and Informatica/IICS/dbt.
- Understanding of data structures and algorithms.
- Working knowledge of scripting languages (Python, shell scripting).
- Experience in cloud-based data platforms (Snowflake is a plus).
- Experience with job scheduling and orchestration (Airflow is a plus).
- Good knowledge of SQL and relational database technologies/concepts.
- Experience working with data models and query tuning.

Nice to Haves:
- Experience working in the life sciences/pharmaceutical industry is a plus.
- Familiarity with data ingestion through batch, near-real-time, and streaming environments.
- Familiarity with data warehouse concepts and architectures (data mesh a plus).
- Familiarity with source code management tools (GitHub a plus).

Pursue Progress. Discover Extraordinary.
Better is out there. Better medications, better outcomes, better science. But progress doesn't happen without people: people from different backgrounds, in different locations, doing different roles, all united by one thing, a desire to make miracles happen. So, let's be those people.

Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!

Posted 4 days ago

Apply

5.0 - 10.0 years

0 Lacs

Bhopal, Madhya Pradesh, India

On-site


Role: Data Engineers (5-10 Years of Experience)
Experience: 5-10 years
Location: Gurgaon, Pune, Bangalore, Chennai, Jaipur, and Bhopal
Skills: Python/Scala, SQL, ETL, Big Data (Spark, Kafka, Hive), Cloud (AWS/Azure/GCP), Data Warehousing

Responsibilities:
- Build and maintain robust, scalable data pipelines and systems.
- Design and implement ETL processes to support analytics and reporting.
- Optimize data workflows for performance and scalability.
- Collaborate with data scientists, analysts, and engineering teams.
- Ensure data quality, governance, and security compliance.

Required Skills:
- Strong experience with Python/Scala, SQL, and ETL tools.
- Hands-on experience with Big Data technologies (Hadoop, Spark, Kafka, Hive, etc.).
- Proficiency in cloud platforms (AWS/GCP/Azure).
- Experience with data warehousing (e.g., Redshift, Snowflake, BigQuery).
- Familiarity with CI/CD pipelines and version control systems.

Nice to Have:
- Experience with Airflow, Databricks, or dbt.
- Knowledge of real-time data processing.

Posted 4 days ago

Apply

3.0 - 4.0 years

0 Lacs

Mumbai Metropolitan Region

Remote


Job Title: Data Scientist - Computer Vision & Generative AI
Location: Mumbai
Experience Level: 3 to 4 years
Employment Type: Full-time
Industry: Renewable Energy / Solar Services

Job Overview
We are seeking a talented and motivated Data Scientist with a strong focus on computer vision, generative AI, and machine learning to join our growing team in the solar services sector. You will play a pivotal role in building AI-driven solutions that transform how solar infrastructure is analyzed, monitored, and optimized using image-based intelligence. From drone and satellite imagery to on-ground inspection photos, your work will enable intelligent automation, predictive analytics, and visual understanding in critical areas like fault detection, panel degradation, site monitoring, and more. If you're passionate about working at the cutting edge of AI for real-world sustainability impact, we'd love to hear from you.

Key Responsibilities:
- Design, develop, and deploy computer vision models for tasks such as object detection, classification, segmentation, and anomaly detection (a toy sketch follows this listing).
- Work with generative AI techniques (e.g., GANs, diffusion models) to simulate environmental conditions, enhance datasets, or create synthetic training data.
- Build ML pipelines for end-to-end model training, validation, and deployment using Python and modern ML frameworks.
- Analyze drone, satellite, and on-site images to extract meaningful insights for solar panel performance, wear-and-tear detection, and layout optimization.
- Collaborate with cross-functional teams (engineering, field ops, product) to understand business needs and translate them into scalable AI solutions.
- Continuously experiment with the latest models, frameworks, and techniques to improve model performance and robustness.
- Optimize image pipelines for performance, scalability, and edge/cloud deployment.

Key Requirements:
- 3-4 years of hands-on experience in data science, with a strong portfolio of computer vision and ML projects.
- Proven expertise in Python and common data science libraries: NumPy, Pandas, scikit-learn, etc.
- Proficiency with image-based AI frameworks: OpenCV, PyTorch or TensorFlow, Detectron2, YOLOv5/v8, MMDetection, etc.
- Experience with generative AI models like GANs, Stable Diffusion, or ControlNet for image generation or augmentation.
- Experience building and deploying ML models using MLflow, TorchServe, or TensorFlow Serving.
- Familiarity with image annotation tools (e.g., CVAT, Labelbox) and data versioning tools (e.g., DVC).
- Experience with cloud platforms (AWS, GCP, or Azure) for storage, training, or model deployment.
- Experience with Docker, Git, and CI/CD pipelines for reproducible ML workflows.
- Ability to write clean, modular code and a solid understanding of software engineering best practices in AI/ML projects.
- Strong problem-solving skills, curiosity, and the ability to work independently in a fast-paced environment.

Bonus / Preferred Skills:
- Experience with remote sensing and working with satellite or drone imagery.
- Exposure to MLOps practices and tools like Kubeflow, Airflow, or SageMaker Pipelines.
- Knowledge of solar technologies, photovoltaic systems, or renewable energy is a plus.
- Familiarity with edge computing for vision applications on IoT devices or drones.
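As a toy illustration of the image-classification work described above, here is a minimal PyTorch/torchvision inference sketch; the pretrained ResNet stands in for a hypothetical panel-defect classifier, and panel.jpg is a placeholder:

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Pretrained backbone as a stand-in for a fine-tuned defect classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("panel.jpg").convert("RGB")  # hypothetical inspection photo
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
print(logits.argmax(dim=1).item())  # predicted class index
```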

Posted 4 days ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site


You Lead the Way. We've Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities, and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you'll be recognized for your contributions, leadership, and impact—every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together. American Express has embarked on an exciting transformation driven by an energetic new team of high performers. This is a great opportunity to join the Customer Marketing organization within American Express Technologies and become a driver of this exciting journey. We are looking for a highly skilled and experienced Senior Engineer with a history of building Bigdata, GCP Cloud, Python and Spark applications. The Senior Engineer will play a crucial role in designing, implementing, and optimizing data solutions to support our organization's data-driven initiatives. This role requires expertise in data engineering, strong problem-solving abilities, and a collaborative mindset to work effectively with various stakeholders. Joining the Enterprise Marketing team, this role will be focused on the delivery of innovative solutions to satisfy the needs of our business. As an agile team we work closely with our business partners to understand what they require, and we strive to continuously improve as a team. We pride ourselves on a culture of kindness and positivity, and a continuous focus on supporting colleague development to help you achieve your career goals. We lead with integrity, and we emphasize work/life balance for all of our teammates. How will you make an impact in this role? There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing: As a part of our team, you will be developing innovative, high quality, and robust operational engineering capabilities. Develop software in our technology stack which is constantly evolving but currently includes Big data, Spark, Python, Scala, GCP, Adobe Suit ( like Customer Journey Analytics ). Work with Business partners and stakeholders to understand functional requirements, architecture dependencies, and business capability roadmaps. Create technical solution designs to meet business requirements. Define best practices to be followed by team. Taking your place as a core member of an Agile team driving the latest development practices Identify and drive reengineering opportunities, and opportunities for adopting new technologies and methods. Suggest and recommend solution architecture to resolve business problems. Perform peer code review and participate in technical discussions with the team on the best solutions possible. As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers' digital lives. 
Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. American Express offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex.

Minimum Qualifications

  • BS or MS degree in computer science, computer engineering, or other technical discipline, or equivalent work experience.
  • 8+ years of hands-on software development experience with Big Data and analytics solutions: Hadoop, Hive, Spark, Scala, Python, shell scripting, and GCP services such as BigQuery, Bigtable, and Airflow.
  • Working knowledge of the Adobe suite, such as Adobe Experience Platform and Adobe Customer Journey Analytics.
  • Proficiency in SQL and database systems, with experience in designing and optimizing data models for performance and scalability.
  • Design and development experience with Kafka, real-time ETL pipelines, and APIs is desirable.
  • Experience in designing, developing, and optimizing data pipelines for large-scale data processing, transformation, and analysis using Big Data and GCP technologies.
  • Certification in a cloud platform (GCP Professional Data Engineer) is a plus.
  • Understanding of distributed (multi-tiered) systems, data structures, algorithms, and design patterns.
  • Strong object-oriented programming skills and design patterns.
  • Experience with CI/CD pipelines, automated test frameworks, and source code management tools (XLR, Jenkins, Git, Maven).
  • Good knowledge of and experience with configuration management tools like GitHub.
  • Ability to analyze complex data engineering problems, propose effective solutions, and implement them effectively.
  • Looks proactively beyond the obvious for continuous improvement opportunities.
  • Communicates effectively with product and cross-functional teams.
  • Willingness to learn new technologies and leverage them to their optimal potential.
  • Understanding of various SDLC methodologies, and familiarity with Agile and Scrum ceremonies.

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:

  • Competitive base salaries
  • Bonus incentives
  • Support for financial well-being and retirement
  • Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
  • Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
  • Generous paid parental leave policies (depending on your location)
  • Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
  • Free and confidential counseling support through our Healthy Minds program
  • Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law.
Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 4 days ago

Apply

7.0 - 15.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


  • Data Architecture Design: Develop and maintain a comprehensive data architecture strategy that aligns with the business objectives and technology landscape.
  • Data Modeling: Create and manage logical, physical, and conceptual data models to support various business applications and analytics.
  • Database Design: Design and implement database solutions, including data warehouses, data lakes, and operational databases.
  • Data Integration: Oversee the integration of data from disparate sources into unified, accessible systems using ETL/ELT processes.
  • Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, consistency, and security.
  • Technology Evaluation: Evaluate and recommend data management tools, technologies, and best practices to improve data infrastructure and processes.
  • Collaboration: Work closely with data engineers, data scientists, business analysts, and other stakeholders to understand data requirements and deliver effective solutions.
  • Documentation: Create and maintain documentation related to data architecture, data flows, data dictionaries, and system interfaces.
  • Performance Tuning: Optimize database performance through tuning, indexing, and query optimization.
  • Security: Ensure data security and privacy by implementing best practices for data encryption, access controls, and compliance with relevant regulations (e.g., GDPR, CCPA).

Requirements

  • Helping project teams with solutions architecture, troubleshooting, and technical implementation assistance.
  • Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, Oracle, SQL Server).
  • Minimum 7 to 15 years of experience in data architecture or related roles.
  • Experience with big data technologies (e.g., Hadoop, Spark, Kafka, Airflow).
  • Expertise with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services.
  • Knowledge of data integration tools (e.g., Informatica, Talend, Fivetran, Meltano).
  • Understanding of data warehousing concepts and tools (e.g., Snowflake, Redshift, Synapse, BigQuery).
  • Experience with data governance frameworks and tools.

Posted 4 days ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote


Overview

About this role: We are looking for an innovative, hands-on technology leader to run Global Data Operations for one of the largest global FinTechs. This is a new role that will transform how we manage and process high-quality data at scale, and reflects our commitment to invest in an Enterprise Data Platform to unlock our data strategy for BlackRock and our Aladdin Client Community. A technology-first mindset, to manage and run a modern global data operations function with high levels of automation and engineering, is essential. This role requires a deep understanding of data, domains, and the associated controls.

Key Responsibilities

The ideal candidate will be a high-energy, technology- and data-driven individual who has a track record of leading and doing the day-to-day operations.

  • Ensure on-time, high-quality data delivery with a single pane of glass for data pipeline observability and support
  • Live and breathe data ops best practices across culture, processes, and technology
  • Partner cross-functionally to enhance existing data sets, eliminating manual inputs and ensuring high quality, and to onboard new data sets
  • Lead change while ensuring daily operational excellence, quality, and control
  • Build and maintain deep alignment with key internal partners on ops tooling and engineering
  • Foster an agile, collaborative culture which is creative, open, supportive, and dynamic

Knowledge And Experience

  • 8+ years' experience in hands-on data operations, including data pipeline monitoring and engineering
  • Technical expertise, including experience with data processing, orchestration (Airflow), data ingestion, cloud-based databases/warehousing (Snowflake), and business intelligence tools
  • The ability to operate and monitor large data sets through the data lifecycle, including the tooling and observability required to ensure data quality and control at scale
  • Experience implementing, monitoring, and operating data pipelines that are fast, scalable, reliable, and accurate
  • Understanding of modern-day data highways, the associated challenges, and effective controls
  • Passionate about data platforms, data quality, and everything data
  • Practical and detail-oriented operations leader
  • Inquisitive leader who will bring new ideas that challenge the status quo
  • Ability to navigate a large, highly matrixed organization
  • Strong presence with clients
  • Bachelor's Degree in Computer Science, Engineering, Mathematics or Statistics

Our Benefits

To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model

BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.
About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.

Posted 4 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Join us as a Software Engineer

  • This is an opportunity for a driven Software Engineer to take on an exciting new career challenge
  • Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority
  • It's a chance to hone your existing technical skills and advance your career
  • We're offering this role at associate level

What You'll Do

In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure and robust solutions. You'll be working within a feature team and using your extensive experience to engineer software, scripts and tools that are often complex, as well as liaising with other engineers, architects and business analysts across the platform.

You'll Also Be

  • Producing complex and critical software rapidly and of high quality which adds value to the business
  • Working in permanent teams who are responsible for the full life cycle, from initial development, through enhancement and maintenance to replacement or decommissioning
  • Collaborating to optimise our software engineering capability
  • Designing, producing, testing and implementing our working code
  • Working across the life cycle, from requirements analysis and design, through coding to testing, deployment and operations

The Skills You'll Need

You'll need at least five years of experience in data sourcing, including real-time data integration, and a certification in AWS cloud.

You'll Also Need

  • Experience in AWS Cloud, Airflow, and associated data migration from on-premise to cloud, with knowledge of databases like Snowflake, AWS Data Lake, PostgreSQL, Oracle, MongoDB and AWS DynamoDB
  • Experience in multiple programming languages or low-code toolsets, Kafka and StreamSets
  • Experience of DevOps, testing and Agile methodology and associated toolsets
  • A background in solving highly complex, analytical and numerical problems
  • Experience of implementing programming best practice, especially around scalability, automation, virtualisation, optimisation, availability and performance

Posted 4 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Join us as a Data Engineering Lead

  • This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences
  • You'll be simplifying the bank by developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers' and the bank's data safe and secure
  • Participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank
  • We're recruiting for multiple roles across a range of levels, up to and including experienced managers

What you'll do

We'll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing and data transformation. You'll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering, leading a team of data engineers.

We'll Also Expect You To Be

  • Working with Data Scientists and Analytics Labs to translate analytical model code to well-tested, production-ready code
  • Helping to define common coding standards and model monitoring performance best practices
  • Owning and delivering the automation of data engineering pipelines through the removal of manual stages
  • Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development
  • Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight
  • Leading and delivering data engineering strategies to build a scalable data architecture and customer feature-rich dataset for data scientists
  • Leading and developing solutions for streaming data ingestion and transformations in line with streaming strategy

The skills you'll need

To be successful in this role, you'll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You'll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data. We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure.

You'll Also Demonstrate

  • Knowledge of core computer science concepts such as common data structures and algorithms, profiling or optimisation
  • An understanding of machine learning, information retrieval or recommendation systems
  • Good working knowledge of CI/CD tools
  • Knowledge of programming languages in data engineering such as Python or PySpark, SQL, Java, and Scala
  • An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow
  • Knowledge of messaging, event or streaming technology such as Apache Kafka
  • Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
  • Extensive experience using RDBMS, ETL pipelines, Python, Hadoop and SQL

Posted 4 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Join us as a Machine Learning Engineer

  • In this role, you'll be driving and embedding the deployment, automation, maintenance and monitoring of machine learning models and algorithms
  • Day-to-day, you'll make sure that models and algorithms work effectively in a production environment while promoting data literacy education with business stakeholders
  • If you see opportunities where others see challenges, you'll find that this solutions-driven role will be your chance to solve new problems and enjoy excellent career development

What you'll do

Your daily responsibilities will include collaborating with colleagues to design and develop advanced machine learning products which power our group for our customers. You'll also codify and automate complex machine learning model productions, including pipeline optimisation. We'll expect you to transform advanced data science prototypes and apply machine learning algorithms and tools. You'll also plan, manage, and deliver larger or complex projects, involving a variety of colleagues and teams across our business.

You'll Also Be Responsible For

  • Understanding the complex requirements and needs of business stakeholders, developing good relationships, and understanding how machine learning solutions can support our business strategy
  • Working with colleagues to productionise machine learning models, including pipeline design, development, testing and deployment, so the original intent is carried over to production
  • Creating frameworks to ensure robust monitoring of machine learning models within a production environment, making sure they deliver quality and performance
  • Understanding and addressing any shortfalls, for instance, through retraining
  • Leading direct reports and wider teams in an Agile way within multi-disciplinary data and analytics teams to achieve agreed project and Scrum outcomes

The skills you'll need

To be successful in this role, you'll need a good academic background in a STEM discipline, such as Mathematics, Physics, Engineering or Computer Science. You'll also have the ability to use data to solve business problems, from hypotheses through to resolution. We'll look to you to have at least twelve years of experience with machine learning on large datasets, as well as experience building, testing, supporting, and deploying advanced machine learning models into a production environment using modern CI/CD tools, including Git, TeamCity and CodeDeploy.

You'll Also Need

  • A good understanding of machine learning approaches and algorithms such as supervised or unsupervised learning, deep learning, and NLP, with a strong focus on model development, deployment, and optimisation
  • Experience using Python with libraries such as NumPy, Pandas, Scikit-learn, and TensorFlow or PyTorch
  • An understanding of PySpark for distributed data processing and manipulation, along with AWS (Amazon Web Services) services including EC2, S3, Lambda, SageMaker, and other cloud tools
  • Experience with data processing frameworks such as Apache Kafka and Apache Airflow, containerisation technologies such as Docker, and orchestration tools such as Kubernetes
  • Experience of building GenAI solutions to automate workflows to improve productivity and efficiency

Posted 4 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Join us as a Software Engineer

  • This is an opportunity for a driven Software Engineer to take on an exciting new career challenge
  • Day-to-day, you'll be engineering and maintaining innovative, customer-centric, high-performance, secure and robust solutions
  • It's a chance to hone your existing technical skills and advance your career while building a wide network of stakeholders
  • We're offering this role at associate level

What you'll do

In your new role, you'll be working within a feature team to engineer software, scripts and tools, as well as liaising with other engineers, architects and business analysts across the platform.

You'll Also Be

  • Producing complex and critical software rapidly and of high quality which adds value to the business
  • Working in permanent teams who are responsible for the full life cycle, from initial development, through enhancement and maintenance to replacement or decommissioning
  • Collaborating to optimise our software engineering capability
  • Designing, producing, testing and implementing our working software solutions
  • Working across the life cycle, from requirements analysis and design, through coding to testing, deployment and operations

The skills you'll need

To take on this role, you'll need at least four years of experience in software engineering, software design, and architecture, and an understanding of how your area of expertise supports our customers.

You'll Also Need

  • Experience of working with development and testing tools, bug tracking tools and wikis
  • Experience in AWS native services, particularly S3, Glue, Lambda, IAM, and Elastic MapReduce
  • Strong proficiency in Terraform for AWS cloud, and in Python for developing AWS Lambdas, Airflow DAGs and shell scripting
  • Experience with Apache Airflow for workflow orchestration
  • Experience of DevOps and Agile methodology and associated toolsets

Posted 4 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Join us as a Data Engineering Lead

  • This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences
  • You'll be simplifying the bank by developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers' and the bank's data safe and secure
  • Participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank
  • We're recruiting for multiple roles across a range of levels, up to and including experienced managers

What you'll do

We'll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing and data transformation. You'll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering, leading a team of data engineers.

We'll Also Expect You To Be

  • Working with Data Scientists and Analytics Labs to translate analytical model code to well-tested, production-ready code
  • Helping to define common coding standards and model monitoring performance best practices
  • Owning and delivering the automation of data engineering pipelines through the removal of manual stages
  • Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development
  • Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight
  • Leading and delivering data engineering strategies to build a scalable data architecture and customer feature-rich dataset for data scientists
  • Leading and developing solutions for streaming data ingestion and transformations in line with streaming strategy

The skills you'll need

To be successful in this role, you'll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You'll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data. We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure.

You'll Also Demonstrate

  • Knowledge of core computer science concepts such as common data structures and algorithms, profiling or optimisation
  • An understanding of machine learning, information retrieval or recommendation systems
  • Good working knowledge of CI/CD tools
  • Knowledge of programming languages in data engineering such as Python or PySpark, SQL, Java, and Scala
  • An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow
  • Knowledge of messaging, event or streaming technology such as Apache Kafka
  • Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
  • Extensive experience using RDBMS, ETL pipelines, Python, Hadoop and SQL

Posted 4 days ago

Apply

7.0 years

0 Lacs

Greater Kolkata Area

On-site


Job Description

Job Title: Automation Tester - Selenium, Python, Databricks

Candidate Specification: 7+ years, immediate to 30 days.

  • Experience with automated testing.
  • Ability to code and read a programming language (Python).
  • Experience in pytest and Selenium (Python).
  • Experience working with large datasets and complex data environments.
  • Experience with Airflow, Databricks, Data Lake, PySpark.
  • Knowledge and working experience in Agile methodologies.
  • Experience in CI/CD/CT methodology.
  • Experience in test methodologies.

Skills Required

Role: Automation Tester
Industry Type: IT/Computers - Software
Required Education: B Tech
Employment Type: Full Time, Permanent
Key Skills: Selenium, Python, Databricks

Other Information

Job Code: GO/JC/100/2025
Recruiter Name: Sheena Rakesh

Posted 4 days ago

Apply


Exploring Airflow Jobs in India

The airflow job market in India is rapidly growing as more companies are adopting data pipelines and workflow automation. Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with expertise in airflow can find lucrative opportunities in various industries such as technology, e-commerce, finance, and more.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Gurgaon

Average Salary Range

The average salary range for airflow professionals in India varies based on experience level:

  • Entry-level: INR 6-8 lakhs per annum
  • Mid-level: INR 10-15 lakhs per annum
  • Experienced: INR 18-25 lakhs per annum

Career Path

In the field of airflow, a typical career path may progress as follows:

  • Junior Airflow Developer
  • Airflow Developer
  • Senior Airflow Developer
  • Airflow Tech Lead

Related Skills

In addition to airflow expertise, professionals in this field are often expected to have or develop skills in:

  • Python programming
  • ETL concepts
  • Database management (SQL)
  • Cloud platforms (AWS, GCP)
  • Data warehousing

These skills rarely appear in isolation in real pipeline work; the short sketch below shows how a few of them combine in a typical transform step.
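As a purely illustrative example (the function name, database path, and table names below are invented for this sketch, not taken from any listing above), here is the kind of small Python-plus-SQL transform that an Airflow task commonly wraps:

```python
# A minimal sketch of an ETL-style transform step using only the Python
# standard library; the database path and table names are hypothetical.
import sqlite3


def load_daily_totals(db_path: str = "warehouse.db") -> int:
    """Aggregate rows from a raw events table into a daily totals table.

    Returns the number of rows written. Assumes a raw_events table with
    an event_ts column already exists in the target database.
    """
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS daily_totals (day TEXT PRIMARY KEY, total INTEGER)"
        )
        cur = conn.execute(
            """
            INSERT OR REPLACE INTO daily_totals (day, total)
            SELECT date(event_ts), count(*)
            FROM raw_events
            GROUP BY date(event_ts)
            """
        )
        return cur.rowcount
```

In a production pipeline, a function like this would typically be registered as a single Airflow task (for example via `PythonOperator(task_id="load_daily_totals", python_callable=load_daily_totals)`), with a cloud warehouse such as BigQuery or Redshift standing in for the local database.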

Interview Questions

  • What is Apache Airflow? (basic)
  • Explain the key components of Airflow. (basic)
  • How do you schedule a DAG in Airflow? (basic)
  • What are the different operators in Airflow? (medium)
  • How do you monitor and troubleshoot DAGs in Airflow? (medium)
  • What is the difference between Airflow and other workflow management tools? (medium)
  • Explain the concept of XCom in Airflow. (medium)
  • How do you handle dependencies between tasks in Airflow? (medium)
  • What are the different types of sensors in Airflow? (medium)
  • What is a Celery Executor in Airflow? (advanced)
  • How do you scale Airflow for a high volume of tasks? (advanced)
  • Explain the concept of SubDAGs in Airflow. (advanced)
  • How do you handle task failures in Airflow? (advanced)
  • What is the purpose of a TriggerDagRun operator in Airflow? (advanced)
  • How do you secure Airflow connections and variables? (advanced)
  • Explain how to create a custom Airflow operator. (advanced)
  • How do you optimize the performance of Airflow DAGs? (advanced)
  • What are the best practices for version controlling Airflow DAGs? (advanced)
  • Describe a complex data pipeline you have built using Airflow. (advanced)
  • How do you handle backfilling in Airflow? (advanced)
  • Explain the concept of DAG serialization in Airflow. (advanced)
  • What are some common pitfalls to avoid when working with Airflow? (advanced)
  • How do you integrate Airflow with external systems or tools? (advanced)
  • Describe a challenging problem you faced while working with Airflow and how you resolved it. (advanced)
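To ground a few of the basic and medium questions above (scheduling a DAG, defining task dependencies, and passing data via XCom), here is a minimal sketch. It assumes Airflow 2.4+ with the TaskFlow API, and the DAG id, schedule, and task bodies are invented for illustration:

```python
# A minimal sketch assuming Airflow 2.4+ and the TaskFlow API; the DAG id,
# schedule, and task logic are illustrative only.
from datetime import datetime

from airflow.decorators import dag, task


@dag(
    dag_id="example_daily_etl",      # hypothetical name
    schedule="@daily",               # a cron string such as "0 6 * * *" also works
    start_date=datetime(2024, 1, 1),
    catchup=False,                   # skip backfilling runs from before deployment
)
def example_daily_etl():
    @task
    def extract():
        # The return value is pushed to XCom automatically by TaskFlow.
        return [1, 2, 3]

    @task
    def load(rows):
        # The argument is pulled from XCom behind the scenes.
        print(f"loaded {len(rows)} rows")

    # Passing extract()'s output into load() creates the extract >> load dependency.
    load(extract())


example_daily_etl()
```

In classic operator-style DAGs the same wiring is written explicitly: dependencies with `extract >> load`, and XCom reads with `ti.xcom_pull(task_ids="extract")`.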

Closing Remark

As you explore job opportunities in the airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay updated with the latest trends in airflow, and demonstrate your problem-solving abilities to stand out in the competitive job market. Good luck!
