
2646 Airflow Jobs - Page 27

JobPe aggregates listings for easy access; you apply directly on the original job portal.

10.0 - 15.0 years

3 - 13 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

Your Day-to-Day
Provide technical leadership and guidance to teams of software engineers, fostering a culture of collaboration, innovation, and continuous improvement.
Establish outcomes and key results (OKRs) and deliver them successfully.
Drive improvements in key performance indicators (KPIs).
Increase the productivity and velocity of delivery teams.
Develop, plan, and execute engineering roadmaps that bring value and quality to our customers.
Collaborate and coordinate across teams and functions to ensure technical, product, and business objectives are met.
Instill end-to-end ownership of the products, projects, features, modules, and services that you and your team deliver, across all phases of the software development lifecycle.

What do you need to bring
10+ years of experience in the software industry, with 3+ years of professional experience leading software development teams.
Strong critical thinking and problem-solving skills, with the ability to address complex technical and non-technical challenges.
Experience building and developing engineering teams that exhibit strong ownership, user empathy, and engineering excellence.
Proven track record of delivering high-quality systems and software in Big Data technologies including Spark, Airflow, Hive, etc., with practical exposure to integrating machine learning workflows into data pipelines.
Proven track record of delivering high-quality systems and software in Java/J2EE technologies and distributed systems, with experience deploying ML models into production at scale using REST APIs, streaming platforms, or batch inference.
Excellent communication skills, with the ability to collaborate effectively with cross-functional teams (including data scientists and ML engineers) and manage stakeholder expectations.
Ability to coach and mentor talent to reach their full potential, including guiding teams in adopting MLOps best practices and understanding the AI model lifecycle.
Experience building large-scale, high-throughput, low-latency systems, including real-time data processing systems that support personalization, anomaly detection, or predictive analytics.
Strong understanding of software development methodologies, modern technology topics and frameworks, and developer operations best practices.
Experience with ML platforms (e.g., Kubeflow, MLflow) and familiarity with model monitoring, feature engineering, and data versioning tools is a plus.
Provide leadership to others, particularly junior engineers working on the same team or related product features.
Proven experience delivering complex software projects and solutions effectively through Agile methodologies on a regular release cadence.
Strong verbal and written communication skills.
Strong customer focus, ownership, urgency and drive.

Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

Location: Bangalore, Karnataka, 560048
Category: Engineering / Information Technology
Job Type: Full time
Job Id: 1180663

Automation NoSQL Data Engineer

This role has been designed as 'Onsite' with an expectation that you will primarily work from an HPE partner/customer office.

Who We Are:
Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. Our culture thrives on finding new and better ways to accelerate what's next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.

Job Description:
HPE Operations is our innovative IT services organization. It provides the expertise to advise, integrate, and accelerate our customers' outcomes from their digital transformation. Our teams collaborate to transform insight into innovation. In today's fast-paced, hybrid IT world, being at business speed means overcoming IT complexity to match the speed of actions to the speed of opportunities. Deploy the right technology to respond quickly to market possibilities. Join us and redefine what's next for you.

What you will do:
Think through complex data engineering problems in a fast-paced environment and drive solutions to reality.
Work in a dynamic, collaborative environment to build DevOps-centered data solutions using the latest technologies and tools.
Provide engineering-level support for data tools and systems deployed in customer environments.
Respond quickly and professionally to customer emails/requests for assistance.

What you need to bring:
Bachelor's degree in Computer Science, Information Systems, or equivalent.
7+ years of demonstrated experience working in software development teams with a strong focus on NoSQL databases and distributed data systems.
Strong experience in automated deployment, troubleshooting, and fine-tuning of technologies such as Apache Cassandra, ClickHouse, MongoDB, Apache Spark, Apache Flink, Apache Airflow, and similar technologies.

Technical Skills:
Strong knowledge of NoSQL databases such as Apache Cassandra, ClickHouse, and MongoDB, including their installation, configuration, and performance tuning in production environments.
Expertise in deploying and managing real-time data processing pipelines using Apache Spark, Apache Flink, and Apache Airflow.
Experience in deploying and managing Apache Spark and Apache Flink operators on Kubernetes and other containerized environments, ensuring high availability and scalability of data processing jobs.
Hands-on experience in configuring and optimizing Apache Spark and Apache Flink clusters, including fine-tuning resource allocation, fault tolerance, and job execution.
Proficiency in authoring, automating, and optimizing Apache Airflow DAGs for orchestrating complex data workflows across Spark and Flink jobs.
Strong experience with container orchestration platforms (such as Kubernetes) to deploy and manage Spark/Flink operators and data pipelines.
Proficiency in creating, managing, and optimizing Airflow DAGs to automate data pipeline workflows and handle retries, task dependencies, and scheduling.
Solid experience in troubleshooting and optimizing performance in distributed data systems.
Expertise in automated deployment and infrastructure management using tools such as Terraform, Chef, Ansible, Kubernetes, or similar technologies.
Experience with CI/CD pipelines using tools like Jenkins, GitLab CI, Bamboo, or similar.
Strong knowledge of scripting languages such as Python, Bash, or Go for automation, provisioning Platform-as-a-Service, and workflow orchestration.

Additional Skills:
Accountability, Active Learning (Inactive), Active Listening, Bias, Business Growth, Client Expectations Management, Coaching, Creativity, Critical Thinking, Cross-Functional Teamwork, Customer Centric Solutions, Customer Relationship Management (CRM), Design Thinking, Empathy, Follow-Through, Growth Mindset, Information Technology (IT) Infrastructure, Infrastructure as a Service (IaaS), Intellectual Curiosity (Inactive), Long Term Planning, Managing Ambiguity, Process Improvements, Product Services, Relationship Building {+ 5 more}

What We Can Offer You:
Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial and emotional wellbeing.
Personal & Professional Development: We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have — whether you want to become a knowledge expert in your field or apply your skills to another division.
Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good.

Let's Stay Connected:
Follow @HPECareers on Instagram to see the latest on people, culture and tech at HPE.
#india #operations
Job: Services
Job Level: TCP_03

HPE is an Equal Employment Opportunity/Veterans/Disabled/LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran/Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
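The listing above puts particular weight on authoring Airflow DAGs that orchestrate Spark jobs with retries, task dependencies, and scheduling. As a rough, hedged illustration only (not HPE's actual pipeline), a minimal DAG could look like the sketch below; the DAG id, file paths, and connection name are hypothetical, and it assumes the apache-airflow-providers-apache-spark package for SparkSubmitOperator.

```python
# Hypothetical sketch: an Airflow DAG that submits a Spark job and then runs a
# downstream quality check, with retries and a daily schedule. All names are
# illustrative and not taken from the job posting.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

default_args = {
    "owner": "data-eng",
    "retries": 3,                          # automatic retries on task failure
    "retry_delay": timedelta(minutes=10),
}

def check_row_counts(**context):
    # Placeholder reconciliation step; a real check would query the target store.
    print("validating Spark output for run date", context["ds"])

with DAG(
    dag_id="events_daily_aggregation",      # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    aggregate = SparkSubmitOperator(
        task_id="aggregate_events",
        application="/opt/jobs/aggregate_events.py",   # hypothetical path
        conn_id="spark_default",
        conf={"spark.executor.memory": "4g"},          # example resource tuning
    )
    validate = PythonOperator(task_id="validate_output", python_callable=check_row_counts)

    aggregate >> validate                              # task dependency
```

The same DAG pattern would extend to Flink or Kubernetes-hosted jobs by swapping the operator; the retry and scheduling settings are where most of the tuning mentioned in the posting happens.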

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

Category: Software Development/Engineering
Main location: India, Karnataka, Bangalore
Position ID: J0525-0430
Employment Type: Full Time

Position Description:

Company Profile:
At CGI, we're a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com.
This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals, please.

Job Title: Senior Software Engineer
Position: Senior Software Engineer - Node, AWS and Terraform
Experience: 5-8 Years
Category: Software Development/Engineering
Main location: Hyderabad/Chennai/Bangalore
Position ID: J0525-0430
Employment Type: Full Time

Responsibilities:
Design, develop, and maintain robust and scalable server-side applications using Node.js and JavaScript/TypeScript.
Develop and consume RESTful APIs and integrate with third-party services.
In-depth knowledge of the AWS cloud, including familiarity with services such as S3, Lambda, DynamoDB, Glue, Apache Airflow, SQS, SNS, ECS, Step Functions, EMR, EKS (Elastic Kubernetes Service), Key Management Service, and Elastic MapReduce.
Hands-on experience with Terraform.
Specialization in designing and developing fully automated end-to-end data processing pipelines for large-scale data ingestion, curation, and transformation.
Experience in deploying Spark-based ingestion frameworks, testing automation tools, and CI/CD pipelines.
Knowledge of unit testing frameworks and best practices.
Working experience in databases - SQL and NoSQL (preferred) - including joins, aggregations, window functions, date functions, partitions, indexing, and performance improvement ideas.
Experience with database systems such as Oracle, MySQL, PostgreSQL, MongoDB, or other NoSQL databases.
Familiarity with ORM/ODM libraries (e.g., Sequelize, Mongoose).
Proficiency in using Git for version control.
Understanding of testing frameworks (e.g., Jest, Mocha, Chai) and writing unit and integration tests.
Collaborate with front-end developers to integrate user-facing elements with server-side logic.
Design and implement efficient database schemas and ensure data integrity.
Write clean, well-documented, and testable code.
Participate in code reviews to ensure code quality and adherence to coding standards.
Troubleshoot and debug issues in development and production environments.
Knowledge of security best practices for web applications (authentication, authorization, data validation).
Strong communication and collaboration skills.
Effective communication skills to interact with technical and non-technical stakeholders.

CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodations for people with disabilities in accordance with provincial legislation. Please let us know if you require a reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs.

Skills:
Node.js
RESTful (Rest APIs)
Terraform

What you can expect from us:
Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

Remote

Linkedin logo

IMEA (India, Middle East, Africa) | India | LIXIL INDIA PVT LTD | Employee Assignment | Fully remote possible | Full Time | 1 May 2025

Title: Senior Data Engineer

Job Description
A Data Engineer is responsible for designing, building, and maintaining large-scale data systems and infrastructure. Their primary goal is to ensure that data is properly collected, stored, processed, and retrieved to support business intelligence, analytics, and data-driven decision-making.

Key Responsibilities
Design and Develop Data Pipelines: Create data pipelines to extract data from various sources, transform it into a standardized format, and load it into a centralized data repository.
Build and Maintain Data Infrastructure: Design, implement, and manage data warehouses, data lakes, and other data storage solutions.
Ensure Data Quality and Integrity: Develop data validation, cleansing, and normalization processes to ensure data accuracy and consistency.
Collaborate with Data Analysts and Business Process Owners: Work with data analysts and business process owners to understand their data requirements and provide data support for their projects.
Optimize Data Systems for Performance: Continuously monitor and optimize data systems for performance, scalability, and reliability.
Develop and Maintain Data Governance Policies: Create and enforce data governance policies to ensure data security, compliance, and adherence to regulatory requirements.

Experience & Skills
Hands-on experience in implementing, supporting, and administering modern cloud-based data solutions (Google BigQuery, AWS Redshift, Azure Synapse, Snowflake, etc.).
Strong programming skills in SQL, Java, and Python.
Experience in configuring and managing data pipelines using Apache Airflow, Informatica, Talend, SAP BODS, or API-based extraction.
Expertise in real-time data processing frameworks.
Strong understanding of Git and CI/CD for automated deployment and version control.
Experience with Infrastructure-as-Code tools like Terraform for cloud resource management.
Good stakeholder management skills to collaborate effectively across teams.
Solid understanding of SAP ERP data and processes to integrate enterprise data sources.
Exposure to data visualization and front-end tools (Tableau, Looker, etc.).
Strong command of English with excellent communication skills.
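The core of this role is extract-transform-load into a cloud warehouse; BigQuery is one of the platforms named in the listing. Purely as a hedged sketch (the project, dataset, and table names below are invented, and the actual LIXIL stack may differ), a pandas-to-BigQuery load using the google-cloud-bigquery client could look like this:

```python
# Illustrative only: extract records, standardize them with pandas, and load the
# result into BigQuery. Project, dataset, and table names are hypothetical.
import pandas as pd
from google.cloud import bigquery

def extract() -> pd.DataFrame:
    # Stand-in for pulling from a source system (API, SAP extract, flat file, ...).
    return pd.DataFrame(
        {"order_id": [1, 2], "amount": ["10.5", "7.25"], "country": ["in", "de"]}
    )

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Standardize types and values before loading.
    df = df.copy()
    df["amount"] = df["amount"].astype(float)
    df["country"] = df["country"].str.upper()
    return df

def load(df: pd.DataFrame, table_id: str) -> None:
    client = bigquery.Client()  # uses application-default credentials
    job = client.load_table_from_dataframe(
        df,
        table_id,
        job_config=bigquery.LoadJobConfig(write_disposition="WRITE_APPEND"),
    )
    job.result()  # wait for the load job to finish

if __name__ == "__main__":
    load(transform(extract()), "my-project.analytics.orders")  # hypothetical target
```

In a production pipeline the three functions would typically become tasks in an orchestrator such as Airflow, with the governance checks described above applied between transform and load.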

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

Remote

Linkedin logo

IMEA (India, Middle East, Africa) | India | LIXIL INDIA PVT LTD | Employee Assignment | Fully remote possible | Full Time | 1 May 2025

Title: Data Engineer

Job Description
A Data Engineer is responsible for designing, building, and maintaining large-scale data systems and infrastructure. Their primary goal is to ensure that data is properly collected, stored, processed, and retrieved to support business intelligence, analytics, and data-driven decision-making.

Key Responsibilities
Design and Develop Data Pipelines: Create data pipelines to extract data from various sources, transform it into a standardized format, and load it into a centralized data repository.
Build and Maintain Data Infrastructure: Design, implement, and manage data warehouses, data lakes, and other data storage solutions.
Ensure Data Quality and Integrity: Develop data validation, cleansing, and normalization processes to ensure data accuracy and consistency.
Collaborate with Data Analysts and Business Process Owners: Work with data analysts and business process owners to understand their data requirements and provide data support for their projects.
Optimize Data Systems for Performance: Continuously monitor and optimize data systems for performance, scalability, and reliability.
Develop and Maintain Data Governance Policies: Create and enforce data governance policies to ensure data security, compliance, and adherence to regulatory requirements.

Experience & Skills
Hands-on experience in implementing, supporting, and administering modern cloud-based data solutions (Google BigQuery, AWS Redshift, Azure Synapse, Snowflake, etc.).
Strong programming skills in SQL, Java, and Python.
Experience in configuring and managing data pipelines using Apache Airflow, Informatica, Talend, SAP BODS, or API-based extraction.
Expertise in real-time data processing frameworks.
Strong understanding of Git and CI/CD for automated deployment and version control.
Experience with Infrastructure-as-Code tools like Terraform for cloud resource management.
Good stakeholder management skills to collaborate effectively across teams.
Solid understanding of SAP ERP data and processes to integrate enterprise data sources.
Exposure to data visualization and front-end tools (Tableau, Looker, etc.).
Strong command of English with excellent communication skills.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Description
We are seeking a highly skilled C++/Python Developer with a strong background in software development, scripting, and EDA tool integration. This role focuses on creating, enhancing, and maintaining tools used in silicon design and verification environments.

Required Skills & Experience
3+ years of hands-on experience in C++ software development.
2+ years of experience in Python scripting for automation or tool development.
Strong grasp of object-oriented design, data structures, and algorithms.
Hands-on experience with EDA tools (Synopsys, Cadence, Mentor Graphics) is a strong advantage.
Proficient in Unix/Linux environments, including shell scripting.
Solid understanding of the software development lifecycle (SDLC) and design patterns.
Strong debugging and profiling skills in both C++ and Python.
Experience in unit testing and test automation frameworks (e.g., Google Test, PyTest).
Knowledge of build systems (e.g., Make, CMake, SCons).
Familiarity with code quality tools such as linters, static analysis, and formatters.
Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications
Experience developing tools/scripts for chip design, EDA automation, or verification environments.
Exposure to hardware description languages (HDLs) like Verilog or VHDL for tool integration.
Understanding of semiconductor design flows (RTL to GDSII).
Familiarity with version control systems (e.g., Git) and CI/CD pipelines.
Knowledge of database integration (e.g., SQLite, PostgreSQL) for storing tool output or metrics.
Experience with task automation frameworks like Airflow or Snakemake.
Exposure to RESTful APIs for tool interoperability.
Comfortable working in Agile/Scrum environments.
Ability to manage and prioritize multiple tasks in a fast-paced, collaborative setting.

Why Join Us?
Join a technically strong and collaborative global team.
Contribute to high-impact silicon and EDA automation projects.
Flexible work arrangements and learning opportunities.
(ref: hirist.tech)
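The role asks for unit testing with PyTest around tool and report automation. As a hedged sketch of what that typically looks like (the parser, its "name=value" report format, and the test names are invented for illustration, not taken from any real EDA flow):

```python
# report_parser.py -- hypothetical helper an EDA automation script might use:
# parse "name=value" lines from a tool report into a dictionary of metrics.
def parse_report(lines):
    metrics = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        metrics[key.strip()] = float(value)
    return metrics


# test_report_parser.py -- PyTest-style unit tests for the helper above.
import pytest
# from report_parser import parse_report  # when split across modules

def test_parse_report_extracts_metrics():
    lines = ["# timing report", "wns=-0.12", "tns = -3.4", ""]
    assert parse_report(lines) == {"wns": -0.12, "tns": -3.4}

def test_parse_report_rejects_bad_values():
    with pytest.raises(ValueError):
        parse_report(["slack=not_a_number"])
```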

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Linkedin logo

Job Title: Data Engineer

Job Description
We're Concentrix. A new breed of tech company — Human-centered. Tech-powered. Intelligence-fueled. We create game-changing solutions across the enterprise that help brands grow across the world and into the future. We are trusted by clients across all major sectors, from up-and-coming success stories to iconic Fortune Global 500 brands in over 70 countries spanning 6 continents.
Our game-changers: Challenge conventions. Deliver outcomes unimagined. Create experiences that go beyond WOW. If this is you, we would love to discuss career opportunities with you.
In our Information Technology and Global Security team, you will deliver the latest technology infrastructure, transformative software solutions and industry-leading global security for our staff and clients. You will work with the best in the world to design, implement and strategize IT, security, application development, innovation, and solutions in today's hyperconnected world. You will be part of the technology team that is core to our vision of developing, building and running the future of CX.
Concentrix provides eligible employees with an opportunity to enroll in many benefit programs, generally including private medical plans, a great compensation package, retirement savings plans, paid learning days, and flexible workplaces. Specific benefit plans will vary by country/region. We're a remote-first company looking for the absolute best talent in the world. Experience the power of a game-changing career.

Qualifications

Education & Experience
Bachelor's degree (preferred) in Computer Science, Engineering, or a related field (or equivalent experience).
4–6 years of hands-on data engineering experience, including designing and maintaining data pipelines in a production environment.

Technical Skills
Proficiency in Python and SQL, with experience using data pipeline orchestration tools (e.g., Dagster, Airflow, etc.).
Demonstrable experience with AWS (S3, EC2, Lambda, Glue) and Snowflake.
Familiarity with dbt for data transformations and modeling.
Understanding of ETL/ELT best practices and data warehousing concepts.
Experience using source control systems and CI/CD pipelines.

Soft Skills & Leadership
Effective communication skills, with the ability to translate technical concepts for non-technical stakeholders.
Experience mentoring junior engineers or contributing to team knowledge sharing.
Strong problem-solving and analytical thinking skills.

Location: IND Hyderabad Hitech Work-at-Home
Language Requirements:
Time Type: Full time

If you are a California resident, by submitting your information, you acknowledge that you have read and have access to the Job Applicant Privacy Notice for California Residents. R1614856
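Dagster is one of the orchestration tools named above. As a loosely hedged sketch of its software-defined asset style (asset names, the toy data, and the local materialization call are all illustrative assumptions about a recent Dagster release, not the team's actual pipeline):

```python
# Illustrative Dagster sketch: two software-defined assets, where the second
# depends on the first by parameter name. All names and data are hypothetical.
import pandas as pd
from dagster import asset, materialize

@asset
def raw_orders() -> pd.DataFrame:
    # Stand-in for an extract from S3, an API, or a database.
    return pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, None, 7.5]})

@asset
def clean_orders(raw_orders: pd.DataFrame) -> pd.DataFrame:
    # Basic ELT-style cleanup before the data lands in the warehouse.
    return raw_orders.dropna(subset=["amount"])

if __name__ == "__main__":
    # Local materialization for development and testing.
    result = materialize([raw_orders, clean_orders])
    assert result.success
```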

Posted 1 week ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Linkedin logo

Role: Data QA Lead
Experience Required: 8+ Years
Location: India/Remote

Company Overview
At Codvo.ai, software and people transformations go hand-in-hand. We are a global empathy-led technology services company. Product innovation and mature software engineering are part of our core DNA. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day. We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results.

The Data Quality Analyst is responsible for ensuring the quality, accuracy, and consistency of data within the Customer and Loan Master Data API solution. This role will work closely with data owners, data modelers, and developers to identify and resolve data quality issues.

Key Responsibilities
Lead and manage end-to-end ETL/data validation activities.
Design test strategy, plans, and scenarios for source-to-target validation.
Build automated data validation frameworks (SQL/Python/Great Expectations).
Integrate tests with CI/CD pipelines (Jenkins, Azure DevOps).
Perform data integrity, transformation logic, and reconciliation checks.
Collaborate with Data Engineering, Product, and DevOps teams.
Drive test metrics reporting, defect triage, and root cause analysis.
Mentor QA team members and ensure process adherence.

Must-Have Skills
8+ years in QA, with 4+ years in ETL testing.
Strong SQL and database testing experience.
Proficiency with ETL tools (Airbyte, DBT, Informatica, etc.).
Automation using Python or a similar scripting language.
Solid understanding of data warehousing, SCD, and deduplication.
Experience with large datasets and structured/unstructured formats.

Preferred Skills
Knowledge of data orchestration tools (Prefect, Airflow).
Familiarity with data quality/observability tools.
Experience with big data systems (Spark, Hive).
Hands-on with test data generation (Faker, Mockaroo).
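The heart of this role is source-to-target reconciliation. Frameworks such as Great Expectations package these rules, but as a hedged, framework-free sketch of the kinds of checks involved (the function, column names, and sample data are invented for illustration):

```python
# Illustrative reconciliation checks between a source extract and a target load,
# written with plain pandas. In practice such rules might live in Great
# Expectations or a custom framework and run from a CI/CD pipeline.
import pandas as pd

def validate(source: pd.DataFrame, target: pd.DataFrame, key: str) -> list[str]:
    failures = []
    if len(source) != len(target):
        failures.append(f"row count mismatch: {len(source)} vs {len(target)}")
    if target[key].isna().any():
        failures.append(f"null keys found in target column '{key}'")
    if target[key].duplicated().any():
        failures.append(f"duplicate keys found in target column '{key}'")
    missing = set(source[key]) - set(target[key])
    if missing:
        failures.append(f"{len(missing)} source keys missing from target")
    return failures

if __name__ == "__main__":
    src = pd.DataFrame({"customer_id": [1, 2, 3]})
    tgt = pd.DataFrame({"customer_id": [1, 2, 2]})
    for problem in validate(src, tgt, "customer_id"):
        print("FAIL:", problem)   # feed these into defect triage / test reports
```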

Posted 1 week ago

Apply

0.0 - 3.0 years

0 Lacs

India

On-site

Linkedin logo

Description
GroundTruth is an advertising platform that turns real-world behavior into marketing that drives in-store visits and other real business results. We use observed real-world consumer behavior, including location and purchase data, to create targeted advertising campaigns across all screens, measure how consumers respond, and uncover unique insights to help optimize ongoing and future marketing efforts. With this focus on media, measurement, and insights, we provide marketers with tools to deliver media campaigns that drive measurable impact, such as in-store visits, sales, and more. Learn more at groundtruth.com.
We believe that innovative technology starts with the best talent and have been ranked one of Ad Age's Best Places to Work in 2021, 2022, 2023 & 2025! Learn more about the perks of joining our team here.

About Team
GroundTruth seeks an Associate Software Engineer to join our Reporting team. The Reporting Team at GroundTruth is responsible for designing, building, and maintaining data pipelines and dashboards that deliver actionable insights. We ensure accurate and timely reporting to drive data-driven decisions for advertisers and publishers. We take pride in building an Engineering Team composed of strong communicators who collaborate with multiple business and engineering stakeholders to find compromises and solutions. Our engineers are organised and detail-oriented team players who are problem solvers with a maker mindset. As an Associate Software Engineer (ASE) on our Integration Team, you will build solutions that add new capabilities to our platform.

You Will
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
Lead engineering efforts across multiple software components.
Write excellent production code and tests, and help others improve in code reviews.
Analyse high-level requirements to design, document, estimate, and build systems.
Continuously improve the team's practices in code quality, reliability, performance, testing, automation, logging, monitoring, alerting, and build processes.

You Have
B.Tech./B.E./M.Tech./MCA or equivalent in computer science.
0-3 years of experience in Data Engineering.
Experience with the AWS stack used for data engineering: EC2, S3, Athena, Redshift, EMR, ECS, Lambda, and Step Functions.
Experience in MapReduce, Spark, and Glue.
Hands-on experience with Java/Python for the orchestration of data pipelines and data engineering tasks.
Experience in writing analytical queries using SQL.
Experience in Airflow.
Experience in Docker.
Proficient in Git.

How can you impress us?
Knowledge of REST APIs.
The following skills/certifications: Python, SQL/MySQL, AWS, Git.
Additional nice-to-have skills/certifications: Flask, FastAPI.
Knowledge of shell scripting.
Experience with BI tools like Looker.
Experience with DB maintenance.
Experience with Amazon Web Services and Docker.
Configuration management and QA practices.

Benefits
At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
Parental leave - Maternity and Paternity.
Flexible Time Off (Earned Leaves, Sick Leaves, Birthday Leave, Bereavement Leave & Company Holidays).
In-office daily catered breakfast, lunch, snacks and beverages.
Health cover for any hospitalization; covers both nuclear family and parents.
Tele-med for free doctor consultation, discounts on health checkups and medicines.
Wellness/Gym reimbursement.
Pet expense reimbursement.
Childcare expenses and reimbursements.
Employee referral program.
Education reimbursement program.
Skill development program.
Cell phone reimbursement (Mobile Subsidy program).
Internet reimbursement/postpaid cell phone bill/or both.
Birthday treat reimbursement.
Employee Provident Fund Scheme offering different tax-saving options such as Voluntary Provident Fund and employee and employer contribution up to 12% of basic.
Creche reimbursement.
Co-working space reimbursement.
National Pension System employer match.
Meal card for tax benefit.
Special benefits on salary account.
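This reporting role combines Spark on AWS with analytical SQL over campaign data. Purely as a hedged sketch (bucket paths, column names, and the metric definitions are hypothetical, not GroundTruth's schema), a PySpark job that builds a small daily reporting table might look like:

```python
# Illustrative PySpark job: read raw events from S3, aggregate per campaign/day,
# and write a reporting table back to S3. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_campaign_report").getOrCreate()

events = spark.read.parquet("s3://example-bucket/raw/events/")   # hypothetical path

report = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("campaign_id", "event_date")
    .agg(
        F.count("*").alias("impressions"),
        F.countDistinct("user_id").alias("unique_users"),
        F.sum(F.when(F.col("event_type") == "visit", 1).otherwise(0)).alias("store_visits"),
    )
)

(report.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/reporting/daily_campaign/"))
```

A job like this would typically be scheduled from Airflow and run on EMR or Glue, matching the stack listed above.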

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Join us as a Support Analyst at Barclays, responsible for supporting the successful delivery of Location Strategy projects to plan, budget, agreed quality and governance standards. You'll spearhead the evolution of our digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences.

To be successful as a Support Analyst you should have experience with:
Bachelor's degree in Computers/IT or equivalent.
ITIL process awareness; a support background is preferred.
Good knowledge of the AWS data analytics technology stack (Glue, S3, Lambda, Lake Formation, Athena).
Experience in using orchestration tools such as Apache Airflow or Snowflake Tasks.
Hands-on experience in maintaining and supporting applications on AWS Cloud.
Hands-on experience in PySpark, DataFrames, RDDs and Spark SQL.
Experience in UNIX and shell scripting.
Experience in analysing SQL.
Exposure to data governance or lineage tools such as Immuta and Alation is an added advantage.

Additional Skills
Exposure to ETL tools, with real-time and large data volume handling and processing.
Experience in supporting critical services, with escalation matrix handling and customer communication.
Knowledge of the Ab Initio ETL tool is a plus.
Exposure to automation and tooling.

You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role
To effectively monitor and maintain the bank's critical technology infrastructure and resolve more complex technical issues, whilst minimising disruption to operations.

Accountabilities
Provision of technical support for the service management function to resolve more complex issues for a specific client or group of clients.
Develop the support model and service offering to improve the service to customers and stakeholders.
Execution of preventative maintenance tasks on hardware and software and utilisation of monitoring tools/metrics to identify, prevent and address potential issues and ensure optimal performance.
Maintenance of a knowledge base containing detailed documentation of resolved cases for future reference, self-service opportunities and knowledge sharing.
Analysis of system logs, error messages and user reports to identify the root causes of hardware, software and network issues, and providing a resolution to these issues by fixing or replacing faulty hardware components, reinstalling software, or applying configuration changes.
Automation, monitoring enhancements, capacity management, resiliency, business continuity management, front office specific support and stakeholder management.
Identification and remediation, or raising through the appropriate process, of potential service-impacting risks and issues.
Proactively assess support activities, implementing automations where appropriate to maintain stability and drive efficiency. Actively tune monitoring tools, thresholds, and alerting to ensure issues are known when they occur.

Analyst Expectations
To perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement.
Requires in-depth technical knowledge and experience in their assigned area of expertise, with a thorough understanding of the underlying principles and concepts within that area.
They lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR, for an individual contributor, they develop technical expertise in their work area, acting as an advisor where appropriate.
Will have an impact on the work of related teams within the area. Partner with other functions and business areas.
Takes responsibility for the end results of a team's operational processing and activities. Escalate breaches of policies/procedures appropriately. Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
Advise and influence decision making within own area of expertise. Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulation and codes of conduct.
Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation's products, services and processes within the function. Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation's sub-function.
Make evaluative judgements based on the analysis of factual information, paying attention to detail. Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
Guide and persuade team members and communicate complex/sensitive information. Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
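For support work on the AWS analytics stack listed above, running ad-hoc Athena SQL from Python is a common diagnostic step. As a hedged sketch using boto3 (database, table, region, and the S3 results location are invented placeholders):

```python
# Illustrative support/diagnostic snippet: run an Athena query with boto3 and
# poll until it finishes. Database, table, and output location are hypothetical.
import time
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM app_logs GROUP BY status",
    QueryExecutionContext={"Database": "ops_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = response["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:   # the first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
else:
    print("query ended in state:", state)
```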

Posted 1 week ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Description
Years of experience: 4 to 6 years
Location: Noida, Pune, Bangalore, Nagpur

Requirements
You have a minimum of 3+ years' experience, with hands-on practical experience in data integration, engineering and technological analytics.
You have a degree in a Science, Technology, Engineering, or Mathematics related discipline.
Excellent skills in SQL, Python, and distributed source control such as Git in an Agile-Scrum environment.
Experience with ETL pipelines and Airflow.
A strong understanding of dimensional modelling and data warehousing methodologies.
Can identify ways to improve data quality and reliability.
Can use data to discover different tasks for automation.
Is aligned with the latest data trends and ways to simplify data insights.
Is passionate about data and the insights that large data sets can provide.
Experience within the retail industry is a plus.

Job responsibilities
Working with our stakeholders to develop end-to-end cloud-based solutions with a heavy focus on applications and data.
Collaborate with BI/BA analysts, data scientists, data engineers, product managers and other stakeholders across the organization.
Ensure the delivery of reliable software and data pipelines using data engineering best practices, including secure automation, version control, continuous integration/delivery, and proper testing.
Take ownership of the product; you will significantly influence our strategy by helping define the next wave of data insights and system architecture.
A commitment to teamwork and excellent business and interpersonal skills are essential. You will be an essential part of our growing analytics and data insights team, and be responsible for our technological and architectural vision.

What we offer
Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you'll experience an inclusive culture of acceptance and belonging, where you'll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.
Learning and development. We are committed to your continuous learning and development. You'll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.
Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you'll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what's possible and bring new solutions to market. In the process, you'll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.
Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!
High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you're placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world's largest and most forward-thinking companies. Since 2000, we've been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.

Posted 1 week ago

Apply

5.0 - 9.0 years

13 - 22 Lacs

Hyderabad

Hybrid

Naukri logo

Key Responsibilities:
1. Design, build, and deploy new data pipelines within our Big Data ecosystems using StreamSets, Talend, Informatica BDM, etc. Document new and existing pipelines and datasets.
2. Design ETL/ELT data pipelines using StreamSets, Informatica or any other ETL processing engine. Familiarity with data pipelines, data lakes and modern data warehousing practices (virtual data warehouse, push-down analytics, etc.).
3. Expert-level programming skills in Python.
4. Expert-level programming skills in Spark.
5. Cloud-based infrastructure: GCP.
6. Experience with one of the ETL tools (Informatica, StreamSets) in creating complex parallel loads, cluster batch execution and dependency creation using jobs/topologies/workflows, etc.
7. Experience in SQL and conversion of SQL stored procedures into Informatica/StreamSets; strong exposure to working with web service origins/targets/processors/executors, XML/JSON sources and RESTful APIs.
8. Strong exposure to working with relational databases (DB2, Oracle & SQL Server), including complex SQL constructs and DDL generation.
9. Exposure to Apache Airflow for scheduling jobs.
10. Strong knowledge of big data architecture (HDFS), cluster installation, configuration, monitoring, cluster security, cluster resource management, maintenance, and performance tuning.
11. Create POCs to enable new workloads and technical capabilities on the platform.
12. Work with the platform and infrastructure engineers to implement these capabilities in production.
13. Manage workloads and enable workload optimization, including managing resource allocation and scheduling across multiple tenants to fulfill SLAs.
14. Participate in planning activities and data science, and perform activities to increase platform skills.

Key Requirements:
1. Minimum 6 years of experience in ETL/ELT technologies, preferably StreamSets/Informatica/Talend, etc.
2. Minimum of 6 years of hands-on experience with big data technologies, e.g. Hadoop, Spark, Hive.
3. Minimum 3+ years of experience on Spark.
4. Minimum 3 years of experience in cloud environments, preferably GCP.
5. Minimum of 2 years working in a Big Data service delivery (or equivalent) role, focusing on the following disciplines:
6. Any experience with NoSQL and graph databases.
7. Informatica or StreamSets data integration (ETL/ELT).
8. Exposure to role- and attribute-based access controls.
9. Hands-on experience with managing solutions deployed in the cloud, preferably on GCP.
10. Experience working in a global company; working in a DevOps model is a plus.
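The posting emphasizes pulling from REST/JSON web-service origins into pipelines. As a hedged sketch of that extraction step only (the endpoint, pagination parameters, and output file are invented, and the real flow would typically live inside StreamSets or an Airflow task):

```python
# Illustrative REST/JSON extraction: page through a hypothetical API, flatten the
# records with pandas, and persist them as parquet for a downstream pipeline stage.
import pandas as pd
import requests

BASE_URL = "https://api.example.com/v1/orders"   # hypothetical endpoint

def fetch_all(page_size: int = 100) -> list[dict]:
    records, page = [], 1
    while True:
        resp = requests.get(BASE_URL, params={"page": page, "size": page_size}, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("items", [])
        if not batch:
            break                      # stop when the API returns an empty page
        records.extend(batch)
        page += 1
    return records

if __name__ == "__main__":
    df = pd.json_normalize(fetch_all())          # flatten nested JSON fields
    df.to_parquet("orders_raw.parquet", index=False)
    print(f"extracted {len(df)} records")
```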

Posted 1 week ago

Apply

8.0 - 11.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Company Description

About Sopra Steria
Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion.

Job Description
The world is how we shape it.

Position: Snowflake - Senior Technical Lead
Experience: 8-11 years
Location: Noida/Bangalore
Education: B.E./B.Tech./MCA
Primary Skills: Snowflake, Snowpipe, SQL, Data Modelling, DV 2.0, Data Quality, AWS, Snowflake Security
Good-to-have Skills: Snowpark, Data Build Tool, Finance Domain

Experience with Snowflake-specific features: Snowpipe, Streams & Tasks, Secure Data Sharing.
Experience in data warehousing, with at least 2 years focused on Snowflake.
Hands-on expertise in SQL, Snowflake scripting (JavaScript UDFs), and Snowflake administration.
Proven experience with ETL/ELT tools (e.g., dbt, Informatica, Talend, Matillion) and orchestration frameworks.
Deep knowledge of data modeling techniques (star schema, data vault) and performance tuning.
Familiarity with data security, compliance requirements, and governance best practices.
Experience in Python, Scala, or Java for Snowpark development is good to have.
Strong understanding of cloud platforms (AWS, Azure, or GCP) and related services (S3, ADLS, IAM).

Key Responsibilities
Define data partitioning, clustering, and micro-partition strategies to optimize performance and cost.
Lead the implementation of ETL/ELT processes using Snowflake features (Streams, Tasks, Snowpipe).
Automate schema migrations, deployments, and pipeline orchestration (e.g., with dbt, Airflow, or Matillion).
Monitor query performance and resource utilization; tune warehouses, caching, and clustering.
Implement workload isolation (multi-cluster warehouses, resource monitors) for concurrent workloads.
Define and enforce role-based access control (RBAC), masking policies, and object tagging.
Ensure data encryption, compliance (e.g., GDPR, HIPAA), and audit logging are correctly configured.
Establish best practices for dimensional modeling, data vault architecture, and data quality.
Create and maintain data dictionaries, lineage documentation, and governance standards.
Partner with business analysts and data scientists to understand requirements and deliver analytics-ready datasets.
Stay current with Snowflake feature releases (e.g., Snowpark, Native Apps) and propose adoption strategies.
Contribute to the long-term data platform roadmap and cloud cost-optimization initiatives.

Qualifications
B.Tech/MCA

Additional Information
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
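Streams & Tasks are the Snowflake features the role leans on for incremental ELT. As a hedged sketch of how that is commonly wired up from Python with the snowflake-connector-python package (every object name, the warehouse, the schedule, and the credentials below are placeholders, not Sopra Steria's environment):

```python
# Illustrative only: create a stream on a staging table and a task that folds its
# changes into a target table every few minutes. Object names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholders; real deployments use SSO/secrets
    user="etl_user",
    password="...",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

statements = [
    "CREATE OR REPLACE STREAM orders_stream ON TABLE staging.orders",
    """
    CREATE OR REPLACE TASK load_orders
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('orders_stream')
    AS
      INSERT INTO core.orders
      SELECT order_id, amount, country FROM orders_stream
      WHERE METADATA$ACTION = 'INSERT'
    """,
    "ALTER TASK load_orders RESUME",   # tasks are created suspended
]

cur = conn.cursor()
for stmt in statements:
    cur.execute(stmt)
cur.close()
conn.close()
```

In practice the DDL would normally be versioned (dbt, schemachange, or similar) rather than issued ad hoc, which is what the "automate schema migrations" responsibility above refers to.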

Posted 1 week ago

Apply

4.0 - 8.0 years

12 - 18 Lacs

Hyderabad, Chennai, Coimbatore

Hybrid

Naukri logo

We are seeking a skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have experience in designing, developing, and maintaining scalable data pipelines and architectures using Hadoop, PySpark, ETL processes, and cloud technologies.

Responsibilities:
Design, develop, and maintain data pipelines for processing large-scale datasets.
Build efficient ETL workflows to transform and integrate data from multiple sources.
Develop and optimize Hadoop and PySpark applications for data processing.
Ensure data quality, governance, and security standards are met across systems.
Implement and manage cloud-based data solutions (AWS, Azure, or GCP).
Collaborate with data scientists and analysts to support business intelligence initiatives.
Troubleshoot performance issues and optimize query executions in big data environments.
Stay updated with industry trends and advancements in big data and cloud technologies.

Required Skills:
Strong programming skills in Python, Scala, or Java.
Hands-on experience with the Hadoop ecosystem (HDFS, Hive, Spark, etc.).
Expertise in PySpark for distributed data processing.
Proficiency in ETL tools and workflows (SSIS, Apache NiFi, or custom pipelines).
Experience with cloud platforms (AWS, Azure, GCP) and their data-related services.
Knowledge of SQL and NoSQL databases.
Familiarity with data warehousing concepts and data modeling techniques.
Strong analytical and problem-solving skills.

Interested candidates can reach us at +91 7305206696 / saranyadevib@talentien.com

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Delhi, India

On-site

Linkedin logo

What is Findem:
Findem is the only talent data platform that combines 3D data with AI. It automates and consolidates top-of-funnel activities across your entire talent ecosystem, bringing together sourcing, CRM, and analytics into one place. Only 3D data connects people and company data over time - making an individual's entire career instantly accessible in a single click, removing the guesswork, and unlocking insights about the market and your competition no one else can. Powered by 3D data, Findem's automated workflows across the talent lifecycle are the ultimate competitive advantage. Enabling talent teams to deliver continuous pipelines of top, diverse candidates while creating better talent experiences, Findem transforms the way companies plan, hire, and manage talent. Learn more at www.findem.ai

Experience: 5-9 years
Location: Delhi, India (hybrid, 3 days onsite)

We are looking for an experienced Big Data Engineer who will be responsible for building, deploying and managing various data pipelines, data lakes and big data processing solutions using big data and ETL technologies.

Responsibilities
Build data pipelines, big data processing solutions and data lake infrastructure using various big data and ETL technologies.
Assemble and process large, complex data sets that meet functional and non-functional business requirements.
ETL from a wide variety of sources such as MongoDB, S3, server-to-server, Kafka, etc., and processing using SQL and big data technologies.
Build analytical tools to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Build interactive and ad-hoc query self-serve tools for analytics use cases.
Build data models and data schemas for performance, scalability and functional requirements.
Build processes supporting data transformation, metadata, dependency and workflow management.
Research, experiment with and prototype new tools/technologies and make them successful.

Skill Requirements
Must have: strong in Python/Scala.
Must have experience in big data technologies like Spark, Hadoop, Athena/Presto, Redshift, Kafka, etc.
Experience in various file formats like Parquet, JSON, Avro, ORC, etc.
Experience in workflow management tools like Airflow.
Experience with batch processing, streaming and message queues.
Any of the visualization tools like Redash, Tableau, Kibana, etc.
Experience in working with structured and unstructured data sets.
Strong problem-solving skills.

Good to have
Exposure to NoSQL databases like MongoDB.
Exposure to cloud platforms like AWS, GCP, etc.
Exposure to microservices architecture.
Exposure to machine learning techniques.

The role is full-time and comes with full benefits. We are globally headquartered in the San Francisco Bay Area, with our India headquarters in Bengaluru.

Equal Opportunity
As an equal opportunity employer, we do not discriminate on the basis of race, color, religion, national origin, age, sex (including pregnancy), physical or mental disability, medical condition, genetic information, gender identity or expression, sexual orientation, marital status, protected veteran status or any other legally-protected characteristic.
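Streaming ingestion from Kafka with message queues is one of the listed requirements. As a hedged sketch using the kafka-python client (an assumption; the team could equally use confluent-kafka or Spark Structured Streaming, and the topic, broker, and batch size are invented):

```python
# Illustrative streaming consumer with kafka-python: read JSON events from a
# topic and micro-batch them for downstream processing. Names are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "user_events",                           # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="pipeline-ingest",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:
        # Stand-in for writing to the lake (e.g., parquet on S3) or a warehouse.
        print(f"flushing {len(batch)} events from partition {message.partition}")
        batch.clear()
```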

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

The Product Owner III will be responsible for defining and prioritizing features and user stories, outlining acceptance criteria, and collaborating with cross-functional teams to ensure successful delivery of product increments. This role requires strong communication skills to effectively engage with stakeholders, gather requirements, and facilitate product demos. The ideal candidate should have a deep understanding of agile methodologies, experience in the insurance sector, and the ability to translate complex needs into actionable tasks for the development team.

Key Responsibilities:
Define and communicate the vision, roadmap, and backlog for data products.
Manage team backlog items and prioritize based on business value.
Partner with the business owner to understand needs, manage scope, and add or eliminate user stories while heavily influencing an effective strategy.
Translate business requirements into scalable data product features.
Collaborate with data engineers, analysts, and business stakeholders to prioritize and deliver impactful solutions.
Champion data governance, privacy, and compliance best practices.
Act as the voice of the customer to ensure usability and adoption of data products.
Lead Agile ceremonies (e.g., backlog grooming, sprint planning, demos) and maintain a clear product backlog.
Monitor data product performance and continuously identify areas for improvement.
Support the integration of AI/ML solutions and advanced analytics into product offerings.

Required Skills & Experience:
Proven experience as a Product Owner, ideally in data or analytics domains.
Strong understanding of data engineering, data architecture, and cloud platforms (AWS, Azure, GCP).
Familiarity with SQL, data modeling, and modern data stack tools (e.g., Snowflake, dbt, Airflow).
Excellent stakeholder management and communication skills across technical and non-technical teams.
Strong business acumen and ability to align data products with strategic goals.
Experience with Agile/Scrum methodologies and working in cross-functional teams.
Ability to translate data insights into compelling stories and recommendations.

Posted 1 week ago

Apply

3.0 - 8.0 years

0 - 0 Lacs

Chennai

Hybrid

Naukri logo

You Lead the Way. We've Got Your Back.

With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you, with benefits, programs, and flexibility that support you personally and professionally.
At American Express, you'll be recognized for your contributions, leadership, and impact — every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong.
As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers' digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. Amex offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology on #TeamAmex.

How will you make an impact in this role?
Build the NextGen data strategy, data virtualization, and data lakes/warehousing.
Transform and improve performance of existing reporting and analytics use cases with more efficient, state-of-the-art data engineering solutions.
Drive analytics development to realize an advanced analytics vision and strategy in a scalable, iterative manner.
Deliver software that provides superior user experiences, linking customer needs and business drivers together through innovative product engineering.
Cultivate an environment of engineering excellence and continuous improvement, leading changes that drive efficiencies into existing engineering and delivery processes.
Own accountability for all quality aspects and metrics of the product portfolio, including system performance, platform availability, operational efficiency, risk management, information security, data management and cost effectiveness.
Work with key stakeholders to drive software solutions that align to strategic roadmaps, prioritized initiatives and strategic technology directions.
Work with peers, staff engineers and staff architects to assimilate new technology and delivery methods into scalable software solutions.

Minimum Qualifications:
Bachelor's degree in Computer Science, Computer Science Engineering, or a related field required; advanced degree preferred.
3-12 years of hands-on experience in implementing large data-warehousing projects, with strong knowledge of the latest NextGen BI and data strategy and BI tools.
Proven experience in Business Intelligence, reporting on large datasets, data virtualization tools, Big Data, GCP, Java, and microservices.
Strong systems integration architecture skills and a high degree of technical expertise across a number of technologies, with a proven track record of turning new technologies into business solutions.
Should be strong in at least one programming language: Python or Java.
Should have a good understanding of data structures.
GCP/cloud knowledge is an added advantage.
Good knowledge and understanding of Power BI, Tableau and Looker.
Outstanding influencing and collaboration skills; ability to drive consensus and tangible outcomes, demonstrated by breaking down silos and fostering cross-communication.
Experience managing in a fast-paced, complex, and dynamic global environment.

Preferred Qualifications:
Bachelor's degree in Computer Science, Computer Science Engineering, or a related field required; advanced degree preferred.
5+ years of hands-on experience in implementing large data-warehousing projects, with strong knowledge of the latest NextGen BI and data strategy and BI tools.
Proven experience in Business Intelligence, reporting on large datasets, Oracle Business Intelligence (OBIEE), Tableau, MicroStrategy, data virtualization tools, Oracle PL/SQL, Informatica, other ETL tools like Talend, and Java.
Should be strong in at least one programming language: Python or Java.
Should be good at data structures and reasoning.
GCP or other cloud knowledge is an added advantage.
Good knowledge and understanding of Power BI, Tableau and Looker.
Strong systems integration architecture skills and a high degree of technical expertise across several technologies, with a proven track record of turning new technologies into business solutions.
Outstanding influencing and collaboration skills; ability to drive consensus and tangible outcomes, demonstrated by breaking down silos and fostering cross-communication.

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Position Overview

We are looking for an experienced Lead Data Engineer to join our dynamic team. If you are passionate about building scalable software solutions and working collaboratively with cross-functional teams to define requirements and deliver solutions, we would love to hear from you. ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries by focusing on creating value through innovation.

Job Responsibilities: Develop and maintain data pipelines and ETL/ELT processes using Python. Design and implement scalable, high-performance applications. Work collaboratively with cross-functional teams to define requirements and deliver solutions. Develop and manage near real-time data streaming solutions using Pub/Sub or Beam. Contribute to code reviews, architecture discussions, and continuous improvement initiatives. Monitor and troubleshoot production systems to ensure reliability and performance.

Basic Qualifications: 5+ years of professional software development experience with Python. Strong understanding of software engineering best practices (testing, version control, CI/CD). Experience building and optimizing ETL/ELT processes and data pipelines. Proficiency with SQL and database concepts. Experience with data processing frameworks (e.g., Pandas). Understanding of software design patterns and architectural principles. Ability to write clean, well-documented, and maintainable code. Experience with unit testing and test automation. Experience working with any cloud provider (GCP is preferred). Experience with CI/CD pipelines and infrastructure as code. Experience with containerization technologies like Docker or Kubernetes. Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience). Proven track record of delivering complex software projects. Excellent problem-solving and analytical thinking skills. Strong communication skills and ability to work in a collaborative environment.

Preferred Qualifications: Experience with GCP services, particularly Cloud Run and Dataflow. Experience with stream processing technologies (Pub/Sub). Familiarity with big data technologies (Airflow). Experience with data visualization tools and libraries. Knowledge of CI/CD pipelines with GitLab and infrastructure as code with Terraform. Familiarity with platforms like Snowflake, BigQuery or Databricks. GCP Data Engineer certification.

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
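As a brief illustration of the near real-time streaming work this role describes, here is a minimal Apache Beam sketch that reads JSON messages from a Pub/Sub subscription and appends them to BigQuery. It is only a sketch: the project, subscription, and table names are placeholder assumptions, and a production pipeline would add schema handling, dead-lettering, and windowing.

```python
# Minimal sketch: stream Pub/Sub messages into BigQuery with Apache Beam.
# Project, subscription, and table names below are placeholders, not real resources.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions


def run():
    options = PipelineOptions()
    options.view_as(StandardOptions).streaming = True  # unbounded source

    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/orders-sub")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "KeepValid" >> beam.Filter(lambda row: row.get("order_id") is not None)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "my-project:analytics.orders",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
        )


if __name__ == "__main__":
    run()
```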

Posted 1 week ago

Apply

5.0 years

15 - 20 Lacs

Ahmedabad, Gujarat, India

On-site

Linkedin logo

Experience: 5.00+ years Salary: INR 1500000-2000000 / year (based on experience) Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Hybrid (Ahmedabad) Placement Type: Full time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Inferenz) What do you need for this opportunity? Must-have skills required: ML model deployment, MLOps, Monitoring.

Inferenz is looking for: Job Description: Position: Sr. MLOps Engineer. Location: Ahmedabad, Pune. Required Experience: 5+ years. Preferred: Immediate joiners.

Job Overview: Building the machine learning production infrastructure (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. We are looking for a highly skilled MLOps Engineer to join our team. As an MLOps Engineer, you will be responsible for designing, implementing, and maintaining the infrastructure that supports the deployment, monitoring, and scaling of machine learning models in production. You will work closely with data scientists, software engineers, and DevOps teams to ensure seamless integration of machine learning models into our production systems.

The job is NOT for you if: You don't want to build a career in AI/ML. Becoming an expert in this technology and staying current will require significant self-motivation. You like the comfort and predictability of working on the same problem or code base for years. The tools, best practices, architectures, and problems are all going through rapid change — you will be expected to learn new skills quickly and adapt.

Key Responsibilities: Model Deployment: Design and implement scalable, reliable, and secure pipelines for deploying machine learning models to production. Infrastructure Management: Develop and maintain infrastructure as code (IaC) for managing cloud resources, compute environments, and data storage. Monitoring and Optimization: Implement monitoring tools to track the performance of models in production, identify issues, and optimize performance. Collaboration: Work closely with data scientists to understand model requirements and ensure models are production-ready. Automation: Automate the end-to-end process of training, testing, deploying, and monitoring models. Continuous Integration/Continuous Deployment (CI/CD): Develop and maintain CI/CD pipelines for machine learning projects. Version Control: Implement model versioning to manage different iterations of machine learning models. Security and Governance: Ensure that the deployed models and data pipelines are secure and comply with industry regulations. Documentation: Create and maintain detailed documentation of all processes, tools, and infrastructure.

Qualifications: 5+ years of experience in a similar role (DevOps, DataOps, MLOps, etc.). Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes). Strong understanding of the machine learning lifecycle, data pipelines, and model serving. Proficiency in programming languages such as Python and shell scripting, and familiarity with ML frameworks (TensorFlow, PyTorch, etc.). Exposure to deep learning approaches and modeling frameworks (PyTorch, TensorFlow, Keras, etc.). Experience with CI/CD tools like Jenkins, GitLab CI, or similar. Experience building end-to-end systems as a Platform Engineer, ML DevOps Engineer, or Data Engineer (or equivalent). Strong software engineering skills in complex, multi-language systems. Comfort with Linux administration. Experience working with cloud computing and database systems. Experience building custom integrations between cloud-based systems using APIs. Experience developing and maintaining ML systems built with open-source tools. Experience developing with containers and Kubernetes in cloud computing environments. Familiarity with one or more data-oriented workflow orchestration frameworks (MLflow, Kubeflow, Airflow, Argo, etc.). Ability to translate business needs to technical requirements. Strong understanding of software testing, benchmarking, and continuous integration. Exposure to machine learning methodology and best practices. Understanding of regulatory requirements for data privacy and model governance.

Preferred Skills: Excellent problem-solving skills and ability to troubleshoot complex production issues. Strong communication skills and ability to collaborate with cross-functional teams. Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack). Knowledge of database systems (SQL, NoSQL). Experience with Generative AI frameworks. Preferred cloud-based or MLOps/DevOps certification (AWS, GCP, or Azure).

How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload an updated resume. Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
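For readers new to the MLOps tooling this posting lists, the following sketch shows one common pattern with MLflow: logging parameters and metrics for a training run and registering the resulting model so it can be versioned and later deployed. The tracking URI, experiment name, registered model name, and the toy scikit-learn model are illustrative assumptions, not part of the role.

```python
# Minimal sketch: track and register a model with MLflow.
# Tracking URI, experiment, and registered model names are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://localhost:5000")   # assumed local tracking server
mlflow.set_experiment("fraud-detection")

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)

    # Registering the model gives it an automatically incremented version,
    # which is one way to address the "model versioning" responsibility above.
    mlflow.sklearn.log_model(model, "model", registered_model_name="fraud-detector")
```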

Posted 1 week ago

Apply

12.0 - 15.0 years

35 - 50 Lacs

Hyderabad

Work from Office

Naukri logo

Skill: Java, Spark, Kafka. Experience: 10 to 16 years. Location: Hyderabad.

As a Data Engineer, you will: Support the design and rollout of the data architecture and infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources. Identify data sources, design and implement data schemas/models, and integrate data that meets the requirements of the business stakeholders. Play an active role in the end-to-end delivery of AI solutions, from ideation and feasibility assessment to data preparation and industrialization. Work with business, IT and data stakeholders to support them with data-related technical issues and their data infrastructure needs, as well as to build the most flexible and scalable data platform. With a strong focus on DataOps, design, develop and deploy scalable batch and/or real-time data pipelines. Design, document, test and deploy ETL/ELT processes. Find the right tradeoffs between the performance, reliability, scalability, and cost of the data pipelines you implement. Monitor data processing efficiency and propose solutions for improvements. Have the discipline to create and maintain comprehensive project documentation. Build and share knowledge with colleagues and coach junior profiles.
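The posting above is Java-oriented, but the Kafka-to-Spark ingestion pattern it describes looks much the same in any Spark API. Below is a minimal PySpark Structured Streaming sketch, with placeholder broker, topic, schema, and paths, that reads events from Kafka and lands them as Parquet with checkpointing.

```python
# Minimal sketch of the Kafka -> Spark ingestion pattern described above.
# Broker address, topic, schema, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    # Kafka delivers the payload as bytes in the `value` column.
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/lake/events")
    .option("checkpointLocation", "/data/checkpoints/events")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```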

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Chandigarh

Remote

Naukri logo

We are looking for a skilled and motivated Data Engineer with deep expertise in GCP, BigQuery, and Apache Airflow to join our data platform team. The ideal candidate should have hands-on experience building scalable data pipelines, automating workflows, migrating large-scale datasets, and optimizing distributed systems. The candidate should also have experience building web APIs using Python. This role will play a key part in designing and maintaining robust data engineering solutions across cloud and on-prem environments.

Key Responsibilities: BigQuery & Cloud Data Pipelines: Design and implement scalable ETL pipelines for ingesting large-scale datasets. Build solutions for efficient querying of tables in BigQuery. Automate scheduled data ingestion using Google Cloud services and scheduled Apache Airflow DAGs. Airflow DAG Development & Automation: Build dynamic and configurable DAGs using JSON-based input to be reused across multiple data processes. Create DAGs for data migration to/from BigQuery and external systems (SFTP, SharePoint, email, etc.). Develop custom Airflow operators to meet business needs. Data Security & Encryption: Build secure data pipelines with end-to-end encryption for external data exports and imports. Data Migration & Integration: Data migration and replication across various systems, including Salesforce, MySQL, SQL Server, and BigQuery.

Required Skills & Qualifications: Strong hands-on experience with Google BigQuery, Apache Airflow, and cloud storage (GCS/S3). Deep understanding of ETL/ELT concepts, data partitioning, and pipeline scheduling. Proven ability to automate complex workflows and build reusable pipeline frameworks. Programming knowledge in Python, SQL, and scripting for automation. Hands-on experience building web APIs/applications using Python. Familiarity with cloud platforms (GCP/AWS) and distributed computing frameworks. Strong problem-solving, analytical thinking, and debugging skills. Basic understanding of object-oriented fundamentals. Working knowledge of version control tools like GitLab and Bitbucket.

Industry Knowledge & Experience: Deep expertise in Google BigQuery, Apache Airflow, Python, and SQL. Experience with BI tools such as DOMO, Looker, and Tableau. Working knowledge of Salesforce and data extraction methods. Prior experience working with data encryption, SFTP automation, or ad-hoc data requests.
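To make the "dynamic, JSON-configurable DAG" idea above concrete, here is a minimal Airflow sketch that generates one BigQuery load task per entry in a config dictionary. The config, bucket paths, dataset, and table names are placeholders; it assumes Airflow 2.4+ (for the `schedule` argument) and the google-cloud-bigquery client library, and a real implementation would read the config from a file or an Airflow Variable and add the encryption and error handling the posting requires.

```python
# Minimal sketch of a JSON-configurable Airflow DAG that loads GCS files into BigQuery.
# The config dictionary, bucket, dataset, and table names are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery

# In practice this would come from a JSON file or an Airflow Variable.
CONFIG = {
    "dag_id": "gcs_to_bq_daily",
    "schedule": "0 2 * * *",
    "tables": [
        {"name": "orders", "uri": "gs://example-bucket/orders/*.csv"},
        {"name": "customers", "uri": "gs://example-bucket/customers/*.csv"},
    ],
}


def load_table(name: str, uri: str) -> None:
    """Load CSV files from GCS into a BigQuery table."""
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    )
    client.load_table_from_uri(uri, f"analytics.{name}", job_config=job_config).result()


with DAG(
    dag_id=CONFIG["dag_id"],
    schedule=CONFIG["schedule"],
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    # One load task per configured table, so adding a table is a config change only.
    for table in CONFIG["tables"]:
        PythonOperator(
            task_id=f"load_{table['name']}",
            python_callable=load_table,
            op_kwargs={"name": table["name"], "uri": table["uri"]},
        )
```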

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

On-site

Linkedin logo

Love turning raw data into powerful insights? Join us! We're partnering with global brands to unlock the full potential of their data. As a Data Engineer, you'll be at the heart of these transformations—building scalable data pipelines, optimizing data flows, and empowering analytics teams to make real-time, data-driven decisions. We are seeking a highly skilled Data Engineer with hands-on experience in Databricks to support data integration, pipeline development, and large-scale data processing for our retail or healthcare client. The ideal candidate will work closely with cross-functional teams to design robust data solutions that drive business intelligence and operational efficiency.

Key Responsibilities: Develop and maintain scalable data pipelines using Databricks and Spark. Build ETL/ELT workflows to support data ingestion, transformation, and validation. Collaborate with data scientists, analysts, and business stakeholders to gather data requirements. Optimize data processing workflows for performance and reliability. Manage structured and unstructured data across cloud-based data lakes and warehouses (e.g., Delta Lake, Snowflake, Azure Synapse). Ensure data quality and compliance with data governance standards.

Required Qualifications: 4+ years of experience as a Data Engineer. Strong expertise in Databricks, Apache Spark, and Delta Lake. Proficiency in Python, SQL, and data pipeline orchestration tools (e.g., Airflow, ADF). Experience with cloud platforms such as Azure, AWS, or GCP. Familiarity with data modeling, version control, and CI/CD practices. Experience in the retail or healthcare domain is a plus.

Benefits: Health insurance, accident insurance. The salary will be determined based on several factors including, but not limited to, location, relevant education, qualifications, experience, technical skills, and business needs.

Additional Responsibilities: Participate in OP monthly team meetings and team-building efforts. Contribute to OP technical discussions, peer reviews, etc. Contribute content and collaborate via the OP-Wiki/Knowledge Base. Provide status reports to OP Account Management as requested.

About Us: OP is a technology consulting and solutions company, offering advisory and managed services, innovative platforms, and staffing solutions across a wide range of fields — including AI, cyber security, enterprise architecture, and beyond. Our most valuable asset is our people: dynamic, creative thinkers who are passionate about doing quality work. As a member of the OP team, you will have access to industry-leading consulting practices, strategies, and technologies, as well as innovative training and education. An ideal OP team member is a technology leader with a proven track record of technical excellence and a strong focus on process and methodology.
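As an illustration of the Databricks/Delta Lake pipeline work described above, here is a minimal PySpark sketch that reads raw CSV files, applies basic cleansing, and writes a partitioned Delta table. The paths, column names, and partitioning choice are assumptions for illustration, and the example presumes a cluster where Delta Lake is available (as on Databricks).

```python
# Minimal sketch of a Delta Lake ingestion/transform step.
# Paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("claims-etl").getOrCreate()

raw = (
    spark.read
    .option("header", True)
    .csv("/mnt/raw/claims/2024-06/")
)

cleaned = (
    raw
    .withColumn("claim_date", to_date(col("claim_date"), "yyyy-MM-dd"))
    .filter(col("claim_amount").cast("double") > 0)
    .dropDuplicates(["claim_id"])
)

# Writing in Delta format gives an ACID table that downstream analysts can query.
(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("claim_date")
    .save("/mnt/curated/claims")
)
```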

Posted 1 week ago

Apply

2.0 - 3.0 years

6 - 7 Lacs

Pune

Work from Office

Naukri logo

Data Engineer Job Description: Jash Data Sciences: Letting Data Speak! Do you love solving real-world data problems with the latest and best techniques? And having fun while solving them in a team? Then come and join our high-energy team of passionate data people. Jash Data Sciences is the right place for you. We are a cutting-edge Data Sciences and Data Engineering startup based in Pune, India. We believe in continuous learning and evolving together. And we let the data speak!

What will you be doing? You will be discovering trends in the data sets and developing algorithms to transform raw data for further analytics. Create data pipelines to bring in data from various sources, with different formats, transform it, and finally load it to the target database. Implement ETL/ELT processes in the cloud using tools like Airflow, Glue, Stitch, Cloud Data Fusion, and Dataflow. Design and implement data lakes, data warehouses, and data marts in AWS, GCP, or Azure using Redshift, BigQuery, PostgreSQL, etc. Create efficient SQL queries and understand query execution plans for tuning queries on engines like PostgreSQL. Performance-tune OLAP/OLTP databases by creating indices, tables, and views. Write Python scripts for the orchestration of data pipelines. Have thoughtful discussions with customers to understand their data engineering requirements. Break complex requirements into smaller tasks for execution.

What do we need from you? Strong Python coding skills with basic knowledge of algorithms/data structures and their application. Strong understanding of data engineering concepts including ETL, ELT, data lakes, data warehousing, and data pipelines. Experience designing and implementing data lakes, data warehouses, and data marts that support terabytes of data. A track record of implementing data pipelines on public cloud environments (AWS/GCP/Azure) is highly desirable. A clear understanding of database concepts like indexing, query performance optimization, views, and various types of schemas. Hands-on SQL programming experience with knowledge of windowing functions, subqueries, and various types of joins. Experience working with big data technologies like PySpark/Hadoop. A good team player with the ability to communicate with clarity. Show us your git repo/blog!

Qualifications: 1-2 years of experience working on data engineering projects for Data Engineer I. 2-5 years of experience working on data engineering projects for Data Engineer II. 1-5 years of hands-on Python programming experience. A Bachelor's/Master's degree in Computer Science is good to have. Courses or certifications in the area of data engineering will be given higher preference. Candidates who have demonstrated a drive for learning and keeping up to date with technology through courses and self-learning will be given high preference.
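One small, concrete example of the query-tuning work mentioned above is inspecting a PostgreSQL execution plan from a Python script. The connection details, table, and query below are placeholders; EXPLAIN (ANALYZE, BUFFERS) reports per-node timings and buffer usage, which is usually the first step before adding an index or rewriting a join.

```python
# Minimal sketch: inspect a PostgreSQL query plan from Python.
# Connection details, table, and query are placeholders.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="warehouse", user="etl", password="secret")
try:
    with conn.cursor() as cur:
        # EXPLAIN ANALYZE runs the query and reports actual timings per plan node,
        # showing whether an index is used or a sequential scan dominates.
        cur.execute(
            """
            EXPLAIN (ANALYZE, BUFFERS)
            SELECT customer_id, SUM(amount) AS total
            FROM   orders
            WHERE  order_date >= %s
            GROUP  BY customer_id
            """,
            ("2024-01-01",),
        )
        for (line,) in cur.fetchall():
            print(line)
finally:
    conn.close()
```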

Posted 1 week ago

Apply

0.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Foundit logo

Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant - Sr. Snowflake Data Engineer (Snowflake + Python + Cloud)! In this role, the Sr. Snowflake Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal.

Job Description: Experience in the IT industry. Working experience building productionized data ingestion and processing pipelines in Snowflake. Strong understanding of Snowflake architecture. Well-versed in data warehousing concepts. Expertise and excellent understanding of Snowflake features and the integration of Snowflake with other data processing systems. Able to create data pipelines for ETL/ELT. Good to have dbt experience. Excellent presentation and communication skills, both written and verbal. Ability to problem-solve and architect in an environment with unclear requirements. Able to create high-level and low-level design documents based on requirements. Hands-on experience in configuration, troubleshooting, testing and managing data platforms, on premises or in the cloud. Awareness of data visualisation tools and methodologies. Work independently on business problems and generate meaningful insights. Good to have some experience/knowledge of Snowpark, Streamlit, or GenAI, but not mandatory. Should have experience implementing Snowflake best practices. Snowflake SnowPro Core Certification will be an added advantage.

Roles and Responsibilities: Requirement gathering, creating design documents, providing solutions to customers, working with offshore teams, etc. Writing SQL queries against Snowflake and developing scripts to extract, load, and transform data. Hands-on experience with Snowflake utilities such as SnowSQL, bulk copy, Snowpipe, Tasks, Streams, Time Travel, cloning, the optimizer, Metadata Manager, data sharing, stored procedures and UDFs, and Snowsight. Experience with the Snowflake cloud data warehouse and AWS S3 buckets or Azure Blob Storage containers for integrating data from multiple source systems. Should have some experience with AWS services (S3, Glue, Lambda) or Azure services (Blob Storage, ADLS Gen2, ADF). Should have good experience in Python/PySpark integration with Snowflake and cloud (AWS/Azure), with the ability to leverage cloud services for data processing and storage.

Proficiency in the Python programming language, including knowledge of data types, variables, functions, loops, conditionals, and other Python-specific concepts. Knowledge of ETL (Extract, Transform, Load) processes and tools, and ability to design and develop efficient ETL jobs using Python and PySpark. Should have some experience with Snowflake RBAC and data security. Should have good experience implementing CDC or SCD Type 2. Should have good experience implementing Snowflake best practices. In-depth understanding of data warehouse and ETL concepts and data modelling. Experience in requirement gathering, analysis, design, development, and deployment. Should have experience building data ingestion pipelines. Optimize and tune data pipelines for performance and scalability. Able to communicate with clients and lead a team. Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs. Good to have experience in deployment using CI/CD tools and experience with repositories like Azure Repos, GitHub, etc.

Qualifications we seek in you! Minimum qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience and relevant experience as a Senior Snowflake Data Engineer. Skill matrix: Snowflake, Python/PySpark, AWS/Azure, ETL concepts, data modeling and data warehousing concepts.

Why join Genpact? Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation. Make an impact - drive change for global enterprises and solve business challenges that matter. Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities. Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
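As a small illustration of the Snowflake-plus-Python work described above, the sketch below uses the snowflake-connector-python package to bulk-load staged files with COPY INTO. The account, credentials, warehouse, stage, and table names are placeholders; a production pipeline would typically wrap this in Airflow, use key-pair or SSO authentication rather than a password, and follow Snowflake best practices for role-based access.

```python
# Minimal sketch: bulk-load staged files into Snowflake from Python.
# Account, credentials, stage, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",
    user="etl_user",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Bulk-load CSV files already placed on an external stage
    # (e.g. an S3 bucket or Azure container registered as @ext_stage).
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @ext_stage/orders/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```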

Posted 1 week ago

Apply

Exploring Airflow Jobs in India

The airflow job market in India is rapidly growing as more companies are adopting data pipelines and workflow automation. Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with expertise in airflow can find lucrative opportunities in various industries such as technology, e-commerce, finance, and more.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Gurgaon

Average Salary Range

The average salary range for airflow professionals in India varies based on experience level:

  • Entry-level: INR 6-8 lakhs per annum
  • Mid-level: INR 10-15 lakhs per annum
  • Experienced: INR 18-25 lakhs per annum

Career Path

In the field of airflow, a typical career path may progress as follows:

  1. Junior Airflow Developer
  2. Airflow Developer
  3. Senior Airflow Developer
  4. Airflow Tech Lead

Related Skills

In addition to airflow expertise, professionals in this field are often expected to have or develop skills in:

  • Python programming
  • ETL concepts
  • Database management (SQL)
  • Cloud platforms (AWS, GCP)
  • Data warehousing

Interview Questions

  • What is Apache Airflow? (basic)
  • Explain the key components of Airflow. (basic)
  • How do you schedule a DAG in Airflow? (basic; a minimal sketch appears after this list)
  • What are the different operators in Airflow? (medium)
  • How do you monitor and troubleshoot DAGs in Airflow? (medium)
  • What is the difference between Airflow and other workflow management tools? (medium)
  • Explain the concept of XCom in Airflow. (medium)
  • How do you handle dependencies between tasks in Airflow? (medium)
  • What are the different types of sensors in Airflow? (medium)
  • What is a Celery Executor in Airflow? (advanced)
  • How do you scale Airflow for a high volume of tasks? (advanced)
  • Explain the concept of SubDAGs in Airflow. (advanced)
  • How do you handle task failures in Airflow? (advanced)
  • What is the purpose of a TriggerDagRun operator in Airflow? (advanced)
  • How do you secure Airflow connections and variables? (advanced)
  • Explain how to create a custom Airflow operator. (advanced)
  • How do you optimize the performance of Airflow DAGs? (advanced)
  • What are the best practices for version controlling Airflow DAGs? (advanced)
  • Describe a complex data pipeline you have built using Airflow. (advanced)
  • How do you handle backfilling in Airflow? (advanced)
  • Explain the concept of DAG serialization in Airflow. (advanced)
  • What are some common pitfalls to avoid when working with Airflow? (advanced)
  • How do you integrate Airflow with external systems or tools? (advanced)
  • Describe a challenging problem you faced while working with Airflow and how you resolved it. (advanced)
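
As referenced in the scheduling question above, here is a minimal Airflow 2.x sketch (2.4+ for the `schedule` argument) that ties together two of the basic topics from this list: scheduling a DAG with a cron preset and passing data between tasks with XCom. The DAG id, schedule, and task logic are illustrative only.

```python
# Minimal sketch: schedule a DAG and pass data between tasks with XCom.
# DAG id, schedule, and task logic are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    rows = [{"id": 1, "amount": 120.0}, {"id": 2, "amount": 75.5}]
    # Returning a value from a PythonOperator pushes it to XCom automatically.
    return rows


def load(**context):
    # Pull the upstream task's return value back out of XCom.
    rows = context["ti"].xcom_pull(task_ids="extract")
    print(f"Loading {len(rows)} rows")


with DAG(
    dag_id="xcom_demo",
    schedule="@daily",          # cron presets or cron strings schedule the DAG
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task   # dependency: extract runs before load
```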

Closing Remark

As you explore job opportunities in the airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay updated with the latest trends in airflow, and demonstrate your problem-solving abilities to stand out in the competitive job market. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies