
2514 Airflow Jobs - Page 34

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

We're seeking a detail-oriented, technically minded Product Manager in Chennai to drive strategy, execution, and compliance for our software development. You'll define roadmaps, manage backlogs in JIRA, collaborate with engineers, ensure technical quality, uphold compliance standards, and communicate effectively with stakeholders while focusing on delivering high-impact customer value and maintaining product health.

Skills Required: Python
Skills Preferred: JIRA, Python, GCP, GCP Cloud Run, Angular, Airflow, BigQuery, Terraform, LLM, Cycode, Dynatrace, Checkmarx, Fossa
Experience Required: 3 years

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

B.Responsible
- Works independently on data collection and preparation. Uses past experience and seeks help in complex scenarios to translate business problems into data-driven insights.
- Leverages available cloud big data platforms to run root cause analysis and data reconciliations, and shares the insights with the business team.
- Maintains and drives key reports, metrics and workflows running within their scope.
- Is able to communicate results and outcomes clearly to stakeholders based on their knowledge and experience.
- Actively participates in business and/or analytics team activities and suggests ways of achieving objectives (standups, planning meetings, retrospectives).
- Networks and proactively connects with craft peers beyond the team scope.
- Has a strong understanding of big data ecosystems.
- Collaborates and is open to giving and receiving feedback with peers and direct stakeholders.
- Is flexible in adopting and proposing new approaches and expanding their technical competencies when a more efficient way presents itself.
- Expected to gain significant deep knowledge about the operational, tactical and strategic workings of the department, with a main focus on business and technical opportunities.

B.Skilled
- Educational background in a quantitative field, preferably a Master's degree.
- 3-5 years of experience in data analytics, insight generation and data visualization.
- Should have executed big data analytics projects in an industry setting.
- Advanced knowledge of SQL, ideally with experience in Snowflake.
- Good knowledge of Python/PySpark.
- Experience working with ETL and data modelling tools like Airflow, Dagster and dbt.
- Knowledge and experience using data analysis and visualization tools (e.g. Tableau, Data Studio, Power BI, Mixpanel, etc.).
- Familiarity with cloud data platforms like AWS and Git version control is a plus.
- Familiarity with financial metrics is a big plus.
- Strong communication and stakeholder management skills.
- Able to understand details while keeping an eye on the bigger picture.

Pre-Employment Screening
If your application is successful, your personal data may be used for a pre-employment screening check by a third party as permitted by applicable law. Depending on the vacancy and applicable law, a pre-employment screening may include employment history, education and other information (such as media information) that may be necessary for determining your qualifications and suitability for the position.
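The root cause analysis and data reconciliation work described above usually boils down to comparing datasets across systems. A minimal PySpark sketch of such a reconciliation is shown below; the table and column names are hypothetical, not taken from the posting:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("reconciliation").getOrCreate()

# Hypothetical source and warehouse tables; names are illustrative only.
source = spark.table("raw.bookings")
warehouse = spark.table("analytics.bookings")

# Compare row counts as a first sanity check.
print("source rows:", source.count(), "warehouse rows:", warehouse.count())

# Find keys present in the source but missing from the warehouse.
missing = source.select("booking_id").subtract(warehouse.select("booking_id"))
missing.show(20)
```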

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

About Groww
We are a passionate group of people focused on making financial services accessible to every Indian through a multi-product platform. Each day, we help millions of customers take charge of their financial journey. Customer obsession is in our DNA. Every product, every design, every algorithm down to the tiniest detail is executed keeping the customers' needs and convenience in mind. Our people are our greatest strength. Everyone at Groww is driven by ownership, customer-centricity, integrity and the passion to constantly challenge the status quo. Are you as passionate about defying conventions and creating something extraordinary as we are? Let's chat.

Our Vision
Every individual deserves the knowledge, tools, and confidence to make informed financial decisions. At Groww, we are making sure every Indian feels empowered to do so through a cutting-edge multi-product platform offering a variety of financial services. Our long-term vision is to become the trusted financial partner for millions of Indians.

Our Values
Our culture enables us to be what we are: India's fastest-growing financial services company. It fosters an environment where collaboration, transparency, and open communication take center stage and hierarchies fade away. There is space for every individual to be themselves and feel motivated to bring their best to the table, as well as craft a promising career for themselves. The values that form our foundation are:
- Radical customer centricity
- Ownership-driven culture
- Keeping everything simple
- Long-term thinking
- Complete transparency

EXPERTISE AND QUALIFICATIONS

What you'll do:
- Provide 24x7 infra and platform support for the Data Platform infrastructure hosting the workloads of the data engineering teams, while building processes and documenting "tribal" knowledge along the way.
- Manage application deployment and GKE platforms: automate and improve development and release processes.
- Create, manage and maintain datastores and data platform infra using IaC.
- Own the end-to-end availability, performance and capacity of applications and their infrastructure, and create and maintain the corresponding observability with Prometheus/New Relic/ELK/Loki.
- Own and onboard new applications through the production readiness review process.
- Manage SLOs/error budgets/alerts and perform root cause analysis for production errors.
- Work with Core Infra, Dev and Product teams to define SLOs/error budgets/alerts.
- Work with the Dev team to build an in-depth understanding of the application architecture and its bottlenecks.
- Identify observability gaps in applications and infrastructure and work with stakeholders to fix them.
- Manage outages, perform detailed RCAs with developers and identify ways to avoid those situations.
- Automate toil and repetitive work.

What We're Looking For:
- 3+ years of experience managing high-traffic, large-scale microservices and infrastructure, with excellent troubleshooting skills.
- Has handled and worked on distributed processing engines, distributed databases and messaging queues (Kafka, Pub/Sub or RabbitMQ etc.).
- Experienced in setting up and working on data platforms, data lakes, and data ingestion systems that work at scale.
- Write core libraries (in Python and Golang) to interact with various internal data stores.
- Define and support internal SLAs for common data infrastructure.
- Good to have: familiarity with BigQuery or Trino, Pinot, Airflow, and Superset or similar tools (familiarity with Mongo and Redis is also good to have).
- Experience in troubleshooting, managing and deploying containerized environments using Docker/containers and Kubernetes is a must.
- Extensive experience in DNS, TCP/IP, UDP, gRPC, routing and load balancing.
- Expertise in GitOps, Infrastructure as Code tools such as Terraform, and configuration management tools such as Chef, Puppet, SaltStack, Ansible.
- Expertise in Google Cloud (GCP) and/or other relevant cloud infrastructure solutions like AWS or Azure.
- Experience in building CI/CD pipelines with any one of the tools such as Jenkins, GitLab, Spinnaker, Argo etc.
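The observability responsibilities above usually start with instrumenting services so Prometheus can scrape them. Here is a minimal sketch using the prometheus_client library; the metric names, port, and the simulated data-store call are illustrative assumptions:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metrics for an internal data-store client library.
REQUESTS = Counter("datastore_requests_total", "Requests issued to the data store", ["operation"])
LATENCY = Histogram("datastore_request_seconds", "Data store request latency in seconds")

def query_datastore():
    # Placeholder for a real call to an internal data store.
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.05))
    REQUESTS.labels(operation="read").inc()

if __name__ == "__main__":
    start_http_server(8000)  # Expose /metrics for Prometheus to scrape.
    while True:
        query_datastore()
```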

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: Automation Engineer
Job Type: Full-time, Contractor

About Us:
Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary:
We are seeking a detail-oriented and innovative Automation Engineer to join our customer's team. In this critical role, you will design, develop, and execute automated tests to ensure the quality, reliability, and integrity of data within Databricks environments. If you are passionate about data quality, thrive in collaborative environments, and excel at both written and verbal communication, we'd love to meet you.

Key Responsibilities:
- Design, develop, and maintain robust automated test scripts using Python, Selenium, and SQL to validate data integrity within Databricks environments.
- Execute comprehensive data validation and verification activities to ensure accuracy and consistency across multiple systems, data warehouses, and data lakes.
- Create detailed and effective test plans and test cases based on technical requirements and business specifications.
- Integrate automated tests with CI/CD pipelines to facilitate seamless and efficient testing and deployment processes.
- Work collaboratively with data engineers, developers, and other stakeholders to gather data requirements and achieve comprehensive test coverage.
- Document test cases, results, and identified defects; communicate findings clearly to the team.
- Conduct performance testing to ensure data processing and retrieval meet established benchmarks.
- Provide mentorship and guidance to junior team members, promoting best practices in test automation and data validation.

Required Skills and Qualifications:
- Strong proficiency in Python, Selenium, and SQL for developing test automation solutions.
- Hands-on experience with Databricks, data warehouse, and data lake architectures.
- Proven expertise in automated testing of data pipelines, preferably with tools such as Apache Airflow, dbt test, or similar.
- Proficient in integrating automated tests within CI/CD pipelines on cloud platforms (AWS, Azure preferred).
- Excellent written and verbal communication skills with the ability to translate technical concepts to diverse audiences.
- Bachelor's degree in Computer Science, Information Technology, or a related discipline.
- Demonstrated problem-solving skills and a collaborative approach to teamwork.

Preferred Qualifications:
- Experience with implementing security and data protection measures in data-driven applications.
- Ability to integrate user-facing elements with server-side logic for seamless data experiences.
- Demonstrated passion for continuous improvement in test automation processes, tools, and methodologies.
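As a rough illustration of the automated data validation described above, the sketch below uses pytest to check row counts and null constraints through a generic DB-API connection. The in-memory SQLite fixture and table are stand-ins; a real suite would connect to Databricks instead:

```python
import sqlite3

import pytest

@pytest.fixture
def conn():
    # Stand-in connection; a real suite would use a Databricks SQL connector here.
    connection = sqlite3.connect(":memory:")
    connection.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
    connection.execute("INSERT INTO orders VALUES (1, 10.5), (2, 20.0)")
    yield connection
    connection.close()

def test_orders_not_empty(conn):
    count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    assert count > 0

def test_order_ids_not_null(conn):
    nulls = conn.execute("SELECT COUNT(*) FROM orders WHERE order_id IS NULL").fetchone()[0]
    assert nulls == 0
```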

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

TCS Hiring! Walk-in drive (face to face) on 14th June 2025, in person.

Role: Python Full Stack Developer
Experience: 6 to 8 years
Locations: Hyderabad, Kolkata
Walk-in date: 14 June 2025

Please read the job description before applying.

NOTE: If the skills/profile match and you are interested, please reply to this email attaching your latest updated CV along with the following details:
- Name:
- Contact Number:
- Email ID:
- Highest Qualification in: (e.g. B.Tech/B.E./M.Tech/MCA/M.Sc./MS/BCA/B.Sc./etc.)
- Current Organization Name:
- Total IT Experience:
- Current CTC:
- Expected CTC:
- Notice period:
- Whether worked with TCS - Y/N:
- Location:

Frontend
- 6+ years of overall experience with proficiency in React (2+ years), TypeScript (1+ year), React hooks (1+ year)
- Experience with ESLint, CSS-in-JS styling (preferably Emotion), state management (preferably Redux), and JavaScript bundlers such as Webpack
- Experience with integrating with RESTful APIs or other web services

Backend
- Expertise with Python (3+ years, preferably Python 3)
- Proficiency with a Python web framework (2+ years, preferably Flask and FastAPI)
- Experience with a Python linter (preferably flake8), graph databases (preferably Neo4j), a package manager (preferably pip), Elasticsearch, and Airflow
- Experience with developing microservices, RESTful APIs or other web services
- Experience with database design and management, including NoSQL/RDBMS tradeoffs
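For context on the FastAPI piece of the backend stack above, a minimal endpoint might look like the sketch below; the routes and model are illustrative assumptions, not part of the posting:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.get("/health")
def health() -> dict:
    # Simple liveness-check endpoint.
    return {"status": "ok"}

@app.post("/items")
def create_item(item: Item) -> dict:
    # Echo the validated payload back; a real service would persist it.
    return {"created": item.name, "price": item.price}

# Run locally with: uvicorn main:app --reload
```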

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Join us as a Principal Engineer - PySpark

This is a challenging role that will see you design and engineer software with the customer or user experience as the primary objective. You'll actively contribute to our architecture, design and engineering centre of excellence, collaborating to improve the bank's overall software engineering capability. You'll gain valuable stakeholder exposure as you build and leverage relationships, as well as the opportunity to hone your technical talents. We're offering this role at vice president level.

What you'll do
As a Principal Engineer, you'll be creating great customer outcomes via engineering and innovative solutions to existing and new challenges, and technology designs which are innovative, customer centric, high performance, secure and robust. You'll be working with software engineers in the production and prototyping of innovative ideas, engaging with domain and enterprise architects to validate and leverage these in wider contexts, by incorporating the relevant architectures. We'll also look to you to design and develop software with a focus on the automation of build, test and deployment activities, while developing the discipline of software engineering across the business.

You'll also be:
- Defining, creating and providing oversight and governance of engineering and design solutions with a focus on end-to-end automation, simplification, resilience, security, performance, scalability and reusability
- Working within a platform or feature team along with software engineers to design and engineer complex software, scripts and tools to enable the delivery of bank platforms, applications and services, acting as a point of contact for solution design considerations
- Defining and developing architecture models and roadmaps of application and software components to meet business and technical requirements, driving common usability across products and domains
- Designing, producing, testing and implementing working code, along with applying Agile methods to the development of software with the use of DevOps techniques

The skills you'll need
You'll come with significant experience in software engineering, software or database design and architecture, as well as experience of developing software within a DevOps and Agile framework. Along with an expert understanding of the latest market trends, technologies and tools, you'll need at least ten years of experience working with Python or PySpark, with at least four years of team handling experience. You'll need experience in model development and support, with expertise in Spark SQL query optimisation and performance tuning. You'll also need experience in writing advanced Spark SQL or ANSI SQL queries. Knowledge of AWS will be highly desired.

You'll also need:
- A strong background in leading software development teams in a matrix structure, introducing and executing technical strategies
- Experience in Unix or Linux scripting, Airflow, continuous integration, DevOps, Git and Artifactory
- Experience in Agile, a test-driven development approach and software delivery best practice
- The ability to rapidly and effectively understand and translate product and business requirements into technical solutions
- A background of working with code repositories, bug tracking tools and wikis
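Since the role calls out Spark SQL query optimisation and performance tuning, here is a small hedged PySpark sketch of one common technique, broadcasting a small dimension table into a join; the table names are purely illustrative:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-tuning").getOrCreate()

# Hypothetical tables: a large fact table and a small dimension table.
transactions = spark.table("finance.transactions")
branches = spark.table("reference.branches")

# Broadcasting the small table avoids shuffling the large one.
joined = transactions.join(broadcast(branches), on="branch_id", how="left")

# Inspect the physical plan to confirm a BroadcastHashJoin is used.
joined.explain()
```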

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Name: Senior Data Engineer - Azure
Years of Experience: 5

Job Description:
We are looking for a skilled and experienced Senior Azure Developer to join our team! As part of the team, you will be involved in the implementation of ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you!

Primary Skills: ADF, Databricks
Secondary Skills: DBT, Python, Databricks, Airflow, Fivetran, Glue, Snowflake

Role Description:
This data engineering role requires creating and managing the technological infrastructure of a data platform, being in charge of or involved in architecting, building, and managing data flows/pipelines, and constructing data storages (NoSQL, SQL), tools to work with big data (Hadoop, Kafka), and integration tools to connect sources or other databases.

Role Responsibility:
- Translate functional specifications and change requests into technical specifications
- Translate business requirement documents, functional specifications, and technical specifications into related coding
- Develop efficient code with unit testing and code documentation
- Ensure accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving
- Set up the development environment and configure the development tools
- Communicate with all project stakeholders on the project status
- Manage, monitor, and ensure the security and privacy of data to satisfy business needs
- Contribute to the automation of modules, wherever required
- Be proficient in written, verbal and presentation communication (English)
- Coordinate with the UAT team

Role Requirement:
- Proficient in basic and advanced SQL programming concepts (procedures, analytical functions etc.)
- Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions etc.)
- Knowledgeable in Shell/PowerShell scripting
- Knowledgeable in relational databases, non-relational databases, data streams, and file stores
- Knowledgeable in performance tuning and optimization
- Experience in data profiling and data validation
- Experience in requirements gathering and documentation processes and performing unit testing
- Understanding and implementing QA and various testing processes in the project
- Knowledge of any BI tools will be an added advantage
- Sound aptitude, outstanding logical reasoning, and analytical skills
- Willingness to learn and take initiative
- Ability to adapt to a fast-paced Agile environment

Additional Requirement:
- Demonstrated expertise as a Data Engineer, specializing in Azure cloud services.
- Highly skilled in Azure Data Factory, Azure Data Lake, Azure Databricks, and Azure Synapse Analytics.
- Create and execute efficient, scalable, and dependable data pipelines utilizing Azure Data Factory.
- Utilize Azure Databricks for data transformation and processing.
- Effectively oversee and enhance data storage solutions, emphasizing Azure Data Lake and other Azure storage services.
- Construct and uphold workflows for data orchestration and scheduling using Azure Data Factory or equivalent tools.
- Proficient in programming languages like Python and SQL, and conversant with pertinent scripting languages.
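The posting mentions change data capture and slowly changing dimensions alongside Azure Databricks; a common building block for that is a Delta Lake MERGE. A minimal sketch, with assumed table and column names, could look like this:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("scd-upsert").getOrCreate()

# Hypothetical incoming change set and target Delta table.
updates = spark.table("staging.customer_updates")
target = DeltaTable.forName(spark, "warehouse.dim_customer")

# Upsert: update matching keys, insert new ones (a simple SCD type 1 pattern).
(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```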

Posted 1 week ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Title: Python Developer (GCP)
Location: Chennai
Experience: 7-12 years

Job Summary
We are seeking a Python Developer (GCP) with deep expertise in Python, Google Cloud Platform (GCP), and MLOps to lead end-to-end development and deployment of machine learning solutions. The ideal candidate is a versatile engineer capable of building both front-end and back-end systems, while also managing the automation and scalability of ML workflows and models in production.

Required Skills & Experience
- Strong proficiency in Python, including OOP, data processing, and backend development.
- 3+ years of experience with Google Cloud Platform (GCP) and services relevant to ML and application deployment.
- Proven experience with MLOps practices and tools such as Vertex AI, MLflow, Kubeflow, TensorFlow Extended (TFX), Airflow, or similar.
- Hands-on experience in front-end development (React.js, Angular, or similar).
- Experience in building RESTful APIs and working with Flask/FastAPI/Django frameworks.
- Familiarity with Docker, Kubernetes, and CI/CD pipelines in cloud environments.
- Experience with Terraform, Cloud Build, and monitoring tools (e.g., Stackdriver, Prometheus).
- Understanding of version control (Git), Agile methodologies, and collaborative software development.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

THIS IS A LONG-TERM CONTRACT POSITION WITH ONE OF THE LARGEST, GLOBAL TECHNOLOGY LEADERS.

Our large Fortune client is ranked as one of the best companies to work with in the world. The client fosters a progressive culture, creativity, and a flexible work environment. They use cutting-edge technologies to keep themselves ahead of the curve. Diversity in all aspects is respected. Integrity, experience, honesty, people, humanity, and passion for excellence are some other adjectives that define this global technology leader.

Responsibilities:
- Contribute to the team's vision and articulate strategies to have fundamental impact at our massive scale.
- Bring a product-focused mindset: it is essential to understand business requirements and architect systems that will scale and extend to accommodate those needs.
- Diagnose and solve complex problems in distributed systems, develop and document technical solutions, and sequence work to make fast, iterative deliveries and improvements.
- Build and maintain high-performance, fault-tolerant, and scalable distributed systems that can handle our massive scale.
- Provide solid leadership within your own problem space through a data-driven approach, robust software designs, and effective delegation.
- Participate in, or spearhead, design reviews with peers and stakeholders to adopt what's best suited amongst available technologies.
- Review code developed by other developers and provide feedback to ensure best practices (e.g., checking code in, accuracy, testability, and efficiency).
- Automate cloud infrastructure, services, and observability.
- Develop CI/CD pipelines and testing automation (nice to have).
- Establish and uphold best engineering practices through thorough code and design reviews and improved processes and tools.
- Groom junior engineers through mentoring and delegation.
- Drive a culture of trust, respect, and inclusion within your team.

Minimum Qualifications:
- Bachelor's degree in Computer Science, Engineering or a related field, or equivalent training, fellowship, or work experience.
- Minimum 5 years of experience curating data and hands-on experience working on ETL/ELT tools.
- Strong overall programming skills, able to write modular, maintainable code, preferably in Python and SQL.
- Strong data warehousing concepts and SQL skills; understanding of SQL, dimensional modelling, and at least one relational database.
- Experience with AWS.
- Exposure to Snowflake and ingesting data into it, or exposure to similar tools.
- Humble, collaborative team player, willing to step up and support your colleagues.
- Effective communication, problem solving and interpersonal skills.
- Commitment to growing deeper in the knowledge and understanding of how to improve our existing applications.

Preferred Qualifications:
- Experience with the following tools: DBT, Fivetran, Airflow.
- Knowledge and experience in Spark, Hadoop 2.0, and its ecosystem.
- Experience with automation frameworks/tools like Git, Jenkins.

Primary Skills: Snowflake, Python, SQL, DBT
Secondary Skills: Fivetran, Airflow, Git, Jenkins, AWS, SQL DBM
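As a rough sketch of the Snowflake ingestion exposure mentioned above, the snippet below loads rows using the snowflake-connector-python package; the credentials, warehouse, database, and table names are placeholders, not details from the posting:

```python
import snowflake.connector

# Placeholder credentials; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    user="YOUR_USER",
    password="YOUR_PASSWORD",
    account="YOUR_ACCOUNT",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

rows = [(1, "alpha"), (2, "beta")]
with conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS demo_events (id INTEGER, label STRING)")
    cur.executemany("INSERT INTO demo_events (id, label) VALUES (%s, %s)", rows)

conn.close()
```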

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Role: Python, AWS, Terraform
Required Technical Skill Set: Python Full Stack Developer
Desired Experience Range: 06 - 08 years
Notice Period: Immediate to 90 days only
Location of Requirement: Hyderabad

We are currently planning a walk-in interview on 14th June 2025 (Saturday).
Date: 14th June 2025 (Saturday)
Venue: Tata Consultancy Services Limited, Kohinoor Park, Plot No 1, Hitech City Road, Rd Number 1, HITEC City, Hyderabad, Telangana 500084

Job Description: Primary Skill

Frontend
- 6+ years of overall experience with proficiency in React (2+ years), TypeScript (1+ year), React hooks (1+ year)
- Experience with ESLint, CSS-in-JS styling (preferably Emotion), state management (preferably Redux), and JavaScript bundlers such as Webpack
- Experience with integrating with RESTful APIs or other web services

Backend
- Expertise with Python (3+ years, preferably Python 3)
- Proficiency with a Python web framework (2+ years, preferably Flask and FastAPI)
- Experience with a Python linter (preferably flake8), graph databases (preferably Neo4j), a package manager (preferably pip), Elasticsearch, and Airflow
- Experience with developing microservices, RESTful APIs or other web services
- Experience with database design and management, including NoSQL/RDBMS tradeoffs
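Since graph databases (Neo4j) appear in the backend stack above, here is a tiny hedged sketch of querying Neo4j from Python with the official driver; the connection URI, credentials, and data model are assumptions for illustration only:

```python
from neo4j import GraphDatabase

# Placeholder connection details.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def find_coworkers(tx, name):
    # Illustrative data model: (:Person)-[:WORKS_WITH]->(:Person)
    query = (
        "MATCH (p:Person {name: $name})-[:WORKS_WITH]->(c:Person) "
        "RETURN c.name AS coworker"
    )
    return [record["coworker"] for record in tx.run(query, name=name)]

with driver.session() as session:
    coworkers = session.execute_read(find_coworkers, "Asha")
    print(coworkers)

driver.close()
```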

Posted 1 week ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

We are hiring for Digit88

About Digit88
Digit88 empowers digital transformation for innovative and high-growth B2B and B2C SaaS companies as their trusted offshore software product engineering partner! We are a lean mid-stage software company, with a team of 75+ fantastic technologists, backed by executives with deep understanding of and extensive experience in consumer and enterprise product development across large corporations and startups. We build highly efficient and effective engineering teams that solve real and complex problems for our partners. With more than 50+ years of collective experience in areas ranging from B2B and B2C SaaS, web and mobile apps, e-commerce platforms and solutions, custom enterprise SaaS platforms and domains spread across Conversational AI, Chatbots, IoT, Health-tech, ESG/Energy Analytics, and Data Engineering, the founding team thrives in a fast-paced and challenging environment that allows us to showcase our best.

The Vision: To be the most trusted technology partner to innovative software product companies world-wide.

The Opportunity
The Digit88 development team is establishing a new offshore product development team for its partner, which is building next-generation Big Data, cloud-based business operation support technology for utilities, retail energy suppliers and Community Choice Aggregators (CCA). The candidate would be joining an existing team of outstanding data engineers in the US, helping us expand the data engineering team and working on different products and on different layers of the infrastructure.

Job Profile
Digit88 is looking for a Big Data Engineer who will work on building and managing Big Data pipelines that deal with the huge structured data sets we use as input to accurately generate analytics at scale for our valued customers. The primary focus will be on choosing optimal solutions for these purposes, then maintaining, implementing, and monitoring them. You will also be responsible for integrating them with the architecture used across the company. Applicants must have a passion for engineering with accuracy and efficiency, be highly motivated and organized, able to work as part of a team, and also possess the ability to work independently with minimal supervision.

To be successful in this role, you should possess:
- Ability to collaborate closely with Product Management and Engineering leadership to devise and build the right solution.
- Participation in design discussions and brainstorming sessions to select, integrate, and maintain the Big Data tools and frameworks required to solve Big Data problems at scale.
- Ability to design and implement systems to cleanse, process, and analyze large data sets using distributed processing tools like Akka and Spark.
- Understanding and critical review of existing data pipelines, and coming up with ideas in collaboration with technical leaders and architects to improve upon current bottlenecks.
- Initiative and the drive to pick up new things proactively, and to work as a senior individual contributor on the multiple products and features we have.
- 8+ years of experience in developing highly scalable Big Data pipelines.
- Hands-on experience in team leading and in leading product or module development.
- In-depth understanding of the Big Data ecosystem, including processing frameworks like Spark, Akka, Storm, and Hadoop, and the file types they deal with.
- Experience with ETL and data pipeline tools like Apache NiFi, Airflow etc.
- Excellent coding skills in Java or Scala, including the understanding to apply appropriate design patterns when required.
- Experience with Git and build tools like Gradle/Maven/SBT.
- Strong understanding of object-oriented design, data structures, algorithms, profiling, and optimization.
- An elegant, readable, maintainable and extensible code style.

You are someone who would easily be able to:
- Work closely with the US and India engineering teams to help build the Java/Scala based data pipelines.
- Lead the India engineering team in technical excellence and ownership of critical modules; own the development of new modules and features.
- Troubleshoot live production server issues.
- Handle client coordination and work as part of a team, contribute independently, and drive the team to exceptional contributions with minimal team supervision.
- Follow Agile methodology and use JIRA for work planning and issue management/tracking.

Additional Project/Soft Skills:
- Should be able to work independently with India- and US-based team members.
- Strong verbal and written communication, with the ability to articulate problems and solutions over phone and email.
- Strong sense of urgency, with a passion for accuracy and timeliness.
- Ability to work calmly in high-pressure situations and manage multiple projects/tasks.
- Ability to work independently and possess superior skills in issue resolution.
- Should have the passion to learn and implement, analyse and troubleshoot issues.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

WHAT MAKES US A GREAT PLACE TO WORK
We are proud to be consistently recognized as one of the world's best places to work. We are currently the #1 ranked consulting firm on Glassdoor's Best Places to Work list and have maintained a spot in the top four on Glassdoor's list since its founding in 2009. Extraordinary teams are at the heart of our business strategy, but these don't happen by chance. They require intentional focus on bringing together a broad set of backgrounds, cultures, experiences, perspectives, and skills in a supportive and inclusive work environment. We hire people with exceptional talent and create an environment in which every individual can thrive professionally and personally.

WHO YOU'LL WORK WITH
You'll join our Application Engineering experts within the AI, Insights & Solutions team. This team is part of Bain's digital capabilities practice, which includes experts in analytics, engineering, product management, and design. In this multidisciplinary environment, you'll leverage deep technical expertise with business acumen to help clients tackle their most transformative challenges. You'll work on integrated teams alongside our general consultants and clients to develop data-driven strategies and innovative solutions. Together, we create human-centric solutions that harness the power of data and artificial intelligence to drive competitive advantage for our clients. Our collaborative and supportive work environment fosters creativity and continuous learning, enabling us to consistently deliver exceptional results.

WHAT YOU'LL DO
- Design, develop, and maintain cloud-based AI applications, leveraging a full-stack technology stack to deliver high-quality, scalable, and secure solutions.
- Collaborate with cross-functional teams, including product managers, data scientists, and other engineers, to define and implement analytics features and functionality that meet business requirements and user needs.
- Utilize Kubernetes and containerization technologies to deploy, manage, and scale analytics applications in cloud environments, ensuring optimal performance and availability.
- Develop and maintain APIs and microservices to expose analytics functionality to internal and external consumers, adhering to best practices for API design and documentation.
- Implement robust security measures to protect sensitive data and ensure compliance with data privacy regulations and organizational policies.
- Continuously monitor and troubleshoot application performance, identifying and resolving issues that impact system reliability, latency, and user experience.
- Participate in code reviews and contribute to the establishment and enforcement of coding standards and best practices to ensure high-quality, maintainable code.
- Stay current with emerging trends and technologies in cloud computing, data analytics, and software engineering, and proactively identify opportunities to enhance the capabilities of the analytics platform.
- Collaborate with DevOps and infrastructure teams to automate deployment and release processes, implement CI/CD pipelines, and optimize the development workflow for the analytics engineering team.
- Collaborate closely with, and influence, business consulting staff and leaders as part of multi-disciplinary teams to assess opportunities and develop analytics solutions for Bain clients across a variety of sectors.
- Influence, educate and directly support the analytics application engineering capabilities of our clients.

Travel is required (30%).

ABOUT YOU
Required:
- Master's degree in Computer Science, Engineering, or a related technical field.
- 5+ years at Senior or Staff level, or equivalent.
- Experience with client-side technologies such as React, Angular, Vue.js, HTML and CSS.
- Experience with server-side technologies such as Django, Flask, FastAPI.
- Experience with cloud platforms and services (AWS, Azure, GCP) via Terraform automation (good to have).
- 3+ years of Python expertise.
- Use of Git as your main tool for versioning and collaborating.
- Experience with DevOps, CI/CD, GitHub Actions.
- Demonstrated interest in LLMs, prompt engineering, LangChain.
- Experience with workflow orchestration: it doesn't matter if it's dbt, Beam, Airflow, Luigi, Metaflow, Kubeflow, or any other.
- Experience implementing large-scale structured or unstructured databases, orchestration and container technologies such as Docker or Kubernetes.
- Strong interpersonal and communication skills, including the ability to explain and discuss complex engineering technicalities with colleagues and clients from other disciplines at their level of cognition.
- Curiosity, proactivity and critical thinking.
- Strong computer science fundamentals in data structures, algorithms, automated testing, object-oriented programming, performance complexity, and the implications of computer architecture on software performance.
- Strong knowledge of designing API interfaces.
- Knowledge of data architecture, database schema design and database scalability.
- Agile development methodologies.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Vadodara, Gujarat, India

On-site

Source: LinkedIn

At Rearc, we're committed to empowering engineers to build awesome products and experiences. Success as a business hinges on our people's ability to think freely, challenge the status quo, and speak up about alternative problem-solving approaches. If you're an engineer driven by the desire to solve problems and make a difference, you're in the right place!

Our approach is simple: empower engineers with the best tools possible to make an impact within their industry. We're on the lookout for engineers who thrive on ownership and freedom, possessing not just technical prowess, but also exceptional leadership skills. Our ideal candidates are hands-on leaders who don't just talk the talk but also walk the walk, designing and building solutions that push the boundaries of cloud computing.

As a Senior Data Engineer at Rearc, you will be at the forefront of driving technical excellence within our data engineering team. Your expertise in data architecture, cloud-native solutions, and modern data processing frameworks will be essential in designing workflows that are optimized for efficiency, scalability, and reliability. You'll leverage tools like Databricks, PySpark, and Delta Lake to deliver cutting-edge data solutions that align with business objectives. Collaborating with cross-functional teams, you will design and implement scalable architectures while adhering to best practices in data management and governance. Building strong relationships with both technical teams and stakeholders will be crucial as you lead data-driven initiatives and ensure their seamless execution.

What You Bring
- 8+ years of experience in data engineering, showcasing expertise in diverse architectures, technology stacks, and use cases.
- Strong expertise in designing and implementing data warehouse and data lake architectures, particularly in AWS environments.
- Extensive experience with Python for data engineering tasks, including familiarity with libraries and frameworks commonly used in Python-based data engineering workflows.
- Proven experience with data pipeline orchestration using platforms such as Airflow, Databricks, DBT or AWS Glue.
- Hands-on experience with data analysis tools and libraries like PySpark, NumPy, Pandas, or Dask; proficiency with Spark and Databricks is highly desirable.
- Experience with SQL and NoSQL databases, including PostgreSQL, Amazon Redshift, Delta Lake, Iceberg and DynamoDB.
- In-depth knowledge of data architecture principles and best practices, especially in cloud environments.
- Proven experience with AWS services, including expertise in using the AWS CLI, SDK, and Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or AWS CDK.
- Exceptional communication skills, capable of clearly articulating complex technical concepts to both technical and non-technical stakeholders.
- Demonstrated ability to quickly adapt to new tasks and roles in a dynamic environment.

What You'll Do
- Strategic Data Engineering Leadership: Provide strategic vision and technical leadership in data engineering, guiding the development and execution of advanced data strategies that align with business objectives.
- Architect Data Solutions: Design and architect complex data pipelines and scalable architectures, leveraging advanced tools and frameworks (e.g., Apache Kafka, Kubernetes) to ensure optimal performance and reliability.
- Drive Innovation: Lead the exploration and adoption of new technologies and methodologies in data engineering, driving innovation and continuous improvement across data processes.
- Technical Expertise: Apply deep expertise in ETL processes, data modelling, and data warehousing to optimize data workflows and ensure data integrity and quality.
- Collaboration and Mentorship: Collaborate closely with cross-functional teams to understand requirements and deliver impactful data solutions; mentor and coach junior team members, fostering their growth and development in data engineering practices.
- Thought Leadership: Contribute to thought leadership in the data engineering domain through technical articles, conference presentations, and participation in industry forums.

Some More About Us
Founded in 2016, we pride ourselves on fostering an environment where creativity flourishes, bureaucracy is non-existent, and individuals are encouraged to challenge the status quo. We're not just a company; we're a community of problem-solvers dedicated to improving the lives of fellow software engineers. Our commitment is simple: finding the right fit for our team and cultivating a desire to make things better. If you're a cloud professional intrigued by our problem space and eager to make a difference, you've come to the right place. Join us, and let's solve problems together!
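The pipeline orchestration experience called for above often centres on Airflow. A minimal hedged DAG sketch follows; the DAG id, task names, and schedule are illustrative, not taken from the posting:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder extract step; a real task might pull files from S3.
    print("extracting source data")

def transform():
    # Placeholder transform step; a real task might run a Spark or DBT job.
    print("transforming data")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```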

Posted 1 week ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Source: LinkedIn

About the Role
You'll work directly with the founder: learning fast, owning small but meaningful pipeline tasks, and shipping production code exactly to spec.

What You'll Do
In this role you'll build and ship ETL/ELT pipelines in Python or Scala, crafting and tuning the necessary SQL transformations, while closely following my design documents and verbal briefs and iterating quickly on feedback until the output matches requirements. You'll keep the codebase healthy by working through Git feature branches and pull requests, adding unit tests, and adhering to our pre-commit hooks. Day-to-day work will involve operating across AWS services such as EMR/Spark as projects demand. Learning is continuous: we'll pair regularly for reviews and debugging, and you'll present your progress during short weekly catch-ups.

Must-Have Basics
- Up to 6 months of practical experience (internship, project, or personal lab) in data engineering
- Working knowledge of Python or Scala and solid SQL
- Basic Git workflow familiarity
- Conceptual understanding of big-data tooling (Spark/Hadoop)
- Exposure to at least the core AWS storage/compute services
- Strong willingness to take direction, ask questions, and iterate quickly
- Reside in Ahmedabad and commit to full-time office work

Nice-to-Haves
- Docker or Airflow familiarity
- Data-modeling basics (star/snowflake, SCD)
- Hackathon or open-source contributions

Compensation & Perks
- ₹15,000 - ₹30,000 / month (intern / junior band)
- Direct 1-on-1 mentorship from a senior data engineer & founder
- Dedicated learning budget after 90 days
- Comfortable workspace, high-end dev laptop, free coffee/snacks

How to Apply
Apply with your résumé (PDF). In the note, share a link to code or briefly describe a data project you built. Shortlisted candidates will have an on-site interview (Python and SQL discussions).

Location: S.G. Highway, Ahmedabad
Timing: 8-9 hours (flexible)
Experience: 0 to 6 months

If you're hungry to learn, enjoy clear guidance, and want to grow into a full-stack data engineer, I'd love to hear from you.
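As a flavour of the kind of small ETL task described above, here is a hedged sketch that pulls a CSV from S3 with boto3, applies a pandas transformation, and writes Parquet back; the bucket, key, and column names are made up for illustration:

```python
import boto3
import pandas as pd

# Hypothetical S3 location of the raw extract.
BUCKET = "example-raw-data"
KEY = "orders/2025-06-01.csv"

s3 = boto3.client("s3")
s3.download_file(BUCKET, KEY, "/tmp/orders.csv")

# Transform: drop incomplete rows and add a derived column.
orders = pd.read_csv("/tmp/orders.csv")
orders = orders.dropna(subset=["order_id", "amount"])
orders["amount_with_tax"] = orders["amount"] * 1.18

# Load: write the cleaned data back as Parquet for downstream queries.
orders.to_parquet("/tmp/orders_clean.parquet", index=False)
s3.upload_file("/tmp/orders_clean.parquet", BUCKET, "curated/orders/2025-06-01.parquet")
```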

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: Infrastructure Lead/Architect
Job Type: Full-Time
Location: On-site, Hyderabad, Pune or New Delhi

Job Summary
Join our customer's team as an Infrastructure Lead/Architect and play a pivotal role in architecting, designing, and implementing next-generation cloud infrastructure solutions. You will drive cloud and data platform initiatives, ensure system scalability and security, and act as a technical leader, shaping the backbone of our customers' mission-critical applications.

Key Responsibilities
- Architect, design, and implement robust, scalable, and secure AWS cloud infrastructure utilizing services such as EC2, S3, Lambda, RDS, Redshift, and IAM.
- Lead the end-to-end design and deployment of high-performance, cost-efficient Databricks data pipelines, ensuring seamless integration with business objectives.
- Develop and manage data integration workflows using modern ETL tools in combination with Python and Java scripting.
- Collaborate with Data Engineering, DevOps, and Security teams to build resilient, highly available, and compliant systems aligned with operational standards.
- Act as a technical leader and mentor, guiding cross-functional teams through infrastructure design decisions and conducting in-depth code and architecture reviews.
- Oversee project planning, resource allocation, and deliverables, ensuring projects are executed on time and within budget.
- Proactively identify infrastructure bottlenecks, recommend process improvements, and drive automation initiatives.
- Maintain comprehensive documentation and uphold security and compliance standards across the infrastructure landscape.

Required Skills and Qualifications
- 8+ years of hands-on experience in IT infrastructure, cloud architecture, or related roles.
- Extensive expertise with AWS cloud services; AWS certifications are highly regarded.
- Deep experience with Databricks, including cluster deployment, Delta Lake, and machine learning integrations.
- Strong programming and scripting proficiency in Python and Java.
- Advanced knowledge of ETL/ELT processes and tools such as Apache NiFi, Talend, Airflow, or Informatica.
- Proven track record in project management, leading cross-functional teams; PMP or Agile/Scrum certifications are a plus.
- Familiarity with CI/CD workflows and Infrastructure as Code tools like Terraform and CloudFormation.
- Exceptional problem-solving, stakeholder management, and both written and verbal communication skills.

Preferred Qualifications
- Experience with big data platforms such as Spark or Hadoop.
- Background in regulated environments (e.g., finance, healthcare).
- Knowledge of Kubernetes and AWS container orchestration (EKS).

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana

On-site

Source: Indeed

Location: Hyderabad, Telangana, India
Category: Corporate Careers
Job Id: JREQ192013
Job Type: Full time, Hybrid

We are seeking a Senior Manager - Pricing Analytics for the pricing team at Thomson Reuters. The Central Pricing Team works with Pricing Managers, Business Units, Product Marketing Managers, Finance and Sales on price execution for new product launches, maintenance of existing ones, and the creation and maintenance of data products for reporting and analytics. The team is responsible for providing product and pricing information globally to all internal stakeholders and collaborating with upstream and downstream teams to ensure offer pricing readiness. Apart from BAU, the team works on various automation, pricing transformation projects and pricing analytics initiatives.

About the Role
In this role as a Senior Manager - Pricing Analytics, you will:
- Lead and mentor a team of pricing analysts, data engineers, and BI developers.
- Drive operational excellence by fostering a culture of data quality, accountability, and continuous improvement.
- Manage team capacity, project prioritization, and cross-functional coordination with Segment Pricing, Finance, Sales, and Analytics teams.
- Partner closely with the Pricing team to translate business objectives into actionable analytics deliverables.
- Drive insights on pricing performance, discounting trends, segmentation, and monetization opportunities.
- Oversee design and execution of robust ETL pipelines to consolidate data from multiple sources (e.g., Salesforce, EMS, UNISON, SAP, Pendo, product usage platforms etc.).
- Ensure delivery of intuitive, self-service dashboards and reports that track key pricing KPIs, sales performance, and customer behaviour.
- Strategize, deploy and promote scalable analytics architecture and best practices in data governance, modelling, and visualization.
- Act as a trusted advisor to Pricing leadership by delivering timely, relevant, and accurate data insights.
- Collaborate with analytics, finance, segment pricing and data platform teams to align on data availability, definitions, and architecture.

Shift Timings: 2 PM to 11 PM (IST)
Work from office for 2 days in a week (mandatory)

About You
You're a fit for this role if your background includes:
- 10+ years of experience in analytics, data science, or business intelligence, with 3+ years in a people leadership or managerial role.
- Proficiency in SQL, ETL tools (e.g. Alteryx, dbt, Airflow), and BI platforms (e.g., Tableau, Power BI, Looker).
- Knowledge of Python, R, or other statistical tools is a plus.
- Experience with data from Salesforce, SAP, other CRM, ERP or CPQ tools.
- Ability to translate complex data into actionable insights and communicate effectively with senior stakeholders.
- Strong understanding of data analytics, monetization metrics, and SaaS pricing practices.
- Proven experience working in a B2B SaaS or software product company preferred.
- MBA, Master's in Analytics, Engineering, or a quantitative field preferred.

#LI-GS2

What's in it For You?
- Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Industry Competitive Benefits: We offer comprehensive benefit plans that include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
- Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
- Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law.

Posted 1 week ago

Apply

0.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Source: Indeed

Noida, Uttar Pradesh, India; Gurgaon, Haryana, India; Hyderabad, Telangana, India; Indore, Madhya Pradesh, India; Bangalore, Karnataka, India

Qualification:
- 5-7 years of good hands-on exposure to Big Data technologies: PySpark (DataFrame and Spark SQL), Hadoop, and Hive
- Good hands-on experience with Python and Bash scripts
- Good understanding of SQL and data warehouse concepts
- Strong analytical, problem-solving, data analysis and research skills
- Demonstrable ability to think outside of the box and not be dependent on readily available tools
- Excellent communication, presentation and interpersonal skills are a must

Good to have:
- Hands-on experience with cloud-platform-provided Big Data technologies (i.e. IAM, Glue, EMR, Redshift, S3, Kinesis)
- Orchestration with Airflow and any job scheduler experience
- Experience in migrating workloads from on-premise to cloud and in cloud-to-cloud migrations

Skills Required: Python, PySpark, AWS

Role:
- Develop efficient ETL pipelines as per business requirements, following the development standards and best practices.
- Perform integration testing of the different pipelines created in the AWS environment.
- Provide estimates for development, testing and deployments on different environments.
- Participate in code peer reviews to ensure our applications comply with best practices.
- Create cost-effective AWS pipelines with the required AWS services, i.e. S3, IAM, Glue, EMR, Redshift etc.

Experience: 8 to 10 years
Job Reference Number: 13025
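Since the qualification above distinguishes the PySpark DataFrame API from Spark SQL, here is a small hedged sketch showing the same aggregation expressed both ways; the sample data is invented for illustration:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dataframe-vs-sql").getOrCreate()

# Invented sample data standing in for a real Hive or S3-backed table.
sales = spark.createDataFrame(
    [("north", 120.0), ("south", 80.0), ("north", 45.5)],
    ["region", "amount"],
)

# DataFrame API version of the aggregation.
sales.groupBy("region").sum("amount").show()

# Equivalent Spark SQL version over a temporary view.
sales.createOrReplaceTempView("sales")
spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region").show()
```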

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Source: Indeed

Bangalore, Karnataka, India
Job ID: 764972

Join our Team
Ericsson's R&D Data team is seeking a visionary and technically exceptional Principal Machine Learning Engineer to lead the design, development, and deployment of advanced ML systems at scale. This role sits at the strategic intersection of machine learning, data engineering, and cloud-native architecture, shaping the next generation of AI-driven services at Ericsson. As a senior technical leader, you will architect and guide the end-to-end ML lifecycle, from data strategy and engineering to large-scale model deployment and continuous optimization. You'll partner closely with data engineers, software developers, and product stakeholders, while mentoring a high-performing team of ML engineers. Your work will help scale intelligent systems that power mission-critical R&D and network solutions.

Key Responsibilities:
- Architect and implement scalable ML solutions, deeply integrated with robust and reliable data pipelines.
- Own the complete ML lifecycle: data ingestion, preprocessing, feature engineering, model design, training, evaluation, deployment, and monitoring.
- Design and optimize data architectures supporting both batch and streaming ML use cases.
- Collaborate with data engineering teams to build real-time and batch pipelines using managed streaming platforms such as Kafka or equivalent technologies.
- Guide the development and automation of ML workflows using modern MLOps and CI/CD practices.
- Mentor and lead ML engineers, establishing engineering best practices and fostering a high-performance, collaborative culture.
- Align ML efforts with business objectives by working cross-functionally with data scientists, engineers, and product managers.
- Stay current with the latest ML and data engineering advancements, integrating emerging tools and frameworks into scalable, production-ready systems.
- Champion responsible AI practices including model governance, explainability, fairness, and compliance.

Required Qualifications:
- 8+ years of experience in machine learning, applied AI, or data science with a proven record of delivering ML systems at scale.
- 3+ years of experience in data engineering or building ML-supportive data infrastructure and pipelines.
- Advanced degree (MS or PhD preferred) in Computer Science, Data Engineering, Machine Learning, or a related technical field.
- Proficient in Python (preferred), with experience in Java or C++ for backend or performance-critical tasks.
- Deep expertise with ML frameworks (TensorFlow, PyTorch, JAX) and cloud platforms, especially AWS (SageMaker, Lambda, Step Functions, S3, etc.).
- Experience with managed streaming data platforms such as Amazon MSK, or similar technologies for real-time ML data pipelines.
- Experience with distributed systems and data processing tools such as Spark, Airflow, and AWS Glue.
- Fluency in MLOps best practices, including CI/CD, model versioning, observability, and automated retraining pipelines.
- Strong leadership skills with experience mentoring engineers and influencing technical direction across teams.
- Excellent collaboration and communication skills, with the ability to align ML strategy with product and business needs.
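As a rough illustration of the streaming ingestion side of the role, the sketch below consumes events from Kafka with the kafka-python package and accumulates a toy running feature; the topic name, broker address, and message format are assumptions, not details from the posting:

```python
import json

from kafka import KafkaConsumer

# Placeholder topic and broker; real values would come from configuration.
consumer = KafkaConsumer(
    "network-events",
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Toy streaming feature: running count of events per cell site.
event_counts = {}
for message in consumer:
    event = message.value
    site = event.get("site_id", "unknown")
    event_counts[site] = event_counts.get(site, 0) + 1
    print(f"site={site} events_seen={event_counts[site]}")
```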

Posted 1 week ago

Apply

7.0 years

0 Lacs

Greater Chennai Area

Remote

Linkedin logo

Do you want to make a global impact on patient health? Join Pfizer Digital’s Artificial Intelligence, Data, and Advanced Analytics organization (AIDA) to leverage cutting-edge technology for critical business decisions and enhance customer experiences for colleagues, patients, and physicians. Our team is at the forefront of Pfizer’s transformation into a digitally driven organization, using data science and AI to change patients’ lives. The Data Science Industrialization team leads engineering efforts to advance AI and data science applications from POCs and prototypes to full production. As a Senior Manager, AI and Analytics Data Engineer, you will be part of a global team responsible for designing, developing, and implementing robust data layers that support data scientists and key advanced analytics/AI/ML business solutions. You will partner with cross-functional data scientists and Digital leaders to ensure efficient and reliable data flow across the organization. You will lead development of data solutions to support our data science community and drive data-centric decision-making. Join our diverse team in making an impact on patient health through the application of cutting-edge technology and collaboration. Role Responsibilities Lead development of data engineering processes to support data scientists and analytics/AI solutions, ensuring data quality, reliability, and efficiency As a data engineering tech lead, enforce best practices, standards, and documentation to ensure consistency and scalability, and facilitate related trainings Provide strategic and technical input on the AI ecosystem including platform evolution, vendor scan, and new capability development Act as a subject matter expert for data engineering on cross functional teams in bespoke organizational initiatives by providing thought leadership and execution support for data engineering needs Train and guide junior developers on concepts such as data modeling, database architecture, data pipeline management, data ops and automation, tools, and best practices Stay updated with the latest advancements in data engineering technologies and tools and evaluate their applicability for improving our data engineering capabilities Direct data engineering research to advance design and development capabilities Collaborate with stakeholders to understand data requirements and address them with data solutions Partner with the AIDA Data and Platforms teams to enforce best practices for data engineering and data solutions Demonstrate a proactive approach to identifying and resolving potential system issues. Communicate the value of reusable data components to end-user functions (e.g., Commercial, Research and Development, and Global Supply) and promote innovative, scalable data engineering approaches to accelerate data science and AI work Basic Qualifications Bachelor's degree in computer science, information technology, software engineering, or a related field (Data Science, Computer Engineering, Computer Science, Information Systems, Engineering, or a related discipline). 7+ years of hands-on experience in working with SQL, Python, object-oriented scripting languages (e.g. Java, C++, etc..) in building data pipelines and processes. Proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views. 
Recognized by peers as an expert in data engineering with deep expertise in data modeling, data governance, and data pipeline management principles In-depth knowledge of modern data engineering frameworks and tools such as Snowflake, Redshift, Spark, Airflow, Hadoop, Kafka, and related technologies Experience working in a cloud-based analytics ecosystem (AWS, Snowflake, etc.) Familiarity with machine learning and AI technologies and their integration with data engineering pipelines Demonstrated experience interfacing with internal and external teams to develop innovative data solutions Strong understanding of Software Development Life Cycle (SDLC) and data science development lifecycle (CRISP) Highly self-motivated to deliver both independently and with strong team collaboration Ability to creatively take on new challenges and work outside comfort zone. Strong English communication skills (written & verbal) Preferred Qualifications Advanced degree in Data Science, Computer Engineering, Computer Science, Information Systems, or a related discipline (preferred, but not required) Experience in software/product engineering Experience with data science enabling technology, such as Dataiku Data Science Studio, AWS SageMaker or other data science platforms Familiarity with containerization technologies like Docker and orchestration platforms like Kubernetes. Experience working effectively in a distributed remote team environment Hands on experience working in Agile teams, processes, and practices Expertise in cloud platforms such as AWS, Azure or GCP. Proficiency in using version control systems like Git. Pharma & Life Science commercial functional knowledge Pharma & Life Science commercial data literacy Ability to work non-traditional work hours interacting with global teams spanning across the different regions (e.g.: North America, Europe, Asia) Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates. Information & Business Tech Show more Show less

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Why Patients Need You The Revenue Management (RM) Digital Solutions team manages technology strategy and delivery for the Pfizer’s Commercial Business Units, the Global Access and Value (GAV) organization and Pfizer Commercial Finance teams. The team plays a critical role in ensuring Patients can easily access Pfizer’s breakthrough medicines through contracting opportunities with Payer and Providers. What You Will Achieve This role will be responsible to configure, maintain and enhance strategic technology platforms and related reporting dashboards that support market access pre-deal analytics, post-deal analytics and broader performance reporting. These web-based tools are used by Market Access Strategy, Pricing and Analytics (MASPA) teams that are responsible for payer and provider contract analytics. The candidate will be accountable for finding the most effective ways for technology to support Pfizer’s business objectives in these domains and optimize the return on all technology investments. The ideal candidate would be someone with strong technical skill on Data management tools, Data Architecture, data profiling/quality tools, AWS services, Tableau, Snowflake, Informatica Cloud Services, Dataiku AI/ML tools. This role will work closely with functional and technical leads and become a subject matter expert with solutions to identify, develop, and deploy processes and reporting. The role will lead Deal Analytics data integration, create analytic solutions, and utilize data prep and visualization platforms. How You Will Achieve It Evaluates and implements solutions to meet business requirements, ensuring consistent usage and adherence to data management best practices. Collaborates with product owners to prioritize features and manage technical requirements based on business needs, new technologies, and known issues. Develops application design and documentation for leadership teams. Assists in defining the vision for the shared data model, including sourcing, transformation, and loading approaches. Manages daily operations of the team, ensuring on-time delivery of milestones. Accountable for end-to-end delivery of program outcomes within budget, aligning with relevant business units. Fosters collaboration with internal and external stakeholders, including software vendors and data providers. Works independently with minimal supervision, capable of making recommendations. You will have the opportunity to: Demonstrate a solid ability to tell a story with simplistic views of complex datasets. Deliver data reliability, efficiency, and best-in-class data governance, ensuring security and compliance. Will be an integral part of developing best-in-class solution for the GAV organization. Build dashboard and reporting proofs of concept as needed; develop reporting and analysis templates and tools. Work in close collaboration with business teams throughout MASPA (Market Access Strategy Pricing and Analytics) to determine tool functionality/configuration and data requirements to ensure that the analytic capability is supporting the most current business needs. Partner with the Digital Client Partners to align on priorities, processes and governance and ensure experimentation to activate innovation and pipeline value. Qualifications Must-Have Bachelor’s degree in computer science, Software Engineering, or engineering related area. 5+ years of relevant experience emphasizing data modeling, development, or systems engineering. 1+ years with a data visualization tool (e.g. Tableau, Power BI). 
2+ Years of experience in any number of the following tools, languages, and databases: (e.g. MySQL, SQL, Aurora DB, Redshift, Snowflake). Demonstrated capabilities in integrating and analyzing heterogeneous datasets; Ability to identify trends, identify outliers and find patterns. Demonstrated expertise and capabilities in matrixed, cross-functional teams and influencing without authority. Proven experience and demonstrated skills with AWS services, Tableau, Airflow,Python and Dataiku. Must be experienced in DevSecOps tools JIRA, GitHub. Experience in Database design tools. Deep knowledge of Agile methodologies and SDLC processes. Excellent written, interpersonal, and oral communication skills, communicate and liaise broadly across functions and the global organization. Strong analytical, critical thinking, and troubleshooting skills. Ambition to learn and utilize emerging technologies while working in a stimulating team environment. Nice-to-Have Advanced degree in Computer Engineering, Computer Science, Information Systems or related discipline. Knowledge with GenAI and LLMs framework (OpenAI, AWS). US Market Access functional knowledge and data literacy. Statistical analysis to understand and improve possible limitations in models. Experience in AI/ML frameworks. Pytest and CI/CD tools. Experience in UI/UX design. Experience in solution architecture & product engineering. Organizational Relationships Global Access and Value Organization Market Access Strategy, Pricing and Analytics Channel Management, Contract Operations Trade Operations Government Pricing Managed Markets Finance Vaccines Business Unit Leadership Commercial Leaders for newly launched brands AIDA Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates. Information & Business Tech Show more Show less

Posted 1 week ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

This role will be part of a team that develops software to process data captured every day from over a quarter of a million computers and mobile devices worldwide, measuring panelists' activities as they surf the Internet via browsers or use mobile apps downloaded from Apple's and Google's stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gather many biometric data points that the backend system can use to identify who is using the device and to detect fraudulent behavior. As an Engineering Manager, you will lead a cross-functional team of developers and DevOps engineers, using a Scrum/Agile team management approach. You will provide technical expertise and guidance to team members and help develop designs for complex applications, and you should be able to plan tasks and project phases as well as review, comment on, and approve the analysis, proposed design, and test strategy produced by members of the team. Responsibilities: Oversee the development of scalable, reliable, and cost-effective software solutions with an emphasis on quality and best-practice coding standards. Help drive the business unit's financials and ensure budgets and schedules meet corporate requirements. Participate in corporate development of methods, techniques, and evaluation criteria for projects, programs, and people. Hold overall control of planning, staffing, budgeting, and managing expense priorities for the team you lead. Provide training and coaching, and share technical knowledge with less experienced staff. People manager duties include annual reviews, career guidance, and compensation planning. Rapidly identify technical issues as they emerge and assess their impact on the business. Provide day-to-day work direction to a large team of developers. Collaborate effectively with Data Science to understand, translate, and integrate data methodologies into the product. Collaborate with product owners to translate complex business requirements into technical solutions, providing leadership in the design and architecture processes. Stay informed about the latest technology and methodology by participating in industry forums, maintaining an active peer network, and engaging actively with customers. Cultivate a team environment focused on continuous learning, where innovative technologies are developed and refined through collaborative effort. Requirements: Bachelor's degree in computer science, engineering, or a relevant field. 8+ years of experience in information technology solutions development and 2+ years of managerial experience. Proven experience in leading and managing software development teams. Development background in Java and AWS cloud-based environments for high-volume data processing. Experience with data warehouses, ETL, and/or data lakes. Experience with databases such as Postgres, DynamoDB, or Redshift. Good understanding of CI/CD principles and tools; GitLab is a plus. Must have the ability to provide solutions utilizing best practices for resilience, scalability, cloud optimization, and security. Excellent project management skills. Other desirable skills: Knowledge of networking principles and security best practices. AWS Certification is a plus. Experience with MS Project or Smartsheet. Experience with Airflow, Python, Lambda, Prometheus, Grafana, and OpsGenie is a bonus. Exposure to the Google Cloud Platform (GCP) is useful.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Title: Project Manager – Data Engineering Location: Pune (with travel to the Middle East as required) Experience: 7–10 Years Employment Type: Full-time --- About the Role We are looking for an experienced and hands-on Project Manager in Data Engineering who can lead the end-to-end delivery of data pipeline projects across Azure and AWS environments. The ideal candidate will bring strong technical depth in data engineering along with client-facing and project execution capabilities. --- Key Responsibilities · Lead and manage multiple data engineering projects across Azure and AWS ecosystems. · Gather client requirements and translate them into technical specifications and delivery roadmaps. · Design, oversee, and ensure successful implementation of scalable data pipelines, ETL processes, and data integration workflows. · Collaborate with internal data engineers, BI developers, and client stakeholders to ensure smooth project execution. · Ensure adherence to timelines, quality standards, and cost constraints. · Identify project risks, dependencies, and proactively resolve issues. · Own the client relationship from initiation to delivery – conduct regular check-ins, demos, and retrospectives. · Stay updated on emerging tools and best practices in the data engineering space and recommend their adoption. · Lead sprint planning, resource allocation, and tracking using Agile or hybrid methodologies. --- Required Skills & Experience · 7–10 years of total experience in data engineering and project delivery. · Strong experience in Azure Data Services – Azure Data Factory, Synapse, Databricks, Data Lake, etc. · Working knowledge of AWS data tools such as Glue, Redshift, S3, and Lambda functions. · Good understanding of data modeling, data warehousing, and pipeline orchestration. · Experience with tools such as Talend, Airflow, DBT, or other orchestration platforms is a plus. · Proven track record of managing enterprise data projects from requirement gathering to deployment. · Client-facing experience with strong communication and stakeholder management skills. · Strong understanding of project management methodologies and tools (e.g., JIRA, Trello, MS Project). --- Preferred Qualifications · Bachelor's or Master’s degree in Computer Science, Information Systems, or related field. · PMP or PRINCE2 certification is a plus. · Experience working with Middle East clients is an added advantage. · Exposure to modern data platforms, real-time data processing, or big data tools is a plus. --- Additional Details · This is a Pune-based role with expected travel to Middle East locations based on project needs. · Should be open to handling cross-functional teams and multiple projects simultaneously. Show more Show less

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

What You’ll Do Handle data: pull, clean, and shape structured & unstructured data. Manage pipelines: Airflow / Step Functions / ADF… your call. Deploy models: build, tune, and push to production on SageMaker, Azure ML, or Vertex AI. Scale: Spark / Databricks for the heavy lifting. Automate processes: Docker, Kubernetes, CI/CD, MLFlow, Seldon, Kubeflow. Collaborate effectively: work with engineers, architects, and business professionals to solve real problems promptly. What You Bring 3+ years hands-on MLOps (4-5 yrs total software experience). Proven experience with one hyperscaler (AWS, Azure, or GCP). Confidence with Databricks / Spark, Python, SQL, TensorFlow / PyTorch / Scikit-learn. Extensive experience handling and troubleshooting Kubernetes and proficiency in Dockerfile management. Prototyping with open-source tools, selecting the appropriate solution, and ensuring scalability. Analytical thinker, team player, with a proactive attitude. Nice-to-Haves Sagemaker, Azure ML, or Vertex AI in production. Dedication to clean code, thorough documentation, and precise pull requests. Skills: mlflow,ml ops,scikit-learn,airflow,mlops,sql,pytorch,adf,step functions,kubernetes,gcp,kubeflow,python,databricks,tensorflow,aws,azure,docker,seldon,spark Show more Show less

Posted 1 week ago

Apply

6.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Description The Risk division is responsible for credit, market and operational risk, model risk, independent liquidity risk, and insurance throughout the firm. RISK BUSINESS The Risk Business identifies, monitors, evaluates, and manages the firm’s financial and non-financial risks in support of the firm’s Risk Appetite Statement and the firm’s strategic plan. Operating in a fast-paced and dynamic environment and utilizing best-in-class risk tools and frameworks, Risk teams are analytically curious, have an aptitude to challenge, and hold an unwavering commitment to excellence. Overview To ensure uncompromising accuracy and timeliness in the delivery of risk metrics, our platform is continuously growing and evolving. Risk Engineering combines the principles of Computer Science, Mathematics and Finance to produce large-scale, computationally intensive calculations of the risk Goldman Sachs faces with each transaction we engage in. As an Engineer in the Risk Engineering organization, you will have the opportunity to impact one or more aspects of risk management. You will work with a team of talented engineers to drive the build and adoption of common tools, platforms, and applications. The team builds solutions that are offered as a software product or as a hosted service. We are a dynamic team of talented developers and architects who partner with business areas and other technology teams to deliver high-profile projects using a raft of technologies that are fit for purpose (Java, cloud computing, HDFS, Spark, S3, ReactJS, and Sybase IQ, among many others). The interesting problems we engineer solutions for include acquiring high-quality data, storing it, performing risk computations in a limited amount of time using distributed computing, and making data available to enable actionable risk insights through analytical and response user interfaces. What We Look For Senior Developer in large projects across a global team of developers and risk managers. Performance-tune applications to improve memory and CPU utilization. Perform statistical analyses to identify trends and exceptions related to Market Risk metrics. Build internal and external reporting for the output of risk metric calculations using data extraction tools, such as SQL, and data visualization tools, such as Tableau. Utilize web development technologies to facilitate application development for the front-end UI used for risk management actions. Develop software for calculations using databases like Snowflake, Sybase IQ and distributed HDFS systems. Interact with business users to resolve issues with applications. Design and support batch processes using scheduling infrastructure for calculating and distributing data to other systems. Oversee junior technical team members in all aspects of the Software Development Life Cycle (SDLC), including design, code review and production migrations. Skills And Experience Bachelor’s degree in Computer Science, Mathematics, Electrical Engineering or a related technical discipline 6-9 years’ experience working in a risk technology team at another bank or financial institution. Experience in market risk technology is a plus. Experience with one or more major relational / object databases. 
Experience in software development, including a clear understanding of data structures, algorithms, software design and core programming concepts Comfortable multi-tasking, managing multiple stakeholders and working as part of a team Comfortable with working with multiple languages Technologies: Scala, Java, Python, Spark, Linux and shell scripting, TDD (JUnit), build tools (Maven/Gradle/Ant) Experience in working with process scheduling platforms like Apache Airflow. Should be ready to work in GS proprietary technology like Slang/SECDB An understanding of compute resources and the ability to interpret performance metrics (e.g., CPU, memory, threads, file handles). Knowledge and experience in distributed computing – parallel computation on a single machine like DASK, Distributed processing on Public Cloud. Knowledge of SDLC and experience in working through entire life cycle of the project from start to end About Goldman Sachs At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We’re committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html © The Goldman Sachs Group, Inc., 2023. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity Show more Show less

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Description The candidate must possess knowledge relevant to the functional area, and act as a subject matter expert in providing advice in the area of expertise, and also focus on continuous improvement for maximum efficiency. It is vital to focus on the high standard of delivery excellence, provide top-notch service quality and develop successful long-term business partnerships with internal/external customers by identifying and fulfilling customer needs. He/she should be able to break down complex problems into logical and manageable parts in a systematic way, and generate and compare multiple options, and set priorities to resolve problems. The ideal candidate must be proactive, and go beyond expectations to achieve job results and create new opportunities. He/she must positively influence the team, motivate high performance, promote a friendly climate, give constructive feedback, provide development opportunities, and manage career aspirations of direct reports. Communication skills are key here, to explain organizational objectives, assignments, and the big picture to the team, and to articulate team vision and clear objectives. Process Manager Roles And Responsibilities Designing and implementing scalable, reliable, and maintainable data architectures on AWS. Developing data pipelines to extract, transform, and load (ETL) data from various sources into AWS environments. Creating and optimizing data models and schemas for performance and scalability using AWS services like Redshift, Glue, Athena, etc. Integrating AWS data solutions with existing systems and third-party services. Monitoring and optimizing the performance of AWS data solutions, ensuring efficient query execution and data retrieval. Implementing data security and encryption best practices in AWS environments. Documenting data engineering processes, maintaining data pipeline infrastructure, and providing support as needed. Working closely with cross-functional teams including data scientists, analysts, and stakeholders to understand data requirements and deliver solutions. Technical And Functional Skills Typically, a bachelors degree in Computer Science, Engineering, or a related field is required, along with 5+ years of experience in data engineering and AWS cloud environments. Strong experience with AWS data services such as S3, EC2, Redshift, Glue, Athena, EMR, etc Proficiency in programming languages commonly used in data engineering such as Python, SQL, Scala, or Java. Experience in designing, implementing, and optimizing data warehouse solutions on Snowflake/ Amazon Redshift. Familiarity with ETL tools and frameworks (e.g., Apache Airflow, AWS Glue) for building and managing data pipelines. Knowledge of database management systems (e.g., PostgreSQL, MySQL, Amazon Redshift) and data lake concepts. Understanding of big data technologies such as Hadoop, Spark, Kafka, etc., and their integration with AWS. Proficiency in version control tools like Git for managing code and infrastructure as code (e.g., CloudFormation, Terraform). Ability to analyze complex technical problems and propose effective solutions. Strong verbal and written communication skills for documenting processes and collaborating with team members and stakeholders. Show more Show less

Posted 1 week ago

Apply

Exploring Airflow Jobs in India

The airflow job market in India is rapidly growing as more companies are adopting data pipelines and workflow automation. Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with expertise in airflow can find lucrative opportunities in various industries such as technology, e-commerce, finance, and more.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Gurgaon

Average Salary Range

The average salary range for airflow professionals in India varies based on experience levels:

  • Entry-level: INR 6-8 lakhs per annum
  • Mid-level: INR 10-15 lakhs per annum
  • Experienced: INR 18-25 lakhs per annum

Career Path

In the field of airflow, a typical career path may progress as follows:

  • Junior Airflow Developer
  • Airflow Developer
  • Senior Airflow Developer
  • Airflow Tech Lead

Related Skills

In addition to airflow expertise, professionals in this field are often expected to have or develop skills in:

  • Python programming
  • ETL concepts
  • Database management (SQL)
  • Cloud platforms (AWS, GCP)
  • Data warehousing

Interview Questions

  • What is Apache Airflow? (basic)
  • Explain the key components of Airflow. (basic)
  • How do you schedule a DAG in Airflow? (basic)
  • What are the different operators in Airflow? (medium)
  • How do you monitor and troubleshoot DAGs in Airflow? (medium)
  • What is the difference between Airflow and other workflow management tools? (medium)
  • Explain the concept of XCom in Airflow. (medium)
  • How do you handle dependencies between tasks in Airflow? (medium)
  • What are the different types of sensors in Airflow? (medium)
  • What is a Celery Executor in Airflow? (advanced)
  • How do you scale Airflow for a high volume of tasks? (advanced)
  • Explain the concept of SubDAGs in Airflow. (advanced)
  • How do you handle task failures in Airflow? (advanced)
  • What is the purpose of a TriggerDagRun operator in Airflow? (advanced)
  • How do you secure Airflow connections and variables? (advanced)
  • Explain how to create a custom Airflow operator. (advanced)
  • How do you optimize the performance of Airflow DAGs? (advanced)
  • What are the best practices for version controlling Airflow DAGs? (advanced)
  • Describe a complex data pipeline you have built using Airflow. (advanced)
  • How do you handle backfilling in Airflow? (advanced)
  • Explain the concept of DAG serialization in Airflow. (advanced)
  • What are some common pitfalls to avoid when working with Airflow? (advanced)
  • How do you integrate Airflow with external systems or tools? (advanced)
  • Describe a challenging problem you faced while working with Airflow and how you resolved it. (advanced)
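
Several of the questions above (scheduling a DAG, choosing operators, wiring task dependencies, and passing data between tasks with XCom) can be rehearsed against a small, self-contained DAG file. The sketch below is illustrative only: it assumes Airflow 2.x (on releases before 2.4, use schedule_interval instead of schedule), and the DAG and task names are hypothetical.

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract(**context):
        # Pretend to pull a batch of records; returning a value pushes it
        # to XCom under the key "return_value".
        records = ["a", "b", "c"]
        return len(records)


    def load(**context):
        # Pull the upstream task's XCom value and "load" it downstream.
        count = context["ti"].xcom_pull(task_ids="extract")
        print(f"Loading {count} records")


    with DAG(
        dag_id="example_daily_etl",        # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",                 # cron strings such as "0 6 * * *" also work
        catchup=False,                     # skip backfilling historical runs
        default_args={"retries": 1, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)

        # Dependency: extract must succeed before load runs.
        extract_task >> load_task

Being able to walk through a file like this, explaining what the scheduler does with start_date and catchup, how the >> operator builds the dependency graph, and where the XCom value travels, covers a good share of the basic and medium questions listed above.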

Closing Remark

As you explore job opportunities in the airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay updated with the latest trends in airflow, and demonstrate your problem-solving abilities to stand out in the competitive job market. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
