
6350 Airflow Jobs - Page 50

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.
Job Description
Ever wonder how Nielsen figured out that 127.3 million people tuned in to watch Super Bowl LVIII? That's what we do here. Come and join our team of professionals, and together we will build a cutting-edge television measurement system. On this specialized team, we will use a combination of technology, curiosity, and culture to empower our teams and people to be successful and focused on delivering highly reliable and accurate systems. Our focus will be to develop and deliver content recognition engines used in measuring streaming video, commercials, and broadcast TV.
Responsibilities
The duties of this position include the development and refinement of a high-resolution content identification system used in the identification of television programs and commercials. You will work on a scrum team with other skilled developers, sharing best practices and exploring new technologies and algorithms that will advance the excellence of our measurement. You will build and maintain the microservices that power the content identification services used in Television Audience Measurement. These microservices run in AWS and consume and process data using advanced algorithms tuned for efficiency. You will also be responsible for the efficient use of AWS resources on our projects. Critical thinking and innovation are highly valued on this team; everyone is expected to think outside the box, bring new ideas, and challenge what we do and how we do it.
Qualifications
Bachelor's degree in Computer Engineering or Computer Science
Skilled in GoLang and fluent in Python
Skilled in programming against AWS APIs for S3 and SQS
Experienced in Docker, Kubernetes, or ECS
Experienced in writing infrastructure as code: Pulumi and/or Terraform
Experienced in operating and writing Airflow DAGs
Fundamental skills in signal processing - FFT, Nyquist cutoff, etc.
Strong skills in hashing data and working with large hash tables
Past experience working efficiently with large data sets
Strong abstract reasoning and problem-solving skills
Demonstrated ability to perform root cause analysis
Able to deal with ambiguity and formulate the key questions needed to resolve it
Unix (required)
Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
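The AWS requirement above (programming against S3 and SQS from Python or Go) boils down to a familiar pattern: poll a queue, fetch the referenced object, process it. Below is a minimal Python sketch using boto3; the queue URL, bucket fields, and fingerprinting step are hypothetical placeholders, not details from the listing.

```python
import json
import boto3

# Hypothetical resources -- the real queue/bucket names are not part of the listing.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/media-segments"

sqs = boto3.client("sqs")
s3 = boto3.client("s3")

def process_segment(payload: bytes) -> None:
    # Placeholder: real content-identification/hashing logic would live here.
    print(f"received {len(payload)} bytes")

def poll_once() -> None:
    """Poll SQS for one batch of messages and process the S3 objects they reference."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling
    )
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])  # assumed shape: {"bucket": ..., "key": ...}
        obj = s3.get_object(Bucket=body["bucket"], Key=body["key"])
        process_segment(obj["Body"].read())
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    poll_once()
```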

Posted 3 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Soul AI Pods
Deccan AI is a pioneering company founded by IIT Bombay and IIM Ahmedabad alumni, with a strong founding team from top-tier institutes like IITs, NITs, and BITS. We’re hiring for our client-servicing arm, Soul AI. Soul AI has a talent network, known as Soul AI Pods, under which highly skilled and vetted tech talent gets the opportunity to work with top-tier global tech companies. Our client list includes tech giants like Google & Snowflake. Read more here.
Responsibilities
Design, build, and maintain ETL/ELT pipelines using tools like Airflow, DBT, or Spark
Develop and optimize data lakes, data warehouses, and streaming pipelines
Ensure data quality, reliability, and lineage across sources and pipelines
Integrate structured and unstructured data from internal and third-party APIs
Collaborate with ML teams to deliver production-ready feature pipelines, labeling data, and dataset versioning
Implement data governance, security, and access control policies
Required Skills
Strong SQL skills, including analytical queries, CTEs, window functions, and query optimization
Proficient in Python for data manipulation and scripting using libraries like Pandas and NumPy
Experience with ETL orchestration tools such as Airflow, Prefect, or Luigi
Hands-on with batch and streaming data processing using Spark, Kafka, or Flink
Familiarity with data lakes and warehouses (S3, BigQuery, Redshift, Snowflake) and schema design
Bonus: experience with DBT, data validation, MLOps integration, or compliance-aware data workflows
Application & Other Details
To apply, fill in the Soul AI Pods Interest Form
You will be invited to the selection process → R1: Test, R2: AI Interview, R3: 1:1 Interview
We are hiring for full-time or long-term contract (40 hrs/week) hybrid roles
We are hiring across different seniority levels
You will work on a key client project (top-tier tech consulting firm)
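As a small illustration of the "analytical queries, CTEs, window functions" skill line, here is a self-contained sketch using Python's built-in sqlite3 module (which supports window functions in SQLite 3.25+); the table and columns are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('a', '2024-01-01', 100), ('a', '2024-01-05', 40),
        ('b', '2024-01-02', 70),  ('b', '2024-01-09', 90);
""")

# CTE + window function: running total of spend per customer, ordered by date.
query = """
WITH ranked AS (
    SELECT customer,
           order_date,
           amount,
           SUM(amount) OVER (
               PARTITION BY customer ORDER BY order_date
           ) AS running_total
    FROM orders
)
SELECT * FROM ranked ORDER BY customer, order_date;
"""
for row in conn.execute(query):
    print(row)
```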

Posted 3 weeks ago

Apply

6.0 - 10.0 years

4 - 8 Lacs

Hyderabad

Work from Office

We are looking for a skilled Senior Oracle Data Engineer to join our team at Apps Associates (I) Pvt. Ltd, with 6-10 years of experience in the IT Services & Consulting industry. Roles and Responsibility Design, develop, and implement data engineering solutions using Oracle technologies. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale data pipelines and architectures. Ensure data quality, integrity, and security through data validation and testing procedures. Optimize data processing workflows for improved performance and efficiency. Troubleshoot and resolve complex technical issues related to data engineering projects. Job Requirements Strong knowledge of Oracle Data Engineering concepts and technologies. Experience with data modeling, design, and development. Proficiency in programming languages such as Java or Python. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment. Strong communication and interpersonal skills.
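For a concrete flavour of the Oracle-backed pipeline work this role describes, here is a minimal sketch using the python-oracledb driver; the connection details and staging table are placeholders, not specifics from the posting.

```python
import oracledb  # pip install oracledb

# Placeholder credentials/DSN -- supplied by the target environment in practice.
conn = oracledb.connect(user="etl_user", password="secret", dsn="dbhost/ORCLPDB1")

rows = [
    (1, "2024-01-01", 120.5),
    (2, "2024-01-02", 98.0),
]

cur = conn.cursor()
# Batch insert into a hypothetical staging table, then validate the load.
cur.executemany(
    "INSERT INTO stg_sales (id, sale_date, amount) "
    "VALUES (:1, TO_DATE(:2, 'YYYY-MM-DD'), :3)",
    rows,
)
cur.execute("SELECT COUNT(*) FROM stg_sales")
print("staged rows:", cur.fetchone()[0])

conn.commit()
cur.close()
conn.close()
```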

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Gurgaon

On-site

Manager EXL/M/1430835 ServicesGurgaon Posted On 23 Jul 2025 End Date 06 Sep 2025 Required Experience 5 - 10 Years Basic Section Number Of Positions 1 Band C1 Band Name Manager Cost Code D012515 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 1800000.0000 - 3000000.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group Analytics Sub Group Retail Media & Hi-Tech Organization Services LOB Retail Media & Hi-Tech SBU Services Country India City Gurgaon Center EXL - Gurgaon Center 38 Skills Skill BIG DATA ETL JAVA SPARK Minimum Qualification ANY GRADUATE Certification No data available Job Description Job Title: Senior Data Engineer – Big Data, ETL & Java Experience Level: 5+ Years Employment Type: Full-time About the Role EXL is seeking a Senior Software Engineer with a strong foundation in Java, along with expertise in Big Data technologies and ETL development. In this role, you'll design and implement scalable, high-performance data and backend systems for clients in retail, media, and other data-driven industries. You’ll work across cloud platforms such as AWS and GCP to build end-to-end data and application pipelines. Key Responsibilities Design, develop, and maintain scalable data pipelines and ETL workflows using Apache Spark, Apache Airflow, and cloud platforms (AWS/GCP). Build and support Java-based backend components, services, or APIs as part of end-to-end data solutions. Work with large-scale datasets to support transformation, integration, and real-time analytics. Optimize Spark, SQL, and Java processes for performance, scalability, and reliability. Collaborate with cross-functional teams to understand business requirements and deliver robust solutions. Follow engineering best practices in coding, testing, version control, and deployment. Required Qualifications 5+ years of hands-on experience in software or data engineering. Proven experience in developing ETL pipelines using Java and Spark. Strong programming experience in Java (preferably with frameworks such as Spring or Spring Boot). Experience in Big Data tools including Apache Spark, Apache Airflow, and cloud services such as AWS EMR, Glue, S3, Lambda or GCP BigQuery, Dataflow, Cloud Functions. Proficiency in SQL and experience with performance tuning for large datasets. Familiarity with data modeling, warehousing, and distributed systems. Experience working in Agile development environments. Strong problem-solving skills and attention to detail. Excellent communication skills Preferred Qualifications Experience building and integrating RESTful APIs or microservices using Java. Exposure to data platforms like Snowflake, Databricks, or Kafka. Background in retail, merchandising, or media domains is a plus. Familiarity with CI/CD pipelines, DevOps tools, and cloud-based development workflows. Workflow Workflow Type L&S-DA-Consulting
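The role centres on Java and Spark; as a language-neutral illustration of the ETL shape it describes (read, transform, aggregate, write), here is a short PySpark sketch. Paths and column names are invented for the example; per the posting, a production version would more likely be written in Java/Scala with Spring-based services around it.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("retail-sales-etl").getOrCreate()

# Hypothetical input: daily sales extracts landed as CSV (could equally be S3/GCS paths).
sales = spark.read.csv("/data/raw/sales/*.csv", header=True, inferSchema=True)

daily_revenue = (
    sales
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("store_id", "order_date")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# Write a partitioned, columnar output suitable for downstream analytics.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "/data/curated/daily_revenue"
)

spark.stop()
```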

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description A Data Engineer Extraordinaire will possess masterful proficiency in crafting scalable and efficient solutions for data processing and analysis. With expertise in database management, ETL processes, and data modelling, they design robust pipelines using cutting-edge technologies such as Apache Spark and Hadoop. Their proficiency extends to cloud platforms like AWS, Azure, or Google Cloud Platform, where they leverage scalable resources to build resilient data ecosystems. This exceptional individual possesses a deep understanding of business requirements, collaborating closely with stakeholders to ensure that data infrastructure aligns with organizational objectives. Through their technical acumen and innovative spirit, they pave the way for data-driven insights and empower organizations to thrive in the digital age Key Responsibilities Develop and maintain cutting-edge data pipeline architecture, ensuring optimal performance and scalability. Building seamless ETL pipeline for diverse sources leveraging advanced big data technologies Craft advanced analytics tools that leverage the robust data pipeline, delivering actionable insights to drive business decisions Prototype and iterate test solutions for identified functional and technical challenges, driving innovation and problem-solving Champion ETL best practices and standards, ensuring adherence to industry-leading methodologies Collaborate closely with stakeholders across Executive, Product, Data, and Design teams, addressing data-related technical challenges and supporting their infrastructure needs Thrive in a dynamic, cross-functional environment, working collaboratively to drive innovation and deliver impactful solutions Required Skills and Qualifications Proficient in SQL, Python, Spark, and data transformation techniques Experience with Cloud Platforms AWS, Azure, or Google Cloud (GCP) for deploying and managing data services Data Orchestration Proficient in using orchestration tools such as Apache Airflow, Azure Data Factory (ADF), or similar tools for managing complex workflows Data Platform Experience Hands-on experience with Databricks or similar platforms for data engineering workloads Familiarity with Data Lakes and Warehouses Experience working with data lakes, data warehouses (Redshift/SQL Server/Big Query), and big data processing architectures Version Control & CI/CD Proficient in Git, GitHub, or similar version control systems, and comfortable working with CI/CD pipelines Data Security Knowledge of data governance, encryption, and compliance practices within cloud environments Problem-solving Analytical thinking and problem-solving mindset, with a passion for optimizing data workflows Preferred Skills and Qualifications Bachelor's degree or equivalent degrees in computer science, Engineering, or a related field 3+ years of experience in data engineering or related roles Hands-on experience with distributed computing and parallel data processing Good to Have Streaming Tools Experience with Kafka, Event Hubs, Amazon SQS, or equivalent streaming technologies Experience in Containerization Familiarity with Docker and Kubernetes for deploying scalable data solutions Engage in peer review processes and present research findings at esteemed ML/AI conferences such as NIPS, ICML, AAAI and COLT Experiment with latest advancements in Data Engineering tools, platforms, and methodologies. 
Mentor peers and junior members and handle multiple projects at the same time Participate and speak at various external forums such as research conferences and technical summits Promote and support company policies, procedures, mission, values, and standards of ethics and integrity Certifications in AWS, Azure, or GCP are a plus Understanding of modern data architecture patterns, including the Lambda and Kappa architectures
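For the orchestration requirement above (Apache Airflow, ADF, or similar), a minimal Airflow 2.x DAG using the TaskFlow API shows the basic extract-transform-load wiring; the task bodies and schedule are placeholders, not anything specified by the posting.

```python
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["example"])
def simple_etl():
    @task
    def extract() -> list[dict]:
        # Placeholder: pull rows from an API or source database.
        return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Placeholder transformation step.
        return [{**r, "value_doubled": r["value"] * 2} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder: write to a warehouse table.
        print(f"loading {len(rows)} rows")

    load(transform(extract()))

simple_etl()
```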

Posted 3 weeks ago

Apply

5.0 years

7 - 8 Lacs

Hyderābād

On-site

Full-time Employee Status: Regular Role Type: Hybrid Department: Product Development Schedule: Full Time Company Description Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com. Job Description Design, develop, and maintain high-quality software solutions. Collaborate with cross-functional teams to define, design, and ship new features. Strong Programming knowledge includes design patterns and debugging Java or Scala Design and Implement Data Engineering Frameworks on HDFS, Spark and EMR Implement and manage Kafka Streaming and containerized microservices. Work with RDBMS (Aurora MySQL) and No-SQL (Cassandra) databases. Utilize AWS Cloud services such as S3, EFS, MSK, ECS, EMR, etc. Ensure the performance, quality, and responsiveness of applications. Troubleshoot and resolve software defects and issues. Write clean, maintainable, and efficient code. Participate in code reviews and contribute to team knowledge sharing. You will be reporting to a Senior Manager This role would require you to work from Hyderabad (Workplace) for Hybrid 2 days a week from Office Qualifications 5+ years experienced engineer with hands-on and strong coding skills, preferably with Scala and java. Experience with Data Engineering – BigData, EMR, Airflow, Spark, Athena. AWS Cloud experience – S3, EFS, MSK, ECS, EMR, etc. Experience with Kafka Streaming and containerized microservices. Knowledge and experience with RDBMS (Aurora MySQL) and No-SQL (Cassandra) databases. Additional Information Our uniqueness is that we truly celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what truly matters; DEI, work/life balance, development, authenticity, engagement, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's strong people first approach is award winning; Great Place To Work™ in 24 countries, FORTUNE Best Companies to work and Glassdoor Best Places to Work (globally 4.4 Stars) to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is a critical part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, color, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. Experian Careers - Creating a better tomorrow together Benefits Experian care for employee's work life balance, health, safety and wellbeing. 
In support of this endeavor, we offer best-in-class family well-being benefits, enhanced medical benefits and paid time off. #LI-Onsite Experian Careers - Creating a better tomorrow together
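For the Kafka streaming requirement in this role, a bare-bones consumer loop using the confluent-kafka client illustrates the pattern; the broker address, topic, and consumer group are placeholders.

```python
from confluent_kafka import Consumer  # pip install confluent-kafka

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker
    "group.id": "credit-events-consumer",    # placeholder consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["credit-events"])        # placeholder topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print("consumer error:", msg.error())
            continue
        # Placeholder processing: in practice this would feed Spark/EMR jobs or a microservice.
        print(msg.key(), msg.value())
finally:
    consumer.close()
```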

Posted 3 weeks ago

Apply

6.0 years

7 - 8 Lacs

Hyderābād

On-site

Full-time | Employee Status: Regular | Role Type: Hybrid | Department: Analytics | Schedule: Full Time
Company Description
Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.
Job Description
The Senior Data Engineer is responsible for designing, developing, and supporting ETL data pipeline solutions, primarily in an AWS environment.
Design, develop, and maintain scalable ETL processes to deliver meaningful insights from large and complicated data sets.
Work as part of a team to build out and support the data warehouse; implement solutions using PySpark to process structured and unstructured data.
Play a key role in building out a semantic layer through development of ETLs and virtualized views.
Collaborate with engineering teams to discover and leverage new data being introduced into the environment.
Support existing ETL processes written in SQL or leveraging third-party APIs with Python; troubleshoot and resolve production issues.
Apply strong SQL and data skills to understand and troubleshoot existing complex SQL.
Use hands-on experience with Apache Airflow or equivalent tools (AWS MWAA) for orchestration of data pipelines.
Create and maintain report specifications and process documentation as part of the required data deliverables.
Serve as liaison with business and technical teams to achieve project objectives, delivering cross-functional reporting solutions.
Troubleshoot and resolve data, system, and performance issues.
Communicate with business partners, other technical teams, and management to collect requirements, articulate data deliverables, and provide technical designs.
Qualifications
Graduation from BE/B.Tech
6 to 9 years of experience in data engineering development
5 years of experience in Python scripting
8 years of experience in SQL, 5+ years in data warehousing, 5 years in Agile, and 3 years with cloud
3 years of experience with the AWS ecosystem (Redshift, EMR, S3, MWAA)
5 years of experience in Agile development methodology
You will work with the team to create solutions
Proficiency in CI/CD tools (Jenkins, GitLab, etc.)
Additional Information
Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters; DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning; World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024 to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer.
Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. #LI-Onsite Benefits Experian care for employee's work life balance, health, safety and wellbeing. 1) In support of this endeavor, we offer the best family well-being benefits, 2) Enhanced medical benefits and paid time off. Experian Careers - Creating a better tomorrow together

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderābād

On-site

Job Description Associate Manager, Scientific Data Cloud Engineering The Opportunity Based in Hyderabad, join a global healthcare biopharma company and be part of a 130- year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be a part of a team with passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats. Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company’s IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centres. Role Overview Our Engineering team builds core components used by our Research Labs data analytics, visualization, and management workflows. The analysis tools and pipelines built for data processing by our team in partnership with our scientists aim to accelerate research and the discovery of new therapies for our patients. We collect, annotate, analyze petabytes of scientific data (multi-omics, chemistry, imaging, safety) used in biomarker research, drug safety/efficacy, drug target discovery, and compendium diagnostic development. We help our scientists to process, analyze scientific data at scale by developing highly parallelized analytical workflows ran on HPC infrastructure (on-prem & cloud); to manage, explore and visualize various scientific data modalities by developing bespoke data models, bioinformatics ETL processes, data retrieval and visualization services using distributed micro-service architecture, FAIR data principles, SPA type dashboards, industry specific regulatory compliant data integrity, auditing, and security access controls. We are a creative and disciplined software engineering team, using agile practices and established technology stacks to design and develop large-scale data analytics, visualization, and management software solutions for local and on-cloud hosted HPC datacenters, as well as to integrate 3rd party analytical platforms with internal data workflows to address pressing engineering and data science challenges of life-science. We are looking for Software Engineers (SE) who can break down and solve complex problems with a strong motivation to get things done with a boots-on-the-ground, pragmatic mindset! Our engineers own their products end to end and influence the way how our products and technology are deployed to facilitate most aspects of drug discovery impacting hundreds of thousands of patients around the world. 
We are looking for engineers who can creatively handle complex dependencies, ambiguous requirements, and competing business priorities while producing fit-for-purpose, optimal solutions. We hope you are passionate about collaborating across the interface between hard-core software development and research and discovery data analysis.
What will you do in this role
Design and implement engineering tools, applications, and solutions that facilitate research processes and scientific discovery in several areas of our drug discovery process.
Help drive the design and architecture of adopted engineering solutions with a detail-oriented mindset.
Promote and help with the adoption of development, design, architecture, and DevOps best practices, with a particular focus on an agile delivery mindset.
Lead and mentor a smaller team of developers (squads) to ensure timely, quality delivery of multiple product iterations.
Drive product discovery and requirements clarification for ambiguous and/or undefined problems framed with uncertainty.
Manage technical and business dependencies and bottlenecks; balance technical constraints with business requirements; and deliver maximum business impact with a solid customer experience.
Help stakeholders with go/no-go decisions on software and infrastructure by assessing gaps in existing software solutions (internal/external) and by vetting technologies, platforms, and vendor products.
Bring strong collaboration and organization skills in cross-functional teams; communicate effectively with technical and non-technical audiences; work closely with scientists, peers, and business leaders in different geographical locations to define and deliver complex engineering features.
What should you have
Education: BS or MS in Computer Science/Bioinformatics
Basic Qualifications (MUST): Proficient (2+ years of hands-on experience) with
at least one language: Java (preferred), Python, or C#
building CI/CD workflows with Jenkins or equivalent
using IaC frameworks (CloudFormation, Ansible, Terraform) to build microservice-architecture solutions with a focus on scientific data analysis and management
integrating AWS services (EC2/RDS/S3/Batch/KMS/ECS, etc.) into production workflows
Should have (proven hands-on experience, at least 3 years) with
scripting languages: Python, Bash
building production workflows using Java/Python
the Linux OS command line
API-driven (REST, GraphQL, etc.), modular development of production workflows and integration with 3rd-party vendor platforms
relational data models and ETL processing pipelines using PostgreSQL (preferred), Oracle, SQL Server, or MySQL
pipeline/workflow building, execution, maintenance, and debugging: Airflow, Nextflow, AWS Batch
Soft skills (MUST): Strong collaborator and communicator, strong problem-solving skills, experienced in building technical partnerships with research and business teams.
Nice to have: Prior experience with
non-relational database vendors (Elasticsearch, etc.)
building resource-intensive HPC analysis modules and/or data processing tasks
developing and deploying containerized applications (e.g., Docker, Singularity)
the Kubernetes/Helm containerization platform
end-to-end testing frameworks: Robot/Selenium
What we look for
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity.
You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today. #HYDIT2025 Current Employees apply HERE Current Contingent Workers apply HERE Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. Employee Status: Regular Relocation: VISA Sponsorship: Travel Requirements: Flexible Work Arrangements: Hybrid Shift: Valid Driving License: Hazardous Material(s): Required Skills: Availability Management, Capacity Management, Change Controls, Design Applications, High Performance Computing (HPC), Incident Management, Information Management, Information Technology (IT) Infrastructure, IT Service Management (ITSM), Release Management, Software Development, Software Development Life Cycle (SDLC), Solution Architecture, System Administration, System Designs Preferred Skills: Job Posting End Date: 08/15/2025 A job posting is effective until 11:59:59PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID: R353506
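To make the "integrate AWS services into production workflows" qualification concrete, here is a minimal boto3 sketch that submits an AWS Batch job of the kind an Airflow or Nextflow pipeline step might kick off; the job queue, job definition, and parameters are hypothetical.

```python
import boto3

batch = boto3.client("batch")

# All names below are placeholders; real values come from the platform's IaC stack.
response = batch.submit_job(
    jobName="omics-qc-sample-42",
    jobQueue="hpc-ondemand-queue",
    jobDefinition="omics-qc:3",
    containerOverrides={
        "command": ["python", "run_qc.py", "--sample", "42"],
        "environment": [{"name": "OUTPUT_BUCKET", "value": "s3://example-results"}],
    },
)
print("submitted job:", response["jobId"])

# Poll the job status once (a workflow engine would normally do this for us).
desc = batch.describe_jobs(jobs=[response["jobId"]])
print("status:", desc["jobs"][0]["status"])
```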

Posted 3 weeks ago

Apply

8.0 years

12 Lacs

India

On-site

Experience- 8+ years JD- We are seeking a skilled Snowflake Developer with 8+ years of experience in designing, developing, and optimizing Snowflake data solutions. The ideal candidate will have strong expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration. This role involves building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake. Key Responsibilities 1. Snowflake Development & Optimization Design and develop Snowflake databases, schemas, tables, and views following best practices. Write complex SQL queries, stored procedures, and UDFs for data transformation. Optimize query performance using clustering, partitioning, and materialized views. Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks). 2. Data Pipeline Development Build and maintain ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark. Integrate Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe). Develop CDC (Change Data Capture) and real-time data processing solutions. 3. Data Modeling & Warehousing Design star schema, snowflake schema, and data vault models in Snowflake. Implement data sharing, secure views, and dynamic data masking. Ensure data quality, consistency, and governance across Snowflake environments. 4. Performance Tuning & Troubleshooting Monitor and optimize Snowflake warehouse performance (scaling, caching, resource usage). Troubleshoot data pipeline failures, latency issues, and query bottlenecks. Work with DevOps teams to automate deployments and CI/CD pipelines. 5. Collaboration & Documentation Work closely with data analysts, BI teams, and business stakeholders to deliver data solutions. Document data flows, architecture, and technical specifications. Mentor junior developers on Snowflake best practices. Required Skills & Qualifications · 8+ years in database development, data warehousing, or ETL. · 4+ years of hands-on Snowflake development experience. · Strong SQL or Python skills for data processing. · Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark). · Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT). · Certifications: SnowPro Core Certification (preferred). Preferred Skills · Familiarity with data governance and metadata management. · Familiarity with DBT, Airflow, SSIS & IICS · Knowledge of CI/CD pipelines (Azure DevOps). Job Type: Full-time Pay: From ₹1,200,000.00 per year Schedule: Monday to Friday Application Question(s): How many years of total experience do you currently have? How many years of experience do you have in Snowflake development? What is your current CTC? What is your expected CTC? What is your notice period/ LWD? What is your current location? Are you comfortable attending L2 round face to face in Hyderabad?
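As an illustration of the Snowflake features called out above (Streams & Tasks for incremental processing), here is a hedged sketch using the snowflake-connector-python driver; the account details, warehouse, and tables are invented placeholders, not part of the posting.

```python
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # placeholder
    user="etl_user",
    password="secret",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

statements = [
    # Track inserts/updates on a raw table with a stream ...
    "CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw_orders",
    # ... and merge them into the curated table on a schedule with a task.
    """
    CREATE OR REPLACE TASK merge_orders_task
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
    AS
      MERGE INTO curated_orders c
      USING raw_orders_stream s ON c.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET c.amount = s.amount
      WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
    """,
    "ALTER TASK merge_orders_task RESUME",
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()
```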

Posted 3 weeks ago

Apply

6.0 years

8 - 23 Lacs

Hyderābād

On-site

Position – Data Engineer
Experience – 6-8 years
Location – Hyderabad, INDIA
Budget – open, based on interview
Not many job switches; candidates from a reputed college (IIM or IIT) preferred
Should and must have SaaS product experience
MongoDB – mandatory. Good understanding of database systems (SQL and NoSQL); must have comprehensive experience in MongoDB or any other document DB.
Responsibilities:
Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform
Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing
Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing
Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions
Design and implement data warehouse solutions that support analytical needs and machine learning applications
Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features
Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability
Optimize query performance across various database systems through indexing, partitioning, and query refactoring
Develop and maintain documentation for data models, pipelines, and processes
Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs
Stay current with emerging technologies and best practices in data engineering
Requirements:
6+ years of experience in data engineering or related roles with a proven track record of building data pipelines and infrastructure
Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL
Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB
Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies
Experience with data warehousing concepts and technologies
Solid understanding of data modeling principles and best practices for both operational and analytical systems
Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning
Experience with AWS data services such as RDS, Redshift, S3, Glue, Kinesis, and the ELK stack
Proficiency in at least one programming language (Python, Node.js, Java)
Experience with version control systems (Git) and CI/CD pipelines
Bachelor's degree in Computer Science, Engineering, or related field
Preferred Qualifications:
Experience with graph databases (Neo4j, Amazon Neptune)
Knowledge of big data technologies such as Hadoop, Spark, Hive, and data lake architectures
Experience working with streaming data technologies and real-time data processing
Familiarity with data governance and data security best practices
Experience with containerization technologies (Docker, Kubernetes)
Understanding of financial back-office operations and the FinTech domain
Experience working in a high-growth startup environment
Job Type: Permanent
Pay: ₹862,603.66 - ₹2,376,731.02 per year
Benefits: Health insurance, Provident Fund
Supplemental Pay: Performance bonus, Yearly bonus
Experience: ETL: 7 years (Preferred); Hadoop: 1 year (Preferred)
Work Location: In person
Application Deadline: 27/07/2025
Expected Start Date: 25/07/2025
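Since MongoDB is called out as mandatory, a short pymongo sketch shows the kind of index and aggregation work implied by the responsibilities above; the connection string, database, and fields are placeholders.

```python
from pymongo import MongoClient, ASCENDING  # pip install pymongo

client = MongoClient("mongodb://localhost:27017")   # placeholder URI
orders = client["appdb"]["orders"]                  # placeholder db/collection

# Support the query pattern with a compound index (no-op if it already exists).
orders.create_index([("customer_id", ASCENDING), ("created_at", ASCENDING)])

# Aggregation pipeline: daily order counts and revenue per customer.
pipeline = [
    {"$match": {"status": "completed"}},
    {"$group": {
        "_id": {"customer_id": "$customer_id",
                "day": {"$dateToString": {"format": "%Y-%m-%d", "date": "$created_at"}}},
        "orders": {"$sum": 1},
        "revenue": {"$sum": "$amount"},
    }},
    {"$sort": {"_id.day": 1}},
]
for doc in orders.aggregate(pipeline):
    print(doc)
```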

Posted 3 weeks ago

Apply

0 years

3 - 7 Lacs

Hyderābād

On-site

Job description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist In this role, you will : Working in a Cloud Infrastructure enablement team to enable GCP (Google Cloud Platform) platform and deploy data processing tools, APIs, AI / ML tools on GCP. Lead & drive areas to increase efficiency, reengineer and automate selected processes to drastically improve process/operation efficiencies that will enable the business to make rapid decisions and ensure successful implementation without impact the business. Drive tactical recommendations for the cloud infrastructure, build and deploy pipelines, observability for a variety of development teams and help them to operationalize those recommendations. Provide technical leadership in the areas of metrics, logging, tracing, profiling, alerting and visualization, deep dive into code and system design to help diagnose issues. Design, implement, and maintain scalable and reliable infrastructure solutions in cloud environments. Provide the tactical planning for each release cycle, including planning for configuration management time, branching schedule, documentation time and regression testing. Manage standard operating process such as proactive monitoring, incident handling, system release management, system maintenance, and disaster recovery. Collaborating with Technical Architects, project manager’s product management to understand and implement tools for the design process by working closely with teams to assist with the release of new products and to fit with the overall directions to allow teams to deliver code production reliably. Ensure to fix compliance with security / violations and operational risk standards and troubleshooting network issues. Drive CICD adoption and utilize to create a frictionless development process by modernizing the CICD orchestration using industry leading technologies and maintain a robust development and deployment infrastructure. Investigate live systems faults, diagnose problems and propose and provide solutions. Report progress as required and advise of problems in good time. Implement and maintain infrastructure-as-code (IaC) using tools like Terraform to ensure consistent and reproducible deployments. Update programs to increase operating efficiency or adapt to new requirements. Review code from team members Analyst/Developers as part of the quality assurance process. Collaborate with application teams to speed up and automate aspects of the process of developing, testing and releasing the software and deliver the updated software. Develop dashboards within cloud platforms to provide Realtime analysis on data usage within cloud platforms. Requirements To be successful in this role, you should meet the following requirements: Bachelor’s degree or International Equivalent Strong exposure to Google Cloud platform. 
Strong prioritization and time management skills 7 or more years work experience in Cloud DevOps with strong awareness of industry Data Quality practice and Analytic Tools and experience in financial domain (Banking). Having good exposure in remediating violations and vulnerabilities. Good technical design and implementation skills with proven track record of delivering large and complex projects Good communication skills to build and maintain effective working relationship with peer groups and users Experience in Airflow, Composer, Terraform, Jenkins and Ansible tower. Experience in CI/CD applying Jenkins or similar CICD systems. Automation experience. Linux and windows development experience & API Design. Networking with different application including knowledge of entitlement/access controls. Keep up-to-date and have expertise on current tools, technologies and areas like cyber security and regulations pertaining to aspects like data privacy. Creating Technical Documentation. Detailed understanding of data warehouse and data marts, along with BI tools. Self-motivated, focused, detailed oriented and able to work efficiently to deadlines are essential. Ability to work with a degree of autonomy while also being able to work in a collaborative team environment You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
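For the observability side of this role (metrics, logging, alerting, and dashboards on GCP), here is a minimal sketch using the google-cloud-logging client to emit structured pipeline events that dashboards and alerts can be built on; the log name and fields are hypothetical.

```python
from google.cloud import logging as cloud_logging  # pip install google-cloud-logging

client = cloud_logging.Client()            # uses Application Default Credentials
logger = client.logger("deploy-pipeline")  # placeholder log name

# Structured entries are easy to filter in Cloud Logging and chart in dashboards.
logger.log_struct(
    {
        "event": "deployment_finished",
        "service": "data-ingest-api",      # placeholder service name
        "version": "1.4.2",
        "duration_seconds": 183,
        "status": "success",
    },
    severity="INFO",
)
print("log entry written")
```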

Posted 3 weeks ago

Apply

4.0 - 7.0 years

18 - 20 Lacs

Pune

Hybrid

Job Title: GCP Data Engineer Location: Pune, India Experience: 4 to 7 Years Job Type: Full-Time Job Summary: We are looking for a highly skilled GCP Data Engineer with 4 to 7 years of experience to join our data engineering team in Pune . The ideal candidate should have strong experience working with Google Cloud Platform (GCP) , including Dataproc , Cloud Composer (Apache Airflow) , and must be proficient in Python , SQL , and Apache Spark . The role involves designing, building, and optimizing data pipelines and workflows to support enterprise-grade analytics and data science initiatives. Key Responsibilities: Design and implement scalable and efficient data pipelines on GCP , leveraging Dataproc , BigQuery , Cloud Storage , and Pub/Sub. Develop and manage ETL/ELT workflows using Apache Spark , SQL , and Python. Orchestrate and automate data workflows using Cloud Composer (Apache Airflow). Build batch and streaming data processing jobs that integrate data from various structured and unstructured sources. Optimize pipeline performance and ensure cost-effective data processing. Collaborate with data analysts, scientists, and business teams to understand data requirements and deliver high-quality solutions. Implement and monitor data quality checks, validation, and transformation logic. Required Skills: Strong hands-on experience with Google Cloud Platform (GCP) Proficiency with Dataproc for big data processing and Apache Spark Expertise in Python and SQL for data manipulation and scripting Experience with Cloud Composer / Apache Airflow for workflow orchestration Knowledge of data modeling, warehousing, and pipeline best practices Solid understanding of ETL/ELT architecture and implementation Strong troubleshooting and problem-solving skills Preferred Qualifications: GCP Data Engineer or Cloud Architect Certification. Familiarity with BigQuery , Dataflow , and Pub/Sub. Interested candidates can send your your resume on pranitathapa@onixnet.com
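As a sketch of how Cloud Composer typically drives Dataproc in the workflow described above, here is a minimal DAG using the Google provider's DataprocSubmitJobOperator; the project, region, cluster, and GCS path are placeholders.

```python
from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator

PROJECT_ID = "my-gcp-project"     # placeholder
REGION = "asia-south1"            # placeholder
CLUSTER = "etl-cluster"           # placeholder

PYSPARK_JOB = {
    "reference": {"project_id": PROJECT_ID},
    "placement": {"cluster_name": CLUSTER},
    "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/transform_sales.py"},
}

with DAG(
    dag_id="dataproc_daily_transform",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_transform = DataprocSubmitJobOperator(
        task_id="run_sales_transform",
        job=PYSPARK_JOB,
        region=REGION,
        project_id=PROJECT_ID,
    )
```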

Posted 3 weeks ago

Apply

1.0 years

3 - 8 Lacs

Chennai

On-site

If you are looking for a career at a dynamic company with a people-first mindset and a deep culture of growth and autonomy, ACV is the right place for you! Competitive compensation packages and learning and development opportunities, ACV has what you need to advance to the next level in your career. We will continue to raise the bar every day by investing in our people and technology to help our customers succeed. We hire people who share our passion, bring innovative ideas to the table, and enjoy a collaborative atmosphere. Who we are: ACV is a technology company that has revolutionized how dealers buy and sell cars online. We are transforming the automotive industry. ACV Auctions Inc. (ACV), has applied innovation and user-designed, data driven applications and solutions. We are building the most trusted and efficient digital marketplace with data solutions for sourcing, selling and managing used vehicles with transparency and comprehensive insights that were once unimaginable. We are disruptors of the industry and we want you to join us on our journey. Our network of brands include ACV Auctions, ACV Transportation, ClearCar, MAX Digital and ACV Capital within its Marketplace Products, as well as, True360 and Data Services. ACV Auctions in Chennai, India are looking for talented individuals to join our team. As we expand our platform, we're offering a wide range of exciting opportunities across various roles in corporate, operations, and product and technology. Our global product and technology organization spans product management, engineering, data science, machine learning, DevOps and program leadership. What unites us is a deep sense of customer centricity, calm persistence in solving hard problems, and a shared passion for innovation. If you're looking to grow, lead, and contribute to something larger than yourself, we'd love to have you on this journey. Let's build something extraordinary together. Join us in shaping the future of automotive! At ACV we focus on the Health, Physical, Financial, Social and Emotional Wellness of our Teammates and to support this we offer industry leading benefits and wellness programs. Who we are looking for: We are seeking a skilled and motivated engineer to join our Data Infrastructure team. The Data Infrastructure engineering team is responsible for the tools and backend infrastructure that supports our data platform to optimize in performance scalability and reliability. This role requires strong focus and experience in multi-cloud based technologies, message bus systems, automated deployments using containerized applications, design, development, database management and performance, SOX compliance requirements, and implementation of infrastructure using automation through terraform and continuous delivery and batch-oriented workflows. As a Data Infrastructure Engineer at ACV Auctions, you will work alongside and mentor software and production engineers in the development of solutions to ACV’s most complex data and software problems. You will be an engineer that is able to operate in a high performing team, that can balance high quality deliverables with customer focus, have excellent communication skills, desire and ability to mentor and guide engineers, and have a record of delivering results in a fast paced environment. 
It is expected that you are a technical liaison that you can balance high quality delivery with customer focus, that you have excellent communication skills, and that you have a record of delivering results in a fast-paced environment. What you will be doing: Collaborate with cross-functional teams, including Data Scientists, Software Engineers, Data Engineers, and Data Analysts, to understand data requirements and translate them into technical specifications. Influence company wide engineering standards for databases, tooling, languages, and build systems. Design, implement, and maintain scalable and high-performance data infrastructure solutions, with a primary focus on data. Design, implement, and maintain tools and best practices for (but not limited to) access control, data versioning, database management, and migration strategies. Contribute, influence, and set standards for all technical aspects of a product or service including, but not limited to, coding, testing, debugging, performance, languages, database selection, management and deployment. Identify and troubleshoot database/system issues and bottlenecks, working closely with the engineering team to implement effective solutions. Write clean, maintainable, well-commented code and automation to support our data infrastructure layer. Perform code reviews, develop high-quality documentation, and build robust test suites for your products. Provide technical support for databases, including troubleshooting, performance tuning, and resolving complex issues. Collaborate with software and DevOps engineers to design scalable services, plan feature roll-out, and ensure high reliability and performance of our products. Collaborate with development teams and data science teams to design and optimize database schemas, queries, and stored procedures for maximum efficiency. Participate in the SOX audits, including creation of standards and reproducible audit evidence through automation Create and maintain documentation for database and system configurations, procedures, and troubleshooting guides. Maintain and extend (as required) existing database operations solutions for backups, index defragmentation, data retention, etc. Respond-to and troubleshoot highly complex problems quickly, efficiently, and effectively. Accountable for the overall performance of products and/or services within a defined area of focus. Be part of the on-call rotation. Handle multiple competing priorities in an agile, fast-paced environment. Perform additional duties as assigned What you need: Bachelor’s degree in computer science, Information Technology, or a related field (or equivalent work experience) Ability to read, write, speak, and understand English Strong communication and collaboration skills, with the ability to work effectively in a fast paced global team environment 1+ years experience architecting, developing, and delivering software products with emphasis on data infrastructure layer 1+ years work with continuous integration and build tools. 1+ years experience programing in Python 1+ years experience with Cloud platforms preferably in GCP/AWS Knowledge in day-day tools and how they work including deployments, k8s, monitoring systems, and testing tools. Knowledge in version control systems including trunk-based development, multiple release planning, cherry picking, and rebase. Hands-on skills and the ability to drill deep into the complex system design and implementation. 
Experience with: DevOps practices and tools for database automation and infrastructure provisioning Programming in Python, SQL Github, Jenkins Infrastructure as code tooling, such as terraform, preferred Big data technologies and distributed databases Nice to Have Qualifications: Experience with NoSQL data stores Airflow, Docker, Containers, Kubernetes, DataDog, Fivetran Database monitoring and diagnostic tools, preferably Data Dog. Database management/administration with PostgreSQL, MySQL, Dynamo, Mongo GCP/BigQuery, Confluent Kafka Using and integrating with cloud services, specifically: AWS RDS, Aurora, S3, GCP Service Oriented Architecture/Microservices and Event Sourcing in a platform like Kafka (preferred) Familiarity with DevOps practices and tools for automation and infrastructure provisioning. Hands-on experience with SOX compliance requirements Knowledge of data warehousing concepts and technologies, including dimensional modeling and ETL frameworks Knowledge of database design principles, data modeling, architecture, infrastructure, security principles, best practices, performance tuning and optimization techniques #LI-AM1 Our Values Trust & Transparency | People First | Positive Experiences | Calm Persistence | Never Settling At ACV, we are committed to an inclusive culture in which every individual is welcomed and empowered to celebrate their true selves. We achieve this by fostering a work environment of acceptance and understanding that is free from discrimination. ACV is committed to being an equal opportunity employer regardless of sex, race, creed, color, religion, marital status, national origin, age, pregnancy, sexual orientation, gender, gender identity, gender expression, genetic information, disability, military status, status as a veteran, or any other protected characteristic. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a disability or special need that requires reasonable accommodation, please let us know. Data Processing Consent When you apply to a job on this site, the personal data contained in your application will be collected by ACV Auctions Inc. and/or one of its subsidiaries ("ACV Auctions"). By clicking "apply", you hereby provide your consent to ACV Auctions and/or its authorized agents to collect and process your personal data for purpose of your recruitment at ACV Auctions and processing your job application. ACV Auctions may use services provided by a third party service provider to help manage its recruitment and hiring process. For more information about how your personal data will be processed by ACV Auctions and any rights you may have, please review ACV Auctions' candidate privacy notice here. If you have any questions about our privacy practices, please contact datasubjectrights@acvauctions.com.
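One concrete example of the database-operations work mentioned above (monitoring, index maintenance, performance tuning): a short psycopg2 sketch that flags rarely used indexes from PostgreSQL's statistics views; the DSN is a placeholder.

```python
import psycopg2  # pip install psycopg2-binary

# Placeholder DSN -- real credentials would come from a secrets manager.
conn = psycopg2.connect("dbname=appdb user=ops host=localhost")

# pg_stat_user_indexes tracks how often each index has been scanned; near-zero
# idx_scan on a large index is a candidate for review before dropping or rebuilding.
QUERY = """
SELECT schemaname,
       relname        AS table_name,
       indexrelname   AS index_name,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC, pg_relation_size(indexrelid) DESC
LIMIT 20;
"""

with conn, conn.cursor() as cur:
    cur.execute(QUERY)
    for row in cur.fetchall():
        print(row)
conn.close()
```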

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Chennai

Remote

Summary
As a Data Engineer at Gainwell, you can contribute your skills as we harness the power of technology to help our clients improve the health and well-being of the members they serve — a community’s most vulnerable. Connect your passion with purpose, teaming with people who thrive on finding innovative solutions to some of healthcare’s biggest challenges. Here are the details on this position.
Your role in our mission
Lead the end-to-end design of pipelines and storage solutions supporting applications and reporting.
Build and optimize ETL/ELT processes for ingesting and transforming application data.
Architect and maintain high-performance PostgreSQL databases integrated with OutSystems front-end applications and AWS backend services.
Collaborate closely with OutSystems developers, architects, business analysts, and SMEs to deliver solutions aligned with requirements.
Implement and monitor data quality, lineage, and validation to ensure accuracy in reporting.
Integrate with AWS services such as S3, RDS, Glue, Lambda, and Step Functions, and manage pipelines and workflows.
Provide technical leadership, mentor junior data engineers, and define best practices for internal processes, performance tuning, and governance.
What we're looking for
Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Engineering, or a related field.
5+ years of data engineering experience, including leading complex projects.
3+ years of hands-on experience with AWS services (e.g., RDS/PostgreSQL, Glue, S3, Lambda, CloudWatch).
Deep experience designing and optimizing PostgreSQL schemas and queries.
Experience integrating OutSystems applications with external databases and APIs.
Solid understanding of ETL frameworks and orchestration tools (e.g., Airflow, Step Functions).
Strong communication skills with the ability to translate business requirements into scalable solutions.
Experience building dashboards to support operational metrics.
Familiarity with OutSystems service components and integration patterns.
AWS and/or OutSystems certifications.
What you should expect in this role
Work environment: Remote/Hybrid
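To make the AWS orchestration piece above concrete (Glue, Lambda, Step Functions), here is a minimal boto3 sketch that starts a Step Functions execution for a nightly load; the state machine ARN and input payload are hypothetical.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN -- in practice this comes from the deployed infrastructure.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:nightly-etl"

response = sfn.start_execution(
    stateMachineArn=STATE_MACHINE_ARN,
    name="nightly-etl-2024-01-15",                 # must be unique per execution
    input=json.dumps({"load_date": "2024-01-15", "source": "claims"}),
)
print("started execution:", response["executionArn"])

# Check status once; a scheduler or EventBridge rule would normally own this.
status = sfn.describe_execution(executionArn=response["executionArn"])["status"]
print("status:", status)
```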

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Hello Everyone We are #hiring Position – Data Engineer Exp – 6-8 Years Location - Hyderabad, INDIA 🧠 MongoDB – Mandatory Good understanding of database systems -- SQL and NoSQL Must have comprehensive experience in MongoDB or any other document DB 🔷 Responsibilities: ▪️ Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform ▪️ Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing ▪️ Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing ▪️ Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions ▪️ Design and implement data warehouse solutions that support analytical needs and machine learning applications ▪️ Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features ▪️ Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability ▪️ Optimize query performance across various database systems through indexing, partitioning, and query refactoring ▪️ Develop and maintain documentation for data models, pipelines, and processes ▪️ Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs ▪️ Stay current with emerging technologies and best practices in data engineering 🔷 Requirements: ▫️ 6+ years of experience in data engineering or related roles with a proven track record of building data pipelines and infrastructure ▫️ Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL ▫️ Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB ▫️ Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies ▫️ Experience with data warehousing concepts and technologies ▫️ Solid understanding of data modeling principles and best practices for both operational and analytical systems ▫️ Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning ▫️ Experience with AWS data services such as RDS, Redshift, S3, Glue, Kinesis, and the ELK stack 📬 How to Apply: 📩 Email your updated resume to: sandhyarani.p@nybinfotech.com
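As a rough illustration of the Debezium/Kafka ETL work listed above (not code from the employer), the sketch below consumes change-data-capture events from a Kafka topic and upserts them into PostgreSQL. The topic, table, and connection details are hypothetical, and it assumes the kafka-python and psycopg2 packages; the exact event envelope depends on the Debezium connector configuration.

```python
# Minimal sketch: apply Debezium CDC events from Kafka to a PostgreSQL replica table.
import json

import psycopg2
from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "dbserver1.inventory.customers",               # hypothetical Debezium topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")) if v else None,
    auto_offset_reset="earliest",
)

conn = psycopg2.connect("dbname=analytics user=etl host=localhost")

for message in consumer:
    event = message.value
    if not event:
        continue  # tombstone record
    payload = event.get("payload", event)          # envelope may or may not wrap in "payload"
    after = payload.get("after")
    if after is None:
        continue  # deletes are ignored in this simplified sketch
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO customers_replica (id, email)
            VALUES (%s, %s)
            ON CONFLICT (id) DO UPDATE SET email = EXCLUDED.email
            """,
            (after["id"], after["email"]),
        )
```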

Posted 3 weeks ago

Apply

8.0 years

2 - 8 Lacs

Noida

On-site

Company Overview: - R1 is a leading provider of technology-driven solutions that help hospitals and health systems to manage their financial systems and improve patients' experience. We are the one company that combines the deep expertise of a global workforce of revenue cycle professionals with the industry's most advanced technology platform, encompassing sophisticated analytics, AI, intelligent automation and workflow orchestration. R1 is a place where we think boldly to create opportunities for everyone to innovate and grow. A place where we partner with purpose through transparency and inclusion. We are a global community of engineers, front-line associates, healthcare operators, and RCM experts that work together to go beyond for all those we serve. Because we know that all this adds up to something more, a place where we're all together better. R1 India is proud to be recognized amongst Top 25 Best Companies to Work For 2024, by the Great Place to Work Institute. This is our second consecutive recognition on this prestigious Best Workplaces list, building on the Top 50 recognition we achieved in 2023. Our focus on employee wellbeing and inclusion and diversity is demonstrated through prestigious recognitions with R1 India being ranked amongst Best in Healthcare, Top 100 Best Companies for Women by Avtar & Seramount, and amongst Top 10 Best Workplaces in Health & Wellness. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to 'make healthcare work better for all' by enabling efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 16,000+ strong in India with presence in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated with a robust set of employee benefits and engagement activities. R1 RCM Inc. is a leading provider of technology-enabled revenue cycle management services which transform and solve challenges across health systems, hospitals and physician practices. Headquartered in Chicago, R1® is a publicly-traded organization with employees throughout the US and multiple INDIA locations. Our mission is to be the one trusted partner to manage revenue, so providers and patients can focus on what matters most. Our priority is to always do what is best for our clients, patients and each other. With our proven and scalable operating model, we complement a healthcare organization's infrastructure, quickly driving sustainable improvements to net patient revenue and cash flows while reducing operating costs and enhancing the patient experience. Description: We are seeking a Staff Software Engineer (5I) (ETL) with 8-10 years of experience to join our ETL Development team. This role will report to the Engineering Manager, and the candidate will be involved in the planning, design, and implementation of our centralized data warehouse solution for data acquisition, ingestion, and large-scale data processing and automation/optimization across all the company products. About the Role: The candidate will play a crucial role in designing, developing, and leading the implementation of ETL processes and data architecture solutions, and will collaborate with various stakeholders to ensure the seamless integration, transformation, and loading of data to support our data warehousing and analytics initiatives.
Key Responsibilities: Lead the design and architecture of ETL processes and data integration solutions. Develop and maintain ETL workflows using tools such as SSIS, Azure Databricks, SparkSQL, or similar. Collaborate with data architects, analysts, and business stakeholders to gather requirements and translate them into technical solutions. Optimize ETL processes for performance, scalability, and reliability. Conduct code reviews, provide technical guidance, and mentor junior developers. Troubleshoot and resolve issues related to ETL processes and data integration. Ensure compliance with data governance, security policies, and best practices. Document ETL processes and maintain comprehensive technical documentation. Stay updated with the latest trends and technologies in data integration and ETL. Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. 10-12 years of experience in ETL development and data integration. Expertise in ETL tools such as SSIS, T-SQL, Azure Databricks, or similar. Knowledge of various SQL/NoSQL data storage mechanisms and Big Data technologies, and experience in Data Modeling. Knowledge of Azure Data Factory, Azure Databricks, and Azure Data Lake. Experience in Scala, SparkSQL, and Airflow is preferred. Proven experience in data architecture and designing scalable ETL solutions. Excellent problem-solving and analytical skills. Strong communication and leadership skills. Ability to work effectively in a team-oriented environment. Experience working with agile methodology. Healthcare industry experience preferred. Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate and create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit: r1rcm.com Visit us on Facebook
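To make the Databricks/SparkSQL portion of this role concrete, here is a minimal, hypothetical PySpark sketch of an ETL step that cleans a raw dataset and aggregates it with SparkSQL. The storage path and table names are illustrative assumptions; on Databricks a SparkSession and Delta support are already available.

```python
# Minimal sketch of a Spark-based ETL step: clean raw records, aggregate, write a Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims_etl").getOrCreate()

raw = spark.read.json("abfss://landing@examplestore.dfs.core.windows.net/claims/")  # hypothetical path

cleaned = (
    raw.dropDuplicates(["claim_id"])
       .withColumn("service_date", F.to_date("service_date"))
       .filter(F.col("billed_amount") > 0)
)
cleaned.createOrReplaceTempView("claims_clean")

daily = spark.sql("""
    SELECT service_date, payer_id, SUM(billed_amount) AS total_billed
    FROM claims_clean
    GROUP BY service_date, payer_id
""")

daily.write.mode("overwrite").format("delta").saveAsTable("reporting.claims_daily")
```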

Posted 3 weeks ago

Apply

2.0 years

3 - 5 Lacs

India

On-site

Job Title: AI Engineer Location: Kolkata, India Company: GWC (Global We Connect) Job Type: Full-Time Experience Level: Mid to Senior Level Position Summary We are looking for a talented and proactive AI Engineer to join our growing team in Kolkata. As an AI Engineer at GWC, you will work closely with data scientists, software developers, and product managers to design, develop, and deploy AI/ML models that solve real-world business problems across various domains. Key Responsibilities Design, develop, and deploy scalable machine learning and deep learning models. Collaborate with cross-functional teams to integrate AI solutions into enterprise platforms and applications. Process, clean, and analyze large data sets to uncover trends and patterns. Develop and maintain AI pipelines using tools like TensorFlow, PyTorch, Scikit-learn, etc. Apply NLP, computer vision, or recommendation systems depending on project needs. Research and implement novel algorithms and techniques. Monitor and evaluate model performance and retrain as necessary. Required Skills & Qualifications Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields. 2–5 years of experience in machine learning, AI, or data science roles. Strong programming skills in Python; familiarity with R, Java, or C++ is a plus. Proficiency with ML libraries and frameworks (e.g., TensorFlow, PyTorch, Keras, Scikit-learn). Experience with data processing tools like Pandas, NumPy, and SQL. Familiarity with cloud platforms (AWS, Azure, or GCP) and version control systems like Git. Solid understanding of statistical analysis, data modeling, and algorithm design. Excellent problem-solving abilities and communication skills. Preferred Qualifications Experience with MLOps tools such as MLflow, Airflow, or Kubeflow. Background in deploying AI models in production environments (REST APIs, microservices). Exposure to NLP (spaCy, HuggingFace) or computer vision (OpenCV, YOLO, etc.). Contributions to open-source projects or participation in AI research. Job Type: Full-time Pay: ₹300,000.00 - ₹500,000.00 per year Schedule: Day shift Work Location: In person Application Deadline: 31/07/2025 Expected Start Date: 01/08/2025
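As a concrete illustration of the model-building work described above (not the employer's code), the sketch below trains a simple scikit-learn pipeline. The dataset, feature names, and target column are hypothetical.

```python
# Minimal sketch: a scikit-learn pipeline that scales features and fits a classifier.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("customer_events.csv")             # hypothetical input file
X = df[["sessions_last_30d", "avg_order_value"]]    # hypothetical features
y = df["churned"]                                   # hypothetical binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```

In production, the same pipeline object would typically be serialized and served behind an API or batch-scored from an orchestrated job.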

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description Role Proficiency: This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required. Outcomes Act creatively to develop pipelines/applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and reusing proven solutions. Support the Project Manager in day-to-day project execution and account for the developmental activities of others. Interpret requirements, create optimal architecture, and design solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code using best standards, debug, and test solutions to ensure best-in-class quality. Tune performance of code and align it with the appropriate infrastructure, understanding cost implications of licenses and infrastructure. Create data schemas and models effectively. Develop and manage data storage solutions, including relational databases, NoSQL databases, Delta Lakes, and data lakes. Validate results with user representatives, integrating the overall solution. Influence and enhance customer satisfaction and employee engagement within project teams. Measures of Outcomes Team's adherence to engineering processes and standards. Team's adherence to schedule/timelines. Team's adherence to SLAs where applicable. Number of defects post-delivery. Number of non-compliance issues. Reduction of recurrence of known defects. Quick turnaround of production bugs. Completion of applicable technical/domain certifications. Completion of all mandatory training requirements. Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times). Average time to detect, respond to, and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches. Code Outputs Expected: Develop data processing code with guidance, ensuring performance and scalability requirements are met. Define coding standards, templates, and checklists. Review code for team and peers. Documentation Create/review templates, checklists, guidelines, and standards for design/process/development. Create/review deliverable documents, including design documents, architecture documents, infra costing, business requirements, source-target mappings, test cases, and results. Configure Define and govern the configuration management plan. Ensure compliance from the team. Test Review/create unit test cases, scenarios, and execution. Review test plans and strategies created by the testing team. Provide clarifications to the testing team. Domain Relevance Advise data engineers on the design and development of features and components, leveraging a deeper understanding of business needs. Learn more about the customer domain and identify opportunities to add value. Complete relevant domain certifications. Manage Project Support the Project Manager with project inputs.
Provide inputs on project plans or sprints as needed. Manage the delivery of modules. Manage Defects Perform defect root cause analysis (RCA) and mitigation. Identify defect trends and implement proactive measures to improve quality. Estimate Create and provide input for effort and size estimation and plan resources for projects. Manage Knowledge Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team. Release Execute and monitor the release process. Design Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications, business components, and data models. Interface With Customer Clarify requirements and provide guidance to the Development Team. Present design options to customers. Conduct product demos. Collaborate closely with customer architects to finalize designs. Manage Team Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance and opportunities. Ensure team members are upskilled. Engage the team in projects. Proactively identify attrition risks and collaborate with BSE on retention measures. Certifications Obtain relevant domain and technology certifications. Skill Examples Proficiency in SQL, Python, or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning. Experience in data warehouse design and cost improvements. Apply and optimize data models for efficient storage, retrieval, and processing of large datasets. Communicate and explain design/development aspects to customers. Estimate time and resource requirements for developing/debugging features/components. Participate in RFP responses and solutioning. Mentor team members and guide them in relevant upskilling and certification. Knowledge Examples Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF. Proficient in SQL for analytics and windowing functions. Understanding of data schemas and models. Familiarity with domain-related data. Knowledge of data warehouse optimization techniques. Understanding of data security concepts. Awareness of patterns, frameworks, and automation practices. Additional Comments We are seeking a highly experienced Senior Data Engineer to design, develop, and optimize scalable data pipelines in a cloud-based environment. The ideal candidate will have deep expertise in PySpark, SQL, Azure Databricks, and experience with either AWS or GCP. A strong foundation in data warehousing, ELT/ETL processes, and dimensional modeling (Kimball/star schema) is essential for this role. Must-Have Skills 8+ years of hands-on experience in data engineering or big data development. Strong proficiency in PySpark and SQL for data transformation and pipeline development. Experience working in Azure Databricks or equivalent Spark-based cloud platforms. Practical knowledge of cloud data environments – Azure, AWS, or GCP. Solid understanding of data warehousing concepts, including Kimball methodology and star/snowflake schema design. Proven experience designing and maintaining ETL/ELT pipelines in production.
Familiarity with version control (e.g., Git), CI/CD practices, and data pipeline orchestration tools (e.g., Airflow, Azure Data Factory). Skills: Azure Data Factory, Azure Databricks, PySpark, SQL
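Since the posting emphasizes Kimball-style dimensional modeling in PySpark, here is a minimal, hypothetical sketch of loading a fact table by resolving a surrogate key from a dimension; the table and column names are illustrative assumptions rather than any specific client schema.

```python
# Minimal sketch: load a star-schema fact table by joining staged orders to a customer dimension.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("load_fact_sales").getOrCreate()

orders = spark.read.table("staging.orders")                 # hypothetical staged source
dim_customer = spark.read.table("warehouse.dim_customer")   # hypothetical dimension

fact_sales = (
    orders.join(dim_customer, orders.customer_id == dim_customer.customer_nk, "left")
          .select(
              dim_customer.customer_sk,                     # surrogate key from the dimension
              F.to_date("order_ts").alias("order_date"),
              F.col("amount").alias("gross_amount"),
          )
)

fact_sales.write.format("delta").mode("append").saveAsTable("warehouse.fact_sales")
```

A production load would also handle late-arriving dimension rows and slowly changing dimensions, which this sketch omits.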

Posted 3 weeks ago

Apply

2.0 - 4.0 years

7 - 11 Lacs

Jaipur

Work from Office

Position Overview We are seeking a skilled Data Engineer with 2-4 years of experience to design, build, and maintain scalable data pipelines and infrastructure. You will work with modern data technologies to enable data-driven decision making across the organisation. Key Responsibilities Design and implement ETL/ELT pipelines using Apache Spark and orchestration tools (Airflow/Dagster). Build and optimize data models on Snowflake and cloud platforms. Collaborate with analytics teams to deliver reliable data for reporting and ML initiatives. Monitor pipeline performance, troubleshoot data quality issues, and implement testing frameworks. Contribute to data architecture decisions and work with cross-functional teams to deliver quality data solutions. Required Skills & Experience 2-4 years of experience in data engineering or related field Strong proficiency with Snowflake including data modeling, performance optimisation, and cost management Hands-on experience building data pipelines with Apache Spark (PySpark) Experience with workflow orchestration tools (Airflow, Dagster, or similar) Proficiency with dbt for data transformation, modeling, and testing Proficiency in Python and SQL for data processing and analysis Experience with cloud platforms (AWS, Azure, or GCP) and their data services Understanding of data warehouse concepts, dimensional modeling, and data lake architectures Preferred Qualifications Experience with infrastructure as code tools (Terraform, CloudFormation) Knowledge of streaming technologies (Kafka, Kinesis, Pub/Sub) Familiarity with containerisation (Docker, Kubernetes) Experience with data quality frameworks and monitoring tools Understanding of CI/CD practices for data pipelines Knowledge of data catalog and governance tools Advanced dbt features including macros, packages, and documentation Experience with table format technologies (Apache Iceberg, Apache Hudi) Technical Environment Data Warehouse: Snowflake Processing: Apache Spark, Python, SQL Orchestration: Airflow/Dagster Transformation: dbt Cloud: AWS/Azure/GCP Version Control: Git Monitoring: DataDog, Grafana, or similar
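For context on the Snowflake-centric stack above, here is a minimal, hypothetical sketch of a transformation step run from Python with the Snowflake connector, the kind of task an Airflow or Dagster job might wrap. The account, warehouse, and table names are assumptions; in this stack the same logic would more likely live in a dbt model invoked by the orchestrator.

```python
# Minimal sketch: rebuild a simple mart table in Snowflake from Python.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example-account",     # hypothetical
    user="ETL_USER",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="MARTS",
)

cur = conn.cursor()
try:
    cur.execute("""
        CREATE OR REPLACE TABLE MARTS.DAILY_REVENUE AS
        SELECT order_date, SUM(amount) AS revenue
        FROM RAW.ORDERS
        GROUP BY order_date
    """)
finally:
    cur.close()
    conn.close()
```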

Posted 3 weeks ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Hiring Data Engineer in Bangalore with 6+ years of experience in the skills below: Must Have: - Big Data technologies: Hadoop, MapReduce, Spark, Kafka, Flink - Programming languages: Java/Scala/Python - Cloud: Azure, AWS, Google Cloud - Docker/Kubernetes Required Candidate Profile: - Strong communication skills - Experience with relational SQL/NoSQL databases – Postgres & Cassandra - Experience with the ELK stack - Immediate joiners preferred - Must be ready to work from office
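To illustrate how the Spark and Kafka pieces of this stack typically meet, here is a minimal, hypothetical Spark Structured Streaming sketch that reads a Kafka topic and writes Parquet. It assumes the spark-sql-kafka connector package is on the classpath, and the broker, topic, and paths are illustrative.

```python
# Minimal sketch: stream events from Kafka and persist them as Parquet files.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
         .option("subscribe", "user-events")                  # hypothetical topic
         .load()
         .select(F.col("value").cast("string").alias("json"))
)

query = (
    events.writeStream.format("parquet")
          .option("path", "/data/events/")
          .option("checkpointLocation", "/data/checkpoints/events/")
          .start()
)
query.awaitTermination()
```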

Posted 3 weeks ago

Apply

6.0 - 8.0 years

16 - 20 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Key Responsibilities: Lead the design, development, and deployment of scalable Python-based applications. Collaborate with cross-functional teams including DevOps, Data Engineering, and Product. Drive architecture and solution design discussions for new components and enhancements. Implement and maintain data pipelines using Apache Airflow. Orchestrate and manage deployments on Kubernetes clusters. Integrate and monitor applications using the ELK stack. Build APIs using Flask, Django, or FastAPI. Manage and execute database migrations, especially from MS SQL to open-source DBs such as Spark or ClickHouse. Lead efforts to refactor or migrate codebases, including Java-to-Python conversions or intra-Python framework transitions. Collaborate on initiatives involving Generative AI, contributing to proof-of-concepts or production-ready modules (preferred). Required Skills & Experience: 6-8 years of hands-on experience in Python development. At least 3 years of experience with: Airflow (data pipeline orchestration), Kubernetes (container orchestration), the ELK stack (logging and monitoring), and Flask/Django/FastAPI (web frameworks). Strong experience with database migration, especially from MS SQL to Spark/ClickHouse or other open-source DBs. Experience in code migration, particularly from Java to Python and between Python frameworks. Solid understanding of REST APIs, microservices, and application performance optimization. Preferred Skills: Hands-on experience or exposure to Generative AI technologies and models. Familiarity with CI/CD pipelines and cloud platforms (AWS, Azure, or GCP). Strong problem-solving, communication, and leadership skills. Experience working in agile development environments. Locations: BLR, HYD, Trivandrum, Chennai, Pune, Chandigarh, Jaipur, Mangalore
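As a small illustration of the API work this role covers (not the employer's code), here is a minimal FastAPI sketch; the service name, routes, and request model are hypothetical. It would typically be served with uvicorn (for example, uvicorn main:app).

```python
# Minimal FastAPI sketch: a health check and a simple POST endpoint with request validation.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-service")


class Order(BaseModel):
    order_id: int
    amount: float


@app.get("/health")
def health() -> dict:
    return {"status": "ok"}


@app.post("/orders")
def create_order(order: Order) -> dict:
    # Placeholder: persist the order, publish an event, call downstream services, etc.
    return {"order_id": order.order_id, "accepted": True}
```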

Posted 3 weeks ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Noida

Work from Office

We are looking for an ML Ops Engineer to join our Technology team at Clarivate. You will get the opportunity to work in a cross-cultural work environment while working on the latest web technologies with an emphasis on user-centered design. About You (Skills & Experience Required) Bachelor's or master's degree in computer science, engineering, or a related field. 5+ years of experience in machine learning, data engineering, or software development. Good experience in building data pipelines, data cleaning, and feature engineering is essential for preparing data for model training. Knowledge of programming languages (Python, R) and version control systems (Git) is necessary for building and maintaining MLOps pipelines. Experience with MLOps-specific tools and platforms (e.g., Kubeflow, MLflow, Airflow) can streamline MLOps workflows. DevOps principles, including CI/CD pipelines, infrastructure as code (IaC), and monitoring, are helpful for automating ML workflows. Experience with at least one of the cloud platforms (AWS, GCP, Azure) and their associated services (e.g., compute, storage, ML platforms) is essential for deploying and scaling ML models. Familiarity with container orchestration tools like Kubernetes can help manage and scale ML workloads efficiently. It would be great if you also had: Experience with big data technologies (Hadoop, Spark). Knowledge of data governance and security practices. Familiarity with DevOps practices and tools. What will you be doing in this role? Model Deployment & Monitoring: Oversee the deployment of machine learning models into production environments. Ensure continuous monitoring and performance tuning of deployed models. Implement robust CI/CD pipelines for model updates and rollbacks. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Communicate project status, risks, and opportunities to stakeholders. Provide technical guidance and support to team members. Infrastructure & Automation: Design and manage scalable infrastructure for model training and deployment. Automate repetitive tasks to improve efficiency and reduce errors. Ensure the infrastructure meets security and compliance standards. Innovation & Improvement: Stay updated with the latest trends and technologies in MLOps. Identify opportunities for process improvements and implement them. Drive innovation within the team to enhance the MLOps capabilities.
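To ground the MLflow/experiment-tracking side of this role, here is a minimal, hypothetical sketch that trains a scikit-learn model and logs parameters, a metric, and the model artifact to MLflow; the experiment name and dataset are illustrative.

```python
# Minimal sketch: train a model and record the run with MLflow tracking.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demand-forecast")          # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestRegressor(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    mse = mean_squared_error(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("mse", mse)
    mlflow.sklearn.log_model(model, "model")      # stores the model as a run artifact
```

Runs logged this way can later be compared in the MLflow UI or promoted through a model registry as part of a CI/CD flow.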

Posted 3 weeks ago

Apply

4.0 - 8.0 years

20 - 35 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Salary: 20 to 35 LPA Exp: 3 to 7 years Location: Gurgaon/Pune/Bengaluru Notice: Immediate to 30 days! Job Profile: Experienced Data Engineer with a strong foundation in designing, building, and maintaining scalable data pipelines and architectures. Skilled in transforming raw data into clean, structured formats for analytics and business intelligence. Proficient in modern data tools and technologies such as SQL, T-SQL, Python, Databricks, and cloud platforms (Azure). Adept at data wrangling, modeling, ETL/ELT development, and ensuring data quality, integrity, and security. Collaborative team player with a track record of enabling data-driven decision-making across business units. As a Data Engineer, the candidate will work on assignments for one of our Utilities clients. Collaborating with cross-functional teams and stakeholders involves gathering data requirements, aligning business goals, and translating them into scalable data solutions. The role includes working closely with data analysts, scientists, and business users to understand needs, designing robust data pipelines, and ensuring data is accessible, reliable, and well-documented. Regular communication, iterative feedback, and joint problem-solving are key to delivering high-impact, data-driven outcomes that support organizational objectives. This position requires a proven track record of transforming processes and driving customer value and cost savings, with experience in running end-to-end analytics for large-scale organizations. Design, build, and maintain scalable data pipelines to support analytics, reporting, and advanced modeling needs. Collaborate with consultants, analysts, and clients to understand data requirements and translate them into effective data solutions. Ensure data accuracy, quality, and integrity through validation, cleansing, and transformation processes. Develop and optimize data models, ETL workflows, and database architectures across cloud and on-premises environments. Support data-driven decision-making by delivering reliable, well-structured datasets and enabling self-service analytics. Provide seamless integration with cloud platforms (Azure), making it easy to build and deploy end-to-end data pipelines in the cloud. Use scalable Databricks clusters to handle large datasets and complex computations, optimizing performance and cost management. Must have: Client engagement experience and collaboration with cross-functional teams. Data engineering background in Databricks. Capable of working effectively as an individual contributor or in collaborative team environments. Effective communication and thought leadership with a proven record. Candidate Profile: Bachelor's/master's degree in economics, mathematics, computer science/engineering, operations research, or related analytics areas. 3+ years of experience, which must be in data engineering. Hands-on experience with SQL, Python, Databricks, and cloud platforms like Azure. Prior experience in managing and delivering end-to-end projects. Outstanding written and verbal communication skills. Able to work in a fast-paced, continuously evolving environment and ready to take on uphill challenges. Able to understand cross-cultural differences and work with clients across the globe.
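As a small, hypothetical illustration of the validation and cleansing work described above (the utilities context and table names are assumptions), here is a PySpark sketch that splits a raw table into valid and quarantined rows, as it might run on a Databricks cluster.

```python
# Minimal sketch: basic data-quality gate that separates valid rows from rejects.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("meter_readings_qc").getOrCreate()

readings = spark.read.table("raw.meter_readings")        # hypothetical source table

valid = readings.filter(
    F.col("meter_id").isNotNull() & (F.col("consumption_kwh") >= 0)
)
rejected = readings.subtract(valid)                       # simplified: treats duplicates loosely

valid.write.mode("overwrite").saveAsTable("curated.meter_readings")
rejected.write.mode("append").saveAsTable("quarantine.meter_readings")

print(f"valid={valid.count()}, rejected={rejected.count()}")
```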

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Us Zelis is modernizing the healthcare financial experience in the United States (U.S.) by providing a connected platform that bridges the gaps and aligns interests across payers, providers, and healthcare consumers. This platform serves more than 750 payers, including the top 5 health plans, BCBS insurers, regional health plans, TPAs and self-insured employers, and millions of healthcare providers and consumers in the U.S. Zelis sees across the system to identify, optimize, and solve problems holistically with technology built by healthcare experts—driving real, measurable results for clients. Why We Do What We Do In the U.S., consumers, payers, and providers face significant challenges throughout the healthcare financial journey. Zelis helps streamline the process by offering solutions that improve transparency, efficiency, and communication among all parties involved. By addressing the obstacles that patients face in accessing care, navigating the intricacies of insurance claims, and the logistical challenges healthcare providers encounter with processing payments, Zelis aims to create a more seamless and effective healthcare financial system. Zelis India plays a crucial role in this mission by supporting various initiatives that enhance the healthcare financial experience. The local team contributes to the development and implementation of innovative solutions, ensuring that technology and processes are optimized for efficiency and effectiveness. Beyond operational expertise, Zelis India cultivates a collaborative work culture, leadership development, and global exposure, creating a dynamic environment for professional growth. With hybrid work flexibility, comprehensive healthcare benefits, financial wellness programs, and cultural celebrations, we foster a holistic workplace experience. Additionally, the team plays a vital role in maintaining high standards of service delivery and contributes to Zelis’ award-winning culture. Position Overview About Zelis Zelis is a leading payments company in healthcare, guiding, pricing, explaining, and paying for care on behalf of insurers and their members. We align the interests of payers, providers, and consumers to deliver a better financial experience and more affordable, transparent care for all. Partnering with 700+ payers, supporting 4 million+ providers and 100 million members across the healthcare industry. About ZDI Zelis Data Intelligence (ZDI) is a centralized data team that partners across Zelis business units to unlock the value of data through intelligence and AI solutions. Our mission is to transform data into a strategic and competitive asset by fostering collaboration and innovation. Enable the democratization and productization of data assets to drive insights and decision-making. Develop new data and product capabilities through advanced analytics and AI-driven solutions. Collaborate closely with business units and enterprise functions to maximize the impact of data. Leverage intelligence solutions to unlock efficiency, transparency, and value across the organization. Key Responsibilities Product Expertise & Collaboration Become an expert in product areas, acting as the go-to person for stakeholders before engaging with technical data and data engineering teams. Lead the creation of clear user stories and tasks in collaboration with Engineering teams to track ongoing and upcoming work. Design, build, and own repeatable processes for implementing projects. 
Collaborate with software engineers, data engineers, data scientists, and other product teams to scope new or refine existing product features and data capabilities that increase business value, adoption, and user engagement. Understand how the product area aligns with the wider company roadmap and educate internal teams on the organization's vision. Requirements Management & Communication Ensure consistent updates of tickets and timelines, following up with technical teams on status and roadblocks. Draft clear and concise business requirements and technical product documentation. Understand the Zelis healthcare ecosystem (e.g., claims, payments, provider and member data) and educate the company on requirements and guidelines for accessing, sharing, and requesting information to inform advanced analytics, feature enhancements, and new product innovation. Communicate with technical audiences to identify requirements, gaps, and barriers, translating needs into product features. Track key performance indicators to evaluate product performance. Qualifications Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. 4+ years of technical experience (business analyst, data analyst, technical product, engineering, etc.) with a demonstrated ability to deliver alongside technical teams. 4+ years of direct experience with Agile methodologies and frameworks and product tools such as Jira and Confluence to author user stories, acceptance criteria, etc. Technical depth that enables you to collaborate with software engineers, data engineers, and data scientists and drive technical discussions about the design of data visualizations, data models, ETLs, and the deployment of data infrastructure. Understanding of Data Management, Data Engineering, API development, Cloud Engineering, Advanced Analytics, Data Science, or Product Analytics concepts, or other data/product tools such as SQL, Python, R, Spark, AWS, Azure, Airflow, Snowflake, and PowerBI. Preferred Qualifications Strong communication skills, with clear verbal communication as well as explicit and mindful written communication to work with technical teams. B2B or B2C experience helpful. Familiarity with the US healthcare system. Hands-on experience with Snowflake or other cloud platforms, including data pipeline architecture, cloud-based systems, BI/Analytics, and deploying data infrastructure solutions.

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position Title: Cloud Solutions Practice Head Location: Hyderabad, India (Travel as Needed) Reports To: CEO / Executive Leadership Team Employment Type: Full-Time | Senior Leadership Role Industry: Information Technology & Services | Cloud Solutions | AI & Digital Transformation Join the Future of Enterprise Cloud At BPMLinks , we are building a cloud-first future for enterprise clients across the globe. As our Cloud Solutions Practice Head , you won’t just lead a team, you’ll shape a legacy. Position Overview: BPMLinks LLC is seeking an accomplished and visionary Cloud Solutions Practice Head to establish and lead our newly launched Cloud Solutions Practice , aligning cloud innovation with business value creation. This is a pivotal leadership role that will oversee the full spectrum of cloud consulting, engineering, cost optimization, migration, and AI/ML-enabled services across our global delivery portfolio. The ideal candidate is a cloud thought leader with deep expertise across AWS, Azure, GCP , and modern data platforms (e.g., Snowflake, Databricks, Azure Data Factory, Oracle ). You will play a key role in scaling multi-cloud capabilities, building high-performing teams, and partnering with clients to drive cost efficiency, performance, security, and digital innovation. Key Responsibilities: 🔹 Practice Strategy & Leadership Define and execute the vision, roadmap, and service catalog for the Cloud Solutions Practice. Build a world-class delivery team of cloud architects, engineers, DevOps professionals, and data specialists. Align the practice’s capabilities with BPMLinks’ broader business transformation initiatives. 🔹 Cloud & Data Architecture Oversight Lead the design and deployment of scalable, secure, cost-optimized cloud solutions on AWS, Azure, and GCP. Direct complex cloud and data migration programs , including: Transitioning from legacy systems to Snowflake, Databricks, and BigQuery Data pipeline orchestration using Azure Data Factory, Airflow, Informatica Modernization of Oracle and SQL Server environments Guide hybrid cloud and multi-cloud strategies across IaaS, PaaS, SaaS, and serverless architectures. 🔹 Cloud Cost Optimization & FinOps Leadership Architect and institutionalize cloud cost governance frameworks and FinOps best practices. Leverage tools like AWS Cost Explorer, Azure Cost Management, and third-party FinOps platforms. Drive resource rightsizing, workload scheduling, RIs/SPs adoption, and continuous spend monitoring. 🔹 Client Engagement & Solution Delivery Act as executive sponsor for strategic accounts, engaging CXOs and technology leaders. Lead cloud readiness assessments, transformation workshops, and solution design sessions. Ensure delivery excellence through agile governance, quality frameworks, and continuous improvement. 🔹 Cross-Functional Collaboration & Talent Development Partner with sales, marketing, and pre-sales teams to define go-to-market strategies and win pursuits. Foster a culture of knowledge sharing, upskilling, certification, and technical excellence. Mentor emerging cloud leaders and architects across geographies. 
Cloud Services Portfolio You Will Lead: Cloud Consulting & Advisory Cloud readiness assessments, cloud strategy and TCO analysis Multi-cloud and hybrid cloud governance, regulatory advisory (HIPAA, PCI, SOC2) Infrastructure, Platform & Application Services Virtual machines, networking, containers, Kubernetes, serverless computing App hosting, API gateways, orchestration, cloud-native replatforming Cloud Migration & Modernization Lift-and-shift, refactoring, legacy app migration Zero-downtime migrations and DR strategies Data Engineering & Modern Data Platforms Snowflake, Databricks, BigQuery, Redshift Azure Data Factory, Oracle Cloud, Informatica, ETL/ELT pipelines DevOps & Automation CI/CD, Infrastructure-as-Code (Terraform, CloudFormation, ARM) Release orchestration and intelligent environment management Cloud Security & Compliance IAM, encryption, CSPM, SIEM/SOAR, compliance audits and policies Cost Optimization & FinOps Reserved instances, spot instances, scheduling automation Multi-cloud FinOps dashboards, showback/chargeback enablement AI/ML & Analytics on Cloud Model hosting (SageMaker, Vertex AI, Azure ML), RAG systems, semantic vector search Real-time analytics with Power BI, Looker, Kinesis Managed Cloud Services 24/7 monitoring (NOC/SOC), SLA-driven support, patching, DR management Training & Enablement Certification workshops, cloud engineering training, CoE development Required Qualifications: 15+ years of experience in enterprise IT and cloud solutions, with 5+ years in senior leadership roles Expertise in AWS, Azure, GCP (certifications preferred) Proven success in scaling cloud practices or large delivery units Hands-on experience with data platforms: Snowflake, Databricks, Azure Data Factory, Oracle In-depth understanding of FinOps principles, cost governance, and cloud performance tuning Excellent executive-level communication, strategic thinking, and client-facing presence Preferred Qualifications: Experience serving clients in regulated industries (healthcare, finance, public sector) Strong commercial acumen with experience in pre-sales, solutioning, and deal structuring MBA or advanced degree in Computer Science, Engineering, or Technology Management What We Offer: Opportunity to define and scale a global Cloud Practice from the ground up Direct influence on innovation, customer impact, and company growth Collaboration with a forward-thinking executive team and top-tier AI engineers Competitive compensation, performance-linked incentives, and potential equity Culture of ownership, agility, and continuous learning
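For a concrete sense of the FinOps monitoring referenced in this portfolio, here is a minimal, hypothetical Python sketch that pulls month-to-date spend grouped by service from the AWS Cost Explorer API via boto3. The date range is illustrative, and credentials are assumed to come from the environment.

```python
# Minimal sketch: report unblended cost per AWS service for a billing period.
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-07-01", "End": "2025-07-21"},   # illustrative dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:,.2f}")
```

Numbers like these typically feed rightsizing, reserved-instance planning, and showback/chargeback dashboards rather than being consumed directly from a script.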

Posted 3 weeks ago

Apply