5.0 - 7.0 years
13 - 17 Lacs
Hyderabad
Work from Office
Skilled in multiple GCP services - GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Log Explorer, etc. Must have Python and SQL work experience. Proactive, collaborative, and able to respond to critical situations. Ability to analyse data for functional business requirements and to interface directly with customers.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform. Skilled in multiple GCP services - GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Log Explorer. You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies. Ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work.
Preferred technical and professional experience: Intuitive individual with an ability to manage change and proven time management. Proven interpersonal skills while contributing to team effort by accomplishing related results as needed. Keeps technical knowledge up to date by attending educational workshops and reviewing publications.
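For illustration of the BigQuery-plus-Python skill set this posting asks for, here is a minimal sketch of running a SQL query against BigQuery with the google-cloud-bigquery client; the project, dataset, and table names are placeholder assumptions, not details from the posting.

```python
# Minimal sketch: querying BigQuery from Python.
# "my-gcp-project" and "sales_dataset.orders" are hypothetical names.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

sql = """
    SELECT order_date, SUM(amount) AS total_amount
    FROM `my-gcp-project.sales_dataset.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY order_date
    ORDER BY order_date
"""

# Run the query and iterate over the result rows.
for row in client.query(sql).result():
    print(row["order_date"], row["total_amount"])
```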
Posted 1 week ago
6.0 - 11.0 years
3 - 7 Lacs
Bengaluru
Work from Office
We're looking for an experienced Senior Data Engineer to lead the design and development of scalable data solutions at our company. The ideal candidate will have extensive hands-on experience in data warehousing, ETL/ELT architecture, and cloud platforms like AWS, Azure, or GCP. You will work closely with both technical and business teams, mentoring engineers while driving data quality, security, and performance optimization. Responsibilities: Lead the design of data warehouses, lakes, and ETL workflows. Collaborate with teams to gather requirements and build scalable solutions. Ensure data governance, security, and optimal performance of systems. Mentor junior engineers and drive end-to-end project delivery. Requirements: 6+ years of experience in data engineering, including at least 2 full-cycle data warehouse projects. Strong skills in SQL, ETL tools (e.g., Pentaho, dbt), and cloud platforms. Expertise in big data tools (e.g., Apache Spark, Kafka). Excellent communication skills and leadership abilities. Preferred: Experience with workflow orchestration tools (e.g., Airflow), real-time data, and DataOps practices.
Posted 1 week ago
15.0 - 20.0 years
6 - 10 Lacs
Mumbai
Work from Office
Location: Mumbai. Experience: 15+ years in data engineering/architecture. Role Overview: Lead the architectural design and implementation of a secure, scalable Cloudera-based Data Lakehouse for one of India’s top public sector banks. Key Responsibilities: * Design end-to-end Lakehouse architecture on Cloudera * Define data ingestion, processing, storage, and consumption layers * Guide data modeling, governance, lineage, and security best practices * Define migration roadmap from existing DWH to CDP * Lead reviews with client stakeholders and engineering teams. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise / Skills Required: * Proven experience with Cloudera CDP, Spark, Hive, HDFS, Iceberg * Deep understanding of Lakehouse patterns and data mesh principles * Familiarity with data governance tools (e.g., Apache Atlas, Collibra) * Banking/FSI domain knowledge highly desirable
Posted 1 week ago
8.0 - 13.0 years
5 - 8 Lacs
Mumbai
Work from Office
Role Overview: Seeking an experienced Apache Airflow specialist to design and manage data orchestration pipelines for batch/streaming workflows in a Cloudera environment. Key Responsibilities: Design, schedule, and monitor DAGs for ETL/ELT pipelines. Integrate Airflow with Cloudera services and external APIs. Implement retries, alerts, logging, and failure recovery. Collaborate with data engineers and DevOps teams. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise / Skills Required: Experience: 3–8 years. Expertise in Airflow 2.x, Python, Bash. Knowledge of CI/CD for Airflow DAGs. Proven experience with Cloudera CDP and Spark/Hive-based data pipelines. Integration with Kafka, REST APIs, databases.
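As a rough illustration of the DAG design, retry, and alerting duties described above, below is a hedged Airflow 2.x sketch; the task bodies, schedule, and alerting hook are placeholder assumptions rather than anything specified in the posting.

```python
# Minimal Airflow 2.x sketch: a daily ETL DAG with retries and a failure callback.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def notify_failure(context):
    # Placeholder alert hook: swap in email/Slack/PagerDuty integration as needed.
    print(f"Task {context['task_instance'].task_id} failed")


def extract(**_):
    print("extract from source system")  # placeholder task logic


def load(**_):
    print("load into the target store")  # placeholder task logic


with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={
        "retries": 3,
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": notify_failure,
    },
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```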
Posted 1 week ago
3.0 - 6.0 years
7 - 12 Lacs
Mumbai
Work from Office
Role Overview: Hiring an ML Engineer with experience in Cloudera ML to support end-to-end model development, deployment, and monitoring on the CDP platform. Key Responsibilities: Develop and deploy models using CML workspaces. Build CI/CD pipelines for the ML lifecycle. Integrate with governance and monitoring tools. Enable secure model serving via REST APIs. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise / Skills Required: Experience in Cloudera ML, Spark MLlib, or scikit-learn. ML pipeline automation (MLflow, Airflow, or equivalent). Model governance, lineage, and versioning. API exposure for real-time inference.
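To make the pipeline automation and model versioning items above concrete, here is a small hedged sketch that trains a scikit-learn model and records it with MLflow; the experiment name and toy dataset are illustrative assumptions, and this is not Cloudera ML specific code.

```python
# Hedged sketch: train a scikit-learn model and log params, metrics, and the
# model artifact with MLflow. Experiment name and data are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-model")  # placeholder experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, artifact_path="model")  # versioned model artifact
```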
Posted 1 week ago
4.0 - 7.0 years
14 - 17 Lacs
Bengaluru
Work from Office
A Data Engineer specializing in enterprise data platforms, experienced in building, managing, and optimizing data pipelines for large-scale environments. Having expertise in big data technologies, distributed computing, data ingestion, and transformation frameworks. Proficient in Apache Spark, PySpark, Kafka, and Iceberg tables, and understands how to design and implement scalable, high-performance data processing solutions.
What you’ll do: As a Data Engineer – Data Platform Services, responsibilities include:
Data Ingestion & Processing: Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake. Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark). Working with IBM CDC and Universal Data Mover to manage data replication and movement.
Big Data & Data Lakehouse Management: Implementing Apache Iceberg tables for efficient data storage and retrieval. Managing distributed data processing with Cloudera Data Platform (CDP). Ensuring data lineage, cataloging, and governance for compliance with Bank/regulatory policies.
Optimization & Performance Tuning: Optimizing Spark and PySpark jobs for performance and scalability. Implementing data partitioning, indexing, and caching to enhance query performance. Monitoring and troubleshooting pipeline failures and performance bottlenecks.
Security & Compliance: Ensuring secure data access, encryption, and masking using Thales CipherTrust. Implementing role-based access controls (RBAC) and data governance policies. Supporting metadata management and data quality initiatives.
Collaboration & Automation: Working closely with Data Scientists, Analysts, and DevOps teams to integrate data solutions. Automating data workflows using Airflow and implementing CI/CD pipelines with GitLab and Sonatype Nexus. Supporting Denodo-based data virtualization for seamless data access.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: 4-7 years of experience in big data engineering, data integration, and distributed computing. Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP). Proficiency in Python or Scala for data processing. Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM). Understanding of data security, encryption, and compliance frameworks.
Preferred technical and professional experience: Experience in banking or financial services data platforms. Exposure to Denodo for data virtualization and DGraph for graph-based insights. Familiarity with cloud data platforms (AWS, Azure, GCP). Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
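As an illustration of the Kafka/PySpark streaming ingestion into Iceberg tables described above, here is a hedged Structured Streaming sketch; the broker, topic, schema, checkpoint path, and catalog/table names are placeholder assumptions, and the SparkSession is assumed to already be configured with an Iceberg catalog.

```python
# Hedged sketch: streaming ingestion from Kafka into an Apache Iceberg table
# with PySpark Structured Streaming. All names below are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-to-iceberg").getOrCreate()

schema = StructType([
    StructField("account_id", StringType()),
    StructField("amount", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "transactions")               # placeholder topic
    .load()
)

# Parse the Kafka value payload (JSON) into typed columns.
events = raw.select(from_json(col("value").cast("string"), schema).alias("e")).select("e.*")

query = (
    events.writeStream.format("iceberg")
    .outputMode("append")
    .option("checkpointLocation", "/tmp/checkpoints/transactions")  # placeholder path
    .toTable("lake.staging.transactions")               # placeholder Iceberg table
)
query.awaitTermination()
```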
Posted 1 week ago
3.0 - 8.0 years
9 - 13 Lacs
Mumbai
Work from Office
Role Overview: As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems. Key Responsibilities: Build scalable batch and real-time ETL pipelines using Spark and Hive. Integrate structured and unstructured data sources. Perform performance tuning and code optimization. Support orchestration and job scheduling (NiFi, Airflow). Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Experience 3-15 years. Proficiency in PySpark/Scala with Hive/Impala. Experience with data partitioning, bucketing, and optimization. Familiarity with Kafka, Iceberg, NiFi is a must. Knowledge of banking or financial datasets is a plus.
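Since the posting calls out data partitioning and bucketing, here is a hedged PySpark sketch of writing a partitioned, bucketed Hive table; the source path, table, and column names are placeholder assumptions.

```python
# Hedged sketch: writing a Hive-managed table with partitioning and bucketing
# in PySpark. Paths, table names, and column names are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("partitioned-write")
    .enableHiveSupport()
    .getOrCreate()
)

txns = spark.read.parquet("/data/raw/transactions")  # placeholder source path

(
    txns.write.mode("overwrite")
    .partitionBy("txn_date")              # prune by date at query time
    .bucketBy(16, "account_id")           # co-locate rows for joins on account_id
    .sortBy("account_id")
    .saveAsTable("curated.transactions")  # placeholder database.table
)
```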
Posted 1 week ago
5.0 years
0 Lacs
Delhi Cantonment, Delhi, India
On-site
What Makes Us a Great Place To Work We are proud to be consistently recognized as one of the world’s best places to work. We are currently the #1 ranked consulting firm on Glassdoor’s Best Places to Work list and have maintained a spot in the top four on Glassdoor’s list since its founding in 2009. Extraordinary teams are at the heart of our business strategy, but these don’t happen by chance. They require intentional focus on bringing together a broad set of backgrounds, cultures, experiences, perspectives, and skills in a supportive and inclusive work environment. We hire people with exceptional talent and create an environment in which every individual can thrive professionally and personally. Who You’ll Work With You’ll join our Application Engineering experts within the AI, Insights & Solutions team. This team is part of Bain’s digital capabilities practice, which includes experts in analytics, engineering, product management, and design. In this multidisciplinary environment, you'll leverage deep technical expertise with business acumen to help clients tackle their most transformative challenges. You’ll work on integrated teams alongside our general consultants and clients to develop data-driven strategies and innovative solutions. Together, we create human-centric solutions that harness the power of data and artificial intelligence to drive competitive advantage for our clients. Our collaborative and supportive work environment fosters creativity and continuous learning, enabling us to consistently deliver exceptional results. What You’ll Do Design, develop, and maintain cloud-based AI applications, leveraging a full-stack technology stack to deliver high-quality, scalable, and secure solutions. Collaborate with cross-functional teams, including product managers, data scientists, and other engineers, to define and implement analytics features and functionality that meet business requirements and user needs. Utilize Kubernetes and containerization technologies to deploy, manage, and scale analytics applications in cloud environments, ensuring optimal performance and availability. Develop and maintain APIs and microservices to expose analytics functionality to internal and external consumers, adhering to best practices for API design and documentation. Implement robust security measures to protect sensitive data and ensure compliance with data privacy regulations and organizational policies. Continuously monitor and troubleshoot application performance, identifying and resolving issues that impact system reliability, latency, and user experience. Participate in code reviews and contribute to the establishment and enforcement of coding standards and best practices to ensure high-quality, maintainable code. Stay current with emerging trends and technologies in cloud computing, data analytics, and software engineering, and proactively identify opportunities to enhance the capabilities of the analytics platform. Collaborate with DevOps and infrastructure teams to automate deployment and release processes, implement CI/CD pipelines, and optimize the development workflow for the analytics engineering team. Collaborate closely with and influence business consulting staff and leaders as part of multi-disciplinary teams to assess opportunities and develop analytics solutions for Bain clients across a variety of sectors. 
Influence, educate and directly support the analytics application engineering capabilities of our clients. Travel is required (30%). ABOUT YOU Required: Master’s degree in Computer Science, Engineering, or a related technical field. 5+ years at Senior or Staff level, or equivalent. Experience with client-side technologies such as React, Angular, Vue.js, HTML and CSS. Experience with server-side technologies such as Django, Flask, FastAPI. Experience with cloud platforms and services (AWS, Azure, GCP) via Terraform automation (good to have). 3+ years of Python expertise. Use Git as your main tool for versioning and collaborating. Experience with DevOps, CI/CD, GitHub Actions. Demonstrated interest in LLMs, prompt engineering, LangChain. Experience with workflow orchestration - it doesn’t matter if it’s dbt, Beam, Airflow, Luigi, Metaflow, Kubeflow, or any other. Experience implementing large-scale structured or unstructured databases, orchestration, and container technologies such as Docker or Kubernetes. Strong interpersonal and communication skills, including the ability to explain and discuss complex engineering technicalities with colleagues and clients from other disciplines in terms they can engage with. Curiosity, proactivity and critical thinking. Strong computer science fundamentals in data structures, algorithms, automated testing, object-oriented programming, performance complexity, and implications of computer architecture on software performance. Strong knowledge in designing API interfaces. Knowledge of data architecture, database schema design and database scalability. Agile development methodologies.
Posted 1 week ago
10.0 - 15.0 years
5 - 9 Lacs
Mumbai
Work from Office
Role Overview: We are hiring a Talend Data Quality Developer to design and implement robust data quality (DQ) frameworks in a Cloudera-based data lakehouse environment. The role focuses on building rule-driven validation and monitoring processes for migrated data pipelines, ensuring high levels of data trust and regulatory compliance across critical banking domains. Key Responsibilities: Design and implement data quality rules using Talend DQ Studio, tailored to validate customer, account, transaction, and KYC datasets within the Cloudera Lakehouse. Create reusable templates for profiling, validation, standardization, and exception handling. Integrate DQ checks within PySpark-based ingestion and transformation pipelines targeting Apache Iceberg tables. Ensure compatibility with Cloudera components (HDFS, Hive, Iceberg, Ranger, Atlas) and job orchestration frameworks (Airflow/Oozie). Perform initial and ongoing data profiling on source and target systems to detect data anomalies and drive rule definitions. Monitor and report DQ metrics through dashboards and exception reports. Work closely with data governance, architecture, and business teams to align DQ rules with enterprise definitions and regulatory requirements. Support lineage and metadata integration with tools like Apache Atlas or external catalogs. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Experience: 5–10 years in data management, with 3+ years in Talend Data Quality tools. Platforms: Experience in Cloudera Data Platform (CDP), with understanding of Iceberg, Hive, HDFS, and Spark ecosystems. Languages/Tools: Talend Studio (DQ module), SQL, Python (preferred), Bash scripting. Data Concepts: Strong grasp of data quality dimensions: completeness, consistency, accuracy, timeliness, uniqueness. Banking Exposure: Experience with financial services data (CIF, AML, KYC, product masters) is highly preferred.
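The posting's primary tooling is Talend DQ, but as a hedged illustration of the kind of rule-driven checks it describes inside a PySpark pipeline, here is a small sketch of completeness, uniqueness, and validity validations; the dataset path, column names, and allowed values are placeholder assumptions.

```python
# Hedged sketch: simple data-quality checks (completeness, uniqueness, validity)
# inside a PySpark pipeline. Paths, columns, and allowed codes are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
customers = spark.read.parquet("/data/staging/customers")  # placeholder path

total = customers.count()

# Completeness: customer_id must never be null.
null_ids = customers.filter(F.col("customer_id").isNull()).count()

# Uniqueness: customer_id must be unique.
duplicate_ids = total - customers.select("customer_id").distinct().count()

# Validity: kyc_status must be one of the allowed codes.
invalid_kyc = customers.filter(
    ~F.col("kyc_status").isin("VERIFIED", "PENDING", "REJECTED")
).count()

failures = {"null_ids": null_ids, "duplicate_ids": duplicate_ids, "invalid_kyc": invalid_kyc}
if any(count > 0 for count in failures.values()):
    # A real pipeline would raise, alert, or divert bad rows to an exception table.
    print(f"DQ check failures out of {total} rows: {failures}")
```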
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description The Position We are seeking a seasoned engineer with a passion for changing the way millions of people save energy. You’ll work within the Deliver and Operate team to build and improve our platforms to deliver flexible and creative solutions to our utility partners and end users and help us achieve our ambitious goals for our business and the planet. We are seeking a highly skilled and detail-oriented Software Engineer I for the Data Operations team to maintain our data infrastructure, pipelines, and workflows. You will play a key role in ensuring the smooth ingestion, transformation, validation, and delivery of data across systems. This role is ideal for someone with a strong understanding of data engineering and operational best practices who thrives in high-availability environments. Responsibilities & Skills You should: Monitor and maintain data pipelines and ETL processes to ensure reliability and performance. Automate routine data operations tasks and optimize workflows for scalability and efficiency. Troubleshoot and resolve data-related issues, ensuring data quality and integrity. Collaborate with data engineering, analytics, and DevOps teams to support data infrastructure. Implement monitoring, alerting, and logging systems for data pipelines. Maintain and improve data governance, access controls, and compliance with data policies. Support deployment and configuration of data tools, services, and platforms. Participate in on-call rotation and incident response related to data system outages or failures. Required Skills 2 to 4 years of experience in data operations, data engineering, or a related role. Strong SQL skills and experience with relational databases (e.g., PostgreSQL, MySQL). Proficiency with data pipeline tools (e.g., Apache Airflow). Experience with cloud platforms (AWS, GCP) and cloud-based data services (e.g., Redshift, BigQuery). Familiarity with scripting languages such as Python, Bash, or Shell. Knowledge of version control (e.g., Git) and CI/CD workflows. Qualifications Bachelor's degree in Computer Science, Engineering, Data Science, or a related field. Experience with data observability tools (e.g., Splunk, DataDog). Background in DevOps or SRE with focus on data systems. Exposure to infrastructure-as-code (e.g., Terraform, CloudFormation). Knowledge of streaming data platforms (e.g., Kafka, Spark Streaming).
Posted 1 week ago
15.0 - 20.0 years
5 - 9 Lacs
Mumbai
Work from Office
Location: Mumbai. Role Overview: As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems. Key Responsibilities: Build scalable batch and real-time ETL pipelines using Spark and Hive. Integrate structured and unstructured data sources. Perform performance tuning and code optimization. Support orchestration and job scheduling (NiFi, Airflow). Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Experience: 3–15 years. Proficiency in PySpark/Scala with Hive/Impala. Experience with data partitioning, bucketing, and optimization. Familiarity with Kafka, Iceberg, NiFi is a must. Knowledge of banking or financial datasets is a plus.
Posted 1 week ago
4.0 - 7.0 years
14 - 17 Lacs
Gurugram
Work from Office
A Data Engineer specializing in enterprise data platforms, experienced in building, managing, and optimizing data pipelines for large-scale environments. Having expertise in big data technologies, distributed computing, data ingestion, and transformation frameworks. Proficient in Apache Spark, PySpark, Kafka, and Iceberg tables, and understands how to design and implement scalable, high-performance data processing solutions.
What you’ll do: As a Data Engineer – Data Platform Services, responsibilities include:
Data Ingestion & Processing: Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake. Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark). Working with IBM CDC and Universal Data Mover to manage data replication and movement.
Big Data & Data Lakehouse Management: Implementing Apache Iceberg tables for efficient data storage and retrieval. Managing distributed data processing with Cloudera Data Platform (CDP). Ensuring data lineage, cataloging, and governance for compliance with Bank/regulatory policies.
Optimization & Performance Tuning: Optimizing Spark and PySpark jobs for performance and scalability. Implementing data partitioning, indexing, and caching to enhance query performance. Monitoring and troubleshooting pipeline failures and performance bottlenecks.
Security & Compliance: Ensuring secure data access, encryption, and masking using Thales CipherTrust. Implementing role-based access controls (RBAC) and data governance policies. Supporting metadata management and data quality initiatives.
Collaboration & Automation: Working closely with Data Scientists, Analysts, and DevOps teams to integrate data solutions. Automating data workflows using Airflow and implementing CI/CD pipelines with GitLab and Sonatype Nexus. Supporting Denodo-based data virtualization for seamless data access.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: 4-7 years of experience in big data engineering, data integration, and distributed computing. Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP). Proficiency in Python or Scala for data processing. Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM). Understanding of data security, encryption, and compliance frameworks.
Preferred technical and professional experience: Experience in banking or financial services data platforms. Exposure to Denodo for data virtualization and DGraph for graph-based insights. Familiarity with cloud data platforms (AWS, Azure, GCP). Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
Posted 1 week ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
What I’ll Be Doing – Your Accountabilities: Design and implement machine learning models and algorithms to solve complex business problems. Collaborate closely with data analysts and product teams to understand requirements and deliver impactful ML solutions. Build and maintain scalable data pipelines and model training workflows using modern tools and frameworks. Deploy models into production environments, ensuring performance, reliability, and scalability. Monitor and evaluate model performance over time, retraining and fine-tuning as needed to maintain accuracy and relevance. Contribute to MLOps practices, including versioning, automation, and continuous integration/deployment of ML models. Document work clearly and effectively to ensure reproducibility and knowledge sharing across teams. Stay up to date with the latest advancements in machine learning and AI, and proactively bring new ideas and techniques to the table. Job Responsibilities: Design, build, and deploy scalable machine learning models and pipelines. Collaborate with cross-functional teams to understand business requirements and translate them into ML solutions. Perform data preprocessing, feature engineering, and model evaluation. Optimize model performance and ensure robustness in production environments. Monitor and maintain deployed models, retraining and updating as needed. Contribute to the development of internal tools and frameworks for ML operations (MLOps). Stay current with the latest research and trends in machine learning and AI. Skills Required For The Job: 4–7 years of experience building production-quality software. Bachelor's or Master's degree and/or equivalent professional experience. Proficiency in Python and ML libraries such as scikit-learn, TensorFlow, PyTorch. Hands-on experience with Generative AI models, including Large Language Models (LLMs) like GPT, BERT, or LLaMA. Proficiency in prompt engineering, fine-tuning, and in-context learning for GenAI applications. Experience with data manipulation tools (e.g., Pandas, NumPy) and SQL. Familiarity with cloud platforms (AWS) and containerization (Docker, Kubernetes). Strong understanding of machine learning algorithms, model evaluation metrics, and deployment strategies. Experience with version control (Git) and collaborative development workflows. Familiarity with one or more MLOps frameworks (MLflow, Kubeflow, Airflow, etc.). Experience you would be expected to have: A proactive attitude to enhancements and bringing service improvements and best practices. Experience working with large data sets. Demonstrates continued personal/professional development. Soft skills: strong problem-solving and analytical skills; excellent communication and teamwork abilities. Programming skills: Python, SQL, shell scripting. About Us BT Group was the world’s first telco and our heritage in the sector is unrivalled. As home to several of the UK’s most recognised and cherished brands – BT, EE, Openreach and Plusnet, we have always played a critical role in creating the future, and we have reached an inflection point in the transformation of our business. Over the next two years, we will complete the UK’s largest and most successful digital infrastructure project – connecting more than 25 million premises to full fibre broadband. Together with our heavy investment in 5G, we play a central role in revolutionising how people connect with each other.
While we are through the most capital-intensive phase of our fibre investment, meaning we can reward our shareholders for their commitment and patience, we are absolutely focused on how we organise ourselves in the best way to serve our customers in the years to come. This includes radical simplification of systems, structures, and processes on a huge scale. Together with our application of AI and technology, we are on a path to creating the UK's best telco, reimagining the customer experience and relationship with one of this country's biggest infrastructure companies. Change on the scale we will all experience in the coming years is unprecedented. BT Group is committed to being the driving force behind improving connectivity for millions and there has never been a more exciting time to join a company and leadership team with the skills, experience, creativity, and passion to take this company into a new era. A FEW POINTS TO NOTE: Although these roles are listed as full-time, if you're a job share partnership, work reduced hours, or any other way of working flexibly, please still get in touch. We will also offer reasonable adjustments for the selection process if required, so please do not hesitate to inform us. DON'T MEET EVERY SINGLE REQUIREMENT? Studies have shown that women and people who are disabled, LGBTQ+, neurodiverse or from ethnic minority backgrounds are less likely to apply for jobs unless they meet every single qualification and criteria. We're committed to building a diverse, inclusive, and authentic workplace where everyone can be their best, so if you're excited about this role but your past experience doesn't align perfectly with every requirement on the Job Description, please apply anyway - you may just be the right candidate for this or other roles in our wider team.
Posted 1 week ago
1.0 - 3.0 years
3 - 7 Lacs
Chennai
Hybrid
Strong experience in Python. Good experience in Databricks. Experience working in AWS/Azure cloud platforms. Experience working with REST APIs and services, messaging and event technologies. Experience with ETL or building data pipeline tools. Experience with streaming platforms such as Kafka. Demonstrated experience working with large and complex data sets. Ability to document data pipeline architecture and design. Experience in Airflow is nice to have. Ability to build complex Delta Lake solutions.
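As a hedged illustration of the Databricks/Delta Lake experience this posting asks for, here is a minimal PySpark sketch that lands an ingested dataset as a partitioned Delta table; the source path and table name are placeholder assumptions.

```python
# Hedged sketch: writing an ingested dataset to a Delta Lake table with PySpark
# (for example on Databricks). Source path and table name are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("build-delta").getOrCreate()

orders = (
    spark.read.json("/mnt/raw/orders/")        # placeholder raw landing zone
    .withColumn("ingest_date", F.current_date())
)

(
    orders.write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .saveAsTable("bronze.orders")              # placeholder Delta table
)
```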
Posted 1 week ago
1.0 - 3.0 years
2 - 5 Lacs
Chennai
Work from Office
Mandatory Skills: AWS, Python, SQL, Spark, Airflow, Snowflake. Responsibilities: Create and manage cloud resources in AWS. Data ingestion from different data sources which expose data using different technologies, such as RDBMS, REST HTTP API, flat files, streams, and time series data based on various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies. Data processing/transformation using various technologies such as Spark and cloud services. You will need to understand your part of the business logic and implement it using the language supported by the base data platform. Develop automated data quality checks to make sure the right data enters the platform and verify the results of the calculations. Develop an infrastructure to collect, transform, combine and publish/distribute customer data. Define process improvement opportunities to optimize data collection, insights and displays. Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible. Identify and interpret trends and patterns from complex data sets. Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. Key participant in regular Scrum ceremonies with the agile teams. Proficient at developing queries, writing reports and presenting findings. Mentor junior members and bring best industry practices.
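As one hedged illustration of the REST-API-to-AWS ingestion described above, here is a small sketch that pulls records from an HTTP API and lands the raw payload in S3 with boto3; the endpoint URL, bucket, and key prefix are placeholder assumptions.

```python
# Hedged sketch: ingest data from a REST HTTP API and land the raw payload in S3.
# The endpoint, bucket, and key prefix are hypothetical.
import datetime
import json

import boto3
import requests

API_URL = "https://api.example.com/v1/orders"  # placeholder endpoint
BUCKET = "my-raw-data-bucket"                  # placeholder bucket

response = requests.get(API_URL, params={"updated_since": "2024-01-01"}, timeout=30)
response.raise_for_status()
records = response.json()  # assumed to be a JSON list of records

# Land the payload date-partitioned, ready for downstream Spark/Snowflake processing.
key = f"raw/orders/dt={datetime.date.today():%Y-%m-%d}/orders.json"
boto3.client("s3").put_object(Bucket=BUCKET, Key=key, Body=json.dumps(records).encode("utf-8"))
print(f"wrote {len(records)} records to s3://{BUCKET}/{key}")
```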
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru East, Karnataka, India
Remote
Visa is a world leader in payments and technology, with over 259 billion payments transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose – to uplift everyone, everywhere by being the best way to pay and be paid. Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa. Job Description The Client Services BI & Analytics team strives to create an open, trusting data culture where the cost of curiosity – the number of steps, amount of time, and complexity of effort needed to use operational data to derive insights – is as low as possible. We govern Client Services’ operational data and metrics, create easily usable dashboards and data sources, and analyze data to share insights. We are a part of the Client Services Global Business Operations function and work with all levels of stakeholders, from executive leaders sharing insights with the C-Suite to customer-facing colleagues who rely on our assets to incorporate data into their daily responsibilities. This specialist role makes data available from new sources, builds robust data models, creates and optimizes data enrichment pipelines, and provides engineering support to specific projects. You will partner with our Data Visualizers and Solution Designers to ensure that data needed by the business is available and accurate and to develop certified data sets. This technical lead and architect role is a force multiplier to our Visualizers, Analysts, and other data users across Client Services. Responsibilities Design, develop, and maintain scalable data pipelines and systems. Monitor and troubleshoot data pipeline issues to ensure seamless data flow. Establish data processes and automation based on business and technology requirements, leveraging Visa’s supported data platforms and tools Deliver small to large data engineering and Machine learning projects either individually or as part of a project team Setup ML Ops pipelines to Productionalize ML models and setting up Gen AI pipelines Collaborate with cross-functional teams to understand data requirements and ensure data quality, with a focus on implementing data validation and data quality checks at various stages of the pipeline Provide expertise in data warehousing, ETL, and data modeling to support data-driven decision making, with a strong understanding of best practices in data pipeline design and performance optimization Extract and manipulate large datasets using standard tools such as Hadoop (Hive), Spark, Python (pandas, NumPy), Presto, and SQL Develop data solutions using Agile principles Provide ongoing production support Communicate complex concepts in a clear and effective manner Stay up to date with the latest data engineering trends and technologies to ensure the company's data infrastructure is always state-of-the-art, with an understanding of best practices in cloud-based data engineering This is a remote position. A remote position does not require job duties be performed within proximity of a Visa office location. Remote positions may be required to be present at a Visa office with scheduled notice. Qualifications Basic Qualifications -2 or more years of work experience with a Bachelor’s Degree or an Advanced Degree (e.g. 
Masters, MBA, JD, MD, or PhD) Preferred Qualifications -3 or more years of work experience with a Bachelor’s Degree or more than 2 years of work experience with an Advanced Degree (e.g. Masters, MBA, JD, MD) -3+ years of work experience with a bachelor’s degree in the STEM field. -Strong experience with SQL, Python, Hadoop, Spark, Hive, Airflow and MPP databases -5+ years of analytics experience with a focus on data engineering and AI -Experience with both traditional data warehousing tools and techniques (such as SSIS, ODI, and on-prem SQL Server, Oracle) as well as modern technologies (such as Hadoop, Denodo, Spark, Airflow, and Python), and a solid understanding of best practices in data engineering -Advanced knowledge of SQL (e.g., understands subqueries, self-joining tables, stored procedures, can read an execution plan, SQL tuning, etc.) -Solid understanding of best practices in data warehousing, ETL, data modeling, and data architecture. -Experience with NoSQL databases (e.g., MongoDB, Cassandra) -Experience with cloud-based data warehousing and data pipeline management (AWS, GCP, Azure) -Experience in Python, Spark, and exposure to scheduling tools like Tuber/Airflow is preferred. -Able to create data dictionaries, set up and monitor data validation alerts, and execute periodic jobs to maintain data pipelines for completed projects -Experience with visualization software (e.g., Tableau, QlikView, PowerBI) is a plus. -A team player and collaborator, able to work well with a diverse group of individuals in a matrixed environment Additional Information Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.
Posted 1 week ago
9.0 - 12.0 years
35 - 40 Lacs
Bengaluru
Work from Office
We are seeking an experienced AWS Architect with a strong background in designing and implementing cloud-native data platforms. The ideal candidate should possess deep expertise in AWS services such as S3, Redshift, Aurora, Glue, and Lambda, along with hands-on experience in data engineering and orchestration tools. Strong communication and stakeholder management skills are essential for this role. Key Responsibilities Design and implement end-to-end data platforms leveraging AWS services. Lead architecture discussions and ensure scalability, reliability, and cost-effectiveness. Develop and optimize solutions using Redshift, including stored procedures, federated queries, and Redshift Data API. Utilize AWS Glue and Lambda functions to build ETL/ELT pipelines. Write efficient Python code and data frame transformations, along with unit testing. Manage orchestration tools such as AWS Step Functions and Airflow. Perform Redshift performance tuning to ensure optimal query execution. Collaborate with stakeholders to understand requirements and communicate technical solutions clearly. Required Skills & Qualifications Minimum 9 years of IT experience with proven AWS expertise. Hands-on experience with AWS services: S3, Redshift, Aurora, Glue, and Lambda. Mandatory experience working with AWS Redshift, including stored procedures and performance tuning. Experience building end-to-end data platforms on AWS. Proficiency in Python, especially working with data frames and writing testable, production-grade code. Familiarity with orchestration tools like Airflow or AWS Step Functions. Excellent problem-solving skills and a collaborative mindset. Strong verbal and written communication and stakeholder management abilities. Nice to Have Experience with CI/CD for data pipelines. Knowledge of AWS Lake Formation and Data Governance practices.
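Because the posting specifically names the Redshift Data API, here is a hedged boto3 sketch of submitting a query through it and reading the result; the cluster, database, secret ARN, and SQL are placeholder assumptions.

```python
# Hedged sketch: run SQL through the Redshift Data API with boto3 and fetch rows.
# Cluster, database, secret, and query are placeholders.
import time

import boto3

client = boto3.client("redshift-data", region_name="ap-south-1")

resp = client.execute_statement(
    ClusterIdentifier="analytics-cluster",  # placeholder cluster
    Database="analytics",                   # placeholder database
    SecretArn="arn:aws:secretsmanager:region:account:secret:redshift-creds",  # placeholder
    Sql="SELECT region, SUM(net_sales) FROM sales.daily_summary GROUP BY region",
)

# Poll until the statement finishes, then read the result set.
while client.describe_statement(Id=resp["Id"])["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)

for record in client.get_statement_result(Id=resp["Id"])["Records"]:
    print(record)
```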
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job description: Overall more than 5 years of experience in data projects. Good knowledge of GCP, BigQuery, SQL, Python, and Dataflow. Has worked on implementation projects building data pipelines, transformation logic, and data models. Job Title: GCP Data Engineer. Belongs to: Data Management Engineering. Education: Bachelor of Engineering in any discipline or equivalent. Desired Candidate Profile: Technology Engineering Expertise: 4 years of experience in implementing data solutions using GCP, BigQuery, and SQL programming. Proficient in dealing with data access layers (RDBMS, NoSQL). Experience in implementing and deploying Big Data applications with GCP Big Data Services. Good to have SQL skills. Able to deal with a diverse set of stakeholders. Proficient in articulation, communication, and presentation. High integrity, problem-solving skills, learning attitude, team player. Key Responsibilities: Implement data solutions using GCP; need to be familiar with programming in SQL/Python. Ensure clarity on NFRs and implement these requirements. Work with the Client Technical Manager by understanding the customer's landscape and their IT priorities. Lead performance engineering and capacity planning exercises for databases. Technology Engineering Expertise: 4 years of experience in implementing data pipelines for Data Analytics solutions. Experience in solutions using Google Cloud Dataflow, Apache Beam, and Java programming. Proficient in dealing with data access layers (RDBMS, NoSQL). Experience in implementing and deploying Big Data applications with GCP Big Data Services. Good to have SQL skills. Experience with different development methodologies (RUP, Scrum, XP). Soft skills: Able to deal with a diverse set of stakeholders. Proficient in articulation, communication, and presentation. High integrity, problem-solving skills, learning attitude, team player. Mandatory Skills: GCP Storage, GCP BigQuery, GCP DataProc, GCP Cloud Composer, GCP DMS, Apache Airflow, Java, Python, Scala, GCP Datastream, Google Analytics Hub, GCP Workflows, GCP Dataform, GCP Datafusion, GCP Pub/Sub, ANSI-SQL, GCP Dataflow, GCP Cloud Pub/Sub, Big Data Hadoop Ecosystem.
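To give a flavour of the Dataflow/Apache Beam work mentioned above, here is a hedged Beam Python sketch that reads CSV lines from GCS and writes rows to BigQuery; the bucket, project, table, and schema are placeholder assumptions, and running on Dataflow would require the usual runner/project/region pipeline options.

```python
# Hedged sketch: an Apache Beam pipeline reading CSV lines from GCS and writing
# to BigQuery. Bucket, project, table, and schema are placeholders; pass
# --runner=DataflowRunner (plus project/region/temp_location) to run on Dataflow.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_line(line):
    user_id, amount = line.split(",")
    return {"user_id": user_id, "amount": float(amount)}


options = PipelineOptions()  # defaults to the local DirectRunner

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/input/*.csv")  # placeholder bucket
        | "Parse" >> beam.Map(parse_line)
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:analytics.user_amounts",  # placeholder table
            schema="user_id:STRING,amount:FLOAT",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```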
Posted 1 week ago
5.0 - 9.0 years
15 - 20 Lacs
Hyderabad
Hybrid
About Us: Our global community of colleagues bring a diverse range of experiences and perspectives to our work. You'll find us working from a corporate office or plugging in from a home desk, listening to our customers and collaborating on solutions. Our products and solutions are vital to businesses of every size, scope and industry. And at the heart of our work you'll find our core values: to be data inspired, relentlessly curious and inherently generous. Our values are the constant touchstone of our community; they guide our behavior and anchor our decisions. Designation: Software Engineer II Location: Hyderabad KEY RESPONSIBILITIES Design, build, and deploy new data pipelines within our Big Data Eco-Systems using StreamSets/Talend/Informatica BDM etc. Document new/existing pipelines and datasets. Design ETL/ELT data pipelines using StreamSets, Informatica or any other ETL processing engine. Familiarity with Data Pipelines, Data Lakes and modern Data Warehousing practices (virtual data warehouse, push down analytics etc.). Expert level programming skills on Python. Expert level programming skills on Spark. Cloud Based Infrastructure: GCP. Experience with one of the ETL tools (Informatica, StreamSets) in creation of complex parallel loads, Cluster Batch Execution and dependency creation using Jobs/Topologies/Workflows etc. Experience in SQL and conversion of SQL stored procedures into Informatica/StreamSets. Strong exposure working with web service origins/targets/processors/executors, XML/JSON sources and RESTful APIs. Strong exposure working with relational databases DB2, Oracle & SQL Server including complex SQL constructs and DDL generation. Exposure to Apache Airflow for scheduling jobs. Strong knowledge of Big Data architecture (HDFS), cluster installation, configuration, monitoring, cluster security, cluster resources management, maintenance, and performance tuning. Create POCs to enable new workloads and technical capabilities on the Platform. Work with the platform and infrastructure engineers to implement these capabilities in production. Manage workloads and enable workload optimization including managing resource allocation and scheduling across multiple tenants to fulfill SLAs. Participate in planning activities and Data Science work, and perform activities to increase platform skills. KEY REQUIREMENTS Minimum 6 years of experience in ETL/ELT technologies, preferably StreamSets/Informatica/Talend etc. Minimum of 6 years hands-on experience with Big Data technologies e.g. Hadoop, Spark, Hive. Minimum 3+ years of experience on Spark. Minimum 3 years of experience in Cloud environments, preferably GCP. Minimum of 2 years working in a Big Data service delivery (or equivalent) role focusing on the following disciplines: Any experience with NoSQL and Graph databases. Informatica or StreamSets data integration (ETL/ELT). Exposure to role and attribute based access controls. Hands-on experience with managing solutions deployed in the Cloud, preferably on GCP. Experience working in a Global company, working in a DevOps model is a plus. Dun & Bradstreet is an Equal Opportunity Employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, creed, sex, age, national origin, citizenship status, disability status, sexual orientation, gender identity or expression, pregnancy, genetic information, protected military and veteran status, ancestry, marital status, medical condition (cancer and genetic characteristics) or any other characteristic protected by law.
We are committed to Equal Employment Opportunity and providing reasonable accommodations to qualified candidates and employees. If you are interested in applying for employment with Dun & Bradstreet and need special assistance or an accommodation to use our website or to apply for a position, please send an e-mail with your request to acquisitiont@dnb.com. Determinations on requests for reasonable accommodation are made on a case-by-case basis.
Posted 1 week ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position Overview: We’re looking for a hands-on Data Lead to architect and deliver end-to-end data features that power our booking and content management systems. You’ll lead a team, own the data pipeline lifecycle, and play a critical role in shaping and scaling our data infrastructure, spanning batch, real-time, and NoSQL environments. ShyftLabs is a growing data product company founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries, by focusing on creating value through innovation. Job Responsibilities: Lead the development and implementation of booking and CMS data features according to the roadmap. Build, optimize, and manage robust data pipelines and ETL/ELT processes using tools like Airflow, DBT, and Databricks. Oversee infrastructure and storage layers across distributed systems (e.g., Cassandra, MongoDB, Postgres), ensuring scalability, availability, and performance. Support and partner with Data PMs to deliver clean, actionable data for reporting, analytics, and experimentation. Handle client data queries, investigate anomalies, and proactively improve data quality and debugging workflows. Manage a team of data engineers and analysts: provide architectural direction, review code, and support career growth. Collaborate with DevOps/Platform teams on deployment, monitoring, and performance optimization of data services. Champion best practices in data governance, version control, security, and documentation. Basic Qualifications: 8+ years of experience in data engineering or analytics engineering with a strong focus on both data modeling and infrastructure. 2+ years of experience managing and guiding a data team to deliver complex data features. Proficient in SQL, Python, and working with modern data stack tools (e.g., DBT, Databricks, Airflow). Experience managing distributed and NoSQL databases (e.g., Cassandra, MongoDB), and cloud data warehouses (e.g., Snowflake, BigQuery). Strong understanding of scalable data architecture and real-time streaming pipelines is a plus. Experience in leading teams, setting code standards, and mentoring junior developers. Ability to translate business requirements into scalable, maintainable data systems. Familiarity with booking platforms, CMS architectures, or event-based tracking systems is a plus! We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Responsibilities Manage Data: Extract, clean, and structure both structured and unstructured data. Coordinate Pipelines: Utilize tools such as Airflow, Step Functions, or Azure Data Factory to orchestrate data workflows. Deploy Models: Develop, fine-tune, and deploy models using platforms like SageMaker, Azure ML, or Vertex AI. Scale Solutions: Leverage Spark or Databricks to handle large-scale data processing tasks. Automate Processes: Implement automation using tools like Docker, Kubernetes, CI/CD pipelines, MLflow, Seldon, and Kubeflow. Collaborate Effectively: Work alongside engineers, architects, and business stakeholders to address and resolve real-world problems efficiently. Qualifications 3+ years of hands-on experience in MLOps (4-5 years of overall software development experience). Extensive experience with at least one major cloud provider (AWS, Azure, or GCP). Proficiency in using Databricks, Spark, Python, SQL, TensorFlow, PyTorch, and scikit-learn. Expertise in debugging Kubernetes and creating efficient Dockerfiles. Experience in prototyping with open-source tools and scaling solutions effectively. Strong analytical skills, humility, and a proactive approach to problem-solving. Preferred Qualifications Experience with SageMaker, Azure ML, or Vertex AI in a production environment. Commitment to writing clean code, creating clear documentation, and maintaining concise pull requests. Skills: SQL, Kubeflow, Spark, Docker, Databricks, ML, GCP, MLflow, Kubernetes, AWS, PyTorch, Azure, CI/CD, TensorFlow, scikit-learn, Seldon, Python, MLOps.
Posted 1 week ago
8.0 - 10.0 years
2 - 8 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lay within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Senior Manager Technology – US Commercial Data & Analytics What you will do Let’s do this. Let’s change the world. In this vital role you will lead the engagement model between Amgen's Technology organization and our global business partners in Commercial Data & Analytics. We seek a technology leader with a passion for innovation and a collaborative working style that partners effectively with business and technology leaders. Are you interested in building a team that consistently delivers business value in an agile model using technologies such as AWS, Databricks, Airflow, and Tableau? Come join our team! Roles & Responsibilities: Establish an effective engagement model to collaborate with senior leaders on the Sales Insights product team within the Commercial Data & Analytics organization, focused on operations within the United States. Serve as the technology product owner for an agile product team committed to delivering business value to Commercial stakeholders via data pipeline buildout for sales data. Lead and mentor junior team members to deliver on the needs of the business. Interact with business clients and technology management to create technology roadmaps, build business cases, and drive DevOps to achieve the roadmaps. Help to mature Agile operating principles through deployment of creative and consistent practices for user story development, robust testing and quality oversight, and focus on user experience. Ability to connect and understand our vast array of Commercial and other functional data sources including Sales, Activity, and Digital data, etc. into consumable and user-friendly modes (e.g., dashboards, reports, mobile, etc.) for key decision makers such as executives, brand leads, account managers, and field representatives. Become the lead subject matter expert in reporting technology capabilities by researching and implementing new tools and features, internal and external methodologies. What we expect of you We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Master’s degree with 8 - 10 years of Information Systems experience OR Bachelor’s degree with 10 - 14 years of Information Systems experience OR Diploma with 14 - 18 years of Information Systems experience. Must-Have Skills: Excellent problem-solving skills and a passion for tackling complex challenges in data and analytics with technology. Experience leading data and analytics teams in a Scaled Agile Framework (SAFe). Excellent interpersonal skills, strong attention to detail, and ability to influence based on data and business value. Ability to build compelling business cases with accurate cost and effort estimations. Has experience with writing user requirements and acceptance criteria in agile project management systems such as Jira. Ability to explain sophisticated technical concepts to non-technical clients. Strong understanding of sales and incentive compensation value streams. Preferred Qualifications: Jira Align & Confluence experience. Experience with DevOps, Continuous Integration, and Continuous Delivery methodology. Understanding of software systems strategy, governance, and infrastructure. Experience in managing product features for PI planning and developing product roadmaps and user journeys. Familiarity with low-code, no-code test automation software. Technical thought leadership. Soft Skills: Able to work effectively across multiple geographies (primarily India, Portugal, and the United States) under minimal supervision. Demonstrated proficiency in written and verbal communication in the English language. Skilled in providing oversight and mentoring team members. Demonstrated ability in effectively delegating work. Intellectual curiosity and the ability to question partners across functions. Ability to prioritize successfully based on business value. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully across virtual teams. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills. Technical Skills: ETL tools: Experience in ETL tools such as Databricks, Redshift, or equivalent cloud-based databases. Big Data, Analytics, Reporting, Data Lake, and Data Integration technologies. S3 or equivalent storage system. AWS (or similar cloud-based platforms). BI Tools (Tableau and Power BI preferred).
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
10.0 years
5 - 5 Lacs
Hyderābād
On-site
We are seeking a Senior Manager - Pricing Analytics for the pricing team in Thomson Reuters. The Central Pricing Team works with Pricing Managers, Business Units, Product Marketing Managers, Finance and Sales in price execution of new product launches, maintenance of existing ones, and creation & maintenance of data products for reporting & analytics. The team is responsible for providing product and pricing information globally to all internal stakeholders and collaborating with upstream and downstream teams to ensure offer pricing readiness. Apart from BAU, the team works on various automation, pricing transformation projects & pricing analytics initiatives. About the Role In this role as a Senior Manager - Pricing Analytics, you will: Lead and mentor a team of pricing analysts, data engineers, and BI developers. Drive operational excellence by fostering a culture of data quality, accountability, and continuous improvement. Manage team capacity, project prioritization, and cross-functional coordination with Segment Pricing, Finance, Sales, and Analytics teams. Partner closely with the Pricing team to translate business objectives into actionable analytics deliverables. Drive insights on pricing performance, discounting trends, segmentation, and monetization opportunities. Oversee design and execution of robust ETL pipelines to consolidate data from multiple sources (e.g., Salesforce, EMS, UNISON, SAP, Pendo, product usage platforms, etc.). Ensure delivery of intuitive, self-service dashboards and reports that track key pricing KPIs, sales performance, and customer behaviour. Strategize, deploy and promote scalable analytics architecture and best practices in data governance, modelling, and visualization. Act as a trusted advisor to Pricing leadership by delivering timely, relevant, and accurate data insights. Collaborate with analytics, finance, segment pricing and data platform teams to align on data availability, definitions, and architecture. Shift Timings: 2 PM to 11 PM (IST). Work from office for 2 days in a week (Mandatory). About You You’re a fit for the role of Senior Manager - Pricing Analytics if your background includes: 10+ years of experience in analytics, data science, or business intelligence, with 3+ years in a people leadership or managerial role. Proficiency in SQL, ETL tools (e.g. Alteryx, dbt, Airflow), and BI platforms (e.g., Tableau, Power BI, Looker). Knowledge of Python, R, or other statistical tools is a plus. Experience with data from Salesforce, SAP, other CRM, ERP or CPQ tools. Ability to translate complex data into actionable insights and communicate effectively with senior stakeholders. Strong understanding of data analytics, monetization metrics, and SaaS pricing practices. Proven experience working in a B2B SaaS or software product company preferred. MBA, Master’s in Analytics, Engineering, or a quantitative field preferred. #LI-GS2 What’s in it For You? Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset.
This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans that include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.
About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news.
We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.
As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here.
Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
Posted 1 week ago
5.0 years
6 - 9 Lacs
Hyderābād
On-site
Job Description
Our company is an innovative, global healthcare leader committed to improving health and well-being around the world with a diversified portfolio of prescription medicines, vaccines and animal health products. We continue to focus our research on conditions that affect millions of people around the world - diseases like Alzheimer's, diabetes and cancer - while expanding our strengths in areas like vaccines and biologics. Our ability to excel depends on the integrity, knowledge, imagination, skill, diversity and teamwork of individuals like you. To this end, we strive to create an environment of mutual respect, encouragement and teamwork. As part of our global team, you'll have the opportunity to collaborate with talented and dedicated colleagues while developing and expanding your career.
As a Digital Supply Chain Data Modeler/Engineer, you will work as a member of the Digital Manufacturing Division team supporting the Enterprise Orchestration Platform. You will be responsible for identifying, assessing, and solving complex business problems related to manufacturing and supply chain. You will receive training to achieve this, and you'll be amazed at the diversity of opportunities to develop your potential and grow professionally. You will collaborate with business stakeholders to determine the analytical capabilities that enable Insights-focused solutions aligned to business needs, and ensure that delivery of these solutions meets quality requirements.
The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare.
Be part of an organization driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products.
Drive innovation and execution excellence. Be part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.
Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections and share best practices across the Tech Centers.
Job Description
As Data Modeler Lead, you will be responsible for the following (but not limited to):
Deliver divisional analytics initiatives with a primary focus on data modeling for all analytics, advanced analytics and AI/ML use cases, e.g., self-service, business intelligence and analytics, data exploration, data wrangling.
Host and lead requirement/process workshops to understand data modeling requirements.
Analyze business requirements and work with the architecture team to deliver and contribute to feasibility analyses, implementation plans and high-level estimates.
Based on business processes and analysis of data sources, deliver detailed ETL designs with data model mappings covering all areas of data warehousing for all analytics use cases.
Create data models and transformation mappings in the modeling tool and deploy them to databases, including creation of scheduled orchestration jobs.
Deploy data modeling configuration to target systems (SIT, UAT and Prod).
Understand product ownership and management; lead the data model as a product for focus areas of the digital supply chain domain.
Create required SDLC documentation per project requirements.
Optimize and industrialize existing database and data transformation solutions.
Prepare and update data modeling and data warehousing best practices along with foundational platforms.
Work closely with foundational product teams, the business, vendors, and technology support teams to deliver business initiatives.
Position Qualifications:
Education Minimum Requirement: B.S. or M.S. in IT, Engineering, Computer Science, or related fields.
Required Experience and Skills:
5+ years of relevant work experience, with demonstrated expertise in data modeling for DWH, Data Mesh or similar analytics implementations; experience delivering end-to-end DWH solutions, from designing the warehouse to deploying the solution.
3+ years of experience creating logical and physical data models in a modeling tool (SAP PowerDesigner, WhereScape, etc.).
Experience creating data modeling standards, best practices and implementation processes.
High proficiency in information management, data analysis and reporting requirement elicitation.
Experience extracting business rules to develop transformations, data lineage, and dimensional data models.
Experience validating outputs of legacy and newly developed data models.
Development experience using WhereScape and similar ETL/data modeling tools.
Exposure to Qlik or similar BI dashboarding applications.
Advanced knowledge of SQL and data transformation practices.
Deep understanding of data modeling and preparation of optimal data structures.
Able to communicate with business, data transformation and reporting teams.
Knowledge of ETL methods, and a willingness to learn ETL technologies.
Fluent in English.
Experience with Redshift or similar databases, including DDL, DML, query optimization, schema management and security (a brief, illustrative sketch follows this list).
Experience with Airflow or similar orchestration tools.
Exposure to CI/CD tools.
Exposure to AWS services such as S3, the AWS Console, Glue and Spectrum.
Able to independently support business discussions, analyze requirements, and develop and deliver code.
Work with US and European teams during overlapping work hours.
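To make the dimensional-modeling and Redshift DDL/DML expectations above concrete, here is a minimal sketch of a product dimension and a staging-based refresh. The schema, table and connection details are hypothetical placeholders, and real models and load logic will differ per project.

```python
# Minimal sketch: create a product dimension on Amazon Redshift and refresh it
# from a staging table. Table names, columns and connection details are hypothetical.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS dw.dim_product (
    product_key   BIGINT IDENTITY(1,1),
    product_id    VARCHAR(40)  NOT NULL,   -- natural key from the source system
    product_name  VARCHAR(200),
    category      VARCHAR(100),
    load_ts       TIMESTAMP DEFAULT GETDATE()
)
DISTSTYLE ALL            -- small dimension: replicate to every node
SORTKEY (product_id);
"""

UPSERT = """
-- Delete rows being replaced, then insert the latest staged versions.
DELETE FROM dw.dim_product
USING stage.product_updates s
WHERE dw.dim_product.product_id = s.product_id;

INSERT INTO dw.dim_product (product_id, product_name, category)
SELECT product_id, product_name, category
FROM stage.product_updates;
"""

def refresh_dim_product(conn_params: dict) -> None:
    """Run the DDL and the delete/insert upsert pattern commonly used on Redshift."""
    with psycopg2.connect(**conn_params) as conn:   # commits on clean exit
        with conn.cursor() as cur:
            cur.execute(DDL)
            cur.execute(UPSERT)

if __name__ == "__main__":
    refresh_dim_product({
        "host": "example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        "port": 5439, "dbname": "dw", "user": "etl_user", "password": "...",
    })
```

The delete-then-insert pattern stands in for a MERGE-style upsert; a slowly changing dimension would add effective-date columns rather than overwriting rows.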
Preferred Experience and Skills:
Experience working on projects where Agile methodology is leveraged.
Understanding of data management best practices and data analytics.
Ability to lead requirements sessions with clients and project teams.
Strong leadership, verbal and written communication skills, with the ability to articulate results and issues to internal and client teams.
Demonstrated experience in the Life Science space.
Exposure to SAP and RapidResponse domain data is a plus.
Current Employees apply HERE
Current Contingent Workers apply HERE
Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Not Applicable
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Agile Data Warehousing, Agile Methodology, Animal Vaccination, Business, Business Communications, Business Intelligence (BI), Computer Science, Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Data Warehousing (DW), Design Applications, Digital Supply Chain, Digital Supply Chain Management, Digital Transformation, Information Management, Information Technology Operations, Physical Data Models, Software Development, Software Development Life Cycle (SDLC), Supply Chain Optimization, Supply Management, System Designs
Preferred Skills:
Job Posting End Date: 06/30/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R351878
Posted 1 week ago
5.0 - 10.0 years
7 - 10 Lacs
Hyderābād
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
AWS Data Engineer - Senior
We are seeking a highly skilled and motivated, hands-on AWS Data Engineer with 5-10 years of experience in AWS Glue, PySpark, AWS Redshift, S3, and Python to join our dynamic team. As a Data Engineer, you will be responsible for designing, developing, and optimizing data pipelines and solutions that support business intelligence, analytics, and large-scale data processing. You will work closely with data scientists, analysts, and other engineering teams to ensure seamless data flow across our systems.
Technical Skills (must have):
Strong experience with AWS data services such as Glue, Lambda, EventBridge, Kinesis, S3/EMR, Redshift, RDS, Step Functions, Airflow and PySpark.
Strong exposure to IAM, CloudTrail, cluster optimization, Python and SQL.
Expertise in data design, STTM, understanding of data models, data component design, automated testing, code coverage, UAT support, deployment and go-live.
Experience with version control systems such as SVN and Git.
Create and manage AWS Glue crawlers and jobs to automate data cataloging and ingestion across various structured and unstructured data sources.
Strong experience building ETL pipelines with AWS Glue, managing crawlers, and working with the Glue Data Catalog (a brief sketch follows this posting).
Proficiency in AWS Redshift: designing and managing Redshift clusters, writing complex SQL queries, and optimizing query performance.
Enable data consumption from reporting and analytics business applications using AWS services (e.g., QuickSight, SageMaker, JDBC/ODBC connectivity).
Behavioural skills:
Willing to work 5 days a week from the ODC / client location (depending on the project, this can be hybrid with 3 days a week on-site).
Ability to lead developers and engage with client stakeholders to drive technical decisions.
Ability to produce technical designs and POCs; help build and analyse the logical data model, required entities, relationships, data constraints and dependencies focused on enabling reporting and analytics business use cases.
Able to work in an Agile environment.
Strong communication skills.
Good to have:
Exposure to Financial Services, Wealth and Asset Management.
Exposure to data science and full-stack technologies; GenAI is an added advantage.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
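For orientation on the Glue skills named above, the following is a minimal sketch of a Glue PySpark job that reads a table registered by a crawler in the Glue Data Catalog, applies a simple column mapping, and writes Parquet to S3. The database, table and bucket names are hypothetical placeholders; a production job would also handle bookmarks, partitioning and error handling.

```python
# Minimal AWS Glue PySpark job sketch: catalog table -> column mapping -> Parquet on S3.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table that a Glue crawler has registered in the Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw",          # hypothetical catalog database
    table_name="orders_csv",       # hypothetical crawled table
)

# Rename and cast columns on the way into the curated zone.
curated = ApplyMapping.apply(
    frame=orders,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("order_ts", "string", "order_timestamp", "timestamp"),
        ("amount", "string", "order_amount", "double"),
    ],
)

# Write Parquet to the curated S3 prefix.
glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)

job.commit()
```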
Posted 1 week ago
The Airflow job market in India is growing rapidly as more companies adopt data pipelines and workflow automation. Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with Airflow expertise can find lucrative opportunities in industries such as technology, e-commerce, finance, and more.
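As a small illustration of what Airflow orchestration looks like in practice, here is a minimal DAG with two dependent tasks; the DAG id, schedule and task bodies are placeholder choices, assuming Airflow 2.x.

```python
# Minimal Airflow 2.x DAG sketch: extract then load, scheduled daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull data from a source system (stubbed here).
    print("extracting source data")


def load():
    # Write the transformed data to the warehouse (stubbed here).
    print("loading into the warehouse")


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",      # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task   # load runs only after extract succeeds
```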
The average salary range for Airflow professionals in India varies by experience level:
- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum
In the field of Airflow, a typical career path may progress as follows:
- Junior Airflow Developer
- Airflow Developer
- Senior Airflow Developer
- Airflow Tech Lead
In addition to Airflow expertise, professionals in this field are often expected to have or develop skills in:
- Python programming
- ETL concepts
- Database management (SQL)
- Cloud platforms (AWS, GCP)
- Data warehousing
As you explore job opportunities in the Airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay updated with the latest trends in Airflow, and demonstrate your problem-solving abilities to stand out in the competitive job market. Good luck!
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.