4.0 - 9.0 years
6 - 16 Lacs
Hyderabad
Work from Office
Notice Period: Immediate to 15 Days
Skills: Java 8/11, Spring Boot, Microservices, JUnit / Mockito, Karate / Gherkin, MySQL Database, Kafka / Avro, Git, Pivotal Cloud Foundry, Jenkins
Posted 1 month ago
5.0 - 7.0 years
1 - 1 Lacs
Lucknow
Hybrid
Technical Experience
- 5-7 years of hands-on experience in data pipeline development and ETL processes
- 3+ years of deep AWS experience, specifically with Kinesis, Glue, Lambda, S3, and Step Functions
- Strong proficiency in NodeJS/JavaScript and Java for serverless and containerized applications
- Production experience with Apache Spark, Apache Flink, or similar big data processing frameworks
Data Engineering Expertise
- Proven experience with real-time streaming architectures and event-driven systems
- Hands-on experience with Parquet, Avro, Delta Lake, and columnar storage optimization
- Experience implementing data quality frameworks such as Great Expectations or similar tools
- Knowledge of star schema modeling, slowly changing dimensions, and data warehouse design patterns
- Experience with medallion architecture or similar progressive data refinement strategies
AWS Skills
- Experience with Amazon EMR, Amazon MSK (Kafka), or Amazon Kinesis Analytics
- Knowledge of Apache Airflow for workflow orchestration
- Experience with DynamoDB, ElastiCache, and Neptune for specialized data storage
- Familiarity with machine learning pipelines and Amazon SageMaker integration
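For orientation, here is a minimal, hedged sketch (in Python) of the serverless ingestion step this posting centres on: a Lambda handler that decodes Kinesis records and lands them in S3 as JSON lines. The bucket name, prefix, and record layout are illustrative assumptions, not details from the posting.

```python
# Minimal sketch of a serverless ingestion step: an AWS Lambda handler that
# decodes Kinesis records and lands them in S3 as JSON lines. Bucket, prefix,
# and record layout are hypothetical.
import base64
import json
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = "example-raw-zone"   # hypothetical bucket
PREFIX = "ingest/kinesis"     # hypothetical prefix


def handler(event, context):
    """Triggered by a Kinesis event source mapping."""
    rows = []
    for record in event.get("Records", []):
        # Kinesis delivers the payload base64-encoded.
        payload = base64.b64decode(record["kinesis"]["data"])
        rows.append(json.loads(payload))

    if rows:
        key = f"{PREFIX}/{uuid.uuid4()}.json"
        body = "\n".join(json.dumps(r) for r in rows)
        s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))

    return {"ingested": len(rows)}
```

In a fuller pipeline, Step Functions or Glue would typically pick these raw objects up for the downstream transformation stages the posting describes.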
Posted 1 month ago
7.0 - 12.0 years
18 - 33 Lacs
Hyderabad, Bengaluru
Hybrid
- Bachelor's degree in Computer Science, Engineering, or equivalent experience
- 7+ years of experience in core Java and the Spring Framework (Required)
- 2+ years of cloud experience (GCP, AWS, or Azure; GCP preferred) (Required)
- Experience in big data processing on a distributed system (Required)
- Experience with databases: RDBMS, NoSQL, and cloud-native databases (Required)
- Experience handling various data formats such as flat files, JSON, Avro, and XML, including defining schemas and contracts (Required)
- Experience implementing data pipelines (ETL) using Dataflow (Apache Beam)
- Experience in microservices and integration patterns for APIs with data processing
- Experience in data structures and in defining and designing data models
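As a small illustration of the Dataflow (Apache Beam) pipeline work the role mentions, the following Python sketch reads JSON lines, projects a couple of fields, and writes the results back out; the bucket paths and field names are made up, and a real Dataflow run would add runner and project options.

```python
# Illustrative Apache Beam pipeline of the kind Dataflow executes: read JSON
# lines, keep a few fields, write results back out. Paths and field names are
# placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def to_record(line):
    """Parse one JSON line and keep the fields we care about."""
    doc = json.loads(line)
    return {"id": doc.get("id"), "amount": doc.get("amount", 0)}


def run():
    # On GCP you would pass --runner=DataflowRunner, --project, --region, etc.
    options = PipelineOptions()
    with beam.Pipeline(options=options) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/input/*.json")
            | "Parse" >> beam.Map(to_record)
            | "Serialize" >> beam.Map(json.dumps)
            | "Write" >> beam.io.WriteToText("gs://example-bucket/output/records")
        )


if __name__ == "__main__":
    run()
```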
Posted 1 month ago
8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
About the Position
Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. A Senior Data Engineer designs and oversees the entire data infrastructure, data products and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost effective.
Key Responsibilities
- Oversee the entire data infrastructure to ensure scalability, operational efficiency and resiliency.
- Mentor junior data engineers within the organization.
- Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Azure Fabric).
- Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen 2 and Azure Blob Storage).
- Collaborate with data scientists, data analysts, data architects and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize data pipelines in the Azure environment for performance, scalability, and reliability.
- Ensure data quality and integrity through data validation techniques and frameworks.
- Develop and maintain documentation for data processes, configurations, and best practices.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge.
- Manage the CI/CD process for deploying and maintaining data solutions.
Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience), with demonstrable proficiency in programming fundamentals.
- At least 5 years of proven experience as a Data Engineer or in a similar role dealing with data and ETL processes; 5-10 years of experience overall.
- Strong knowledge of Microsoft Azure services, including Azure Data Factory, Azure Synapse, Azure Databricks, Azure Blob Storage and Azure Data Lake Gen 2.
- Experience using SQL DML to query modern RDBMSs efficiently (e.g., SQL Server, PostgreSQL).
- Strong understanding of software engineering principles and how they apply to data engineering (e.g., CI/CD, version control, testing).
- Experience with big data technologies (e.g., Spark).
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
Preferred Qualifications
- Learning agility.
- Technical leadership.
- Consulting on and managing business needs.
- Strong experience in Python is preferred, but experience in other languages such as Scala, Java, or C# is accepted.
- Experience building Spark applications using PySpark.
- Experience with file formats such as Parquet, Delta, and Avro.
- Experience efficiently querying API endpoints as a data source.
- Understanding of the Azure environment and related services such as subscriptions, resource groups, etc.
- Understanding of Git workflows in software development.
- Using Azure DevOps pipelines and repositories to deploy and maintain solutions.
- Understanding of Ansible and how to use it in Azure DevOps pipelines.
Chevron ENGINE supports global operations, supporting business requirements across the world. Accordingly, the work hours for employees will be aligned to support business requirements. The standard work week will be Monday to Friday.
Working hours are 8:00 am to 5:00 pm or 1:30 pm to 10:30 pm. Chevron participates in E-Verify in certain locations as required by law.
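To make the Databricks and Data Lake responsibilities above concrete, here is a minimal PySpark sketch (not Chevron's actual pipeline) that reads raw JSON from Azure Data Lake Storage Gen 2 and appends it to a partitioned Delta table; the storage account, container, and column names are assumptions.

```python
# Hedged sketch of a Databricks-style job: read raw JSON from ADLS Gen 2,
# apply a light transformation, and write a Delta table. Storage account,
# container, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-to-delta").getOrCreate()

raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/events/"
delta_path = "abfss://curated@examplestorage.dfs.core.windows.net/events_delta/"

df = (
    spark.read.json(raw_path)
    .withColumn("ingest_date", F.current_date())
    .dropDuplicates(["event_id"])  # assumes an event_id column exists
)

(
    df.write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save(delta_path)
)
```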
Posted 1 month ago
6.0 - 10.0 years
2 - 6 Lacs
Pune
Work from Office
Req ID: 323909
We are currently seeking a Data Ingest Engineer to join our team in Pune, Maharashtra (IN-MH), India (IN).
Job Duties: The Applications Development Technology Lead Analyst is a senior-level position responsible for establishing and implementing new or revised application systems and programs in coordination with the Technology team. This is a position within the Ingestion team of the DRIFT data ecosystem. The focus is on ingesting data in a timely, complete, and comprehensive fashion while using the latest technology available to Citi. We look for the ability to leverage new and creative methods for repeatable data ingestion from a variety of data sources while always asking "is this the best way to solve this problem?" and "am I providing the highest quality data to my downstream partners?".
Responsibilities:
- Partner with multiple management teams to ensure appropriate integration of functions to meet goals, and identify and define necessary system enhancements to deploy new products and process improvements
- Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
- Provide expertise in the area and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint
- Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
- Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
- Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
- Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
Minimum Skills Required:
- 6-10 years of relevant experience in an applications development or systems analysis role
- Extensive experience in systems analysis and in programming of software applications
- Application development using Java, Scala, and Spark
- Familiarity with event-driven applications and streaming data
- Experience with Confluent Kafka, HDFS, Hive, and structured and unstructured database systems (SQL and NoSQL)
- Experience with various schemas and data types such as JSON, Avro, and Parquet
- Experience with various ELT methodologies and formats such as JDBC, ODBC, API, webhook, and SFTP
- Experience working with Agile and version control tool sets (JIRA, Bitbucket, Git, etc.)
- Ability to adjust priorities quickly as circumstances dictate
- Demonstrated leadership and project management skills
- Consistently demonstrates clear and concise written and verbal communication
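The ingestion pattern described here (Confluent Kafka feeding Spark into columnar formats such as Parquet) can be sketched briefly. The posting's stack is Java/Scala Spark; the equivalent Structured Streaming job is shown below in PySpark for compactness, with broker, topic, and paths as placeholders.

```python
# Sketch of a Kafka-to-lake ingestion job in Spark Structured Streaming.
# Broker, topic, and paths are placeholders; a real job would also apply a
# schema to the message value before writing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "example.topic")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka exposes key/value as binary; cast to string before downstream parsing.
events = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    F.col("timestamp"),
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/landing/example_topic")
    .option("checkpointLocation", "/data/checkpoints/example_topic")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```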
Posted 1 month ago
5.0 - 8.0 years
0 - 1 Lacs
Hyderabad
Hybrid
Location: Hyderabad (Hybrid). Please share your resume with +91 9361912009.
Roles and Responsibilities
- Deep understanding of Linux, networking and security fundamentals.
- Experience working with the AWS cloud platform and infrastructure.
- Experience working with infrastructure as code using Terraform or Ansible.
- Experience managing large BigData clusters in production (at least one of Cloudera, Hortonworks, or EMR).
- Excellent knowledge and solid work experience providing observability for BigData platforms using tools like Prometheus, InfluxDB, Dynatrace, Grafana, Splunk, etc.
- Expert knowledge of Hadoop Distributed File System (HDFS) and Hadoop YARN.
- Decent knowledge of various Hadoop file formats like ORC, Parquet, Avro, etc.
- Deep understanding of Hive (Tez), Hive LLAP, Presto and Spark compute engines.
- Ability to understand query plans and optimize performance for complex SQL queries on Hive and Spark.
- Experience supporting Spark with Python (PySpark) and R (sparklyr, SparkR).
- Solid professional coding experience with at least one scripting language, such as Shell or Python.
- Experience working with Data Analysts, Data Scientists and at least one related analytical application like SAS, RStudio, JupyterHub, H2O, etc.
- Able to read and understand code (Java, Python, R, Scala), with expertise in at least one scripting language such as Python or Shell.
Nice to have skills:
- Experience with workflow management tools like Airflow, Oozie, etc.
- Knowledge of analytical libraries like Pandas, NumPy, SciPy, PyTorch, etc.
- Implementation history of Packer, Chef, Jenkins or other similar tooling.
- Prior working knowledge of Active Directory and Windows OS based VDI platforms like Citrix, AWS Workspaces, etc.
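Since the role calls for reading query plans to tune Hive and Spark SQL, this short PySpark snippet shows one way to surface a plan for inspection; the table and filter are purely illustrative.

```python
# Quick way to inspect a query plan from PySpark; table and predicate are
# hypothetical. The formatted plan shows parsed/analyzed/optimized/physical
# stages, where partition pruning and join strategies become visible.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("plan-check").enableHiveSupport().getOrCreate()

df = spark.sql("""
    SELECT customer_id, SUM(amount) AS total
    FROM sales.transactions          -- hypothetical Hive table
    WHERE txn_date >= '2024-01-01'
    GROUP BY customer_id
""")

df.explain(mode="formatted")
```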
Posted 1 month ago
3.0 - 8.0 years
4 - 8 Lacs
Chennai
Work from Office
Your Profile
As a senior software engineer with Capgemini, you will have:
- 3+ years of experience in Scala with a strong project track record
- Hands-on experience as a Scala/Spark developer
- Hands-on SQL writing skills on RDBMS (DB2) databases
- Experience working with different file formats like JSON, Parquet, Avro, ORC and XML
- Must have worked in an HDFS platform development project
- Proficiency in data analysis, data profiling, and data lineage
- Strong oral and written communication skills
- Experience working in Agile projects
Your Role
- Work on Hadoop, Spark, Hive and SQL queries
- Perform code optimization for performance, scalability and configurability
- Data application development at scale in the Hadoop ecosystem
What you'll love about working here
Choosing Capgemini means having the opportunity to make a difference, whether for the world's leading businesses or for society. It means getting the support you need to shape your career in the way that works for you. It means when the future doesn't look as bright as you'd like, you have the opportunity to make change and to rewrite it. When you join Capgemini, you don't just start a new job. You become part of something bigger. A diverse collective of free-thinkers, entrepreneurs and experts, all working together to unleash human energy through technology, for an inclusive and sustainable future. At Capgemini, people are at the heart of everything we do! You can exponentially grow your career by being part of innovative projects and taking advantage of our extensive Learning & Development programs. With us, you will experience an inclusive, safe, healthy, and flexible work environment to bring out the best in you! You also get a chance to make positive social change and build a better world by taking an active role in our Corporate Social Responsibility and Sustainability initiatives. And whilst you make a difference, you will also have a lot of fun.
About Company
Posted 1 month ago
6.0 - 9.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Hands-on experience in Java 8, Spring Boot and microservices. Deep knowledge of data structures and algorithms. Strong experience in microservices patterns (Decompose, Strangler, Saga, Event Sourcing, CQRS, transactional messaging). Familiarity with PCF apps, Docker, Kubernetes / OpenShift. Experience in backend testing using JUnit/Mockito, MySQL, Kafka, and Avro. Experience in DDD, BDD, TDD. Hands-on experience with CI/CD (Jenkins) and tools like GitHub/Git. Experience working in an Agile environment and a good understanding of Agile processes. Primary Skills: Java 8, Spring Boot, Microservices, Data Structures, Algorithms. Secondary Skills: Angular.
Posted 1 month ago
8.0 - 10.0 years
10 - 12 Lacs
Hyderabad
Work from Office
ABOUT THE ROLE
Role Description: We are seeking a highly skilled and experienced hands-on Test Automation Engineering Manager with deep expertise in Data Quality (DQ), Data Integration (DIF), and Data Governance. In this role, you will design and implement automated frameworks that ensure data accuracy, metadata consistency, and compliance throughout the data pipeline, leveraging technologies like Databricks, AWS, and cloud-native tools. You will have a major focus on Data Cataloguing and Governance, ensuring that data assets are well-documented, auditable, and secure across the enterprise.
In this role, you will be responsible for the end-to-end design and development of a test automation framework, working collaboratively with the team. As the delivery owner for test automation, your primary focus will be on building and automating comprehensive validation frameworks for data cataloging, data classification, and metadata tracking, while ensuring alignment with internal governance standards. You will also work closely with data engineers, product teams, and data governance leads to enforce data quality and governance policies. Your efforts will play a key role in driving data integrity, consistency, and trust across the organization. The role is highly technical and hands-on, with a strong focus on automation, metadata validation, and ensuring data governance practices are seamlessly integrated into development pipelines.
Roles & Responsibilities:
Data Quality & Integration Frameworks
- Design and implement Data Quality (DQ) frameworks that validate schema compliance, transformations, completeness, null checks, duplicates, threshold rules, and referential integrity.
- Build Data Integration Frameworks (DIF) that validate end-to-end data pipelines across ingestion, processing, storage, and consumption layers.
- Automate data validations in Databricks/Spark pipelines, integrated with AWS services like S3, Glue, Athena, and Lake Formation.
- Develop modular, reusable validation components using PySpark, SQL, and Python, with orchestration via CI/CD pipelines.
Data Cataloging & Governance
- Integrate automated validations with the AWS Glue Data Catalog to ensure metadata consistency, schema versioning, and lineage tracking.
- Implement checks to verify that data assets are properly cataloged, discoverable, and compliant with internal governance standards.
- Validate and enforce data classification, tagging, and access controls, ensuring alignment with data governance frameworks (e.g., PII/PHI tagging, role-based access policies).
- Collaborate with governance teams to automate policy enforcement and compliance checks for audit and regulatory needs.
Visualization & UI Testing
- Automate validation of data visualizations in tools like Tableau, Power BI, Looker, or custom React dashboards.
- Ensure charts, KPIs, filters, and dynamic views correctly reflect backend data using UI automation (Selenium with Python) and backend validation logic.
- Conduct API testing (via Postman or Python test suites) to ensure accurate data delivery to visualization layers.
Technical Skills and Tools
- Hands-on experience with data automation tools like Databricks and AWS is essential, as the manager will be instrumental in building and managing data pipelines.
- Leverage automated testing frameworks and containerization tools to streamline processes and improve efficiency.
- Experience in UI and API functional validation using tools such as Selenium with Python and Postman, ensuring comprehensive testing coverage.
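As a rough illustration of the DQ checks listed above (completeness, duplicates, threshold rules), here is a condensed PySpark sketch of the kind of validation component such a framework might run inside a CI/CD-triggered job; the table location and column names are assumptions.

```python
# Condensed sketch of automated DQ checks: completeness, uniqueness, and a
# threshold rule expressed as PySpark assertions. Table path and columns are
# hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.format("delta").load("/mnt/curated/orders")  # assumed location

failures = []

# Completeness: key columns must not be null.
null_ids = df.filter(F.col("order_id").isNull()).count()
if null_ids > 0:
    failures.append(f"order_id has {null_ids} nulls")

# Uniqueness: no duplicate business keys.
dupes = df.groupBy("order_id").count().filter("count > 1").count()
if dupes > 0:
    failures.append(f"{dupes} duplicate order_id values")

# Threshold rule: amounts must be non-negative.
bad_amounts = df.filter(F.col("amount") < 0).count()
if bad_amounts > 0:
    failures.append(f"{bad_amounts} negative amounts")

if failures:
    raise AssertionError("DQ checks failed: " + "; ".join(failures))
```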
Technical Leadership, Strategy & Team Collaboration
- Define and drive the overall QA and testing strategy for UI and search-related components with a focus on scalability, reliability, and performance, while establishing alerting and reporting mechanisms for test failures, data anomalies, and governance violations.
- Contribute to system architecture and design discussions, bringing a strong quality and testability lens early into the development lifecycle.
- Lead test automation initiatives by implementing best practices and scalable frameworks, embedding test suites into CI/CD pipelines to enable automated, continuous validation of data workflows, catalog changes, and visualization updates.
- Mentor and guide QA engineers, fostering a collaborative, growth-oriented culture focused on continuous learning and technical excellence.
- Collaborate cross-functionally with product managers, developers, and DevOps to align quality efforts with business goals and release timelines.
- Conduct code reviews, test plan reviews, and pair-testing sessions to ensure team-level consistency and high-quality standards.
Good-to-Have Skills:
- Experience with data governance tools such as Apache Atlas, Collibra, or Alation
- Understanding of DataOps methodologies and practices
- Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch
- Experience building or maintaining test data generators
- Contributions to internal quality dashboards or data observability systems
- Awareness of metadata-driven testing approaches and lineage-based validations
- Experience working with agile testing methodologies such as Scaled Agile
- Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest
Must-Have Skills:
- Strong hands-on experience with Data Quality (DQ) framework design and automation
- Expertise in PySpark, Python, and SQL for data validations
- Solid understanding of ETL/ELT pipeline testing in Databricks or Apache Spark environments
- Experience validating structured and semi-structured data formats (e.g., Parquet, JSON, Avro)
- Deep familiarity with AWS data services: S3, Glue, Athena, Lake Formation, Data Catalog
- Integration of test automation with the AWS Glue Data Catalog or similar catalog tools
- UI automation using Selenium with Python for dashboard and web interface validation
- API testing using Postman, Python, or custom API test scripts
- Hands-on testing of BI tools such as Tableau, Power BI, Looker, or custom visualization layers
- CI/CD test integration with tools like Jenkins, GitHub Actions, or GitLab CI
- Familiarity with containerized environments (e.g., Docker, AWS ECS/EKS)
- Knowledge of data classification, access control validation, and PII/PHI tagging
- Understanding of data governance standards (e.g., GDPR, HIPAA, CCPA)
- Understanding of data structures: knowledge of various data structures and their applications, and the ability to analyze data and identify inconsistencies
- Proven hands-on experience in test automation and data automation using Databricks and AWS
- Strong knowledge of Data Integrity Frameworks (DIF) and Data Quality (DQ) principles
- Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest
- Strong understanding of data transformation techniques and logic
Education and Professional Certifications
Bachelor's degree in Computer Science and Engineering preferred; other engineering fields considered. Master's degree and 6+ years of experience, or Bachelor's degree and 8+ years of experience.
Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.
Posted 2 months ago
8.0 - 13.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Data Engineer
Location: Bangalore - Onsite | Experience: 8 - 15 years | Type: Full-time
Role Overview
We are seeking an experienced Data Engineer to build and maintain scalable, high-performance data pipelines and infrastructure for our next-generation data platform. The platform ingests and processes real-time and historical data from diverse industrial sources such as airport systems, sensors, cameras, and APIs. You will work closely with AI/ML engineers, data scientists, and DevOps to enable reliable analytics, forecasting, and anomaly detection use cases.
Key Responsibilities
- Design and implement real-time (Kafka, Spark/Flink) and batch (Airflow, Spark) pipelines for high-throughput data ingestion, processing, and transformation.
- Develop data models and manage data lakes and warehouses (Delta Lake, Iceberg, etc.) to support both analytical and ML workloads.
- Integrate data from diverse sources: IoT sensors, databases (SQL/NoSQL), REST APIs, and flat files.
- Ensure pipeline scalability, observability, and data quality through monitoring, alerting, validation, and lineage tracking.
- Collaborate with AI/ML teams to provision clean and ML-ready datasets for training and inference.
- Deploy, optimize, and manage pipelines and data infrastructure across on-premise and hybrid environments.
- Participate in architectural decisions to ensure resilient, cost-effective, and secure data flows.
- Contribute to infrastructure-as-code and automation for data deployment using Terraform, Ansible, or similar tools.
Qualifications & Required Skills
- Bachelor's or Master's in Computer Science, Engineering, or a related field.
- 6+ years in data engineering roles, with at least 2 years handling real-time or streaming pipelines.
- Strong programming skills in Python/Java and SQL.
- Experience with Apache Kafka, Apache Spark, or Apache Flink for real-time and batch processing.
- Hands-on with Airflow, dbt, or other orchestration tools.
- Familiarity with data modeling (OLAP/OLTP), schema evolution, and format handling (Parquet, Avro, ORC).
- Experience with hybrid/on-prem and cloud platform (AWS/GCP/Azure) deployments.
- Proficient in working with data lakes/warehouses like Snowflake, BigQuery, Redshift, or Delta Lake.
- Knowledge of DevOps practices, Docker/Kubernetes, Terraform or Ansible.
- Exposure to data observability, data cataloging, and quality tools (e.g., Great Expectations, OpenMetadata).
Good-to-Have
- Experience with time-series databases (e.g., InfluxDB, TimescaleDB) and sensor data.
- Prior experience in domains such as aviation, manufacturing, or logistics is a plus.
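For the batch side of such a platform, orchestration typically looks like the hedged Airflow sketch below: one DAG with an ingest task feeding a transform task on a daily schedule. Task bodies are stubs and all names are illustrative.

```python
# Illustrative Airflow DAG: ingest, then transform, once a day. Task bodies
# are stubs; DAG id, schedule, and task names are made up for the example.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_batch(**_):
    print("pull files / query sources into the landing zone")


def transform_batch(**_):
    print("run the Spark job that builds curated tables")


with DAG(
    dag_id="sensor_batch_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_batch)
    transform = PythonOperator(task_id="transform", python_callable=transform_batch)

    # Transform only runs after a successful ingest.
    ingest >> transform
```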
Posted 2 months ago
3.0 - 8.0 years
9 - 13 Lacs
Gurugram
Work from Office
About the Role: Grade Level (for internal use): 10
The Team: As a member of the Data Transformation team you will work on building ML powered products and capabilities to power natural language understanding, data extraction, information retrieval and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged towards thoughtful risk-taking and self-initiative.
The Impact: The Data Transformation team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will be developing our next generation of new products while enhancing existing ones, aiming at solving high-impact business problems.
What's in it for you:
- Be a part of a global company and build solutions at enterprise scale
- Collaborate with a highly skilled and technically strong team
- Contribute to solving high complexity, high impact problems
Key Responsibilities:
- Build production ready data acquisition and transformation pipelines from ideation to deployment
- Be a hands-on problem solver and developer helping to extend and manage the data platforms
- Apply best practices in data modeling and building ETL pipelines (streaming and batch) using cloud-native solutions
What We're Looking For:
- 3-5 years of professional software work experience
- Expertise in Python and Apache Spark
- OOP design patterns, Test-Driven Development and enterprise system design
- Experience building data processing workflows and APIs using frameworks such as FastAPI, Flask etc.
- Proficiency in API integration, experience working with REST APIs and integrating external & internal data sources
- SQL (any variant, bonus if this is a big data variant)
- Linux OS (e.g. bash toolset and other utilities)
- Version control system experience with Git, GitHub, or Azure DevOps
- Problem-solving and debugging skills
- Software craftsmanship, adherence to Agile principles and taking pride in writing good code
- Techniques to communicate change to non-technical people
Nice to have:
- Core Java 17+, preferably Java 21+, and associated toolchain
- DevOps with a keen interest in automation
- Apache Avro
- Apache Kafka
- Kubernetes
- Cloud expertise (AWS and GCP preferably)
- Other JVM based languages, e.g. Kotlin, Scala
- C#, in particular .NET Core
- Data warehouses (e.g., Redshift, Snowflake, BigQuery)
What's In It For You
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it.
We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries
Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)
Posted 2 months ago
3.0 - 8.0 years
11 - 16 Lacs
Gurugram
Work from Office
About the Role: Grade Level (for internal use): 11
The Team: As a member of the Data Transformation team you will work on building ML powered products and capabilities to power natural language understanding, data extraction, information retrieval and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged towards thoughtful risk-taking and self-initiative.
The Impact: The Data Transformation team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will be developing our next generation of new products while enhancing existing ones, aiming at solving high-impact business problems.
What's in it for you:
- Be a part of a global company and build solutions at enterprise scale
- Collaborate with a highly skilled and technically strong team
- Contribute to solving high complexity, high impact problems
Key Responsibilities:
- Build production ready data acquisition and transformation pipelines from ideation to deployment
- Be a hands-on problem solver and developer helping to extend and manage the data platforms
- Architect and lead the development of end-to-end data ingestion and processing pipelines to support downstream ML workflows
- Apply best practices in data modeling and building ETL pipelines (streaming and batch) using cloud-native solutions
- Mentor junior and mid-level data engineers and provide technical guidance and best practices
What We're Looking For:
- 7-10 years of professional software work experience
- Expertise in Python and Apache Spark
- OOP design patterns, Test-Driven Development and enterprise system design
- SQL (any variant, bonus if this is a big data variant)
- Proficiency in optimizing data flows for performance, storage, and cost efficiency
- Linux OS (e.g. bash toolset and other utilities)
- Version control system experience with Git, GitHub, or Azure DevOps
- Problem-solving and debugging skills
- Software craftsmanship, adherence to Agile principles and taking pride in writing good code
- Techniques to communicate change to non-technical people
Nice to have:
- Core Java 17+, preferably Java 21+, and associated toolchain
- DevOps with a keen interest in automation
- Apache Avro
- Apache Kafka
- Kubernetes
- Cloud expertise (AWS and GCP preferably)
- Other JVM based languages, e.g. Kotlin, Scala
- C#, in particular .NET Core
What's In It For You
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it.
We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries
Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)
Posted 2 months ago
2.0 - 3.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Job Title: Java Developer
About Us: Capco, a Wipro company, is a global technology and management consulting firm. It was awarded Consultancy of the Year in the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities across the globe, we support 100+ clients across the banking, financial and energy sectors. We are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO: You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, projects that will transform the financial services industry.
MAKE AN IMPACT: Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK: Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT: With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION: We believe that diversity of people and perspective gives us a competitive advantage.
Work location: Pune. JD as below (5+ years of experience, 30 days notice only):
We are looking for a highly skilled Java Developer with expertise in Spring Boot, Confluent Kafka, and distributed systems. The ideal candidate should have strong experience in designing, developing, and optimizing event-driven applications using Confluent Kafka while leveraging Spring Boot/Spring Cloud for microservices-based architectures.
Key Responsibilities:
- Develop, deploy, and maintain scalable and high-performance applications using Java (Core Java, Collections, Multithreading, Executor Services, CompletableFuture, etc.)
- Work extensively with Confluent Kafka, including producer-consumer frameworks, offset management, and optimization of consumer instances based on message volume.
- Ensure efficient message serialization and deserialization using JSON, Avro, and Protobuf with the Kafka Schema Registry.
- Design and implement event-driven architectures with real-time processing capabilities.
- Optimize Kafka consumers for high-throughput and low-latency scenarios.
- Collaborate with cross-functional teams to ensure seamless integration and deployment of services.
- Troubleshoot and resolve performance bottlenecks and scalability issues in distributed environments.
- Familiarity with containerization (Docker, Kubernetes) and cloud platforms is a plus.
- Experience with monitoring and logging tools such as Splunk is a plus.
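The serialization responsibility above centres on the Kafka Schema Registry. The JD's stack is Java/Spring Boot, but the registry-backed Avro pattern can be sketched compactly with the confluent-kafka Python client; the registry URL, topic, and schema are placeholders.

```python
# Sketch of registry-backed Avro production with the confluent-kafka client:
# register/serialize against a schema, then produce. URLs, topic, and schema
# are hypothetical.
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

SCHEMA = """
{
  "type": "record",
  "name": "Payment",
  "fields": [
    {"name": "id", "type": "string"},
    {"name": "amount", "type": "double"}
  ]
}
"""

registry = SchemaRegistryClient({"url": "http://schema-registry:8081"})
serializer = AvroSerializer(registry, SCHEMA)
producer = Producer({"bootstrap.servers": "broker:9092"})

topic = "payments"
record = {"id": "p-001", "amount": 42.5}

# The serializer embeds the registered schema id in the message payload.
producer.produce(
    topic=topic,
    value=serializer(record, SerializationContext(topic, MessageField.VALUE)),
)
producer.flush()
```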
Posted 2 months ago
6.0 - 8.0 years
10 - 14 Lacs
Pune
Work from Office
Job Title: Strategic Data Archive Onboarding Engineer, AS
Location: Pune, India
Role Description
Strategic Data Archive is an internal service which enables applications to implement records management for regulatory requirements, application decommissioning, and application optimization. You will work closely with other teams, providing hands-on onboarding support by helping them define record content and metadata, configuring archiving, supporting testing and creating defensible documentation that archiving was complete. You will need to both support and manage the expectations of demanding internal clients.
What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best in class leave policy
- Gender neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive Hospitalization Insurance for you and your dependents
- Accident and Term Life Insurance
- Complementary health screening for those 35 years and above
Your key responsibilities
- Provide responsive customer service helping internal clients understand and efficiently manage their records management risks
- Explain our archiving services (both the business value and technical implementation) and respond promptly to inquiries
- Support the documentation and approval of requirements including record content and metadata
- Identify and facilitate implementing an efficient solution to meet the requirements
- Manage expectations and provide regular updates, frequently to senior stakeholders
- Configure archiving in test environments; you will not be coding new functionality but will be making configuration changes maintained in a code repository and deployed with standard tools
- Support testing, ensuring clients have appropriately managed implementation risks
- Help with issue resolution, including data issues, environment challenges, and code bugs
- Promote configurations from test environments to production
- Work with Production Support to ensure archiving is completed and evidenced
- Contribute towards a culture of learning and continuous improvement
- Partner with teams in multiple locations
Your skills and experience
- Delivers against tight deadlines in a fast-paced environment
- Manages others' expectations and meets commitments
- High degree of accuracy and attention to detail
- Ability to communicate (written and verbal) both business concepts and technical details concisely, and to influence partners including senior managers
- High analytical capability and able to quickly grasp new contexts; we support multiple areas of the Bank
- Expresses opinions while supporting group decisions
- Ensures deliverables are clearly documented and holds self and others accountable for meeting those deliverables
- Ability to identify risks at an early stage and implement mitigating strategies
- Flexibility and willingness to work autonomously and collaboratively
- Ability to work in virtual teams, agile environments and matrixed organizations
- Treats everyone with respect and embraces diversity
- Bachelor's degree from an accredited college or university desirable
- Minimum 4 years' experience implementing IT solutions in a global financial institution
- Comfortable with technology (e.g., SQL, FTP, XML, JSON) and a desire and ability to learn new skills as required (e.g., Fabric, Kubernetes, Kafka, Avro, Ansible)
- Must be an expert in SQL and have Python programming experience.
Financial markets and Google Cloud Platform knowledge is a plus, while curiosity is a requirement.
How we'll support you
- Training and development to help you excel in your career.
- Coaching and support from experts in your team.
- A culture of continuous learning to aid progression.
- A range of flexible benefits that you can tailor to suit your needs.
About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 2 months ago
5.0 - 10.0 years
8 - 15 Lacs
Chennai
Work from Office
Role Summary: We are seeking an experienced and highly skilled Software Engineer with 5 to 10 years of hands-on experience in cloud-native application development and microservices architecture. The ideal candidate will possess deep technical expertise in AWS, Java, Kafka and Spring Boot, with a strong understanding of designing scalable and high-performance software solutions. You will collaborate with cross-functional teams to deliver cutting-edge applications while driving technical improvements and best practices.
Key Responsibilities
Application & Architecture Development:
- Lead the design and development of cloud-native applications and microservices architecture.
- Make key technical decisions regarding application scalability, security, and performance optimization.
- Ensure adherence to best practices and coding standards while contributing to design improvements.
Code Implementation & Maintenance:
- Develop, test, and optimize robust and efficient code, ensuring high-quality software solutions.
- Troubleshoot complex issues, implement effective solutions, and proactively improve system performance.
- Maintain and enhance existing applications to ensure seamless functionality.
Issue Tracking & Resolution:
- Take ownership of critical issues and drive their resolution efficiently.
- Optimize system reliability by analyzing application logs and improving fault tolerance.
Collaboration & Code Reviews:
- Mentor junior developers, conduct code reviews, and provide guidance on best development practices.
- Partner with cross-functional teams, including architects, product managers, and DevOps engineers, to implement innovative solutions.
Qualifications
Technical Skills:
- Proven experience in AWS cloud development, serverless architectures, and containerized applications.
- Expertise in Kafka, Spring Boot, Java and RESTful APIs for building distributed applications in an event-driven architecture.
- Strong knowledge of data formats like JSON and Avro, and of serialization techniques.
- Exposure to containerization technologies (Docker, Kubernetes, ECS, EKS).
- Proficiency in Agile development, DevOps practices, and CI/CD pipeline implementations.
- Experience in Node.js and AngularJS or a modern UI framework is an added plus.
Experience:
- 5 to 10 years of software development experience in cloud and microservices environments.
- Solid experience working in Agile teams, leading technical discussions, and handling full SDLC responsibilities.
Soft Skills:
- Strong problem-solving abilities with an analytical mindset.
- Excellent communication and teamwork skills to collaborate across diverse teams.
- Ability to quickly learn and adapt to emerging technologies and industry trends.
Posted 2 months ago
5.0 - 10.0 years
1 - 5 Lacs
Bengaluru
Work from Office
Job Title: AWS Data Engineer
Experience: 5-10 Years
Location: Bangalore
Technical Skills:
- 5+ years of experience as an AWS Data Engineer: AWS S3, Glue Catalog, Glue Crawler, Glue ETL, Athena.
- Write Glue ETLs to convert data in AWS RDS for SQL Server and Oracle DB to Parquet format in S3.
- Execute Glue crawlers to catalog S3 files, and create a catalog of S3 files for easier querying.
- Create SQL queries in Athena.
- Define data lifecycle management for S3 files.
- Strong experience in developing, debugging, and optimizing Glue ETL jobs using PySpark or Glue Studio.
- Ability to connect Glue ETLs with AWS RDS (SQL Server and Oracle) for data extraction and write transformed data into Parquet format in S3.
- Proficiency in setting up and managing Glue Crawlers to catalog data in S3.
- Deep understanding of S3 architecture and best practices for storing large datasets.
- Experience in partitioning and organizing data for efficient querying in S3.
- Knowledge of the Parquet file format's advantages for optimized storage and querying.
- Expertise in creating and managing the AWS Glue Data Catalog to enable structured and schema-aware querying of data in S3.
- Experience with Amazon Athena for writing complex SQL queries and optimizing query performance.
- Familiarity with creating views or transformations in Athena for business use cases.
- Knowledge of securing data in S3 using IAM policies, S3 bucket policies, and KMS encryption.
- Understanding of regulatory requirements (e.g., GDPR) and implementation of secure data handling practices.
Non-Technical Skills:
- Candidate needs to be a good team player.
- Effective interpersonal, team building and communication skills.
- Ability to communicate complex technology to a non-technical audience in a simple and precise manner.
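A minimal AWS Glue job matching these duties might look like the sketch below: read a table the crawler has already catalogued (here assumed to mirror an RDS source) and write it to S3 as Parquet. Database, table, and bucket names are assumptions.

```python
# Minimal AWS Glue job sketch: read a catalogued table and write Parquet to S3.
# Glue database, table, and bucket names are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source table previously registered by a Glue crawler (e.g. an RDS mirror).
source = glue_context.create_dynamic_frame.from_catalog(
    database="rds_mirror",
    table_name="sqlserver_orders",
)

# Write the frame out as Parquet for efficient Athena querying.
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://example-datalake/orders/"},
    format="parquet",
)

job.commit()
```

A follow-up crawler run over the output prefix would then refresh the catalog so the Parquet data is queryable from Athena.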
Posted 2 months ago
3.0 - 7.0 years
6 - 10 Lacs
Mumbai
Work from Office
Role Overview: Looking for a Kafka SME to design and support real-time data ingestion pipelines using Kafka within a Cloudera-based Lakehouse architecture.
Key Responsibilities:
- Design Kafka topics, partitions, and the schema registry
- Implement producer-consumer apps using Spark Structured Streaming
- Set up Kafka Connect, monitoring, and alerts
- Ensure secure, scalable message delivery
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Deep understanding of Kafka internals and ecosystem
- Integration with Cloudera and NiFi
- Schema evolution and serialization (Avro, Parquet)
- Performance tuning and fault tolerance
Preferred technical and professional experience:
- Good communication skills.
- India market experience is preferred.
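Since topic and partition design is the first responsibility listed, here is a small sketch that provisions a topic programmatically with the confluent-kafka admin client; the topic name, partition count, replication factor, and retention are example values only.

```python
# Provision a Kafka topic with the confluent-kafka admin client. All values
# (topic name, partitions, replication, retention) are illustrative.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "broker:9092"})

topic = NewTopic(
    "sensor.readings",      # hypothetical topic name
    num_partitions=12,      # sized for expected consumer parallelism
    replication_factor=3,
    config={"retention.ms": str(7 * 24 * 3600 * 1000)},  # keep 7 days of data
)

futures = admin.create_topics([topic])
for name, future in futures.items():
    try:
        future.result()      # raises if creation failed
        print(f"created {name}")
    except Exception as exc:
        print(f"failed to create {name}: {exc}")
```

Partition count is the main lever for consumer parallelism, so it is usually chosen from expected throughput and the number of consumer instances rather than left at the default.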
Posted 2 months ago
4 - 8 years
8 - 12 Lacs
Bengaluru
Work from Office
Location: Bangalore | Posted 30+ days ago | Job requisition ID: R127526
Maersk is hiring a Lead Software Engineer to lead the development of our NSCP platform.
About the role: In this role, you will be playing a key part in building the foundations of an exciting greenfield project. Here are some of the things the role involves:
- Work on defining event and API schemas in collaboration with parties that provide data to and consume data from the platform.
- Work closely with our product owners in translating ideas and solutions to customer problems into scalable working software.
- Ensure application architecture and data flows are secure by design.
- Actively participate in architecture guilds reviewing the work of other peer teams.
- Break down high-level requirements, sometimes with ambiguity, into smaller specific items that can efficiently be worked upon by more junior engineers.
- Make technology choices appropriate to the problem being solved; lay the foundations when a technology is introduced, and bring the whole team along the journey.
- Ensure hygiene practices like clean code, test coverage, performance checks etc. are put in place and gated with automated checks.
- Ensure applications emit the standard signals needed for effective monitoring and alerting.
- Own the running of the application once deployed as much as building it, encouraging a DevOps mindset and skills within the team.
- Actively participate in code reviews and encourage using them as a tool for the team members to learn from each other.
- Partner with the engineering manager and drive a culture of continuous delivery where almost every single change goes to production on a continuous basis, and the path from code commit to going live is as automated as it can be.
- Help create practices within the team that are inclusive of members working across geographies, time zones and cultures.
About you: Here are some things we expect from you for the role:
- Be highly proficient with the JVM ecosystem, and have worked with Java or Kotlin for at least 5 years.
- Be highly proficient working with containerised 12-factor apps, and tools like Docker and Kubernetes.
- Be highly proficient in building well-decoupled event-driven applications using a technology like Kafka.
- Have prior experience working with serialisation systems like Avro or Protobuf.
- Have prior experience working with at least one relational database and at least one NoSQL database.
- Be familiar with observability concepts (logging, metrics, traces, alerts) and have experience using them to run and maintain production applications.
- Be familiar with methods and concepts in the field of cloud computing and be able to put them into practice; have experience in at least one of the following: Azure, Google Cloud, AWS. Knowledge of Terraform to create and maintain infrastructure in the cloud is a bonus.
- Be effective at written and spoken communication, and exhibit the same in writing code that can be well understood by other developers.
- Previous experience of working with real-time data at massive scale would be desirable.
- Knowledge of the logistics and supply chain domain would be a bonus.
Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking.
Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing .
Posted 2 months ago
5 - 8 years
8 - 12 Lacs
Pune
Work from Office
Hello Visionary! We empower our people to stay resilient and relevant in a constantly changing world. We're looking for people who are always searching for creative ways to grow and learn, people who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you'd make a great addition to our vibrant team.
Siemens founded the new business unit Siemens Foundational Technologies on October 1, 2024, with its headquarters in Munich, Germany. It has been crafted to unlock the digital future of its clients by offering end-to-end support on their outstanding digitalization journey. Siemens Foundational Technologies is a strategic advisor and a trusted implementation partner in digital transformation and industrial IoT with a global network of more than 8,000 employees in 10 countries and 21 offices. Highly skilled and experienced specialists offer services which range from consulting to craft & prototyping to solution & implementation and operation, everything out of one hand.
We are looking for a Senior Software Developer.
You'll make a difference by:
- Experience across the software development lifecycle using Angular 17+, TypeScript 6+, Node.js, Java 17+, Spring Boot 3+ and Maven.
- Experience working with Docker containerization techniques and Kubernetes cluster management on AWS cloud (VPC, Subnet, ELB, Secrets Manager, EBS Snapshots, EC2 Security Groups, ECS, CloudWatch and SQS).
- Experience working with GitLab CI with ArgoCD-based deployment.
- Experience working with data storage using DynamoDB, RDS PostgreSQL and S3.
- Experience in software design and documentation with UML notations based on OOAD.
- Experience with RESTful web services.
- Experience with Keycloak and/or Okta SSO.
- Experience in unit testing with JUnit, Jasmine and Karma, and static code analysis with SonarQube.
- Experience in service monitoring with Grafana, Loki and Prometheus.
- Experience working with Agile technologies and releases.
Good to have:
- Experience with Python 3+ and/or shell script.
- Experience with streaming data with Kafka and Avro serialization.
- Experience in E2E testing with Protractor.
- Experience with IaC using Terraform.
Desired Skills:
- BE / B.Tech / MCA / ME or higher.
- 5 to 8 years of experience is required.
- Good debugging and analytical skills.
- Great communication skills.
Join us and be yourself! We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Make your mark in our exciting world at Siemens.
This role is based in Pune and is an individual contributor role. You might be required to visit other locations within India and outside. In return, you'll get the chance to work with teams impacting - and the shape of things to come.
We're Siemens. A collection of over 379,000 minds building the future, one day at a time in over 200 countries. We're dedicated to equality, and we welcome applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit and business need. Bring your curiosity and imagination and help us shape tomorrow. Find out more about Siemens careers at www.siemens.com/careers and more about mobility at https://new.siemens.com/global/en/products/mobility.html
Posted 2 months ago
9 - 11 years
37 - 40 Lacs
Ahmedabad, Bengaluru, Mumbai (All Areas)
Work from Office
Dear Candidate,
We are hiring a Scala Developer to work on high-performance distributed systems, leveraging the power of functional and object-oriented paradigms. This role is perfect for engineers passionate about clean code, concurrency, and big data pipelines.
Key Responsibilities:
- Build scalable backend services using Scala and the Play or Akka frameworks.
- Write concurrent and reactive code for high-throughput applications.
- Integrate with Kafka, Spark, or Hadoop for data processing.
- Ensure code quality through unit tests and property-based testing.
- Work with microservices, APIs, and cloud-native deployments.
Required Skills & Qualifications:
- Proficient in Scala, with a strong grasp of functional programming
- Experience with Akka, Play, or Cats
- Familiarity with Big Data tools and RESTful API development
- Bonus: Experience with ZIO, Monix, or Slick
Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.
Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.
Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies
Posted 2 months ago