Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
6.0 - 10.0 years
2 - 6 Lacs
Pune
Work from Office
Req ID: 323909 We are currently seeking a Data Ingest Engineer to join our team in Pune, Maharashtra (IN-MH), India (IN).

Job Duties
The Applications Development Technology Lead Analyst is a senior-level position responsible for establishing and implementing new or revised application systems and programs in coordination with the Technology team. This position sits within the Ingestion team of the DRIFT data ecosystem. The focus is on ingesting data in a timely, complete, and comprehensive fashion while using the latest technology available to Citi. We look for new and creative methods for repeatable data ingestion from a variety of data sources, always asking "is this the best way to solve this problem?" and "am I providing the highest quality data to my downstream partners?"

Responsibilities:
- Partner with multiple management teams to ensure appropriate integration of functions to meet goals, and identify and define necessary system enhancements to deploy new products and process improvements
- Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
- Provide expertise in the area and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint
- Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
- Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
- Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
- Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Minimum Skills Required:
- 6-10 years of relevant experience in an Apps Development or systems analysis role
- Extensive experience in system analysis and programming of software applications
- Application development using Java, Scala, Spark
- Familiarity with event-driven applications and streaming data
- Experience with Confluent Kafka, HDFS, Hive, and structured and unstructured database systems (SQL and NoSQL)
- Experience with various schema and data types: JSON, Avro, Parquet, etc.
- Experience with various ELT methodologies and formats: JDBC, ODBC, API, webhook, SFTP, etc.
- Experience working with Agile and version control tool sets (JIRA, Bitbucket, Git, etc.)
- Ability to adjust priorities quickly as circumstances dictate
- Demonstrated leadership and project management skills
- Consistently demonstrates clear and concise written and verbal communication
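For context on what such repeatable ingestion can look like in practice, here is a minimal PySpark sketch of one ELT path the posting mentions (a JDBC source landed as Parquet on HDFS). Connection details, table, and path names are hypothetical placeholders, not Citi specifics:

```python
# Minimal sketch: ingest a table over JDBC and land it as Parquet on HDFS.
# All connection details, table, and paths below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

# Pull the source table over JDBC (the driver jar must be on the classpath).
src = (spark.read.format("jdbc")
       .option("url", "jdbc:oracle:thin:@//src-host:1521/ORCL")  # hypothetical source
       .option("dbtable", "TRADES")
       .option("user", "reader")
       .option("password", "***")
       .load())

# Land the data partitioned by business date for downstream consumers.
(src.write.mode("overwrite")
    .partitionBy("TRADE_DATE")
    .parquet("hdfs:///data/raw/trades/"))
```

The same frame could equally be written to Hive tables or sourced from Kafka, which is why the posting emphasizes familiarity with multiple formats and transports.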
Posted 6 days ago
5.0 - 8.0 years
0 - 1 Lacs
Hyderabad
Hybrid
Location: Hyderabad (Hybrid). Please share your resume with +91 9361912009.

Roles and Responsibilities
- Deep understanding of Linux, networking, and security fundamentals.
- Experience working with the AWS cloud platform and infrastructure.
- Experience working with infrastructure as code using Terraform or Ansible.
- Experience managing large BigData clusters in production (at least one of Cloudera, Hortonworks, EMR).
- Excellent knowledge and solid work experience providing observability for BigData platforms using tools like Prometheus, InfluxDB, Dynatrace, Grafana, Splunk, etc.
- Expert knowledge of Hadoop Distributed File System (HDFS) and Hadoop YARN.
- Decent knowledge of various Hadoop file formats like ORC, Parquet, Avro, etc.
- Deep understanding of Hive (Tez), Hive LLAP, Presto, and Spark compute engines.
- Ability to understand query plans and optimize performance for complex SQL queries on Hive and Spark.
- Experience supporting Spark with Python (PySpark) and R (sparklyr, SparkR).
- Solid professional coding experience with at least one scripting language - Shell, Python, etc.
- Experience working with Data Analysts, Data Scientists, and at least one related analytical application like SAS, RStudio, JupyterHub, H2O, etc.
- Able to read and understand code (Java, Python, R, Scala), with expertise in at least one scripting language like Python or Shell.

Nice to have skills:
- Experience with workflow management tools like Airflow, Oozie, etc.
- Knowledge of analytical libraries like Pandas, NumPy, SciPy, PyTorch, etc.
- Implementation history of Packer, Chef, Jenkins, or similar tooling.
- Prior working knowledge of Active Directory and Windows OS based VDI platforms like Citrix, AWS Workspaces, etc.
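One listed skill, reading query plans to tune Hive/Spark SQL, can be illustrated with a short self-contained PySpark sketch; the tiny stand-in tables are assumptions added for the example:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("plan-check").getOrCreate()

# Tiny stand-in tables so the sketch runs anywhere; in practice these
# would be Hive tables read via spark.table(...).
orders = spark.createDataFrame([(1, 10, 120.0), (2, 20, 80.0)],
                               ["order_id", "cust_id", "amount"])
customers = spark.createDataFrame([(10, "APAC"), (20, "EMEA")],
                                  ["cust_id", "region"])
orders.createOrReplaceTempView("orders")
customers.createOrReplaceTempView("customers")

df = spark.sql("""
    SELECT c.region, SUM(o.amount) AS total
    FROM   orders o JOIN customers c ON o.cust_id = c.cust_id
    GROUP  BY c.region
""")

# 'formatted' mode (Spark 3+) separates the plan tree from node details,
# making join strategies and shuffle boundaries easy to spot.
df.explain(mode="formatted")
```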
Posted 1 week ago
3.0 - 8.0 years
4 - 8 Lacs
Chennai
Work from Office
Your Profile
As a senior software engineer with Capgemini, you will have:
- 3+ years of experience in Scala with a strong project track record
- Hands-on experience as a Scala/Spark developer
- Hands-on SQL writing skills on RDBMS (DB2) databases
- Experience working with different file formats like JSON, Parquet, Avro, ORC and XML
- Experience in an HDFS platform development project
- Proficiency in data analysis, data profiling, and data lineage
- Strong oral and written communication skills
- Experience working in Agile projects

Your Role
- Work on Hadoop, Spark, Hive and SQL queries
- Perform code optimization for performance, scalability and configurability
- Data application development at scale in the Hadoop ecosystem

What you'll love about working here
Choosing Capgemini means having the opportunity to make a difference, whether for the world's leading businesses or for society. It means getting the support you need to shape your career in the way that works for you. It means when the future doesn't look as bright as you'd like, you have the opportunity to make change and to rewrite it. When you join Capgemini, you don't just start a new job. You become part of something bigger. A diverse collective of free-thinkers, entrepreneurs and experts, all working together to unleash human energy through technology, for an inclusive and sustainable future. At Capgemini, people are at the heart of everything we do! You can exponentially grow your career by being part of innovative projects and taking advantage of our extensive Learning & Development programs. With us, you will experience an inclusive, safe, healthy, and flexible work environment to bring out the best in you! You also get a chance to make positive social change and build a better world by taking an active role in our Corporate Social Responsibility and Sustainability initiatives. And whilst you make a difference, you will also have a lot of fun.

About Company
Posted 1 week ago
6.0 - 9.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Hands-on experience with Java 8, Spring Boot, and microservices. Deep knowledge of data structures and algorithms. Strong experience in microservices patterns (Decompose, Strangler, Saga, Event Sourcing, CQRS, Tx Messaging). Familiarity with PCF apps, Docker, Kubernetes/OpenShift. Experience in backend testing using JUnit/Mockito, MySQL, Kafka, and Avro. Experience in DDD, BDD, TDD. Hands-on experience with CI/CD (Jenkins) and tools like GitHub/Git. Experience working in an Agile environment and good understanding of Agile processes. Primary Skills: Java 8, Spring Boot, Microservices, Data Structures, Algorithms. Secondary Skills: Angular.
Posted 1 week ago
8.0 - 10.0 years
10 - 12 Lacs
Hyderabad
Work from Office
ABOUT THE ROLE

Role Description:
We are seeking a highly skilled and experienced hands-on Test Automation Engineering Manager with deep expertise in Data Quality (DQ), Data Integration (DIF), and Data Governance. In this role, you will design and implement automated frameworks that ensure data accuracy, metadata consistency, and compliance throughout the data pipeline, leveraging technologies like Databricks, AWS, and cloud-native tools. You will have a major focus on Data Cataloging and Governance, ensuring that data assets are well-documented, auditable, and secure across the enterprise. You will be responsible for the end-to-end design and development of a test automation framework, working collaboratively with the team. As the delivery owner for test automation, your primary focus will be on building and automating comprehensive validation frameworks for data cataloging, data classification, and metadata tracking, while ensuring alignment with internal governance standards. You will also work closely with data engineers, product teams, and data governance leads to enforce data quality and governance policies. Your efforts will play a key role in driving data integrity, consistency, and trust across the organization. The role is highly technical and hands-on, with a strong focus on automation, metadata validation, and ensuring data governance practices are seamlessly integrated into development pipelines.

Roles & Responsibilities:

Data Quality & Integration Frameworks
- Design and implement Data Quality (DQ) frameworks that validate schema compliance, transformations, completeness, null checks, duplicates, threshold rules, and referential integrity.
- Build Data Integration Frameworks (DIF) that validate end-to-end data pipelines across ingestion, processing, storage, and consumption layers.
- Automate data validations in Databricks/Spark pipelines, integrated with AWS services like S3, Glue, Athena, and Lake Formation.
- Develop modular, reusable validation components using PySpark, SQL, Python, and orchestration via CI/CD pipelines.

Data Cataloging & Governance
- Integrate automated validations with AWS Glue Data Catalog to ensure metadata consistency, schema versioning, and lineage tracking.
- Implement checks to verify that data assets are properly cataloged, discoverable, and compliant with internal governance standards.
- Validate and enforce data classification, tagging, and access controls, ensuring alignment with data governance frameworks (e.g., PII/PHI tagging, role-based access policies).
- Collaborate with governance teams to automate policy enforcement and compliance checks for audit and regulatory needs.

Visualization & UI Testing
- Automate validation of data visualizations in tools like Tableau, Power BI, Looker, or custom React dashboards.
- Ensure charts, KPIs, filters, and dynamic views correctly reflect backend data using UI automation (Selenium with Python) and backend validation logic.
- Conduct API testing (via Postman or Python test suites) to ensure accurate data delivery to visualization layers.

Technical Skills and Tools
- Hands-on experience with data automation tools like Databricks and AWS is essential, as the manager will be instrumental in building and managing data pipelines.
- Leverage automated testing frameworks and containerization tools to streamline processes and improve efficiency.
- Experience in UI and API functional validation using tools such as Selenium with Python and Postman, ensuring comprehensive testing coverage.
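As a rough sketch of the rule-based DQ validations described above (null checks, duplicates, threshold rules), assuming a hypothetical Databricks table and illustrative rules:

```python
# Sketch of rule-based DQ checks over a catalog table; the table name
# and the specific rules are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.table("bronze.patient_events")  # hypothetical table

checks = {
    # completeness: key column must not be null
    "null_patient_id": df.filter(F.col("patient_id").isNull()).count() == 0,
    # uniqueness: no duplicate (patient_id, event_ts) pairs
    "dup_events": df.groupBy("patient_id", "event_ts").count()
                    .filter("count > 1").count() == 0,
    # threshold rule: event timestamps must not be in the future
    "future_dates": df.filter(F.col("event_ts") > F.current_timestamp()).count() == 0,
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise AssertionError(f"DQ checks failed: {failed}")  # fail the pipeline run
```

In a real framework these rules would live in configuration and run from CI/CD, as the responsibilities above describe.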
Technical Leadership, Strategy & Team Collaboration
- Define and drive the overall QA and testing strategy for UI and search-related components with a focus on scalability, reliability, and performance, while establishing alerting and reporting mechanisms for test failures, data anomalies, and governance violations.
- Contribute to system architecture and design discussions, bringing a strong quality and testability lens early into the development lifecycle.
- Lead test automation initiatives by implementing best practices and scalable frameworks, embedding test suites into CI/CD pipelines to enable automated, continuous validation of data workflows, catalog changes, and visualization updates.
- Mentor and guide QA engineers, fostering a collaborative, growth-oriented culture focused on continuous learning and technical excellence.
- Collaborate cross-functionally with product managers, developers, and DevOps to align quality efforts with business goals and release timelines.
- Conduct code reviews, test plan reviews, and pair-testing sessions to ensure team-level consistency and high-quality standards.

Good-to-Have Skills:
- Experience with data governance tools such as Apache Atlas, Collibra, or Alation
- Understanding of DataOps methodologies and practices
- Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch
- Experience building or maintaining test data generators
- Contributions to internal quality dashboards or data observability systems
- Awareness of metadata-driven testing approaches and lineage-based validations
- Experience working with agile testing methodologies such as Scaled Agile
- Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest

Must-Have Skills:
- Strong hands-on experience with Data Quality (DQ) framework design and automation
- Expertise in PySpark, Python, and SQL for data validations
- Solid understanding of ETL/ELT pipeline testing in Databricks or Apache Spark environments
- Experience validating structured and semi-structured data formats (e.g., Parquet, JSON, Avro)
- Deep familiarity with AWS data services: S3, Glue, Athena, Lake Formation, Data Catalog
- Integration of test automation with AWS Glue Data Catalog or similar catalog tools
- UI automation using Selenium with Python for dashboard and web interface validation
- API testing using Postman, Python, or custom API test scripts
- Hands-on testing of BI tools such as Tableau, Power BI, Looker, or custom visualization layers
- CI/CD test integration with tools like Jenkins, GitHub Actions, or GitLab CI
- Familiarity with containerized environments (e.g., Docker, AWS ECS/EKS)
- Knowledge of data classification, access control validation, and PII/PHI tagging
- Understanding of data governance standards (e.g., GDPR, HIPAA, CCPA)
- Understanding of data structures: knowledge of various data structures and their applications; ability to analyze data and identify inconsistencies
- Proven hands-on experience in test automation and data automation using Databricks and AWS
- Strong knowledge of Data Integrity Frameworks (DIF) and Data Quality (DQ) principles
- Strong understanding of data transformation techniques and logic

Education and Professional Certifications
Bachelor's degree in Computer Science and Engineering preferred; other engineering fields considered. Master's degree and 6+ years of experience, or Bachelor's degree and 8+ years.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.
Posted 1 week ago
8.0 - 13.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Data Engineer
Location: Bangalore - Onsite | Experience: 8 - 15 years | Type: Full-time

Role Overview
We are seeking an experienced Data Engineer to build and maintain scalable, high-performance data pipelines and infrastructure for our next-generation data platform. The platform ingests and processes real-time and historical data from diverse industrial sources such as airport systems, sensors, cameras, and APIs. You will work closely with AI/ML engineers, data scientists, and DevOps to enable reliable analytics, forecasting, and anomaly detection use cases.

Key Responsibilities
- Design and implement real-time (Kafka, Spark/Flink) and batch (Airflow, Spark) pipelines for high-throughput data ingestion, processing, and transformation.
- Develop data models and manage data lakes and warehouses (Delta Lake, Iceberg, etc.) to support both analytical and ML workloads.
- Integrate data from diverse sources: IoT sensors, databases (SQL/NoSQL), REST APIs, and flat files.
- Ensure pipeline scalability, observability, and data quality through monitoring, alerting, validation, and lineage tracking.
- Collaborate with AI/ML teams to provision clean and ML-ready datasets for training and inference.
- Deploy, optimize, and manage pipelines and data infrastructure across on-premise and hybrid environments.
- Participate in architectural decisions to ensure resilient, cost-effective, and secure data flows.
- Contribute to infrastructure-as-code and automation for data deployment using Terraform, Ansible, or similar tools.

Qualifications & Required Skills
- Bachelor's or Master's in Computer Science, Engineering, or related field.
- 6+ years in data engineering roles, with at least 2 years handling real-time or streaming pipelines.
- Strong programming skills in Python/Java and SQL.
- Experience with Apache Kafka, Apache Spark, or Apache Flink for real-time and batch processing.
- Hands-on with Airflow, dbt, or other orchestration tools.
- Familiarity with data modeling (OLAP/OLTP), schema evolution, and format handling (Parquet, Avro, ORC).
- Experience with hybrid/on-prem and cloud platform (AWS/GCP/Azure) deployments.
- Proficiency working with data lakes/warehouses like Snowflake, BigQuery, Redshift, or Delta Lake.
- Knowledge of DevOps practices, Docker/Kubernetes, Terraform or Ansible.
- Exposure to data observability, data cataloging, and quality tools (e.g., Great Expectations, OpenMetadata).

Good-to-Have
- Experience with time-series databases (e.g., InfluxDB, TimescaleDB) and sensor data.
- Prior experience in domains such as aviation, manufacturing, or logistics is a plus.
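A minimal sketch of the real-time path this role describes, reading sensor events from Kafka and appending them to a Delta table; the broker, topic, schema, and paths are illustrative assumptions, and the job requires the spark-sql-kafka and Delta Lake packages:

```python
# Sketch: Kafka -> Delta streaming ingestion; all names are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("sensor-stream").getOrCreate()

# Assumed shape of the JSON payload on the topic.
schema = (StructType()
          .add("sensor_id", StringType())
          .add("reading", DoubleType())
          .add("ts", TimestampType()))

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "sensor-readings")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Checkpointing gives exactly-once sinks across restarts.
(events.writeStream.format("delta")
       .option("checkpointLocation", "/chk/sensor-readings")
       .outputMode("append")
       .start("/lake/bronze/sensor_readings"))
```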
Posted 1 week ago
3.0 - 8.0 years
9 - 13 Lacs
Gurugram
Work from Office
About the Role:
Grade Level (for internal use): 10

The Team
As a member of the Data Transformation team you will work on building ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged toward thoughtful risk-taking and self-initiative.

The Impact
The Data Transformation team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will be developing our next generation of new products while enhancing existing ones, aiming at solving high-impact business problems.

What's in it for you
- Be a part of a global company and build solutions at enterprise scale
- Collaborate with a highly skilled and technically strong team
- Contribute to solving high complexity, high impact problems

Key Responsibilities
- Build production-ready data acquisition and transformation pipelines from ideation to deployment
- Be a hands-on problem solver and developer helping to extend and manage the data platforms
- Apply best practices in data modeling and building ETL pipelines (streaming and batch) using cloud-native solutions

What We're Looking For
- 3-5 years of professional software work experience
- Expertise in Python and Apache Spark
- OOP design patterns, Test-Driven Development and enterprise system design
- Experience building data processing workflows and APIs using frameworks such as FastAPI, Flask etc.
- Proficiency in API integration, experience working with REST APIs and integrating external & internal data sources
- SQL (any variant, bonus if this is a big data variant)
- Linux OS (e.g. bash toolset and other utilities)
- Version control system experience with Git, GitHub, or Azure DevOps
- Problem-solving and debugging skills
- Software craftsmanship, adherence to Agile principles and taking pride in writing good code
- Techniques to communicate change to non-technical people

Nice to have
- Core Java 17+, preferably Java 21+, and associated toolchain
- DevOps with a keen interest in automation
- Apache Avro
- Apache Kafka
- Kubernetes
- Cloud expertise (AWS and GCP preferably)
- Other JVM based languages - e.g. Kotlin, Scala
- C# - in particular .NET Core
- Data warehouses (e.g., Redshift, Snowflake, BigQuery)

What's In It For You
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology - the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide - so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it.
We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you - and your career - need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards - small perks can make a big difference.

For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

-----------------------------------------------------------
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.

If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
-----------------------------------------------------------
20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)
Posted 1 week ago
3.0 - 8.0 years
11 - 16 Lacs
Gurugram
Work from Office
About the Role:
Grade Level (for internal use): 11

The Team
As a member of the Data Transformation team you will work on building ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged toward thoughtful risk-taking and self-initiative.

The Impact
The Data Transformation team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will be developing our next generation of new products while enhancing existing ones, aiming at solving high-impact business problems.

What's in it for you
- Be a part of a global company and build solutions at enterprise scale
- Collaborate with a highly skilled and technically strong team
- Contribute to solving high complexity, high impact problems

Key Responsibilities
- Build production-ready data acquisition and transformation pipelines from ideation to deployment
- Be a hands-on problem solver and developer helping to extend and manage the data platforms
- Architect and lead the development of end-to-end data ingestion and processing pipelines to support downstream ML workflows
- Apply best practices in data modeling and building ETL pipelines (streaming and batch) using cloud-native solutions
- Mentor junior and mid-level data engineers and provide technical guidance and best practices

What We're Looking For
- 7-10 years of professional software work experience
- Expertise in Python and Apache Spark
- OOP design patterns, Test-Driven Development and enterprise system design
- SQL (any variant, bonus if this is a big data variant)
- Proficient in optimizing data flows for performance, storage, and cost efficiency
- Linux OS (e.g. bash toolset and other utilities)
- Version control system experience with Git, GitHub, or Azure DevOps
- Problem-solving and debugging skills
- Software craftsmanship, adherence to Agile principles and taking pride in writing good code
- Techniques to communicate change to non-technical people

Nice to have
- Core Java 17+, preferably Java 21+, and associated toolchain
- DevOps with a keen interest in automation
- Apache Avro
- Apache Kafka
- Kubernetes
- Cloud expertise (AWS and GCP preferably)
- Other JVM based languages - e.g. Kotlin, Scala
- C# - in particular .NET Core

What's In It For You
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology - the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide - so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it.
We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you - and your career - need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards - small perks can make a big difference.

For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

-----------------------------------------------------------
Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.

If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
-----------------------------------------------------------
20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)
Posted 1 week ago
2.0 - 3.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Job Title: Java Developer

About Us
Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence across 32 cities globally, we support 100+ clients across the banking, financial and energy sectors. We are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry - projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

Work location: Pune. JD as below (5+ years' experience; 30-day notice only):

We are looking for a highly skilled Java Developer with expertise in Spring Boot, Confluent Kafka, and distributed systems. The ideal candidate should have strong experience in designing, developing, and optimizing event-driven applications using Confluent Kafka while leveraging Spring Boot/Spring Cloud for microservices-based architectures.

Key Responsibilities:
- Develop, deploy, and maintain scalable and high-performance applications using Java (Core Java, Collections, Multithreading, Executor Services, CompletableFuture, etc.)
- Work extensively with Confluent Kafka, including producer-consumer frameworks, offset management, and optimization of consumer instances based on message volume.
- Ensure efficient message serialization and deserialization using JSON, Avro, and Protobuf with Kafka Schema Registry.
- Design and implement event-driven architectures with real-time processing capabilities.
- Optimize Kafka consumers for high-throughput and low-latency scenarios.
- Collaborate with cross-functional teams to ensure seamless integration and deployment of services.
- Troubleshoot and resolve performance bottlenecks and scalability issues in distributed environments.
- Familiarity with containerization (Docker, Kubernetes) and cloud platforms is a plus.
- Experience with monitoring and logging tools - Splunk is a plus.
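To illustrate the Avro-with-Schema-Registry serialization the JD highlights, here is a hedged sketch using confluent-kafka-python (the role itself targets Java/Spring Boot, so this is an illustration of the concept, not the stack); broker and registry URLs, topic, and schema are placeholders:

```python
# Sketch: Avro producer wired to Schema Registry; all names are illustrative.
from confluent_kafka import SerializingProducer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer

schema_str = """
{"type": "record", "name": "Payment", "fields": [
  {"name": "id", "type": "string"},
  {"name": "amount", "type": "double"}
]}
"""

registry = SchemaRegistryClient({"url": "http://schema-registry:8081"})
serializer = AvroSerializer(registry, schema_str)

producer = SerializingProducer({
    "bootstrap.servers": "broker:9092",
    "value.serializer": serializer,  # schema is registered/validated on first use
})

producer.produce(topic="payments", key="p-1001",
                 value={"id": "p-1001", "amount": 42.5})
producer.flush()  # block until delivery so the sketch is self-contained
```

Registering the schema centrally is what lets independently deployed consumers evolve safely, which is the point of the Schema Registry requirement above.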
Posted 2 weeks ago
6.0 - 8.0 years
10 - 14 Lacs
Pune
Work from Office
Job Title: Strategic Data Archive Onboarding Engineer, AS
Location: Pune, India

Role Description
Strategic Data Archive is an internal service which enables applications to implement records management for regulatory requirements, application decommissioning, and application optimization. You will work closely with other teams, providing hands-on onboarding support by helping them define record content and metadata, configuring archiving, supporting testing, and creating defensible documentation that archiving was complete. You will need to both support and manage the expectations of demanding internal clients.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best in class leave policy
- Gender neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complementary health screening for 35 yrs. and above

Your key responsibilities
- Provide responsive customer service helping internal clients understand and efficiently manage their records management risks
- Explain our archiving services (both the business value and technical implementation) and respond promptly to inquiries
- Support the documentation and approval of requirements including record content and metadata
- Identify and facilitate implementing an efficient solution to meet the requirements
- Manage expectations and provide regular updates - frequently to senior stakeholders
- Configure archiving in test environments - you will not be coding new functionality but will be making configuration changes maintained in a code repository and deployed with standard tools
- Support testing, ensuring clients have appropriately managed implementation risks
- Help with issue resolution including data issues, environment challenges, and code bugs
- Promote configurations from test environments to production
- Work with Production Support to ensure archiving is completed and evidenced
- Contribute towards a culture of learning and continuous improvement
- Partner with teams in multiple locations

Your skills and experience
- Delivers against tight deadlines in a fast paced environment
- Manages others' expectations and meets commitments
- High degree of accuracy and attention to detail
- Ability to communicate (written and verbal) concisely both business concepts and technical details, and to influence partners including senior managers
- High analytical capabilities and able to quickly grasp new contexts - we support multiple areas of the Bank
- Expresses opinions while supporting group decisions
- Ensures deliverables are clearly documented and holds self and others accountable for meeting those deliverables
- Ability to identify risks at an early stage and implement mitigating strategies
- Flexibility and willingness to work autonomously and collaboratively
- Ability to work in virtual teams, agile environments and matrixed organizations
- Treats everyone with respect and embraces diversity
- Bachelor's degree from an accredited college or university desirable
- Minimum 4 years' experience implementing IT solutions in a global financial institution
- Comfortable with technology (e.g., SQL, FTP, XML, JSON) and a desire and ability to learn new skills as required (e.g., Fabric, Kubernetes, Kafka, Avro, Ansible)
- Must be an expert in SQL and have Python programming experience.
Financial markets and Google Cloud Platform knowledge a plus, while curiosity is a requirement.

How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 2 weeks ago
5.0 - 10.0 years
8 - 15 Lacs
Chennai
Work from Office
Role Summary:
We are seeking an experienced and highly skilled Software Engineer with 5 to 10 years of hands-on experience in cloud-native application development and microservices architecture. The ideal candidate will possess deep technical expertise in AWS, Java, Kafka, and Spring Boot, with a strong understanding of designing scalable and high-performance software solutions. You will collaborate with cross-functional teams to deliver cutting-edge applications while driving technical improvements and best practices.

Key Responsibilities
Application & Architecture Development:
- Lead the design and development of cloud-native applications and microservices architecture.
- Make key technical decisions regarding application scalability, security, and performance optimization.
- Ensure adherence to best practices and coding standards while contributing to design improvements.

Code Implementation & Maintenance:
- Develop, test, and optimize robust and efficient code, ensuring high-quality software solutions.
- Troubleshoot complex issues, implement effective solutions, and proactively improve system performance.
- Maintain and enhance existing applications to ensure seamless functionality.

Issue Tracking & Resolution:
- Take ownership of critical issues and drive their resolution efficiently.
- Optimize system reliability by analyzing application logs and improving fault tolerance.

Collaboration & Code Reviews:
- Mentor junior developers, conduct code reviews, and provide guidance on best development practices.
- Partner with cross-functional teams, including architects, product managers, and DevOps engineers, to implement innovative solutions.

Qualifications
Technical Skills:
- Proven experience in AWS cloud development, serverless architectures, and containerized applications.
- Expertise in Kafka, Spring Boot, Java, and RESTful APIs for building distributed applications in an event-driven architecture.
- Strong knowledge of data formats like JSON and Avro, and serialization techniques.
- Exposure to containerization technologies (Docker, Kubernetes, ECS, EKS).
- Proficiency in Agile development, DevOps practices, and CI/CD pipeline implementations.
- Experience in Node.js and AngularJS or a modern UI framework is an added plus.

Experience:
- 5 to 10 years of software development experience in cloud and microservices environments.
- Solid experience working in Agile teams, leading technical discussions, and handling full SDLC responsibilities.

Soft Skills:
- Strong problem-solving abilities with an analytical mindset.
- Excellent communication and teamwork skills to collaborate across diverse teams.
- Ability to quickly learn and adapt to emerging technologies and industry trends.
Posted 2 weeks ago
5.0 - 10.0 years
1 - 5 Lacs
Bengaluru
Work from Office
Job Title: AWS Data Engineer
Experience: 5-10 years
Location: Bangalore

Technical Skills:
- 5+ years of experience as an AWS Data Engineer with AWS S3, Glue Catalog, Glue Crawler, Glue ETL, and Athena.
- Write Glue ETLs to convert data in AWS RDS for SQL Server and Oracle DB to Parquet format in S3.
- Execute Glue crawlers to catalog S3 files; create a catalog of S3 files for easier querying.
- Create SQL queries in Athena.
- Define data lifecycle management for S3 files.
- Strong experience in developing, debugging, and optimizing Glue ETL jobs using PySpark or Glue Studio.
- Ability to connect Glue ETLs with AWS RDS (SQL Server and Oracle) for data extraction and write transformed data into Parquet format in S3.
- Proficiency in setting up and managing Glue Crawlers to catalog data in S3.
- Deep understanding of S3 architecture and best practices for storing large datasets.
- Experience in partitioning and organizing data for efficient querying in S3.
- Knowledge of the Parquet file format's advantages for optimized storage and querying.
- Expertise in creating and managing the AWS Glue Data Catalog to enable structured and schema-aware querying of data in S3.
- Experience with Amazon Athena for writing complex SQL queries and optimizing query performance.
- Familiarity with creating views or transformations in Athena for business use cases.
- Knowledge of securing data in S3 using IAM policies, S3 bucket policies, and KMS encryption.
- Understanding of regulatory requirements (e.g., GDPR) and implementing secure data handling practices.

Non-Technical Skills:
- Candidate needs to be a good team player.
- Effective interpersonal, team building and communication skills.
- Ability to communicate complex technology to a non-technical audience in a simple and precise manner.
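A condensed sketch of the core Glue ETL flow listed above (a catalog-backed source written out as partitioned Parquet in S3). This would run as an AWS Glue job; the database, table, and bucket names are placeholders:

```python
# Sketch of a Glue ETL job: catalog table (backed by RDS) -> Parquet in S3.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_ctx = GlueContext(SparkContext.getOrCreate())
job = Job(glue_ctx)
job.init(args["JOB_NAME"], args)

# Read via the Glue Data Catalog (the RDS connection is defined in the catalog).
orders = glue_ctx.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
)

# Write as Parquet, partitioned so Athena scans less data per query.
glue_ctx.write_dynamic_frame.from_options(
    frame=orders,
    connection_type="s3",
    connection_options={"path": "s3://analytics-bucket/orders/",
                        "partitionKeys": ["order_date"]},
    format="parquet",
)
job.commit()
```

A Glue crawler pointed at s3://analytics-bucket/orders/ would then register the Parquet partitions in the catalog for Athena queries.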
Posted 3 weeks ago
3.0 - 7.0 years
6 - 10 Lacs
Mumbai
Work from Office
Role Overview:
Looking for a Kafka SME to design and support real-time data ingestion pipelines using Kafka within a Cloudera-based Lakehouse architecture.

Key Responsibilities:
- Design Kafka topics, partitions, and schema registry
- Implement producer-consumer apps using Spark Structured Streaming
- Set up Kafka Connect, monitoring, and alerts
- Ensure secure, scalable message delivery

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Deep understanding of Kafka internals and ecosystem
- Integration with Cloudera and NiFi
- Schema evolution and serialization (Avro, Parquet)
- Performance tuning and fault tolerance

Preferred technical and professional experience:
- Good communication skills
- India market experience is preferred
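For illustration, the topic design work described here can be codified with the Kafka admin API; a small Python sketch, where the broker address, topic name, and sizing are assumptions to adapt per environment:

```python
# Sketch: declare a topic with explicit partitions/replication for a new feed.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "broker:9092"})  # placeholder broker

topic = NewTopic(
    "telemetry.raw",
    num_partitions=12,           # sized for target consumer parallelism
    replication_factor=3,        # survive a single broker loss
    config={"retention.ms": str(7 * 24 * 3600 * 1000)},  # 7-day retention
)

for name, future in admin.create_topics([topic]).items():
    future.result()  # raises KafkaException if creation failed
    print(f"created {name}")
```

Partition count bounds consumer parallelism, which is why it is a design decision rather than a default, as the responsibilities above imply.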
Posted 3 weeks ago
4 - 8 years
8 - 12 Lacs
Bengaluru
Work from Office
Location: IN - Bangalore | Posted 30+ days ago | Job requisition ID: R127526

Maersk is hiring a Lead Software Engineer to lead the development of our NSCP platform.

About the role:
In this role, you will be playing a key part in building the foundations of an exciting greenfield project. Here are some of the things the role involves:
- Work on defining event and API schemas in collaboration with parties that provide data to and consume from the platform.
- Work closely with our product owners in translating ideas and solutions to customer problems into scalable working software.
- Ensure application architecture and data flows are secure by design.
- Actively participate in architecture guilds reviewing work of other peer teams.
- Break down high-level requirements, sometimes with ambiguity, into smaller specific items that can efficiently be worked upon by more junior engineers.
- Make technology choices appropriate to the problem being solved; lay the foundations when a technology is introduced, and bring the whole team along the journey.
- Ensure hygiene practices like clean code, test coverage, performance checks etc. are put in place and gated with automated checks.
- Ensure applications emit standard signals needed for effective monitoring and alerting.
- Own the running of the application once deployed as much as building it, encouraging a DevOps mindset and skills within the team.
- Actively participate in code reviews and encourage using them as a tool for team members to learn from each other.
- Partner with the engineering manager and drive a culture of continuous delivery where almost every single change goes to production on a continuous basis, and the path from code commit to going live is as automated as it can be.
- Help create practices within the team that are inclusive of members working across geographies, time zones and cultures.

About you
Here are some things we expect from you for the role:
- Be highly proficient with the JVM ecosystem, and have worked with Java or Kotlin for at least 5 years.
- Be highly proficient working with containerised 12-factor apps, and tools like Docker and Kubernetes.
- Be highly proficient in building well-decoupled event-driven applications using a technology like Kafka.
- Have prior experience working with serialisation systems like Avro or Protobuf.
- Have prior experience working with at least one relational database and at least one NoSQL database.
- Be familiar with observability concepts - logging, metrics, traces, alerts - and have experience using them to run and maintain production applications.
- Be familiar with methods and concepts in the field of cloud computing and able to put them into practice; have experience in at least one of the following: Azure, Google Cloud, AWS. Knowledge of Terraform to create and maintain infrastructure in the cloud is a bonus.
- Be effective at written and spoken communication, and exhibit the same in writing code that can be well understood by other developers.
- Having previous experience of working with realtime data at massive scale would be desirable.
- Having knowledge about the logistics and supply chain domain would be a bonus.

Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking.
Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing .
Posted 4 weeks ago
5 - 8 years
8 - 12 Lacs
Pune
Work from Office
Hello Visionary!
We empower our people to stay resilient and relevant in a constantly changing world. We're looking for people who are always searching for creative ways to grow and learn. People who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you'd make a great addition to our vibrant team. Siemens founded the new business unit Siemens Foundational Technologies on October 1, 2024, with its headquarters in Munich, Germany. It has been crafted to unlock the digital future of its clients by offering end-to-end support on their outstanding digitalization journey. Siemens Foundational Technologies is a strategic advisor and a trusted implementation partner in digital transformation and industrial IoT with a global network of more than 8,000 employees in 10 countries and 21 offices. Highly skilled and experienced specialists offer services which range from consulting to craft & prototyping to solution & implementation and operation - everything out of one hand.

We are looking for a Senior Software Developer.

You'll make a difference by:
- Experience in the software development lifecycle using Angular 17+, TypeScript 6+, Node.js, Java 17+, Spring Boot 3+ and Maven.
- Experience working with Docker containerization techniques and Kubernetes cluster management on AWS cloud (VPC, Subnet, ELB, Secrets Manager, EBS Snapshots, EC2 Security Groups, ECS, CloudWatch and SQS).
- Experience working with GitLab CI with ArgoCD-based deployment.
- Experience working with data storage using DynamoDB, RDS PostgreSQL and S3.
- Experience in software design and documentation with UML notations based on OOAD.
- Experience with RESTful web services.
- Experience with Keycloak and/or Okta SSO.
- Experience in unit testing with JUnit, Jasmine and Karma, and static code analysis with SonarQube.
- Experience in service monitoring with Grafana, Loki and Prometheus.
- Experience working with Agile technologies and releases.

Good to have:
- Experience with Python 3+ and/or shell script.
- Experience with streaming data with Kafka and Avro serialization.
- Experience in E2E testing with Protractor.
- Experience with IaC using Terraform.

Desired Skills:
- BE / B.Tech / MCA / ME or higher.
- 5 to 8 years of experience is required.
- Good debugging and analytical skills.
- Great communication skills.

Join us and be yourself! We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Make your mark in our exciting world at Siemens. This role is based in Pune and is an individual contributor role. You might be required to visit other locations within India and outside. In return, you'll get the chance to work with teams impacting the shape of things to come.

We're Siemens. A collection of over 379,000 minds building the future, one day at a time in over 200 countries. We're dedicated to equality, and we welcome applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit and business need. Bring your curiosity and imagination and help us shape tomorrow. Find out more about Siemens careers at: www.siemens.com/careers and more about mobility at https://new.siemens.com/global/en/products/mobility.html
Posted 1 month ago
9 - 11 years
37 - 40 Lacs
Ahmedabad, Bengaluru, Mumbai (All Areas)
Work from Office
Dear Candidate,
We are hiring a Scala Developer to work on high-performance distributed systems, leveraging the power of functional and object-oriented paradigms. This role is perfect for engineers passionate about clean code, concurrency, and big data pipelines.

Key Responsibilities:
- Build scalable backend services using Scala and the Play or Akka frameworks.
- Write concurrent and reactive code for high-throughput applications.
- Integrate with Kafka, Spark, or Hadoop for data processing.
- Ensure code quality through unit tests and property-based testing.
- Work with microservices, APIs, and cloud-native deployments.

Required Skills & Qualifications:
- Proficient in Scala, with a strong grasp of functional programming
- Experience with Akka, Play, or Cats
- Familiarity with Big Data tools and RESTful API development
- Bonus: Experience with ZIO, Monix, or Slick

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies
Posted 1 month ago
8 - 11 years
45 - 50 Lacs
Chennai, Noida, Kolkata
Work from Office
Dear Candidate,
We are hiring a Scala Developer to work on scalable data pipelines, distributed systems, and backend services. This role is perfect for candidates passionate about functional programming and big data.

Key Responsibilities:
- Develop data-intensive applications using Scala.
- Work with frameworks like Akka, Play, or Spark.
- Design and maintain scalable microservices and ETL jobs.
- Collaborate with data engineers and platform teams.
- Write clean, testable, and well-documented code.

Required Skills & Qualifications:
- Strong in Scala, functional programming, and JVM internals
- Experience with Apache Spark, Kafka, or Cassandra
- Familiar with SBT, Cats, or Scalaz
- Knowledge of CI/CD, Docker, and cloud deployment tools

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
Posted 2 months ago
5 - 10 years
9 - 19 Lacs
Hyderabad
Work from Office
Roles & Responsibilities:
We are seeking a skilled Senior Backend Java Engineer with experience in Databricks and working with big data file formats (e.g., Parquet, Delta, Avro, ORC) to join our team. This role involves creating and maintaining microservices for our application. The ideal candidate should have a strong background in backend development and a keen understanding of modern data processing frameworks.

Key Responsibilities:
- Design, develop, and maintain high-quality microservices for our application.
- Work with Databricks for data processing and transformation.
- Handle various big data file formats, like Parquet, Delta, Avro, and ORC, for efficient data storage and retrieval.
- Collaborate with cross-functional teams to ensure scalable and maintainable system architecture.
- Optimize applications for performance and scalability in cloud environments.
- Utilize AWS services to build and deploy cloud-native solutions (preferred).

Required Skills and Qualifications:
- Strong expertise in Java and backend development.
- Hands-on experience with Databricks and working with big data file formats such as Parquet, Delta, Avro, or ORC.
- Experience with microservices architecture and related frameworks.
- Familiarity with AWS services or similar cloud solutions is a plus.
- Excellent problem-solving skills and ability to work in a collaborative environment.
Posted 2 months ago
5 - 8 years
15 - 27 Lacs
Hyderabad, Gurgaon, Noida
Work from Office
We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18000+ experts across 36 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That is where you come in!

REQUIREMENTS:
- Expert knowledge of databases like PostgreSQL (preferably cloud-hosted in AWS, Azure, GCP) and Snowflake Data Warehouse, with strong programming experience in SQL.
- Competence in data preparation and/or ETL tools to build and maintain data pipelines and flows.
- Expertise in Python and experience working on ML models.
- Deep knowledge of databases, stored procedures, and optimization of large data sets.
- In-depth knowledge of ingestion techniques, data cleaning, de-duplication, and partitioning.
- Understanding of index design and performance-tuning techniques.
- Familiarity with SQL security techniques such as data encryption at the column level, Transparent Data Encryption (TDE), signed stored procedures, and assignment of user permissions.
- Experience in understanding source data from various platforms and mapping them into Entity Relationship Models (ER) for data integration and reporting.
- Exposure to source control tools like Git, Azure DevOps.
- Understanding of Agile methodologies (Scrum, Kanban).
- Experience with automated testing and coverage tools.
- Experience with CI/CD automation tools (desirable).
- Programming language experience in Golang (desirable).

RESPONSIBILITIES:
- Design and implement Snowflake-based data warehouse solutions.
- Develop and optimize complex SQL queries, stored procedures, and views in Snowflake.
- Build ETL/ELT data pipelines for efficient data processing.
- Work with structured and semi-structured data (JSON, Parquet, Avro) for data ingestion and processing.
- Implement data partitioning, clustering, and performance tuning strategies.
- Manage role-based access control (RBAC), security, and data governance in Snowflake.
- Integrate Snowflake with BI tools (Power BI, Tableau, Looker) for reporting and analytics.
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Build pipelines for optimal extraction, transformation, and loading of data from various sources using SQL and cloud database technologies.
- Prepare ML models for data analysis and prediction.
- Work with stakeholders including Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Ensure data separation and security across national boundaries through multiple data centers and regions.
- Collaborate with data and analytics experts to enhance functionality in our data systems.
- Manage exploratory data analysis to support database and dashboard development.
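Two of the Snowflake responsibilities above, clustering strategy and semi-structured data handling, can be sketched with the Snowflake Python connector; the account, credentials, and object names are placeholders:

```python
# Sketch: apply a clustering key and query a VARIANT (JSON) column in Snowflake.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="etl_user", password="***",  # placeholders
    warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Cluster a large fact table on the columns queries usually filter by,
# so micro-partitions can be pruned.
cur.execute("ALTER TABLE FACT_SALES CLUSTER BY (SALE_DATE, REGION)")

# Query a VARIANT column holding ingested JSON with path notation.
cur.execute("""
    SELECT payload:customer.id::string AS customer_id, COUNT(*) AS events
    FROM RAW_EVENTS
    GROUP BY 1
""")
for customer_id, events in cur.fetchall():
    print(customer_id, events)

cur.close()
conn.close()
```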
Posted 2 months ago
6 - 11 years
8 - 14 Lacs
Chennai
Hybrid
We are seeking an Azure Data Tester to help create test automation by following an existing data automation framework. The role involves validating business rules for audience creation across multiple social media channels. Required candidate profile: validate data flow from disparate sources into various data stores, including Event Hub and Data Lakes. Post-ingestion, data is transformed using Azure Data Factory workflows and stored in Azure Databricks.
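Illustrative only: post-ingestion validation of this kind often reduces to reconciliation checks like the row-count comparison sketched below. The JDBC URLs, table names, and queries are hypothetical placeholders, not part of the framework this role uses.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IngestionCountCheck {
    // Runs a COUNT(*) query and returns the single count value.
    static long count(String jdbcUrl, String sql) throws Exception {
        try (Connection c = DriverManager.getConnection(jdbcUrl);
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(sql)) {
            rs.next();
            return rs.getLong(1);
        }
    }

    public static void main(String[] args) throws Exception {
        // args[0]/args[1]: JDBC URLs of the source system and the
        // post-ingestion store (placeholders).
        long source = count(args[0], "SELECT COUNT(*) FROM source_events");
        long target = count(args[1], "SELECT COUNT(*) FROM curated_events");
        if (source != target) {
            throw new AssertionError("Row count mismatch: " + source + " vs " + target);
        }
        System.out.println("Reconciliation passed: " + source + " rows");
    }
}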
Posted 2 months ago
0 - 2 years
5 - 8 Lacs
Pune
Work from Office
About The Role
Job Title: Big Data AVP (Resume your Résumé)
Location: Pune
Resume your Résumé program description: Whether you've raised a family, set up your own business, or travelled the world - not everyone follows the same life and career trajectory. Our Resume your Résumé India internships are designed to provide opportunities to professionals who are looking to return to work after an extended career break. Throughout the 3-month traineeship you will have the chance to refresh your skills, make new connections, and potentially secure a full-time opportunity upon completing the programme. If you have a background in Finance, Operations, Risk, or Technology, please apply here.
Role Description: A technology-oriented developer is needed to join the Portfolio Analyser team within the Enterprise Risk Technology space, to assist with the implementation of a strategic regulatory risk reporting platform catering to programs such as Historical Simulation, Counterparty Credit Risk, and Stress Testing scenarios. Portfolio Analyser is the in-memory data analytics platform that generates various risk metrics for the bank. We are looking for a strong technologist and self-starter who is comfortable with independent engagement across functions such as Business Analysis and Project Management. Enterprise Risk Technology (ERT) is the technology partner to the Risk divisions of Credit Risk, Market Risk, and Non-Financial Risk. This includes definition of the IT strategy and provision of solutions that allow Risk to manage all aspects of risk, from the analysis of counterparty credit risk to the protection of the bank's infrastructure and information. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on the latest technologies and practices as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion, and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
Description: This role is for a developer with strong core application or system programming skills in Scala and Java, and good exposure to concepts and technology across the broader spectrum. Enterprise Risk Technology covers a variety of existing systems and green-field projects.
Full-stack Hadoop development experience with Scala.
Full-stack Java development experience covering Core Java (including JDK 1.8) and a good understanding of design patterns.
Requirements:
Strong hands-on development in Java technologies.
Strong hands-on development in Hadoop technologies like Spark and Scala, and experience with Avro.
Participation in product feature design and documentation.
Requirement break-up, ownership, and implementation.
Product BAU deliveries and Level 3 production defect fixes.
Qualifications & Experience:
Degree holder in a numerate subject.
Hands-on experience with Hadoop, Spark, Scala, Impala, Avro, and messaging systems like Kafka.
Experience across a core compiled language: Java.
Proficiency in Java-related frameworks like Spring, Hibernate, and JPA.
Hands-on experience with JDK 1.8 and a strong skill set covering Collections and multithreading, with experience working on distributed applications.
Strong hands-on development track record with end-to-end development cycle involvement.
Good exposure to computational concepts.
Good communication and interpersonal skills.
Working knowledge of risk and derivatives pricing (optional).
Proficiency in SQL (PL/SQL) and data modelling.
Understanding of Hadoop architecture and the Scala programming language is good to have.
Advantageous:
Understanding of middleware like Solace.
Understanding of NoSQL.
Experience with data analytics platforms.
Banking experience, particularly in risk.
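For context, a hedged sketch of the kind of in-memory aggregation a platform like Portfolio Analyser performs: rolling trade-level exposures up to counterparty level with Spark's Java API. The Avro path, schema, and metric are hypothetical, not the bank's actual methodology; reading Avro assumes the spark-avro library is on the classpath.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.sum;

public class PortfolioAggregation {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("portfolio-aggregation")
                .getOrCreate();

        // Read trade-level records from Avro files (hypothetical path and schema).
        Dataset<Row> trades = spark.read().format("avro").load("/data/trades");

        // Roll exposures up to counterparty level - the shape of a typical
        // risk-metric aggregation, not the actual risk methodology.
        Dataset<Row> byCounterparty = trades
                .groupBy(col("counterparty_id"))
                .agg(sum(col("exposure")).alias("total_exposure"));

        byCounterparty.show();
        spark.stop();
    }
}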
What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best-in-class leave policy.
Gender-neutral parental leave.
100% reimbursement under the childcare assistance benefit (gender neutral).
Flexible working arrangements.
Sponsorship for industry-relevant certifications and education.
Employee Assistance Program for you and your family members.
Comprehensive hospitalization insurance for you and your dependents.
Accident and term life insurance.
Complimentary health screening for those aged 35 and above.
How we'll support you:
Training and development to help you excel in your career.
Flexible working to help you balance your personal priorities.
Coaching and support from experts in your team.
A culture of continuous learning to aid progression.
A range of flexible benefits that you can tailor to suit your needs.
Our values define the working environment we strive to create: diverse, supportive, and welcoming of different views. We embrace a culture reflecting a variety of perspectives, insights, and backgrounds to drive innovation. We build talented and diverse teams to drive business results and encourage our people to develop to their full potential. Talk to us about flexible work arrangements and other initiatives we offer. We promote good working relationships and encourage high standards of conduct and work performance. We welcome applications from talented people from all cultures, countries, races, genders, sexual orientations, disabilities, beliefs, and generations, and are committed to providing a working environment free from harassment, discrimination, and retaliation.
Posted 2 months ago
1 - 4 years
1 - 5 Lacs
Uttar Pradesh
Work from Office
Strong experience in development with Java/J2EE technologies, Spring Boot, REST APIs, transaction management, and ACID properties.
Strong understanding of OOP concepts and design patterns.
Working experience with microservices using Spring Boot.
Experience with Kafka Streams, the Kafka Producer/Consumer API, and Avro.
Good knowledge of Kafka brokers, ZooKeeper, Schema Registry, topics, etc.
Experience with Azure cloud, Kubernetes, Docker, Helm, etc.
Strong experience with databases such as DB2, Postgres, SQL, or similar.
Good experience in test-driven development.
Experience working with Jira, GitHub, Azure DevOps, and Agile.
Good communication skills.
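For context, a minimal sketch of the Kafka Producer API with Avro that this role calls for, using Confluent's KafkaAvroSerializer against a Schema Registry. The broker and registry addresses, the topic name, and the Order schema are all hypothetical placeholders.

import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroOrderProducer {
    // Hypothetical Avro schema for the example record.
    private static final String SCHEMA_JSON =
        "{\"type\":\"record\",\"name\":\"Order\",\"fields\":[" +
        "{\"name\":\"id\",\"type\":\"string\"}," +
        "{\"name\":\"amount\",\"type\":\"double\"}]}";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Confluent's Avro serializer registers the schema with Schema Registry.
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");

        Schema schema = new Schema.Parser().parse(SCHEMA_JSON);
        GenericRecord order = new GenericData.Record(schema);
        order.put("id", "ORD-1");
        order.put("amount", 99.5);

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            // Send one record to a hypothetical "orders" topic.
            producer.send(new ProducerRecord<>("orders", "ORD-1", order));
        }
    }
}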
Posted 2 months ago
2 - 7 years
4 - 9 Lacs
Andhra Pradesh
Work from Office
JD:
7+ years of hands-on experience in Python, especially with Pandas and NumPy.
Good hands-on experience in Spark, PySpark, and Spark SQL.
Hands-on experience with Databricks: Unity Catalog, Delta Lake, the Lakehouse Platform, and Medallion Architecture.
Experience with Azure Data Factory and ADLS.
Experience dealing with Parquet and JSON file formats.
Knowledge of Snowflake.
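The role names PySpark, but the same Medallion-style hop - raw JSON in a bronze layer cleaned into a silver Delta table - can be sketched with Spark's Java API, used here to keep all the sketches in this document in one language. Paths and column names are hypothetical, and writing Delta assumes the delta-spark library is on the classpath.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class BronzeToSilver {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("bronze-to-silver")
                .getOrCreate();

        // Bronze: raw JSON landed by an upstream pipeline (hypothetical path).
        Dataset<Row> raw = spark.read().json("/mnt/bronze/events");

        // Silver: de-duplicated, column-pruned Delta table.
        raw.dropDuplicates(new String[]{"event_id"})
           .select("event_id", "event_time", "payload")
           .write().format("delta").mode("append").save("/mnt/silver/events");

        spark.stop();
    }
}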
Posted 2 months ago
10 - 15 years
25 - 40 Lacs
Bengaluru
Hybrid
Welcome to HCL Software! We are looking for a Senior Software Engineer to help us build, optimize, and maintain inter-system information flows. As a Senior Software Engineer at HCL Software, you are an integral part of an agile team that works to enhance, build, and deliver trusted technology solutions. A strong academic background in combination with solid coding skills, an understanding of software development methodologies, and good communication skills are key to success. You will design and develop solutions to ensure company information is stored and transferred across system boundaries in an effective, reliable, and secure fashion.
YOUR AREA OF RESPONSIBILITY
As a core technical contributor, you are responsible for designing and implementing critical technology solutions within various business functions.
Design and implement creative software solutions, and conduct technical troubleshooting.
Develop secure, high-quality production code; review and debug code written by your peers.
Drive outcome-oriented workshops and work closely with the various functions to discover and deliver projects that improve our efficiency.
Analyze data storage, data transfers, and associated processes. Identify waste and suggest solutions based on conceptual and logical data models and flowcharts.
YOUR PROFILE
MSc in Computer Science or a relevant adjacent field, plus formal training and certification in software engineering concepts.
5+ years of applied experience delivering system design, application development, testing, and operational stability.
Demonstrable experience with one or more of the following programming languages: TypeScript, Java, Scala, Rust.
Expertise in Avro design, Kafka topics, exception handling, KStreams, and KSQL will help you deliver results quickly.
Proficiency in all aspects of the Software Development Life Cycle.
Advanced understanding of agile practices such as CI/CD, application resiliency, and security.
Knowledge of systems such as Dynamics 365, NetSuite, and SQL, and of one or more ETL tools, is advantageous but not required.
Practical cloud-native experience is an advantage.
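For context, a minimal Kafka Streams sketch of an inter-system information flow of the sort described: filtering records from one system's topic onto another's. The application id and topic names (borrowing the Dynamics 365/NetSuite systems mentioned above) are hypothetical placeholders.

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class IntegrationFlow {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "erp-sync"); // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Route non-empty records from a source system topic to a target
        // system topic; both topic names are placeholders.
        KStream<String, String> source = builder.stream("dynamics365.orders");
        source.filter((key, value) -> value != null && !value.isEmpty())
              .to("netsuite.orders");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        // Close the topology cleanly on shutdown.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}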
Posted 2 months ago
8 - 13 years
10 - 15 Lacs
Chennai
Work from Office
Overall Responsibilities:
Translate application storyboards and use cases into functional applications.
Design, build, and maintain efficient, reusable, and reliable Java code.
Ensure the best possible performance, quality, and responsiveness of applications.
Identify bottlenecks and bugs, and devise solutions to these problems.
Develop high-performance and low-latency components to run on Spark clusters.
Interpret functional requirements into design approaches that can be served through the Big Data platform.
Collaborate and partner with global teams based across different locations.
Propose best practices and standards; hand over to operations.
Perform testing of software prototypes and transfer them to the operational team.
Process data using Hive, Impala, and HBase.
Perform analysis of large data sets and derive insights.
Technical Skills (Category-wise):
Java Development:
Solid understanding of object-oriented programming and design patterns.
Strong Java experience with Java 1.8 or a higher version.
Strong core Java and multithreading working experience.
Understanding of concurrency patterns and multithreading in Java.
Proficient understanding of code versioning tools, such as Git.
Familiarity with build tools such as Maven and continuous integration tools like Jenkins/TeamCity.
Big Data Technologies:
Experience in Big Data technologies like HDFS, Hive, HBase, Apache Spark, and Kafka.
Experience in building self-service, platform-agnostic data access APIs.
Service-oriented architecture and data standards like JSON, Avro, and Parquet.
Experience in building advanced analytical models based on business context.
Data Processing:
Comfortable working with large data volumes and able to understand logical data structures and analysis techniques.
Processing data using Hive, Impala, and HBase.
Strong systems analysis, design, and architecture fundamentals, unit testing, and other SDLC activities.
Application performance tuning and troubleshooting experience, and implementation of these skills in the Big Data domain.
Additional Skills:
Experience in Linux shell scripting.
Experience with RDBMS and NoSQL databases.
Basic Unix OS and scripting knowledge.
Optional: Familiarity with the Arcadia tool for analytics.
Optional: Familiarity with cloud and container technologies.
Experience: 8+ years of relevant experience in Java and Big Data technologies.
Day-to-Day Activities:
Develop and maintain Java code for Big Data applications.
Process and analyze large data sets using Big Data technologies.
Collaborate with global teams to design and implement solutions.
Perform testing and transfer software prototypes to the operational team.
Troubleshoot and resolve performance issues and bugs.
Ensure adherence to best practices and standards in development.
Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, or equivalent experience.
Soft Skills:
Excellent communication and collaboration abilities.
Strong interpersonal and teamwork skills.
Ability to work under pressure and meet tight deadlines.
Positive attitude and strong work ethic.
Commitment to continuous learning and professional development.
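Illustrative only: a minimal, Java 8-compatible sketch of the concurrency patterns this role emphasizes - fanning partition loads out over a thread pool and joining on completion. The partition names and the load step are hypothetical placeholders; in the real system the work would be a Hive/Impala/HBase read.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class ParallelPartitionLoad {
    public static void main(String[] args) {
        // Hypothetical partition keys to process concurrently.
        List<String> partitions = Arrays.asList("2024-01", "2024-02", "2024-03");
        ExecutorService pool = Executors.newFixedThreadPool(4);

        List<CompletableFuture<Void>> tasks = partitions.stream()
                .map(p -> CompletableFuture.runAsync(() -> load(p), pool))
                .collect(Collectors.toList());

        // Wait for all partitions before shutting the pool down.
        CompletableFuture.allOf(tasks.toArray(new CompletableFuture[0])).join();
        pool.shutdown();
    }

    private static void load(String partition) {
        // Placeholder for a Hive/Impala/HBase read in the real system.
        System.out.println("Loaded partition " + partition);
    }
}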
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity and inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative, Same Difference, is committed to fostering an inclusive culture that promotes equality and diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, more successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Posted 3 months ago