
15 JanusGraph Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

8.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Cadence is a pivotal leader in electronic design, leveraging over 30 years of computational software expertise. Our Intelligent System Design approach enables us to provide software, hardware, and IP solutions that bring design concepts to life. Our clientele comprises the most innovative companies globally, developing cutting-edge electronic products for diverse markets such as consumer electronics, hyperscale computing, 5G communications, automotive, aerospace, industrial, and health.

At Cadence, you will have the opportunity to work with the latest technology in a stimulating environment that fosters creativity, innovation, and meaningful contributions. Our employee-centric policies prioritize the well-being of our staff, career growth, continuous learning opportunities, and recognition of achievements tailored to individual needs. The "One Cadence One Team" culture encourages collaboration across teams to ensure customer success. We offer various avenues for learning and development based on your interests and requirements, alongside a diverse team of dedicated professionals committed to exceeding expectations daily.

We are currently seeking a Database Engineer with a minimum of 8 years of experience to join our team in Noida. The ideal candidate should possess expertise in both SQL and NoSQL databases, particularly PostgreSQL and Elasticsearch. A solid understanding of database architecture, performance optimization, and data modeling is essential. Proficiency in graph databases like JanusGraph and in-memory databases is advantageous. Strong skills in C++ and design patterns are required; additional experience in Java and JavaScript is desirable.

Key Responsibilities:
- Hands-on experience in PostgreSQL, including query tuning, indexing, partitioning, and replication (see the sketch after this listing).
- Proficiency in Elasticsearch, covering query DSL, indexing, and cluster management.
- Expertise in SQL and NoSQL databases, with the ability to determine the appropriate database type for specific requirements.
- Proven experience in database performance tuning, scaling, and troubleshooting.
- Familiarity with Object-Relational Mapping (ORM) is a plus.

If you are a proactive Database Engineer looking to contribute your skills to a dynamic team and work on challenging projects at the forefront of electronic design, we encourage you to apply and be part of our innovative journey at Cadence.
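As a hedged illustration of the PostgreSQL skills this listing emphasizes (query tuning and indexing), here is a minimal Python sketch using psycopg2. The connection string, table, and column names are hypothetical, not taken from the posting:

```python
import psycopg2  # assumes: pip install psycopg2-binary

# Hypothetical DSN and schema, for illustration only.
conn = psycopg2.connect("dbname=designdb user=dbeng")
cur = conn.cursor()

# Step 1: inspect the plan of a slow query before tuning.
cur.execute("EXPLAIN ANALYZE SELECT * FROM cells WHERE block_id = %s", (42,))
for (plan_line,) in cur.fetchall():
    print(plan_line)

# Step 2: a targeted index often turns the sequential scan above
# into an index scan; re-run EXPLAIN ANALYZE to confirm.
cur.execute("CREATE INDEX IF NOT EXISTS idx_cells_block ON cells (block_id)")
conn.commit()
cur.close()
conn.close()
```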

Posted 3 days ago

Apply

10.0 years

0 Lacs

Delhi, India

On-site

About Neo4j
Neo4j is the leader in Graph Database & Analytics, helping organizations uncover hidden patterns and relationships across billions of data connections deeply, easily, and quickly. Customers use Neo4j to gain a deeper understanding of their business and reveal new ways of solving their most pressing problems. Over 84% of Fortune 100 companies use Neo4j, along with a vibrant community of 250,000+ developers, data scientists, and architects across the globe.

At Neo4j, we're proud to build the technology that powers breakthrough solutions for our customers. These solutions have helped NASA get to Mars two years earlier, broke the Panama Papers story for the ICIJ, and are helping Transport for London cut congestion by 10% and save $750M a year. Other notable customers include Intuit, Lockheed Martin, Novartis, UBS, and Walmart.

Neo4j experienced rapid growth this year as organizations looking to deploy generative AI (GenAI) recognized graph databases as essential for improving its accuracy, transparency, and explainability. Growth was further fueled by enterprise demand for Neo4j's cloud offering and partnerships with leading cloud hyperscalers and ecosystem leaders. Learn more at neo4j.com and follow us on LinkedIn.

Our Vision
At Neo4j, we have always strived to help the world make sense of data. As business, society, and knowledge become increasingly connected, our technology promotes innovation by helping organizations find and understand data relationships. We created, drive, and lead the graph database category, and we're disrupting how organizations leverage their data to innovate and stay competitive.

The Role
As a Senior Solutions Engineer, you will be a key driver of Neo4j's growth by articulating business value, shaping solutions that address customer needs, and accelerating adoption within enterprises. You will engage in deep technical and business discovery, support the sales process through value-based selling, and collaborate with key partners, including hyperscalers (AWS, Azure, GCP) and strategic ISVs, to drive joint go-to-market (GTM) initiatives.

Responsibilities

Sales Process & Customer Engagement:
- Lead technical discovery to understand customer challenges, pain points, and success criteria, ensuring solutions align with business objectives.
- Drive value-based selling, clearly articulating Neo4j's unique differentiators and ROI to both technical and business stakeholders.
- Own the pre-sales engagement, delivering impactful presentations, live demonstrations, and competitive positioning.
- Map customer requirements to relevant Neo4j product capabilities, using whiteboarding, workshops, and tailored solution architectures.
- Develop proof-of-value (PoV) demonstrations and prototypes that showcase tangible business outcomes (see the sketch after this listing).
- Work closely with the sales team to support deal qualification, opportunity progression, and conversion strategies.
- Ensure smooth handoff to Professional Services, providing knowledge transfer to consulting teams and industry partners for successful implementation.

Partner & Ecosystem Collaboration:
- Work closely with key partners, including hyperscalers (AWS, Azure, GCP), ISVs, and system integrators, to drive joint solutions and co-sell initiatives.
- Develop and deliver enablement programs for Neo4j's strategic partners, ensuring they have the technical skills and sales acumen to position Neo4j effectively.
- Collaborate on GTM strategies, helping design solutions that combine hyperscaler services (e.g., AWS Neptune, Azure Cosmos DB, Google BigQuery) with Neo4j to meet customer needs.
- Engage with partner sales and technical teams to drive adoption, influence deal cycles, and execute joint marketing initiatives.

Community & Thought Leadership:
- Support local marketing events, user groups, and Neo4j Community initiatives.
- Contribute to Neo4j's thought leadership by delivering webinars, writing technical blogs, and speaking at industry conferences.

Relevant Skills And Experience
- Deep expertise in value selling, with a strong understanding of the sales process and how to align solutions with customer business goals.
- Experience in technical discovery, consultative solutioning, and mapping customer needs to technology capabilities.
- Graph database expertise (Neo4j, OrientDB, JanusGraph, etc.), along with knowledge of relational/NoSQL databases, analytics, and data integration.
- Strong experience working with hyperscalers (AWS, Azure, GCP), including understanding of their data services and GTM strategies.
- Proficiency in Java, Python, JavaScript, Docker, Kubernetes, and cloud infrastructure technologies.
- Knowledge of data visualization tools (D3.js, Sigma.js, Linkurious, etc.).
- Excellent presentation and communication skills, capable of engaging technical teams, C-level executives, and partner organizations.
- Ability to work in fast-paced, sales-driven environments, managing multiple customer and partner engagements.
- Industry experience in Financial Services, Retail, Government, Security, or Telecommunications is a plus.

Qualifications
- 7–10+ years of pre-sales, professional services, or customer-facing technical experience.
- Bachelor's or Master's degree in a relevant field.
- Experience working with partners, including hyperscalers, ISVs, and system integrators.
- Ability to work independently in a cross-functional, global organization.
- Willingness to travel as needed for customer and partner engagements.
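As a hedged sketch of what a small proof-of-value (PoV) demo might look like in Python with the official neo4j driver (5.x API), where the URI, credentials, and supply-chain schema are all hypothetical:

```python
from neo4j import GraphDatabase  # assumes: pip install neo4j

# Hypothetical demo instance and credentials.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def shared_suppliers(tx, company):
    # Toy supply-chain question: which companies share suppliers with `company`?
    result = tx.run(
        "MATCH (c:Company {name: $name})-[:BUYS_FROM]->(s:Supplier)"
        "<-[:BUYS_FROM]-(other:Company) "
        "RETURN other.name AS name, count(s) AS shared ORDER BY shared DESC",
        name=company,
    )
    return [(record["name"], record["shared"]) for record in result]

with driver.session() as session:
    for name, shared in session.execute_read(shared_suppliers, "Acme"):
        print(name, shared)
driver.close()
```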

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position Summary
A Big Data (Hadoop) Administrator responsible for supporting the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, automation, and scripting (e.g., Ansible, Azure DevOps, Shell, Python) to streamline operations and improve efficiency is highly valued.

Job Responsibilities
- Assist in the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux.
- Perform routine monitoring, troubleshooting, and issue resolution to ensure the stability and performance of Hadoop clusters.
- Develop and maintain scripts (e.g., Python, Bash, Ansible) to automate operational tasks and improve system efficiency (see the sketch after this listing).
- Collaborate with cross-functional teams, including application development, infrastructure, and operations, to support business requirements and implement new features.
- Implement and follow best practices for cluster security, including user access management and integration with tools like Apache Ranger and Kerberos.
- Support backup, recovery, and disaster recovery processes to ensure data availability and business continuity.
- Conduct performance tuning and optimization of Hadoop clusters to enhance system efficiency and reduce latency.
- Analyze logs and use tools like Splunk to debug and resolve production issues.
- Document operational processes, maintenance procedures, and troubleshooting steps to ensure knowledge sharing and consistency.
- Stay updated on emerging technologies and contribute to the adoption of new tools and practices to improve cluster management.

Education
Bachelor's degree in Computer Science, Information Systems, or another related field with 7+ years of IT and infrastructure engineering work experience.

Experience
7+ years total IT experience and 4+ years relevant experience in Big Data databases.

Technical Skills
- Big Data Platform Management: Knowledge in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM Big SQL.
- Automation and Scripting: Expertise in automation tools and scripting languages such as Ansible, Python, and Bash to streamline operational tasks and improve efficiency.
- DevOps Practices: Proficiency in DevOps tools and methodologies, including CI/CD pipelines, version control systems (e.g., Git), and infrastructure-as-code practices.
- Monitoring and Troubleshooting: Experience with monitoring and observability tools such as Splunk, Elastic Stack, or Prometheus to identify and resolve system issues.
- Linux Administration: Solid knowledge of Linux operating systems, including system administration, troubleshooting, and performance tuning.
- Backup and Recovery: Familiarity with implementing and managing backup and recovery processes to ensure data availability and business continuity.
- Security and Access Management: Understanding of security best practices, including user access management and integration with tools like Kerberos.
- Agile Methodologies: Knowledge of Agile practices and frameworks, such as SAFe, with experience working in Agile environments.
- ITSM Tools: Familiarity with ITSM processes and tools like ServiceNow for incident and change management.

Other Critical Requirements
- Excellent analytical and problem-solving skills.
- Ability to work in a 24x7 rotational shift to support Hadoop platforms and ensure high availability.
- Excellent written and oral communication skills, including the ability to clearly articulate technical and functional issues, with conclusions and recommendations, to stakeholders.
- Prior experience in handling onshore and offshore stakeholders.
- Experience in creating and delivering business presentations.
- Demonstrated ability to work independently and in a team environment.
- Demonstrated willingness to learn and adopt new technologies and tools to improve operational efficiency.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
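As a hedged flavor of the scripting these responsibilities describe, here is a minimal Python sketch that automates one routine health check. It assumes an edge node with the hdfs CLI on PATH; the report parsing is illustrative and may need adjusting per Hadoop version:

```python
import subprocess
import sys

def hdfs_report() -> str:
    # Shell out to the HDFS admin CLI; requires a configured client.
    result = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def dead_datanodes(report: str) -> int:
    # Assumed report format: a line like "Dead datanodes (2):".
    for line in report.splitlines():
        if line.strip().startswith("Dead datanodes"):
            return int(line.split("(")[1].split(")")[0])
    return 0

if __name__ == "__main__":
    dead = dead_datanodes(hdfs_report())
    if dead:
        print(f"ALERT: {dead} dead datanode(s)", file=sys.stderr)
        sys.exit(1)
    print("HDFS healthy")
```

A check like this would typically be scheduled via cron or wrapped in an Ansible playbook rather than run by hand.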

Posted 2 weeks ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate,

We are looking for a Svelte Developer to build lightweight, reactive web applications with excellent performance and maintainability.

Key Responsibilities:
- Design and implement applications using Svelte and SvelteKit.
- Build reusable components and libraries for future use.
- Optimize applications for speed and responsiveness.
- Collaborate with design and backend teams to create cohesive solutions.

Required Skills & Qualifications:
- 8+ years of experience with Svelte or similar reactive frameworks.
- Strong understanding of JavaScript, HTML, CSS, and reactive programming concepts.
- Familiarity with SSR and JAMstack architectures.
- Experience integrating RESTful APIs or GraphQL endpoints.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies

Posted 1 month ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Summary
A Big Data (Hadoop) Administrator responsible for supporting the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, automation, and scripting (e.g., Ansible, Azure DevOps, Shell, Python) to streamline operations and improve efficiency is highly valued.

Job Responsibilities
- Assist in the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux.
- Perform routine monitoring, troubleshooting, and issue resolution to ensure the stability and performance of Hadoop clusters.
- Develop and maintain scripts (e.g., Python, Bash, Ansible) to automate operational tasks and improve system efficiency.
- Collaborate with cross-functional teams, including application development, infrastructure, and operations, to support business requirements and implement new features.
- Implement and follow best practices for cluster security, including user access management and integration with tools like Apache Ranger and Kerberos.
- Support backup, recovery, and disaster recovery processes to ensure data availability and business continuity.
- Conduct performance tuning and optimization of Hadoop clusters to enhance system efficiency and reduce latency.
- Analyze logs and use tools like Splunk to debug and resolve production issues.
- Document operational processes, maintenance procedures, and troubleshooting steps to ensure knowledge sharing and consistency.
- Stay updated on emerging technologies and contribute to the adoption of new tools and practices to improve cluster management.

Education
Bachelor's degree in Computer Science, Information Systems, or another related field with 7+ years of IT and infrastructure engineering work experience.

Experience
7+ years total IT experience and 4+ years relevant experience in Big Data databases.

Technical Skills
- Big Data Platform Management: Knowledge in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM Big SQL.
- Automation and Scripting: Expertise in automation tools and scripting languages such as Ansible, Python, and Bash to streamline operational tasks and improve efficiency.
- DevOps Practices: Proficiency in DevOps tools and methodologies, including CI/CD pipelines, version control systems (e.g., Git), and infrastructure-as-code practices.
- Monitoring and Troubleshooting: Experience with monitoring and observability tools such as Splunk, Elastic Stack, or Prometheus to identify and resolve system issues.
- Linux Administration: Solid knowledge of Linux operating systems, including system administration, troubleshooting, and performance tuning.
- Backup and Recovery: Familiarity with implementing and managing backup and recovery processes to ensure data availability and business continuity.
- Security and Access Management: Understanding of security best practices, including user access management and integration with tools like Kerberos.
- Agile Methodologies: Knowledge of Agile practices and frameworks, such as SAFe, with experience working in Agile environments.
- ITSM Tools: Familiarity with ITSM processes and tools like ServiceNow for incident and change management.

Other Critical Requirements
- Excellent analytical and problem-solving skills.
- Ability to work in a 24x7 rotational shift to support Hadoop platforms and ensure high availability.
- Excellent written and oral communication skills, including the ability to clearly articulate technical and functional issues, with conclusions and recommendations, to stakeholders.
- Prior experience in handling onshore and offshore stakeholders.
- Experience in creating and delivering business presentations.
- Demonstrated ability to work independently and in a team environment.
- Demonstrated willingness to learn and adopt new technologies and tools to improve operational efficiency.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!

Posted 1 month ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Country: India
Working Schedule: Full-Time
Work Arrangement: Hybrid
Relocation Assistance Available: No
Posted Date: 23-Jun-2025
Job ID: 10076

Summary
A Big Data (Hadoop) Administrator responsible for supporting the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, automation, and scripting (e.g., Ansible, Azure DevOps, Shell, Python) to streamline operations and improve efficiency is highly valued.

Job Responsibilities
- Assist in the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux.
- Perform routine monitoring, troubleshooting, and issue resolution to ensure the stability and performance of Hadoop clusters.
- Develop and maintain scripts (e.g., Python, Bash, Ansible) to automate operational tasks and improve system efficiency.
- Collaborate with cross-functional teams, including application development, infrastructure, and operations, to support business requirements and implement new features.
- Implement and follow best practices for cluster security, including user access management and integration with tools like Apache Ranger and Kerberos.
- Support backup, recovery, and disaster recovery processes to ensure data availability and business continuity.
- Conduct performance tuning and optimization of Hadoop clusters to enhance system efficiency and reduce latency.
- Analyze logs and use tools like Splunk to debug and resolve production issues.
- Document operational processes, maintenance procedures, and troubleshooting steps to ensure knowledge sharing and consistency.
- Stay updated on emerging technologies and contribute to the adoption of new tools and practices to improve cluster management.

Education
Bachelor's degree in Computer Science, Information Systems, or another related field with 7+ years of IT and infrastructure engineering work experience.

Experience
7+ years total IT experience and 4+ years relevant experience in Big Data databases.

Technical Skills
- Big Data Platform Management: Knowledge in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM Big SQL.
- Automation and Scripting: Expertise in automation tools and scripting languages such as Ansible, Python, and Bash to streamline operational tasks and improve efficiency.
- DevOps Practices: Proficiency in DevOps tools and methodologies, including CI/CD pipelines, version control systems (e.g., Git), and infrastructure-as-code practices.
- Monitoring and Troubleshooting: Experience with monitoring and observability tools such as Splunk, Elastic Stack, or Prometheus to identify and resolve system issues.
- Linux Administration: Solid knowledge of Linux operating systems, including system administration, troubleshooting, and performance tuning.
- Backup and Recovery: Familiarity with implementing and managing backup and recovery processes to ensure data availability and business continuity.
- Security and Access Management: Understanding of security best practices, including user access management and integration with tools like Kerberos.
- Agile Methodologies: Knowledge of Agile practices and frameworks, such as SAFe, with experience working in Agile environments.
- ITSM Tools: Familiarity with ITSM processes and tools like ServiceNow for incident and change management.

Other Critical Requirements
- Excellent analytical and problem-solving skills.
- Ability to work in a 24x7 rotational shift to support Hadoop platforms and ensure high availability.
- Excellent written and oral communication skills, including the ability to clearly articulate technical and functional issues, with conclusions and recommendations, to stakeholders.
- Prior experience in handling onshore and offshore stakeholders.
- Experience in creating and delivering business presentations.
- Demonstrated ability to work independently and in a team environment.
- Demonstrated willingness to learn and adopt new technologies and tools to improve operational efficiency.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!

Posted 1 month ago

Apply

14.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position Summary
A highly skilled Big Data (Hadoop) Administrator responsible for the installation, configuration, engineering, and architecture of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, scripting, and infrastructure-as-code for automating and optimizing operations is highly desirable. Experience collaborating with cross-functional teams, including application development, infrastructure, and operations, is highly preferred.

Job Responsibilities
- Manages the design, distribution, performance, replication, security, availability, and access requirements for large and complex Big Data clusters.
- Designs and develops the architecture and configurations to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, cluster changes, and compliance.
- Identifies and resolves issues utilizing structured tools and techniques.
- Provides technical assistance and mentoring to staff in all aspects of Hadoop cluster management; consults and advises application development teams on security, query optimization, and performance.
- Writes scripts to automate routine cluster management tasks and documents maintenance processing flows per standards.
- Implements industry best practices while performing Hadoop cluster administration tasks.
- Works in an Agile model with a strong understanding of Agile concepts.
- Collaborates with development teams to provide and implement new features.
- Debugs production issues by analyzing logs directly and using tools like Splunk and Elastic.
- Addresses organizational obstacles to enhance processes and workflows.
- Adopts and learns new technologies based on demand, and supports team members by coaching and assisting.

Education
Bachelor's degree in Computer Science, Information Systems, or another related field with 14+ years of IT and infrastructure engineering work experience.

Experience
14+ years total IT experience and 10+ years relevant experience in Big Data databases.

Technical Skills
- Big Data Platform Management: Expertise in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM Big SQL.
- Data Infrastructure & Security: Proficient in designing and implementing robust data infrastructure solutions with a strong focus on data security, utilizing tools like Apache Ranger and Kerberos.
- Performance Tuning & Optimization: Skilled in performance tuning and optimization of big data environments, leveraging advanced techniques to enhance system efficiency and reduce latency.
- Backup & Recovery: Experienced in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity.
- Linux & Troubleshooting: Strong knowledge of Linux operating systems, with proven ability to troubleshoot and resolve complex technical issues, collaborating effectively with cross-functional teams.
- DevOps & Scripting: Proficient in scripting and automation using tools like Ansible, enabling seamless integration and automation of cluster operations. Experienced in infrastructure-as-code practices and observability tools such as Elastic.
- Agile & Collaboration: Strong understanding of Agile SAFe for teams, with the ability to work effectively in Agile environments and collaborate with cross-functional teams.
- ITSM Process & Tools: Knowledgeable in ITSM processes and tools such as ServiceNow.

Other Critical Requirements
- Automation and Scripting: Proficiency in automation tools and programming languages such as Ansible and Python to streamline operations and improve efficiency.
- Analytical and Problem-Solving Skills: Strong analytical and problem-solving abilities to address complex technical challenges in a dynamic enterprise environment.
- 24x7 Support: Ability to work in a 24x7 rotational shift to support Hadoop platforms and ensure high availability.
- Team Management and Leadership: Proven experience managing geographically distributed and culturally diverse teams, with strong leadership, coaching, and mentoring skills.
- Communication Skills: Exceptional written and oral communication skills, with the ability to clearly articulate technical and functional issues, conclusions, and recommendations to stakeholders at all levels.
- Stakeholder Management: Prior experience in effectively managing both onshore and offshore stakeholders, ensuring alignment and collaboration across teams.
- Business Presentations: Skilled in creating and delivering impactful business presentations to communicate key insights and recommendations.
- Collaboration and Independence: Demonstrated ability to work independently as well as collaboratively within a team environment, ensuring successful project delivery in a complex enterprise setting.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!

Posted 1 month ago

Apply

8.0 - 13.0 years

0 - 1 Lacs

Gurugram

Work from Office

Design and build telemetry ingestion pipelines: architect a high-throughput telemetry collector that ingests over multiple protocols and publishes validated events into streaming platforms (see the sketch below). Develop real-time detection pipelines, and build REST APIs, an over-the-air (OTA) rule delivery system, and a threat graph engine.

Required Candidate Profile
- Experience implementing monitoring, observability, and resilience engineering practices.
- Leadership skills, including mentoring junior engineers and driving backend engineering practices.
- Excellent communication skills for technical documentation; backend experience.
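A hedged sketch of the "validate, then publish into a streaming platform" step the listing describes, using kafka-python; the broker address, topic name, and event schema are hypothetical:

```python
import json
from kafka import KafkaProducer  # assumes: pip install kafka-python

# Hypothetical broker and topic for a telemetry collector.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

REQUIRED_FIELDS = {"device_id", "ts", "metric", "value"}  # assumed schema

def publish(event: dict) -> None:
    # Validate before publishing so downstream real-time detection
    # pipelines can rely on a consistent event shape.
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        raise ValueError(f"event missing fields: {sorted(missing)}")
    producer.send("telemetry.events", value=event)

publish({"device_id": "veh-001", "ts": 1718000000, "metric": "speed", "value": 72.5})
producer.flush()
```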

Posted 1 month ago

Apply

12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description

What We Do
At Goldman Sachs, our Engineers don't just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low-latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering, comprising our Technology Division and global strategists groups, is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here.

Who We Look For
Goldman Sachs Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile, and more. We look for creative collaborators who evolve, adapt to change, and thrive in a fast-paced global environment.

Roles And Responsibilities
An individual in this role is responsible for the design, development, deployment, and support of products and platforms that leverage Java-based technologies and enable large-scale event processing in engineering products at GS. The individual will engage in both server-side and front-end development as required to achieve the desired outcomes. Specific responsibilities include:
- Design component and integration architecture for large-scale web applications.
- Develop, test, and support features for globally deployed web apps.
- Follow best practices throughout the project lifecycle.
- Participate in team-wide design and code reviews.
- Keep abreast of emerging technical trends so their applicability to GS products can be determined.

Qualifications
- Bachelor's Degree (or equivalent or higher) in Computer Science, Information Technology, or Electronics and Communication.
- Overall 7–12 years of experience, with a minimum of 5 years developing Java-based applications.

Essential Technical Skills
- Strong programming skills in Java and Python, with proficiency in object-oriented design principles.
- Experience with Java frameworks such as DropWizard, Spring, and Hibernate.
- Familiarity with web development frameworks (Angular or React).
- Experience with testing frameworks (JUnit, TestNG, Cucumber, Mockito).
- Hands-on experience building stream-processing systems using Hadoop, Spark, and related technologies.
- Familiarity with distributed storage systems like Cassandra, MongoDB, and JanusGraph (see the sketch after this listing).
- Experience with messaging systems such as Kafka or RabbitMQ.
- Experience with caching solutions like Hazelcast, Redis, or Memcached.
- Knowledge of build tools like Maven or Gradle.
- Familiarity with continuous integration and continuous deployment (CI/CD) pipelines, especially using Git.
- Working knowledge of Unix/Linux.
- Strong problem-solving skills and attention to detail.

Soft Skills
- Strong communication skills with a track record of working and collaborating with global teams.
- Ability to handle multiple ongoing assignments and to work independently, in addition to contributing as part of a highly collaborative and globally dispersed team.
- Strong analytical skills with the ability to break down and communicate complex issues, ideas, and solutions.
- Thorough knowledge of and experience in all phases of the SDLC.

Additional Skills (Advantage)
- Working knowledge of enterprise database systems (Sybase or DB2).
- Programming in Perl, Python, and shell script.
- Knowledge and experience in building conversational user interfaces enabled by AI.

About Goldman Sachs
At Goldman Sachs, we commit our people, capital, and ideas to help our clients, shareholders, and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities, and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings, and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We're committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html

Please note that our firm has adopted a COVID-19 vaccination requirement for employees who work onsite at any of our U.S. locations to safeguard the health and well-being of all our employees and others who enter our U.S. offices. This role requires the employee to be able to work on-site. As a condition of employment, employees working on-site at any of our U.S. locations are required to be fully vaccinated for COVID-19, and to have either had COVID-19 or received a booster dose if eligible under Centers for Disease Prevention and Control (CDC) guidance, unless prohibited by applicable federal, state, or local law. Applicants who wish to request a medical or religious accommodation, or any other accommodation required under applicable law, can do so later in the process. Please note that accommodations are not guaranteed and are decided on a case-by-case basis.

© The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.
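Since the listing names JanusGraph among its distributed storage systems, here is a hedged sketch of querying it from Python with gremlinpython; the server endpoint, vertex label, edge label, and property keys are hypothetical:

```python
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __

# Hypothetical JanusGraph Server endpoint; "g" is the default traversal source.
connection = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")
g = traversal().withRemote(connection)

# Toy risk-style lookup: account ids within two hops of a flagged account.
nearby = (
    g.V().has("account", "accountId", "acct-123")
    .repeat(__.both("transacted_with"))
    .times(2)
    .dedup()
    .values("accountId")
    .toList()
)
print(nearby)
connection.close()
```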

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Location: In-Person (sftwtrs.ai Lab)
Experience Level: Early Career / 1–3 years

About sftwtrs.ai
sftwtrs.ai is a leading AI lab focused on security automation, adversarial machine learning, and scalable AI-driven solutions for enterprise clients. Under the guidance of our Principal Scientist, we combine cutting-edge research with production-grade development to deliver next-generation AI products in cybersecurity and related domains.

Role Overview
As a Research Engineer I, you will work closely with our Principal Scientist and Senior Research Engineers to ideate, prototype, and implement AI/ML models and pipelines. This role bridges research and software development: you'll both explore novel algorithms (especially in adversarial ML and security automation) and translate successful prototypes into robust, maintainable code. This position is ideal for someone who is passionate about pushing the boundaries of AI research while also possessing strong software engineering skills.

Key Responsibilities

Research & Prototyping:
- Dive into state-of-the-art AI/ML literature (particularly adversarial methods, anomaly detection, and automation in security contexts).
- Rapidly prototype novel model architectures, training schemes, and evaluation pipelines.
- Design experiments, run benchmarks, and analyze results to validate research hypotheses.

Software Development & Integration:
- Collaborate with DevOps and MLOps teams to containerize research prototypes (e.g., Docker, Kubernetes).
- Develop and maintain production-quality codebases in Python (TensorFlow, PyTorch, scikit-learn, etc.).
- Implement data pipelines for training and inference: data ingestion, preprocessing, feature extraction, and serving.

Collaboration & Documentation:
- Work closely with the Principal Scientist and cross-functional stakeholders (DevOps, Security Analysts, QA) to align on research objectives and engineering requirements.
- Author clear, concise documentation: experiment summaries, model design notes, code review comments, and API specifications.
- Participate in regular code reviews, design discussions, and sprint planning sessions.

Model Deployment & Monitoring:
- Assist in deploying models to staging or production environments; integrate with internal tooling (e.g., MLflow, Kubeflow, or a custom MLOps stack).
- Implement automated model-monitoring scripts to track performance drift, data quality, and security compliance metrics.
- Troubleshoot deployment issues; optimize inference pipelines for latency and throughput.

Continuous Learning & Contribution:
- Stay current with AI/ML trends; present findings to the team and propose new research directions.
- Contribute to open-source libraries or internal frameworks as needed (e.g., adding new modules to our adversarial-ML toolkit).
- Mentor interns or junior engineers on machine learning best practices and coding standards.

Qualifications

Education: Bachelor's or Master's degree in Computer Science, Electrical Engineering, Data Science, or a closely related field.

Research Experience:
- 1–3 years of hands-on experience in AI/ML research or equivalent internships.
- Familiarity with adversarial machine learning concepts (evasion attacks, poisoning attacks, adversarial training; see the sketch after this listing).
- Exposure to security-related ML tasks (e.g., anomaly detection in logs, malware classification using neural networks) is a strong plus.

Development Skills:
- Proficient in Python, with solid experience using at least one major deep-learning framework (TensorFlow 2.x, PyTorch).
- Demonstrated ability to write clean, modular, and well-documented code (PEP 8 compliant).
- Experience building data pipelines (using pandas, Apache Beam, or equivalent) and integrating with RESTful APIs.

Software Engineering Practices:
- Familiarity with version control (Git), CI/CD pipelines, and containerization (Docker).
- Comfortable writing unit tests (pytest or unittest) and conducting code reviews.
- Understanding of cloud services (AWS, GCP, or Azure) for training and serving models.

Analytical & Collaborative Skills:
- Strong problem-solving mindset, attention to detail, and ability to work under tight deadlines.
- Excellent written and verbal communication skills; able to present technical concepts clearly to both research and engineering audiences.
- Demonstrated ability to collaborate effectively in a small, agile team.

Preferred Skills (Not Mandatory)
- Experience with MLOps tools (MLflow, Kubeflow, or TensorFlow Extended).
- Hands-on knowledge of graph databases (e.g., JanusGraph, Neo4j) or NLP techniques (transformer models, embeddings).
- Familiarity with security compliance standards (HIPAA, GDPR) and secure software development practices.
- Exposure to Rust or Go for high-performance inference code.
- Contributions to open-source AI or security automation projects.

Why Join Us?
- Cutting-Edge Research & Production Impact: Work on adversarial ML and security-automation projects that go from concept to real-world deployment.
- Hands-On Mentorship: Collaborate directly with our Principal Scientist and Senior Engineers, learning best practices in both research methodology and production engineering.
- Innovative Environment: Join a lean, highly specialized team where your contributions are immediately visible and valued.
- Professional Growth: Access to conferences, lab resources, and continuous learning opportunities in AI, cybersecurity, and software development.
- Competitive Compensation & Benefits: Attractive salary, health insurance, and opportunities for performance-based bonuses.

How to Apply
Please send a résumé/CV, a brief cover letter outlining relevant AI/ML projects, and any GitHub or portfolio links to careers@sftwtrs.ai with the subject line "RE: Research Engineer I Application."

sftwtrs.ai is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
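The role centers on adversarial ML; as a hedged, minimal example of the evasion attacks it names, here is a one-step Fast Gradient Sign Method (FGSM) sketch in PyTorch. The model and input batch are assumed to be any image classifier with inputs in [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                eps: float = 8 / 255) -> torch.Tensor:
    """One-step FGSM evasion attack (illustrative sketch)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with any classifier and labeled batch:
# model.eval()
# x_adv = fgsm_attack(model, images, labels)
# adv_acc = (model(x_adv).argmax(dim=1) == labels).float().mean()
```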

Posted 1 month ago

Apply

17.0 - 19.0 years

0 Lacs

Andhra Pradesh

On-site

Software Engineering Associate Director - HIH - Evernorth

About Evernorth
Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Position Summary
The Software Development Associate Director provides hands-on leadership, management, and thought leadership for a delivery organization enabling Cigna's Technology teams. This individual will lead a team based in our Hyderabad Innovation Hub to deliver innovative solutions supporting multiple business and technology domains within Cigna, including the Sales & Underwriting, Producer, Service Operations, and Pharmacy business lines, as well as testing and DevOps enablement. The focus of the team is to build innovative go-to-market solutions enabling the business while modernizing our existing asset base to support business growth. The technology strategy is aligned to our business strategy, and the candidate will not only influence technology direction but also establish our team through recruiting and mentoring employees and vendor resources. This is a hands-on position with visibility to the highest levels of the Cigna Technology team. This leader will focus on enabling innovation using the latest technologies and development techniques, and on rapidly building out a scalable delivery organization that aligns with all areas within the Technology team. The ideal candidate will be able to attract and develop talent in a highly dynamic environment.

Job Description & Responsibilities
- Provide leadership, vision, and design direction for the quality and development of the US Medical and Health Services Technology teams based at the Hyderabad Innovation Hub (HIH).
- Work in close coordination with leaders and teams based in the United States, as well as contractors employed by the US Medical and Health Services Technology team who are based both within and outside of the United States, to deliver products and capabilities in support of Cigna's business lines.
- Provide leadership to HIH leaders and teams, ensuring the team meets the following objectives: design, configuration, and implementation of application design/development and quality engineering within the supported technologies and products.
- Hands-on people management, with experience leading agile teams of highly talented technology professionals developing large solutions and internal-facing applications; work closely with developers, quality engineers, technical project managers, principal engineers, and business stakeholders to ensure that application solutions meet business and customer requirements.
- A servant-leader mentality and a history of creating an inclusive environment, fostering diverse views and approaches from the team, and coaching and mentoring them to thrive in a dynamic workplace.
- A history of embracing and incubating emerging technology and open-source products.
- A passion for building highly resilient, scalable, and available platforms, rich reusable foundational capabilities, and a seamless developer experience, while focusing on strategic vision and technology roadmap delivery in an MVP-based, iterative, fast-paced approach.
- Accountable for driving toward timely decisions while influencing across engineering and delivery teams to meet project timelines while balancing the destination state.
- Ensure engineering solutions align with the technology strategy and support the application's requirements.
- Plan and implement procedures that will maximize engineering and operating efficiency for application integration technologies.
- Identify and drive process improvement opportunities.
- Proactively design monitoring and management of supported assets, assuring performance, availability, security, and capacity.
- Maximize the efficiency (operational, performance, and cost) of the application assets.

Experience Required
- 17 to 19 years of IT and business/industry or equivalent experience preferred, with at least 5 years in a leadership role responsible for the delivery of large-scale projects and programs.
- Leadership, cross-cultural communication, and familiarity with a wide range of technologies and stakeholders.
- Strong emotional intelligence, with the ability to foster collaboration across geographically dispersed teams.

Experience Desired
- Recognized leader with a proven track record of delivering software engineering initiatives and cross-IT/business initiatives.
- Proven experience leading and managing technical teams, with a passion for developing talent within the team.
- Experience with vendor management in an onshore/offshore model.
- Experience in healthcare, pharmacy, and/or underwriting systems.
- Experience with AWS.

Education and Training Required
B.S. degree in Computer Science, Information Systems, or another related degree; industry certifications such as AWS Solution Architect, PMP, Scrum Master, or Six Sigma Green Belt are also ideal.

Primary Skills
- Familiarity with most of the following application development technologies: Python, RESTful services, React, Angular, Postgres, and MySQL (relational database management systems).
- Familiarity with most of the following data engineering technologies: Databricks, Spark, PySpark, SQL, Teradata, and multi-cloud environments.
- Familiarity with most of the following cloud and emerging technologies: AWS, LLMs (OpenAI, Anthropic), vector databases (Pinecone, Milvus), graph databases (Neo4j, JanusGraph, Neptune), prompt engineering, and fine-tuning AI models.
- Familiarity with the enterprise software development lifecycle, including production reviews and ticket resolution, navigating freeze/stability periods effectively, total-cost-of-ownership reporting, and updating applications to align with evolving security and cloud standards.
- Familiarity with agile methodology, including SCRUM team leadership or Scaled Agile (SAFe).
- Familiarity with modern delivery practices such as continuous integration, behavior/test-driven development, and specification by example.
- Deep people and matrix management skills, with a heavy emphasis on coaching and mentoring of less senior staff, and a strong ability to influence VP-level leaders.
- Proven ability to resolve issues and mitigate risks that could undermine the delivery of critical initiatives.
- Strong written and verbal communication skills, with the ability to interact with all levels of the organization.
- Strong influencing/negotiation, interpersonal/relationship management, and time and project management skills.
About Evernorth Health Services Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.

Posted 1 month ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position Summary
A skilled Big Data (Hadoop) Administrator responsible for the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Proficiency in DevOps practices, scripting, and infrastructure-as-code for automating routine tasks and improving operational efficiency is desirable. Experience working with cross-functional teams, including application development, infrastructure, and operations, is preferred.

Job Responsibilities
- Manages the design, distribution, performance, replication, security, availability, and access requirements for large and complex Big Data clusters.
- Designs and develops the architecture and configurations to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, cluster changes, and compliance.
- Identifies and resolves issues utilizing structured tools and techniques.
- Provides technical assistance and mentoring to staff in all aspects of Hadoop cluster management; consults and advises application development teams on security, query optimization, and performance.
- Writes scripts to automate routine cluster management tasks and documents maintenance processing flows per standards.
- Implements industry best practices while performing Hadoop cluster administration tasks.
- Works in an Agile model with a strong understanding of Agile concepts.
- Collaborates with development teams to provide and implement new features.
- Debugs production issues by analyzing logs directly and using tools like Splunk and Elastic.
- Addresses organizational obstacles to enhance processes and workflows.
- Adopts and learns new technologies based on demand, and supports team members by coaching and assisting.

Education
Bachelor's degree in Computer Science, Information Systems, or another related field with 10+ years of IT and infrastructure engineering work experience.

Experience
10+ years total IT experience and 7+ years relevant experience in Big Data databases.

Technical Skills
- Big Data Platform Management: Expertise in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM Big SQL.
- Data Infrastructure & Security: Proficient in designing and implementing robust data infrastructure solutions with a strong focus on data security, utilizing tools like Apache Ranger and Kerberos.
- Performance Tuning & Optimization: Skilled in performance tuning and optimization of big data environments, leveraging advanced techniques to enhance system efficiency and reduce latency.
- Backup & Recovery: Experienced in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity.
- Linux & Troubleshooting: Strong knowledge of Linux operating systems, with proven ability to troubleshoot and resolve complex technical issues, collaborating effectively with cross-functional teams.
- DevOps & Scripting: Proficient in scripting and automation using tools like Ansible, enabling seamless integration and automation of cluster operations. Experienced in infrastructure-as-code practices and observability tools such as Elastic.
- Agile & Collaboration: Strong understanding of Agile SAFe for teams, with the ability to work effectively in Agile environments and collaborate with cross-functional teams.
- ITSM Process & Tools: Knowledgeable in ITSM processes and tools such as ServiceNow.

Other Critical Requirements
- Automation and Scripting: Proficiency in automation tools and programming languages such as Ansible and Python to streamline operations and improve efficiency.
- Analytical and Problem-Solving Skills: Strong analytical and problem-solving abilities to address complex technical challenges in a dynamic enterprise environment.
- Communication Skills: Exceptional written and oral communication skills, with the ability to clearly articulate technical and functional issues, conclusions, and recommendations to stakeholders at all levels.
- 24x7 Support: Ability to work in a 24x7 rotational shift to support Hadoop platforms and ensure high availability.
- Stakeholder Management: Prior experience in effectively managing both onshore and offshore stakeholders, ensuring alignment and collaboration across teams.
- Business Presentations: Skilled in creating and delivering impactful business presentations to communicate key insights and recommendations.
- Collaboration and Independence: Demonstrated ability to work independently as well as collaboratively within a team environment, ensuring successful project delivery in a complex enterprise setting.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!

Posted 1 month ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Requirements Description and Requirements Position Summary: A Big Data (Hadoop) Administrator responsible for supporting the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, automation, and scripting (e.g . Ansible , Azure DevOps, Shell, Python ) to streamline operations and improve efficiency is highly valued. Job Responsibilities: Assist in the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Perform routine monitoring, troubleshooting, and issue resolution to ensure the stability and performance of Hadoop clusters. Develop and maintain scripts (e.g., Python, Bash, Ansible) to automate operational tasks and improve system efficiency. Collaborate with cross-functional teams, including application development, infrastructure, and operations, to support business requirements and implement new features. Implement and follow best practices for cluster security, including user access management and integration with tools like Apache Ranger and Kerberos. Support backup, recovery, and disaster recovery processes to ensure data availability and business continuity. Conduct performance tuning and optimization of Hadoop clusters to enhance system efficiency and reduce latency. Analyze logs and use tools like Splunk to debug and resolve production issues. Document operational processes, maintenance procedures, and troubleshooting steps to ensure knowledge sharing and consistency. Stay updated on emerging technologies and contribute to the adoption of new tools and practices to improve cluster management. Education: Bachelor’s degree in computer science, Information Systems, or another related field with 7+ years of IT and Infrastructure engineering work experience. Experience: 7+ Years Total IT experience & 4+ Years relevant experience in Big Data database. Technical Skills: Big Data Platform Management : Big Data Platform Management: Knowledge in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr , Apache Hive, Apache Kafka, Apache NiFi , Apache Ranger, Apache Spark, as well as JanusGraph and IBM BigSQL . Automation and Scripting : Expertise in automation tools and scripting languages such as Ansible, Python, and Bash to streamline operational tasks and improve efficiency. DevOps Practices : Proficiency in DevOps tools and methodologies, including CI/CD pipelines, version control systems (e.g., Git), and infrastructure-as-code practices. Monitoring and Troubleshooting : Experience with monitoring and observability tools such as Splunk, Elastic Stack, or Prometheus to identify and resolve system issues. Linux Administration : Solid knowledge of Linux operating systems, including system administration, troubleshooting, and performance tuning. Backup and Recovery : Familiarity with implementing and managing backup and recovery processes to ensure data availability and business continuity. Security and Access Management : Understanding of security best practices, including user access management and integration with tools like Kerberos. Agile Methodologies : Knowledge of Agile practices and frameworks, such as SAFe , with experience working in Agile environments. ITSM Tools : Familiarity with ITSM processes and tools like ServiceNow for incident and change management. 
About MetLife

Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East.

Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!

Posted 2 months ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description and Requirements

Position Summary: A Big Data (Hadoop) Administrator responsible for supporting the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, automation, and scripting (e.g., Ansible, Azure DevOps, Shell, Python) to streamline operations and improve efficiency is highly valued.

Job Responsibilities:
- Assist in the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux.
- Perform routine monitoring, troubleshooting, and issue resolution to ensure the stability and performance of Hadoop clusters.
- Develop and maintain scripts (e.g., Python, Bash, Ansible) to automate operational tasks and improve system efficiency.
- Collaborate with cross-functional teams, including application development, infrastructure, and operations, to support business requirements and implement new features.
- Implement and follow best practices for cluster security, including user access management and integration with tools like Apache Ranger and Kerberos.
- Support backup, recovery, and disaster recovery processes to ensure data availability and business continuity.
- Conduct performance tuning and optimization of Hadoop clusters to enhance system efficiency and reduce latency.
- Analyze logs and use tools like Splunk to debug and resolve production issues.
- Document operational processes, maintenance procedures, and troubleshooting steps to ensure knowledge sharing and consistency.
- Stay updated on emerging technologies and contribute to the adoption of new tools and practices to improve cluster management.

Education: Bachelor's degree in Computer Science, Information Systems, or another related field, with 7+ years of IT and infrastructure engineering work experience.

Experience: 7+ years total IT experience and 4+ years relevant experience in Big Data database administration.

Technical Skills:
- Big Data Platform Management: Knowledge in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM BigSQL.
- Automation and Scripting: Expertise in automation tools and scripting languages such as Ansible, Python, and Bash to streamline operational tasks and improve efficiency.
- DevOps Practices: Proficiency in DevOps tools and methodologies, including CI/CD pipelines, version control systems (e.g., Git), and infrastructure-as-code practices.
- Monitoring and Troubleshooting: Experience with monitoring and observability tools such as Splunk, Elastic Stack, or Prometheus to identify and resolve system issues.
- Linux Administration: Solid knowledge of Linux operating systems, including system administration, troubleshooting, and performance tuning.
- Backup and Recovery: Familiarity with implementing and managing backup and recovery processes to ensure data availability and business continuity.
- Security and Access Management: Understanding of security best practices, including user access management and integration with tools like Kerberos.
- Agile Methodologies: Knowledge of Agile practices and frameworks, such as SAFe, with experience working in Agile environments.
- ITSM Tools: Familiarity with ITSM processes and tools like ServiceNow for incident and change management.
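To make the log-analysis duty above concrete, here is a minimal Python sketch (an editorial example, not part of the listing) that tallies log4j-style ERROR/WARN/FATAL lines in a Hadoop service log; the log path in the usage comment and the level format are assumptions, and a real cluster would typically forward such logs to Splunk instead.

```python
#!/usr/bin/env python3
"""Minimal sketch: tally severity levels in a Hadoop service log."""
import re
import sys
from collections import Counter

# Log4j-style level token, e.g. "2024-05-01 10:00:00,123 ERROR ..."
LEVEL_RE = re.compile(r"\b(ERROR|WARN|FATAL)\b")

def tally(path):
    """Count ERROR/WARN/FATAL occurrences, tolerating bad bytes."""
    counts = Counter()
    with open(path, errors="replace") as fh:
        for line in fh:
            match = LEVEL_RE.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    # Usage (hypothetical path):
    #   python log_tally.py /var/log/hadoop-hdfs/hadoop-hdfs-namenode.log
    for level, n in tally(sys.argv[1]).most_common():
        print(f"{level}: {n}")
```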
About MetLife

Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East.

Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!

Posted 2 months ago

Apply

2 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description

What We Do

At Goldman Sachs, our Engineers don't just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low-latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering, which comprises our Technology Division and global strategists groups, is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here.

Who We Look For

Goldman Sachs Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile, and more. We look for creative collaborators who evolve, adapt to change, and thrive in a fast-paced global environment.

Roles and Responsibilities

An individual in this role is responsible for the design, development, deployment, and support of products and platforms that leverage Java-based technologies and enable large-scale event processing in engineering products at GS. The individual will engage in both server-side and front-end development as required to achieve the desired outcomes. Specific responsibilities include:
- Design component and integration architecture for large-scale web applications.
- Develop, test, and support features for globally deployed web apps.
- Follow best practices throughout the project lifecycle.
- Participate in team-wide design and code reviews.
- Keep abreast of emerging technical trends so their applicability to GS products can be determined.

Qualifications

Bachelor's degree (or equivalent or higher) in Computer Science, Information Technology, or Electronics and Communication. Overall 3–6 years of experience, with a minimum of 2 years developing Java-based applications.

Essential Skills (Technical)
- Strong programming skills in Java and Python, with proficiency in object-oriented design principles
- Experience with Java frameworks such as DropWizard, Spring, and Hibernate
- Familiarity with web development frameworks (Angular or React)
- Experience with testing frameworks (JUnit, TestNG, Cucumber, Mockito)
- Hands-on experience building stream-processing systems using Hadoop, Spark, and related technologies
- Familiarity with distributed storage systems like Cassandra, MongoDB, and JanusGraph
- Experience with messaging systems such as Kafka or RabbitMQ
- Experience with caching solutions like Hazelcast, Redis, or Memcached
- Knowledge of build tools like Maven or Gradle
- Familiarity with continuous integration and continuous deployment (CI/CD) pipelines, especially using Git
- Working knowledge of Unix/Linux
- Strong problem-solving skills and attention to detail

Soft Skills
- Strong communication skills with a track record of working and collaborating with global teams
- Ability to handle multiple ongoing assignments and to work independently, in addition to contributing as part of a highly collaborative and globally dispersed team
- Strong analytical skills with the ability to break down and communicate complex issues, ideas, and solutions
- Thorough knowledge of and experience in all phases of the SDLC

Additional Skills (Advantage)
- Working knowledge of enterprise database systems (Sybase or DB2)
- Programming in Perl, Python, and shell script
- Knowledge and experience in building conversational user interfaces enabled by AI

About Goldman Sachs

At Goldman Sachs, we commit our people, capital, and ideas to help our clients, shareholders, and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities, and investment management firm. Headquartered in New York, we maintain offices around the world.

We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings, and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers.

We're committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html

Please note that our firm has adopted a COVID-19 vaccination requirement for employees who work onsite at any of our U.S. locations to safeguard the health and well-being of all our employees and others who enter our U.S. offices. This role requires the employee to be able to work on-site. As a condition of employment, employees working on-site at any of our U.S. locations are required to be fully vaccinated for COVID-19, and to have either had COVID-19 or received a booster dose if eligible under Centers for Disease Control and Prevention (CDC) guidance, unless prohibited by applicable federal, state, or local law. Applicants who wish to request a medical or religious accommodation, or any other accommodation required under applicable law, can do so later in the process. Please note that accommodations are not guaranteed and are decided on a case-by-case basis.

© The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.
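As a concrete illustration of the messaging-systems experience this posting asks for, the following minimal Python sketch produces and consumes a few events using the third-party kafka-python package. It is an editorial example: the broker address and topic name are assumptions, and nothing here reflects Goldman Sachs systems.

```python
"""Minimal sketch: produce and consume events with Kafka.

Requires `pip install kafka-python` and a broker reachable at
the assumed address below (e.g. a local development instance).
"""
from kafka import KafkaConsumer, KafkaProducer

BROKER = "localhost:9092"   # assumed local dev broker
TOPIC = "demo-events"       # hypothetical topic name

# Produce three small events; send() is asynchronous and flush()
# blocks until the broker has acknowledged them.
producer = KafkaProducer(bootstrap_servers=BROKER)
for i in range(3):
    producer.send(TOPIC, f"event-{i}".encode("utf-8"))
producer.flush()

# Consume from the beginning of the topic, giving up after 5s of silence
# so the script terminates cleanly instead of polling forever.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,
)
for record in consumer:
    print(record.topic, record.offset, record.value.decode("utf-8"))
```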

Posted 2 months ago

Apply