14.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description and Requirements

Position Summary:
A highly skilled Big Data (Hadoop) Administrator responsible for the installation, configuration, engineering, and architecture of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, scripting, and infrastructure-as-code for automating and optimizing operations is highly desirable. Experience collaborating with cross-functional teams, including application development, infrastructure, and operations, is highly preferred.

Job Responsibilities:
Manages the design, distribution, performance, replication, security, availability, and access requirements for large and complex Big Data clusters.
Designs and develops the architecture and configurations to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, cluster changes, and compliance.
Identifies and resolves issues utilizing structured tools and techniques.
Provides technical assistance and mentoring to staff in all aspects of Hadoop cluster management; consults and advises application development teams on security, query optimization, and performance.
Writes scripts to automate routine cluster management tasks and documents maintenance processing flows per standards (see the sketch after this posting).
Implements industry best practices while performing Hadoop cluster administration tasks.
Works in an Agile model with a strong understanding of Agile concepts.
Collaborates with development teams to provide and implement new features.
Debugs production issues by analyzing logs directly and using tools like Splunk and Elastic.
Addresses organizational obstacles to enhance processes and workflows.
Adopts and learns new technologies based on demand and supports team members by coaching and assisting.

Education:
Bachelor's degree in Computer Science, Information Systems, or another related field with 14+ years of IT and Infrastructure engineering work experience.

Experience:
14+ years total IT experience and 10+ years relevant experience in Big Data databases.

Technical Skills:
Big Data Platform Management: Expertise in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM BigSQL.
Data Infrastructure & Security: Proficient in designing and implementing robust data infrastructure solutions with a strong focus on data security, utilizing tools like Apache Ranger and Kerberos.
Performance Tuning & Optimization: Skilled in performance tuning and optimization of big data environments, leveraging advanced techniques to enhance system efficiency and reduce latency.
Backup & Recovery: Experienced in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity.
Linux & Troubleshooting: Strong knowledge of Linux operating systems, with proven ability to troubleshoot and resolve complex technical issues, collaborating effectively with cross-functional teams.
DevOps & Scripting: Proficient in scripting and automation using tools like Ansible, enabling seamless integration and automation of cluster operations. Experienced in infrastructure-as-code practices and observability tools such as Elastic.
Agile & Collaboration: Strong understanding of Agile (SAFe for Teams), with the ability to work effectively in Agile environments and collaborate with cross-functional teams.
ITSM Processes & Tools: Knowledgeable in ITSM processes and tools such as ServiceNow.

Other Critical Requirements:
Automation and Scripting: Proficiency in automation tools and programming languages such as Ansible and Python to streamline operations and improve efficiency.
Analytical and Problem-Solving Skills: Strong analytical and problem-solving abilities to address complex technical challenges in a dynamic enterprise environment.
24x7 Support: Ability to work in a 24x7 rotational shift to support Hadoop platforms and ensure high availability.
Team Management and Leadership: Proven experience managing geographically distributed and culturally diverse teams, with strong leadership, coaching, and mentoring skills.
Communication Skills: Exceptional written and oral communication skills, with the ability to clearly articulate technical and functional issues, conclusions, and recommendations to stakeholders at all levels.
Stakeholder Management: Prior experience effectively managing both onshore and offshore stakeholders, ensuring alignment and collaboration across teams.
Business Presentations: Skilled in creating and delivering impactful business presentations to communicate key insights and recommendations.
Collaboration and Independence: Demonstrated ability to work independently as well as collaboratively within a team environment, ensuring successful project delivery in a complex enterprise setting.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
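The responsibilities above include writing scripts to automate routine cluster management tasks. A minimal Python sketch of one such task, polling service health through the Cloudera Manager REST API; the host, port, API version, credentials, cluster name, and the healthSummary field usage are illustrative assumptions, not details from the posting:

```python
import requests

CM_BASE = "https://cm-host.example.com:7183/api/v41"  # placeholder host and API version
CLUSTER = "prod-cdp"                                  # placeholder cluster name
AUTH = ("admin", "changeme")                          # use vaulted credentials in practice

def unhealthy_services():
    # Query the cluster's services and keep those whose health summary
    # is not GOOD (assumed field on the service objects returned by CM).
    resp = requests.get(f"{CM_BASE}/clusters/{CLUSTER}/services", auth=AUTH)
    resp.raise_for_status()
    return [
        (svc["name"], svc.get("healthSummary", "UNKNOWN"))
        for svc in resp.json().get("items", [])
        if svc.get("healthSummary") != "GOOD"
    ]

if __name__ == "__main__":
    for name, health in unhealthy_services():
        print(f"{name}: {health}")
```

A script like this would typically run from cron or an Ansible-scheduled job and feed an alerting channel rather than printing to stdout.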
Posted 1 day ago
8.0 - 13.0 years
0 - 1 Lacs
Gurugram
Work from Office
Design and build telemetry ingestion pipelines: architect a high-throughput telemetry collector using standard protocols and publish validated events into streaming platforms. Develop real-time detection pipelines; build REST APIs, an over-the-air (OTA) rule delivery system, and a threat graph engine.

Required candidate profile: Experience implementing monitoring, observability, and resilience engineering practices. Leadership skills, including mentoring junior engineers and driving backend engineering practices. Excellent communication skills for technical documentation, plus strong backend experience.
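A minimal sketch of the "validate and publish into a streaming platform" step this posting describes, using Python and the kafka-python client; the broker address, topic name, and event schema are assumptions for illustration:

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

REQUIRED_FIELDS = {"device_id", "ts", "metric", "value"}  # assumed event schema

producer = KafkaProducer(
    bootstrap_servers=["broker.example.com:9092"],        # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish(event: dict, topic: str = "telemetry.validated") -> None:
    # Reject malformed events before they reach the stream, so downstream
    # detection pipelines can assume a consistent shape.
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        raise ValueError(f"event missing fields: {sorted(missing)}")
    producer.send(topic, event)

publish({"device_id": "dev-42", "ts": 1700000000, "metric": "cpu", "value": 0.73})
producer.flush()
```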
Posted 1 week ago
12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description

What We Do
At Goldman Sachs, our Engineers don't just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low-latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering, which comprises our Technology Division and global strategists groups, is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here.

Who We Look For
Goldman Sachs Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile and more. We look for creative collaborators who evolve, adapt to change, and thrive in a fast-paced global environment.

Roles and Responsibilities
An individual in this role is responsible for the design, development, deployment, and support of products and platforms that leverage Java-based technologies and enable large-scale event processing in engineering products at GS. The individual will engage in both server-side and front-end development as required to achieve the desired outcomes. Specific responsibilities include:
Design component as well as integration architecture for large-scale web applications.
Develop, test, and support features for globally deployed web apps.
Follow best practices throughout the project lifecycle.
Participate in team-wide design and code reviews.
Keep abreast of emerging technical trends so applicability to GS products can be determined.

Qualification
Bachelor's degree (or equivalent or higher) in Computer Science, Information Technology, or Electronics and Communication. Overall, 7–12 years of experience with a minimum of 5 years in developing Java-based applications.

Essential Skills

Technical
Strong programming skills in Java and Python with proficiency in object-oriented design principles.
Experience with Java frameworks such as DropWizard, Spring, and Hibernate.
Familiarity with web development frameworks (Angular or React).
Experience with testing frameworks (JUnit, TestNG, Cucumber, Mockito).
Hands-on experience building stream-processing systems using Hadoop, Spark, and related technologies (see the sketch after this posting).
Familiarity with distributed storage systems like Cassandra, MongoDB, and JanusGraph.
Experience with various messaging systems, such as Kafka or RabbitMQ.
Experience with caching solutions like Hazelcast, Redis, or Memcached.
Knowledge of build tools like Maven or Gradle.
Familiarity with continuous integration and continuous deployment (CI/CD) pipelines, especially using Git.
Working knowledge of Unix/Linux.
Strong problem-solving skills and attention to detail.

Soft Skills
Strong communication skills with a track record of working and collaborating with global teams.
Ability to handle multiple ongoing assignments and to work independently in addition to contributing as part of a highly collaborative and globally dispersed team.
Strong analytical skills with the ability to break down and communicate complex issues, ideas, and solutions.
Thorough knowledge and experience in all phases of the SDLC.

Additional Skills (Advantage)
Working knowledge of enterprise database systems (Sybase or DB2).
Programming in Perl, Python, and shell script.
Knowledge and experience in building conversational user interfaces enabled by AI.

About Goldman Sachs
At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We're committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html

Please note that our firm has adopted a COVID-19 vaccination requirement for employees who work onsite at any of our U.S. locations to safeguard the health and well-being of all our employees and others who enter our U.S. offices. This role requires the employee to be able to work on-site. As a condition of employment, employees working on-site at any of our U.S. locations are required to be fully vaccinated for COVID-19, and to have either had COVID-19 or received a booster dose if eligible under Centers for Disease Control and Prevention (CDC) guidance, unless prohibited by applicable federal, state, or local law. Applicants who wish to request a medical or religious accommodation, or any other accommodation required under applicable law, can do so later in the process. Please note that accommodations are not guaranteed and are decided on a case-by-case basis.

© The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.
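A minimal sketch of the stream-processing pattern this role references, Spark Structured Streaming reading from Kafka in Python; the broker address, topic name, and aggregation are illustrative assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Assumes the spark-sql-kafka connector package is available on the classpath.
spark = SparkSession.builder.appName("event-stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker.example.com:9092")  # placeholder
    .option("subscribe", "events")                                 # placeholder topic
    .load()
    .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")
)

# Maintain a running count of events per key and print it to the console sink.
query = (
    events.groupBy(col("key")).count()
    .writeStream.outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```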
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Location: In-Person (sftwtrs.ai Lab)
Experience Level: Early Career / 1–3 years

About sftwtrs.ai
sftwtrs.ai is a leading AI lab focused on security automation, adversarial machine learning, and scalable AI-driven solutions for enterprise clients. Under the guidance of our Principal Scientist, we combine cutting-edge research with production-grade development to deliver next-generation AI products in cybersecurity and related domains.

Role Overview
As a Research Engineer I, you will work closely with our Principal Scientist and Senior Research Engineers to ideate, prototype, and implement AI/ML models and pipelines. This role bridges research and software development: you'll both explore novel algorithms (especially in adversarial ML and security automation) and translate successful prototypes into robust, maintainable code. This position is ideal for someone who is passionate about pushing the boundaries of AI research while also possessing strong software engineering skills.

Key Responsibilities

Research & Prototyping
Dive into state-of-the-art AI/ML literature (particularly adversarial methods, anomaly detection, and automation in security contexts).
Rapidly prototype novel model architectures, training schemes, and evaluation pipelines.
Design experiments, run benchmarks, and analyze results to validate research hypotheses.

Software Development & Integration
Collaborate with DevOps and MLOps teams to containerize research prototypes (e.g., Docker, Kubernetes).
Develop and maintain production-quality codebases in Python (TensorFlow, PyTorch, scikit-learn, etc.).
Implement data pipelines for training and inference: data ingestion, preprocessing, feature extraction, and serving.

Collaboration & Documentation
Work closely with the Principal Scientist and cross-functional stakeholders (DevOps, Security Analysts, QA) to align on research objectives and engineering requirements.
Author clear, concise documentation: experiment summaries, model design notes, code review comments, and API specifications.
Participate in regular code reviews, design discussions, and sprint planning sessions.

Model Deployment & Monitoring
Assist in deploying models to staging or production environments; integrate with internal tooling (e.g., MLflow, Kubeflow, or a custom MLOps stack).
Implement automated model-monitoring scripts to track performance drift, data quality, and security compliance metrics.
Troubleshoot deployment issues; optimize inference pipelines for latency and throughput.

Continuous Learning & Contribution
Stay current with AI/ML trends; present findings to the team and propose opportunities for new research directions.
Contribute to open-source libraries or internal frameworks as needed (e.g., adding new modules to our adversarial-ML toolkit).
Mentor interns or junior engineers on machine learning best practices and coding standards.

Qualifications

Education: Bachelor's or Master's degree in Computer Science, Electrical Engineering, Data Science, or a closely related field.

Research Experience:
1–3 years of hands-on experience in AI/ML research or equivalent internships.
Familiarity with adversarial machine learning concepts (evasion attacks, poisoning attacks, adversarial training; see the sketch after this posting).
Exposure to security-related ML tasks (e.g., anomaly detection in logs, malware classification using neural networks) is a strong plus.

Development Skills:
Proficient in Python, with solid experience using at least one major deep-learning framework (TensorFlow 2.x, PyTorch).
Demonstrated ability to write clean, modular, and well-documented code (PEP 8 compliant).
Experience building data pipelines (using pandas, Apache Beam, or equivalent) and integrating with RESTful APIs.

Software Engineering Practices:
Familiarity with version control (Git), CI/CD pipelines, and containerization (Docker).
Comfortable writing unit tests (pytest or unittest) and conducting code reviews.
Understanding of cloud services (AWS, GCP, or Azure) for training and serving models.

Analytical & Collaborative Skills:
Strong problem-solving mindset, attention to detail, and ability to work under tight deadlines.
Excellent written and verbal communication skills; able to present technical concepts clearly to both research and engineering audiences.
Demonstrated ability to collaborate effectively in a small, agile team.

Preferred Skills (Not Mandatory)
Experience with MLOps tools (MLflow, Kubeflow, or TensorFlow Extended).
Hands-on knowledge of graph databases (e.g., JanusGraph, Neo4j) or NLP techniques (transformer models, embeddings).
Familiarity with security compliance standards (HIPAA, GDPR) and secure software development practices.
Exposure to Rust or Go for high-performance inference code.
Contributions to open-source AI or security automation projects.

Why Join Us?
Cutting-Edge Research & Production Impact: Work on adversarial ML and security-automation projects that go from concept to real-world deployment.
Hands-On Mentorship: Collaborate directly with our Principal Scientist and Senior Engineers, learning best practices in both research methodology and production engineering.
Innovative Environment: Join a lean, highly specialized team where your contributions are immediately visible and valued.
Professional Growth: Access to conferences, lab resources, and continuous learning opportunities in AI, cybersecurity, and software development.
Competitive Compensation & Benefits: Attractive salary, health insurance, and opportunities for performance-based bonuses.

How to Apply
Please send a résumé/CV, a brief cover letter outlining relevant AI/ML projects, and any GitHub or portfolio links to careers@sftwtrs.ai with the subject line "RE: Research Engineer I Application."

sftwtrs.ai is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
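For context on the evasion attacks and adversarial training named in the qualifications, a minimal PyTorch sketch of the Fast Gradient Sign Method, the textbook evasion attack; the model, inputs, labels, and epsilon value are assumed to be supplied by the caller:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module,
                x: torch.Tensor,
                y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft an evasion example by stepping inputs along the sign of the
    loss gradient (Goodfellow et al.'s FGSM). Assumes inputs scaled to [0, 1]."""
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()  # move against the model
        x_adv = x_adv.clamp(0.0, 1.0)                # keep pixels in valid range
    return x_adv.detach()
```

Adversarial training, also mentioned above, typically folds examples like these back into each training batch so the model learns to resist them.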
Posted 1 week ago
17.0 - 19.0 years
0 Lacs
Andhra Pradesh
On-site
Software Engineering Associate Director - HIH - Evernorth

About Evernorth
Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Position Summary:
The Software Development Associate Director provides hands-on leadership, management, and thought leadership for a delivery organization enabling Cigna's Technology teams. This individual will lead a team based in our Hyderabad Innovation Hub (HIH) to deliver innovative solutions supporting multiple business and technology domains within Cigna, including the Sales & Underwriting, Producer, Service Operations, and Pharmacy business lines, as well as testing and DevOps enablement. The focus of the team is to build innovative go-to-market solutions enabling the business while modernizing our existing asset base to support business growth. The Technology strategy is aligned to our business strategy, and the candidate will not only influence technology direction but also help establish our team by recruiting and mentoring employees and vendor resources. This is a hands-on position with visibility to the highest levels of the Cigna Technology team. This leader will focus on enabling innovation using the latest technologies and development techniques, and will drive the rapid build-out of a scalable delivery organization that aligns with all areas of the Technology team. The ideal candidate will be able to attract and develop talent in a highly dynamic environment.

Job Description & Responsibilities:
Provide leadership, vision, and design direction for the quality and development of the US Medical and Health Services Technology teams based at the Hyderabad Innovation Hub (HIH).
Work in close coordination with leaders and teams based in the United States, as well as contractors employed by the US Medical and Health Services Technology team who are based both within and outside of the United States, to deliver products and capabilities in support of Cigna's business lines.
Provide leadership to HIH leaders and teams, ensuring the team meets the following objectives:
Design, configuration, and implementation of application design/development and quality engineering within the supported technologies and products.
A hands-on people manager with experience leading agile teams of highly talented technology professionals developing large solutions and internal-facing applications, working closely with developers, quality engineers, technical project managers, principal engineers, and business stakeholders to ensure that application solutions meet business and customer requirements.
A servant-leader mentality and a history of creating an inclusive environment, fostering diverse views and approaches from the team, and coaching and mentoring them to thrive in a dynamic workplace.
A history of embracing and incubating emerging technology and open-source products.
A passion for building highly resilient, scalable, and available platforms, rich reusable foundational capabilities, and a seamless developer experience, while focusing on strategic vision and technology roadmap delivery in an MVP/iterative, fast-paced approach.
Accountable for driving timely decisions while influencing engineering and development delivery teams to meet project timelines while balancing the destination state.
Ensure engineering solutions align with the Technology strategy and support the application's requirements.
Plan and implement procedures that will maximize engineering and operating efficiency for application integration technologies.
Identify and drive process improvement opportunities.
Proactively monitor and manage the design of supported assets, assuring performance, availability, security, and capacity.
Maximize the efficiency (operational, performance, and cost) of the application assets.

Experience Required:
17 to 19 years of IT and business/industry or equivalent experience preferred, with at least 5 years in a leadership role responsible for the delivery of large-scale projects and programs.
Leadership, cross-cultural communication, and familiarity with a wide range of technologies and stakeholders.
Strong emotional intelligence with the ability to foster collaboration across geographically dispersed teams.

Experience Desired:
Recognized leader with a proven track record of delivering software engineering initiatives and cross-IT/business initiatives.
Proven experience leading and managing technical teams, with a passion for developing talent within the team.
Experience with vendor management in an onshore/offshore model.
Experience in Healthcare, Pharmacy, and/or Underwriting systems.
Experience with AWS.

Education and Training Required:
B.S. degree in Computer Science, Information Systems, or another related degree; industry certifications such as AWS Solutions Architect, PMP, Scrum Master, or Six Sigma Green Belt are also ideal.

Primary Skills:
Familiarity with most of the following application development technologies: Python, RESTful services, React, Angular, Postgres, and MySQL (relational database management systems).
Familiarity with most of the following data engineering technologies: Databricks, Spark, PySpark, SQL, Teradata, and multi-cloud environments.
Familiarity with most of the following cloud and emerging technologies: AWS, LLMs (OpenAI, Anthropic), vector databases (Pinecone, Milvus; see the sketch after this list), graph databases (Neo4j, JanusGraph, Neptune), prompt engineering, and fine-tuning AI models.
Familiarity with the enterprise software development lifecycle, including production reviews and ticket resolution, navigating freeze/stability periods effectively, total-cost-of-ownership reporting, and updating applications to align with evolving security and cloud standards.
Familiarity with agile methodology, including SCRUM team leadership or Scaled Agile (SAFe).
Familiarity with modern delivery practices such as continuous integration, behavior/test-driven development, and specification by example.
Deep people and matrix management skills, with a heavy emphasis on coaching and mentoring of less senior staff, and a strong ability to influence VP-level leaders.
Proven ability to resolve issues and mitigate risks that could undermine the delivery of critical initiatives.
Strong written and verbal communication skills with the ability to interact with all levels of the organization.
Strong influencing/negotiation skills.
Strong interpersonal/relationship management skills.
Strong time and project management skills.
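The primary skills above mention vector databases (Pinecone, Milvus). As a hedged conceptual sketch only, not a requirement of this posting, the core operation such systems serve is nearest-neighbor search over embeddings, shown here in plain NumPy with made-up vectors:

```python
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    # Cosine similarity between one query embedding and a matrix of document
    # embeddings; vector databases like Pinecone or Milvus serve this lookup
    # at scale with approximate-nearest-neighbor indexes.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(scores)[::-1][:k]  # indices of the k most similar docs

# Toy usage with random 8-dimensional embeddings.
rng = np.random.default_rng(0)
print(top_k(rng.normal(size=8), rng.normal(size=(100, 8))))
```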
About Evernorth Health Services Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
Posted 1 week ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description and Requirements

Position Summary:
A skilled Big Data (Hadoop) Administrator responsible for the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Proficiency in DevOps practices, scripting, and infrastructure-as-code for automating routine tasks and improving operational efficiency is desirable. Experience working with cross-functional teams, including application development, infrastructure, and operations, is preferred.

Job Responsibilities:
Manages the design, distribution, performance, replication, security, availability, and access requirements for large and complex Big Data clusters.
Designs and develops the architecture and configurations to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, cluster changes, and compliance.
Identifies and resolves issues utilizing structured tools and techniques.
Provides technical assistance and mentoring to staff in all aspects of Hadoop cluster management; consults and advises application development teams on security, query optimization, and performance.
Writes scripts to automate routine cluster management tasks and documents maintenance processing flows per standards.
Implements industry best practices while performing Hadoop cluster administration tasks.
Works in an Agile model with a strong understanding of Agile concepts.
Collaborates with development teams to provide and implement new features.
Debugs production issues by analyzing logs directly and using tools like Splunk and Elastic.
Addresses organizational obstacles to enhance processes and workflows.
Adopts and learns new technologies based on demand and supports team members by coaching and assisting.

Education:
Bachelor's degree in Computer Science, Information Systems, or another related field with 10+ years of IT and Infrastructure engineering work experience.

Experience:
10+ years total IT experience and 7+ years relevant experience in Big Data databases.

Technical Skills:
Big Data Platform Management: Expertise in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM BigSQL.
Data Infrastructure & Security: Proficient in designing and implementing robust data infrastructure solutions with a strong focus on data security, utilizing tools like Apache Ranger and Kerberos.
Performance Tuning & Optimization: Skilled in performance tuning and optimization of big data environments, leveraging advanced techniques to enhance system efficiency and reduce latency.
Backup & Recovery: Experienced in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity.
Linux & Troubleshooting: Strong knowledge of Linux operating systems, with proven ability to troubleshoot and resolve complex technical issues, collaborating effectively with cross-functional teams.
DevOps & Scripting: Proficient in scripting and automation using tools like Ansible, enabling seamless integration and automation of cluster operations. Experienced in infrastructure-as-code practices and observability tools such as Elastic.
Agile & Collaboration: Strong understanding of Agile (SAFe for Teams), with the ability to work effectively in Agile environments and collaborate with cross-functional teams.
ITSM Processes & Tools: Knowledgeable in ITSM processes and tools such as ServiceNow.

Other Critical Requirements:
Automation and Scripting: Proficiency in automation tools and programming languages such as Ansible and Python to streamline operations and improve efficiency.
Analytical and Problem-Solving Skills: Strong analytical and problem-solving abilities to address complex technical challenges in a dynamic enterprise environment.
Communication Skills: Exceptional written and oral communication skills, with the ability to clearly articulate technical and functional issues, conclusions, and recommendations to stakeholders at all levels.
24x7 Support: Ability to work in a 24x7 rotational shift to support Hadoop platforms and ensure high availability.
Stakeholder Management: Prior experience effectively managing both onshore and offshore stakeholders, ensuring alignment and collaboration across teams.
Business Presentations: Skilled in creating and delivering impactful business presentations to communicate key insights and recommendations.
Collaboration and Independence: Demonstrated ability to work independently as well as collaboratively within a team environment, ensuring successful project delivery in a complex enterprise setting.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description and Requirements

Position Summary:
A Big Data (Hadoop) Administrator responsible for supporting the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, automation, and scripting (e.g., Ansible, Azure DevOps, Shell, Python) to streamline operations and improve efficiency is highly valued.

Job Responsibilities:
Assist in the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux.
Perform routine monitoring, troubleshooting, and issue resolution to ensure the stability and performance of Hadoop clusters (see the sketch after this posting).
Develop and maintain scripts (e.g., Python, Bash, Ansible) to automate operational tasks and improve system efficiency.
Collaborate with cross-functional teams, including application development, infrastructure, and operations, to support business requirements and implement new features.
Implement and follow best practices for cluster security, including user access management and integration with tools like Apache Ranger and Kerberos.
Support backup, recovery, and disaster recovery processes to ensure data availability and business continuity.
Conduct performance tuning and optimization of Hadoop clusters to enhance system efficiency and reduce latency.
Analyze logs and use tools like Splunk to debug and resolve production issues.
Document operational processes, maintenance procedures, and troubleshooting steps to ensure knowledge sharing and consistency.
Stay updated on emerging technologies and contribute to the adoption of new tools and practices to improve cluster management.

Education:
Bachelor's degree in Computer Science, Information Systems, or another related field with 7+ years of IT and Infrastructure engineering work experience.

Experience:
7+ years total IT experience and 4+ years relevant experience in Big Data databases.

Technical Skills:
Big Data Platform Management: Knowledge in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM BigSQL.
Automation and Scripting: Expertise in automation tools and scripting languages such as Ansible, Python, and Bash to streamline operational tasks and improve efficiency.
DevOps Practices: Proficiency in DevOps tools and methodologies, including CI/CD pipelines, version control systems (e.g., Git), and infrastructure-as-code practices.
Monitoring and Troubleshooting: Experience with monitoring and observability tools such as Splunk, Elastic Stack, or Prometheus to identify and resolve system issues.
Linux Administration: Solid knowledge of Linux operating systems, including system administration, troubleshooting, and performance tuning.
Backup and Recovery: Familiarity with implementing and managing backup and recovery processes to ensure data availability and business continuity.
Security and Access Management: Understanding of security best practices, including user access management and integration with tools like Kerberos.
Agile Methodologies: Knowledge of Agile practices and frameworks, such as SAFe, with experience working in Agile environments.
ITSM Tools: Familiarity with ITSM processes and tools like ServiceNow for incident and change management.
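A minimal sketch of the routine monitoring described above, assuming an edge node where the hdfs CLI is installed and configured; the 80% threshold is illustrative, not a standard from the posting:

```python
import re
import subprocess

def dfs_used_percent() -> float:
    # Parse the cluster-wide "DFS Used%" figure from `hdfs dfsadmin -report`.
    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"DFS Used%:\s*([\d.]+)%", report)
    if match is None:
        raise RuntimeError("could not find 'DFS Used%' in dfsadmin report")
    return float(match.group(1))

if __name__ == "__main__":
    used = dfs_used_percent()
    # Illustrative alert threshold; in practice this would page or open a ticket.
    print(f"DFS used: {used:.1f}%" + (" *** ALERT ***" if used > 80.0 else ""))
```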
About MetLife
Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
Posted 2 weeks ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description and Requirements

Position Summary:
A Big Data (Hadoop) Administrator responsible for supporting the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, automation, and scripting (e.g., Ansible, Azure DevOps, Shell, Python) to streamline operations and improve efficiency is highly valued.

Job Responsibilities:
Assist in the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux.
Perform routine monitoring, troubleshooting, and issue resolution to ensure the stability and performance of Hadoop clusters.
Develop and maintain scripts (e.g., Python, Bash, Ansible) to automate operational tasks and improve system efficiency.
Collaborate with cross-functional teams, including application development, infrastructure, and operations, to support business requirements and implement new features.
Implement and follow best practices for cluster security, including user access management and integration with tools like Apache Ranger and Kerberos.
Support backup, recovery, and disaster recovery processes to ensure data availability and business continuity.
Conduct performance tuning and optimization of Hadoop clusters to enhance system efficiency and reduce latency.
Analyze logs and use tools like Splunk to debug and resolve production issues.
Document operational processes, maintenance procedures, and troubleshooting steps to ensure knowledge sharing and consistency.
Stay updated on emerging technologies and contribute to the adoption of new tools and practices to improve cluster management.

Education:
Bachelor's degree in Computer Science, Information Systems, or another related field with 7+ years of IT and Infrastructure engineering work experience.

Experience:
7+ years total IT experience and 4+ years relevant experience in Big Data databases.

Technical Skills:
Big Data Platform Management: Knowledge in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM BigSQL.
Automation and Scripting: Expertise in automation tools and scripting languages such as Ansible, Python, and Bash to streamline operational tasks and improve efficiency.
DevOps Practices: Proficiency in DevOps tools and methodologies, including CI/CD pipelines, version control systems (e.g., Git), and infrastructure-as-code practices.
Monitoring and Troubleshooting: Experience with monitoring and observability tools such as Splunk, Elastic Stack, or Prometheus to identify and resolve system issues.
Linux Administration: Solid knowledge of Linux operating systems, including system administration, troubleshooting, and performance tuning.
Backup and Recovery: Familiarity with implementing and managing backup and recovery processes to ensure data availability and business continuity.
Security and Access Management: Understanding of security best practices, including user access management and integration with tools like Kerberos.
Agile Methodologies: Knowledge of Agile practices and frameworks, such as SAFe, with experience working in Agile environments.
ITSM Tools: Familiarity with ITSM processes and tools like ServiceNow for incident and change management.
About MetLife
Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
Posted 2 weeks ago
2 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description

What We Do
At Goldman Sachs, our Engineers don't just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low-latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering, which comprises our Technology Division and global strategists groups, is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here.

Who We Look For
Goldman Sachs Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile and more. We look for creative collaborators who evolve, adapt to change, and thrive in a fast-paced global environment.

Roles and Responsibilities
An individual in this role is responsible for the design, development, deployment, and support of products and platforms that leverage Java-based technologies and enable large-scale event processing in engineering products at GS. The individual will engage in both server-side and front-end development as required to achieve the desired outcomes. Specific responsibilities include:
Design component as well as integration architecture for large-scale web applications.
Develop, test, and support features for globally deployed web apps.
Follow best practices throughout the project lifecycle.
Participate in team-wide design and code reviews.
Keep abreast of emerging technical trends so applicability to GS products can be determined.

Qualification
Bachelor's degree (or equivalent or higher) in Computer Science, Information Technology, or Electronics and Communication. Overall, 3–6 years of experience with a minimum of 2 years in developing Java-based applications.

Essential Skills

Technical
Strong programming skills in Java and Python with proficiency in object-oriented design principles.
Experience with Java frameworks such as DropWizard, Spring, and Hibernate.
Familiarity with web development frameworks (Angular or React).
Experience with testing frameworks (JUnit, TestNG, Cucumber, Mockito).
Hands-on experience building stream-processing systems using Hadoop, Spark, and related technologies.
Familiarity with distributed storage systems like Cassandra, MongoDB, and JanusGraph.
Experience with various messaging systems, such as Kafka or RabbitMQ.
Experience with caching solutions like Hazelcast, Redis, or Memcached.
Knowledge of build tools like Maven or Gradle.
Familiarity with continuous integration and continuous deployment (CI/CD) pipelines, especially using Git.
Working knowledge of Unix/Linux.
Strong problem-solving skills and attention to detail.

Soft Skills
Strong communication skills with a track record of working and collaborating with global teams.
Ability to handle multiple ongoing assignments and to work independently in addition to contributing as part of a highly collaborative and globally dispersed team.
Strong analytical skills with the ability to break down and communicate complex issues, ideas, and solutions.
Thorough knowledge and experience in all phases of the SDLC.

Additional Skills (Advantage)
Working knowledge of enterprise database systems (Sybase or DB2).
Programming in Perl, Python, and shell script.
Knowledge and experience in building conversational user interfaces enabled by AI.

About Goldman Sachs
At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We're committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html

Please note that our firm has adopted a COVID-19 vaccination requirement for employees who work onsite at any of our U.S. locations to safeguard the health and well-being of all our employees and others who enter our U.S. offices. This role requires the employee to be able to work on-site. As a condition of employment, employees working on-site at any of our U.S. locations are required to be fully vaccinated for COVID-19, and to have either had COVID-19 or received a booster dose if eligible under Centers for Disease Control and Prevention (CDC) guidance, unless prohibited by applicable federal, state, or local law. Applicants who wish to request a medical or religious accommodation, or any other accommodation required under applicable law, can do so later in the process. Please note that accommodations are not guaranteed and are decided on a case-by-case basis.

© The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.
Posted 4 weeks ago
6 - 10 years
8 - 12 Lacs
Bengaluru
Work from Office
PrX - Java Developer

ROLE AND RESPONSIBILITIES
An individual in this role is responsible for the design, development, deployment, and support of products and platforms that leverage Java-based technologies and enable large-scale event processing in engineering products in ETO. The individual will engage in both server-side and front-end development as required to achieve the desired outcomes. Specific responsibilities include:
Design component as well as integration architecture for large-scale web applications.
Develop, test, and support features for globally deployed web apps.
Follow best practices throughout the project lifecycle.
Participate in team-wide design and code reviews.
Keep abreast of emerging technical trends so applicability to ETO products can be determined.

QUALIFICATION
Bachelor's degree (or equivalent or higher) in Computer Science, Information Technology, or Electronics and Communication. Overall 4 - 8 years of experience with a minimum of 2 years in developing Java-based applications.

ESSENTIAL SKILLS

1. Technical
Strong programming skills in Java and Python with proficiency in object-oriented design principles.
Experience with Java frameworks such as DropWizard, Spring, and Hibernate.
Familiarity with web development frameworks (Angular or React).
Experience with testing frameworks (JUnit, TestNG, Cucumber, Mockito).
Hands-on experience building stream-processing systems using Hadoop, Spark, and related technologies.
Familiarity with distributed storage systems like Cassandra, MongoDB, and JanusGraph.
Experience with various messaging systems, such as Kafka or RabbitMQ.
Experience with caching solutions like Hazelcast, Redis, or Memcached.
Knowledge of build tools like Maven or Gradle.
Familiarity with continuous integration and continuous deployment (CI/CD) pipelines, especially using Git.
Working knowledge of Unix/Linux.
Strong problem-solving skills and attention to detail.

2. Soft Skills
Strong communication skills with a track record of working and collaborating with global teams.
Ability to handle multiple ongoing assignments and to work independently in addition to contributing as part of a highly collaborative and globally dispersed team.
Strong analytical skills with the ability to break down and communicate complex issues, ideas, and solutions.
Thorough knowledge and experience in all phases of the SDLC.

3. Additional Skills (Advantage)
Working knowledge of enterprise database systems (Sybase or DB2).
Programming in Perl, Python, and shell script.
Knowledge and experience in building conversational user interfaces enabled by AI.

JD Link :
Posted 3 months ago
2 - 7 years
27 - 32 Lacs
Hyderabad
Work from Office
Goldman Sachs Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile and more. We look for creative collaborators who evolve, adapt to change, and thrive in a fast-paced global environment.

BUSINESS UNIT OVERVIEW
Enterprise Technology Operations (ETO) is a business unit within Core Engineering focused on running scalable production management services with a mandate of operational excellence and operational risk reduction, achieved through large-scale automation, best-in-class engineering, and the application of data science and machine learning. The Production Runtime Experience (PRX) team in ETO applies software engineering and machine learning to production management services, processes, and activities to streamline monitoring, alerting, automation, and incident management. The team also builds and operates products for order management, disaster recovery testing, and developer onboarding.

TEAM OVERVIEW
ETO's Production Runtime Experience team uses big data processing, machine learning, real-time streaming analytics, and simplified visualization/interaction to give computer systems the ability to learn and automate many of the tasks that humans would normally perform manually to run the bank's systems. This is achieved through sophisticated engineering, autonomics, and machine learning that statistically and inductively help understand the behaviour of these complex systems.

ROLE AND RESPONSIBILITIES
An individual in this role is responsible for the design, development, deployment, and support of products and platforms that leverage Java-based technologies and enable large-scale event processing in engineering products in ETO. The individual will engage in both server-side and front-end development as required to achieve the desired outcomes. Specific responsibilities include:
Design component as well as integration architecture for large-scale web applications.
Develop, test, and support features for globally deployed web apps.
Follow best practices throughout the project lifecycle.
Participate in team-wide design and code reviews.
Keep abreast of emerging technical trends so applicability to ETO products can be determined.

QUALIFICATION
Bachelor's degree (or equivalent or higher) in Computer Science, Information Technology, or Electronics and Communication. Overall 4 - 8 years of experience with a minimum of 2 years in developing Java-based applications.

ESSENTIAL SKILLS

1. Technical
Strong programming skills in Java and Python with proficiency in object-oriented design principles.
Experience with Java frameworks such as DropWizard, Spring, and Hibernate.
Familiarity with web development frameworks (Angular or React).
Experience with testing frameworks (JUnit, TestNG, Cucumber, Mockito).
Hands-on experience building stream-processing systems using Hadoop, Spark, and related technologies.
Familiarity with distributed storage systems like Cassandra, MongoDB, and JanusGraph.
Experience with various messaging systems, such as Kafka or RabbitMQ.
Experience with caching solutions like Hazelcast, Redis, or Memcached.
Knowledge of build tools like Maven or Gradle.
Familiarity with continuous integration and continuous deployment (CI/CD) pipelines, especially using Git.
Working knowledge of Unix/Linux.
Strong problem-solving skills and attention to detail.
2. Soft Skills
Strong communication skills with a track record of working and collaborating with global teams.
Ability to handle multiple ongoing assignments and to work independently in addition to contributing as part of a highly collaborative and globally dispersed team.
Strong analytical skills with the ability to break down and communicate complex issues, ideas, and solutions.
Thorough knowledge and experience in all phases of the SDLC.

3. Additional Skills (Advantage)
Working knowledge of enterprise database systems (Sybase or DB2).
Programming in Perl, Python, and shell script.
Knowledge and experience in building conversational user interfaces enabled by AI.
Posted 3 months ago
2 - 7 years
27 - 32 Lacs
Bengaluru
Work from Office
An individual in this role is responsible for the design, development, deployment, and support of products and platforms that leverage Java-based technologies and enable large-scale event processing in engineering products in ETO. The individual will engage in both server-side and front-end development as required to achieve the desired outcomes. Specific responsibilities include:
Design component as well as integration architecture for large-scale web applications.
Develop, test, and support features for globally deployed web apps.
Follow best practices throughout the project lifecycle.
Participate in team-wide design and code reviews.
Keep abreast of emerging technical trends so applicability to ETO products can be determined.

QUALIFICATION
Bachelor's degree (or equivalent or higher) in Computer Science, Information Technology, or Electronics and Communication. Overall 4 - 8 years of experience with a minimum of 2 years in developing Java-based applications.

ESSENTIAL SKILLS

1. Technical
Strong programming skills in Java and Python with proficiency in object-oriented design principles.
Experience with Java frameworks such as DropWizard, Spring, and Hibernate.
Familiarity with web development frameworks (Angular or React).
Experience with testing frameworks (JUnit, TestNG, Cucumber, Mockito).
Hands-on experience building stream-processing systems using Hadoop, Spark, and related technologies.
Familiarity with distributed storage systems like Cassandra, MongoDB, and JanusGraph.
Experience with various messaging systems, such as Kafka or RabbitMQ.
Experience with caching solutions like Hazelcast, Redis, or Memcached.
Knowledge of build tools like Maven or Gradle.
Familiarity with continuous integration and continuous deployment (CI/CD) pipelines, especially using Git.
Working knowledge of Unix/Linux.
Strong problem-solving skills and attention to detail.

2. Soft Skills
Strong communication skills with a track record of working and collaborating with global teams.
Ability to handle multiple ongoing assignments and to work independently in addition to contributing as part of a highly collaborative and globally dispersed team.
Strong analytical skills with the ability to break down and communicate complex issues, ideas, and solutions.
Thorough knowledge and experience in all phases of the SDLC.

3. Additional Skills (Advantage)
Working knowledge of enterprise database systems (Sybase or DB2).
Programming in Perl, Python, and shell script.
Knowledge and experience in building conversational user interfaces enabled by AI.
Posted 3 months ago