
714 Logstash Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role, you will:

- Develop and design a strategic trade store and life-cycle management platform for the Rates business.
- Code and architect/design systems, drawing on experience building highly available and scalable microservices.
- Build solutions with an engineering mindset that not only achieve functional objectives but also cater to non-functional requirements, delivering consistent performance that helps our business grow revenue.
- Coordinate with global project managers on the development book of work and demonstrate accountability, with end-to-end ownership of deliverables for the global business.
- Develop and adopt best practices, ensuring on-time, top-quality deliveries in a DevOps and agile fashion.
- Be aware of the operational risk scenarios associated with your role and act in a manner that takes account of operational risk considerations.
- Proactively remove impediments, identify risks, and communicate issues to Program Management.
- Identify process inefficiencies and find innovative, pragmatic ways to eliminate them.

Requirements

To be successful in this role, you should meet the following requirements:

- 5-8 years of software engineering experience with expertise in designing, developing and deploying Java-based applications.
- In-depth knowledge of Java 8/11/21, microservices architecture and MongoDB.
- Good to have: understanding of containers and container orchestration technology such as Docker/Kubernetes, and exposure to Redis.
- DevOps and tooling expertise, with exposure to continuous integration and deployment tools such as Git, Gradle, Jenkins and Ansible, and exposure to SOAP/RESTful APIs.
- Exposure to monitoring platforms such as Grafana, Elasticsearch, Logstash, Kibana and Geneos.
- Good to have: exposure to GUI development using HTML5, JavaScript/Node, ReactJS, Angular, etc.
- Good to have: a functional understanding of Investment Banking and the Fixed Income business.

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
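The role above centres on Java microservices exposing REST APIs. Purely as an illustrative sketch of that style of service, not HSBC's actual codebase (the class name, route, and in-memory store below are invented), a minimal Spring Boot endpoint looks like this:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.*;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: a minimal Spring Boot REST service of the kind a
// trade-store platform role might involve. All names are hypothetical.
@SpringBootApplication
@RestController
public class TradeServiceApplication {

    // In-memory stand-in for a real store (the posting mentions MongoDB).
    private final Map<String, String> trades = new ConcurrentHashMap<>();

    @PostMapping("/trades/{id}")
    public String saveTrade(@PathVariable String id, @RequestBody String payload) {
        trades.put(id, payload);
        return "stored";
    }

    @GetMapping("/trades/{id}")
    public String getTrade(@PathVariable String id) {
        return trades.getOrDefault(id, "not found");
    }

    public static void main(String[] args) {
        SpringApplication.run(TradeServiceApplication.class, args);
    }
}
```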

Posted 1 day ago

Apply

6.0 years

0 Lacs

Udaipur, Rajasthan, India

On-site

Job Title: Datadog Observability & Automation Specialist
Location: Pune/Mumbai/Noida/Udaipur
Job Type: Full-Time/Hybrid
Experience: 7-15 yrs

Job Summary: We are seeking a skilled Datadog Observability & Automation Specialist with hands-on experience in building observability practices and implementing end-to-end automation, including AI and GenAI capabilities. The ideal candidate will be responsible for configuring and optimizing observability platforms to deliver actionable insights into system performance and reliability across various industry use cases.

Key Responsibilities:
- Design, implement, and maintain heterogeneous observability solutions using infrastructure, logs, synthetic monitoring, automation, AI, and GenAI.
- Create and manage dashboards, monitors, alerts, service maps, and user interfaces.
- Collaborate with DevOps, Development, and Security teams to define and maintain SLIs, SLOs, and SLAs.
- Develop integrations between observability platforms and other systems (e.g., hybrid cloud, on-prem data centers, end-user assets, Kubernetes, Terraform, CI/CD tools).
- Optimize alerting mechanisms to reduce false positives and improve incident response.
- Provide support during incidents, including root cause analysis and post-mortem reviews.
- Conduct training sessions for internal teams on effective platform usage.

Required Skills and Qualifications:
- 6+ years of experience in development, automation, system monitoring, and DevOps.
- 3+ years of hands-on experience with advanced automation and observability platforms such as Dynatrace, Datadog, AppDynamics, New Relic, Zabbix, ELK (Elasticsearch, Logstash, Kibana), AI/GenAI, and Machine Learning.
- Strong understanding of infrastructure components including cloud platforms (AWS, Azure, GCP), containers (Docker, Kubernetes), networking, and operating systems.
- Proficiency in scripting languages such as Python, Bash, or Shell.
- Experience with CI/CD pipelines and automation tools (e.g., Jenkins, GitHub Actions, Terraform, Packer).
- Familiarity with log collection, parsing, and automation using observability platforms.
- Strong analytical and problem-solving skills with a product-oriented mindset.

Preferred Qualifications:
- Certifications in observability platforms (e.g., Datadog Certified Monitoring Professional, Dynatrace, AppDynamics, ELK).
- Experience with additional monitoring tools (e.g., Prometheus, Grafana, New Relic, Nagios, ManageEngine).
- Familiarity with ITIL processes and incident management tools (e.g., PagerDuty, ServiceNow).

Why Join BXI Technologies?
- Lead innovation in AI, Cloud, and Cybersecurity with top-tier partners.
- Be part of a forward-thinking team driving digital transformation.
- Access to cutting-edge technologies and continuous learning opportunities.
- Competitive compensation and performance-based incentives.
- Flexible and dynamic work environment based in India.

About BXI Tech: BXI Tech is a purpose-driven technology company, backed by private equity and focused on delivering innovation in engineering, AI, cybersecurity, and cloud solutions. We combine deep tech expertise with a commitment to creating value for both businesses and communities. Our ecosystem includes BXI Ventures, which invests across technology, healthcare, real estate, and hospitality, and BXI Foundation, which leads impactful initiatives in education, healthcare, and care homes. Together, we aim to drive sustainable growth and meaningful social impact.
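The posting emphasises defining SLIs/SLOs and tuning alerts to cut false positives. As a minimal, tool-agnostic sketch (plain Java, no Datadog API involved; the sample latencies and the 50 ms threshold are invented), the logic behind such a latency monitor reduces to computing a percentile and comparing it against the objective:

```java
import java.util.Arrays;

// Illustrative only: compute a p99 latency SLI from raw samples and check it
// against an SLO threshold - the kind of rule a monitor/alert encodes.
public class SloCheck {

    // Nearest-rank percentile over a sorted copy of the samples.
    static long percentile(long[] samplesMs, double pct) {
        long[] sorted = samplesMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(pct / 100.0 * sorted.length);
        return sorted[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        long[] latenciesMs = {12, 15, 9, 48, 22, 31, 8, 95, 14, 27};
        long p99 = percentile(latenciesMs, 99.0);
        long sloMs = 50; // hypothetical SLO: p99 <= 50 ms
        System.out.println("p99=" + p99 + "ms, SLO " + (p99 <= sloMs ? "met" : "breached"));
    }
}
```

Alerting on a percentile over a window, rather than on single slow requests, is one common way to reduce the false positives the listing mentions.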

Posted 1 day ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Infilon Technologies Pvt Ltd, a prominent software development company located in Ahmedabad, is hiring a Senior Site Reliability Engineer (Immediate Joiner) for one of its clients, TenForce. TenForce is an expert in EHSQ and Operational Risk Management software, based in Belgium and part of Elisa Industriq, a Finnish group committed to making intelligent manufacturing happen.

Job Location: Ahmedabad, Gujarat (Work from Office)
Experience: 5+ years

The Site Reliability Engineer we are looking for has the following characteristics:
- Strong team player and excellent communicator, able to communicate openly and contribute actively to group discussions and brainstorming sessions.
- A proactive approach to identifying problems, performance bottlenecks, and areas for improvement.
- An affinity with DevOps best practices.
- Willingness to perform root cause analysis on incidents, prepare detailed reports to present to stakeholders, and develop solutions to prevent similar incidents from recurring.
- An interest in developing tools to extend the functionality of the monitoring platform.
- Problem-solving skills: you can identify problems, analyze them, and make them disappear.
- Strong collaboration skills to provide quick and accurate feedback.

Who are we looking for?
- Hands-on experience working with enterprise web applications and IIS.
- A good understanding of Git.
- Hands-on experience with SQL and Redis.
- A working understanding of infrastructure and virtualized environments.
- Fluent in English (oral and written) and a strong communicator.
- Knowledge of Scrum and of the Product Owner role.
- Experience with Elasticsearch and Kibana for investigating data sets is a plus.
- Knowledge of log collection systems (e.g., Logstash, Filebeat, …) is a plus.
- Willingness to work with .NET.
- A good knowledge of Linux OS and experience with bash and/or Linux command-line utilities is a plus.

What is in it for you?
- You become part of an international, multicultural team that loves solving challenges through an unconventional and pragmatic approach, but does not tolerate breaking the boundaries of trust, mutual respect, diversity, inclusion, and team-player spirit.
- Tackling a wide range of challenges daily, across multiple customers, while continuously growing and expanding your expertise.
- A lot of responsibility and impact in a scaling organization with big goals.
- Eternal respect from your colleagues as you build simple, powerful and future-proof solutions.

Join us today and power your potential! Interested candidates, kindly share your CV at hr@infilon.com
www.infilon.com
Ahmedabad, Gujarat
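Among the must-haves above is hands-on Redis. A minimal smoke test, assuming the Jedis client library and a Redis server on localhost:6379 (the key name is hypothetical), would look like:

```java
import redis.clients.jedis.Jedis;

// Illustrative only: a minimal Redis round-trip using the Jedis client.
// Assumes a local Redis instance; the key name is invented.
public class RedisSmokeTest {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.set("sre:healthcheck", "ok");               // write a key
            System.out.println(jedis.get("sre:healthcheck")); // prints "ok"
        }
    }
}
```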

Posted 1 day ago

Apply

3.0 years

15 - 22 Lacs

Vellore, Tamil Nadu, India

Remote

Experience: 3+ years
Salary: INR 1,500,000-2,200,000 / year (based on experience)
Expected Notice Period: 30 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by LINEN.Cloud)
(Note: This is a requirement for one of Uplers' clients - LINEN.Cloud.)

What do you need for this opportunity?
Must-have skills: Cloud Foundry, Java Web Services, Kafka, RabbitMQ, Spring Boot, Docker, JavaScript, Kubernetes

LINEN.Cloud is looking for: Java Developer
Function: Technical Management → Engineering Management; Software Engineering → Backend Development, Full-Stack Development
Java, Angular, Microservices, React.js, SQL

We are looking for highly skilled developers with experience building web applications and REST APIs in Java. You will collaborate with cross-functional teams to translate business requirements into high-quality, scalable, and maintainable code. The ideal candidate should have a strong foundation in Java development, along with excellent problem-solving skills and a passion for building innovative solutions.

Responsibilities:
- Designing, implementing, and unit testing Java applications.
- Aligning application design with business goals.
- Debugging and resolving technical problems that arise.
- Recommending changes to the existing Java infrastructure.
- Ensuring continuous professional self-development.

Requirements:
- Experience developing and testing Java web services (RESTful primarily), XML and JSON, and supporting integration and enabling access via API calls.
- Experience with Tomcat, Apache, and similar web server technologies.
- Hands-on experience working with RabbitMQ and Kafka (a minimal producer sketch follows this listing).
- Experience with the Spring Boot framework.
- Hands-on experience with Angular/Node.js is preferred.
- Working knowledge of ELK (Elasticsearch, Logstash, and Kibana) or Solr is a big plus.
- Experience with virtualization such as Cloud Foundry (PCF), Kubernetes (PKS), Docker, etc., is a big plus.
- Agile/Scrum expertise.
- Experience establishing and enforcing branching and software development processes and deployment via CI/CD.

Competencies:
- Team spirit and strong communication skills.
- Customer- and service-oriented, with a confident appearance in an international environment.
- Very high proficiency in English.

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
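For the Kafka requirement flagged above, here is the general shape of a minimal producer using the official Kafka Java client; the broker address and the "orders" topic are invented for illustration, not taken from the posting:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

// Illustrative only: a minimal Kafka producer sketch.
public class OrderProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Fire-and-forget send; production code would handle the returned Future.
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"qty\":1}"));
        }
    }
}
```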

Posted 2 days ago

Apply

3.0 years

15 - 22 Lacs

Coimbatore, Tamil Nadu, India

Remote

(This listing repeats the LINEN.Cloud Java Developer description above verbatim; see that posting for the full responsibilities, requirements, and application steps.)

Posted 2 days ago

Apply

3.0 years

15 - 22 Lacs

Faridabad, Haryana, India

Remote

(This listing repeats the LINEN.Cloud Java Developer description above verbatim; see that posting for the full responsibilities, requirements, and application steps.)

Posted 2 days ago

Apply

3.0 years

15 - 22 Lacs

Chennai, Tamil Nadu, India

Remote

(This listing repeats the LINEN.Cloud Java Developer description above verbatim; see that posting for the full responsibilities, requirements, and application steps.)

Posted 2 days ago

Apply

7.0 - 12.0 years

3 - 7 Lacs

Bengaluru

Work from Office

About The Role
Project Role: Security Engineer
Project Role Description: Apply security skills to design, build and protect enterprise systems, applications, data, assets, and people. Provide services to safeguard information, infrastructures, applications, and business processes against cyber threats.
Must-have skills: Threat Intelligence Operations
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: We are looking for an experienced and detail-oriented Security Delivery Specialist to support the planning, implementation, and delivery of cybersecurity services across Microsoft security technologies. The ideal candidate will have practical expertise in Microsoft Sentinel, Cribl, Logstash, DevOps, Terraform, log source onboarding, and ASIM parsing, and will play a key role in delivering secure, scalable, and compliant security solutions for internal stakeholders or clients.

Roles & Responsibilities:
- Deliver security solutions using the Microsoft security stack, with a focus on Microsoft Sentinel platform management.
- Translate business and technical requirements into well-architected security solutions and support delivery from design to deployment.
- Manage clusters with multiple clients.
- Lead and manage cross-functional teams, ensuring effective collaboration, communication, and alignment with business objectives. Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Develop and implement security strategies.
- Conduct security assessments and audits.
- Stay updated on the latest security trends and technologies.
- Configure and fine-tune Microsoft Sentinel; develop analytics rules, workbooks, and playbooks; and maintain alerting mechanisms.
- Coordinate with engineering, operations, and risk teams to ensure consistent and secure delivery of services.
- Create technical documentation, deployment guides, and knowledge transfer materials for clients or internal teams.
- Collaborate with project managers and stakeholders to ensure timely and successful delivery of security services.
- Contribute to continuous improvement initiatives and automation of delivery processes.

Professional & Technical Skills:
- Strong client-facing and stakeholder engagement capabilities.
- Excellent organizational and project coordination skills.
- Ability to clearly communicate technical information to both technical and non-technical audiences.
- Proactive mindset with a focus on security service quality and consistency.
- Experience working in delivery frameworks such as Agile and ITIL.
- Microsoft Sentinel: hands-on experience with SIEM/SOAR, including KQL query development, alert tuning, and automation with Logic Apps; configuring and fine-tuning Sentinel, developing analytics rules, workbooks, and playbooks, and maintaining alerting mechanisms.
- KQL (Kusto Query Language) proficiency: ability to create analytics rules, hunting queries, workbooks, and detections in Sentinel; ability to create and tune analytics rules using behavioral detection techniques, watchlists, and custom rule logic.
- Knowledge of MITRE ATT&CK and threat modeling: developing detection coverage across ATT&CK techniques, identifying detection gaps, and prioritizing use cases based on threat relevance.
- Log source and data schema familiarity (ASIM): mapping raw logs to the ASIM model, understanding normalized data schemas (e.g., DeviceEvents, NetworkSession), and validating data quality.
- Able to manage Key Vault and secret rotation.
- Required: knowledge of Entra ID management.
- Required: knowledge of log source optimization, ASIM parsing, and normalization.
- Managing Cribl and Logstash pipelines for log source onboarding.
- Strong understanding of incident response and threat management.
- Familiarity with scripting (PowerShell, KQL), infrastructure-as-code, and automation tools is a plus.
- Able to manage requests, incidents, and changes in ServiceNow as per the service management process.
- Required: active participation and contribution in team discussions, and taking part in audits and service improvement activities within the team.
- Experience in designing and implementing security solutions.
- Deliver security solutions using the Microsoft security stack, with a focus on Microsoft Defender for Cloud, Endpoint, Identity, Azure Firewall, and Microsoft Sentinel.
- Implement and operationalize Microsoft Defender for Cloud (MDC) for cloud security posture management and workload protection.
- Support deployment and ongoing management of Microsoft Defender for Endpoint (MDE) for endpoint threat detection and response.
- Integrate Microsoft Defender for Identity (MDI) into customer environments to monitor identity-related threats and provide remediation recommendations.
- Knowledge of network security protocols and best practices.
- Hands-on experience with security tools and technologies.

Additional Information:
- The candidate should have a minimum of 7+ years of experience in Managed Cloud Security Services.
- This position will be operated from the Bengaluru location.
- 15 years of full-time education is required.

Qualification: 15 years full-time education

Posted 2 days ago

Apply

10.0 years

29 - 35 Lacs

India

On-site

(India & North Macedonia pods · United States & India internship programs · Java, Spring Boot, Microservices, Angular, Flutter, Amazon Web Services, Google Cloud Platform, OVHcloud, MongoDB, MySQL, PostgreSQL, Jira with sprint-based delivery · Sub-millisecond / microsecond-class latency)

About the role
Lead two senior engineering pods in India and North Macedonia, plus coordinated internship programs in the United States and India. This player-coach role owns strategy and hands-on excellence for a platform that targets sub-millisecond and microsecond-class latency on critical paths. You will shape architecture, raise the code quality bar, and run a clear, Jira-driven sprint model that delivers predictable outcomes at extreme performance levels.

What you'll do
- Institutionalize latency as a first-class goal: Define service-level objectives in microseconds/milliseconds (p50, p95, p99, p99.9), set per-service latency budgets, and enforce them in pull requests, load tests, and release gates.
- Architect for ultra-low latency: Evolve an application programming interface-first Spring Boot microservices platform on Kubernetes (Amazon Elastic Kubernetes Service, Google Kubernetes Engine, or OVHcloud Managed Kubernetes) with: lightweight binary protocols and efficient serialization (for example, Protocol Buffers where appropriate); connection pooling and keep-alive tuning; zero-copy and off-heap patterns where beneficial; lock-free or low-contention designs (for example, ring buffers / disruptor patterns - see the sketch after this listing); asynchronous and reactive pipelines for back-pressure control; and network and operating system tuning (receive side scaling, interrupt moderation, jumbo frames where safe, non-uniform memory access awareness, thread pinning).
- Engineer the Java Virtual Machine for speed: Standardize low-pause garbage collectors (Z Garbage Collector or Shenandoah), heap sizing, just-in-time compiler warm-up, class data sharing, and profiling (Java Flight Recorder, async-profiler), with performance baselines checked into the repository.
- Data paths built for microseconds: Drive designs in MySQL, PostgreSQL, and MongoDB with partitioning/sharding, change-data-capture, prepared statements, read/write separation, hot caches (Redis), page-cache warming, and point-in-time recovery and disaster-recovery plans that do not compromise latency on the happy path.
- Quality, reliability, and safety at speed: Implement contract tests, end-to-end smoke tests, progressive delivery (canary and blue-green releases), and observability with high-resolution histograms for latency. Use OpenTelemetry traces, metrics, and logs to visualize tail latency and eliminate jitter.
- Security that respects performance: Apply Transport Layer Security termination with sensible cipher choices and hardware acceleration where available; run a secure software development life cycle with static, dynamic, and software-composition security testing and software bill of materials / artifact signing.
- Operate the sprint system: Make Jira the source of truth - well-formed epics, stories with acceptance criteria, two-week sprints, and ceremonies (refinement, planning, daily stand-ups, reviews, retrospectives). Publish live dashboards for velocity, burndown/burnup, cycle/lead time, throughput, and work-in-progress.
- Build and mentor player-coaches: Hire and grow Developers, Senior Developers, and a hands-on Engineering Manager at each site. Lead by example with design spikes, reference implementations, and deep code reviews.
- Run internship programs (United States and India): Create 10-12 week curricula, sandboxed backlogs, pair-programming, weekly demos, and conversion paths to full-time roles.

What success looks like (6-12 months)
- Latency targets met: Example targets - critical in-cluster request p99 ≤ 1 millisecond; in-process hot path p99 ≤ 150-300 microseconds; end-to-end user journey p95 ≤ 50 milliseconds (numbers will be finalized per service).
- Predictable delivery: ≥ 85% sprint predictability (planned versus completed), with cycle time and mean time to recovery trending down quarter-over-quarter.
- Production confidence: Progressive delivery in place, service-level objectives consistently met, and zero critical vulnerabilities outstanding.
- Cost-aware performance: Measurable reduction in cost per customer or cost per transaction while maintaining latency goals.
- Talent engine: Two self-sufficient pods with strong engagement; internship programs meeting satisfaction and conversion targets.

Qualifications
- Experience: 10+ years in software engineering; 5+ years leading multi-team organizations; proven leadership of distributed pods and early-career programs.
- Low-latency depth (Java focus): Spring Boot 3.x, asynchronous/reactive design, Netty-class networking, disruptor or ring-buffer patterns, off-heap strategies, garbage-collector tuning (Z Garbage Collector or Shenandoah), and Linux performance tuning (thread pinning, non-uniform memory access awareness, kernel parameters).
- Platform: Kubernetes, Helm, Argo Continuous Delivery, GitHub Actions or GitLab Continuous Integration, and infrastructure as code with Terraform across Amazon Web Services, Google Cloud Platform, and OVHcloud.
- Data: MySQL, PostgreSQL, MongoDB, and Redis; schema design, indexing, partitioning, performance tuning, and change-data-capture.
- Observability and resilience: OpenTelemetry traces/metrics/logs; Prometheus and Grafana; Elasticsearch/Logstash/Kibana or OpenSearch; incident management with blameless postmortems.
- Security: OAuth 2.0, OpenID Connect, and JSON Web Tokens; secrets management; static/dynamic/software-composition testing; supply-chain hardening.
- Leadership: A true player-coach who can set crisp strategy, mentor managers and senior engineers, and translate microsecond-level engineering choices into business outcomes.

Our stack (you will influence and improve)
- Backend: Java 17+, Spring Boot 3.x, Spring Cloud, RESTful and GraphQL APIs
- Web/Mobile: Angular, Flutter
- Infrastructure and Cloud: Kubernetes, Helm, Argo Continuous Delivery, Terraform, GitHub Actions or GitLab Continuous Integration; Amazon Web Services; Google Cloud Platform; OVHcloud
- Data: MySQL, PostgreSQL, MongoDB; Redis for hot-path caching
- Observability and Security: OpenTelemetry; Prometheus and Grafana; Elasticsearch/Logstash/Kibana or OpenSearch; OAuth 2.0, OpenID Connect, JSON Web Tokens; Vault/Secrets Manager
- Process: Jira with Scrum/Kanban; Confluence for specifications and runbooks

Job Types: Full-time, Permanent
Pay: ₹2,913,711.81 - ₹3,581,863.33 per year
Benefits: Health insurance, life insurance, paid sick time, paid time off, Provident Fund
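For the ring-buffer/disruptor pattern referenced in the architecture bullet above, here is a deliberately simplified single-producer/single-consumer ring buffer in plain Java. It sketches the idea only; production designs (for example, the LMAX Disruptor) add cache-line padding, batching, and wait strategies:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: a minimal SPSC (single-producer/single-consumer)
// ring buffer - a low-contention alternative to a locked queue.
public final class SpscRingBuffer<T> {
    private final Object[] buffer;
    private final int mask;                           // capacity must be a power of two
    private final AtomicLong head = new AtomicLong(); // next slot to read
    private final AtomicLong tail = new AtomicLong(); // next slot to write

    public SpscRingBuffer(int capacityPowerOfTwo) {
        buffer = new Object[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
    }

    // Called by the single producer thread; returns false when full.
    public boolean offer(T value) {
        long t = tail.get();
        if (t - head.get() == buffer.length) return false; // full
        buffer[(int) (t & mask)] = value;
        tail.lazySet(t + 1); // ordered write: publish after the element store
        return true;
    }

    // Called by the single consumer thread; returns null when empty.
    @SuppressWarnings("unchecked")
    public T poll() {
        long h = head.get();
        if (h == tail.get()) return null; // empty
        T value = (T) buffer[(int) (h & mask)];
        buffer[(int) (h & mask)] = null;  // release the slot for GC
        head.lazySet(h + 1);
        return value;
    }
}
```

The power-of-two capacity makes index wrapping a cheap bitmask, and lazySet publishes the cursors with ordered rather than fully fenced writes: two of the low-contention tricks roles like this one tend to probe for.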

Posted 3 days ago

Apply

3.0 - 7.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Join our Team

About this opportunity: Join Ericsson as a Data Scientist. This position plays a crucial role in the development of Python-based solutions, their deployment within a Kubernetes-based environment, and ensuring smooth data flow for our machine learning and data science initiatives. The ideal candidate will possess a strong foundation in Python programming, hands-on experience with Elasticsearch, Logstash, and Kibana (ELK), a solid grasp of fundamental Spark concepts, and familiarity with visualization tools such as Grafana and Kibana. Furthermore, a background in MLOps and expertise in both machine learning model development and deployment will be highly advantageous.

What you will do:
- Python development: Write clean, efficient and maintainable Python code to support data engineering tasks, including collection, transformation and integration with ML models.
- Data pipeline development: Design, build and maintain robust data pipelines to gather, process and transform data from multiple sources into formats suitable for ML and analytics, leveraging ELK, Python and other leading technologies.
- Spark knowledge: Apply core Spark concepts for distributed data processing where required, and optimize workflows for performance and scalability.
- ELK integration: Implement Elasticsearch, Logstash and Kibana for data ingestion, indexing, search and real-time visualization. Knowledge of OpenSearch and related tooling is beneficial.
- Dashboards and visualization: Create and manage Grafana and Kibana dashboards to deliver real-time insights into application and data performance.
- Model deployment and monitoring: Deploy machine learning models and implement monitoring solutions to track model performance, drift, and health.
- Data quality and governance: Implement data quality checks and data governance practices to ensure data accuracy, consistency, and compliance with data privacy regulations.
- MLOps (added advantage): Contribute to the implementation of MLOps practices, including model deployment, monitoring, and automation of machine learning workflows.
- Documentation: Maintain clear and comprehensive documentation for data engineering processes, ELK configurations, machine learning models, visualizations, and deployments.

The skills you bring:
- Core skills: Strong Python programming skills, experience building data pipelines, and knowledge of the ELK stack (Elasticsearch, Logstash, Kibana).
- Distributed processing: Familiarity with Spark fundamentals and with when to leverage distributed processing for large datasets.
- Cloud and containerization: Practical experience deploying applications and services on Kubernetes; familiarity with Docker and container best practices.
- Monitoring and visualization: Hands-on experience creating dashboards and alerts with Grafana and Kibana.
- ML and MLOps: Experience collaborating on ML model development, and deploying and monitoring ML models in production; knowledge of model monitoring, drift detection and CI/CD for ML is a plus.
- Experience criteria: 9 to 14 years.

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 772044

Posted 3 days ago

Apply

10.0 - 14.0 years

0 Lacs

Haryana

On-site

Role Overview: At Capgemini Engineering, as a Software Engineer, you will work in the area of Software Engineering, focusing on the development, maintenance, and optimization of software solutions and applications across various industries. You will apply scientific methods to analyze and solve software engineering problems, while also being responsible for the development and application of software engineering practices and knowledge. Your work will involve original thought, judgment, and supervision of other software engineers, collaborating with team members and stakeholders to drive projects forward.

Key Responsibilities:
- Apply scientific methods to analyze and solve software engineering problems
- Develop and apply software engineering practices and knowledge in research, design, development, and maintenance
- Exercise original thought and judgment, supervising the technical and administrative work of other software engineers
- Build skills and expertise in the software engineering discipline to meet standard expectations for the role
- Collaborate with other software engineers and stakeholders as a team player

Qualifications Required:
- 10+ years of operational knowledge in C# or Python development, as well as Docker
- Experience with PostgreSQL or Oracle
- Knowledge of AWS S3, and optionally AWS Kinesis and AWS Redshift
- Real desire to master new technologies
- Familiarity with unit testing and TDD methodology
- Team spirit, analytical, and synthesis skills

Additional Company Details: At Capgemini, you will have the opportunity to work on cutting-edge projects in tech and engineering with industry leaders, while also contributing to solutions that address societal and environmental challenges. The work culture is highly rated by employees, with a focus on collaboration and supportive teams. Hybrid work models and flexible timings are common, offering remote or partially remote options. Capgemini is a global business and technology transformation partner, with a diverse team of over 340,000 members in more than 50 countries. Trusted by clients for over 55 years, Capgemini delivers end-to-end services and solutions leveraging AI, cloud, data, and industry expertise to drive digital and sustainable transformations for enterprises and society.

Posted 3 days ago

Apply

55.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

At Capgemini Engineering, the world leader in engineering services, we bring together a global team of engineers, scientists, and architects to help the world's most innovative companies unleash their potential. From autonomous cars to life-saving robots, our digital and software technology experts think outside the box as they provide unique R&D and engineering services across all industries. Join us for a career full of opportunities. Where you can make a difference. Where no two days are the same.

Your Role
- C# .NET and/or Python
- Oracle, PostgreSQL, AWS
- ELK (Elasticsearch, Logstash, Kibana)
- GIT, GitHub, TeamCity, Docker, Ansible

Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. Exercises original thought and judgement, and supervises the technical and administrative work of other software engineers.
4. Builds the skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.

Your Profile
- 10+ yrs of operational knowledge of C# or Python development, as well as Docker
- Experience with PostgreSQL or Oracle
- Knowledge of AWS S3, and optionally AWS Kinesis and AWS Redshift
- Real desire to master new technologies
- Unit testing and TDD methodology are assets
- Team spirit, analytical and synthesis skills

What will you love about working at Capgemini? Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band. Also get to participate in internal sports events, yoga challenges, or marathons. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges. Employees rate the work culture highly (4.1/5), appreciating the collaborative environment and supportive teams. Hybrid work models and flexible timings are common, with many roles offering remote or partially remote options.

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over-55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.

Posted 3 days ago

Apply

1.0 - 6.0 years

15 - 25 Lacs

Bengaluru

Work from Office

We have developed API gateway aggregators using frameworks like Hystrix and spring-cloud-gateway for circuit breaking and parallel processing. Our serving microservices handle more than 15K RPS on normal days, and during sale days this can go to 30K RPS. Being a consumer app, these systems have SLAs of ~10 ms. Our distributed scheduler tracks more than 50 million shipments periodically from different partners and does async processing involving RDBMS. We use an in-house video streaming platform to support a wide variety of devices and networks.

What You'll Do
- Design and implement scalable and fault-tolerant data pipelines (batch and streaming) using frameworks like Apache Spark, Flink, and Kafka (a minimal Structured Streaming skeleton follows this listing).
- Lead the design and development of data platforms and reusable frameworks that serve multiple teams and use cases.
- Build and optimize data models and schemas to support large-scale operational and analytical workloads.
- Deeply understand Apache Spark internals and be capable of modifying or extending the open-source Spark codebase as needed.
- Develop streaming solutions using tools like Apache Flink and Spark Structured Streaming.
- Drive initiatives that abstract infrastructure complexity, enabling ML, analytics, and product teams to build faster on the platform.
- Champion a platform-building mindset focused on reusability, extensibility, and developer self-service.
- Ensure data quality, consistency, and governance through validation frameworks, observability tooling, and access controls.
- Optimize infrastructure for cost, latency, performance, and scalability in modern cloud-native environments.
- Mentor and guide junior engineers, contribute to architecture reviews, and uphold high engineering standards.
- Collaborate cross-functionally with product, ML, and data teams to align technical solutions with business needs.

What We're Looking For
- 5-8 years of professional experience in software/data engineering with a focus on distributed data systems.
- Strong programming skills in Java, Scala, or Python, and expertise in SQL.
- At least 2 years of hands-on experience with big data systems including Apache Kafka, Apache Spark/EMR/Dataproc, Hive, Delta Lake, Presto/Trino, Airflow, and data lineage tools (e.g., DataHub, Marquez, OpenLineage).
- Experience implementing and tuning Spark/Delta Lake/Presto at terabyte scale or beyond.
- Strong understanding of Apache Spark internals (Catalyst, Tungsten, shuffle, etc.), with experience customizing or contributing to open-source code.
- Familiarity and hands-on work with modern open-source and cloud-native data stack components such as: Apache Iceberg, Hudi, or Delta Lake; Trino/Presto, DuckDB, ClickHouse, Pinot, or Druid; Airflow, Dagster, or Prefect; DBT, Great Expectations, DataHub, or OpenMetadata; Kubernetes, Terraform, Docker.
- Strong analytical and problem-solving skills, with the ability to debug complex issues in large-scale systems.
- Exposure to data security, privacy, observability, and compliance frameworks is a plus.

Good to Have
- Contributions to open-source projects in the big data ecosystem (e.g., Spark, Kafka, Hive, Airflow)
- Hands-on data modeling experience and exposure to end-to-end data pipeline development
- Familiarity with OLAP data cubes and BI/reporting tools such as Tableau, Power BI, Superset, or Looker
- Working knowledge of tools and technologies like the ELK Stack (Elasticsearch, Logstash, Kibana), Redis, and MySQL
- Exposure to backend technologies including RxJava, Spring Boot, and Microservices architecture
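As a minimal illustration of the Spark Structured Streaming work described above (this is a generic skeleton, not the team's actual pipeline; the broker address and the "events" topic are invented):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

// Illustrative only: read a Kafka topic with Structured Streaming and
// maintain running counts per key, printed to the console.
public class StreamingCount {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("streaming-count")
                .getOrCreate();

        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "events")
                .load();

        // Kafka rows expose binary key/value columns; cast and aggregate.
        Dataset<Row> counts = events
                .selectExpr("CAST(key AS STRING) AS k")
                .groupBy("k")
                .count();

        StreamingQuery query = counts.writeStream()
                .outputMode("complete")
                .format("console")
                .start();
        query.awaitTermination();
    }
}
```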

Posted 3 days ago

Apply

5.0 years

0 Lacs

Greater Chennai Area

Remote

Your work days are brighter here.

At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture, a culture driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy, employee-centric, collaborative culture is the essential mix of ingredients for success in business. That's why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don't need to hide who you are. You can feel the energy and the passion; it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here.

At Workday, we value our candidates' privacy and data security. Workday will never ask candidates to apply to jobs through websites that are not Workday Careers. Please be aware of sites that may ask you to input your data in connection with a job posting that appears to be from Workday but is not. In addition, Workday will never ask candidates to pay a recruiting fee, or pay for consulting or coaching services, in order to apply for a job at Workday.

About The Team
Workday's Planning Cloud SRE team is looking for a Senior Cloud Engineer with 5 or more years of experience in public cloud (AWS, GCP or Azure). In this role you will take an active part in designing and building the infrastructure, tools, and services delivering the Workday Adaptive Planning next-generation cloud platform. You will be challenged with everything from infrastructure tooling, automation, build and deployment pipelines, monitoring and logging architecture, containerization and more! You must be responsive, flexible and able to succeed within an open, collaborative peer environment.

About The Role
- Support Workday Planning Cloud infrastructure, working with technologies like Docker, Kubernetes, AWS, Azure, Chef and Terraform.
- Participate in infrastructure automation using Terraform, Chef, Jenkins and Golang.
- Participate in planning and implementing complicated technical projects that interact with a wide variety of teams within the company.
- Build and respond to production monitors: triage, fix and resolve, and perform root cause analysis.
- Apply strong problem-solving and complexity-analysis skills to large distributed systems, and maintain operational runbooks.
- Support the deployment of cloud solution software during and outside regular office hours.
- Support both Linux and Windows systems.
- Participate in on-call monitoring response.

About You
Are you a hardworking, creative and driven team member who can support us in our mission to gracefully support our multi-cloud infrastructure and automation? If yes, we would love to hear from you! If you like trying new techniques and approaches to sophisticated problems, love to learn new technologies, and are a natural collaborator and a phenomenal teammate who brings out the best in everyone around you, then give us a shout!

Basic Qualifications:
- 5 to 7 years of DevOps, systems/infrastructure, or related operations and SRE experience.
- 3+ years of experience working directly with AWS infrastructure services; AWS certification preferred; a solid understanding of AWS services and security is required.
- 3+ years of experience with at least one programming language such as Go, Python, Bash or Perl.
- Authoring configuration management scripts and deployment tooling: Jenkins, Puppet, Chef or equivalent.
- Splunk, Nagios, Elasticsearch, Kibana, CloudWatch and Logstash, and ways to scale these systems.

Other Qualifications:
- 3+ years of experience with cloud databases such as AWS Oracle/PostgreSQL RDS, Aurora PostgreSQL or GCP Cloud SQL is helpful.
- Orchestration tools like Kubernetes, and working knowledge of containerization (Docker).
- Source control management such as Git and GitHub.
- Knowledge of web servers and cloud load balancers such as Apache HTTP Server, Nginx, HAProxy, AWS ELB/NLB.
- Effective communication of complex technical concepts.
- Bachelor's in computer science or equivalent.

Our Approach to Flexible Work
With Flex Work, we're combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter.

Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!

Posted 3 days ago

Apply

10.0 - 15.0 years

13 - 17 Lacs

Gurugram

Work from Office

Your Role
- C# .NET and/or Python
- Oracle, PostgreSQL, AWS
- ELK (Elasticsearch, Logstash, Kibana)
- GIT, GitHub, TeamCity, Docker, Ansible

Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. Exercises original thought and judgement, and supervises the technical and administrative work of other software engineers.
4. Builds the skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.

Your Profile
- 10+ yrs of operational knowledge of C# or Python development, as well as Docker
- Experience with PostgreSQL or Oracle
- Knowledge of AWS S3, and optionally AWS Kinesis and AWS Redshift
- Real desire to master new technologies
- Unit testing and TDD methodology are assets
- Team spirit, analytical and synthesis skills

What will you love about working at Capgemini? Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band. Also get to participate in internal sports events, yoga challenges, or marathons. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges. Employees rate the work culture highly (4.1/5), appreciating the collaborative environment and supportive teams. Hybrid work models and flexible timings are common, with many roles offering remote or partially remote options.

Posted 3 days ago

Apply

8.0 - 10.0 years

3 - 7 Lacs

Hyderabad

On-site

Company Profile: We're Hiring at CGI for our GCC - Right Here in Hyderabad! Join us at the intersection of technology, finance, and innovation. You will be working to support the PNC Financial Services Group, one of the top-tier financial institutions in the U.S. You'll help shape digital solutions for a global enterprise - from the ground up. This is more than a job. It's your opportunity to: work on cutting-edge technologies, collaborate with global teams, and build a career with purpose and impact. Ready to build the future of banking? Let's talk.

Job Title: Lead Analyst
Position: Java Developer
Experience: 8-10 Years
Category: Software Development/Engineering
Shift: General
Main location: India, Telangana, Hyderabad
Position ID: J0225-1964
Employment Type: Full Time

CGI is looking for a talented and motivated Java developer. The developer is one of the most critical roles on the Data Streaming Platform team; the ability to build Java applications for data pipelines using Kafka and Oracle is essential to the platform. Here are some skills required:

Core Java Skills*
- Strong understanding of Java

Apache Kafka Basics*
- Understanding of Kafka architecture (brokers, partitions, topics, producers, consumers) (high level)
- Experience with Kafka producers and consumers using the Kafka Java client (a language-agnostic sketch follows at the end of this posting)
- Knowledge of Kafka topic configurations (retention, replication, partitioning) (high level)
- Understanding of Kafka Streams distributed processing concepts (high level)
- Familiarity with event-driven architecture
- Knowledge of exactly-once vs. at-least-once processing
- Understanding of stream-table duality (KStreams vs. KTables)

Schema Management
- Experience with Avro, Protobuf, or JSON for structured messages

Integration with External Systems
- Connecting Kafka Streams with databases (PostgreSQL, MongoDB, Cassandra)
- Using Kafka Connect for external data integration
- Knowledge of REST APIs and how to expose data from Kafka Streams

DevOps and Deployment*
- Familiarity with Docker and Kubernetes for containerized deployment
- Using CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI)
- Logging and tracing using ELK (Elasticsearch, Logstash, Kibana) or OpenTelemetry (high-level understanding)

Testing Kafka Streams Applications
- Writing unit tests with Mockito and JUnit
- Using Testcontainers for integration testing with Kafka
- Validating Kafka Streams topologies using TopologyTestDriver

API developers:
- Experience building REST APIs using Spring Boot
- Experience with Spring Data/Spring Data JPA for connecting to and reading from databases via APIs
- Experience writing unit tests using JUnit/Spock
- Familiarity with CI/CD pipelines using Jenkins
- Familiarity with SQL/NoSQL databases

Nice-to-have Skills:
- Monitoring and optimization: understanding of Kafka Streams metrics (through JMX, Grafana, Prometheus); profiling performance and tuning configurations (buffer sizes, commit intervals); handling out-of-order events and rebalancing issues
- Knowledge of Apache Flink or ksqlDB for alternative stream processing
- Knowledge of Docker, OpenShift
- Experience with tools like Dynatrace for troubleshooting

Your future duties and responsibilities:
- Design, develop, and optimize Oracle relational database tables, ensuring high availability, scalability, and performance.
- Optimize SQL queries, indexes, and execution plans for efficient data processing.
- Develop ETL pipelines and PL/SQL to transform and integrate data from multiple sources.
- Implement job scheduling, stored procedures, data validation, and monitoring solutions.
- Work closely with data architecture, DA teams, and application developers to enable data-driven decision-making.
- Strong skills in creating logical and physical data models for RDBMS and NoSQL technologies.
- Strong expertise in PL/SQL, SQL tuning, stored procedures, and triggers.
- Knowledge of data modeling, data lakes, and warehousing.
- Familiarity with Python and shell scripting for transformation and automation.
- Experience with Big Data & NoSQL technologies (e.g., MongoDB, Kafka, Hadoop).
- Nice to have: experience with BIAN (Banking Industry Architecture Network).

Required qualifications to be successful in this role: the Core Java, Apache Kafka, integration, DevOps, testing, and API skills listed above.

Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team - one of the largest IT and business consulting services firms in the world.
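To make the producer/consumer expectations above concrete, here is a minimal sketch. The role itself targets the Kafka Java client; this illustration uses the Python confluent-kafka client to show the same concepts, and the broker address, topic name, group id, and message payload are placeholders rather than details from the posting.

```python
# Minimal Kafka produce/consume round trip with the confluent-kafka client.
# All names below (broker, topic, group id, payload) are illustrative.
from confluent_kafka import Consumer, Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})
producer.produce("trades", key="trade-1", value='{"instrument": "IRS", "notional": 1000000}')
producer.flush()  # block until delivery is confirmed

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "trade-pipeline",       # consumers in one group share partitions
    "auto.offset.reset": "earliest",    # start from the beginning if no committed offset
})
consumer.subscribe(["trades"])

msg = consumer.poll(timeout=5.0)        # a single poll for brevity; real code loops
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```

The Java client exposes the same producer/consumer model (KafkaProducer/KafkaConsumer), so the concepts transfer directly.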

Posted 3 days ago

Apply

0 years

4 - 5 Lacs

gurgaon

Remote

We are seeking a talented individual to join our MMC Corporate team at MMC Tech. This role will be based in Noida/Gurgaon. This is a hybrid role that has a requirement of working at least three days a week in the office.

Role Overview: As a DevOps Engineer at Mercer, you will be a key driver in automating and optimizing our software development, deployment, and runtime environments, with a focus on scalable, secure, and reliable AI solutions. You will possess a comprehensive understanding of the Software Development Lifecycle (SDLC), CI/CD pipelines, and infrastructure automation, with an emphasis on containerized workloads and cloud-native architectures. Your role will also include supporting the deployment and management of AI/ML models, ensuring smooth integration of LLMOps/MLOps practices into our digital pipelines. This position requires a proactive, collaborative engineer who can bridge development, operations, and AI teams to deliver high-quality, scalable AI-enabled applications and services.

We will count on you to:
- Manage and Prioritize Tasks: Self-manage assigned tasks within the DevOps team backlog, ensuring timely delivery and continuous improvement.
- Design & Implement DevOps Processes: Develop, refine, and document CI/CD pipelines, infrastructure-as-code, and automation procedures to streamline software and AI model deployment.
- Collaborate Across Teams: Work closely with development, architecture, and AI/ML teams to understand their deployment requirements, including model serving, data pipelines, and runtime environments.
- Automate Deployment & Infrastructure: Build and maintain automated workflows for software deployment, testing, environment provisioning, and AI/ML model lifecycle management, including versioning, monitoring, and rollback strategies (a minimal rollout/rollback sketch follows at the end of this posting).
- Support AI/ML & LLMOps/MLOps: Implement best practices for deploying, scaling, and monitoring AI models, including large language models, ensuring compliance with security and performance standards.
- Cost & Security Management: Monitor environment costs, ensure environments are patched, secure, and compliant, and optimize resource utilization.
- Maintain Environment Consistency: Ensure environment definitions are version-controlled, reproducible, and aligned with organizational standards.
- Promote Best Practices: Advocate for infrastructure security, reliability, and efficiency, incorporating AI-specific considerations such as model drift detection, data privacy, and model explainability.
- Monitor & Troubleshoot: Use observability tools (DataDog, ElasticSearch, Grafana, etc.) to monitor system health, troubleshoot issues, and optimize AI deployment pipelines.
- Contribute to Innovation: Stay current with emerging DevOps, MLOps, and LLMOps trends, integrating new tools and methodologies to improve deployment workflows.
- Documentation & Knowledge Sharing: Maintain comprehensive documentation of processes, environments, and AI deployment strategies, sharing knowledge across teams.

What You Need to Have:
- Proven experience automating CI/CD pipelines, infrastructure provisioning, and deployment workflows, especially in containerized environments.
- Strong understanding of DevOps best practices, including security, scalability, and cost management.
- Experience working with AI/ML models, including deployment, monitoring, and lifecycle management, with familiarity in LLMOps/MLOps practices.
- Ability to collaborate with AI teams to support model deployment, versioning, and scaling.
- Knowledge of cloud platforms (AWS, Azure, GCP) for deploying AI solutions at scale.
- Familiarity with container orchestration tools such as Kubernetes, Rancher, Helm, and Docker.
- Experience with automation tools like Jenkins, Ansible, GitHub Actions, and scripting languages (Python, Bash).
- Experience with observability and logging tools such as DataDog, ElasticSearch, Logstash, Kibana, and Grafana.
- Understanding of security best practices for cloud and container environments.
- Strong problem-solving skills and a proactive approach to automation and optimization.

What makes you stand out?
- Extensive experience with Docker, building and maintaining images across diverse technologies (MEAN stack, Java, Python, Machine Learning, Kafka, .NET, etc.).
- Proficiency with CI/CD tools such as Jenkins, GitHub Actions, and scripting languages (Python, Bash).
- Hands-on experience with Kubernetes, Rancher, Helm, and container orchestration.
- Familiarity with infrastructure automation tools like Ansible.
- Knowledge of cloud services (AWS, Azure, GCP) for deploying scalable AI/ML workloads.
- Experience with model deployment frameworks and tools supporting MLOps/LLMOps workflows.
- Familiarity with API management (Apigee) and security tools.
- Experience with monitoring and logging ecosystems (DataDog, ElasticSearch, Logstash, Kibana, Grafana).
- Understanding of software security, environment management, and cost optimization.

Why join our team: We help you be your best through professional development opportunities, interesting work and supportive leaders. We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities. Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being.

Marsh McLennan (NYSE: MMC) is the world's leading professional services firm in the areas of risk, strategy and people. The Company's more than 85,000 colleagues advise clients in over 130 countries. With annual revenue of $23 billion, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment through four market-leading businesses. Marsh provides data-driven risk advisory services and insurance solutions to commercial and consumer clients. Guy Carpenter develops advanced risk, reinsurance and capital strategies that help clients grow profitably and pursue emerging opportunities. Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and well-being for a changing workforce. Oliver Wyman serves as a critical strategic, economic and brand advisor to private sector and governmental clients. For more information, visit marshmclennan.com, or follow us on LinkedIn and X.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law.
Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one "anchor day" per week on which their full team will be together in person.
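The rollout and rollback automation mentioned in the responsibilities can be illustrated with a short sketch using the official Kubernetes Python client. This is not Mercer's actual tooling; the namespace, deployment, container, and image names are invented for illustration.

```python
# Minimal sketch: update a Kubernetes Deployment's image, the building block
# behind automated rollouts and rollbacks of AI/ML model servers.
from kubernetes import client, config

config.load_kube_config()  # in-cluster code would use config.load_incluster_config()
apps = client.AppsV1Api()

def set_image(namespace: str, deployment: str, container: str, image: str) -> None:
    """Patch a single container image; Kubernetes performs a rolling update."""
    patch = {"spec": {"template": {"spec": {"containers": [{"name": container, "image": image}]}}}}
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)

# Roll forward; rolling back is the same call with the previous tag.
set_image("ai-services", "model-server", "model-server", "registry.example.com/model-server:1.4.2")
```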

Posted 4 days ago

Apply

7.0 - 12.0 years

22 - 25 Lacs

bengaluru

Work from Office

Position Summary: We are seeking an experienced SOC Analyst to join our Security Operations team. This role demands an individual with a strong technical background in incident analysis, SIEM administration, and rule fine-tuning. The ideal candidate will have experience working with diverse environments, including Windows, Linux, and network security, and will be well versed in ELK stack management and troubleshooting Beats agents.

Key Responsibilities
1. Incident Detection and Analysis:
- Conduct deep-dive analysis on security incidents, assessing root causes and recommending solutions.
- Proactively monitor and respond to security alerts, managing incident escalation and resolution processes.
- Prepare detailed reports and document incidents to support future analysis and security measures.
2. SIEM Administration and Rule Fine-Tuning:
- Oversee SIEM configurations, including tuning rules to optimize alerting and reduce false positives.
- Conduct SIEM platform upgrades, troubleshoot performance issues, and ensure platform availability.
- Collaborate with IT teams to integrate new data sources into the SIEM and enhance visibility.
3. System and Network Security:
- Perform continuous monitoring and analysis across Windows and Linux systems and network infrastructures.
- Utilize tools for traffic analysis, anomaly detection, and threat identification.
- Support configurations and policies within the IT and network environment to strengthen security.
4. ELK Stack and Beats Agent Management:
- Manage and troubleshoot ELK Stack components (Elasticsearch, Logstash, and Kibana) to ensure seamless data flow (a minimal log-hunting sketch follows at the end of this posting).
- Perform regular maintenance and troubleshooting of Beats agents, ensuring reliable log ingestion and parsing.
5. Security Policies and Compliance:
- Contribute to policy updates, ensuring adherence to organizational and industry compliance standards.
- Document and enforce security controls aligned with best practices and regulatory requirements.

Skills and Qualifications
- Education: Bachelor's degree in Information Security, Computer Science, or a related field.
- Experience:
  - At least 5 years in SOC operations or a similar cybersecurity role.
  - Proven experience in SIEM administration, incident analysis, and configuration fine-tuning.
  - Proficiency in monitoring and troubleshooting Windows and Linux systems and managing network security protocols.
  - Hands-on experience with the ELK Stack, with expertise in troubleshooting Beats agents.
- Technical Skills:
  - Familiarity with SIEM tools (e.g., Splunk, QRadar) and network protocols.
  - Strong command of incident response processes, security frameworks, and best practices.
  - Knowledge of communication protocols and system integrations for data protection.
- Certifications (preferred): CISSP, CompTIA Security+, CEH, or similar security certifications.

Competencies
- Strong analytical skills with attention to detail.
- Excellent verbal and written communication abilities.
- Ability to work independently and collaboratively in a fast-paced environment.

Additional Preferred Skills
- Knowledge of regulatory compliance standards.
- Experience in using EDR solutions.
- Ability to document processes and create incident playbooks.

This role offers an opportunity to work on advanced cybersecurity initiatives within a dynamic SOC environment, contributing to enhanced organizational security.

Mandatory Key Skills: incident analysis, Linux systems, security frameworks, Beats, protocols, Logstash, QRadar, Kibana, Elasticsearch, SOC, Splunk, Linux, information security, security operations, CISSP, SIEM*, Windows troubleshooting*, troubleshooting*, incident response*, network security*
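As an illustration of the ELK-based hunting and tuning work described above, here is a minimal sketch using the Python Elasticsearch client: query an index fed by Beats agents for failed logins in the last hour and aggregate them by source IP to spot brute-force attempts. The index pattern, field names, and alert threshold are assumptions for illustration, not a specific SIEM's schema.

```python
# Minimal log hunt: failed logins in the last hour, grouped by source IP.
# "auth-logs-*", "event.outcome", and "source.ip" are illustrative names.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder cluster URL

resp = es.search(
    index="auth-logs-*",
    size=0,  # we only need the aggregation, not individual hits
    query={
        "bool": {
            "filter": [
                {"term": {"event.outcome": "failure"}},
                {"range": {"@timestamp": {"gte": "now-1h"}}},
            ]
        }
    },
    aggs={"by_source_ip": {"terms": {"field": "source.ip", "size": 10}}},
)

for bucket in resp["aggregations"]["by_source_ip"]["buckets"]:
    if bucket["doc_count"] > 20:  # illustrative alert threshold
        print(f"possible brute force from {bucket['key']}: {bucket['doc_count']} failures")
```

The same query logic is what a tuned SIEM detection rule would encode; scripting it first is a common way to validate thresholds before promoting a rule.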

Posted 4 days ago

Apply

0 years

0 Lacs

bangalore urban, karnataka, india

On-site

We are seeking a highly skilled Python Backend Engineer to join our team. The ideal candidate will have strong experience in backend development, data handling, and AI integration, with expertise in building scalable, high-performance systems. You will work on designing and implementing APIs, integrating AI models, and ensuring smooth data processing pipelines.

Key Responsibilities:
- Design, develop, and maintain backend services using Python (Flask/FastAPI).
- Build and optimize REST APIs for Lead Predictor features (a minimal API sketch follows at the end of this posting).
- Implement and manage PostgreSQL databases with the SQLAlchemy ORM and handle schema migrations using Alembic.
- Integrate Elasticsearch for search and analytics functionalities.
- Work with Redis / Celery / RQ for background task processing and caching.
- Collaborate with AI teams to integrate solutions using Azure OpenAI, the OpenAI SDK, LangChain, and LangGraph.
- Monitor and log system performance using the ELK stack (Elasticsearch, Logstash, Kibana).
- Containerize and deploy applications using Docker.
- Ensure system scalability, security, and performance optimization.
- Collaborate with cross-functional teams (Product, AI, and DevOps) to deliver end-to-end features.

Required Skills & Experience:
- Strong programming skills in Python 3 with experience in Flask or FastAPI.
- Hands-on experience with PostgreSQL and query optimization.
- Proficiency in the SQLAlchemy ORM and database migration tools like Alembic.
- Experience with Redis and RQ / Celery for task queues and caching.
- Good understanding of Elasticsearch integration and optimization.
- Experience with AI/ML model integration using Azure OpenAI, LangChain, LangGraph.
- Familiarity with Docker and containerized deployment.
- Experience with logging, monitoring, and debugging using the ELK stack.
- Solid understanding of software design patterns, scalability, and performance optimization.

Good to Have:
- Exposure to microservices architecture.
- Knowledge of CI/CD pipelines and cloud environments (Azure/AWS/GCP).
- Prior experience in AI-powered platforms or predictive analytics projects.

Position Title: Python Backend Engineer
Location: Chennai
Type: Full-Time
Only immediate joiners who are comfortable with the Chennai location should apply.
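As a concrete illustration of the stack this posting describes, here is a minimal sketch of a FastAPI endpoint backed by PostgreSQL through the SQLAlchemy 2.0 ORM. The Lead model, route, and connection string are invented for illustration; in practice schema changes would flow through Alembic migrations rather than create_all.

```python
# Minimal FastAPI + SQLAlchemy 2.0 sketch: one read endpoint over a Lead table.
from fastapi import Depends, FastAPI, HTTPException
from sqlalchemy import Integer, String, create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

engine = create_engine("postgresql+psycopg2://app:secret@localhost/leads")  # placeholder DSN

class Base(DeclarativeBase):
    pass

class Lead(Base):
    __tablename__ = "leads"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    name: Mapped[str] = mapped_column(String(120))
    score: Mapped[int] = mapped_column(Integer, default=0)

Base.metadata.create_all(engine)  # quick local demo only; real changes go through Alembic

app = FastAPI()

def get_session():
    # One session per request, closed automatically when the request ends.
    with Session(engine) as session:
        yield session

@app.get("/leads/{lead_id}")
def read_lead(lead_id: int, session: Session = Depends(get_session)):
    lead = session.get(Lead, lead_id)
    if lead is None:
        raise HTTPException(status_code=404, detail="lead not found")
    return {"id": lead.id, "name": lead.name, "score": lead.score}
```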

Posted 4 days ago

Apply

0.0 - 5.0 years

0 Lacs

hyderabad, telangana, india

On-site

Job Responsibilities
- Analyze complex data sets to identify trends, patterns, and insights that can drive business decisions.
- Design and develop predictive models, algorithms, machine learning, and artificial intelligence techniques to improve the accuracy and efficiency of analytics solutions.
- Collaborate with cross-functional teams to ensure the data used is accurate, relevant, and up to date for analytics purposes.
- Contribute to hands-on development of data & analytics solutions.
- Deliver products and solutions in a timely, proactive, and entrepreneurial manner.
- Accelerate solution delivery using reusable frameworks, prototypes, and hackathons.
- Follow MLOps principles to ensure scalability, repeatability, and automation in the end-to-end machine learning lifecycle.
- Develop and maintain detailed technical documentation.
- Stay up to date on industry trends and new technologies in data science and analytics, and apply this knowledge to improve the firm's analytics capabilities.

Education, Technical Skills & Other Critical Requirements
- 0-5 years of relevant experience in AI/analytics product & solution delivery.
- Bachelor's/Master's degree in information technology, computer science, statistics, economics, or an equivalent field.
- Strong understanding of machine learning and deep learning concepts, with a focus on natural language processing (a minimal PyTorch sketch follows at the end of this posting).
- Proficiency in the Python programming language, with experience using libraries like PyTorch for deep learning tasks.
- Familiarity with the Elastic stack (Elasticsearch, Logstash, Kibana) for data management and analysis.
- Experience in optimizing algorithms and time series forecasts.
- Knowledge of prompt engineering techniques to improve model performance.
- Ability to prototype applications using Streamlit or similar tools.
- Experience working with large and complex internal, external, structured, and unstructured datasets.
- Model development and deployment in the cloud; familiarity with GitHub, CI/CD processes, Docker, containerization, and Kubernetes.
- Strong conceptual and creative problem-solving skills.
- Good written and verbal communication and presentation skills; able to engage in a meaningful manner with a variety of audiences: business stakeholders, technology partners & practitioners, executive and senior management.
- Industry knowledge of emerging AI trends, tools, and technologies.

Preferred:
- Familiarity with new-age AI and ML techniques such as GenAI, foundational models, large language models (LLMs), and their applications.
- Certifications in the AI space such as MLOps, AIOps, Generative AI, Ethical AI, and AI deployment in the cloud.
- Familiarity with agile methodologies & tools.
- Prior experience in P&C Insurance Analytics.
- Prior experience in analytics consulting and services.
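To make the PyTorch expectation concrete, here is a minimal sketch of a single training step for a small classifier head, the kind that might sit on top of pre-computed text embeddings in an NLP pipeline. The dimensions, random stand-in data, and hyperparameters are placeholders, not details from the posting.

```python
# One training step of a tiny classifier head over 768-d text embeddings.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(768, 128), nn.ReLU(), nn.Linear(128, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

embeddings = torch.randn(32, 768)    # stand-in for a batch of sentence embeddings
labels = torch.randint(0, 2, (32,))  # stand-in for binary labels

logits = model(embeddings)           # forward pass
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()                      # backpropagate
optimizer.step()                     # update weights
print(f"batch loss: {loss.item():.4f}")
```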

Posted 4 days ago

Apply

8.0 years

0 Lacs

noida, uttar pradesh, india

On-site

Job Description - Role: Manager - DevOps

We at Pine Labs are looking for those who share our core belief - "Every Day is Game Day". We bring our best selves to work each day to realize our mission of enriching the world through the power of digital commerce and financial services.

Role Purpose: We are seeking a Manager - DevOps who will lead and manage the organization's DevOps infrastructure, observability stack for applications, CI/CD pipelines, and support services. This role involves managing a team of DevOps engineers, architecting scalable infrastructure, and ensuring high availability and performance of our messaging and API management systems. This individual will oversee a team of IT professionals, ensure the seamless delivery of IT services, and implement strategies to align technology solutions with business objectives. The ideal candidate is a strategic thinker with strong technical expertise and proven leadership.

What we entrust you with:
- Lead and mentor a team of DevOps leads/engineers in designing and maintaining scalable infrastructure.
- Architect and manage Kafka clusters for high-throughput, low-latency data streaming.
- Deploy, configure, and manage Kong API Gateway for secure and scalable API traffic.
- Design and implement CI/CD pipelines for microservices and infrastructure.
- Automate infrastructure provisioning using tools like Terraform or Ansible.
- Monitor system performance and ensure high availability and disaster recovery.
- Collaborate with development, QA, and security teams to streamline deployments and enforce best practices.
- Ensure compliance with security standards and implement DevSecOps practices.
- Maintain documentation and provide training on Kafka and Kong usage and best practices.
- Strong understanding of observability pillars: metrics, logs, traces, and events.
- Hands-on experience with Prometheus for metrics collection and Grafana for dashboarding and visualization.
- Proficiency in centralized logging solutions like the ELK Stack (Elasticsearch, Logstash, Kibana), Fluentd, or Splunk.
- Experience with distributed tracing tools such as Jaeger, Zipkin, or OpenTelemetry.
- Ability to implement instrumentation in applications for custom metrics and traceability (a minimal instrumentation sketch follows at the end of this posting).
- Skilled in setting up alerting and incident response workflows using tools like Alertmanager, PagerDuty, or Opsgenie.
- Familiarity with SLOs, SLIs, and SLA definitions and monitoring for service reliability.
- Experience with anomaly detection and root cause analysis (RCA) using observability data.
- Knowledge of cloud-native monitoring tools (e.g., AWS CloudWatch, Azure Monitor, GCP Operations Suite).
- Ability to build actionable dashboards and reports for technical and business stakeholders.
- Understanding of security and compliance monitoring within observability frameworks.
- Collaborative mindset to work with SREs, developers, and QA teams to define meaningful observability goals.
- Prepare and manage the IT budget, ensuring alignment with organizational priorities.
- Monitor expenditures and identify opportunities for cost savings without compromising quality.
- Well spoken with good communication skills, as a lot of stakeholder management is needed.

What matters in this role (work experience):
- Bachelor's or Master's degree in computer science, engineering, or a related field.
- 8+ years of experience in DevOps or related roles, with at least 5 years in a leadership position.
- Strong hands-on experience with Apache Kafka (setup, tuning, monitoring, security).
- Proven experience with Kong API Gateway (plugins, routing, authentication, rate limiting).
- Proficiency in cloud platforms (AWS, Azure, or GCP).
- Kafka certification or Kong Gateway certification.
- Experience with service mesh technologies (e.g., Istio, Linkerd).
- Knowledge of event-driven architecture and microservices patterns.
- Experience with GitOps and Infrastructure as Code (IaC).
- Experience with containerization and orchestration (Docker, Kubernetes).
- Strong scripting skills (Bash, Python, etc.).
- Hands-on with monitoring tools (Prometheus, Grafana, Mimir, ELK).

What you should be comfortable with:
- Working from office: 5 days a week (Sector 62, Noida).
- Pushing the boundaries: Have a big idea? See something that you feel we should do but haven't done? We will hustle hard to make it happen. We encourage out-of-the-box thinking, and if you bring that with you, we will make sure you get a bag that fits all the energy you bring along.

What We Value In Our People:
- You take the shot: You Decide Fast and You Deliver Right.
- You are the CEO of what you do: You show ownership and make things happen.
- You own tomorrow: By building solutions for the merchants and doing the right thing.
- You sign your work like an artist: You seek to learn and take pride in the work you do.
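The custom-metrics instrumentation called out above can be sketched in a few lines with the Python prometheus_client library: expose a request counter and a latency histogram for Prometheus to scrape and Grafana to chart. Metric names, the label, and the port are illustrative assumptions, not Pine Labs conventions.

```python
# Minimal custom-metrics instrumentation for Prometheus scraping.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("payment_requests_total", "Total payment requests", ["status"])
LATENCY = Histogram("payment_request_seconds", "Payment request latency")

def handle_payment():
    with LATENCY.time():                       # records how long the block takes
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(status="ok").inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics for Prometheus to scrape
    while True:
        handle_payment()
```

Counters and histograms like these are also the raw material for SLIs: an error-rate SLI, for example, is just the failure-labeled counter divided by the total.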

Posted 4 days ago

Apply

3.0 - 5.0 years

0 Lacs

gurugram, haryana, india

On-site

Position: Deployment Engineer / DevOps
Location: Gurgaon

Educational Qualifications: B.Tech/BE/MS/MTech/MCA with 3-5 years of experience

Skill Set Specialities: Open Source, Linux, DevOps, Bash, Monitoring, HAProxy, Logstash, Cloud, Web Servers (Apache, Nginx), Apache Tomcat, MySQL, Redis, Docker, Jenkins, Performance Tuning, Scalability, Python, Postfix, qmail

Primary Responsibilities
- Maintain, manage, enhance, and own builds and releases to various environments, such as development, QA, pre-production, and production.
- Manage the source code version control system and build server.
- Develop and maintain an efficient and flexible automated deployment framework that ensures repeatable and reliable deployment of releases into multiple environments.
- Provide deployment and troubleshooting support (a minimal post-deployment smoke-check sketch follows at the end of this posting).
- Implement branching and merging strategies for build and patch releases.
- Documentation, such as build procedures, build release notes, and installation/configuration notes.
- Work with the development team to define specifications for value-added tools which can improve or automate support processes.
- Provide quick resolution to build issues.
- Design, develop, and maintain tools to support the needs of the team.
- Expert in source code management tools such as Git.
- Knowledge of Python, shell scripts, Batch, Maven.
- Detailed knowledge of the Linux operating system and tools.
- Sound understanding of web technologies.
- Excellent problem-solving, analytical, and technical troubleshooting skills.

Requirements
- 3-4 years of experience in handling builds/releases in various environments ranging from QA to production.
- Comfort with frequent, incremental code testing and deployment.
- Knowledge of relational & document databases such as MySQL & MongoDB.
- Experience administering and deploying development CI/CD tools such as GitHub, Jira, CircleCI/Jenkins, JFrog, SonarQube, etc.
- Should be familiar with the OS (Linux).
- Must be good at analyzing and debugging Java-based enterprise applications.
- Experience in deploying open-source components in Linux-based environments, e.g., web servers / RDBMS (MySQL) / Redis / Docker / NoSQL DBs.
- Experience in deploying and configuring applications in Tomcat in cloud and Docker environments.
- Proven experience with log monitoring, collection, and analysis.
- Good at understanding build scripts in Gradle and Ant.
- Good with writing batch and shell scripts to automate the build and deployment process.
- Must have prior experience in troubleshooting issues in deployed environments and answering queries on releases.
- Self-motivated and willing to learn new technologies continuously.
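As an illustration of the deployment support and automation described above, here is a minimal post-deployment smoke check: poll a service's health endpoint after a release and return a non-zero exit code (failing the pipeline) if it never comes up. The URL, retry count, and delay are placeholders.

```python
# Minimal post-deployment smoke check for a CI/CD pipeline step.
import sys
import time

import requests

HEALTH_URL = "http://localhost:8080/actuator/health"  # placeholder endpoint

def wait_for_healthy(url: str, attempts: int = 30, delay: float = 5.0) -> bool:
    """Poll the endpoint until it answers 200 or the attempts run out."""
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.get(url, timeout=3)
            if resp.status_code == 200:
                print(f"healthy after {attempt} attempt(s)")
                return True
        except requests.RequestException as exc:
            print(f"attempt {attempt}: {exc}")
        time.sleep(delay)
    return False

if __name__ == "__main__":
    sys.exit(0 if wait_for_healthy(HEALTH_URL) else 1)  # non-zero exit fails the CI job
```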

Posted 4 days ago

Apply

5.0 - 9.0 years

7 - 11 Lacs

navi mumbai

Work from Office

At PitchBook, our Product & Engineering team is made up of big thinkers, innovators, and problem-solvers who aim to make a meaningful impact on our customers and company every day. We value curiosity, customer empathy, and the drive to find better ways of doing things. Our work blends creativity with technical expertise to deliver world-class customer experiences through product innovation.

As a Senior Software Development Engineer in our Engineering Revenue Platform division, you will design, develop, simplify, and optimize back-end software solutions that power the quality, scalability, and performance of our Sales/Customer Success technology stack. You'll ensure seamless platform integrations and reliable business workflows across our global systems. In this role, you'll partner with stakeholders to understand requirements, architect solutions, and implement services and APIs that drive efficiency, accuracy, and speed of delivery. You will leverage your expertise in modern system design, development patterns and practices, data modeling, and integration patterns to build solutions that integrate seamlessly into our continuous integration and continuous delivery (CI/CD) pipelines. You will play a key role in delivering end-to-end capabilities across lead-to-close, renewals, licensing, account provisioning, and integrations with Salesforce and connected applications. With a focus on future-ready solutions, you will also contribute to AI-powered initiatives and guide the team in building intelligent, data-driven platform capabilities. You'll mentor engineers, provide technical leadership, and help shape the architecture and development strategy for our revenue platforms in a fast-paced, collaborative environment. You will simplify and modernize systems, processes, and tools to ensure our applications are robust, maintainable, and aligned with the company's strategic goals. You are adept at translating the needs of customers, internal sales, and CS teams into clear, implementable solutions. You bring strong change management skills and foster transparent communication across stakeholders, ensuring delivery excellence remains a shared responsibility throughout the software development lifecycle.

Primary Job Responsibilities
- Design, develop, and maintain back-end services, APIs, and CI/CD pipelines for revenue platforms with a focus on scalability, maintainability, and performance.
- Build robust integration points and validation mechanisms to ensure functional accuracy and adherence to best practices across systems.
- Architect and optimize solutions that integrate with Salesforce, Snowflake, Workato, and other core systems across Sales and CS workflows.
- Lead initiatives to enhance system performance, code quality, delivery speed, scalability, and reliability through efficient code design/development, performance tuning, and AI tools.
- Champion engineering practices and be the technical leader for the team.
- Mentor engineers in design and development, driving technical excellence across the team.
- Partner with Product, EMs, and Sales/Customer Success teams to translate complex business requirements into scalable, future-ready software and AI solutions.
- Ensure engineering priorities are aligned with broader business goals and customer outcomes.
- Own end-to-end service health, implementing monitoring, logging, and alerting to meet performance, reliability, and security benchmarks (a minimal structured-logging sketch follows at the end of this posting).
- Proactively identify and resolve defects, bottlenecks, and technical debt to maintain delivery velocity and quality.
- Evaluate and adopt emerging technologies like GenAI, tools, frameworks, and patterns to keep the revenue technology stack modern and competitive.
- Foster a culture of continuous improvement, encouraging experimentation and data-driven decision-making in solution design.

Skills and Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in software development, with deep knowledge of building scalable, integrated systems including Salesforce, Snowflake, Marketo, Workato, and other connected platforms.
- Expert-level proficiency in Java, Python, or JavaScript for developing robust backend services, APIs, and integration solutions.
- Strong understanding of the Salesforce AI ecosystem, including Agentforce 3 and AI agents, with experience leveraging AI-driven features in software solutions.
- Solid experience with Agile methodologies, DevOps principles, and CI/CD pipeline management, implementing automated deployments and integration workflows.
- Hands-on experience designing and deploying applications in cloud environments using GCP and AWS.
- Proficient with containerization and orchestration using Docker and Kubernetes to support scalable, resilient services.
- Skilled in monitoring system health, performance, and reliability using tools like Grafana and Prometheus.
- Experienced in debugging complex issues and analyzing logs using the ELK stack (Elasticsearch, Logstash, Kibana).
- Proven ability to design, develop, and deliver scalable, reliable, secure, and maintainable software solutions for complex backend and integration systems.
- Experience in financial services or B2B platforms is a plus.

Morningstar is an equal opportunity employer.

Working Conditions: The job conditions for this position are in a standard office setting. Employees in this position use a PC and phone on an ongoing basis throughout the day. Limited corporate travel may be required to remote offices or other business meetings and events.
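As a small illustration of the ELK-friendly logging mentioned above, here is a minimal structured-logging sketch using only the Python standard library: each log line is a single JSON object that Logstash/Elasticsearch can ingest without custom parsing. The field names and logger name are illustrative conventions, not PitchBook's actual schema.

```python
# Minimal JSON structured logging: one JSON object per log line.
import json
import logging
import sys
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("revenue-platform")  # illustrative logger name
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("renewal sync completed")  # -> {"ts": "...", "level": "INFO", ...}
```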

Posted 4 days ago

Apply

3.0 years

3 - 6 Lacs

india

On-site

Job Title: DevOps Engineer (3+ Years Experience)
Location: Delhi
Job Type: Full-time
Experience Required: Minimum 3 Years

Job Summary: We are seeking a highly skilled and motivated DevOps Engineer with a minimum of 3 years of hands-on experience in managing CI/CD pipelines, cloud infrastructure (preferably AWS), container orchestration, configuration management, and infrastructure monitoring. You will work closely with the development, QA, and IT teams to streamline deployments, ensure system reliability, and automate operational tasks.

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines using GitLab CI or similar tools.
- Manage and scale cloud infrastructure on AWS (EC2, S3, IAM, RDS, Route 53, Lambda, CloudWatch, etc.); a minimal AWS automation sketch follows at the end of this posting.
- Containerize applications using Docker and orchestrate using Kubernetes (EKS preferred).
- Implement Infrastructure as Code (IaC) using Ansible, Terraform, or CloudFormation.
- Maintain secure and scalable Linux server environments, ensuring optimal performance and uptime.
- Write and maintain shell scripts or Python scripts for automation and monitoring tasks.
- Set up and manage monitoring, alerting, and logging systems using tools like Grafana, Prometheus, the ELK Stack (Elasticsearch, Logstash, Kibana), or CloudWatch.
- Implement robust backup and disaster recovery strategies.
- Collaborate with development teams on efficient DevSecOps practices, including secrets management and vulnerability scans.
- Troubleshoot and resolve production issues, performing root cause analysis and preventive planning.

Required Skills and Experience:
- 3+ years of experience as a DevOps Engineer or in a similar role.
- Proficient in GitLab (or GitHub Actions, Jenkins), including runners and CI/CD pipelines.
- Strong hands-on experience with AWS services (EC2, RDS, S3, VPC, EKS, etc.).
- Proficient with Docker and Kubernetes, including Helm, volumes, services, and autoscaling.
- Solid experience with Ansible for configuration management and automation.
- Good understanding of Linux systems administration and troubleshooting.
- Strong scripting skills in Bash, Shell, or Python.
- Experience with monitoring and alerting tools such as Grafana, Prometheus, or Zabbix.
- Familiar with log management tools (ELK Stack, Fluentd, or CloudWatch Logs).
- Familiarity with SSL/TLS, DNS, load balancers (Nginx/HAProxy), and firewall/security configurations.
- Knowledge of version control systems (Git), branching strategies, and GitOps practices.

Good to Have (Optional but Preferred):
- Experience with Terraform or Pulumi for cloud infrastructure provisioning.
- Knowledge of security compliance standards (ISO, SOC 2, PCI DSS).
- Experience with Kafka, RabbitMQ, or Redis.
- Familiarity with service meshes like Istio or Linkerd.
- Experience with cost optimization and autoscaling strategies on AWS.
- Exposure to incident management tools (PagerDuty, Opsgenie).
- Certification (e.g., AWS Certified DevOps Engineer, CKA, RHCE) is a plus.

Job Type: Full-time
Pay: ₹30,000.00 - ₹50,000.00 per month
Work Location: In person
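As an illustration of the routine AWS automation this role involves, here is a minimal boto3 sketch that finds running EC2 instances missing a required cost-attribution tag. The region and the tag key are assumptions for illustration.

```python
# Minimal AWS housekeeping: list running EC2 instances missing a required tag.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # illustrative region

paginator = ec2.get_paginator("describe_instances")
pages = paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "CostCenter" not in tags:  # illustrative required tag
                print(f"untagged instance: {instance['InstanceId']}")
```

Scripts like this typically run on a schedule (cron, Lambda, or a CI job) and feed alerts into the monitoring stack rather than printing to stdout.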

Posted 5 days ago

Apply

0 years

4 - 9 Lacs

noida

Remote

Principal Observability Engineer

WHAT MAKES US, US
Join some of the most innovative thinkers in FinTech as we lead the evolution of financial technology. If you are an innovative, curious, collaborative person who embraces challenges and wants to grow, learn and pursue outcomes with our prestigious financial clients, say Hello to SimCorp! At its foundation, SimCorp is guided by our values - caring, customer success-driven, collaborative, curious, and courageous. Our people-centered organization focuses on skills development, relationship building, and client success. We take pride in cultivating an environment where all team members can grow, feel heard, valued, and empowered. If you like what we're saying, keep reading!

WHY THIS ROLE IS IMPORTANT TO US
Observability is at the heart of delivering robust, reliable, and innovative solutions to our global financial clients. This role puts you at the forefront of our strategic journey, building systems that proactively detect issues, provide clear insights, and drive continuous improvements. Your contribution ensures SimCorp can swiftly respond to challenges, keep systems performing optimally, and empower our teams with actionable insights. Success in this role means delivering exceptional reliability, transparency, and innovation, directly impacting client satisfaction and SimCorp's industry leadership.

WHAT WILL YOU BE RESPONSIBLE FOR
- Drive and execute technical observability strategy across the organization.
- Lead design and implementation of advanced observability tooling (Elastic, Grafana, OpenTelemetry); a minimal OpenTelemetry instrumentation sketch follows at the end of this posting.
- Ensure governance and standardization across observability platforms.
- Develop and maintain clear, practical documentation (runbooks, onboarding guides, wikis).
- Enable engineering teams via effective technical onboarding and knowledge-sharing.

WHAT WE VALUE
This strategic role requires a blend of deep technical expertise, architectural vision, and strong leadership to guide the organization's observability practices.

Professional & Technical Skills
- Strategic & Architectural Planning: Ability to drive and execute a cohesive technical observability strategy across the entire organization. Experience designing and implementing multi-tenant, scalable, and resilient observability architectures using tools like Elastic, Grafana, and OpenTelemetry. Deep understanding of the observability pillars (logs, metrics, and traces) and how to integrate them effectively.
- Software & Platform Expertise:
  - Elastic Stack: Advanced knowledge of Elasticsearch, Logstash, Kibana, and Beats, including cluster administration, performance tuning, creating complex queries, and index lifecycle management (ILM).
  - Grafana: Advanced skills in creating sophisticated dashboards, using various data sources, and managing Grafana plugins and provisioning.
  - OpenTelemetry: Deep experience with the OpenTelemetry framework, including collector configuration, instrumentation of applications (auto & manual), and context propagation.
  - Azure Cloud: Senior-level experience with Azure services, particularly Azure Monitor, Log Analytics, Application Insights, and Azure Kubernetes Service (AKS).
- Governance & Enablement:
  - Experience creating and enforcing governance models for observability platforms, including naming conventions, data retention policies, and access control (RBAC).
  - Proven ability to develop clear and practical documentation, such as runbooks, onboarding guides, and internal wikis.
  - Skilled in technical writing and creating enablement materials to train and support engineering teams.

Soft Skills & Capabilities
- Leadership & Influence: Ability to manage stakeholders and communicate a clear technical vision to both engineering teams and senior leadership.
- Strategic Thinking: A forward-thinking mindset to anticipate future needs, evaluate emerging technologies, and ensure the observability platform meets long-term business goals.
- Mentorship: A passion for enabling other engineers through knowledge-sharing, workshops, and effective onboarding.
- Communication: Excellent verbal and written communication skills to articulate complex technical concepts to diverse audiences.

BENEFITS
Competitive salary, bonus scheme, and pension are essential for any work agreement. However, at SimCorp, we believe we can offer more. Therefore, in addition to the traditional benefit scheme, we provide an excellent work-life balance: flexible work hours and a hybrid workplace model. SimCorp follows a global hybrid policy, asking employees to work from the office two days each week while allowing remote work on other days. On top of that, we have IP sprints, where you have 3 weeks per quarter to spend on mastering your skills as well as contributing to the company's development. There is never just one route - we practice an individual approach to professional development to support the direction you want to take.

NEXT STEPS
Please send us your application in English via our career site as soon as possible; we process incoming applications continually. Please note that only applications sent through our system will be processed. At SimCorp, we recognize that bias can unintentionally occur in the recruitment process. To uphold fairness and equal opportunities for all applicants, we kindly ask you to exclude personal data such as photo, age, or any non-professional information from your application. Thank you for aiding us in our endeavor to mitigate biases in our recruitment process. If you are interested in being a part of SimCorp but are not sure this role is suitable, submit your CV anyway. SimCorp is on an exciting growth journey, and our Talent Acquisition Team is ready to assist you in discovering the right role for you. The approximate time to consider your CV is three weeks. We are eager to continually improve our talent acquisition process and make everyone's experience positive and valuable. Therefore, during the process we will ask you to provide your feedback, which is highly appreciated.

WHO WE ARE
For over 50 years, we have worked closely with investment and asset managers to become the world's leading provider of integrated investment management solutions. We are 3,000+ colleagues with a broad range of nationalities, educational backgrounds, professional experiences, ages, and backgrounds in general. SimCorp is an independent subsidiary of the Deutsche Börse Group. Following the recent merger with Axioma, we leverage the combined strength of our brands to provide an industry-leading, full, front-to-back offering for our clients. SimCorp is an equal opportunity employer and welcomes applicants from all backgrounds, without regard to race, gender, age, disability, or any other protected status under applicable law. We are committed to building a culture where diverse perspectives and expertise are integrated into our everyday work. We believe in the continual growth and development of our employees, so that we can provide best-in-class solutions to our clients. #LI-Hybrid
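To make the manual-instrumentation expectation concrete, here is a minimal OpenTelemetry Python sketch: configure a tracer provider, attach an exporter (console here; in practice an OTLP exporter pointed at a collector), and wrap a unit of work in a span. The service name, span name, and attribute are invented for illustration.

```python
# Minimal manual OpenTelemetry tracing setup.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "pricing-service"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("revalue-portfolio") as span:
    span.set_attribute("portfolio.id", "demo-123")  # illustrative attribute
    # ... business logic would run here; nested spans inherit this context ...
```

Swapping ConsoleSpanExporter for an OTLP exporter is what routes the same spans to a collector and on to backends like Elastic or Grafana Tempo.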

Posted 5 days ago

Apply