
63 Jaeger Jobs - Page 3

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description
- 5+ years of working experience with industry-standard messaging systems (Apache Kafka, Apache Pulsar, RabbitMQ).
- Experience configuring Kafka for large-scale deployments is desirable.
- Hands-on experience building reactive microservices using any popular Java stack.
- Experience building applications using Java 11 best practices and functional interfaces.
- Understanding of Kubernetes custom operators is desirable.
- Experience building stateful streaming applications using Kafka Streams or Apache Flink is a plus.
- Experience with OpenTelemetry, tracing, and Jaeger is a plus.

Career Level - IC3

Responsibilities
The role requires proven experience managing Kafka in large deployments and distributed architectures, and extensive knowledge of configuring Kafka using various industry-driven architectural patterns. You must be passionate about building distributed messaging cloud services running on Oracle Cloud Infrastructure. Experience with pub-sub architectures using Kafka or Pulsar, or with point-to-point messaging using queues, is desirable, as is experience building distributed systems with traceability in a high-volume messaging environment. Each team owns its service deployment pipeline to production.

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
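For illustration only (not part of the listing above), here is a minimal Python sketch of the kind of OpenTelemetry/Jaeger instrumentation the posting references around Kafka-based services. It assumes the opentelemetry-sdk and opentelemetry-exporter-otlp packages and a Jaeger collector accepting OTLP on localhost:4317; the service name and topic are hypothetical.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Register a tracer provider that exports spans to a Jaeger/OTLP collector.
resource = Resource.create({"service.name": "orders-service"})  # hypothetical name
provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# Wrap a message publish in a span so it shows up in Jaeger with Kafka attributes.
with tracer.start_as_current_span("publish-order-event") as span:
    span.set_attribute("messaging.system", "kafka")
    span.set_attribute("messaging.destination", "orders")  # hypothetical topic
    # ... produce the Kafka message here ...
```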

Posted 3 weeks ago


6.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote


Job Description

Position Title: 5G Solution Integrator
Product: Oracle Communications 5G & CNE

Job Objective: The Consulting Engineer team supports consulting activities for Oracle Communications products and services, specifically around the Oracle Communications Cloud Native Environment and solutions including cnDSR, PCF, SCP, NRF, NSSF, NEF, BSF, cnUDR, IWF, etc. The individual will work as a Solution Architect and Integrator on complex solutions using Oracle products in the 4G/5G domain and cloud-native products, solving issues and ensuring professional, high-quality work. The team handles tasks such as requirement gathering, solution creation, solution implementation, proposal creation, collateral creation, solution demos, PoCs, RFP support, project support, internal client handling, and coordination with the PLM, Engineering, and License teams. The position requires very good knowledge of and skills in cloud-native architecture (including Docker and Kubernetes) at an architect level, common services that are part of the CNCF landscape, and Oracle Communications 5G products such as SCP, NRF, NSSF, PCF, UDR, BSF, IWF, NEF and cnDSR. The mission is not limited to the aforesaid products and also extends to any product Oracle Communications is working on, over bare-metal, virtualized, or cloud infrastructure as IaaS, PaaS, or SaaS.

Job Description: Reporting to the Consulting Practice Manager (CGBU), the Consulting Engineer is responsible for:
- Providing pre- and post-sales technical services and support for Oracle Communications products over cloud-native architecture to CGBU customers.
- Assisting in the sales cycle to understand, size, and scope customer requirements for complex service implementations of Oracle products, including capacity analysis, database sizing requirements, special feature implementation, etc.
- Working on underlying infrastructure ranging from bare-metal and virtualized to private or public cloud deployments.
- Working in a cloud-native environment to deploy the Cloud Native Environment (CNE), common services, and Oracle network functions.
- Creating and implementing technical solutions based on customer requirements.
- Creating presales deliverables such as proposals, effort estimation, profitability analysis, collateral, solution demos, PoCs, RFP responses, HLDs, scope documents, MoPs, acceptance test plans, etc.
- Customizing microservices such as Grafana, Prometheus, ELK and Jaeger to add new metrics, alerts, logs and tracing capabilities.
- Providing support for specialized activities including, but not limited to, system installation, upgrades/downgrades, system migrations, cutovers, provisioning solutions, feature activation, custom script creation, scaling, Heat template customizations, and VNFM support.
- Working with customers to ensure technical compatibility with third-party products required for "turnkey" network implementations.
- Performing integration of network nodes and playing the design and integration role for network nodes in the 4G/5G network.
- Performing onsite/remote activities as required to support the customer.
- Effectively communicating and working with internal and external customers in a multi-project environment.
- Performing duties assigned by management effectively and efficiently to meet organizational goals.
- Conducting special internal and customer workshops when not covered by the Training department.
- Being ready to travel on short notice when needed.

Qualification, Experience, Communication and Interpersonal Skills Desired:
- BS/B.Tech in Computer Science, Electrical Engineering or equivalent, plus a minimum of 6+ years' experience in a technical work environment.
- Experience working with cloud-native architecture along with common services/tools such as Jaeger, Prometheus, Elasticsearch, Fluentd, Kibana, Grafana, Logging, MetalLB, tracer, Kubernetes, Docker, Helm, Ansible, etc., as a Solution Architect, with the ability to deploy and customize them based on customer requirements.
- Working experience with tools such as Git, Jenkins, Maven, Selenium, Python and automation frameworks, with DevOps experience for CI/CD, is desirable.
- Working experience with other key products such as DSR, PCRF, PCF, NRF, SCP, NSSF, UDR and NEF, along with in-depth knowledge of 5G flows, LTE and IMS flows, interfaces, and use cases, is desirable but not mandatory.
- Strong oral and written communication skills in English are required; must be able to clearly and effectively communicate work status, risks, and issues. Knowledge of Portuguese and Spanish is a plus.
- Experience with solution design, customer presentations, system installations, upgrades, and feature testing.
- Advanced knowledge of HTTP/2, Diameter and SS7 protocols is desirable.
- Must have experience working on complex telecom projects deploying 5G and Diameter nodes/applications in both bare-metal and cloud environments.
- Should be well conversant with cloud concepts including IaaS, PaaS, VMs, VNF, VNFM, Heat templates, containers, CNE, Docker, Kubernetes, etc.
- Must have experience creating project technical documents such as SAED, NAPD, HLD, LLD, ATP and SOW.
- Knowledge of and experience with UNIX/Linux/Solaris would be a plus.
- Knowledge of shell scripting, Python, Perl, PHP, Java, JavaScript and Golang is desired.
- MySQL and Oracle database knowledge would be a plus.
- Willing to travel domestically and internationally with minimal notice, up to 65% of the time and occasionally more often.
- Ability to act under leader direction and exercise leadership.

Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
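As a purely illustrative aside (not part of the posting), the "customize Grafana/Prometheus to add new metrics and alerts" responsibility above usually starts with a service exposing its own metrics. A minimal Python sketch using the prometheus_client package; the metric names, labels and port are hypothetical.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical counters/histograms a network-function microservice might expose.
REQUESTS = Counter("nf_requests_total", "Total requests handled", ["interface"])
LATENCY = Histogram("nf_request_seconds", "Request latency in seconds", ["interface"])

def handle_request(interface: str) -> None:
    # Time the (simulated) work and count the request, labelled by interface.
    with LATENCY.labels(interface=interface).time():
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for real work
    REQUESTS.labels(interface=interface).inc()

if __name__ == "__main__":
    start_http_server(9000)  # Prometheus scrapes http://localhost:9000/metrics
    while True:
        handle_request("N7")
```

Grafana dashboards and Prometheus alert rules would then be built on top of these series.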

Posted 3 weeks ago


3.0 - 7.0 years

15 - 20 Lacs

Pune

Work from Office


What You'll Do
- Configure and manage observability agents across AWS, Azure & GCP
- Use IaC techniques and tools such as Terraform, Helm & GitOps to automate deployment of the observability stack
- Work with different language stacks such as Java, Ruby, Python and Go
- Instrument services using OpenTelemetry and integrate telemetry pipelines
- Optimize telemetry metrics storage using time-series databases such as Mimir & NoSQL DBs
- Create dashboards, set up alerts, and track SLIs/SLOs
- Enable RCA and incident response using observability data
- Secure the observability pipeline

You Bring
- BE/BTech/MTech (CS/IT or MCA), with an emphasis on Software Engineering
- Strong skills in reading and interpreting logs, metrics, and traces
- Proficiency with the LGTM stack (Loki, Grafana, Tempo, Mimir) or similar, plus Jaeger, Datadog, Zipkin, InfluxDB, etc.
- Familiarity with log frameworks such as log4j, lograge, Zerolog, loguru, etc.
- Knowledge of OpenTelemetry, IaC, and security best practices
- Clear documentation of observability processes, logging standards & instrumentation guidelines
- Ability to proactively identify, debug, and resolve issues using observability data
- Focus on maintaining data quality and integrity across the observability pipeline
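For context on the "track SLIs/SLOs" item above, the alerting logic behind it comes down to simple error-budget arithmetic. A minimal, self-contained sketch with hypothetical numbers (not taken from the posting):

```python
# Error-budget burn-rate arithmetic behind SLO-based alerting.
SLO_TARGET = 0.999          # 99.9% of requests should succeed over 30 days
total_requests = 120_000    # requests seen in a 1-hour window (hypothetical)
failed_requests = 360       # failed requests in that window (hypothetical)

sli = 1 - failed_requests / total_requests   # observed availability
error_budget = 1 - SLO_TARGET                # allowed failure ratio
burn_rate = (1 - sli) / error_budget         # 1.0 = budget consumed exactly on pace

print(f"SLI={sli:.4%}, burn rate={burn_rate:.1f}x")
if burn_rate > 14.4:  # commonly used fast-burn threshold for a 1-hour window
    print("Page: the 30-day error budget would be gone in about 2 days at this rate")
```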

Posted 3 weeks ago


3.0 - 5.0 years

15 - 20 Lacs

Pune

Work from Office


What You'll Do
- Configure and manage observability agents across AWS, Azure & GCP
- Use IaC techniques and tools such as Terraform, Helm & GitOps to automate deployment of the observability stack
- Work with different language stacks such as Java, Ruby, Python and Go
- Instrument services using OpenTelemetry and integrate telemetry pipelines
- Optimize telemetry metrics storage using time-series databases such as Mimir & NoSQL DBs
- Create dashboards, set up alerts, and track SLIs/SLOs
- Enable RCA and incident response using observability data
- Secure the observability pipeline

You Bring
- BE/BTech/MTech (CS/IT or MCA), with an emphasis on Software Engineering
- Strong skills in reading and interpreting logs, metrics, and traces
- Proficiency with the LGTM stack (Loki, Grafana, Tempo, Mimir) or similar, plus Jaeger, Datadog, Zipkin, InfluxDB, etc.
- Familiarity with log frameworks such as log4j, lograge, Zerolog, loguru, etc.
- Knowledge of OpenTelemetry, IaC, and security best practices
- Clear documentation of observability processes, logging standards & instrumentation guidelines
- Ability to proactively identify, debug, and resolve issues using observability data
- Focus on maintaining data quality and integrity across the observability pipeline

Posted 3 weeks ago


0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


When you join Verizon
You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love, driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the V Team Life.

What You'll Be Doing...
You will be part of a world-class Container Platform team that builds and operates highly scalable Kubernetes-based container platforms (EKS, OCP, OKE and GKE) at large scale for Global Technology Solutions at Verizon, a top-20 Fortune 500 company. This individual will have a high level of technical expertise and daily hands-on implementation, working in a product team developing services in two-week sprints using agile principles. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform, along with building Docker containers via a fully automated CI/CD pipeline utilizing AWS, Ansible playbooks, CI/CD tools and processes (Jenkins, JIRA, GitLab, ArgoCD), Python, shell scripts or other scripting technologies. You will have autonomous control over day-to-day activities allocated to the team as part of agile development of new services:
- Automation and testing of different platform deployments, maintenance and decommissioning
- Full stack development
- Participation in PoC (proof of concept) technical evaluations of new technologies for use in the cloud

What we're looking for...
You'll need to have:
- Bachelor's degree or four or more years of work experience.
- Three or more years of relevant Kubernetes-centric development experience.
- Ability to address Jira tickets opened by platform customers.
- Hands-on experience with one or more of the following platforms: EKS, Red Hat OpenShift, GKE, AKS, OCI.
- RBAC and Pod Security Standards, Quotas, LimitRanges, OPA & Gatekeeper policies.
- Expertise in one or more of the following: Ansible, Terraform, Helm, Jenkins, GitLab VSC/Pipelines/Runners, Artifactory.
- Proficiency with monitoring/observability tools such as New Relic and Prometheus/Grafana, and logging solutions (Fluentd/Elastic/Fluent Bit/OTEL/ADOT/Splunk), including creating/customizing metrics and/or logging dashboards.
- Infra components such as Flux, cert-manager, Karpenter, Cluster Autoscaler, VPC CNI, over-provisioning, CoreDNS, metrics-server.
- Familiarity with Wireshark, tshark, dumpcap, etc., capturing network traces and performing packet analysis.
- Working experience with service mesh lifecycle management and with configuring and troubleshooting applications deployed on a service mesh, including service-mesh-related issues.
- Demonstrated expertise with the K8s ecosystem (inspecting cluster resources, determining cluster health, identifying potential application issues, etc.).
- Experience creating self-healing automation scripts/pipelines.
- Bash scripting experience, including automation scripting (netshoot, RBAC lookup, etc.).
- Demonstrated expertise with the K8s security ecosystem (SCC, network policies, RBAC, CVE remediation, CIS benchmarks/hardening, etc.).
- Strong troubleshooting and problem-solving skills.
- Certified Kubernetes Administrator (CKA).
- Excellent cross-collaboration and communication skills.

Even better if you have one or more of the following:
- GitOps CI/CD workflows (ArgoCD, Flux) and experience working in an Agile ceremonies model.
- Working experience with security tools such as Sysdig, CrowdStrike, Black Duck, Xray, etc.
- Networking of microservices: solid understanding of Kubernetes networking and troubleshooting.
- Experience with monitoring tools like New Relic, plus working experience with Kiali and Jaeger lifecycle management and assisting app teams in leveraging these tools for their observability needs.
- K8s SRE tools for troubleshooting.
- Certified Kubernetes Application Developer (CKAD).
- Red Hat Certified OpenShift Administrator.

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above.

Where you'll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Diversity and Inclusion
We're proud to be an equal opportunity employer. At Verizon, we know that diversity makes us stronger. We are committed to a collaborative, inclusive environment that encourages authenticity and fosters a sense of belonging. We strive for everyone to feel valued, connected, and empowered to reach their potential and contribute their best. Check out our diversity and inclusion page to learn more.

Locations: Chennai, India; Hyderabad, India
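For illustration of the cluster-health and self-healing scripting this role describes (not part of the posting), here is a minimal sketch using the official kubernetes Python client that flags pods whose containers have restarted more than a threshold. The namespace and threshold are hypothetical; it runs against whatever cluster the local kubeconfig points at.

```python
from kubernetes import client, config

RESTART_THRESHOLD = 5
NAMESPACE = "demo"  # hypothetical namespace

config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
v1 = client.CoreV1Api()

# List pods in the namespace and report containers restarting too often.
for pod in v1.list_namespaced_pod(NAMESPACE).items:
    for status in pod.status.container_statuses or []:
        if status.restart_count > RESTART_THRESHOLD:
            print(f"{pod.metadata.name}/{status.name}: "
                  f"{status.restart_count} restarts, ready={status.ready}")
```

A real self-healing pipeline would act on this output (delete the pod, open a ticket, trigger a rollback) rather than just printing it.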

Posted 3 weeks ago


0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Job Description

Your Impact / Responsibilities:
- Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients' business.
- Lead and support the implementation of the engineering side of digital business transformations, with cloud, multi-cloud, security, observability and DevOps as technology enablers.
- Build immutable infrastructure and maintain highly scalable, secure, and reliable cloud infrastructure that is optimized for performance and cost and compliant with security standards to prevent security breaches.
- Enable our customers to accelerate their software development lifecycle and reduce the time-to-market for their products or services.

Qualifications

Your Skills & Experience:
- 4 to 12 years of experience in Cloud & DevOps with a full-time Bachelor's/Master's degree (Science or Engineering preferred).
- Expertise in at least one cloud (must have):
  - GCP (Compute, IAM, VPC, Storage, Serverless, Database, Kubernetes, Pub/Sub, Operations Suite)
  - Azure (Virtual Machines, Azure Active Directory, Virtual Network, Blob Storage, Functions, Database, Azure Service Bus, Azure Monitor)
  - AWS (EC2, IAM, VPC, S3, Lambda, RDS, SNS, CloudWatch)
- Configuration and monitoring of DNS, app servers, load balancers and firewalls for high-volume traffic.
- Extensive experience designing, implementing, and maintaining infrastructure as code, preferably using Terraform, or CloudFormation/ARM Templates/Deployment Manager/Pulumi.
- Experience managing container infrastructure (on-prem and managed, e.g., AWS ECS, EKS, or GKE):
  - Design, implement and upgrade container infrastructure, e.g., K8s clusters and node pools.
  - Create and maintain deployment manifest files for microservices using Helm.
  - Utilize the Istio service mesh to create gateways, virtual services, traffic routing and fault injection.
  - Troubleshoot and resolve container infrastructure and deployment issues.
- Continuous Integration & Continuous Deployment:
  - Develop and maintain CI/CD pipelines for software delivery using Git and tools such as Jenkins, GitLab, CircleCI, Bamboo and Travis CI.
  - Automate build, test, and deployment processes to ensure efficient release cycles and enforce software development best practices, e.g., quality gates, vulnerability scans, etc.
  - Automate build and deployment processes using Groovy, Go, Python, Shell, PowerShell.
  - Implement DevSecOps practices and tools to integrate security into the software development and deployment lifecycle.
  - Manage artifact repositories such as Nexus and JFrog Artifactory for version control and release management.
- Design, implement, and maintain observability, monitoring, logging and alerting using the tools below:
  - Observability: Jaeger, Kiali, CloudTrail, OpenTelemetry, Dynatrace
  - Logging: Elastic Stack (Elasticsearch, Logstash, Kibana), Fluentd, Splunk
  - Monitoring: Prometheus, Grafana, Datadog, New Relic

Good to have:
- Associate-level public cloud certifications
- Terraform Associate certification

Additional Information

Benefits of working here:
- Gender-neutral policy
- 18 paid holidays throughout the year
- Generous parental leave and new parent transition program
- Flexible work arrangements
- Employee Assistance Programs to help you with wellness and well-being

Company Description
Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients' businesses through designing the products and services their customers truly value.
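As an illustrative aside (not from the posting), the "quality gates" item in the CI/CD list above is often just a small script that queries monitoring data and fails the pipeline on a bad signal. A minimal sketch using only the standard Prometheus HTTP query endpoint and the requests library; the Prometheus URL, job label and threshold are hypothetical.

```python
import sys
import requests

PROMETHEUS = "http://prometheus.monitoring.svc:9090"  # hypothetical endpoint
QUERY = ('sum(rate(http_requests_total{job="checkout",code=~"5.."}[5m]))'
         ' / sum(rate(http_requests_total{job="checkout"}[5m]))')
MAX_ERROR_RATE = 0.01  # hypothetical gate threshold

# Query the instant error ratio for the newly deployed service.
resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
result = resp.json()["data"]["result"]

error_rate = float(result[0]["value"][1]) if result else 0.0
print(f"5xx error rate over the last 5m: {error_rate:.3%}")
sys.exit(1 if error_rate > MAX_ERROR_RATE else 0)  # non-zero exit fails the gate
```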

Posted 3 weeks ago


0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About McDonald’s: One of the world’s largest employers, with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary: Cloud Engineer II (DevSecOps)

Who we’re looking for: This opportunity is part of the Global Technology Infrastructure & Operations team (GTIO), where our mission is to deliver modern and relevant technology that supports the way McDonald’s works. We provide outstanding foundational technology products and services including Global Networking, Cloud, End User Computing, and IT Service Management. It’s our goal to always provide an engaging, relevant, and simple experience for our customers.

The Cloud Engineer II (DevSecOps) reports to the Director of Enterprise DevSecOps Platform and is responsible for supporting, migrating, automating and optimizing the software development and deployment process. The Cloud DevSecOps Engineer will work closely with software developers, operations engineers, and other stakeholders to ensure that the software delivery process is efficient, secure, and scalable. You will support the Corporate, Digital, Data, Restaurant, and Market application and product teams by efficiently and optimally delivering DevOps standards and services. This is a great opportunity for an experienced technology leader to help craft the transformation of infrastructure and operations products and services for the entire McDonald's environment.

In this role, you will:
- Participate in the management, design, and solutioning of the software development and deployment process.
- Provide direction and guidance to vendors partnering on DevSecOps tools standardization and engineering support.
- Build reusable pipeline templates for automated deployment of cloud infrastructure and code.
- Research, analyze, design, develop and support high-quality automation workflows inside and outside the cloud platform that are appropriate for business and technology strategies.
- Develop and maintain infrastructure and tools that support the software development and deployment process.
- Automate the software development and deployment process.
- Monitor and troubleshoot the software delivery process.
- Work with software developers and operations engineers to improve the software delivery process.
- Stay up to date on the latest DevSecOps practices and technologies.
- Drive proofs of concept and conduct technical feasibility studies for business requirements.
- Strive to provide internal and external customers with excellent customer service and world-class service.
- Effectively communicate project health, risks, and issues to program partners, sponsors, and management teams.
- Resolve most conflicts between timeline, budget, and scope independently, but raise complex or consequential issues to senior management.
- Work well in an agile environment.

Qualifications:
- 3+ years of hands-on DevOps pipeline experience automating, building and deploying microservice applications, APIs and non-container artifacts (preferred).
- 3+ years with GitHub Actions, ArgoCD, Helm charts, Harness and SonarQube (preferred).
- 3+ years of hands-on experience with CI/CD technologies including microservices, Terraform and pipeline creation/management (e.g., GitHub, Artifactory/JFrog, Harness, etc.) (preferred).
- Over 3+ years of experience with cloud technologies, including extensive hands-on work with IaaS and PaaS offerings in GCP or AWS.
- More than 3 years of experience developing application build and deployment pipelines for .NET, Java, and Python applications is good to have.
- Hands-on experience managing Kubernetes clusters.
- Experience with observability tools like Datadog, New Relic and the open-source (o11y) observability ecosystem (Prometheus, Grafana, Jaeger) (preferred).
- 4+ years of Information Technology experience.
- Bachelor's degree in information technology or a related field, or relevant experience.
- 3+ years of application development using agile methodology.
- Advanced knowledge of application, data, and infrastructure architecture disciplines, plus experience with Kubernetes and the AWS platform.
- Hands-on knowledge of a broad range of end-to-end DevOps technologies.
- Ability to design, develop and implement scalable, elastic microservice-based platforms.
- Ability to help and guide the team in resolving technical issues through debugging, research, and investigation.
- Automate iOS infrastructure: utilize infrastructure-as-code principles to simplify the deployment and maintenance of build environments using tools like Terraform, GitHub Actions, Chef, Ansible, and Puppet for configuration management.
- Enhance developer efficiency: develop and maintain tools that boost the productivity of iOS developers, including build automation, testing, and release management tools.
- Automate Android infrastructure: utilize infrastructure-as-code principles to simplify the deployment and maintenance of build environments using tools like Terraform, GitHub Actions, Chef, Ansible, and Puppet for configuration management.
- Enhance developer efficiency: develop and maintain tools that boost the productivity of Android developers, including build automation, testing, and release management tools.
- Good to have experience with code quality, SAST and DAST tools such as SonarQube/SonarCloud, Veracode, Checkmarx, and Snyk.
- Experience using container-based technologies.
- Any AWS certification and Agile certification, preferably scaled agile.
- Good knowledge of IaaS and PaaS offerings in AWS, Azure and GCP.
- Good knowledge of Infrastructure-as-Code and associated technologies (e.g., repos, pipelines, Terraform, etc.).
- Previous DevSecOps and automation experience.
- Experience developing scripts or automating tasks using languages such as Bash, PowerShell, Python, Perl, Ruby, etc.
- Strong desire to automate everything you touch.
- Self-starter, able to come up with solutions to problems and complete those solutions while coordinating with other teams.
- Knowledge of foundational cloud security principles.
- Excellent problem-solving and analytical skills.
- Strong communication and partnership skills.
- Knowledge of AWS and Azure and a willingness to upskill as the company's adoption grows (preferred).
- Experience with the Software Development Life Cycle (SDLC) (preferred).
- Hands-on knowledge of Infrastructure-as-Code and associated technologies (e.g., repos, pipelines, Terraform, etc.) (preferred).
- Advanced knowledge of the AWS platform, preferably with 3+ years of AWS/Kubernetes experience (preferred).

Work location: Hyderabad, India
Work pattern: Full-time role.
Work mode: Hybrid.
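As an illustrative aside (not part of the posting), DevSecOps automation of the kind listed above often includes small compliance checks against cloud APIs. A minimal boto3 sketch that lists running EC2 instances missing a required tag; the region and tag key are hypothetical assumptions.

```python
import boto3

REQUIRED_TAG = "CostCenter"  # hypothetical mandatory tag

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region
paginator = ec2.get_paginator("describe_instances")

# Walk all running instances and flag those missing the required tag.
for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                print(f"{instance['InstanceId']} is missing the {REQUIRED_TAG} tag")
```

In a pipeline, a non-empty result would typically fail the job or raise a ticket rather than just print.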

Posted 4 weeks ago


0 years

35 - 45 Lacs

Bengaluru, Karnataka

Work from Office


Java (11/8 or higher). We are looking for an architect with 8 to 12 years of experience in observability tools (Prometheus, Grafana, OpenTelemetry, ELK, Jaeger, Zipkin, New Relic), Docker/Kubernetes and CI/CD pipelines.

Job Type: Full-time
Pay: ₹3,500,000.00 - ₹4,500,000.00 per year
Schedule: Day shift
Work Location: In person

Posted 1 month ago


5 - 8 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Overview
Bandgi Technologies is a SaaS product development company that provides niche technology skills and helps organizations innovate. We specialize in Industry 4.0/IIoT and provide solutions to clients in the US, Canada and Europe. We are innovation enablers with offices in India (Hyderabad) and the UK (Maidenhead).

We are looking for an experienced DevOps Engineer with a background in AWS to join a growing enterprise organization. You will work within a growing AWS cloud team looking to build on and maintain their cloud infrastructure. The cloud engineer will split their time between supporting the transition of code through pipelines from software development into a live state, and evolving and maintaining cloud infrastructure and project/service introduction activities.

Skill Sets - Must Have
- Solid experience (5+ years) in Terraform, shell scripting, VPC creation, DevSecOps, sst.dev and GitHub Actions is mandatory.
- Working knowledge of and experience with Linux operating systems, and experience building CI/CD pipelines using the following:
  - AWS: VPC, Security Groups, IAM, S3, RDS, Lambda, EC2 (Auto Scaling groups, Elastic Beanstalk), CloudFormation and AWS stacks.
  - Containers: Docker, Kubernetes, Helm, Terraform.
  - CI/CD pipelines: GitHub Actions (mandatory).
  - Databases: SQL & NoSQL (MySQL, Postgres, DynamoDB).
  - Observability best practices (Prometheus, Grafana, Jaeger, Elasticsearch).

Good to Have
- Learning attitude
- API gateways
- Microservices best practices (including design patterns)
- Authentication and authorization (OIDC/SAML/OAuth 2.0)

Your Responsibilities Will Include
- Operation and control of cloud infrastructure (Docker platform services, network services and data storage).
- Preparation of new or changed services.
- Operation of the change/release process.
- Application of cloud management tools to automate the provisioning, testing, deployment and monitoring of cloud components.
- Designing cloud services and capabilities using appropriate modelling techniques.

(ref: hirist.tech)

Posted 1 month ago


10 - 15 years

35 - 40 Lacs

Hyderabad

Work from Office


Grade Level (for internal use): 12

The Team: A team of highly talented, skilled developers and architects with solid development backgrounds who build frameworks, libraries, web apps, mobile apps, reference implementations, solutions, design blueprints, etc. that are leveraged across the Ratings applications. This is the Core Services team, experts in multiple technologies: microservices, Java, Python, OpenSearch, AI/ML, Apache Kafka, AWS, Kubernetes, and Kong API Gateway.

The Impact: The candidate will be part of the core services team responsible for building next-gen solutions used across the Ratings division. This team is responsible for building solution blueprints / reference implementations and for building scalable, resilient, low-latency and highly performant applications.

What's in it for you: S&P Global is an employee-friendly company with various benefits and a primary focus on skill development. The technology division has a wide variety of yearly goals that help employees train and certify in niche technologies such as:
- Generative AI
- Transformation of applications to CaaS
- CI/CD/CD gold transformation
- Data mining opportunities
- Leadership skills and business knowledge training

Essential Duties & Responsibilities:
- A very hands-on developer who can work independently to build new applications or frameworks/solutions that are leveraged across the Ratings division.
- Strong focus on developing robust solutions meeting high security standards.
- Build and maintain new applications/platforms for growing business needs.
- Quickly adapt to new technologies and solutions; be very flexible working with different solutions, products and legacy applications.
- Design and build future-state architecture to support new use cases; ensure scalable and reusable architecture as well as code quality.
- Integrate new use cases and work with global teams.
- Work with and support users to understand issues, develop root cause analysis, and work with the product team on enhancements and fixes.
- As part of a global team of engineers/developers, deliver continuous high reliability for our technology services, with a strong focus on permanent fixes to issues and heavy automation of manual tasks.
- Provide technical guidance to junior-level resources.
- Analyze and research alternative solutions and develop/implement recommendations accordingly.

Qualifications:
Required:
- Bachelor's or MS degree in Computer Science, Engineering or a related subject.
- Good written and oral communication skills.
- 10+ years of working experience in microservices, Java, Python, React, Apache Solr or OpenSearch, and AI/ML.
- API development experience.
- Work experience with asynchronous/synchronous messaging using Apache Kafka, etc.
- Ability to use CI/CD flows and distribution pipelines to deploy applications.
- Working experience with DevOps tools such as Git, Azure DevOps, Jenkins, Maven.
- Solid understanding of cloud technologies and Kubernetes (AWS preferred).

Nice to have:
- Experience building single-page applications with Angular or ReactJS in conjunction with Python scripting.
- Working experience with API gateways, Apache and Tomcat servers, Helm, Ansible, Terraform, CI/CD, Azure DevOps, Jenkins, Git, Splunk, Grafana, Prometheus, Jaeger (or other OTEL products), LDAP, OKTA, Confluent Platform, ActiveMQ.

Location: Hyderabad, India
Grade: 12 (Architect)
Hybrid model: working from the office twice a week is mandatory.
Shift time: 9 am to 6 pm IST.
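For illustration of the asynchronous Kafka messaging this role mentions (not part of the posting), here is a minimal Python sketch using the kafka-python package; the broker address and topic are hypothetical, and the posting does not prescribe any particular client library.

```python
import json
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"       # hypothetical broker
TOPIC = "ratings-events"        # hypothetical topic

# Produce a JSON-encoded event.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"entity_id": 42, "action": "rating_updated"})
producer.flush()  # block until the broker acknowledges the send

# Consume events from the beginning of the topic.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    consumer_timeout_ms=5000,   # stop iterating if nothing arrives for 5 seconds
)
for message in consumer:
    print(f"offset={message.offset} value={message.value}")
```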

Posted 2 months ago


5 - 7 years

0 - 0 Lacs

Chennai

Work from Office


Job Title: Kibana/ELK Specialist
Hiring Locations: Chennai / Mumbai / Gurgaon
Experience: 5+ years

Job Description:
We are seeking an experienced Sr. Observability Analyst with expertise in the ELK (Elasticsearch, Logstash, Kibana) stack. In this role, you will be responsible for maintaining and enhancing our observability and monitoring platform, ensuring optimal performance and visibility across our systems. You will work collaboratively with development, operations, and the rest of the observability team to implement robust logging, monitoring, and integration solutions that provide actionable insights into our platform's health and performance.

Key Responsibilities:
- Manage and optimize Elasticsearch log ingestion, indexing, and querying to support near real-time observability of operational logs, health metrics, distributed tracing, and automated processes.
- Develop and maintain Kibana dashboards, visualizations, and reporting to provide meaningful insights to various stakeholders.
- Implement automated integration based on log patterns and metrics, as well as synthetic probes, to proactively identify emergent issues and potentially trigger automated mitigations.
- Integrate with other sources of operational data including Prometheus, Azure Monitor, Log Analytics, and Application Insights.
- Implement machine learning (ML) anomaly and outlier detection jobs on automated baseline metrics across hundreds of client/service norms.
- Collaborate with cross-functional teams to define logging standards and implement observability best practices, with an emphasis on the profusion of OpenTelemetry standards throughout the observability landscape.

Must-Have Skills:
- Deep understanding of observability principles and architectures.
- Proven expertise in operational observability using Kibana and the ELK stack, as well as experience with other observability tools and platforms (Prometheus, Grafana, Jaeger, OpenTelemetry, etc.).
- Strong understanding of log management concepts, log aggregation techniques, and log analysis best practices.
- Proficiency in designing custom dashboards using Kibana's visualization capabilities (e.g., charts, graphs).
- Familiarity with Elasticsearch APIs, including connector, ingest, transform, and machine learning.
- Experience with cloud platforms (particularly Azure).
- Proficiency in scripting and automation (e.g., Python, Bash, TypeScript).
- Ability to collaborate effectively with technical teams as well as non-technical stakeholders to gather requirements and translate them into actionable solutions.

Good-to-Have Skills:
- Knowledge of additional observability tools and frameworks.
- Experience with data visualization and reporting tools.
- Familiarity with agile methodologies.

Required Skills: ELK Stack, Kibana, Azure Cloud, Scripting
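As an illustrative aside (not part of the posting), the kind of Elasticsearch query that sits behind a Kibana-style error breakdown can also be scripted. A minimal sketch assuming the official elasticsearch Python client (8.x keyword-argument style); the index pattern, field names and endpoint are hypothetical.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Count error-level log entries from the last 15 minutes, grouped by service.
response = es.search(
    index="logs-*",      # hypothetical index pattern
    size=0,              # aggregation only, no document hits needed
    query={"bool": {"filter": [
        {"term": {"log.level": "error"}},
        {"range": {"@timestamp": {"gte": "now-15m"}}},
    ]}},
    aggs={"by_service": {"terms": {"field": "service.name", "size": 10}}},
)

for bucket in response["aggregations"]["by_service"]["buckets"]:
    print(f'{bucket["key"]}: {bucket["doc_count"]} errors in the last 15 minutes')
```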

Posted 2 months ago


3 - 7 years

3 - 6 Lacs

Maharashtra

Work from Office


Description
- Proficient in Java 8 and Spring Boot
- Basic knowledge of Kafka and Hazelcast (distributed cache)
- Oracle / SQL
- Understanding of observability tools like Splunk, Jaeger, AppDynamics, Grafana
- Experience with cloud-native technologies: Kubernetes, Docker, Istio service mesh
- Experience level: 3-5 years in designing and building distributed containerized platforms

Additional Details
- Global Grade: C
- Level: LEVEL 3 - SENIOR (6-9 years experience)
- Named Job Posting? (if Yes, needs to be approved by SCSC): No
- Remote work possibility: No
- Global Role Family: To be defined
- Local Role Name: To be defined
- Local Skills: Java; microservices; Spring
- Languages Required: English
- Role Rarity: Niche

Posted 2 months ago


3 - 6 years

8 - 18 Lacs

Noida

Remote


Role: Platform Engineer (Golang)
Location: Remote
Work timing: IST hours

Job Description: Candidates need experience developing a platform, specifically a Kubernetes platform.

Minimum Qualifications:
- 2+ years of experience developing scalable applications using Golang
- Administration experience in container orchestration platforms, preferably Kubernetes, demonstrated by the Certified Kubernetes Administrator (CKA) certificate
- Experience in observability (monitoring, logging, and tracing) for cloud-based environments, using CNCF tools such as Grafana, Prometheus, Thanos and Jaeger, or SaaS tools such as Datadog
- Proficiency in operating public cloud services using infrastructure-as-code tools like Crossplane, Terraform or CloudFormation
- Expertise in maintaining self-service CI/CD platforms and supporting techniques like trunk-based development, GitOps-based deployments, or automated canary releases
- Experience with relational databases (Postgres/MySQL)
- Business-level communication in English
- Strong knowledge of fundamental AWS services, with certification at the AWS Associate level or above
- Passion for staying up to date with the latest industry topics in Site Reliability Engineering (SRE), the Cloud Native Computing Foundation (CNCF), DevOps, and the AWS Well-Architected Framework
- Knowledge of common application architecture patterns: distributed systems, microservices, asynchronous processing, event-driven systems, and others
- Experience building Kubernetes controllers or operators at scale with Golang in AWS, GCP, Azure, or on-prem
- Experience with Kubernetes core systems and APIs

Please share your resume at asingh21@fcsltd.com
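The posting above asks for Kubernetes controllers/operators written in Golang; purely to illustrate the underlying watch/reconcile pattern (not the posting's own stack), here is a minimal sketch using the kubernetes Python client. It watches Deployments in a hypothetical namespace and reports drift between desired and available replicas.

```python
from kubernetes import client, config, watch

NAMESPACE = "platform-demo"  # hypothetical namespace

config.load_kube_config()    # or load_incluster_config() when run inside the cluster
apps = client.AppsV1Api()

# Stream Deployment events and "reconcile" by reporting replica drift.
w = watch.Watch()
for event in w.stream(apps.list_namespaced_deployment, namespace=NAMESPACE):
    deployment = event["object"]
    desired = deployment.spec.replicas or 0
    available = deployment.status.available_replicas or 0
    if available < desired:
        # A real controller would take corrective action here and requeue the object.
        print(f"[{event['type']}] {deployment.metadata.name}: "
              f"{available}/{desired} replicas available, reconcile needed")
```

A production operator adds level-triggered requeueing, leader election and status updates; frameworks such as controller-runtime (Go) handle that machinery.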

Posted 3 months ago


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click


Download the Mobile App

Instantly access job listings, apply easily, and track applications.
