
18 New Relic Jobs

JobPe aggregates results for easy access, but you apply directly on each job portal.

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

We are looking for a highly skilled Performance Testing Engineer with expertise in Apache JMeter to join our QA team. As a Performance Engineer at Boomi, you will validate and recommend performance optimizations in our computing infrastructure and software. Working closely with the Product Development and Site Reliability Engineering teams, you will be involved in performance monitoring, tuning, and tooling. Your role will involve analyzing software architecture, identifying potential areas for performance improvement, and working on capacity planning and benchmarking for new microservices. You will also design, automate, and execute scalability and resiliency tests using tools such as BlazeMeter, NeoLoad, JMeter, and Chaos Monkey/Gremlin. Additionally, you will use the observability stack to improve diagnosability and address performance bottlenecks. Your expertise in performance engineering fundamentals, monitoring performance with various tools, understanding AWS services, and recommending optimal resource configurations will be crucial. You should also have experience analyzing heap dumps, thread dumps, and SQL slow query logs to identify performance bottlenecks. Flexibility to work in a remote, geographically distributed team environment is desired.

Key Responsibilities:
- Expert in performance engineering fundamentals such as arrival rate, workload models, responsiveness, computing resource utilization, scalability, and resiliency
- Monitoring performance using native Linux OS, APM, and infrastructure monitoring tools
- Understanding AWS services to analyze infrastructure bottlenecks
- Using tools like New Relic and Splunk for APM and infrastructure monitoring
- Analyzing heap dumps, thread dumps, and SQL slow query logs for performance optimization
- Recommending optimal resource configurations in cloud, virtual machine, container, and container orchestration technologies
- Flexibility to work in a remote and geographically distributed team environment

Desirable Skills:
- Experience writing data extraction and custom monitoring tools in languages such as Java, Python, R, or Bash
- Capacity planning and modeling using AI/ML or queueing models
- Performance tuning experience in Java or similar application code

Join us at Boomi as a Performance Engineer and contribute the best work of your career while making a profound social impact. At Boomi, we value a culture of caring, continuous learning, interesting work, balance, and flexibility. If you are passionate about solving challenging problems, working with cutting-edge technology, and making a real impact, explore a career with us at Boomi.
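As an illustration of the workload-model fundamentals mentioned above, Little's Law relates concurrency, arrival rate, and response time, and is commonly used to size a load test. This is a minimal sketch (the function name and numbers are illustrative, not from the posting):

```python
def required_virtual_users(arrival_rate_per_s: float,
                           avg_response_s: float,
                           think_time_s: float = 0.0) -> float:
    """Little's Law: concurrency N = X * (R + Z).

    arrival_rate_per_s: target throughput X (requests/second)
    avg_response_s:     mean response time R (seconds)
    think_time_s:       per-user think time Z (seconds)
    """
    return arrival_rate_per_s * (avg_response_s + think_time_s)

# Example: sustaining 200 req/s at 0.5 s response time with 2 s think time
# needs roughly 200 * (0.5 + 2.0) = 500 concurrent virtual users.
print(required_virtual_users(200, 0.5, 2.0))  # → 500.0
```

The same relation works in reverse when validating a test: if measured throughput times measured response time is far below the configured thread count, users are mostly idle or queued.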

Posted 2 days ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

We are looking for a highly experienced and motivated Backend Solution Architect proficient in Node.js, with exposure to Python and modern cloud-native architectures. As the Backend Solution Architect, you will design and implement robust, scalable, and secure backend systems. Your role will involve driving innovation with emerging technologies like AI/ML while leveraging deep expertise in AWS services, particularly EKS, ECS, and container orchestration.

Your key responsibilities will include designing end-to-end backend architecture using Node.js (mandatory) and optionally Python. You will work with microservices and serverless frameworks to ensure scalability, maintainability, and security. Additionally, you will architect and manage AWS-based cloud solutions, integrate AI and ML components, design containerized applications, set up CI/CD pipelines, and optimize database performance.

As the ideal candidate, you should have at least 8 years of backend development experience, with a minimum of 4 years as a Solution/Technical Architect. Expertise in Node.js with frameworks like Express.js and NestJS is essential, along with strong experience in AWS services, microservices, event-driven architectures, and serverless computing. Proficiency in Docker, Kubernetes, CI/CD pipelines, authentication/authorization mechanisms, and API development is also required.

Preferred qualifications include experience with AI/ML workflows; full-stack technologies like React, Next.js, or Angular; and hands-on AI/ML integration using platforms such as SageMaker or TensorFlow. An AWS Solution Architect Certification or equivalent is a strong plus, along with knowledge of CDNs and high-performance, event-driven systems.

At TechAhead, a global digital transformation company specializing in AI-first product design thinking, we are committed to driving digital innovation and delivering excellence. Join us to shape the future of digital innovation globally and make a significant impact with cutting-edge AI tools and strategies.

Posted 3 days ago

Apply

10.0 - 14.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Principal DevOps Engineer at our company, you will play a crucial role in leading our DevOps initiatives to ensure the reliability, scalability, and security of our high-value transactional systems. Your responsibilities will include providing on-call support, responding to critical incidents, and analyzing issues to drive problem resolution with minimal downtime. You will also own incident response and escalation management for DevOps-related incidents.

Collaboration with various stakeholders, including the Service Reliability & Transition, CloudOps & Observability, SecOps, and Engineering teams, will be a key aspect of your role. You will work effectively in a multi-vendor enterprise environment to ensure seamless integration and communication. Additionally, you will guide and mentor DevOps engineers, provide training support, and drive work division and prioritization to optimize team performance.

In terms of technical expertise, you should have experience with AWS services such as S3, Lambda, IAM, and CloudWatch, as well as infrastructure-as-code tools like Terraform and CloudFormation. Knowledge of CI/CD pipelines, Kubernetes, and observability tools is essential, along with experience in containerization and orchestration technologies such as Kubernetes, managed database services such as RDS, and security compliance and best practices in a regulated environment.

To be successful in this role, you should have at least 10 years of experience in DevOps, Cloud, or Site Reliability Engineering roles, with a strong understanding of AWS cloud services and infrastructure. Excellent problem-solving, communication, and stakeholder management skills are crucial, along with experience in multi-vendor enterprise environments and regulated industries.

At GlobalLogic, we prioritize a culture of caring and offer continuous learning and development opportunities. You will have the chance to work on interesting and meaningful projects while enjoying balance and flexibility in your work-life integration. Join us in our mission to engineer impact for and with clients around the world, shaping the future through innovative digital solutions.

Posted 3 days ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Kochi, Bengaluru, Thiruvananthapuram

Work from Office

NP: Immediate to 15 days.

Mandatory Skills: OMS, L3 Support, eCommerce domain, Java, Microservices, AWS, New Relic, Datadog, Grafana, Splunk (monitoring tools).

- Should have good end-to-end knowledge of the various commerce subsystems, including Storefront, core Commerce back end, post-purchase processing, OMS, store/warehouse management processes, supply chain, and logistics processes.
- Extensive backend development knowledge with core Java/J2EE and Microservice-based, event-driven architecture on a cloud-based architecture (preferably AWS).
- Should be cognizant of key integrations undertaken in eCommerce and associated downstream subsystems, including but not limited to search frameworks, payment gateways, product lifecycle management systems, loyalty platforms, recommendation engines, promotion frameworks, etc.
- Recommend someone having good knowledge of integrations with downstream eCommerce systems like OMS, store systems, ERP, etc.
- Good understanding of data structures and entity models.
- Should understand building, deploying, and maintaining server-based as well as serverless applications on the cloud, preferably AWS.
- Expertise in integrating synchronously and asynchronously with third-party web services.
- Good to have concrete knowledge of AWS Lambda functions, API Gateway, AWS CloudWatch, SQS, SNS, EventBridge, Kinesis, Secrets Manager, S3 storage, server architectural models, etc.
- Must have a working knowledge of production application support.
- Good knowledge of Agile methodology, CI/CD pipelines, code repositories, and branching strategies, preferably with GitHub or Bitbucket.
- Good knowledge of observability tools like New Relic, Datadog, Grafana, Splunk, etc.
- Should have a fairly good understanding of L3 support processes, roles, and responsibilities.
- Should be flexible to overlap with part of onsite (PST) hours to hand off / transition work for the day to the onsite counterpart for L3.

Posted 4 days ago

Apply

12.0 - 16.0 years

0 Lacs

Maharashtra

On-site

At Capgemini Engineering, the world leader in engineering services, a global team of engineers, scientists, and architects collaborates to empower the world's most innovative companies to reach their full potential. From cutting-edge technologies like autonomous cars to life-saving robots, our digital and software technology experts are known for their out-of-the-box thinking, providing unique R&D and engineering services across various industries. Embark on a career with us, where every day presents new opportunities to make a meaningful difference.

As part of our team, you will develop and implement techniques and analytics applications that convert raw data into valuable insights using data-oriented programming languages and visualization software. You will utilize data mining, data modeling, natural language processing, and machine learning to extract and analyze information from large structured and unstructured datasets, then visualize, interpret, and report data findings, potentially creating dynamic data reports.

We are currently seeking a highly experienced AI/ML Solution Architect to lead the design and implementation of scalable, enterprise-grade AI and machine learning solutions. The ideal candidate should possess a solid background in data science, cloud computing, and AI/ML frameworks, and be able to translate business requirements into technical solutions effectively.

**Primary Responsibilities:**
- Design and architect end-to-end AI/ML solutions customized to meet specific business needs.
- Lead the development and deployment of machine learning, deep learning, and generative AI models.
- Collaborate with cross-functional teams, including data engineers, software developers, and business stakeholders.
- Ensure seamless integration of AI models into existing systems and workflows.
- Conduct architecture reviews, performance tuning, and scalability assessments.
- Stay abreast of the latest trends, tools, and technologies in the field of AI/ML.

**Primary Skills (Must-Have):**
- Strong expertise in AI/ML algorithms, encompassing deep learning, NLP, and computer vision.
- Proficiency in Python and ML libraries such as TensorFlow, PyTorch, Keras, and scikit-learn.
- Experience with cloud platforms, particularly Azure (mandatory); familiarity with AWS/GCP is desirable.
- Hands-on experience with LLMs (e.g., Azure OpenAI, Mistral) and GenAI deployment.
- API development using FastAPI and Flask; integration with third-party tools (e.g., SharePoint, SNOW).
- Solid understanding of data security (at rest and in motion) and data flow diagrams.

**Secondary Skills (Good-to-Have):**
- UI development using Angular, Streamlit, HTML, CSS, and JavaScript.
- Experience with containerization tools like Docker and Azure Container Registry/Instances.
- Familiarity with monitoring tools such as Splunk and New Relic.
- Knowledge of agentic AI, RAG (Retrieval-Augmented Generation), and MFA (Multi-Factor Authentication).
- Cost estimation and infrastructure planning for AI deployments (on-prem/cloud).

**Qualifications:**
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 12-14 years of experience in AI/ML solution development and architecture.
- Excellent communication and stakeholder management skills.

Join Capgemini, a global partner in business and technology transformation, dedicated to accelerating organizations' transition to a digital and sustainable world while delivering tangible impact for enterprises and society. With a diverse team of over 340,000 members across more than 50 countries, Capgemini is a trusted leader with a heritage spanning more than 55 years. Clients rely on Capgemini to unlock technology's value and address their entire business needs, offering end-to-end services and solutions with expertise in AI, generative AI, cloud, and data, supported by industry knowledge and a strong partner ecosystem.
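For readers unfamiliar with the RAG pattern listed above: retrieval amounts to ranking stored documents by similarity to a query embedding and feeding the top hits to an LLM as context. This is a minimal, self-contained sketch with toy hand-written vectors standing in for a real embedding model and vector store (all names and numbers are illustrative, not from the posting):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    """Return the k document ids most similar to the query vector.

    corpus maps doc id -> (embedding, text). In a real RAG system the
    embeddings would come from an embedding model, and the top-ranked
    texts would be stuffed into the LLM prompt as grounding context.
    """
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, corpus[d][0]), reverse=True)
    return ranked[:k]

# Toy 3-d "embeddings" for three knowledge-base snippets.
docs = {
    "refunds":  ([0.9, 0.1, 0.0], "Refunds are issued within 5 days."),
    "shipping": ([0.1, 0.9, 0.0], "Orders ship within 24 hours."),
    "privacy":  ([0.0, 0.1, 0.9], "We never sell personal data."),
}
print(retrieve([0.8, 0.2, 0.1], docs, k=1))  # → ['refunds']
```

Production systems swap the dictionary for an approximate-nearest-neighbor index, but the ranking idea is the same.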

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

We are looking for a highly skilled Performance Testing Engineer with expertise in Apache JMeter to join our QA team. The ideal candidate will be responsible for designing and executing performance tests, as well as gathering performance requirements from stakeholders, to ensure systems meet expected load, responsiveness, and scalability criteria.

As a Performance Engineer at Boomi, you will validate and recommend performance optimizations in our computing infrastructure and software. You will collaborate with the Product Development and Site Reliability Engineering teams on performance monitoring, tuning, and tooling. Your responsibilities will include analyzing software architecture, working on capacity planning, identifying KPIs, and designing scalability and resiliency tests using tools like JMeter, BlazeMeter, and NeoLoad.

Essential requirements for this role include expertise in performance engineering fundamentals, monitoring performance using native Linux OS and APM tools, understanding AWS services for infrastructure analysis, and experience with tools like New Relic and Splunk. You should also be skilled in analyzing heap dumps, thread dumps, and SQL slow query logs, and in recommending optimal resource configurations in cloud, virtual machine, and container technologies. Desirable requirements include experience writing custom monitoring tools in Java, Python, or similar languages, capacity planning using AI/ML, and performance tuning in Java or similar application code.

At Boomi, we offer a culture of caring, continuous learning and development opportunities, interesting and meaningful work, balance and flexibility, and a high-trust environment. If you are passionate about solving challenging problems, working with cutting-edge technology, and making a real impact, we encourage you to explore a career with Boomi. Join us in Bangalore/Hyderabad, India, and be part of our Performance, Scalability, and Resiliency (PSR) Engineering team to do the best work of your career and make a profound social impact.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

As a Ruby on Rails Developer, you will be required to have hands-on knowledge of Ruby on Rails and familiarity with Git's feature-branch workflow. Additionally, you should have experience with AWS cloud CI/CD and server management, as well as familiarity with performance monitoring tools like New Relic. It is essential to use Agile process methodologies and Scrum practices to enhance efficiency.

Your responsibilities will include creating high-quality web applications using Ruby on Rails; designing and developing databases and APIs; optimizing code for high performance, scalability, and security; and troubleshooting, debugging, and fixing existing code. Moreover, you will need to implement best coding practices and collaborate with other developers to ensure that product features meet customer requirements.

To qualify for this position, you should hold a B.E/B.Tech in Computer Science, IT, MCA, or a related domain, with a minimum of 3 years of experience in Ruby on Rails development. You must have expertise in HTML, CSS, JavaScript, and Ruby technologies. Proficiency in working with databases such as MySQL, PostgreSQL, and MongoDB is required, along with a strong understanding of object-oriented design and development principles and familiarity with version control systems like Git. Excellent problem-solving and analytical skills, and the ability to work effectively both independently and as part of a team, are essential.

This position is located in Mahape, Navi Mumbai, in the Information Technology & Computer Software industry. The experience required for this role is 2 to 5 years, and the salary will be commensurate with skills. This is a full-time employment opportunity.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Kochi, Kerala

On-site

As an OMS Developer providing L3 Support, you will be expected to have a minimum of 8 years of experience (around 9 preferred), with at least 5 years in the eCommerce and/or OMS domain. You should possess good end-to-end knowledge of the various commerce subsystems, including Storefront, core Commerce back end, post-purchase processing, OMS, store/warehouse management processes, supply chain, and logistics processes. Good proficiency in at least two areas apart from eCommerce is mandatory.

Your role will involve extensive backend development using core Java/J2EE and Microservice-based, event-driven architecture, with a preference for cloud-based architecture, especially AWS. A good understanding of integrations with downstream eCommerce systems like OMS, store systems, and ERP is recommended, and any additional knowledge of the OMS and store domains will be seen as an advantage. You should have experience developing and securely exposing/consuming RESTful web services and integrating headless applications in a service-oriented architecture. Moreover, you must be able to comprehend the system end to end, maintain applications, and troubleshoot issues effectively. A solid understanding of data structures and entity models is essential.

Your expertise should extend to building, deploying, and maintaining server-based as well as serverless applications on the cloud, preferably AWS, and you are expected to be skilled in integrating synchronously and asynchronously with third-party web services. Familiarity with AWS services such as Lambda functions, API Gateway, CloudWatch, SQS, SNS, EventBridge, Kinesis, Secrets Manager, S3 storage, and different server architectural models is advantageous, and knowledge of major eCommerce/OMS platforms will be an added advantage. A working knowledge of production application support is a must.

Additionally, you should have good knowledge of Agile methodology, CI/CD pipelines, code repositories, and branching strategies, preferably using GitHub or Bitbucket. Proficiency in observability tools like New Relic, Datadog, Grafana, Splunk, etc., with the ability to configure new reports, set up proactive alerts, and monitor KPIs, is expected, along with a solid understanding of L3 support processes, roles, and responsibilities. You will collaborate closely with counterparts in the L1/L2 teams to monitor, analyze, and expedite issue resolution, reduce recurring issues, automate SOPs, and find proactive solutions. Flexibility to overlap with part of onsite (PST) hours for handing off/transitioning work to the onsite counterpart for L3 support is required in this position.
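As a flavor of the proactive alerting mentioned above: whatever the tool (New Relic, Datadog, Grafana, Splunk), an alert condition usually reduces to comparing a rolling KPI against a threshold. Here is a minimal, tool-agnostic sketch, where the KPI (error rate), window, and threshold are illustrative choices, not details from the posting:

```python
from collections import deque

class ErrorRateAlert:
    """Fire when the error rate over the last `window` requests exceeds `threshold`."""

    def __init__(self, window=100, threshold=0.05):
        self.window = deque(maxlen=window)  # rolling record of outcomes
        self.threshold = threshold

    def record(self, ok: bool) -> bool:
        """Record one request outcome; return True if the alert should fire."""
        self.window.append(0 if ok else 1)
        error_rate = sum(self.window) / len(self.window)
        return error_rate > self.threshold

alert = ErrorRateAlert(window=10, threshold=0.2)
results = [alert.record(ok) for ok in [True] * 7 + [False] * 3]
print(results[-1])  # 3 errors in the last 10 requests → 0.3 > 0.2 → True
```

Real monitoring platforms layer duration conditions ("above threshold for 5 minutes") and notification routing on top of this same comparison.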

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad

Hybrid

We're looking for a talented and results-oriented Cloud Solutions Architect to work as a key member of Sureify's engineering team. You'll help build and evolve our next-generation cloud-based compute platform for digitally delivered life insurance. You'll consider many dimensions, such as strategic goals, growth models, opportunity cost, talent, and reliability, and you'll collaborate closely with the product development team on platform feature architecture so that the architecture aligns with operational needs and opportunities. With the number of customers growing and growing, it's time for us to mature the fabric our software runs on. This is your opportunity to make a large impact at a high-growth enterprise software company.

Key Responsibilities:
- Collaborate with key stakeholders across our product, delivery, data, and support teams to design scalable and secure application architectures on AWS, using services like EC2, ECS, EKS, Lambda, VPC, RDS, and ElastiCache provisioned via Terraform
- Design and implement CI/CD pipelines using GitHub, Jenkins, Spinnaker, and Helm to automate application deployment and updates, with a key focus on container management, orchestration, scaling, optimizing performance and resource utilization, and deployment strategies
- Design and implement security best practices for AWS applications, including Identity and Access Management (IAM), encryption, container security, and secure coding practices
- Design and implement application observability using CloudWatch and New Relic, with key considerations and focus on monitoring, logging, and alerting to provide insights into application performance and health
- Design and implement key integrations of application components and external systems, ensuring smooth and efficient data flow
- Diagnose and resolve issues related to application performance, availability, and reliability
- Create, maintain, and prioritise a quarter-over-quarter backlog by identifying key areas of improvement such as cost optimization, process improvement, security enhancements, etc.
- Create and maintain comprehensive documentation outlining the infrastructure design, integrations, deployment processes, and configuration
- Work closely with the DevOps team as a guide, mentor, and enabler to ensure that the practices you design and implement are followed and imbibed by the team

Required Skills:
- Proficiency in AWS services such as EC2, ECS, EKS, S3, RDS, VPC, Lambda, SES, SQS, ElastiCache, Redshift, and EFS
- Strong programming skills in languages such as Groovy, Python, and Bash shell scripting
- Experience with CI/CD tools and practices, including Jenkins, Spinnaker, and ArgoCD
- Familiarity with IaC tools like Terraform or CloudFormation
- Understanding of AWS security best practices, including IAM and KMS
- Familiarity with Agile development practices and methodologies
- Strong analytical skills with the ability to troubleshoot and resolve complex issues
- Proficiency in using observability, monitoring, and logging tools like AWS CloudWatch, New Relic, and Prometheus
- Knowledge of container orchestration tools and concepts, including Kubernetes and Docker
- Strong teamwork and communication skills, with the ability to work effectively with cross-functional teams

Nice to have:
- AWS Certified Solutions Architect - Associate or Professional

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

The role displayed in this job posting revolves around supporting and addressing stakeholder needs within tight timelines, particularly focusing on modernizing and migrating to OCI. The Operations team is crucial in reducing engineering overhead by managing incoming issues, allowing engineers to concentrate on roadmap priorities. Handling over 2,000 tickets quarterly, the team ensures a seamless experience for Dev Tools consumers and prevents engineering teams from being overwhelmed by ticket overload. With team members shifting to support key CP initiatives, and with recent departures, there is increased pressure on the team. To uphold service quality and achieve FY2026 goals, the addition of a full-time, permanent team member is recommended. This hire is vital to providing timely support and ensuring the team can efficiently meet stakeholder demands.

Key responsibilities for this position include designing, building, and maintaining scalable infrastructure; developing automation tools for operational efficiency; monitoring system performance; collaborating with other teams on process improvement; participating in on-call rotations; and ensuring system security and compliance. The ideal candidate should hold a Bachelor's degree in Computer Science, Engineering, or a related field; possess proficiency in various development tools; have hands-on experience with cloud platforms and container orchestration tools; understand networking and Linux administration; be familiar with CI/CD pipelines and monitoring tools; and exhibit strong problem-solving skills.

As a world leader in cloud solutions, Oracle is committed to leveraging technology to address current challenges and to fostering innovation through an inclusive and diverse workforce. The company offers competitive benefits and flexible medical and retirement options, and encourages community involvement through volunteer programs. If you require accessibility assistance or accommodation for a disability during the employment process, please reach out via email at accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Delhi

On-site

As a DevOps Engineer specializing in App Infrastructure & Scaling, you will be a valuable addition to our technology team. Your primary responsibility will be to design, implement, and maintain scalable and secure cloud infrastructure that supports our mobile and web applications. Your role is crucial in ensuring system reliability, performance, and cost efficiency across different environments.

You will work with Google Cloud Platform (GCP) to design, configure, and manage cloud infrastructure. Your tasks will include implementing horizontal scaling, load balancers, auto-scaling groups, and performance monitoring systems. Additionally, you will develop and maintain CI/CD pipelines using tools like GitHub Actions, Jenkins, or GitLab CI, and set up real-time monitoring, crash alerting, logging systems, and health dashboards using industry-leading tools. Managing and optimizing Redis, job queues, caching layers, and backend request loads will also be part of your responsibilities, and you will automate data backups, enforce secure access protocols, and implement disaster recovery systems.

Collaborating with the Flutter and PHP (Laravel) teams to address performance bottlenecks and reduce system load is crucial. You will conduct infrastructure security audits and recommend best practices for preventing downtime and security breaches. Monitoring and optimizing cloud usage and billing to ensure a cost-effective and scalable architecture will also fall under your purview.

You should have at least 3-5 years of hands-on experience in a DevOps or Cloud Infrastructure role, preferably with GCP. Proficiency with Docker, Kubernetes, NGINX, and load-balancing strategies is essential. Experience with CI/CD pipelines and tools like GitHub Actions, Jenkins, or GitLab CI is required, as is familiarity with monitoring tools such as Grafana, Prometheus, New Relic, or Datadog. A deep understanding of API architecture, including rate limiting, error handling, and fallback mechanisms, is necessary. Experience working with PHP/Laravel backends, Firebase, and modern mobile app infrastructure is beneficial, and working knowledge of Redis, Socket.IO, and message queuing systems like RabbitMQ or Kafka will be advantageous.

Preferred qualifications include a Google Cloud Professional certification or equivalent, experience optimizing systems for high-concurrency, low-latency environments, and familiarity with Infrastructure as Code (IaC) tools like Terraform or Ansible.
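As an illustration of the rate-limiting pattern named in the API-architecture requirement above, one common approach is a token bucket: each request spends a token, tokens refill at a fixed rate, and requests are rejected when the bucket is empty. A minimal in-process sketch (class name and parameters are illustrative; production setups usually enforce this at the gateway or in Redis):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling `rate` tokens per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller would respond with HTTP 429 or fall back

bucket = TokenBucket(rate=1, capacity=3)
print([bucket.allow() for _ in range(5)])  # burst of 3 allowed, then rejected
```

The `rate`/`capacity` split is what distinguishes this from a fixed window: capacity bounds the burst, rate bounds the sustained throughput.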

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Delhi

On-site

As a DevOps Engineer specializing in App Infrastructure & Scaling, you will be a crucial member of our technology team. Your primary responsibility will involve designing, implementing, and maintaining scalable and secure cloud infrastructure that supports our mobile and web applications. Your contributions will be essential in ensuring system reliability, performance optimization, and cost efficiency across different environments.

Your key responsibilities will include designing and managing cloud infrastructure on Google Cloud Platform (GCP) and implementing horizontal scaling, load balancers, auto-scaling groups, and performance monitoring systems. You will also develop and manage CI/CD pipelines using tools like GitHub Actions, Jenkins, or GitLab CI. Setting up real-time monitoring, crash alerting, logging systems, and health dashboards using industry-leading tools will be part of your daily tasks. You will collaborate closely with the Flutter and PHP (Laravel) teams to address performance bottlenecks and reduce system load. Additionally, you will conduct infrastructure security audits, recommend best practices to prevent downtime and security breaches, and monitor and optimize cloud usage and billing for a cost-effective and scalable architecture.

To be successful in this role, you should have at least 3-5 years of hands-on experience in a DevOps or Cloud Infrastructure role, preferably with GCP. Strong proficiency in Docker, Kubernetes, NGINX, and load-balancing strategies is essential. Experience with CI/CD pipelines and tools like GitHub Actions, Jenkins, or GitLab CI, as well as familiarity with monitoring tools like Grafana, Prometheus, New Relic, or Datadog, is required, along with a deep understanding of API architecture, PHP/Laravel backends, Firebase, and modern mobile app infrastructure.

Preferred qualifications include Google Cloud Professional certification or equivalent, experience optimizing systems for high-concurrency, low-latency environments, and familiarity with Infrastructure as Code (IaC) tools such as Terraform or Ansible. In summary, as a DevOps Engineer specializing in App Infrastructure & Scaling, you will play a critical role in ensuring the scalability, reliability, and security of the cloud infrastructure that powers our applications, contributing to the overall performance and cost efficiency of our systems.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a seasoned Artificial Intelligence and Machine Learning professional with over 6 years of experience, you will leverage a deep understanding of AI and machine learning algorithms, including large language models (LLMs) such as Azure OpenAI and Mistral. Proficiency in deploying LLMs on-premises or on a local server will be crucial in this role.

A strong grasp of cloud platforms such as Azure is mandatory, both for deploying AI solutions and for utilizing the platform's existing AI capabilities. Familiarity with Azure Container Registry, Azure Container Instances, and App Services is also essential.

Your programming skills should include strong coding abilities in languages like Python, experience creating APIs with FastAPI and Flask, and experience integrating APIs from third-party tools such as ServiceNow (SNOW) and SharePoint. Proficiency in UI development using Angular, Streamlit, HTML, CSS, and JavaScript is also required. Expertise in TensorFlow, PyTorch, and other relevant AI/ML libraries for fine-tuning models is a must-have, and familiarity with monitoring tools like SolarWinds, Splunk, and New Relic will be advantageous.

On the data engineering side, you will create data flow diagrams and ensure data security at rest and in motion. You will also collaborate with the team to analyze business data, implement solutions, and ensure data security and compliance.

In terms of solution architecture, you should be able to design and implement scalable AI solutions tailored to business needs, covering architecture, design development, and walkthroughs with customers. You will also design the hardware and software requirements needed to deploy AI solutions on-premises or in the cloud. A good understanding of multi-factor authentication is essential, and the ability to estimate costs for deploying AI solutions on cloud platforms will be beneficial.

At Wipro, we are reinventing the future of technology. Join us in building a modern Wipro where constant evolution is encouraged. We are looking for individuals who are inspired by reinvention and eager to realize their ambitions in a purpose-driven environment. If you are ready to design your own reinvention and be part of a business that values diversity and inclusivity, Wipro is the place for you. Applications from individuals with disabilities are encouraged and welcome.
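Estimating costs for cloud-hosted LLM deployments usually starts from token volume. A minimal sketch, assuming hypothetical per-1K-token rates (the model names and prices below are illustrative only; real pricing varies by model, region, and provider):

```python
# Hypothetical per-1K-token USD rates; real cloud pricing varies by model and region.
RATES = {
    "hosted-gpt": {"input": 0.005, "output": 0.015},
    "hosted-mistral": {"input": 0.001, "output": 0.003},
}

def estimate_monthly_cost(model: str, requests_per_day: int,
                          avg_input_tokens: int, avg_output_tokens: int,
                          days: int = 30) -> float:
    """Rough monthly API spend in USD for a given LLM workload."""
    rate = RATES[model]
    per_request = (avg_input_tokens / 1000) * rate["input"] + \
                  (avg_output_tokens / 1000) * rate["output"]
    return round(per_request * requests_per_day * days, 2)
```

For example, 1,000 requests/day averaging 500 input and 200 output tokens against the hypothetical "hosted-gpt" rate works out to about $165/month; comparing that figure with the amortized cost of on-prem GPU serving is the core of the build-vs-buy estimate.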

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a highly experienced and motivated Backend Solution Architect, you will lead the design and implementation of robust, scalable, and secure backend systems. Your expertise in Node.js, with exposure to Python, will be crucial in architecting end-to-end backend solutions using microservices and serverless frameworks. You will play a key role in ensuring scalability, maintainability, and security, while driving innovation through the integration of emerging technologies like AI/ML.

Your primary responsibilities will include designing and optimizing backend architecture, managing AWS-based cloud solutions, integrating AI/ML components, containerizing applications, setting up CI/CD pipelines, designing and optimizing databases, implementing security best practices, developing APIs, monitoring system performance, and providing technical leadership in collaboration with cross-functional teams.

To succeed in this role, you should have at least 8 years of backend development experience, including a minimum of 4 years as a Solution/Technical Architect. Expertise in Node.js, AWS services, microservices, event-driven architectures, Docker, Kubernetes, CI/CD pipelines, authentication/authorization mechanisms, and API development is critical. Hands-on experience with AI/ML workflows, React, Next.js, and Angular, as well as an AWS Solution Architect Certification, will be advantageous.

At TechAhead, a global digital transformation company, you will work on cutting-edge AI-first product design thinking and bespoke development solutions. By joining our team, you will help shape the future of digital innovation worldwide and drive impactful results with advanced AI tools and strategies.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

The client's product enables the utilization of customer data through cutting-edge technologies to:
- Enhance understanding of customer behavior to a previously unattainable level.
- Determine the exact impact of advertising and promotions.
- Create real-time profiles of customer segments.
- Uncover the relationship between team member performance and customer loyalty.

You should have:
- Over 5 years of commercial experience as a DevOps professional.
- At least 2 years of practical experience in cloud infrastructure provisioning, deployment, and monitoring on Azure.
- Strong familiarity with DevOps best practices and methodologies.
- A good understanding of Computer Science and computing theory, including network interactions, protocols, deployment patterns, security patterns, software architecture (e.g., microservices, event-driven design), orchestration, and containerization (Docker, Kubernetes).
- Hands-on experience with Infrastructure as Code (IaC), especially ARM templates and Terraform.
- Knowledge of logging and monitoring technologies such as Zabbix, New Relic, PagerDuty, Prometheus, and the ELK stack.
- Experience with CI/CD processes using Azure DevOps, Docker, Kubernetes (AKS), and product services written in .NET.
- Proficiency in delivery methodologies such as Scrum, Agile, and Kanban.
- Upper-intermediate English language skills.

Desirable qualifications include certifications in Azure and Kubernetes, along with practical experience in data engineering, the Big Data stack, high-load systems, and microservices in a production environment.

As part of the DevOps team, your responsibilities will include:
- Collaborating on the creation of Azure infrastructure and setting up Kubernetes (AKS) clusters.
- Managing CI/CD pipelines and automation processes.
- Overseeing release management and infrastructure maintenance.
- Participating in decision-making regarding infrastructure design.
- Creating and managing dashboards for environments and builds.
- Working with architects and developers to ensure security controls do not adversely affect production.
- Communicating effectively with stakeholders, including PMs, POs, software developers, architects, and QA.

GlobalLogic offers a stimulating work environment with diverse projects in industries such as high-tech, communication, media, healthcare, retail, and telecom. You will collaborate with a talented team and enjoy work-life balance, professional development programs, competitive benefits, and fun perks.

About GlobalLogic: GlobalLogic is a digital engineering leader that helps brands worldwide design and develop innovative products and digital experiences. Headquartered in Silicon Valley, GlobalLogic operates globally, assisting clients across various industries to envision and realize digital transformations.

Posted 3 weeks ago

Apply

4.0 - 9.0 years

7 - 17 Lacs

Hyderabad

Work from Office

About this role: Wells Fargo is seeking a Senior Infrastructure Engineer. The Senior Infrastructure Engineer will design, develop, and implement near real-time data streams, ingesting and enriching business and operational data from various sources across the organization as part of a new enterprise observability platform.

In this role, you will:
- Lead or participate in high-level technical concepts spanning technology and business
- Develop specifications for complex infrastructure systems; design and test solutions
- Contribute to the testing of business, application, and technical infrastructure requirements
- Drive solutions to reduce recovery time
- Review and analyze solutions for cloud security, secrets management, and key rotation
- Design, code, test, debug, and document programs using Agile development practices
- Design complex system upgrades
- Resolve troublesome trends as they develop
- Develop a long-range plan designed to resolve problems and prevent them from recurring
- Direct the daily risk and control flow of operations, focusing on policies, procedures, and work standards to ensure success
- Collaborate and consult with peers, colleagues, and managers to resolve issues and achieve goals

Required Qualifications:
- 4+ years of Technology Infrastructure Engineering and Solutions experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education

Desired Qualifications:
- Platform support and implementation experience
- 3+ years of experience with APM monitoring tools such as AppDynamics or New Relic
- 3+ years of experience with Elasticsearch development, integration, or support
- 2+ years of Ansible experience
- 2+ years of experience with scripting (e.g., JavaScript, Groovy, Python, Bash) and working from the command line in a Linux environment
- 2+ years of experience using an automation/orchestration solution such as Ansible, Chef, Puppet, or Salt
- 3+ years of experience in patching support and application change management
- Knowledge and understanding of cloud computing, PaaS design principles, microservices, and containers
- Outstanding problem-solving and decision-making skills
- Strong analytical skills with high attention to detail and accuracy
- Ability to develop partnerships and collaborate with other business and functional areas
- Experience designing and implementing multi-tenancy in Elasticsearch for data isolation, with per-tenant visualization in Kibana across the shared service
- Experience designing, implementing, deploying, and managing large Elasticsearch clusters and ELK solutions
- Advanced experience with Elasticsearch, including designing Elastic infrastructure environments, performance tuning, managing clusters, and developing solutions for searching and analyzing indexed data
- Advanced experience with Logstash, including collecting, parsing, and transforming attribute and log data from disparate sources, understanding the Elastic Common Schema, and using filter plugins
- Excellent verbal, written, and interpersonal communication skills

Job Expectations:
- Demonstrated experience with the Elasticsearch, Logstash, and Kibana (ELK) stack: installation and configuration of clusters; indexing data; queries, aggregations, and mappings
- Act as the subject matter expert for the ELK implementation across the shared service platform
- Ability to integrate with other operational data platforms and tools, including Kafka and Splunk
- Experience with observability and monitoring products such as AppDynamics, Elasticsearch, Datadog, New Relic, Prometheus, and Grafana
- Proven track record of partnering with internal customers, architects, engineers, and other technical partners to gather requirements, understand existing systems, and develop value-added products/services
- Experience automating with Ansible
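The Elasticsearch multi-tenancy requirement above is commonly met with an index-per-tenant naming scheme plus filtered aliases, so each tenant's Kibana views only see that tenant's documents. A minimal sketch of the naming scheme and alias request body, shown as plain Python dicts rather than live client calls (index, alias, and field names are illustrative):

```python
def tenant_index(tenant: str, date: str) -> str:
    """Index-per-tenant naming: physical isolation plus simple per-tenant retention."""
    return f"logs-{tenant}-{date}"

def tenant_alias_body(tenant: str) -> dict:
    """Body for the Elasticsearch _aliases API: a filtered alias restricting
    reads to a single tenant's documents."""
    return {
        "actions": [
            {
                "add": {
                    "index": f"logs-{tenant}-*",
                    "alias": f"{tenant}-logs",
                    "filter": {"term": {"tenant.keyword": tenant}},
                }
            }
        ]
    }
```

Kibana data views are then pointed at the alias (e.g. `acme-logs`) rather than the raw indices, so a dashboard built for one tenant cannot query another tenant's data.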

Posted 1 month ago

Apply

9.0 - 13.0 years

7 - 11 Lacs

Hyderabad

Hybrid

Regular work hours. Number of rounds: 1 internal technical round and 2 client rounds.

About you: The GCP CloudOps Engineer is accountable for continuous, repeatable, secure, and automated deployment, integration, and test solutions utilizing Infrastructure as Code (IaC) and DevSecOps techniques.

- 8+ years of hands-on experience in infrastructure design, implementation, and delivery
- 3+ years of hands-on experience with monitoring tools (Datadog, New Relic, or Splunk)
- 4+ years of hands-on experience with container orchestration services, including Docker, Kubernetes, and GKE
- 5+ years of hands-on experience with cloud technologies; GCP is preferred
- Experience working across time zones and with different cultures
- Maintain an outstanding level of documentation, including principles, standards, practices, and project plans
- Hands-on experience with IaC patterns and practices and related automation tools such as Terraform, Jenkins, Spinnaker, and CircleCI; experience building automation and tools using Python, Go, Java, or Ruby
- Deep knowledge of CI/CD processes, tools, and platforms such as GitHub workflows and Azure DevOps
- Proactive collaborator who can work on cross-team initiatives, with excellent written and verbal communication skills
- Experience automating long-term solutions to problems rather than applying quick fixes
- Extensive knowledge of improving platform observability and implementing optimizations to monitoring and alerting tools
- Experience measuring and modeling cost and performance metrics of cloud services and establishing a vision backed by data
- Experience debugging applications and a deep understanding of deployment architectures

Responsibilities:
- Develop tools and a CI/CD framework to make it easier for teams to build, configure, and deploy applications
- Contribute to Cloud strategy discussions and decisions on overall Cloud design and the best approach for implementing Cloud solutions
- Follow and develop standards and procedures for all aspects of a digital platform in the Cloud
- Identify system enhancements and automation opportunities for installing and maintaining digital platforms
- Adhere to best practices for Incident, Problem, and Change management
- Implement automated procedures to handle issues and alerts proactively

Pluses:
- Databricks (experience building a data warehouse using Databricks is a huge plus)
- Experience with multi-cloud environments (GCP, AWS, Azure); GCP is the preferred cloud provider
- Experience with GitHub and GitHub Actions

Posted 2 months ago

Apply

3.0 - 8.0 years

16 - 20 Lacs

Mumbai

Work from Office

What will you do at Fynd?
- Run the production environment by monitoring availability and taking a holistic view of system health.
- Improve the reliability, quality, and time-to-market of our suite of software solutions.
- Be the first person to report an incident.
- Debug production issues across services and all levels of the stack.
- Envision the overall solution for defined functional and non-functional requirements, and define the technologies, patterns, and frameworks to realize it.
- Build automated tools in Python, Java, GoLang, Ruby, etc.
- Help Platform and Engineering teams gain visibility into our infrastructure.
- Lead the design of software components and systems to ensure the availability, scalability, latency, and efficiency of our services.
- Participate actively in detecting, remediating, and reporting on production incidents, ensuring SLAs are met and driving Problem Management for permanent remediation.
- Participate in an on-call rotation to ensure coverage for planned and unplanned events.
- Perform other tasks such as load testing and generating system health reports.
- Periodically check all dashboards for readiness.
- Engage with other Engineering organizations to implement processes, identify improvements, and drive consistent results.
- Work with your SRE and Engineering counterparts to drive game days, training, and other response-readiness efforts.
- Participate in 24x7 support coverage as needed, troubleshooting and solving complex issues with thorough root cause analysis on customer and SRE production environments.
- Collaborate with Service Engineering organizations to build and automate tooling, implement best practices to observe and manage services in production, and consistently achieve our market-leading SLA.
- Improve the scalability and reliability of our systems in production.
- Evaluate, design, and implement new system architectures.

Some specific requirements:
- B.Tech. in Engineering, Computer Science, a technical degree, or equivalent work experience.
- At least 3 years of experience managing production infrastructure.
- Experience leading or managing a team is a huge plus.
- Experience with cloud platforms like AWS and GCP.
- Experience developing and operating large-scale distributed systems with Kubernetes, Docker, and serverless (Lambdas).
- Experience running real-time, low-latency, highly available applications (Kafka, gRPC, RTP).
- Comfortable with Python, Go, or any relevant programming language.
- Experience with monitoring and alerting using technologies like New Relic, Zabbix, Prometheus, Grafana, CloudWatch, Kafka, and PagerDuty.
- Experience with one or more orchestration and deployment tools, e.g., CloudFormation, Terraform, Ansible, Packer, or Chef.
- Experience with configuration management systems such as Ansible, Chef, or Puppet.
- Knowledge of load testing methodologies and tools like Gatling and Apache JMeter.
- Ability to work your way around a Unix shell.
- Experience running hybrid clouds and on-prem infrastructure on Red Hat Enterprise Linux / CentOS.
- A focus on delivering high-quality code through strong testing practices.
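The SLA-tracking responsibilities described above come down to availability and error-budget arithmetic. A minimal sketch, assuming availability is measured from periodic health checks (the function names are illustrative, not a specific tool's API):

```python
def availability_pct(total_checks: int, failed_checks: int) -> float:
    """Observed availability as a percentage of successful health checks."""
    return 100.0 * (total_checks - failed_checks) / total_checks

def error_budget_remaining(slo_pct: float, total_checks: int,
                           failed_checks: int) -> float:
    """Fraction of the error budget left: 1.0 means untouched,
    below 0.0 means the SLO has been breached."""
    allowed_failures = total_checks * (1.0 - slo_pct / 100.0)
    return 1.0 - failed_checks / allowed_failures
```

With a 99.9% SLO over 10,000 checks, 10 failures are allowed, so 5 failures consume half the error budget; in many SRE practices, that is the signal to slow feature rollouts and prioritize reliability work.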

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies