Description: Dynatrace Administration
Experience: 5+ years
Location: Pan India (other than Mumbai and Kolkata)

Expertise
- Expert in creating solutions using Dynatrace and deploying them to production, extracting the best value for the customer.

Key Responsibilities
- End-to-end execution and implementation of Dynatrace solutions in complex customer environments (on-prem, cloud, hybrid cloud).
- Involvement in projects requiring a Dynatrace solution from inception and PoC through complete implementation.
- Involvement in techno-presales activities and solution design for enterprise IT estates of customers across domains.
- Document resolved issues effectively for knowledge management, cross-train peers on tool usage, and assist in creating best practices; work independently on multiple assignments, proactively prioritizing focus and effort.
- Present and demonstrate Application Performance Management to prospective clients.
- Understand customer challenges and coordinate with the vendor for a customized solution if needed.
- Bring in product best practices to leverage its capabilities to the fullest.
- Understand evolutions and trends in AIOps, closely following the lifecycle of new Dynatrace features.

Key Skills
- Strong application performance analysis skills.
- Experience deploying Dynatrace tools across different technologies and platforms.
- Experience configuring and customizing the Dynatrace solution, including integration with other tools.
- Must have completed the Dynatrace Implementor certification.
- Very good understanding of tool capabilities for a given customer environment.

Good-to-Have Skills
- Python scripting, Docker, Kubernetes, and DevOps tools such as Ansible, Jenkins, Chef, and Puppet.
- Proficient in delivering product trainings and demonstrating Dynatrace capabilities and new features to existing or prospective customers.

(ref:hirist.tech)
Description
Job details: Managed services support and administration of the AppDynamics (AppD) platform (Prod/Dev/Staging environments).
- Perform daily BAU tasks for managing the AppD environment.
- Data onboarding and instrumentation of IT/business applications and infrastructure monitoring.
- Install and configure APM tools, including agent deployment, instrumentation, and integration with various application stacks.
- Configure and manage synthetic monitoring jobs to simulate user interactions and proactively identify performance issues.
- Develop custom dashboards, reports, and alerts to monitor and track key performance indicators (KPIs).
- Document SOPs covering deployment, development, and BAU tasks.

Good understanding of and experience in:
- Configuring and managing the AppD solution.
- Configuring and customizing AppD to monitor various application stacks, including web, mobile, and cloud-based applications.
- Configuring and managing synthetic monitoring jobs.
- Application performance metrics, tracing, and profiling techniques.
- Configuring end-user monitoring (synthetic and real-user monitoring).
- Troubleshooting performance issues, conducting root cause analysis, and implementing performance optimization strategies.
- Working with various AppD monitoring agents, setting up instrumentation, and integrating APM tools with application frameworks and platforms.
- Excellent problem-solving and analytical skills to diagnose complex performance issues and recommend effective solutions.

Good-to-Have Skills
- AppDynamics Professional / Black Belt certified.
- Understanding of cloud-native architectures and the ability to monitor applications in cloud environments.
- Familiarity with scripting languages like Python or PowerShell for automation and extending APM tool functionality.

(ref:hirist.tech)
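To illustrate what a synthetic monitoring job boils down to (real jobs are configured inside AppDynamics itself; this standalone Python sketch only shows the idea of a scripted check with a pass/fail verdict, and every name in it is hypothetical):

```python
# Illustrative synthetic check: run a scripted probe simulating one user
# interaction and judge it against availability and latency thresholds.
# AppDynamics configures such jobs in-product; names here are made up.
import time

def run_synthetic_check(probe, max_latency_s=2.0):
    """`probe` simulates one user interaction and returns an HTTP-style
    status code; the check passes only if it succeeds fast enough."""
    start = time.monotonic()
    status = probe()
    elapsed = time.monotonic() - start
    ok = status == 200 and elapsed <= max_latency_s
    return {"status": status, "latency_s": round(elapsed, 3), "ok": ok}

# Stand-in probe; a real job would drive a browser or an HTTP client.
result = run_synthetic_check(lambda: 200)
print(result["ok"])  # True
```

A failing probe (non-200 status or a slow response) would flip `ok` to False, which is what the alerting side of the job keys on.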
Description
This is an individual contributor Grafana Developer role requiring the following minimum skill set:
- Design and development of dashboards, and integration of Grafana with data sources such as Prometheus, Elasticsearch, and Loki.
- Writing efficient queries in various languages (e.g., PromQL, LogQL, Lucene, SQL) to extract, analyze, and visualize data accurately within Grafana panels.
- Configuring and optimizing alerting mechanisms, defining alert rules based on thresholds or anomalies, and integrating with notification channels (e.g., Slack, PagerDuty, email) to ensure timely issue identification.
- Leveraging Grafana template variables, transformations, custom visualizations, dashboard linking, and navigation.
- Building Lucene queries for Elasticsearch and PromQL queries for Prometheus for efficient data retrieval and analysis from both data sources.
- Troubleshooting and resolving Grafana-side issues related to Elasticsearch/Prometheus, data collection, query performance, and dashboard rendering.

(ref:hirist.tech)
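As a minimal sketch of the threshold-based alert rules mentioned above (Grafana's real alert rules are defined in the UI or via provisioning; this Python toy only shows the evaluation idea, with a "for N consecutive samples" condition commonly used to avoid flapping):

```python
# Toy threshold alert evaluation: fire only when the metric stays above
# the threshold for several consecutive samples (anti-flapping), which is
# the same shape as a Grafana "for" duration on a threshold rule.

def evaluate_alert(samples, threshold, min_breaches=3):
    """Return True when at least `min_breaches` consecutive samples
    exceed `threshold`."""
    consecutive = 0
    for value in samples:
        consecutive = consecutive + 1 if value > threshold else 0
        if consecutive >= min_breaches:
            return True
    return False

# A brief spike does not fire; a sustained breach does.
cpu = [0.41, 0.93, 0.45, 0.91, 0.92, 0.95, 0.96]
print(evaluate_alert(cpu, threshold=0.9))  # True
```

The same logic expressed in PromQL would be a threshold expression plus a `for:` clause on the alerting rule, rather than hand-rolled code.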
Experience: 5-8 years
Band: B2
Location: Pan India (except Mumbai, Kolkata, Coimbatore, Bhubaneswar)

Job Description
- 5+ years of hands-on experience with Datadog's stack in multi-cloud or hybrid cloud environments.
- Strong background in systems engineering or software development.
- Experience with Kubernetes and cloud platforms (AWS, GCP, Azure).
- Strong proficiency in programming and scripting languages like Go, Python, or Java.
- Familiarity with monitoring, alerting, and incident response practices.
- Deep understanding of cloud-native architectures and microservices.
- Experience with high-throughput, low-latency systems.
- Strong communication skills.
- Experience with CI/CD pipelines and monitoring tools.
- Deep understanding of Windows and Linux systems, networking, and operating system internals.
- Experience with distributed systems and high-availability architectures.
- Strong experience with Docker, Kubernetes, and service mesh technologies.
- Experience with tools like Terraform, Ansible, or Pulumi is optional but an extra advantage.
- Building dashboards, monitors, and alert setup systems.
- Familiarity with Jenkins, GitHub Actions, CircleCI, or similar.
- Automating deployments, rollbacks, and testing pipelines.

(ref:hirist.tech)
Description
- Supports all current SAP PLM functionality: Recipe Development, Recipe Management, Collaboration Folders, Document Management (DMS), WWI reports, and integration with other SAP modules and applications outside SAP.
- Identifies, analyzes, designs, documents, and tests corrections to software problems.
- Monitors and resolves system performance issues; monitors batch schedules.
- Proficient in configuring SAP PLM system modules, such as Specification Management and Recipe Development.
- Works closely with Enterprise Application Development (EAD), BASIS, Master Data, and Testing Center of Excellence (TCOE) cross-functional MGTI team members to resolve incidents and support development and testing of PLM system functionality, as well as reports and interfaces that transfer data to downstream applications.
- Assists end users in identifying and resolving system issues by providing application knowledge and technical expertise.
- Troubleshoots and resolves workflow failures and issues.
- Participates in on-call support activities.
- Supports project implementations: participates in gathering functional requirements in partnership with other application teams (including, but not limited to, Supply Chain and Master Data) and prepares detailed functional specifications from which programs are written.
- Designs, configures, tests, debugs, and documents programs.
- Collaborates with development teams on custom code.
- Identifies proactive ways to improve and/or simplify solutions.
- Provides technical mentoring to other technology teams by explaining, communicating, and correlating the requirements and dependencies of PLM systems and business processes to their respective systems and processes.
- Trains end users in the use of SAP PLM systems.
- Able to write functional specifications and any necessary related documentation.
- Creates data required to unit-test issue resolutions and development.
- Able to unit test and fit test development objects related to PLM enhancements and break/fix incidents across SAP.

Requirements
- Must have worked on the S/4HANA PLM solution.
- Must have experience in industries such as FMCG, pharma manufacturing, or life sciences.
- Should have 10+ years of experience in PLM process definition, analysis, design, and implementation.
- Extensive experience with configuration and testing of the SAP PLM module.
- Design, configure, and customize SAP PLM modules, including Product Data Management (PDM), Document Management System (DMS), Engineering Change Management (ECM), and Recipe Development (RD).
- Provide end-user training and support during and after SAP PLM implementation.
- Troubleshoot and resolve issues related to SAP PLM modules.
- Stay updated with the latest SAP PLM functionalities, best practices, and industry trends.
- Assist in the continuous improvement of business processes related to product lifecycle management.
- Should be able to configure workflows in Fiori/Web UI; synchronization of recipes to BOMs; ECM of recipe changes; general and plant recipes; specification management (value assignment types, classes, and characteristics assignments); phrase management; change management through workflow; materials and packaging specifications.
- Should be able to work on design document (FS) creation and coordinate TS document creation with the ABAP team for EHS and specification management.
- Troubleshoot and provide resolution to issues reported by users, including integration issues reported or identified.

Beneficial, not required
- ABAP programming language skills are a plus.

(ref:hirist.tech)
Description
- Experienced PPDS expert with DP and SNP hands-on experience, willing to work hands-on as an individual contributor or in teams.
- Collaborate with clients and business stakeholders to gather requirements and design efficient production planning and scheduling processes using SAP PPDS. Identify opportunities for process improvement and recommend best practices.
- Configure, implement, and support SAP PPDS solutions, including heuristics, optimization, detailed scheduling strategies, and CIF (Core Interface).
- Strong knowledge of CIF (Core Interface) and its role in master data and transaction data transfer.
- Knowledge of ECC-PP master data such as production versions (PVs), BOMs, routings, and work centers.
- APO master data setup, such as setup matrix, planning parameters, and parallel processing; configurations such as planning board, DS board, various profiles, the PPDS subcontracting process, and alerts configuration.
- Document test results, configuration changes, and system design for future reference and knowledge sharing.
- Conduct workshops, system testing (SIT/UAT), and user training, and provide post-implementation support during hypercare.
- Collaborate with cross-functional teams to ensure seamless integration of SAP PPDS with other SAP modules, such as Materials Management (MM) and Production Planning (PP).
- Assist with data migration activities, ensuring accurate and timely transfer of relevant data into the PPDS system.
- Manage deliverables within agreed timelines.
- Collaborate with project teams, consultants, and stakeholders to ensure successful implementation and smooth project execution.

(ref:hirist.tech)
Description
Band: B3
Years of Experience: 6.0-10.0
Job Title: Machine Learning Engineer (Python coding with ML experience)
Company: Wipro
Location: Bangalore (5 days working from the Wipro Kodathi ODC; no WFH allowed at the moment)

Job Summary
We are seeking a highly skilled and versatile Machine Learning Engineer who combines strong software engineering with ML exposure: experience designing, developing, and maintaining robust, scalable, and efficient software applications in Python, with a strong emphasis on object-oriented programming principles to manage hyperparameters, encapsulate evaluation metrics, and create controlled interfaces for model wrappers. You will be instrumental in designing, developing, deploying, and maintaining our core AI-powered products and features. This demands a blend of analytical rigor, coding prowess, architectural foresight, and a deep understanding of the entire machine learning lifecycle, from data exploration and model development to deployment, monitoring, and continuous improvement.

Key Responsibilities
- Coding: Write clean, efficient, and well-documented Python code adhering to OOP principles (encapsulation, inheritance, polymorphism, abstraction). Experience with Python and related libraries (e.g., TensorFlow, PyTorch, scikit-learn). Responsible for the entire ML pipeline, from data ingestion and preprocessing to model training, evaluation, and deployment.
- End-to-end ML application development: Design, develop, and deploy machine learning models and intelligent systems into production environments, ensuring they are robust, scalable, and performant.
- Software design and architecture: Apply strong software engineering principles to design and build clean, modular, testable, and maintainable ML pipelines, APIs, and services. Contribute significantly to architectural decisions for our ML platform and applications.
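A minimal sketch of the OOP style described above, in which hyperparameters and evaluation metrics are encapsulated behind a controlled model-wrapper interface. Every class, method, and number here is illustrative rather than a prescribed design, and the "model" is a toy one-weight regressor:

```python
# Illustrative: hyperparameters live in an immutable config object, and
# callers train and score via wrapper methods rather than by mutating
# internal state directly. All names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class HyperParams:
    learning_rate: float = 0.01
    epochs: int = 10

class ModelWrapper:
    """Controlled interface around a toy single-weight model y ~ w * x."""
    def __init__(self, params: HyperParams):
        self._params = params
        self._weight = 0.0

    def fit(self, xs, ys):
        # Toy batch gradient descent on squared error.
        for _ in range(self._params.epochs):
            grad = sum(2 * x * (self._weight * x - y) for x, y in zip(xs, ys))
            self._weight -= self._params.learning_rate * grad / len(xs)
        return self

    def predict(self, xs):
        return [self._weight * x for x in xs]

    def mse(self, xs, ys):
        preds = self.predict(xs)
        return sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)

model = ModelWrapper(HyperParams(learning_rate=0.1, epochs=50)).fit([1, 2, 3], [2, 4, 6])
print(round(model.mse([1, 2, 3], [2, 4, 6]), 3))  # 0.0
```

The same shape scales to real frameworks: the wrapper would hold a TensorFlow or PyTorch model internally while the hyperparameter object and the fit/predict/metric interface stay the same.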
- Data engineering for ML: Design and implement data pipelines for feature engineering, data transformation, and data versioning to support ML model training and inference.
- MLOps and productionization: Establish and implement MLOps best practices, including CI/CD for ML, automated testing, model versioning, monitoring (performance, drift, bias), and alerting systems for production ML models.
- Performance and scalability: Identify and resolve performance bottlenecks in ML systems. Ensure the scalability and reliability of deployed models under varying load conditions.
- Documentation: Create clear and comprehensive documentation for ML models, pipelines, and services.

Required Qualifications
- Education: Master's degree in Computer Science, Machine Learning, Data Science, Electrical Engineering, or a related quantitative field.
- Experience: 5+ years of professional experience in machine learning engineering, software engineering with a strong ML focus, or a similar role.
- Must-have programming skills: Expert-level proficiency in Python, including experience writing production-grade, clean, efficient, and well-documented code. Experience with other languages (e.g., Java, Go, C++) is a plus.
- Strong software engineering fundamentals: Deep understanding of software design patterns, data structures, algorithms, object-oriented programming, and distributed systems.

Good to Have
- Machine learning expertise: Solid theoretical and practical understanding of various machine learning algorithms. Proficiency with ML frameworks such as PyTorch and scikit-learn. Experience with feature engineering, model evaluation metrics, and hyperparameter tuning.
- Data handling: Experience with SQL and NoSQL databases, data warehousing concepts, and processing large datasets.
- Problem-solving: Excellent analytical and problem-solving skills, with a pragmatic approach to delivering solutions.
- Communication: Strong verbal and written communication skills, with the ability to explain complex technical concepts to both technical and non-technical audiences.

Preferred Qualifications
- Experience with big data technologies (e.g., Spark, Hadoop, Kafka).
- Contributions to open-source projects or a strong portfolio of personal projects.

(ref:hirist.tech)
Strong Python programmer + machine learning lead + agentic AI framework (certification or project exposure)
Band: C1
Experience: 10+ years
Job Title: Sr. AI/ML/GenAI Lead/Architect
Company: Wipro
Location: Wipro Kodathi, Bangalore (5 days work from office; no WFH allowed)

Job Summary
We are seeking a highly skilled and versatile Senior AI/ML/GenAI Lead who embodies the rare combination of a strong software engineer, a pragmatic data scientist, and an expert in building robust, scalable ML applications. Experience in GenAI, LLMs, ML/DL/NLP, RAG, LangChain, Mistral, Llama, Hugging Face, Python, TensorFlow, PyTorch, Django, and vector DBs. This role is critical to our mission, bridging the gap between cutting-edge ML research and robust, production-ready systems. You will be instrumental in designing, developing, deploying, and maintaining our core AI-powered products and features. This demands a blend of analytical rigor, architectural foresight, and a deep understanding of the entire machine learning lifecycle, from data exploration and model development to deployment, monitoring, and continuous improvement. If you thrive on taking ML models from concept to customer impact and possess exceptional software design skills, we encourage you to apply.

Key Responsibilities
- ML model development and optimization: Experience in developing and implementing generative AI models and algorithms. Collaborate with data scientists to understand business problems; explore data; and develop, train, and evaluate machine learning models (e.g., supervised, unsupervised, deep learning, reinforcement learning). Optimize models for performance, efficiency, and interpretability.
- End-to-end ML application development: Lead the design, development, and deployment of machine learning models and intelligent systems into production environments, ensuring they are robust, scalable, and performant.
- Software design and architecture: Apply strong software engineering principles to design and build clean, modular, testable, and maintainable ML pipelines, APIs, and services. Contribute significantly to architectural decisions for our ML platform and applications.
- Data engineering for ML: Design and implement data pipelines for feature engineering, data transformation, and data versioning to support ML model training and inference.
- Performance and scalability: Identify and resolve performance bottlenecks in ML systems. Ensure the scalability and reliability of deployed models under varying load conditions.
- Collaboration and mentorship: Work closely with cross-functional teams, including data scientists, software engineers, product managers, and DevOps, to integrate ML solutions seamlessly into our products. Potentially mentor junior engineers on best practices in ML engineering and software design.
- Research and innovation: Stay abreast of the latest advancements in machine learning, MLOps, and related technologies. Propose and experiment with new techniques and tools to improve our ML capabilities.
- Documentation: Create clear and comprehensive documentation for ML models, pipelines, and services.

Required Qualifications
- Education: Master's degree in Computer Science, Machine Learning, Data Science, Electrical Engineering, or a related quantitative field.
- Experience: 10+ years of professional experience in machine learning engineering, software engineering with a strong ML focus, or a similar role.
- Good-to-have programming skills: Proficiency in Python, including experience writing production-grade, clean, efficient, and well-documented code. Experience with other languages (e.g., Java, Go, C++) is a plus.
- Strong software engineering fundamentals: Deep understanding of software design patterns, data structures, algorithms, object-oriented programming, and distributed systems.
Must-Have Machine Learning Expertise
- Solid theoretical and practical understanding of various machine learning algorithms.
- Proficiency with ML frameworks such as PyTorch and scikit-learn.
- Experience with feature engineering, model evaluation metrics, and hyperparameter tuning.
- Data handling: Experience with SQL and NoSQL databases, data warehousing concepts, and processing large datasets.
- Problem-solving: Excellent analytical and problem-solving skills, with a pragmatic approach to delivering solutions.
- Communication: Strong verbal and written communication skills, with the ability to explain complex technical concepts to both technical and non-technical audiences.

Preferred Qualifications
- Master's or Ph.D. in a relevant field.
- Contributions to open-source projects or a strong portfolio of personal projects.
- Experience with A/B testing and experimental design for ML models.
- Knowledge of data governance, privacy, and security best practices in ML.

(ref:hirist.tech)
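The RAG pattern listed among the required experience can be sketched minimally: retrieve the most relevant document for a query, then build a grounded prompt for the generator. This pure-Python toy uses bag-of-words cosine similarity; a real system would use an embedding model and a vector DB (via, e.g., LangChain), and every name and document below is illustrative:

```python
# Toy retrieval-augmented generation (RAG) sketch: rank documents by
# cosine similarity over bag-of-words vectors, then assemble a prompt
# that grounds the model's answer in the retrieved context.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    qv = Counter(query.lower().split())
    return max(docs, key=lambda d: cosine(qv, Counter(d.lower().split())))

docs = [
    "Dynatrace provides application performance monitoring.",
    "Recipes are synchronized to BOMs in SAP PLM.",
]
context = retrieve("application performance monitoring tools", docs)
prompt = f"Answer using only this context:\n{context}\nQuestion: ..."
print(context)
```

Swapping `cosine` over word counts for embedding-vector similarity, and the list for a vector store, turns this sketch into the standard production shape without changing the control flow.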
Description
Project Role: Custom Software Engineer (Chennai)
Project Role Description: Develop custom software solutions to design, code, and enhance components across systems or applications. Use modern frameworks and agile practices to deliver scalable, high-performing solutions tailored to specific business needs.
Must-have skills: BlueYonder Space & Floor Planning
Good-to-have skills: A&D Aftermarket
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by delivering high-quality applications that enhance operational efficiency and user experience.

Roles & Responsibilities
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of application features.

Professional & Technical Skills
- Must-to-have skills: Proficiency in BlueYonder Space & Floor Planning.
- Good-to-have skills: Experience with A&D Aftermarket.
- Strong understanding of application development methodologies.
- Experience with integration of applications with existing systems.
- Familiarity with user interface design principles and best practices.

Additional Information
The candidate should have a minimum of 7.5 years of experience in BlueYonder Space & Floor Planning. This position is based at our Chennai office.
A 15-year full-time education is required. (ref:hirist.tech)
Description
Project Role: Custom Software Engineer
Project Role Description: Develop innovative technology solutions for emerging industries and products. Interpret system requirements into design specifications.
Must-have skills: BlueYonder Transportation Management
Good-to-have skills: A&D Aftermarket
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary
As an Advanced Application Engineer, you will develop innovative technology solutions for emerging industries and products. Your typical day will involve interpreting system requirements into design specifications, collaborating with various teams to ensure alignment on project goals, and engaging in problem-solving activities to enhance the overall efficiency of the development process. You will also be responsible for managing project timelines and ensuring that deliverables meet quality standards, all while fostering a collaborative environment that encourages creativity and innovation among team members.

Roles & Responsibilities
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills
- Must-to-have skills: Proficiency in BlueYonder Transportation Management.
- Good-to-have skills: Experience with A&D Aftermarket.
- Strong understanding of system design and architecture.
- Experience in developing and implementing technology solutions.
- Proficient in project management methodologies and tools.

Additional Information
The candidate should have a minimum of 7.5 years of experience in BlueYonder Transportation Management. This position is based at our Chennai office. A 15-year full-time education is required.
(ref:hirist.tech)
Splunk ITSI - Experience: 5-10 years - Pan India

- Splunk ITSI: Expertise in utilizing Citrix observability data with Splunk IT Service Intelligence for comprehensive monitoring and analytics of Citrix environments.
- VDI and VM observability: Proficient in monitoring Virtual Desktop Infrastructure (VDI) and virtual machines (VMs) for performance, reliability, and scalability.
- OpenTelemetry, Kafka, and Splunk/Grafana: Hands-on experience with OpenTelemetry for unified telemetry, Kafka for real-time data ingestion, and Splunk/Grafana for powerful data visualization and alerting.
- Data science and machine learning: Proficient in Python, TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy for developing machine learning models for anomaly detection and root cause analysis.
- ETL and data analysis: Extensive knowledge of ETL techniques using Apache Airflow, Apache NiFi, and Spark.
- Distributed systems and cloud: Thorough understanding of Kubernetes, Docker, and service mesh technologies (Istio, Linkerd), with experience in AWS, Azure, and GCP observability services.
- Event-driven architectures: Experience with Kafka and RabbitMQ, and with integrating observability into event-driven architectures.
- Time-series analysis and predictive analytics: Skills in time-series analysis for predictive maintenance and alerting.
- Security and compliance: Ensuring observability data is managed securely and in compliance with regulations such as GDPR and HIPAA.
- Performance optimization: Ability to conduct root cause analysis and improve observability for system optimization.

Experience
- Led an observability project in a Citrix environment: Directed the implementation of Citrix observability with Splunk ITSI, enhancing the monitoring of over 1,000 virtual desktops and applications. Improved MTTR by 40% and increased user satisfaction.
- VDI and VM observability: Designed and deployed observability solutions for VDI and VMs using OpenTelemetry and Splunk, ensuring performance and availability of critical applications and infrastructure.
- Advanced monitoring and analytics with Kafka and Splunk: Spearheaded the deployment of real-time monitoring solutions using Kafka for event streaming and Splunk for visualization and alerting in a high-traffic environment.
- Machine-learning-driven anomaly detection: Developed and implemented machine learning algorithms in Python for anomaly detection in telemetry data.
- Cross-functional collaboration: Worked closely with SREs, DevOps, and development teams to enhance system reliability and incident response.

(ref:hirist.tech)
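A minimal illustration of Python-based anomaly detection on telemetry of the kind described above: flag points whose z-score against the series exceeds a threshold. The data and threshold are made up, and production pipelines would use rolling windows, seasonality handling, or learned models rather than a global mean:

```python
# Minimal z-score anomaly detector for a telemetry series (illustrative).
import statistics

def anomalies(series, z_threshold=2.5):
    """Return indices of points more than z_threshold standard
    deviations from the mean of the series."""
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(series)
            if abs(v - mean) / stdev > z_threshold]

# One latency spike among otherwise steady samples.
latency_ms = [20, 22, 21, 19, 23, 20, 21, 400, 22, 20]
print(anomalies(latency_ms))  # [7]
```

Note that the threshold matters: a single outlier in a short series can only reach a z-score of about the square root of the series length, so a fixed cutoff of 3 would miss this spike over just ten samples.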
Description
AppDynamics (AppD) - Location: Pan India; Experience: 5-10 years

Managed services support and administration of the AppD platform (Prod/Dev/Staging environments).
- Perform daily BAU tasks for managing the AppD environment.
- Data onboarding and instrumentation of IT/business applications and infrastructure monitoring.
- Install and configure APM tools, including agent deployment, instrumentation, and integration with various application stacks.
- Configure and manage synthetic monitoring jobs to simulate user interactions and proactively identify performance issues.
- Develop custom dashboards, reports, and alerts to monitor and track key performance indicators (KPIs).
- Document SOPs covering deployment, development, and BAU tasks.

Good understanding of and experience in:
- Configuring and managing the AppD solution.
- Configuring and customizing AppD to monitor various application stacks, including web, mobile, and cloud-based applications.
- Configuring and managing synthetic monitoring jobs.
- Application performance metrics, tracing, and profiling techniques.
- Configuring end-user monitoring (synthetic and real-user monitoring).
- Troubleshooting performance issues, conducting root cause analysis, and implementing performance optimization strategies.
- Working with various AppD monitoring agents, setting up instrumentation, and integrating APM tools with application frameworks and platforms.
- Excellent problem-solving and analytical skills to diagnose complex performance issues and recommend effective solutions.

Good-to-Have Skills
- AppDynamics Professional / Black Belt certified.
- Understanding of cloud-native architectures and the ability to monitor applications in cloud environments.
- Familiarity with scripting languages like Python or PowerShell for automation and extending APM tool functionality.

(ref:hirist.tech)
Description
Dynatrace - Experience: 5-13 years; Location: Pan India (except Mumbai, Kolkata, Coimbatore, Odisha)

Expertise
- Expert in creating solutions using Dynatrace and deploying them to production, extracting the best value for the customer.

Key Responsibilities
- End-to-end execution and implementation of Dynatrace solutions in complex customer environments (on-prem, cloud, hybrid cloud).
- Involvement in projects requiring a Dynatrace solution from inception and PoC through complete implementation.
- Involvement in techno-presales activities and solution design for enterprise IT estates of customers across domains.
- Document resolved issues effectively for knowledge management, cross-train peers on tool usage, and assist in creating best practices; work independently on multiple assignments, proactively prioritizing focus and effort.
- Present and demonstrate Application Performance Management to prospective clients.
- Understand customer challenges and coordinate with the vendor for a customized solution if needed.
- Bring in product best practices to leverage its capabilities to the fullest.
- Understand evolutions and trends in AIOps, closely following the lifecycle of new Dynatrace features.

Key Skills
- Strong application performance analysis skills.
- Experience deploying Dynatrace tools across different technologies and platforms.
- Experience configuring and customizing the Dynatrace solution, including integration with other tools.
- Must have completed the Dynatrace Implementor certification.
- Very good understanding of tool capabilities for a given customer environment.

Good-to-Have Skills
- Python scripting, Docker, Kubernetes, and DevOps tools such as Ansible, Jenkins, Chef, and Puppet.
- Proficient in delivering product trainings and demonstrating Dynatrace capabilities and new features to existing or prospective customers.

(ref:hirist.tech)
Description
Experience: 5-10 years
Location: Pan India

Key Responsibilities
- Extensive experience working with New Relic APM solutions, including instrumentation, creation of business journeys, and optimization, enabling proactive and efficient monitoring, alerting, and troubleshooting of complex systems, applications, and critical business processes.
- In-depth knowledge of observability principles, practices, and industry standards.
- Strong understanding of application architecture, infrastructure, and cloud technologies (Kubernetes).
- Collaborate with clients to understand their business objectives and translate them into APM requirements.
- Develop custom dashboards, reports, and alerts to monitor and track key performance indicators (KPIs).
- Provide guidance and support to clients on leveraging APM/observability data to drive proactive performance management.
- Collaborate with cross-functional teams, including developers, operations, and business stakeholders, to drive best practices and continuous improvement.

Mandatory Skills
Good understanding of and experience in:
- Implementing and managing New Relic APM/observability tools.
- Working with monitoring agents, setting up instrumentation, and integrating observability tools with application frameworks and platforms.
- Assessing the right metrics and traces to configure for the given tech stack.
- Configuring and customizing APM/observability tools, including data onboarding (e.g., DB Connect, Linux, Windows, Azure, and other data sources) to monitor various application stacks, including web, mobile, and cloud-based applications.
- Application performance metrics, tracing, and profiling techniques.
- Log management and analysis activities.
- Troubleshooting performance issues, conducting root cause analysis, and implementing performance optimization strategies.
Strong knowledge of distributed systems, microservices architectures, and cloud technologies (e.g., AWS, Azure, GCP). Experience in analysing Observability data, identifying performance patterns, and making data-driven recommendations. Familiarity with CI/CD pipelines and DevOps practices in a cloud context Excellent problem-solving and analytical skills to diagnose complex performance issues and recommend effective solutions. Strong communication and presentation skills to effectively communicate technical concepts and findings to both technical and non-technical stakeholders. Good hands-on with scripting languages like Python or PowerShell for automation and extending APM tool functionality. Ability to work collaboratively in cross-functional teams and manage client relationships effectively. New Relic full stack observability practitioner. (ref:hirist.tech)
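The KPI dashboards and alerts mentioned above are typically expressed in NRQL, New Relic's query language. As a hedged, illustrative sketch (the application name and time window are placeholders, not from the listing), a small helper that assembles a NRQL query for a transaction-latency and error-rate KPI:

```python
# Hypothetical sketch: assemble a NRQL query of the kind used for KPI
# dashboards and alert conditions. App name and window are placeholders.
def kpi_nrql(app_name: str, minutes: int = 30) -> str:
    """NRQL for average transaction duration and error rate of one app."""
    return (
        "SELECT average(duration), "
        "percentage(count(*), WHERE error IS true) "
        f"FROM Transaction WHERE appName = '{app_name}' "
        f"SINCE {minutes} minutes ago TIMESERIES"
    )
```

The same string could back a dashboard widget or an alert condition; the point of the helper is only to keep the app name and window parameterized rather than copy-pasted across queries.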
Description
Datadog Experience: 5-10 Years
Location: Pan India

- 5+ years of hands-on experience with Datadog's stack in multi-cloud or hybrid cloud environments.
- Strong background in systems engineering or software development.
- Experience with Kubernetes and cloud platforms (AWS, GCP, Azure).
- Strong proficiency in programming and scripting languages such as Go, Python, or Java.
- Familiarity with monitoring, alerting, and incident response practices.
- Deep understanding of cloud-native architectures and microservices.
- Experience with high-throughput, low-latency systems.
- Strong communication skills.
- Experience with CI/CD pipelines and monitoring tools.
- Deep understanding of Windows and Linux systems, networking, and operating system internals.
- Experience with distributed systems and high-availability architectures.
- Strong experience with Docker, Kubernetes, and service mesh technologies.
- (Optional) Experience with tools such as Terraform, Ansible, or Pulumi is an added advantage.
- Building dashboards, monitors, and alert setup systems.
- Familiarity with Jenkins, GitHub Actions, CircleCI, or similar.
- Automating deployments, rollbacks, and testing pipelines.

(ref:hirist.tech)
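As a hedged illustration of the "building monitors" skill above (not taken from the listing), the sketch below constructs the JSON body for creating a Datadog metric monitor via the Monitors API (`POST /api/v1/monitor`). The metric query, threshold, and notification handle are illustrative placeholders.

```python
# Hypothetical sketch: payload for a Datadog metric-alert monitor.
# The query, threshold, and @-handle are placeholders for illustration.
def cpu_monitor_payload(threshold: float = 80.0) -> dict:
    """Monitor firing when 5-minute average user CPU exceeds threshold."""
    return {
        "name": "High CPU on any host",
        "type": "metric alert",
        "query": f"avg(last_5m):avg:system.cpu.user{{*}} > {threshold}",
        "message": "CPU above threshold. @slack-ops-alerts",  # placeholder
        "options": {"thresholds": {"critical": threshold}},
    }
```

In practice this payload would be sent with an authenticated HTTP client, or expressed declaratively through Terraform's Datadog provider, which the listing names as an added advantage.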
Role Overview:
As a Splunk Architect, you will play a crucial role in designing and implementing Splunk Enterprise and Cloud solutions for infrastructure and application logging, monitoring, and predictive analytics. Your expertise in integrating element monitoring tools with Splunk will be essential for ensuring effective data integrations across domains such as Infrastructure, Network, OS, DB, Middleware, Storage, Application, Virtualization, and Cloud Architectures. Additionally, your experience with ITIL and DevOps practices will contribute to event management best practices.

Key Responsibilities:
- Design and architect Splunk solutions for infrastructure and application logging
- Integrate element monitoring tools with Splunk-based solutions
- Work on Apps and Add-ons for monitoring and data integrations
- Integrate Splunk with LDAP/SSO/OKTA and ITSM solutions such as ServiceNow
- Implement ITIL and event management best practices
- Troubleshoot and debug Splunk solutions effectively
- Implement automation and scripting using Python and/or bash scripting
- Integrate third-party tools with Splunk for enhanced functionality

Qualifications Required:
- Minimum 8 years of technical IT experience
- At least 6 years of experience in Splunk-based design and deployment
- Hands-on experience in deploying Enterprise Management Software Solutions
- Experience with Splunk Observability Cloud
- Solid hands-on cloud infrastructure experience, preferably on AWS
- Splunk certification at the Architect level
- ITIL Certification
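As a hedged sketch of the Python automation responsibility above (not part of the role description), the helper below assembles an SPL search and the form parameters for submitting it to Splunk's search REST endpoint (`POST /services/search/jobs`). The index and sourcetype names are placeholders.

```python
# Hypothetical sketch: build an SPL search and the parameters for
# Splunk's search jobs REST endpoint. Index/sourcetype are placeholders.
def error_rate_spl(index: str, span: str = "5m") -> str:
    """SPL that charts ERROR-level log counts per host over time."""
    return (
        f"search index={index} sourcetype=app_logs log_level=ERROR "
        f"| timechart span={span} count by host"
    )


def search_job_params(spl: str) -> dict:
    """Form parameters for POST /services/search/jobs."""
    return {"search": spl, "exec_mode": "blocking", "output_mode": "json"}
```

A real automation script would POST these parameters with an authenticated session (or use the Splunk SDK for Python) and then fetch results from the returned search job.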
Experience
- 5+ years of professional experience in Machine Learning Engineering, Software Engineering with a strong ML focus, or a similar role.
- Programming Skills: Expert-level proficiency in Python, including experience writing production-grade, clean, efficient, and well-documented code. Experience with other languages (e.g., Java, Go, C++) is a plus.
- Software Engineering Fundamentals: Deep understanding of software design patterns, data structures, algorithms, object-oriented programming, and distributed systems.
- Machine Learning Expertise: Solid theoretical and practical understanding of various machine learning algorithms. Proficiency with ML frameworks such as PyTorch and Scikit-learn. Experience with feature engineering, model evaluation metrics, and hyperparameter tuning.
- Data Handling: Experience with SQL and NoSQL databases, data warehousing concepts, and processing large datasets.
- Problem Solving: Excellent analytical and problem-solving skills, with a pragmatic approach to delivering solutions.
- Communication: Strong verbal and written communication skills, with the ability to explain complex technical concepts to both technical and non-technical audiences.

(ref:hirist.tech)
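As a hedged, stdlib-only illustration of the model-evaluation skills mentioned above (not part of the listing), the sketch below computes precision, recall, and F1 from binary predictions, the kind of metrics used to evaluate a classifier before and after hyperparameter tuning:

```python
# Hypothetical sketch: binary-classification evaluation metrics,
# computed by hand rather than via a framework, for illustration.
def precision_recall_f1(y_true, y_pred):
    """Return (precision, recall, F1) for binary labels in {0, 1}."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

In day-to-day work these would come from `sklearn.metrics`, but writing them out makes the tradeoff explicit: precision penalizes false positives, recall penalizes false negatives, and F1 is their harmonic mean.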