6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Role Summary:
We are seeking a skilled DevOps Engineer with a strong focus on Continuous Integration and Continuous Deployment (CI/CD) to join our engineering team. In this role, you will be responsible for designing, implementing, and maintaining robust CI/CD pipelines that enable fast, secure, and reliable software delivery. You will work closely with development, QA, and operations teams to automate and streamline the software release process.

Key Responsibilities:
Design, develop, and maintain scalable CI/CD pipelines using tools such as Jenkins, GitLab CI, GitHub Actions, Git, TeamCity, Ignite, Kafka/ZooKeeper, LightSpeed, OpenShift (ECS), and UrbanCode Deploy (uDeploy).
Automate build, test, and deployment workflows for various application environments (development, staging, production).
Integrate unit testing, static code analysis, security scanning, and performance tests into pipelines.
Manage artifact repositories and development strategies.
Collaborate with developers and QA engineers to improve software development practices and shorten release cycles.
Monitor and optimize pipeline performance to ensure fast feedback and deployment reliability.
Ensure compliance with security and governance policies throughout the CI/CD process.
Troubleshoot pipeline failures, build issues, and deployment problems across environments.

Experience: 6 to 8 years of hands-on experience.

Required skills:
Monitoring: ELK (Elasticsearch, Logstash, Kibana, Metricbeat, Filebeat), AppO
CI/CD: Git, Jenkins, TeamCity, Ignite, Kafka/ZooKeeper, LightSpeed, OpenShift (ECS), UrbanCode Deploy (uDeploy)
Data Quality Check: Drools Workbench, Java (Spring), KIE API, REST

Qualifications:
5+ years of relevant experience in the Financial Services industry
Intermediate-level experience in an Applications Development role
Consistently demonstrates clear and concise written and verbal communication
Demonstrated problem-solving and decision-making skills
Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements

Education:
Bachelor’s degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills
Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills
CI/CD, DevOps, GitLab.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
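For context on the pipeline-automation work described in the listing above, here is a minimal, hedged sketch in Python of a scripted build/test/scan sequence of the kind a Jenkins or GitHub Actions pipeline would orchestrate. The stage commands and tool choices are illustrative assumptions, not part of the posting.

```python
"""Minimal sketch of a scripted build/test/scan sequence, the kind of stages a
CI/CD pipeline would orchestrate. Stage commands are placeholder assumptions."""
import subprocess
import sys

# Hypothetical stage commands; a real pipeline would run project-specific tools.
STAGES = [
    ("build", ["python", "-m", "compileall", "."]),
    ("unit-tests", ["python", "-m", "pytest", "-q"]),
    ("static-analysis", ["python", "-m", "pyflakes", "."]),
]


def run_stage(name, cmd):
    """Run one stage, stream its output, and fail fast on a non-zero exit code."""
    print(f"--- stage: {name} ---")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print(f"stage '{name}' failed with exit code {result.returncode}")
        sys.exit(result.returncode)


if __name__ == "__main__":
    for stage_name, stage_cmd in STAGES:
        run_stage(stage_name, stage_cmd)
    print("pipeline finished: all stages passed")
```

A real pipeline would express these stages declaratively in the CI tool itself; the script simply shows the fail-fast ordering of build, test, and scan steps the posting refers to.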
Posted 1 month ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
NVIDIA is looking for a world-class engineer to join its multifaceted and fast-paced Infrastructure, Planning and Processes organization, where you will work as a Senior DevOps and SRE Engineer. The position is part of a fast-paced crew that develops and maintains sophisticated build and test environments for a multitude of hardware platforms, both NVIDIA GPUs and Tegra processors, along with various operating systems (Windows/Linux/Android). The team works with various other business units within NVIDIA Software, such as Graphics Processors, Mobile Processors, Deep Learning, Artificial Intelligence, Robotics and Driverless Cars, to cater to their infrastructure and systems needs.

What You’ll Be Doing
End-to-end implementation of the Kubernetes architecture: design, deployment, hardening, networking, sizing, scaling, etc.
Implementing high-availability clusters and disaster recovery solutions.
Strong system administration using Configuration as Code and infrastructure-as-code tools such as Ansible, Puppet, Chef and Terraform.
Design and implement logging and monitoring solutions to gain more insight into application and system health. Implement critical metrics using various analytics methods and dashboards.
Craft and develop tools needed for automating workflows.
Reuse AI techniques to extract useful signals about machines and jobs from the data generated.
Take part in prototyping, crafting and developing cloud infrastructure for NVIDIA.
Participating in on-call support and critical issue coverage as an SRE engineer.

What We Need To See
Solid programming background in Python/Go and/or similar scripting languages.
Excellent debugging, problem-solving and analytical skills.
Strong understanding of architectural requirements and development processes involved in building reliable, robust, scalable data products and pipelines.
Proficient in configuration management and IaC tools like Ansible, Puppet, Chef, Terraform.
Strong background with GitLab, Jenkins, Flux, ArgoCD and/or other tools to build secure CI/CD systems.
Strong expertise in Kubernetes architecture, networking, RBAC, and persistent storage solutions like Trident, Ceph, EBS, Longhorn, etc.
Proficient in secret management tools like HashiCorp Vault, AWS Secrets Manager, etc.
Proficient in data analytics/visualization and monitoring tools like Kibana, Grafana, Splunk, Zabbix, Prometheus and/or similar systems.
5+ years of proven experience.
Bachelor’s or master’s degree in Computer Science, Software Engineering, or equivalent experience.

Ways To Stand Out From The Crowd
Thrives in a multi-tasking environment with constantly evolving priorities.
Prior experience with a large-scale operations team.
Experience with using and improving data centers.
Expertise with Windows server infrastructure.
Outstanding interpersonal skills and communication with all levels of management.
Ability to break complex problems into simple sub-problems and then reuse available solutions to implement most of those.
Ability to design simple systems that can work efficiently without needing much support.
Ability to leverage AI/ML to proactively detect and resolve incidents, automate alert triaging and log analysis, and automate repetitive workflows.

With competitive salaries and a generous benefits package, we are widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to outstanding growth, our exclusive engineering teams are rapidly growing.
If you’re a creative and autonomous engineer with a real passion for technology, we want to hear from you. JR1997450
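As a hedged illustration of the monitoring work this role describes, the sketch below queries a Prometheus server's standard instant-query HTTP API for instances reporting down. The server URL is an assumed placeholder; nothing here is specific to NVIDIA's environment.

```python
"""Sketch: ask a Prometheus server which scrape targets are down ("up == 0"),
assuming a reachable server at PROM_URL (placeholder address)."""
import requests

PROM_URL = "http://prometheus.example.internal:9090"  # assumed address


def query(expr):
    # /api/v1/query is Prometheus's standard instant-query endpoint.
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": expr}, timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]["result"]


if __name__ == "__main__":
    for series in query("up == 0"):
        labels = series["metric"]
        print(f"DOWN: job={labels.get('job')} instance={labels.get('instance')}")
```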
Posted 1 month ago
0 years
0 Lacs
India
Remote
Job Summary:
WHO ARE YOU?
Passionate and motivated. Driven, with an entrepreneurial spirit. Resourceful, innovative, forward-thinking and committed. At Live Nation Entertainment, our people embrace these qualities, so if this sounds like you then please read on!

THE ROLE
As the Abuse Operations Engineering Lead, you'll be part of a mission-critical team protecting the Ticketmaster platforms from abusive entities, or those who deploy abusive digital behaviours designed to circumvent our controls that protect fair access to tickets. Abuse Operations is a centrally managed command and control centre for abuse investigations, escalations, policies, and tooling for all Ticketmaster properties and systems. Abuse Operations Engineers must be able to work independently across a broad tech stack, multi-task concurrent problems, and perform triage and prioritization as necessary with discretion and pragmatic judgment. They provide expert coordination and perform analysis and remediation of abuse for supported products and services, maintaining a high standard of diagnostics and communication while driving to complete resolution. They actively reduce operational effort by creating/improving automation or working with Software Engineering teams to improve self-healing and self-service tooling, documentation, and processes.

WHAT THIS ROLE WILL DO
Provide 1st line support for all Ticketmaster abuse queries.
Perform on-call duty as part of a global team monitoring the availability and performance of the ticketing systems and APIs used by third-party services, as well as the various internal services and systems on which these interfaces depend.
Resolve advanced issues and provide advanced troubleshooting for escalations.
Provide Subject Matter Expertise to cross-functional teams on abuse issues, including strategy, issue troubleshooting, and product & tool requirements.
Drive continuous improvements to our products, tools, configurations, APIs and processes by sharing learnings, constructive feedback, and design input with internal technical teams and integrators.
Independently learn new technologies and master Ticketmaster ticketing platforms, products and services to provide 'full stack' diagnostics to help determine the root cause of issues, and where appropriate help our integrators through their issues.
Ensure runbooks, resolution responses, internal processes and integration documentation are up to date and to a high standard suitable for internal stakeholder usage.
Work on automation to reduce toil.

WHAT THIS PERSON WILL BRING
BA/BS degree in computer science or a related field, or relevant work experience in lieu of a degree.
Experience with bot detection and blocking systems.
Troubleshooting skills ranging from diagnosing low-level request issues to large-scale issues that require correlating data between various third-party partners and in-house systems.
Proficiency in Bash/Python/Go etc. for operations scripts and text processing.
Working knowledge of the HTTP protocol and basic web systems, analysis tools such as Splunk and Kibana/ELK stack, and database products (Oracle/MySQL/Databricks/Snowflake/etc.).
Experience working with a 24/7 shift-based team.
Experience in a global, fast-paced environment, resolving multiple interrupt-driven priorities simultaneously.
Passionate and motivated, resourceful, innovative, forward-thinking.
Strong English language communication skills and the ability to collaborate closely with remote team members.
Ability to work with autonomy while ensuring that new knowledge is shared with technology teams.
Committed and able to adapt quickly.
Embrace continuous learning and continuous improvement.
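The scripting and log-analysis skills listed above can be illustrated with a small, hedged Python sketch that counts requests per client IP in an access log and flags heavy hitters. The log path and threshold are assumptions for illustration only.

```python
"""Sketch: count requests per client IP in a combined-format access log and
flag potentially abusive traffic. The log path and threshold are assumptions."""
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"   # hypothetical path
THRESHOLD = 1000                         # request count considered suspicious


def count_ips(path):
    counts = Counter()
    with open(path) as fh:
        for line in fh:
            # In combined log format the first whitespace-separated field is the client IP.
            ip = line.split(" ", 1)[0]
            counts[ip] += 1
    return counts


if __name__ == "__main__":
    for ip, n in count_ips(LOG_PATH).most_common(20):
        flag = "  <-- above threshold" if n > THRESHOLD else ""
        print(f"{ip}\t{n}{flag}")
```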
Posted 1 month ago
5.0 years
3 Lacs
Thiruvananthapuram
On-site
Job Requirements
Quest Global is an organization at the forefront of innovation and one of the world’s fastest growing engineering services firms, with deep domain knowledge and recognized expertise in the top OEMs across seven industries. We are a twenty-five-year-old company on a journey to becoming a centenary one, driven by aspiration, hunger and humility. We are looking for humble geniuses, who believe that engineering has the potential to make the impossible, possible; innovators, who are not only inspired by technology and innovation, but also perpetually driven to design, develop, and test as a trusted partner for Fortune 500 customers. As a team of remarkably diverse engineers, we recognize that what we are really engineering is a brighter future for us all. If you want to contribute to meaningful work and be part of an organization that truly believes when you win, we all win, and when you fail, we all learn, then we’re eager to hear from you.

The achievers and courageous challenge-crushers we seek have the following characteristics and skills:

Roles & Responsibilities:
Collaborate with business stakeholders to gather and translate data requirements into analytical solutions.
Analyze large and complex datasets to identify trends, patterns, and actionable insights.
Design, develop, and maintain interactive dashboards and reports using Elasticsearch/Kibana or Power BI.
Conduct ad-hoc analyses and deliver data-driven narratives to support business decision-making.
Ensure data accuracy, consistency, and integrity through rigorous validation and quality checks.
Write and optimize SQL queries, views, and data models for reporting and analysis.
Present findings through compelling visualizations, presentations, and written summaries.
Work closely with data engineers and architects to enhance data pipelines and infrastructure.
Contribute to the development and standardization of KPIs, metrics, and data governance practices.

Work Experience
Required Skills (Technical Competency):
Bachelor’s or master’s degree in Data Science, Computer Science, Statistics, or a related field.
5+ years of experience in a data analyst or business intelligence role.
Proficiency in SQL and data visualization tools such as Power BI, Kibana, or similar.
Proficiency in Python, Excel and data storytelling.
Understanding of data modelling, ETL concepts, and basic data architecture.
Strong analytical thinking and problem-solving skills.
Excellent communication and stakeholder management skills.
Adherence to the Information Security Management policies and procedures.

Desired Skills:
Elasticsearch/Kibana, Power BI, AWS, Python, SQL, Data modelling, Data analysis, Data quality checks, Data validation, Data visualization, Stakeholder communication, Excel, Data storytelling, Team collaboration, Problem-solving, Analytical thinking, Presentation skills, ETL concepts.
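As a hedged illustration of the validation and ad-hoc analysis work described in this listing, the sketch below runs basic quality checks and a grouped summary with pandas. The file name and column names are assumptions.

```python
"""Sketch of an ad-hoc analysis with pandas: basic quality checks plus a grouped
summary. The file name and column names are illustrative assumptions."""
import pandas as pd

df = pd.read_csv("sales.csv")          # hypothetical dataset

# Data quality: null counts and duplicate rows, per the validation step above.
print(df.isna().sum())
print("duplicate rows:", df.duplicated().sum())

# Simple KPI: revenue by region and month (assumed columns).
df["order_date"] = pd.to_datetime(df["order_date"])
summary = (
    df.assign(month=df["order_date"].dt.to_period("M"))
      .groupby(["region", "month"], as_index=False)["revenue"]
      .sum()
)
print(summary.head())
```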
Posted 1 month ago
4.0 - 6.0 years
3 Lacs
Thiruvananthapuram
On-site
Job Requirements
Quest Global is an organization at the forefront of innovation and one of the world’s fastest growing engineering services firms, with deep domain knowledge and recognized expertise in the top OEMs across seven industries. We are a twenty-five-year-old company on a journey to becoming a centenary one, driven by aspiration, hunger and humility. We are looking for humble geniuses, who believe that engineering has the potential to make the impossible, possible; innovators, who are not only inspired by technology and innovation, but also perpetually driven to design, develop, and test as a trusted partner for Fortune 500 customers. As a team of remarkably diverse engineers, we recognize that what we are really engineering is a brighter future for us all. If you want to contribute to meaningful work and be part of an organization that truly believes when you win, we all win, and when you fail, we all learn, then we’re eager to hear from you.

The achievers and courageous challenge-crushers we seek have the following characteristics and skills:

Roles & Responsibilities:
Design, develop, and maintain scalable and efficient data pipelines.
Data engineering, ETL development and data integration.
Develop data pipelines using Python.
Visualize data in Kibana dashboards.
SQL skills and working with relational databases.
Knowledge of cloud-based data solutions and services.
Take ownership of assigned jobs that are part of new feature implementations, bug fixes and enhancement activities.
Document projects according to project standards (architecture, technical specifications).
Technical communication with internal and external stakeholders.
Work independently, contribute to the immediate team, and work with architects and other leads.

Work Experience
Required Skills (Technical Competency):
Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
4-6 years of experience in data engineering, preferably in a healthcare domain.
Strong SQL and Python skills; experience with Spark is a plus.
Knowledge of cloud architecture.
Excellent programming and debugging skills.
Ability to write effective and reusable code according to best practices.
Experience with HIPAA-compliant data handling and security practices.
Excellent communication and presentation skills.
Good customer-interfacing skills.

Desired Skills:
Experience in Python, PySpark
Elasticsearch, Kibana
Knowledge of SQL Server
Ability to deliver without much supervision from leads/managers
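For the Python-based pipeline work this listing describes, here is a minimal, hedged extract-transform-load sketch: read a CSV, clean it, and load it into SQLite. The source file, columns, and table name are assumptions, not details from the posting.

```python
"""Sketch of a small extract-transform-load step in Python. Source file,
column names, and table name are illustrative assumptions."""
import sqlite3

import pandas as pd


def run_pipeline(src="events.csv", db="warehouse.db", table="events_clean"):
    df = pd.read_csv(src)                          # extract
    df = df.dropna(subset=["event_id"])            # transform: drop incomplete rows
    df["event_ts"] = pd.to_datetime(df["event_ts"])
    with sqlite3.connect(db) as conn:              # load
        df.to_sql(table, conn, if_exists="replace", index=False)
    return len(df)


if __name__ == "__main__":
    print("rows loaded:", run_pipeline())
```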
Posted 1 month ago
5.0 years
0 Lacs
Bengaluru
On-site
Job Summary
We are seeking a Senior Software Engineer to join our Site Reliability Engineering team, with a focus on Observability and Reliability. As a key member of our SRE team, you will play a critical role in ensuring the performance, stability, and availability of our applications and systems, with a focused approach to Application Performance Management and the observability and reliability of the platform. The Senior Software Engineer will be responsible for the design, implementation, and maintenance of our observability and reliability infrastructure, with a primary focus on the ELK stack (Elasticsearch, Logstash, and Kibana). The role involves configuring, fine-tuning, and automating alerts, integrating Elastic solutions with other tools and applications, generating reports, and optimizing the observability and monitoring systems.

Key Duties & Responsibilities
1. Collaborate with cross-functional teams to define and implement observability and reliability standards and best practices.
2. Design, deploy, and maintain the ELK stack for log aggregation, monitoring, and analysis.
3. Develop and maintain alerts and monitoring systems, ensuring early detection of issues and rapid incident response.
4. Create, customize, and maintain dashboards in Kibana for different stakeholders.
5. Collaborate with software development teams to identify performance bottlenecks and recommend solutions.
6. Automate manual tasks and workflows to streamline observability and reliability processes.
7. Conduct regular system and application performance analysis and optimization, effective automation and tooling, capacity planning and optimization, security practices and compliance adherence, documentation and knowledge sharing, and disaster recovery and backup.
8. Generate and deliver detailed reports on system performance and reliability metrics.
9. Stay up to date with industry trends and best practices in observability and reliability engineering.

Qualifications/Skills/Abilities
Minimum Requirements
Formal Education: Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience).
Experience (type & duration): 5+ years of experience in Site Reliability Engineering, Observability & Reliability, and DevOps.

Skills
Proficiency in configuring and maintaining the ELK stack (Elasticsearch, Logstash, Kibana) is mandatory.
Strong scripting and automation skills, with expertise in Python, Bash, or similar languages.
Experience in data structures using Elasticsearch indices.
Experience in writing data ingestion pipelines using Logstash.
Experience with infrastructure as code (IaC) and configuration management tools (e.g., Ansible, Terraform).
Hands-on experience with cloud platforms (AWS preferred) and containerization technologies (e.g., Docker, Kubernetes).
Telecom domain expertise is good to have but not mandatory.
Strong problem-solving skills and the ability to troubleshoot complex issues in a production environment.
Excellent communication and collaboration skills.

Accreditation/certifications/licenses
Relevant certifications (e.g., Elastic Certified Engineer) are a plus.
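To illustrate the kind of check an ELK-based alert or dashboard in this role might encode, here is a hedged sketch that pulls recent error-level log entries from Elasticsearch with a Query DSL request. The cluster URL, index pattern, and field names are assumptions about a typical logging setup.

```python
"""Sketch: query Elasticsearch for error-level log entries from the last 15 minutes.
Cluster address, index pattern, and field names are illustrative assumptions."""
import requests

ES_URL = "http://elasticsearch.example.internal:9200"  # assumed cluster address

query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"log.level": "error"}},
                {"range": {"@timestamp": {"gte": "now-15m"}}},
            ]
        }
    },
    "size": 10,
    "sort": [{"@timestamp": "desc"}],
}

resp = requests.post(f"{ES_URL}/app-logs-*/_search", json=query, timeout=10)
resp.raise_for_status()
hits = resp.json()["hits"]
print("errors in the last 15 minutes:", hits["total"]["value"])
for hit in hits["hits"]:
    src = hit["_source"]
    print(src.get("@timestamp"), src.get("message"))
```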
Posted 1 month ago
7.0 - 10.0 years
2 - 6 Lacs
Noida
On-site
Req ID: 329228
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.

We are currently seeking a Platform Administrator to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).

Job highlights
Qualifications
Excellent oral and written English communication skills; specifically, needs to be able to communicate well on phone calls.
Must be able to thrive under pressure and have a strong sense of ownership and responsibility for the project.
Minimum of 7-10 years of experience in the administration, architecture, design, and development of ETL programs using Informatica's data integration tools; Informatica experience must include working with large, mission-critical data systems (multiple terabytes in size, with millions of rows processed per day).
Must be able to demonstrate mastery of Informatica command line utilities.
Must be able to demonstrate mastery of quickly debugging and resolving issues that can arise with Informatica programs.
Must know how to navigate the Informatica metadata repository.
Expert knowledge of Informatica version 8.5 and above.

Responsibilities
This person will be part of the team that supports a growing Data Warehousing and Business Intelligence environment using Informatica, Oracle 11g/10g RDBMS, SQL Server 2005/2008, Kibana, and Elasticsearch. This individual will be responsible for the following:
o Administering the Informatica environment on UNIX; hands-on experience as an Elasticsearch administrator.
o Apply patches/upgrades/hotfixes for Informatica PowerCenter.
o Elasticsearch administration (cluster setup, Fleet Server, Agent, Logstash, Kibana, data modelling concepts) and the Elastic Stack.
o Proficiency with Elasticsearch DSL for complex query development.
o Create ETL processes using ingest pipelines.
o Monitor performance, troubleshoot, and tune ETL processes.
o Will be responsible for 24x7 support.
o Work with the Informatica vendor on any issues that arise.
o Support the Informatica developer team as needed.
o Ensure development teams are following appropriate standards.
o Promote and deploy Informatica code.
o Assist the development team at design time, ensuring code will run at scale.
o Serve as an Informatica Developer as needed to get work completed.
o Write and maintain Bash shell scripts for administering the Informatica environment.
o Design and implement appropriate error handling procedures.
o Develop project, documentation, and ETL standards in conjunction with data architects.
o Work with other team members to ensure standards around monitoring and management of the Informatica environment within the context of existing enterprise monitoring solutions.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world.
NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
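The "ETL processes using ingest pipelines" duty in this listing can be illustrated with a hedged sketch that registers a simple Elasticsearch ingest pipeline (grok plus date processors) over the REST API. The cluster URL, pipeline name, and field layout are assumptions.

```python
"""Sketch: register a simple Elasticsearch ingest pipeline over the REST API.
Cluster address, pipeline name, and field layout are illustrative assumptions."""
import requests

ES_URL = "http://elasticsearch.example.internal:9200"  # assumed cluster address

pipeline = {
    "description": "parse raw syslog-style lines",
    "processors": [
        {"grok": {
            "field": "message",
            "patterns": ["%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}"],
        }},
        {"date": {"field": "ts", "formats": ["ISO8601"]}},
    ],
}

# PUT /_ingest/pipeline/<id> creates or updates the pipeline definition.
resp = requests.put(f"{ES_URL}/_ingest/pipeline/app-log-parse", json=pipeline, timeout=10)
resp.raise_for_status()
print(resp.json())  # expect {"acknowledged": true}
```

Documents indexed with `?pipeline=app-log-parse` would then have their raw `message` field parsed into structured fields before being stored.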
Posted 1 month ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Infilon Technologies Pvt Ltd, a prominent software development company located in Ahmedabad, is hiring a Senior Site Reliability Engineer (Immediate Joiner) for one of its clients, TenForce. TenForce is an expert in EHSQ and Operational Risk Management software, based in Belgium and part of Elisa Industriq, a Finnish group committed to making intelligent manufacturing happen.

Job Location: Ahmedabad, Gujarat (Work from Office)
Experience: 5+ Years

The Site Reliability Engineer we are looking for has the following characteristics:
Strong team player skills, excellent communication skills, and the ability to communicate openly and contribute actively to group discussions and brainstorming sessions.
A proactive approach to identifying problems, performance bottlenecks, and areas for improvement.
An affinity with DevOps best practices.
Willingness to perform root cause analysis on incidents, prepare detailed reports to present to stakeholders, and develop solutions to prevent similar incidents from occurring in the future.
An interest in developing tools to extend the functionality of the monitoring platform.
Problem-solving skills: you can identify problems, analyze them, and make them disappear.
Strong collaboration skills to provide quick and accurate feedback.

Who are we looking for?
Hands-on experience working with enterprise web applications and IIS.
A good understanding of Git.
Hands-on experience with SQL and Redis.
A working understanding of infrastructure and virtualized environments.
Fluent in English (oral and written) and a strong communicator.
Knowledge of Scrum and of the Product Owner role.
Experience with Elasticsearch and Kibana for investigating data sets is a plus.
Knowledge of log collection systems (e.g., Logstash, Filebeat, …) is a plus.
Willingness to work with .NET.
A good knowledge of Linux OS and experience with Bash and/or Linux command-line utilities is a plus.

What is in it for you?
You become part of an international, multicultural team that loves solving challenges through an unconventional and pragmatic approach but does not tolerate breaking the boundaries of trust, mutual respect, diversity, inclusion, and team-player spirit.
Tackling a wide range of challenges daily, across multiple customers, and continuously growing and expanding your expertise.
A lot of responsibility and impact on a scaling organization with big goals.
Eternal respect from your colleagues as you build simple, powerful and future-proof solutions.

Join us today and power your potential! Interested candidates, kindly share your CV at hr@infilon.com
www.infilon.com
Ahmedabad, Gujarat
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as an Application Support Specialist at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. As an API Application Support Specialist, you will be accountable for API production support in a follow-the-sun model, with a focus on excelling in the service we provide to our colleagues and customers. You will provide incident and problem management across the Product Tech API services, meeting the Bank's SLA for incident management, joining MIM calls and supporting a 24 x 7 x 365 system.

To be successful as an Application Support Specialist you should have experience with:
API Support: Working experience/understanding of APIs. Working knowledge of API and aPaaS technologies, OpenShift, databases and interfaces.
Linux/Unix Environment Expertise: Working knowledge of Linux/Unix commands and scripting for automation and optimization. Familiarity with server configurations, log management, and shell scripting. Flexible approach and ability to work under pressure.
Communication and Collaboration: Ability to communicate effectively with cross-functional teams and stakeholders.
Analytical and Problem-Solving Skills: Strong analytical skills to address complex challenges and effective troubleshooting of production issues in Prodtech API environments. Documenting configurations, processes, and best practices for the team. A proactive approach to identifying and mitigating risks.
API issue analysis: Must have an understanding of the Kibana log aggregator tool.
System Monitoring and Maintenance: Regularly monitoring system health and ensuring platform stability. Applying patches. Knowledge of alerting and monitoring tools like AppD, Netcool, etc. Good to have knowledge of Jenkins and Bitbucket. ITIL v3 certified.
Troubleshooting and Issue Resolution: Diagnosing and resolving system, application, and performance-related issues. Providing technical support and collaborating with other IT teams to resolve issues promptly.

Some other highly valued skills include:
Work experience in incident and problem management/business analysis is strongly desired.
Good analytical investigation techniques.
Own, maintain and track incidents through their entire lifecycle; strong analytical skills.
Flexible approach and ability to work under pressure.
On-call support, available 24x7 when on call.
Hands-on, able to work independently and, if required, guide others.
Good written and oral communication skills.
Ability to work under own initiative and handle pressure situations.
Good time management skills.
Previous second-line support experience.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based in Pune.

Purpose of the role
To effectively monitor and maintain the bank’s critical technology infrastructure and resolve more complex technical issues, whilst minimising disruption to operations.

Accountabilities
Provision of technical support for the service management function to resolve more complex issues for a specific client or group of clients. Develop the support model and service offering to improve the service to customers and stakeholders.
Execution of preventative maintenance tasks on hardware and software, and utilisation of monitoring tools/metrics to identify, prevent and address potential issues and ensure optimal performance.
Maintenance of a knowledge base containing detailed documentation of resolved cases for future reference, self-service opportunities and knowledge sharing.
Analysis of system logs, error messages and user reports to identify the root causes of hardware, software and network issues, and providing a resolution to these issues by fixing or replacing faulty hardware components, reinstalling software, or applying configuration changes.
Automation, monitoring enhancements, capacity management, resiliency, business continuity management, front office specific support and stakeholder management.
Identification and remediation, or raising through the appropriate process, of potential service-impacting risks and issues.
Proactively assess support activities, implementing automation where appropriate to maintain stability and drive efficiency. Actively tune monitoring tools, thresholds, and alerting to ensure issues are known when they occur.

Analyst Expectations
To perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement.
Requires in-depth technical knowledge and experience in their assigned area of expertise.
Thorough understanding of the underlying principles and concepts within the area of expertise.
They lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they develop technical expertise in their work area, acting as an advisor where appropriate.
Will have an impact on the work of related teams within the area.
Partner with other functions and business areas.
Takes responsibility for end results of a team’s operational processing and activities.
Escalate breaches of policies/procedures appropriately.
Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
Advise and influence decision making within own area of expertise.
Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
Deliver your work and areas of responsibility in line with relevant rules, regulations and codes of conduct.
Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation's products, services and processes within the function.
Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation's sub-function.
Make evaluative judgements based on the analysis of factual information, paying attention to detail.
Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
Guide and persuade team members and communicate complex/sensitive information.
Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation.
All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
Remote
Job Summary:
WHO ARE YOU?
Passionate and motivated. Driven, with an entrepreneurial spirit. Resourceful, innovative, forward-thinking and committed. At Live Nation Entertainment, our people embrace these qualities, so if this sounds like you then please read on!

THE ROLE
As the Abuse Operations Engineering Lead, you'll be part of a mission-critical team protecting the Ticketmaster platforms from abusive entities, or those who deploy abusive digital behaviours designed to circumvent our controls that protect fair access to tickets. Abuse Operations is a centrally managed command and control centre for abuse investigations, escalations, policies, and tooling for all Ticketmaster properties and systems. Abuse Operations Engineers must be able to work independently across a broad tech stack, multi-task concurrent problems, and perform triage and prioritization as necessary with discretion and pragmatic judgment. They provide expert coordination and perform analysis and remediation of abuse for supported products and services, maintaining a high standard of diagnostics and communication while driving to complete resolution. They actively reduce operational effort by creating/improving automation or working with Software Engineering teams to improve self-healing and self-service tooling, documentation, and processes.

What This Role Will Do
Provide 1st line support for all Ticketmaster abuse queries.
Perform on-call duty as part of a global team monitoring the availability and performance of the ticketing systems and APIs used by third-party services, as well as the various internal services and systems on which these interfaces depend.
Resolve advanced issues and provide advanced troubleshooting for escalations.
Provide Subject Matter Expertise to cross-functional teams on abuse issues, including strategy, issue troubleshooting, and product & tool requirements.
Drive continuous improvements to our products, tools, configurations, APIs and processes by sharing learnings, constructive feedback, and design input with internal technical teams and integrators.
Independently learn new technologies and master Ticketmaster ticketing platforms, products and services to provide 'full stack' diagnostics to help determine the root cause of issues, and where appropriate help our integrators through their issues.
Ensure runbooks, resolution responses, internal processes and integration documentation are up to date and to a high standard suitable for internal stakeholder usage.
Work on automation to reduce toil.

What This Person Will Bring
BA/BS degree in computer science or a related field, or relevant work experience in lieu of a degree.
Experience with bot detection and blocking systems.
Troubleshooting skills ranging from diagnosing low-level request issues to large-scale issues that require correlating data between various third-party partners and in-house systems.
Proficiency in Bash/Python/Go etc. for operations scripts and text processing.
Working knowledge of the HTTP protocol and basic web systems, analysis tools such as Splunk and Kibana/ELK stack, and database products (Oracle/MySQL/Databricks/Snowflake/etc.).
Experience working with a 24/7 shift-based team.
Experience in a global, fast-paced environment, resolving multiple interrupt-driven priorities simultaneously.
Passionate and motivated, resourceful, innovative, forward-thinking.
Strong English language communication skills and the ability to collaborate closely with remote team members.
Ability to work with autonomy while ensuring that new knowledge is shared with technology teams.
Committed and able to adapt quickly.
Embrace continuous learning and continuous improvement.
Posted 1 month ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
At NiCE, we don’t limit our challenges. We challenge our limits. Always. We’re ambitious. We’re game changers. And we play to win. We set the highest standards and execute beyond them. And if you’re like us, we can offer you the ultimate career opportunity that will light a fire within you.

So, what’s the role all about?
We are seeking a proactive and detail-oriented Technical Support Engineer to join our support operations team. In this role, you will provide Tier 3 support for deployed applications, monitor system dashboards and alerts, and collaborate with advanced support and R&D teams to ensure high availability, performance, and reliability of our services. This is a 24x7 rotational support role critical to maintaining seamless global operations.

How will you make an impact?
Monitor and manage production environments using tools like Azure Monitor, Application Insights, Grafana, and Kibana.
Respond to Azure alerts, investigate telemetry and logs, and identify root causes of application issues.
Troubleshoot REST APIs using Postman, diagnose request/response failures, and validate integrations.
Perform log analysis and diagnostics using Kibana and Application Insights.
Collaborate with Tier 4 support and R&D to escalate and resolve complex incidents.
Ensure accurate and timely resolution of issues within defined SLAs and KPIs.
Contribute to the creation of runbooks, knowledge base articles, and standard operating procedures (SOPs).
Participate in 24x7 rotational shifts, including nights, weekends, and holidays.

Have you got what it takes?
Bachelor’s degree in Computer Science, Information Technology, or a related field (B.E/B.Tech/BS).
3-5 years of experience in technical support, application monitoring, or cloud support services.
Strong hands-on experience with: Azure Cloud Services (Monitor, Alerts, Application Insights); Grafana and Kibana for metrics/logs visualization; Postman for API testing and troubleshooting.
Good understanding of cloud-native web applications and microservices architecture.
Familiarity with Linux/Unix systems and basic shell commands.
Experience with ITSM/ticketing tools like ServiceNow, Jira, or Zendesk.
Excellent communication, analytical thinking, and problem-solving skills.
Willingness to work in a 24x7 rotational support model.

You will have an advantage if you also have:
Scripting knowledge in Shell, PowerShell, or Python.
Understanding of containerization (Docker) and orchestration (Kubernetes).
ITIL Foundation certification or working knowledge of ITIL processes.
Exposure to CI/CD pipelines and DevOps practices.

What’s in it for you?
Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX!
At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7435
Reporting into: Technical Manager / Director of Engineering
Role Type: Individual Contributor

About NiCE
NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NiCE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions. Known as an innovation powerhouse that excels in AI, cloud and digital, NiCE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries. NiCE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.
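The API troubleshooting described in this listing (Postman-style checks, latency inspection, correlating failures with logs) can be illustrated with a small, hedged Python sketch. The endpoint URL and bearer token are placeholders, not details from the posting.

```python
"""Sketch: the kind of scripted check used when triaging an API incident —
call an endpoint, time it, and inspect status and body. URL and token are placeholders."""
import time

import requests

URL = "https://api.example.internal/v1/health"        # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}          # placeholder credential

start = time.monotonic()
resp = requests.get(URL, headers=HEADERS, timeout=10)
elapsed_ms = (time.monotonic() - start) * 1000

print(f"status={resp.status_code} latency={elapsed_ms:.0f}ms")
if resp.status_code != 200:
    # Print enough of the body to correlate with Kibana / Application Insights traces.
    print(resp.text[:500])
```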
Posted 1 month ago
5.0 - 10.0 years
7 - 17 Lacs
Bengaluru
Work from Office
About this role:
Wells Fargo is seeking a Lead Software Engineer.

In this role, you will:
Lead complex technology initiatives including those that are companywide with broad impact.
Act as a key participant in developing standards and companywide best practices for engineering complex and large-scale technology solutions for technology engineering disciplines.
Design, code, test, debug, and document for projects and programs.
Review and analyze complex, large-scale technology solutions for tactical and strategic business objectives, enterprise technological environment, and technical challenges that require in-depth evaluation of multiple factors, including intangibles or unprecedented technical factors.
Make decisions in developing standard and companywide best practices for engineering and technology solutions requiring understanding of industry best practices and new technologies, influencing and leading the technology team to meet deliverables and drive new initiatives.
Collaborate and consult with key technical experts, senior technology team, and external industry groups to resolve complex technical issues and achieve goals.
Lead projects and teams, or serve as a peer mentor.

Required Qualifications:
5+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.

Desired Qualifications (tools/skills):
Ability to troubleshoot issues in an enterprise application.
Python, PowerShell scripting/programming.
CI/CD tools: Jenkins, Bamboo (or similar).
Git version control.
Software build and application packaging: Maven, Gradle.
Application load balancing: hardware or software load-balancing experience (AVI/F5).
Application Performance Management tools: AppDynamics/Dynatrace.
Splunk, Kibana monitoring.
Loan IQ application setup/configuration.
Ansible, Puppet: infrastructure automation and configuration management tools.
Intermediate knowledge of Windows and Linux operating systems.

Must have:
Understanding of AutoSys (or a similar job scheduler) related to job configuration and execution.
Understanding of Tomcat configurations.
Understanding of uDeploy (or similar deployment tools).
Understanding of Pac200/Remedy or ServiceNow enterprise request management tools.
Basic knowledge of SQL and RDBMS databases.
Basic understanding of the Java runtime environment.
Ability to troubleshoot job failures and ensure the correct resources are contacted for resolution.
Windows operating system fundamentals.

Job Expectations:
Experience supporting an enterprise-level application as a technologist.
Support Dev/SIT and UAT environments.
Hosting and managing Java and .NET based applications on Windows Server operating systems.
Execution and monitoring of application jobs from a job scheduler (AutoSys).
Application code deployment using deployment tools (IBM uDeploy).
Working with enterprise change management tools (ServiceNow).
Troubleshoot, identify and fix application issues.
Excellent written and verbal skills: communicating technical issues to stakeholders, providing solutions, and documenting them.
Ability to coordinate with various teams across geographies to resolve technical issues.
Working with SQL and Oracle database schema management.
Support the Java runtime on environments and application hosting using Apache Tomcat and Node.js.
Automate routine tasks and remove toil.
Create new application environments based on requirements.
Incorporate enterprise DevOps & SRE practices.
Innovate to improve CI/CD tools/practices.
Configure and work with application load-balancing software and other monitoring tools.
Support the build and application packaging process.
Apply enterprise-level security mandates to the environments.
Posted 1 month ago
4.0 - 8.0 years
12 - 15 Lacs
Gurugram
Hybrid
Proficiency with the web stack and web services applications.
Experience in troubleshooting and analytical skills to determine the root cause of issues.
Working understanding of relational and NoSQL database concepts.
Experience in Linux and Kibana.

Required Candidate profile
Comfortable with a 24x7x365 support role.
Exceptional verbal and written communication.
Docker containerization, virtualization.
Basic networking knowledge.
Experience in Application Monitoring Tools.
Posted 1 month ago
4.0 - 9.0 years
11 - 14 Lacs
Bengaluru
Work from Office
Educational: Bachelor of Engineering
Service Line: Quality

Responsibilities:
Experience in one or more high-level programming languages like Python, Ruby or GoLang, and familiarity with object-oriented programming.
Proficient with designing, deploying and managing distributed systems and service-oriented architectures.
Design and implement the CI/CD/CT pipeline on one or more tool stacks, like Jenkins, Bamboo, Azure DevOps, and AWS CodePipeline, with hands-on experience in common DevOps tools (Jenkins, Sonar, Maven, Git, Nexus, UCD, etc.).
Experience in deploying, managing and monitoring applications and services on one or more cloud and on-premises infrastructures like AWS, Azure, OpenStack, Cloud Foundry, OpenShift, etc.
Proficiency in one or more infrastructure-as-code tools (e.g. Terraform, CloudFormation, Azure ARM, etc.).
Developing and managing monitoring tools and log analysis tools to manage operations, with exposure to tools such as AppDynamics, Datadog, Splunk, Kibana, Prometheus, Grafana, Elasticsearch, etc.
Proven ability to maintain enterprise-scale production software, with knowledge of heterogeneous system landscapes (e.g. Linux, Windows).
Expertise in analyzing and troubleshooting large-scale distributed systems and microservices, with experience in Unix/Linux operating systems internals and administration (e.g., file systems, inodes, system calls) and networking (e.g., TCP/IP, routing, network topologies).

Preferred Skills: Technology-DevOps-Continuous Testing
Posted 1 month ago
9.0 - 11.0 years
12 - 17 Lacs
Thiruvananthapuram
Work from Office
Educational: Bachelor of Engineering, Bachelor of Science, Bachelor of Technology, Bachelor of Computer Applications, Master of Technology, Master of Engineering, Master of Science, Master of Computer Applications
Service Line: Engineering Services

Responsibilities:
Collect, clean, and organize large datasets from various sources.
Perform data analysis using statistical methods, machine learning techniques, and data visualization tools.
Identify patterns, trends, and anomalies within datasets to uncover insights.
Develop and maintain data models to represent the organization's business operations.
Create interactive dashboards and reports to communicate data findings to stakeholders.
Document data analysis procedures and findings to ensure knowledge transfer.

Additional Responsibilities:
High analytical skills.
A high degree of initiative and flexibility.
High customer orientation.
High quality awareness.
Excellent verbal and written communication skills.
Logical thinking and problem-solving skills, along with an ability to collaborate.
Knowledge of two or three industry domains.
Understanding of the financial processes for various types of projects and the various pricing models available.
Client interfacing skills.
Knowledge of SDLC and agile methodologies.
Project and team management.

Technical and Professional:
5+ years of experience as a Data Analyst or in a similar role.
Proven track record of collecting, cleaning, analyzing, and interpreting large datasets.
Expertise in pipeline design and validation.
Expertise in statistical methods, machine learning techniques, and data mining techniques.
Proficiency in SQL, Python, PySpark, Looker, Prometheus, Carbon, ClickHouse, Kafka, HDFS and the ELK stack (Elasticsearch, Logstash, and Kibana).
Experience with data visualization tools such as Grafana and Looker.
Ability to work independently and as part of a team.
Problem-solving and analytical skills to extract meaningful insights from data.
Strong business acumen to understand the implications of data findings.

Preferred Skills:
Technology-Analytics - Packages-Python - Big Data
Technology-Reporting Analytics & Visualization-Pentaho Reporting
Technology-Cloud Platform-Google Big Data
Technology-Cloud Platform-GCP Container services-Google Container Registry (GCR)

Generic Skills:
Technology-Machine Learning-Python
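For the PySpark-based profiling this listing calls for, here is a hedged sketch of a simple grouped aggregation. The input path and column names are assumptions chosen only to make the example concrete.

```python
"""Sketch of a PySpark aggregation for dataset profiling. Input path and
column names are illustrative assumptions."""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("txn-profile").getOrCreate()

# Assumed CSV input with header row; a real job would point at the actual source.
df = spark.read.option("header", True).csv("transactions.csv")

summary = (
    df.withColumn("amount", F.col("amount").cast("double"))
      .groupBy("merchant_category")
      .agg(F.count("*").alias("txn_count"), F.avg("amount").alias("avg_amount"))
      .orderBy(F.desc("txn_count"))
)
summary.show(20, truncate=False)
spark.stop()
```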
Posted 1 month ago
6.0 - 11.0 years
8 - 13 Lacs
Pune
Work from Office
6+ years of overall experience with Tableau Desktop and Tableau Server.
Must have experience in DB/SQL.
Must have experience in charts.
Experience in reports and dashboards.
Experienced in creating custom views and custom subscriptions in Tableau Server.
Experience in JIRA, Kibana, Git, Bitbucket.
Strong verbal and written communication skills.
Posted 1 month ago
1.0 - 3.0 years
3 - 7 Lacs
Bengaluru
Work from Office
As a Software QA Engineer, you have the opportunity to accelerate the delivery and improve the quality of the IBM Cloud and other enterprise Object Storage deployments. You will be part of a test team that is working on problems in a number of areas, including distributed systems, Exabyte-scale storage, Linux OS distribution and network protocols. IBM's breadth of technology offers an amazing range of opportunities for you to make a big impact on the quality of storage used in the IBM Cloud and in some of the largest storage installations in the world. The successful candidate will be a highly analytical, collaborative engineer who is capable of exercising judgment in selecting the appropriate test scenarios and test methodologies, interpreting and reporting system behavior against customer expectations, and communicating anomalous observations to the development teams.

Responsibilities:
Analyze and decompose a complicated software system with changing hardware platforms and design the correct strategy to test this system.
Develop automated functional and system-level tests using Python, and participate in peer code reviews.
Participate in feature development teams and communicate potential system-impacting risks, requirement gaps and test scenarios.
Ensure comprehensive test coverage by participating in escaped defect analysis to identify deficiencies and improvements in the automated tests.
Create and manage tasks for newly introduced features and document test strategies and plans.
Communicate to team members on changes of functional system feature interactions and their system performance impacts.
Identify, recommend and implement automated test improvements for new and existing test scenarios.

Required education
Bachelor's Degree

Required technical and professional expertise
Minimum Qualifications:
Ability and tenacity to solve increasingly complex technical issues through analysis and a variety of problem-solving techniques.
Strong understanding of SQA methodologies and test development for complex systems.
Working knowledge of object-oriented Python, with demonstrable experience in applying these skills to system integration testing.
Working knowledge of system APIs and their uses.
Ability to develop, execute and debug functional/system-level tests.
Working knowledge of Linux environments.
Experience working in an Agile-Scrum development environment.
Experience using tools such as Jira, GitHub, TestRail, Docker, Kibana, Grafana.
BS in CS, CE or a similar field, plus 1 to 3 years of relevant work experience.

Preferred Qualifications:
Working knowledge of storage industry client tools (e.g. boto3, s3cmd, Postman) to perform S3 operations.
Knowledge of distributed computing principles, enterprise storage configurations and their usages.
Working knowledge of networking protocols, object storage APIs and S3.
Analyzing and characterizing system performance.
Experience with public cloud services (IBM Cloud, AWS, Azure).
MS in CS, CE or a similar field and 3+ years of relevant work experience.
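As a hedged illustration of the S3-style functional testing this listing mentions (boto3, s3cmd), the sketch below performs a put/get/delete round-trip check with boto3. The bucket name and endpoint are assumptions; a real suite would target the system under test and clean up through fixtures.

```python
"""Sketch of a functional S3 round-trip check with boto3. Bucket name and
endpoint URL are illustrative assumptions."""
import boto3

# Assumed S3-compatible endpoint; omit endpoint_url when targeting AWS S3 itself.
s3 = boto3.client("s3", endpoint_url="https://s3.example.internal")

BUCKET, KEY, PAYLOAD = "qa-test-bucket", "smoke/obj-1", b"hello object storage"

s3.put_object(Bucket=BUCKET, Key=KEY, Body=PAYLOAD)
fetched = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
assert fetched == PAYLOAD, "round-trip mismatch"

s3.delete_object(Bucket=BUCKET, Key=KEY)
print("put/get/delete round-trip OK")
```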
Posted 1 month ago
1.0 - 3.0 years
8 - 12 Lacs
Bengaluru
Work from Office
As a DevOps + Site Reliability Engineer you will work in an agile, collaborative environment to build, deploy, configure, and support services in the IBM Cloud. Your responsibilities will encompass the design and implementation of innovative features and automation, fine-tuning and sustaining existing code for optimal performance, uncovering efficiencies, supporting adopters globally, and driving to deliver a highly available cloud offering within IBM Cloud Security Services. In this role, you will be implementing and consuming APIs in the IBM Cloud infrastructure environment while configuring and integrating services. You will be a motivated self-starter who loves to solve challenging problems and feels comfortable managing multiple and changing priorities, and meeting deadlines in an entrepreneurial environment. Your primary responsibilities include: Contributing to new features and improving existing capabilities or processes while relentlessly troubleshooting problems to deliver. Practice secure development principles supporting continuous integration and delivery, leveraging tools such as Tekton, Ansible, and Terraform. Orchestrate and maintain Kubernetes/OpenShift clusters to ensure high availability and resilience. Collaborate across teams in activities including code reviews, testing, audit support, and mitigating issues. Continuously improve code, automation, testing, monitoring and alerting processes to ensure proactive identification and resolution of potential issues. Participate in on-call rotation and lead or contribute to the problem resolution process for our clients, from analysis and troubleshooting to deploying workarounds or fixes. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise 1-3 years of experience delivering code and debugging problems. 1-3 years of experience in an SRE, DevOps or similar role. A strong preference for collaborative teamwork. A rigorous approach to problem-solving. Experience with cloud computing technologies. Programming skills – scripting, Go, Python, or similar. Hands-on experience with container technologies: Kubernetes (IKS), Red Hat OpenShift, Docker, Rancher, Podman. Proficient with automation tools and CI/CD pipelines. Preferred technical and professional experience Strongly preferred experience in working with production Kubernetes/OpenShift environments. Excellent Git skills (merges, rebase, branching, forking, submodules). Experience with Tekton, Ansible, Terraform, Jenkins. Experience with Rust, C/C++, or Java. Experience using, configuring and troubleshooting CI/CD tooling. Excellent record of improving solutions through automation. Experience with monitoring and alerting tools (e.g., Prometheus, Grafana, Kibana, Sysdig, LogDNA). SQL or PostgreSQL experience.
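As a small illustration of the availability automation this role describes, the sketch below uses the official kubernetes Python client to flag pods that are not Ready across all namespaces; the kubeconfig source and the decision to ignore Succeeded pods are assumptions, not requirements from the posting.

```python
# Minimal sketch: report pods that are not Ready across all namespaces.
# Requires the official kubernetes Python client (pip install kubernetes)
# and a kubeconfig with read access; cluster details are assumptions.
from kubernetes import client, config


def not_ready_pods():
    config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        conditions = pod.status.conditions or []
        ready = any(c.type == "Ready" and c.status == "True" for c in conditions)
        if not ready and pod.status.phase != "Succeeded":
            problems.append((pod.metadata.namespace, pod.metadata.name, pod.status.phase))
    return problems


if __name__ == "__main__":
    for namespace, name, phase in not_ready_pods():
        print(f"NOT READY: {namespace}/{name} (phase={phase})")
```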
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Our Company Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! Platform Development and Evangelism: Build scalable AI platforms that are customer-facing. Evangelize the platform with customers and internal stakeholders. Ensure platform scalability, reliability, and performance to meet business needs. Machine Learning Pipeline Design: Design ML pipelines for experiment management, model management, feature management, and model retraining. Implement A/B testing of models. Design APIs for model inferencing at scale. Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI. LLM Serving and GPU Architecture: Serve as an SME in LLM serving paradigms. Possess deep knowledge of GPU architectures. Expertise in distributed training and serving of large language models. Proficient in model and data parallel training using frameworks like DeepSpeed and serving frameworks like vLLM. Model Fine-Tuning and Optimization: Demonstrate proven expertise in model fine-tuning and optimization techniques. Achieve better latencies and accuracies in model results. Reduce training and resource requirements for fine-tuning LLM and LVM models. LLM Models and Use Cases: Have extensive knowledge of different LLM models. Provide insights on the applicability of each model based on use cases. Proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases. DevOps and LLMOps Proficiency: Proven expertise in DevOps and LLMOps practices. Knowledgeable in Kubernetes, Docker, and container orchestration. Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and LangGraph. Skill Matrix LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama LLM Ops: MLflow, LangChain, LangGraph, Langflow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI Databases/Data Warehouse: DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery. Cloud Knowledge: AWS/Azure/GCP DevOps (Knowledge): Kubernetes, Docker, Fluentd, Kibana, Grafana, Prometheus Cloud Certifications (Bonus): AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert Proficient in Python, SQL, JavaScript Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
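To ground the experiment-management side of the ML pipeline work listed above, here is a minimal sketch using MLflow's tracking API to record parameters and metrics for a run; the experiment name, parameter values, and metrics are illustrative assumptions only.

```python
# Minimal sketch: log a fine-tuning run with MLflow's tracking API.
# Requires mlflow (pip install mlflow); experiment name, parameters, and
# metric values are illustrative assumptions, not details from the posting.
import mlflow

mlflow.set_experiment("llm-finetune-demo")  # hypothetical experiment name

with mlflow.start_run(run_name="lora-rank-16"):
    # Record the configuration that produced this run.
    mlflow.log_params({
        "base_model": "example-7b",   # hypothetical model identifier
        "lora_rank": 16,
        "learning_rate": 2e-4,
        "epochs": 3,
    })
    # Record evaluation results so runs can be compared in the MLflow UI.
    mlflow.log_metric("eval_loss", 1.23)
    mlflow.log_metric("latency_p95_ms", 180.0)
```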
Posted 1 month ago
2.0 - 4.0 years
4 - 6 Lacs
Pune
Work from Office
The ideal candidate must possess strong communication skills, with an ability to listen to and comprehend information and share it with all the key stakeholders, highlighting opportunities for improvement and concerns, if any. He/she must be able to work collaboratively with teams to execute tasks within defined timeframes while maintaining high-quality standards and superior service levels. The ability to take proactive action and willingness to take up responsibility beyond the assigned work area is a plus. Senior Analyst Roles and responsibilities: Act as gatekeeper for incident queues and govern the queue flow. Work with managers on streamlining and optimizing incident handling - improve the repeat-instance rate and time to instance closure. Work on different tools that support customer experience monitoring. Exposure to monitoring tools like Glassbox, Splunk, Dynatrace, Catchpoint etc. Monitor incoming and outgoing traffic, troubleshoot issues, and report to stakeholders. Conduct extensive quality checks, pass on feedback, and maintain a repository the team can refer to while executing tasks. Interact with client stakeholders to understand the customer impact and severity of issues. Create daily and weekly reports on alerts observed on different dashboards. Lead documentation on new projects by getting first-hand training on different activities and passing refined, optimal knowledge to the team. Lead the team on skill enhancement in monitoring, technical knowledge of the environment, and incident / service request handling. Technical and Functional Skills: Bachelor's Degree with 2 to 4 years of experience in incident handling, forum / platform monitoring, and incident / service request troubleshooting and reporting. Strong platform monitoring and troubleshooting knowledge is a basic requirement. Application-based server knowledge is a must for troubleshooting. Experience in synthetic monitoring (preferably e-commerce) and application-based server troubleshooting. Exposure to ITSM modules; ITIL certification will be an added advantage. Strong proficiency in MS Office, especially MS Excel and PPT. Good written and verbal communication - should be comfortable interacting with stakeholders.
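For the daily and weekly alert reporting mentioned above, a roll-up like the following pandas sketch is one possible approach; it assumes an alert export CSV with timestamp, severity, and dashboard columns, all of which are hypothetical.

```python
# Minimal sketch: roll an exported alert log up into a daily summary.
# Assumes a CSV export from the monitoring tool with "timestamp", "severity",
# and "dashboard" columns; the file name and column names are assumptions.
import pandas as pd

alerts = pd.read_csv("alerts_export.csv", parse_dates=["timestamp"])

daily_summary = (
    alerts
    .assign(day=alerts["timestamp"].dt.date)
    .groupby(["day", "dashboard", "severity"])
    .size()
    .reset_index(name="alert_count")
    .sort_values(["day", "alert_count"], ascending=[True, False])
)

# Write the summary out so it can be attached to the daily stakeholder report.
daily_summary.to_csv("daily_alert_summary.csv", index=False)
print(daily_summary.head())
```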
Posted 1 month ago
5.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
Job Requirements Quest Global is an organization at the forefront of innovation and one of the world’s fastest growing engineering services firms with deep domain knowledge and recognized expertise in the top OEMs across seven industries. We are a twenty-five-year-old company on a journey to becoming a centenary one, driven by aspiration, hunger and humility. We are looking for humble geniuses, who believe that engineering has the potential to make the impossible, possible; innovators, who are not only inspired by technology and innovation, but also perpetually driven to design, develop, and test as a trusted partner for Fortune 500 customers. As a team of remarkably diverse engineers, we recognize that what we are really engineering is a brighter future for us all. If you want to contribute to meaningful work and be part of an organization that truly believes when you win, we all win, and when you fail, we all learn, then we’re eager to hear from you. The achievers and courageous challenge-crushers we seek have the following characteristics and skills. Roles & Responsibilities: Collaborate with business stakeholders to gather and translate data requirements into analytical solutions. Analyze large and complex datasets to identify trends, patterns, and actionable insights. Design, develop, and maintain interactive dashboards and reports using Elasticsearch/Kibana or Power BI. Conduct ad-hoc analyses and deliver data-driven narratives to support business decision-making. Ensure data accuracy, consistency, and integrity through rigorous validation and quality checks. Write and optimize SQL queries, views, and data models for reporting and analysis. Present findings through compelling visualizations, presentations, and written summaries. Work closely with data engineers and architects to enhance data pipelines and infrastructure. Contribute to the development and standardization of KPIs, metrics, and data governance practices. Work Experience Required Skills (Technical Competency): Bachelor’s or Master’s degree in Data Science, Computer Science, Statistics, or a related field. 5+ years of experience in a data analyst or business intelligence role. Proficiency in SQL and data visualization tools such as Power BI, Kibana, or similar. Proficiency in Python, Excel and data storytelling. Understanding of data modelling, ETL concepts, and basic data architecture. Strong analytical thinking and problem-solving skills. Excellent communication and stakeholder management skills. Adherence to Information Security Management policies and procedures. Desired Skills Elasticsearch/Kibana, Power BI, AWS, Python, SQL, Data modelling, Data analysis, Data quality checks, Data validation, Data visualization, Stakeholder communication, Excel, Data storytelling, Team collaboration, Problem-solving, Analytical thinking, Presentation skills, ETL concepts.
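As one way to picture the "rigorous validation and quality checks" responsibility above, here is a minimal pandas sketch of pre-dashboard data-quality checks; the file name, column names, and rules are illustrative assumptions.

```python
# Minimal sketch: basic data-quality checks before a dataset feeds a dashboard.
# The extract file and column names are illustrative assumptions.
import pandas as pd

df = pd.read_csv("orders_extract.csv")  # hypothetical extract

checks = {
    "row_count_nonzero": len(df) > 0,
    "no_duplicate_ids": not df["order_id"].duplicated().any(),
    "no_null_amounts": df["amount"].notna().all(),
    "amounts_non_negative": (df["amount"] >= 0).all(),
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise ValueError(f"Data-quality checks failed: {failed}")
print("All data-quality checks passed:", list(checks))
```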
Posted 1 month ago
6.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
Job Requirements Quest Global is an organization at the forefront of innovation and one of the world’s fastest growing engineering services firms with deep domain knowledge and recognized expertise in the top OEMs across seven industries. We are a twenty-five-year-old company on a journey to becoming a centenary one, driven by aspiration, hunger and humility. We are looking for humble geniuses, who believe that engineering has the potential to make the impossible, possible; innovators, who are not only inspired by technology and innovation, but also perpetually driven to design, develop, and test as a trusted partner for Fortune 500 customers. As a team of remarkably diverse engineers, we recognize that what we are really engineering is a brighter future for us all. If you want to contribute to meaningful work and be part of an organization that truly believes when you win, we all win, and when you fail, we all learn, then we’re eager to hear from you. The achievers and courageous challenge-crushers we seek have the following characteristics and skills. Roles & Responsibilities: Design, develop, and maintain scalable and efficient data pipelines. Data engineering, ETL development and data integration. Develop data pipelines using Python. Visualize data in Kibana dashboards. SQL skills and working with relational databases. Knowledge of cloud-based data solutions and services. Take ownership of assigned jobs that are part of new feature implementations, bug fixes and enhancement activities. Document the projects according to project standards (architecture, technical specifications). Technical communication with internal and external stakeholders. Work independently, contribute to the immediate team, and work with architects and other leads. Work Experience Required Skills (Technical Competency): Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field. 4–6 years of experience in data engineering, preferably in a healthcare setting. Strong SQL and Python skills; experience with Spark is a plus. Knowledge in cloud architecture. Excellent programming and debugging skills. Ability to write effective and reusable code according to best practices. Experience with HIPAA-compliant data handling and security practices. Excellent communication and presentation skills. Good customer interfacing skills. Desired Skills Experience in Python, PySpark, Elasticsearch, Kibana. Knowledge in SQL Server. Ability to deliver without much supervision from leads/managers.
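To make the Python pipeline-plus-Kibana combination above concrete, the following minimal sketch reads rows from a relational table and bulk-indexes them into Elasticsearch for dashboarding; the connection string, table, and index names are hypothetical.

```python
# Minimal sketch: extract rows from a relational table and bulk-index them
# into Elasticsearch so they can be charted in Kibana. Connection details,
# table, and index names below are hypothetical.
import sqlalchemy
from elasticsearch import Elasticsearch, helpers

engine = sqlalchemy.create_engine("postgresql://user:pass@db.example.test/claims")
es = Elasticsearch("https://es.example.test:9200")  # hypothetical cluster


def extract_rows():
    with engine.connect() as conn:
        result = conn.execute(
            sqlalchemy.text("SELECT id, status, amount, updated_at FROM claims")
        )
        for row in result.mappings():
            yield dict(row)


def to_actions(rows, index="claims-analytics"):
    for row in rows:
        yield {"_index": index, "_id": row["id"], "_source": row}


if __name__ == "__main__":
    ok, errors = helpers.bulk(es, to_actions(extract_rows()), raise_on_error=False)
    print(f"Indexed {ok} documents, {len(errors)} errors")
```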
Posted 1 month ago
7.0 - 12.0 years
5 - 15 Lacs
Pune
Work from Office
Job Description: Ability to work independently and proactively resolve issues. Effective team player with good interpersonal skills. Kubernetes administration, Linux systems administration, containerization & orchestration, Windows Server administration, virtualization & infrastructure, SCOM & Nagios, GitOps administration. In-depth knowledge of VMware ESXi 6 or above / Kubernetes on-prem. Experience with HP servers and storage platforms. Understanding of server hardware and monitoring principles. Experience with Linux and container platforms. Automation and scripting expertise. Middleware technologies: Kafka, Logstash, Elasticsearch, Kibana (ELK). CI/CD, Rancher (VCP certification). Optional: Kubernetes certification (CKA, CKAD, or CKS).
Posted 1 month ago
7.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Req ID: 329228 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Platform Administrator to join our team in Noida, Uttar Pradesh (IN-UP), India (IN). Job highlights Qualifications Excellent oral and written English communication skills; specifically needs to be able to communicate well on phone calls. Must be able to thrive under pressure and have a strong sense of ownership and responsibility for the project. Minimum of 7-10 years of experience in the administration, architecture, design, and development of ETL programs using Informatica’s data integration tools; Informatica experience must include working with large mission-critical data systems (multiple terabytes in size, with millions of rows processed per day). Must be able to demonstrate mastery of Informatica command line utilities. Must be able to demonstrate mastery of quickly debugging and resolving issues that can arise with Informatica programs. Must know how to navigate through the Informatica metadata repository. Expert knowledge of Informatica version 8.5 and above. Responsibilities This person will be part of the team that supports a growing Data Warehousing and Business Intelligence environment using Informatica, Oracle 11g/10g RDBMS, SQL Server 2005/2008, Kibana, and Elasticsearch. This individual will be responsible for the following: Administering the Informatica environment on UNIX; hands-on Elasticsearch administration. Apply patches / upgrades / hotfixes for Informatica PowerCenter. Elasticsearch administration (cluster setup, Fleet Server, Agent, Logstash, Kibana, data modelling concepts) and the Elastic Stack. Proficiency with Elasticsearch DSL for complex query development. Create ETL processes using Ingest Pipelines. Monitor performance, troubleshoot, and tune ETL processes. Will be responsible for 24x7 support. Work with the Informatica vendor on any issues that arise. Support the Informatica developer team as needed. Ensure development teams are following appropriate standards. Promote and deploy Informatica code. Assist the development team at design time, ensuring code will run at scale. Serve as an Informatica Developer as needed to get work completed. Write and maintain BASH shell scripts for administering the Informatica environment. Design and implement appropriate error handling procedures. Develop project, documentation, and ETL standards in conjunction with data architects. Work with other team members to ensure standards around monitoring and management of the Informatica environment within the context of existing enterprise monitoring solutions. About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world.
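For the "Create ETL processes using Ingest Pipelines" item above, here is a minimal sketch that registers an Elasticsearch ingest pipeline over the REST API and indexes one document through it; the cluster URL, pipeline and index names, and field names are assumptions, and authentication is omitted.

```python
# Minimal sketch: register an Elasticsearch ingest pipeline and index a document
# through it using the REST API. Cluster URL, pipeline and index names, and
# field names are hypothetical; authentication/TLS handling is omitted.
import requests

ES_URL = "http://localhost:9200"  # hypothetical cluster endpoint

pipeline = {
    "description": "Parse load timestamps and tag the source system",
    "processors": [
        {"date": {"field": "load_ts", "formats": ["ISO8601"], "target_field": "@timestamp"}},
        {"set": {"field": "source_system", "value": "informatica"}},
    ],
}

# Create (or update) the pipeline definition.
requests.put(f"{ES_URL}/_ingest/pipeline/etl-load", json=pipeline).raise_for_status()

# Index a document through the pipeline so the processors run on ingest.
doc = {"load_ts": "2024-01-15T10:30:00Z", "rows_loaded": 125000}
resp = requests.post(f"{ES_URL}/etl-audit/_doc?pipeline=etl-load", json=doc)
resp.raise_for_status()
print(resp.json()["result"])
```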
NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com. NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
Posted 1 month ago