7.0 - 12.0 years
5 - 13 Lacs
Pune
Hybrid
So, what's the role all about?
NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as NEVA Discover. NICE APA is more than just RPA: it is a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It is widely used in industries like banking, insurance, telecom, healthcare, and customer service.

We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in Application Support, the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures and handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you'll need a working knowledge of the underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.

How will you make an impact?
- Interface with various R&D groups, Customer Support teams, business partners, and customers globally to address and resolve product issues.
- Maintain quality and ongoing internal and external communication throughout your investigation.
- Provide a high level of support and minimize R&D escalations.
- Prioritize daily missions/cases and manage critical issues and situations.
- Contribute to the knowledge base, document troubleshooting and problem-resolution steps, and participate in educating/mentoring other support engineers.
- Be willing to perform on-call duties as required.
- Excellent problem-solving skills, with the ability to analyze complex issues and implement effective solutions.
- Good communication skills, with the ability to interact with technical and non-technical stakeholders.

Have you got what it takes?
- Minimum of 8 to 12 years of experience supporting global enterprise customers.
- Monitor, troubleshoot, and maintain RPA bots in production environments.
- Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar.
- Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs.
- Familiarity with ETL processes and data pipelines - advantage.
- Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests.
- Familiarity with applications running on Linux-based Kubernetes clusters; troubleshoot and resolve incidents related to pods, services, and deployments.
- Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance.
- Familiarity with authentication methods like WinSSO and SAML.
- Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration.
- Working and troubleshooting knowledge of Apache software components such as Tomcat, Apache HTTP Server, and ActiveMQ.
- Working and troubleshooting knowledge of SVN/version-control applications.
- Knowledge of DB schemas, structure, SQL queries (DML, DDL), and troubleshooting.
- Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues.
- Knowledge of terminal servers (Citrix) - advantage.
- Basic understanding of AWS cloud systems.
- Network troubleshooting skills (working with different tools).
- Certification in RPA platforms and working knowledge of RPA application development/support - advantage.
- NICE certification and knowledge of RTI/RTS/APA products - advantage.
- Integrate NICE's applications with customers' on-prem and cloud-based third-party tools and applications to ingest/transform/store/validate data.

Shift: 24x7 rotational shift (includes night shifts).

Other required skills:
- Excellent verbal and written communication skills.
- Strong troubleshooting and problem-solving skills.
- Self-motivated and directed, with keen attention to detail.
- Team player: ability to work well in a team-oriented, collaborative environment.

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7326
Reporting into: Tech Manager
Role Type: Individual Contributor
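The log-collection duty above is the bread and butter of L1/L2 support: pull logs from several sources and find the noisiest failing component. A minimal pure-Python sketch of that idea, using a hypothetical log-line format (the component names are illustrative, not NICE's actual log schema):

```python
import re
from collections import Counter

# Hypothetical log line format: "2024-05-01 12:00:00 ERROR Tomcat: connection refused"
LOG_PATTERN = re.compile(
    r"^(?P<ts>\S+ \S+) (?P<level>[A-Z]+) (?P<component>[\w.]+): (?P<msg>.*)$"
)

def summarize_errors(lines):
    """Count ERROR/WARN occurrences per component to spot the noisiest subsystem."""
    counts = Counter()
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m and m.group("level") in ("ERROR", "WARN"):
            counts[m.group("component")] += 1
    return counts

sample = [
    "2024-05-01 12:00:00 ERROR Tomcat: connection refused",
    "2024-05-01 12:00:05 INFO ActiveMQ: queue drained",
    "2024-05-01 12:00:09 ERROR Tomcat: connection refused",
]
print(summarize_errors(sample).most_common(1))  # [('Tomcat', 2)]
```

In practice the same pass would run over files collected from servers, network devices, and security tools, with the per-component counts feeding a dashboard or a knowledge-base article.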
Posted 19 hours ago
6.0 - 9.0 years
4 - 9 Lacs
Pune
Hybrid
So, what's the role all about?
NICE APA is a comprehensive platform that combines Robotic Process Automation, Desktop Automation, Desktop Analytics, and AI and Machine Learning solutions such as NEVA Discover. NICE APA is more than just RPA: it is a full platform that brings together automation, analytics, and AI to enhance both front-office and back-office operations. It is widely used in industries like banking, insurance, telecom, healthcare, and customer service.

We are seeking a Senior/Specialist Technical Support Engineer with a strong understanding of RPA applications and exceptional troubleshooting skills. The ideal candidate will have hands-on experience in Application Support, the ability to inspect and analyze RPA solutions and application servers (e.g., Tomcat, authentication, certificate renewal), and a solid understanding of RPA deployments in both on-premises and cloud-based environments (such as AWS). You should be comfortable supporting hybrid RPA architectures and handling bot automation, licensing, and infrastructure configuration in various environments. Familiarity with cloud-native services used in automation (e.g., AMQ queues, storage, virtual machines, containers) is a plus. Additionally, you'll need a working knowledge of the underlying databases and query optimization to assist with performance and integration issues. You will be responsible for diagnosing and resolving technical issues, collaborating with development and infrastructure teams, contributing to documentation and knowledge bases, and ensuring a seamless and reliable customer experience across multiple systems and platforms.

How will you make an impact?
- Interface with various R&D groups, Customer Support teams, business partners, and customers globally to address and resolve product issues.
- Maintain quality and ongoing internal and external communication throughout your investigation.
- Provide a high level of support and minimize R&D escalations.
- Prioritize daily missions/cases and manage critical issues and situations.
- Contribute to the knowledge base, document troubleshooting and problem-resolution steps, and participate in educating/mentoring other support engineers.
- Be willing to perform on-call duties as required.
- Excellent problem-solving skills, with the ability to analyze complex issues and implement effective solutions.
- Good communication skills, with the ability to interact with technical and non-technical stakeholders.

Have you got what it takes?
- Minimum of 5 to 7 years of experience supporting global enterprise customers.
- Monitor, troubleshoot, and maintain RPA bots in production environments.
- Monitor and troubleshoot system performance, application health, and resource usage using tools like Prometheus, Grafana, or similar.
- Data analytics: analyze trends, patterns, and anomalies in data to identify product bugs.
- Familiarity with ETL processes and data pipelines - advantage.
- Provide L1/L2/L3 support for the RPA application, ensuring timely resolution of incidents and service requests.
- Familiarity with applications running on Linux-based Kubernetes clusters; troubleshoot and resolve incidents related to pods, services, and deployments.
- Provide technical support for applications running on both Windows and Linux platforms, including troubleshooting issues, diagnosing problems, and implementing solutions to ensure optimal performance.
- Familiarity with authentication methods like WinSSO and SAML.
- Knowledge of Windows/Linux hardening, such as TLS enforcement, encryption enforcement, and certificate configuration.
- Working and troubleshooting knowledge of Apache software components such as Tomcat, Apache HTTP Server, and ActiveMQ.
- Working and troubleshooting knowledge of SVN/version-control applications.
- Knowledge of DB schemas, structure, SQL queries (DML, DDL), and troubleshooting.
- Collect and analyze logs from servers, network devices, applications, and security tools to identify environment/application issues.
- Knowledge of terminal servers (Citrix) - advantage.
- Basic understanding of AWS cloud systems.
- Network troubleshooting skills (working with different tools).
- Certification in RPA platforms and working knowledge of RPA application development/support - advantage.
- NICE certification and knowledge of RTI/RTS/APA products - advantage.
- Integrate NICE's applications with customers' on-prem and cloud-based third-party tools and applications to ingest/transform/store/validate data.

Shift: 24x7 rotational shift (includes night shifts).

Other required skills:
- Excellent verbal and written communication skills.
- Strong troubleshooting and problem-solving skills.
- Self-motivated and directed, with keen attention to detail.
- Team player: ability to work well in a team-oriented, collaborative environment.

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7556
Reporting into: Tech Manager
Role Type: Individual Contributor
Posted 19 hours ago
4.0 - 9.0 years
6 - 11 Lacs
Gurugram
Work from Office
Company: Mercer
Description: We are seeking a talented individual to join our Technology team at Mercer. This role will be based in Gurugram. This is a hybrid role that requires working at least three days a week in the office.

Senior DevOps Engineer
We are looking for a candidate with a minimum of 4 years of experience in DevOps. The candidate should have a strong, deep understanding of Amazon Web Services (AWS) and DevOps tools such as Terraform, Ansible, and Jenkins.

Location: Gurgaon
Functional Area: Engineering
Education Qualification: Graduate/Postgraduate
Experience: 4-6 years

We will count on you to:
- Deploy infrastructure on the AWS cloud using Terraform
- Deploy updates and fixes
- Build tools to reduce the occurrence of errors and improve customer experience
- Perform root-cause analysis of production errors and resolve technical issues
- Develop automation scripts
- Troubleshooting and maintenance

What you need to have:
4+ years of technical experience in the DevOps area, with knowledge of the following technologies and applications:
- AWS
- Terraform
- Linux administration, shell scripting
- Ansible
- CI server: Jenkins
- Apache/Nginx/Tomcat

Good to have experience in the following technologies:
- Python

What makes you stand out:
- Excellent verbal and written communication skills, comfortable interfacing with business users
- Good troubleshooting and technical skills
- Able to work independently

Why join our team:
We help you be your best through professional development opportunities, interesting work and supportive leaders. We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities. Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being.
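The "build tools to reduce the occurrence of errors" duty frequently amounts to wrapping flaky operations (deploy steps, remote API calls) in retries with exponential backoff. A minimal sketch in Python; flaky() is a hypothetical stand-in for a real deploy or provisioning call, not part of any Mercer tooling:

```python
import time

def retry(fn, attempts=3, base_delay=0.1):
    """Run fn(), retrying on any exception with exponential backoff.

    Re-raises the last exception if all attempts fail.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))  # 0.1s, 0.2s, 0.4s, ...

# Hypothetical flaky operation: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry(flaky))  # ok
```

The same shape appears in Ansible's `retries`/`until` loop and in Jenkins pipeline `retry {}` blocks; the point is that transient infrastructure errors should be absorbed by tooling rather than paged to a human.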
Mercer, a business of Marsh McLennan (NYSE: MMC), is a global leader in helping clients realize their investment objectives, shape the future of work, and enhance health and retirement outcomes for their people. Marsh McLennan is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $23 billion and more than 85,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit mercer.com, or follow on LinkedIn and X.

The Mercer Assessments business, one of the fastest-growing verticals within the Mercer brand, is a leading global provider of talent measurement and assessment solutions. As part of Mercer, the world's largest HR consulting firm and a wholly owned subsidiary of Marsh McLennan, we are dedicated to delivering talent foresight that empowers organizations to make informed, critical people decisions. Leveraging a robust, cloud-based assessment platform, Mercer Assessments partners with over 6,000 corporations, 31 sector skill councils, government agencies, and more than 700 educational institutions across 140 countries. Our mission is to help organizations build high-performing teams through effective talent acquisition, development, and workforce transformation strategies. Our research-backed assessments, advanced technology, and comprehensive analytics deliver transformative outcomes for both clients and their employees. We specialize in designing tailored assessment solutions across the employee lifecycle, including pre-hire evaluations, skills assessments, training and development, certification exams, competitions and more. At Mercer Assessments, we are committed to enhancing the way organizations identify, assess, and develop talent. By providing actionable talent foresight, we enable our clients to anticipate future workforce needs and make strategic decisions that drive sustainable growth and innovation.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law. Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one anchor day per week on which their full team will be together in person.
Posted 19 hours ago
5.0 - 6.0 years
55 - 60 Lacs
Pune
Work from Office
At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets.

Grade Specific: The role supports the team in building and maintaining data infrastructure and systems within an organization.

Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Bigtable, GCP BigQuery, GCP Cloud Storage, GCP Dataflow, GCP Dataproc, Git, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Spark, Shell Script, Snowflake, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fuelled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 19 hours ago
15.0 - 20.0 years
17 - 22 Lacs
Bengaluru
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Experience with data pipeline orchestration tools such as Apache Airflow or similar.
- Strong understanding of ETL processes and data warehousing concepts.
- Familiarity with cloud platforms like AWS, Azure, or Google Cloud.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education
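Orchestration tools like Apache Airflow ultimately run pipeline tasks in dependency order, a topological sort of the DAG. A toy illustration of that idea using Python's stdlib graphlib (the task names are hypothetical and this is not the Airflow API, just the underlying concept):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline DAG: transform and a quality check both need
# extract to finish; load needs both of them.
deps = {
    "transform": {"extract"},
    "quality_check": {"extract"},
    "load": {"transform", "quality_check"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # 'extract' first, 'load' last
```

In Airflow the same structure would be declared with operators and `>>` dependencies; the scheduler then executes independent tasks (here, transform and quality_check) in parallel.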
Posted 20 hours ago
15.0 - 20.0 years
17 - 22 Lacs
Chennai
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: Proficiency in PySpark.
- Good-to-have skills: Experience with Apache Kafka.
- Strong understanding of data warehousing concepts and architecture.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience in SQL and NoSQL databases for data storage and retrieval.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Chennai.
- A 15 years full time education is required.

Qualification: 15 years full time education
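The ETL responsibilities above follow the same extract → transform → load shape regardless of engine. A minimal pure-Python sketch of that flow, with a data-quality gate of the kind the posting mentions (field names are hypothetical; a real pipeline would express the transform over PySpark DataFrames rather than dicts):

```python
# Extract: raw rows as they might arrive from a source system.
raw = [
    {"id": "1", "amount": "100.5", "region": " South "},
    {"id": "2", "amount": "bad", "region": "North"},  # fails type check
]

def transform(row):
    """Cast types and normalize strings; return None for rows that
    fail a data-quality rule (here: amount must parse as a number)."""
    try:
        return {
            "id": int(row["id"]),
            "amount": float(row["amount"]),
            "region": row["region"].strip().lower(),
        }
    except ValueError:
        return None  # quarantined instead of loaded

# Load: only rows passing the quality gate reach the target store.
warehouse = [r for r in (transform(x) for x in raw) if r is not None]
print(warehouse)  # only the row with id=1 survives
```

In PySpark the same logic would be a `withColumn` cast plus a `filter`, and the quarantined rows would typically land in a separate error table for investigation.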
Posted 20 hours ago
15.0 - 20.0 years
17 - 22 Lacs
Pune
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Experience with data pipeline orchestration tools such as Apache Airflow or similar.
- Strong understanding of ETL processes and data warehousing concepts.
- Familiarity with cloud platforms like AWS, Azure, or Google Cloud.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Pune office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 20 hours ago
15.0 - 20.0 years
17 - 22 Lacs
Pune
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide innovative solutions to enhance data accessibility and usability.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to ensure efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: Proficiency in PySpark.
- Good-to-have skills: Experience with Apache Kafka, Apache Airflow, and cloud platforms such as AWS or Azure.
- Strong understanding of data modeling and database design principles.
- Experience with SQL and NoSQL databases for data storage and retrieval.
- Familiarity with data warehousing concepts and tools.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Pune.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 20 hours ago
15.0 - 20.0 years
17 - 22 Lacs
Hyderabad
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform.
- Experience with data pipeline orchestration tools such as Apache Airflow or similar.
- Strong understanding of ETL processes and data warehousing concepts.
- Familiarity with cloud platforms like AWS, Azure, or Google Cloud.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 20 hours ago
5.0 - 10.0 years
7 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Apache JMeter
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will be involved in designing, building, and configuring applications to meet business process and application requirements. Your typical day will revolve around creating innovative solutions to address various business needs and ensuring seamless application functionality.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Lead the application development process.
- Conduct code reviews and ensure coding standards are met.
- Implement best practices for application design and development.

Professional & Technical Skills:
- Must-have skills: Proficiency in Apache JMeter.
- Strong understanding of performance testing methodologies.
- Experience in load testing and stress testing.
- Knowledge of scripting languages for test automation.
- Familiarity with performance monitoring tools.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Apache JMeter.
- This position is based at our Hyderabad office.
- A 15 years full-time education is required.

Qualification: 15 years full time education
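Load testing of the kind JMeter performs boils down to firing concurrent requests and reporting latency percentiles. A toy Python sketch of that loop (target() is a hypothetical stand-in for the endpoint under test; JMeter itself is configured through test plans and thread groups, not code like this):

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def target():
    """Hypothetical stand-in for one HTTP request to the system under test."""
    time.sleep(0.001)  # simulate ~1 ms of service time
    return 200

def run_load_test(requests=50, concurrency=5):
    """Fire `requests` calls across `concurrency` workers; return statuses and p95 latency."""
    latencies = []
    def one(_):
        t0 = time.perf_counter()
        status = target()
        latencies.append(time.perf_counter() - t0)
        return status
    with ThreadPoolExecutor(max_workers=concurrency) as ex:
        statuses = list(ex.map(one, range(requests)))
    p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th-percentile cut point
    return statuses, p95

statuses, p95 = run_load_test()
print(all(s == 200 for s in statuses), round(p95, 4))
```

A JMeter thread group maps onto the concurrency here, and its aggregate report onto the percentile summary; stress testing is the same loop with the request rate pushed past the system's capacity.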
Posted 20 hours ago
15.0 - 20.0 years
17 - 22 Lacs
Gurugram
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Spring Boot
Good-to-have skills: Apache Spark
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning to align application development with business objectives, ensuring that the solutions provided are effective and efficient. Your role will require you to stay updated with industry trends and best practices to enhance the overall performance of the applications being developed.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure adherence to timelines and quality standards.

Professional & Technical Skills:
- Must-have skills: Proficiency in Spring Boot.
- Good-to-have skills: Experience with Apache Spark.
- Strong understanding of microservices architecture and RESTful APIs.
- Experience with cloud platforms such as AWS or Azure.
- Familiarity with containerization technologies like Docker and Kubernetes.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Spring Boot.
- This position is based at our Gurugram office.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 20 hours ago
15.0 - 20.0 years
17 - 22 Lacs
Mumbai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Apache Spark
Good to have skills: Java Enterprise Edition
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of the projects you are involved in, ensuring that the applications you develop are efficient and effective in meeting user needs.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve application performance and user experience.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Apache Spark.
- Good To Have Skills: Experience with Java Enterprise Edition.
- Strong understanding of distributed computing principles.
- Experience with data processing frameworks and tools.
- Familiarity with cloud platforms and services.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Apache Spark.
- This position is based in Mumbai.
- A 15 years full time education is required.

Qualification: 15 years full time education
Posted 21 hours ago
5.0 - 10.0 years
7 - 12 Lacs
Kochi
Work from Office
- Develop user-friendly web applications using Java and React.js while ensuring high performance.
- Design, develop, test, and deploy robust and scalable applications.
- Build and consume RESTful APIs.
- Collaborate with the design and development teams to translate UI/UX design wireframes into functional components.
- Optimize applications for maximum speed and scalability.
- Stay up-to-date with the latest Java and React.js trends, techniques, and best practices.
- Participate in code reviews to maintain code quality and ensure alignment with coding standards.
- Identify and address performance bottlenecks and other issues as they arise.
- Help us shape the future of Event Driven technologies, including contributing to Apache Kafka, Strimzi, Apache Flink, Vert.x and other relevant open-source projects.
- Collaborate within a dynamic team environment to comprehend and dissect intricate requirements for event processing solutions.
- Translate architectural blueprints into actualized code, employing your technical expertise to implement innovative and effective solutions.
- Conduct comprehensive testing of the developed solutions, ensuring their reliability, efficiency, and seamless integration.
- Provide ongoing support for the implemented applications, responding promptly to customer inquiries, resolving issues, and optimizing performance.
- Serve as a subject matter expert, sharing insights and best practices related to product development, fostering knowledge sharing within the team.
- Continuously monitor the evolving landscape of event-driven technologies, remaining updated on the latest trends and advancements.
- Collaborate closely with cross-functional teams, including product managers, designers, and developers, to ensure a holistic and harmonious product development process.
- Take ownership of technical challenges and lead your team to ensure successful delivery, using your problem-solving skills to overcome obstacles.
- Mentor and guide junior developers, nurturing their growth and development by providing guidance, knowledge transfer, and hands-on training.
- Engage in agile practices, contributing to backlog grooming, sprint planning, stand-ups, and retrospectives to facilitate effective project management and iteration.
- Foster a culture of innovation and collaboration, contributing to brainstorming sessions and offering creative ideas to push the boundaries of event processing solutions.
- Maintain documentation for the developed solutions, ensuring comprehensive and up-to-date records for future reference and knowledge sharing.
- Be involved in building and orchestrating containerized services.

Required education: Bachelor's Degree
Preferred education: Bachelor's Degree

Required technical and professional expertise:
- Proven 5+ years of experience as a Full Stack developer (Java and React.js) with a strong portfolio of previous projects.
- Proficiency in Java, JavaScript, HTML, CSS, and related web technologies.
- Familiarity with RESTful APIs and their integration into applications.
- Knowledge of modern CI/CD pipelines and tools like Jenkins and Travis.
- Strong understanding of version control systems, particularly Git.
- Good communication skills and the ability to articulate technical concepts to both technical and non-technical team members.
- Familiarity with containerization and orchestration technologies like Docker and Kubernetes for deploying event processing applications.
- Proficiency in troubleshooting and debugging.
- Exceptional problem-solving and analytical abilities, with a knack for addressing technical challenges.
- Ability to work collaboratively in an agile and fast-paced development environment.
- Leadership skills to guide and mentor junior developers, fostering their growth and skill development.
- Strong organizational and time management skills to manage multiple tasks and priorities effectively.
- Adaptability to stay current with evolving event-driven technologies and industry trends.
- Customer-focused mindset, with a dedication to delivering solutions that meet or exceed customer expectations.
- Creative thinking and innovation mindset to drive continuous improvement and explore new possibilities.
- Collaborative and team-oriented approach to work, valuing open communication and diverse perspectives.

Preferred technical and professional expertise
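The listing above centers on event-driven solutions built on Kafka and similar transports, which deliver messages at-least-once, so every consumer must tolerate redeliveries. As a minimal, language-neutral sketch (the role itself uses Java; all names here, such as `Event` and `IdempotentProcessor`, are invented for illustration), this shows the idempotent-consumer pattern that makes reprocessing safe:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Event:
    event_id: str
    payload: str


class IdempotentProcessor:
    """Handles events delivered at-least-once, ignoring duplicates."""

    def __init__(self):
        self.seen = set()   # ids already processed; a real system would persist this
        self.results = []

    def handle(self, event: Event) -> bool:
        # At-least-once transports (e.g. Kafka) may redeliver an event after a
        # consumer restart; checking the id first makes reprocessing a no-op.
        if event.event_id in self.seen:
            return False
        self.seen.add(event.event_id)
        self.results.append(event.payload.upper())
        return True
```

In practice the dedup set would live in a durable store keyed by partition and offset, but the control flow is the same.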
Posted 21 hours ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Purpose: As a key member of the support team, the Application Support Engineer is responsible for ensuring the stability and availability of critical applications. This role involves monitoring, troubleshooting, and resolving application issues, adhering to defined SLAs and processes.

Desired Skills And Experience:
- Experience in an application support or technical support role with strong troubleshooting, problem-solving, and analytical skills.
- Ability to work independently and effectively and to thrive in a fast-paced, high-pressure environment.
- Experience in either C# or Java preferred, to support effective troubleshooting and understanding of application code.
- Knowledge of various operating systems (Windows, Linux, macOS) and familiarity with software applications and tools used in the industry.
- Proficiency in programming languages such as Python, and scripting languages like Bash or PowerShell.
- Experience with database systems such as MySQL, Oracle, SQL Server, and the ability to write and optimize SQL queries.
- Understanding of network protocols, configurations, and troubleshooting network-related issues.
- Skills in managing and configuring servers, including web servers (Apache, Nginx) and application servers. (Desirable)
- Familiarity with ITIL incident management processes.
- Familiarity with monitoring and logging tools like Nagios, Splunk, or the ELK stack to track application performance and issues.
- Knowledge of version control systems like Git to manage code changes and collaborate with development teams. (Desirable)
- Experience with cloud platforms such as AWS, Azure, or Google Cloud for deploying and managing applications. (Desirable)
- Experience in Fixed Income Markets or financial applications support is preferred.
- Strong attention to detail and ability to follow processes.
- Ability to adapt to changing priorities and client needs with good verbal and written communication skills.
Key Responsibilities:
- Provide L1/L2 technical support for applications.
- Monitor application performance and system health, proactively identifying potential issues.
- Investigate, diagnose, and resolve application incidents and service requests within agreed SLAs.
- Escalate complex or unresolved issues to the Service Manager or relevant senior teams.
- Document all support activities, including incident details, troubleshooting steps, and resolutions.
- Participate in shift handovers and knowledge sharing.
- Perform routine maintenance tasks to ensure optimal application performance.
- Collaborate with other support teams to ensure seamless issue resolution.
- Develop and maintain technical documentation and knowledge base articles.
- Assist in the implementation of new applications and updates.
- Provide training and support to junior team members.
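The role asks for the ability to write and optimize SQL queries while triaging incidents. A hedged illustration of that kind of triage query, shown against an in-memory SQLite table (the schema, the `app_log` table, and the `failing_components` helper are all invented for this sketch, not taken from any listed system):

```python
import sqlite3


def failing_components(conn, max_errors):
    """Return components whose ERROR count exceeds a threshold.

    A GROUP BY / HAVING aggregate like this is a typical first query when
    an L1/L2 engineer needs to find which component is breaching an SLA.
    """
    rows = conn.execute(
        "SELECT component, COUNT(*) AS errors "
        "FROM app_log WHERE level = 'ERROR' "
        "GROUP BY component HAVING COUNT(*) > ?",
        (max_errors,),
    ).fetchall()
    return {component: errors for component, errors in rows}


# Demo schema and data, purely illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE app_log (component TEXT, level TEXT)")
conn.executemany(
    "INSERT INTO app_log VALUES (?, ?)",
    [("auth", "ERROR"), ("auth", "ERROR"), ("billing", "INFO"), ("billing", "ERROR")],
)
```

Against a production MySQL/Oracle/SQL Server instance the same query shape applies; the optimization step is usually an index on the filtered and grouped columns.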
Posted 22 hours ago
5.0 - 10.0 years
0 Lacs
Cochin
On-site
Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education.

Data Engineer
Locations: Kochi/Chennai/Coimbatore/Mumbai/Pune/Hyderabad

Job Overview: We are seeking a highly skilled and experienced Senior Data Engineer to join our growing data team. The ideal candidate will have deep expertise in Azure Databricks and Python, and experience building scalable data pipelines. Familiarity with Data Fabric architectures is a plus. You'll work closely with data scientists, analysts, and business stakeholders to deliver robust data solutions that drive insights and innovation.

Key Responsibilities:
- Design, build, and maintain large-scale, distributed data pipelines using Azure Databricks and PySpark.
- Design, build, and maintain large-scale, distributed data pipelines using Azure Data Factory.
- Develop and optimize data workflows and ETL processes in Azure Cloud environments.
- Write clean, maintainable, and efficient code in Python for data engineering tasks.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Monitor and troubleshoot data pipelines for performance and reliability issues.
- Implement data quality checks and validations, and ensure data lineage and governance.
- Contribute to the design and implementation of a Data Fabric architecture (desirable).

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5-10 years of experience in data engineering or related roles.
- Expertise in Azure Databricks, Delta Lake, and Spark.
- Strong proficiency in Python, especially in a data processing context.
- Experience with Azure Data Lake, Azure Data Factory, and related Azure services.
- Hands-on experience in building data ingestion and transformation pipelines.
- Familiarity with CI/CD pipelines and version control systems (e.g., Git).

Good to Have:
- Experience or understanding of Data Fabric concepts (e.g., data virtualization, unified data access, metadata-driven architectures).
- Knowledge of modern data warehousing and lakehouse principles.
- Exposure to tools like Apache Airflow, dbt, or similar.
- Experience working in agile/scrum environments.
- DP-500 and DP-600 certifications.

What We Offer:
- Competitive salary and performance-based bonuses.
- Flexible work arrangements.
- Opportunities for continuous learning and career growth.
- A collaborative, inclusive, and innovative work culture.

www.orioninc.com

Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law.

Candidate Privacy Policy: Orion Systems Integrators, LLC and its subsidiaries and its affiliates (collectively, "Orion," "we" or "us") are committed to protecting your privacy. This Candidate Privacy Policy (orioninc.com) ("Notice") explains: what information we collect during our application and recruitment process and why we collect it; how we handle that information; and how to access and update that information. Your use of Orion services is governed by any applicable terms in this notice and our general Privacy Policy.
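The responsibilities above include implementing data quality checks and validations. In this role that logic would typically run inside Azure Databricks/PySpark; as a hedged sketch, the core of such a check is shown in plain Python, assuming rows arrive as dictionaries (the function name and column categories are invented for illustration):

```python
def run_quality_checks(rows, required, non_negative):
    """Return (row_index, issue) pairs for rows failing basic validations.

    - required:     columns that must be present and non-empty
    - non_negative: numeric columns that must not be negative
    """
    issues = []
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) in (None, ""):
                issues.append((i, f"missing {col}"))
        for col in non_negative:
            value = row.get(col)
            if isinstance(value, (int, float)) and value < 0:
                issues.append((i, f"negative {col}"))
    return issues
```

In a Spark pipeline the same predicates would become column expressions applied to a DataFrame, with failing rows routed to a quarantine table rather than collected into a list.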
Posted 23 hours ago
3.0 years
0 Lacs
India
Remote
About CuringBusy: CuringBusy is a fully remote company providing subscription-based, remote Executive Assistant services to busy entrepreneurs, business owners, and professionals across the globe. We help entrepreneurs free up their time by outsourcing their everyday, routine admin work like calendar management, email, customer service, and marketing tasks like social media, digital marketing, website management, etc.

Job Role: The Digital Marketing Specialist is responsible for developing, implementing, and managing website and marketing strategies that promote products and services across multiple digital channels. This includes creating campaigns and driving digital marketing initiatives on search engine marketing, email marketing, display advertising, website creation & optimization, paid social media, email, and mobile marketing. This role will develop the digital marketing plan and coordinate with the sales, product, content, and other teams to ensure the successful execution of the campaigns.

Responsibilities:
● Develop effective digital marketing plans to drive awareness of our products/services that align with the company's business needs.
● Website development on WordPress.
● Manage the Search Engine Marketing (SEM), Display Advertising, Website Optimization & Conversion Rate Optimization efforts.
● Lead paid social media strategies & campaigns (LinkedIn, Facebook & Instagram) and identify opportunities to leverage emerging platforms.
● Manage email campaigns including segmentation strategies & automation pieces.
● Provide reporting on the various online performance KPIs such as CTRs, CPMs & CPCs.
● Design, build, and maintain our social media presence.
● Design and manage social media and digital marketing advertising campaigns and implement social media strategy to align with business goals.
● Measure and report the performance of all digital marketing campaigns and assess against goals (ROI and KPIs).
● Utilize strong analytical ability to evaluate end-to-end customer experience across multiple channels and customer touchpoints.

Job Qualifications and Skill Sets:
● Bachelor's or master's degree in Digital Marketing.
● Demonstrable 3+ years of experience leading and managing SEO/SEM, marketing database, email, social media, and display advertising campaigns.
● Highly creative with experience in identifying target audiences and devising digital campaigns that engage, inform, and motivate.
● Experience in optimizing landing pages and user funnels.
● Proficiency in graphic design software including Adobe Photoshop, Adobe Illustrator, and other visual design tools.
● Knowledge of both front-end and back-end languages.
● Familiarity with databases (e.g. MySQL, MongoDB), web servers (e.g. Apache), and UI/UX design.
● Solid knowledge of website and marketing analytics tools (e.g., Google Analytics, NetInsight, Omniture, WebTrends, SEMRush, etc.)
● Experienced in any of the website platforms: WordPress, Wix, Shopify, WooCommerce, PrestaShop, and Squarespace.
● Experience with advertisement tools (e.g., Google Ads, Facebook Ads, Bing Ads, Instagram Ads, YouTube Ads, etc.)
● Knowledge of software like Mailerlite, Mailchimp, Sendinblue, Sender, Hubspot email marketing, Omnisend, Sendpulse, Mailjet, Moosend, etc.
● Proficient in marketing research and statistical analysis.

Your Benefits:
● Work from Home Job/Completely Remote.
● Opportunity to grow with a Fast-Growing Startup.
● Exposure to International Clients.

Work Timings: Evening Shift or Night Shift, 3 pm-12 am / 6 pm-3 am (Monday-Friday)
Salary: Based on company standards and skill sets.
Job Type: Full-time
Pay: As per Industry Standards
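The reporting bullet above names CTRs, CPMs, and CPCs. These are simple ratios: CTR is clicks over impressions, CPM is cost per thousand impressions, and CPC is cost per click. A small sketch of computing them (function and field names are illustrative, not from any ad platform API):

```python
def campaign_kpis(impressions, clicks, cost):
    """Compute the standard paid-media KPIs: CTR, CPM, and CPC."""
    ctr = clicks / impressions if impressions else 0.0       # click-through rate
    cpm = cost / impressions * 1000 if impressions else 0.0  # cost per 1,000 impressions
    cpc = cost / clicks if clicks else 0.0                   # cost per click
    return {"ctr": ctr, "cpm": cpm, "cpc": cpc}
```

For example, a campaign with 10,000 impressions, 200 clicks, and a spend of 50.0 has a CTR of 2%, a CPM of 5.0, and a CPC of 0.25 in the same currency as the spend.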
Posted 23 hours ago
0 years
0 Lacs
Greater Bengaluru Area
On-site
We are looking for a skilled ETL pipeline support engineer to join our DevOps team. In this role, you will ensure the smooth operation of production ETL pipelines and be responsible for monitoring and troubleshooting existing pipelines. This role requires a strong understanding of SQL and Spark, and experience with AWS Glue and Redshift.

Required Skills and Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience in supporting and maintaining ETL pipelines.
- Strong proficiency in SQL and experience with relational databases (e.g., Redshift).
- Solid understanding of distributed computing concepts and experience with Apache Spark.
- Hands-on experience with AWS Glue and other AWS data services (e.g., S3, Lambda).
- Experience with data warehousing concepts and best practices.
- Excellent problem-solving and analytical skills, and strong communication and collaboration skills.
- Ability to work independently and as part of a team.

Preferred Skills and Experience:
- Experience with other ETL tools and technologies.
- Experience with scripting languages (e.g., Python).
- Familiarity with Agile development methodologies.
- Experience with data visualization tools (e.g., Tableau, Power BI).

Roles & Responsibilities:
- Monitor and maintain existing ETL pipelines, ensuring data quality and availability.
- Identify and resolve pipeline issues and data errors.
- Troubleshoot data integration processes.
- If needed, collaborate with data engineers and other stakeholders to resolve complex issues.
- Develop and maintain necessary documentation for ETL processes and pipelines.
- Participate in an on-call rotation for production support.
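Much of the monitoring and troubleshooting described above comes down to distinguishing transient failures (which a rerun fixes) from real data errors (which need investigation). As a generic sketch of the rerun half, here is a retry wrapper with exponential backoff; in practice the `step` would poll or restart an AWS Glue job via the AWS APIs, but that integration is omitted so the pattern stays self-contained (all names are invented for illustration):

```python
import time


def run_with_retries(step, attempts=3, base_delay=0.0, sleep=time.sleep):
    """Run one pipeline step, retrying with exponential backoff on failure.

    Re-raises the last exception once the attempt budget is exhausted, so a
    persistent data error still surfaces to the on-call engineer.
    """
    for attempt in range(attempts):
        try:
            return step()
        except Exception:
            if attempt == attempts - 1:
                raise
            # Back off 1x, 2x, 4x, ... the base delay between attempts.
            sleep(base_delay * (2 ** attempt))
```

Injecting `sleep` as a parameter keeps the wrapper testable without real waiting; production code would leave the default.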
Posted 23 hours ago
1.0 years
11 - 13 Lacs
Hyderābād
Remote
Experience: 1+ Years
Work location: Bangalore, Chennai, Hyderabad, Pune - Hybrid
Job Description: GCP Cloud Engineer
Shift Time: 2 to 11 PM IST
Budget: Max 13 LPA

Primary Skill & Weightage:
GCP - 50%
Kubernetes - 25%
NodeJS - 25%

Technical Skills:
- Cloud: Experience working with Google Cloud Platform (GCP) services.
- Containers & Orchestration: Practical experience deploying and managing applications on Kubernetes.
- Programming: Proficiency in Node.js development, including building and maintaining RESTful APIs or backend services.
- Messaging: Familiarity with Apache Kafka for producing and consuming messages.
- Databases: Experience with PostgreSQL or similar relational databases (writing queries, basic schema design).
- Version Control: Proficient with Git and GitHub workflows (branching, pull requests, code reviews).
- Development Tools: Comfortable using Visual Studio Code (VSCode) or similar IDEs.

Additional Requirements:
- Communication: Ability to communicate clearly in English (written and verbal).
- Collaboration: Experience working in distributed or remote teams.
- Problem Solving: Demonstrated ability to troubleshoot and debug issues independently.
- Learning: Willingness to learn new technologies and adapt to changing requirements.

Preferred but not required:
- Experience with CI/CD pipelines.
- Familiarity with Agile methodologies.
- Exposure to monitoring/logging tools (e.g., Prometheus, Grafana, ELK stack).

Job Type: Full-time
Pay: ₹1,100,000.00 - ₹1,300,000.00 per year
Schedule: UK shift
Work Location: In person
Posted 23 hours ago
3.0 - 7.0 years
7 - 16 Lacs
Hyderābād
On-site
AI Specialist / Machine Learning Engineer
Location: On-site (Hyderabad)
Department: Data Science & AI Innovation
Experience Level: Mid–Senior
Reports To: Director of AI / CTO
Employment Type: Full-time

Job Summary: We are seeking a skilled and forward-thinking AI Specialist to join our advanced technology team. In this role, you will lead the design, development, and deployment of cutting-edge AI/ML solutions, including large language models (LLMs), multimodal systems, and generative AI. You will collaborate with cross-functional teams to develop intelligent systems, automate complex workflows, and unlock insights from data at scale.

Key Responsibilities:
- Design and implement machine learning models for natural language processing (NLP), computer vision, predictive analytics, and generative AI.
- Fine-tune and deploy LLMs using frameworks such as Hugging Face Transformers, OpenAI APIs, and Anthropic Claude.
- Develop Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, and vector databases (e.g., Pinecone, Weaviate, Qdrant).
- Productionize ML workflows using MLflow, TensorFlow Extended (TFX), or AWS SageMaker Pipelines.
- Integrate generative AI with business applications, including Copilot-style features, chat interfaces, and workflow automation.
- Collaborate with data scientists, software engineers, and product managers to build and scale AI-powered products.
- Monitor, evaluate, and optimize model performance, focusing on fairness, explainability (e.g., SHAP, LIME), and data/model drift.
- Stay informed on cutting-edge AI research (e.g., NeurIPS, ICLR, arXiv) and evaluate its applicability to business challenges.
Tools & Technologies:
- Languages & Frameworks: Python, PyTorch, TensorFlow, JAX; FastAPI, LangChain, LlamaIndex
- ML & AI Platforms: OpenAI (GPT-4/4o), Anthropic Claude, Mistral, Cohere; Hugging Face Hub & Transformers; Google Vertex AI, AWS SageMaker, Azure ML
- Data & Deployment: MLflow, DVC, Apache Airflow, Ray; Docker, Kubernetes, RESTful APIs, GraphQL; Snowflake, BigQuery, Delta Lake
- Vector Databases & RAG Tools: Pinecone, Weaviate, Qdrant, FAISS, ChromaDB, Milvus
- Generative & Multimodal AI: DALL·E, Sora, Midjourney, Runway; Whisper, CLIP, SAM (Segment Anything Model)

Qualifications:
- Bachelor's or Master's in Computer Science, AI, Data Science, or related discipline.
- 3–7 years of experience in machine learning or applied AI.
- Hands-on experience deploying ML models to production environments.
- Familiarity with LLM prompt engineering and fine-tuning.
- Strong analytical thinking, problem-solving ability, and communication skills.

Preferred Qualifications:
- Contributions to open-source AI projects or academic publications.
- Experience with multi-agent frameworks (e.g., AutoGPT, OpenDevin).
- Knowledge of synthetic data generation and augmentation techniques.

Job Type: Permanent
Pay: ₹734,802.74 - ₹1,663,085.14 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Work Location: In person
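At the heart of the RAG pipelines this role builds is similarity search: embed the query, rank stored document embeddings against it, and pass the top matches to the LLM as context. Production systems delegate this to LangChain and vector databases like Pinecone or FAISS; the sketch below shows only the ranking core over toy two-dimensional "embeddings" (all names and vectors are invented for illustration):

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def retrieve(query_vec, corpus, top_k=2):
    """Rank documents by cosine similarity to the query embedding."""
    scored = sorted(corpus, key=lambda d: cosine(query_vec, d["embedding"]), reverse=True)
    return [d["text"] for d in scored[:top_k]]
```

A real pipeline would use embeddings with hundreds of dimensions and an approximate-nearest-neighbor index instead of a full sort, but the ranking criterion is the same.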
Posted 23 hours ago
0 years
0 Lacs
India
On-site
Job Title: PHP Intern (Full Stack Preferred)
Location: Laxmi Nagar
Employment Type: Internship / Entry-Level
Experience: Freshers / Interns

Job Description: We are looking for a highly motivated PHP Intern with a strong foundational knowledge of website design and development, a creative mindset, and a willingness to learn and grow in a fast-paced environment. As part of our cross-functional development team, you will assist in building scalable software solutions and contribute across all stages of the software development life cycle, from ideation to deployment. Full Stack developers will be given preference. Freshers and interns with a strong learning attitude and technical base are encouraged to apply.

Key Responsibilities:
- Assist in the creation and implementation of various web-based applications and platforms.
- Work on development tasks using Core PHP, the LAMP stack, WordPress, Magento, and other CMSs.
- Support integration of third-party APIs and external systems.
- Help design intuitive, user-friendly front-end experiences using HTML5, CSS3, JavaScript, jQuery, and AJAX.
- Work alongside senior developers on Shopify, React, Flutter, and other modern tech stacks.
- Collaborate on database design and management using MySQL or NoSQL.
- Participate in DevOps processes and deployment via Nginx, Apache, and AWS.
- Use version control and collaborate through GitHub.
- Stay current with new technologies and industry trends to improve performance and usability.

Preferred Skills & Qualifications:
- Basic experience or academic knowledge in PHP and Full Stack Development.
- Familiarity with CMS platforms like WordPress, Magento, and Shopify.
- Understanding of front-end frameworks and responsive design principles.
- Exposure to cloud services like AWS is a plus.
- Good analytical, debugging, and problem-solving skills.

Job Type: Full-time
Pay: ₹5,000.00 per month
Schedule: Day shift
Work Location: In person
Application Deadline: 22/06/2025
Expected Start Date: 22/06/2025
Posted 23 hours ago
5.0 years
0 Lacs
Gurgaon
Remote
About Us: At apexanalytix, we’re lifelong innovators! Since the date of our founding nearly four decades ago we’ve been consistently growing, profitable, and delivering the best procure-to-pay solutions to the world. We’re the perfect balance of established company and start-up. You will find a unique home here. And you’ll recognize the names of our clients. Most of them are on The Global 2000. They trust us to give them the latest in controls, audit and analytics software every day. Industry analysts consistently rank us as a top supplier management solution, and you’ll be helping build that reputation. Read more about apexanalytix - https://www.apexanalytix.com/about/ Job Details The Role Quick Take - We are looking for a highly skilled systems engineer with experience working with Virtualization, Linux, Kubernetes, and Server Infrastructure. The engineer will be responsible to design, deploy, and maintain enterprise-grade cloud infrastructure using Apache CloudStack or similar technology, Kubernetes on Linux operating system. The Work - Hypervisor Administration & Engineering Architect, deploy, and manage Apache CloudStack for private and hybrid cloud environments. Manage and optimize KVM or similar virtualization technology Implement high-availability cloud services using redundant networking, storage, and compute. Automate infrastructure provisioning using OpenTofu, Ansible, and API scripting. Troubleshoot and optimize hypervisor networking (virtual routers, isolated networks), storage, and API integrations. Working experience with shared storage technologies like GFS and NFS. Kubernetes & Container Orchestration Deploy and manage Kubernetes clusters in on-premises and hybrid environments. Integrate Cluster API (CAPI) for automated K8s provisioning. Manage Helm, Azure Devops, and ingress (Nginx/Citrix) for application deployment. Implement container security best practices, policy-based access control, and resource optimization. 
Linux Administration Configure and maintain RedHat HA Clustering (Pacemaker, Corosync) for mission-critical applications. Manage GFS2 shared storage, cluster fencing, and high-availability networking. Ensure seamless failover and data consistency across cluster nodes. Perform Linux OS hardening, security patching, performance tuning, and troubleshooting. Physical Server Maintenance & Hardware Management Perform physical server installation, diagnostics, firmware upgrades, and maintenance. Work with SAN/NAS storage, network switches, and power management in data centers. Implement out-of-band management (IPMI/iLO/DRAC) for remote server monitoring and recovery. • Ensure hardware resilience, failure prediction, and proper capacity planning. Automation, Monitoring & Performance Optimization • Automate infrastructure provisioning, monitoring, and self-healing capabilities. Implement Prometheus, Grafana, and custom scripting via API for proactive monitoring. • Optimize compute, storage, and network performance in large-scale environments. • Implement disaster recovery (DR) and backup solutions for cloud workloads. Collaboration & Documentation • Work closely with DevOps, Enterprise Support, and software Developers to streamline cloud workflows. • Maintain detailed infrastructure documentation, playbooks, and incident reports. Train and mentor junior engineers on CloudStack, Kubernetes, and HA Clustering. The Must-Haves - 5+ years of experience in CloudStack or similar virtualization platform, Kubernetes, and Linux system administration. Strong expertise in Apache CloudStack (4.19+) or similar virtualization platform, KVM hypervisor, and Cluster API (CAPI). Extensive experience in RedHat HA Clustering (Pacemaker, Corosync) and GFS2 shared storage. Proficiency in OpenTofu, Ansible, Bash, Python, and Go for infrastructure automation. Experience with networking (VXLAN, SDN, BGP) and security best practices. 
• Hands-on expertise in physical server maintenance, IPMI/iLO, RAID, and SAN storage.
• Strong troubleshooting skills in Linux performance tuning, logs, and kernel debugging.
• Knowledge of monitoring tools (Prometheus, Grafana, Alertmanager).

Preferred Qualifications
• Experience with multi-cloud (AWS, Azure, GCP) or hybrid cloud environments.
• Familiarity with CloudStack API customization and plugin development.
• Strong background in disaster recovery (DR) and backup solutions for cloud environments.
• Understanding of service meshes, ingress, and SSO.
• Experience in Cisco UCS platform management.

Over the years, we’ve discovered that the most effective and successful associates at apexanalytix are people who share a specific combination of values, skills, and behaviors that we call “The apex Way”. Read more about The apex Way - https://www.apexanalytix.com/careers/

Benefits
At apexanalytix we know that our associates are the reason behind our successes. We truly value you as an associate and part of our professional family. Our goal is to offer the very best benefits possible to you and your loved ones. When it comes to benefits, whether for yourself or your family, the most important aspect is choice. And we get that. apexanalytix offers competitive benefits for the countries that we serve, in addition to our BeWell@apex initiative that encourages employees’ growth in six key wellness areas: Emotional, Physical, Community, Financial, Social, and Intelligence. With resources such as a strong Mentor Program, an Internal Training Portal, plus Education, Tuition, and Certification Assistance, we provide tools for our associates to grow and develop.
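The monitoring responsibilities in this role mention Prometheus-based proactive alerting. The toy sketch below mimics, in plain Python, the effect of a Prometheus alerting rule's `for:` clause, which fires only on a sustained breach rather than a single spike; the metric name, threshold, and sample values are invented for illustration and are not part of any real configuration.

```python
from dataclasses import dataclass

@dataclass
class AlertRule:
    metric: str        # metric name (hypothetical), e.g. node CPU usage
    threshold: float   # fire when samples exceed this value
    for_samples: int   # consecutive breaching samples required

def evaluate(rule: AlertRule, samples: list[float]) -> bool:
    """Fire only when the last `for_samples` readings all breach the
    threshold, suppressing alerts on short spikes (the behaviour a
    Prometheus `for:` duration provides)."""
    if len(samples) < rule.for_samples:
        return False
    return all(s > rule.threshold for s in samples[-rule.for_samples:])

# Hypothetical CPU-usage series (percent); the rule needs 3 breaches in a row.
rule = AlertRule(metric="node_cpu_usage", threshold=90.0, for_samples=3)
print(evaluate(rule, [85, 95, 96, 97]))   # sustained breach -> True
print(evaluate(rule, [85, 95, 80, 97]))   # spike only -> False
```

In production this debouncing lives in the Prometheus rule file and Alertmanager, not in application code; the sketch only shows the decision logic.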
Posted 23 hours ago
1.0 years
11 - 13 Lacs
Pune
Remote
Experience: 1+ years
Work location: Bangalore, Chennai, Hyderabad, Pune - Hybrid

Job Description: GCP Cloud Engineer
Shift time: 2 to 11 PM IST
Budget: Max 13 LPA

Primary Skills & Weightage
• GCP - 50%
• Kubernetes - 25%
• NodeJS - 25%

Technical Skills
• Cloud: Experience working with Google Cloud Platform (GCP) services.
• Containers & Orchestration: Practical experience deploying and managing applications on Kubernetes.
• Programming: Proficiency in Node.js development, including building and maintaining RESTful APIs or backend services.
• Messaging: Familiarity with Apache Kafka for producing and consuming messages.
• Databases: Experience with PostgreSQL or similar relational databases (writing queries, basic schema design).
• Version Control: Proficient with Git and GitHub workflows (branching, pull requests, code reviews).
• Development Tools: Comfortable using Visual Studio Code (VS Code) or similar IDEs.

Additional Requirements
• Communication: Ability to communicate clearly in English (written and verbal).
• Collaboration: Experience working in distributed or remote teams.
• Problem Solving: Demonstrated ability to troubleshoot and debug issues independently.
• Learning: Willingness to learn new technologies and adapt to changing requirements.

Preferred but not required:
• Experience with CI/CD pipelines.
• Familiarity with Agile methodologies.
• Exposure to monitoring/logging tools (e.g., Prometheus, Grafana, ELK stack).

Job Type: Full-time
Pay: ₹1,100,000.00 - ₹1,300,000.00 per year
Schedule: UK shift
Work Location: In person
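The database requirement above asks for basic query writing and schema design against PostgreSQL or a similar relational database. As a hedged sketch, the example below uses Python's built-in sqlite3 module as a stand-in; the `orders` table and its rows are invented for illustration, showing a small schema plus a GROUP BY aggregation of the kind the role describes.

```python
import sqlite3

# In-memory database standing in for PostgreSQL; schema and data are
# invented for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (
        id       INTEGER PRIMARY KEY,
        customer TEXT NOT NULL,
        amount   REAL NOT NULL
    );
    INSERT INTO orders (customer, amount) VALUES
        ('acme', 120.0), ('acme', 80.0), ('globex', 40.0);
""")

# A typical aggregation query: total order value per customer.
rows = conn.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('acme', 200.0), ('globex', 40.0)]
```

The same SQL runs essentially unchanged on PostgreSQL (via a driver such as psycopg), which is why a lightweight engine works well for sketching schema and query basics.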
Posted 23 hours ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Manager, Quality Engineer

The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centres focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centres are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to our other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Centre helps ensure we can manage and improve each location: from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections, and share best practices across the Tech Centres.

Role Overview
• Develop and Implement Advanced Automated Testing Frameworks: Architect, design, and maintain sophisticated automated testing frameworks for data pipelines and ETL processes, ensuring robust data quality and reliability.
• Conduct Comprehensive Quality Assurance Testing: Lead the execution of extensive testing strategies, including functional, regression, performance, and security testing, to validate data accuracy and integrity across the bronze layer.
• Monitor and Enhance Data Reliability: Collaborate with the data engineering team to establish and refine monitoring and alerting systems that proactively identify data quality issues and system failures, implementing corrective actions as needed.

What Will You Do In This Role
In addition to the responsibilities outlined above, you will:
• Leverage Generative AI: Innovate and apply generative AI techniques to enhance testing processes, automate complex data validation scenarios, and improve overall data quality assurance workflows.
• Collaborate with Cross-Functional Teams: Serve as a key liaison between Data Engineers, Product Analysts, and other stakeholders to deeply understand data requirements and ensure that testing aligns with strategic business objectives.
• Document and Standardize Testing Processes: Create and maintain comprehensive documentation of testing procedures, results, and best practices, facilitating knowledge sharing and continuous improvement across the organization.
• Drive Continuous Improvement Initiatives: Lead efforts to develop and implement best practices for QA automation and reliability, including conducting code reviews, mentoring junior team members, and optimizing testing processes.

What You Should Have
Educational Background
• Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
Experience
• 4+ years of experience in QA automation, with a strong focus on data quality and reliability testing in complex data engineering environments.
Technical Skills
• Advanced proficiency in programming languages such as Python, Java, or similar for writing and optimizing automated tests.
• Extensive experience with testing frameworks and tools (e.g., Selenium, JUnit, pytest) and data validation tools, with a focus on scalability and performance.
• Deep familiarity with data processing frameworks (e.g., Apache Spark) and data storage solutions (e.g., SQL, NoSQL), including performance tuning and optimization.
• Strong understanding of generative AI concepts and tools, and their application in enhancing data quality and testing methodologies.
• Proficiency in using Jira Xray for advanced test management, including creating, executing, and tracking complex test cases and defects.
Analytical Skills
• Exceptional analytical and problem-solving skills, with a proven ability to identify, troubleshoot, and resolve intricate data quality issues effectively.
Communication Skills
• Outstanding verbal and written communication skills, with the ability to articulate complex technical concepts to both technical and non-technical stakeholders.

Preferred Qualifications
• Experience with Cloud Platforms: Extensive familiarity with cloud data services (e.g., AWS, Azure, Google Cloud) and their QA tools, including experience in cloud-based testing environments.
• Knowledge of Data Governance: In-depth understanding of data governance principles and practices, including data lineage, metadata management, and compliance requirements.
• Experience with CI/CD Pipelines: Strong knowledge of continuous integration and continuous deployment (CI/CD) practices and tools (e.g., Jenkins, GitLab CI), with experience in automating testing within CI/CD workflows.
• Certifications: Relevant certifications in QA automation or data engineering (e.g., ISTQB, AWS Certified Data Analytics) are highly regarded.
• Agile Methodologies: Proven experience working in Agile/Scrum environments, with a strong understanding of Agile testing practices and principles.

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who We Are
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada, and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What We Look For
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us and start making your impact today.
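The role above centres on automated data-quality testing with tools such as pytest. The sketch below shows the kind of checks such a suite might run against a pipeline's bronze-layer output; the records, field names, and value range are invented for illustration, and in a real suite each check would be a `test_...` function collected by pytest.

```python
# Minimal data-quality checks of the kind a pytest suite might run
# against bronze-layer output; the records below are invented.
records = [
    {"id": 1, "country": "IN", "amount": 150.0},
    {"id": 2, "country": "US", "amount": 99.5},
    {"id": 3, "country": "IN", "amount": 20.0},
]

def check_unique_ids(rows):
    """Primary-key uniqueness: no duplicate ids in the batch."""
    ids = [r["id"] for r in rows]
    return len(ids) == len(set(ids))

def check_no_nulls(rows, field):
    """Completeness: the given field is populated in every row."""
    return all(r.get(field) is not None for r in rows)

def check_amount_range(rows, low=0.0, high=10_000.0):
    """Validity: amounts fall inside a plausible (hypothetical) range."""
    return all(low <= r["amount"] <= high for r in rows)

assert check_unique_ids(records)
assert check_no_nulls(records, "country")
assert check_amount_range(records)
print("all bronze-layer checks passed")
```

The same assertions scale up by swapping the in-memory list for a DataFrame or Spark query result while keeping the check functions as the testable unit.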
#HYDIT2025

Search Firm Representatives - Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Flexible Work Arrangements: Hybrid
Required Skills: Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Software Development, Software Development Life Cycle (SDLC), System Designs
Job Posting End Date: 08/31/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R345312
Posted 23 hours ago
0.0 - 1.0 years
0 Lacs
Mumbai
On-site
Job Information
Industry: IT Services
Date Opened: 06/16/2025
Job Type: Software Engineering
Work Experience: 0-1 years
City: Mumbai
State/Province: Maharashtra
Country: India
Zip/Postal Code: 400080

Job Description
What we want: We are looking for an Intern DevOps Engineer with good experience in Linux and exposure to DevOps tools.

Who we are: Vertoz (NSEI: VERTOZ), an AI-powered MadTech and CloudTech Platform offering Digital Advertising, Marketing and Monetization (MadTech) & Digital Identity, and Cloud Infrastructure (CloudTech), caters to Businesses, Digital Marketers, Advertising Agencies, Digital Publishers, Cloud Providers, and Technology companies. For more details, please visit our website here.

What you will do:
• Linux: be comfortable with the command line (preferably on Ubuntu; completion of a course will be an advantage).
• Possess knowledge of AWS or an equivalent cloud services provider.
• Virtualization (KVM, VMware, or VirtualBox).
• Knowledge of networking (OSI, basic troubleshooting, Internet services).
• Knowledge of web technologies like Redis, Apache Tomcat, or Apache Web Server.
• Should know any SQL-based DB (MySQL, MariaDB, or PostgreSQL).
• Must be self-driven and able to follow and execute instructions specified in user guides.
• Knowledge of Jenkins, Ansible/Chef/Puppet, Git, and Docker preferred.
• Must be able to document activities, procedures, etc.

Requirements
• BE or BSc in CS/IT, ME in CS, or MSc in CS/IT
• Linux (RHCE/RHCSA) certification is a must
• Mumbai candidates only
• Willing to work in a 24x7 environment

Benefits
• No dress codes
• Flexible working hours
• 5 days working
• 24 Annual Leaves
• International Presence
• Celebrations
• Team outings
Posted 23 hours ago