4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
R1 is a leading provider of technology-driven solutions that help hospitals and health systems manage their financial systems and improve the patient experience. We are the one company that combines the deep expertise of a global workforce of revenue cycle professionals with the industry's most advanced technology platform, encompassing sophisticated analytics, AI, intelligent automation and workflow orchestration. R1 is a place where we think boldly to create opportunities for everyone to innovate and grow. A place where we partner with purpose through transparency and inclusion. We are a global community of engineers, front-line associates, healthcare operators, and RCM experts who work together to go beyond for all those we serve. Because we know that all this adds up to something more: a place where we're all together better.

R1 India is proud to be recognized amongst the Top 25 Best Companies to Work For 2024 by the Great Place to Work Institute. This is our second consecutive recognition on this prestigious Best Workplaces list, building on the Top 50 recognition we achieved in 2023. Our focus on employee wellbeing, inclusion and diversity is demonstrated through prestigious recognitions, with R1 India being ranked amongst Best in Healthcare and Top 100 Best Companies for Women by Avtar & Seramount, and amongst the Top 10 Best Workplaces in Health & Wellness. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to 'make healthcare work better for all' by enabling efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 16,000+ strong in India, with a presence in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated, with a robust set of employee benefits and engagement activities.

Designation: Lead Associate
Reports to (level of category): Individual – COA (Performance Management)

Role Objective
Identify revenue gain or denial prevention opportunities by reviewing open AR claims and denied claims.

Essential Duties and Responsibilities
- Denied claim reviews and account-level reviews
- Identify themes/trends through data reviews
- Coordinate with the relevant stakeholders on the issues/themes/trends identified
- Publish assigned reports and complete assigned tasks
- Analyze data to identify process gaps, prepare reports, and share findings for metrics improvement
- Identify automation and process-efficiency opportunities
- Maintain a strong focus on identifying the root cause of denials while creating sustainable solutions to prevent future denials
- Interact independently with counterparts as required
- Operate against aggressive operating metrics
- Maintain quality as per the required standards
- Understand client requests and requirements and develop a solution
- Create ad hoc reports using SQL/Snowflake, Excel, Power BI, or R1 in-house applications/tools

Required Skill Set
- Strong background in Denial Management/AR Follow-up (4-8 years of experience required)
- Ability to interact positively with team members, peer group, and seniors
- Good analytical skills and proficiency with MS Word, Excel, and PowerPoint
- Good communication skills (both written and verbal)

Qualifications
- Graduate in any discipline from a recognized educational institution
- Certifications in Power BI, Excel, or SQL/Snowflake would be an advantage

Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions.
Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate and create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit: r1rcm.com
Posted 2 days ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Locations: Noida/ Gurgaon/ Indore/ Bangalore/ Pune/ Hyderabad

Job Description
DevOps Architect with Docker and Kubernetes expertise. We are seeking a highly skilled DevOps Architect with deep expertise in Linux, Kubernetes, Docker, and related technologies. The ideal candidate will design, implement, and manage scalable, secure, and automated infrastructure solutions, ensuring the seamless integration of development and operational processes. You will be a key player in the architecture and implementation of CI/CD pipelines, managing infrastructure, container orchestration, and system monitoring.

Roles & Responsibilities
- Design and implement DevOps solutions that automate software delivery pipelines and infrastructure provisioning.
- Architect and maintain scalable Kubernetes clusters to manage containerized applications across multiple environments.
- Leverage Docker to build, deploy, and manage containerized applications in development, staging, and production environments.
- Optimize and secure Linux-based environments for application performance, reliability, and security.
- Collaborate with development teams to implement CI/CD pipelines using tools like Jenkins, GitLab CI, CircleCI, or similar.
- Monitor, troubleshoot, and improve system performance, security, and availability through effective monitoring and logging solutions (e.g., Prometheus, Grafana, ELK Stack).
- Automate configuration management and system provisioning tasks in on-premise environments.
- Implement security best practices and compliance measures, including secrets management, network segmentation, and vulnerability scanning.
- Mentor and guide junior DevOps engineers and promote best practices in DevOps, automation, and cloud-native architecture.
- Stay up to date with industry trends and evolving DevOps tools and technologies to continuously improve systems and processes.

Required Skills and Experience
- 10+ years of experience in IT infrastructure, DevOps, or systems engineering.
- Strong experience with Linux systems administration (Red Hat, Ubuntu, CentOS).
- 3+ years of hands-on experience with Kubernetes in production environments, including managing and scaling clusters.
- Extensive knowledge of Docker for building, deploying, and managing containers.
- Proficiency with CI/CD tools such as Jenkins, GitLab CI, Bamboo, or similar.
- Familiarity with monitoring and logging solutions (Prometheus, Grafana, ELK Stack, etc.).
- Strong understanding of networking, security best practices, and cloud-based security solutions.
- Hands-on experience with scripting and automation tools like Bash and Python.
- Excellent troubleshooting, problem-solving, and analytical skills.
- Experience with Git or other version control systems.

Good-to-have Skills
- Experience with service mesh technologies (e.g., Istio, Linkerd) and API gateways.
- Familiarity with container security tools such as Aqua Security, Twistlock, or similar.
- Familiarity with Kafka, RabbitMQ, SOLR.
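As a loose illustration of the cluster-monitoring side of this role, the sketch below uses the official Kubernetes Python client to report deployments whose ready replica count has fallen below spec. It is a minimal example under stated assumptions: the `kubernetes` client library is installed, a kubeconfig with read access is available, and the namespace is a placeholder.

```python
# Minimal sketch: flag Kubernetes deployments that are not fully ready.
# Assumes the official `kubernetes` Python client and a local kubeconfig
# with read access to the target namespace (both are assumptions).
from kubernetes import client, config


def report_unready_deployments(namespace: str = "default") -> list[str]:
    """Return names of deployments whose ready replicas are below spec."""
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    unready = []
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            unready.append(f"{dep.metadata.name}: {ready}/{desired} ready")
    return unready


if __name__ == "__main__":
    for line in report_unready_deployments("default"):
        print(line)
```

In practice a check like this would feed an alerting or dashboard pipeline (Prometheus, Grafana, chat notifications) rather than printing to stdout.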
Posted 2 days ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Reference # 323131BR
Job Type: Full Time

Your role
The individual in this role will be accountable for successful and timely delivery of projects in an agile environment where digital products are designed and built using cutting-edge technology for WMA clients and Advisors. It is a DevOps role that entails working with teams located in the US and India. This role will include, but not be limited to, the following:
- maintain and build CI/CD pipelines
- migrate applications to a cloud environment
- build scripts and dashboards for monitoring application health
- build tools to reduce the occurrence of errors and improve customer experience
- deploy changes in prod and non-prod environments
- follow release management processes for application releases
- maintain stability of non-prod environments
- work with development, QA and support groups in troubleshooting environment issues

Your team
You'll be working as an engineering leader in the Client Data and Onboarding Team in India. We are responsible for WMA (Wealth Management Americas) client-facing technology applications. This leadership role entails working with teams in the US and India. You will play an important role in ensuring a scalable development methodology is followed across multiple teams, and you will participate in strategy discussions with the business and in technology strategy discussions with architects. Our culture centers around innovation, partnership, transparency, and passion for the future. Diversity helps us grow, together. That's why we are committed to fostering and advancing diversity, equity, and inclusion. It strengthens our business and brings value to our clients.

Your expertise
You should carry 8+ years of experience to:
- develop, build and maintain GitLab CI/CD pipelines
- use containerization technologies, orchestration tools (Kubernetes), build tools (Maven, Gradle), VCS (GitLab), Sonar and Fortify to build a robust deploy and release infrastructure
- deploy changes to prod and non-prod Azure cloud infrastructure using Helm, Terraform and Ansible, and set up appropriate observability measures
- build scripts (Bash, Python, Puppet) and dashboards for monitoring the health of applications (AppDynamics, Splunk, App Insights)
- possess basic networking knowledge (load balancing, SSH, certificates) and middleware knowledge (MQ, Kafka, Azure Service Bus, Event Hub)
- follow release management processes for application releases
- maintain stability of non-prod environments
- work with development, QA and support groups in troubleshooting environment issues

About Us
UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

How We Hire
We may request you to complete one or more assessments during the application process. Learn more

Join us
At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We're dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That's why collaboration is at the heart of everything we do.
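The "build scripts and dashboards for monitoring application health" item above could be approached many ways; one hedged sketch, assuming Python with the `requests` and `prometheus_client` libraries and a hypothetical list of internal health endpoints, is shown below. The URLs and listen port are illustrative assumptions, not values from the posting.

```python
# Minimal sketch: expose per-endpoint health as a Prometheus gauge so a
# Grafana/AppDynamics-style dashboard can chart it over time.
import time

import requests
from prometheus_client import Gauge, start_http_server

# Hypothetical internal health endpoints (placeholders).
ENDPOINTS = {
    "onboarding-api": "https://onboarding.internal.example.com/health",
    "client-data-api": "https://clientdata.internal.example.com/health",
}

health_gauge = Gauge("app_health_up", "1 if the health endpoint returned 200", ["app"])


def poll_once() -> None:
    for app, url in ENDPOINTS.items():
        try:
            ok = requests.get(url, timeout=5).status_code == 200
        except requests.RequestException:
            ok = False
        health_gauge.labels(app=app).set(1 if ok else 0)


if __name__ == "__main__":
    start_http_server(9100)  # scrape target for Prometheus
    while True:
        poll_once()
        time.sleep(30)
```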
Because together, we're more than ourselves. We're committed to disability inclusion, and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us.

Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 2 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Locations: Noida/ Gurgaon/ Indore/ Bangalore/ Pune/ Hyderabad. This is the same DevOps Architect (Docker/Kubernetes) posting described in full earlier in this list; see that listing for the job description, responsibilities, and required skills.
Posted 2 days ago
5.0 years
5 - 9 Lacs
Hyderābād
Remote
About the role
As a Senior DevOps Engineer focused on Vulnerability Remediation within Infrastructure Engineering and Cloud Operations (IECO), you will contribute to the development of our solution delivery platforms supporting our web-based applications on the latest cloud technologies within a DevSecOps culture. You will have the opportunity to utilize automation technologies and private/public cloud technologies to provide world-class solutions that serve the non-profit industry and ensure the security of the environment.

What you'll do
- Build automation leveraging CI/CD processes, automated testing, unit testing, code coverage and other software development best practices
- Contribute to reusable automation scripts, libraries, services, and tools to increase system and process efficiencies
- Partner with the security teams and tools to continually review and understand new industry security threats and associated technologies, and quickly address vulnerabilities
- Partner with the application management teams to continually review and understand the impact of resolving open vulnerabilities, and execute those resolutions
- Pursue opportunities to further operational excellence by increasing efficiency and reducing risk, complexity, waste and cost
- Partner with key stakeholders to establish technical direction and negotiate technical decision points to drive innovative solutions
- Drive technical design and validation, while ensuring implementation aligns with our technical strategies and strategic business goals
- Develop architectural designs for applications, building something to delight clients while managing the costs of delivering these applications

What you'll bring
- 5+ years of experience with common web technologies required – C#, .NET, Java or another equivalent object-oriented language
- 5+ years of experience in the implementation of cloud technologies (Microsoft Azure) and an understanding of SaaS, PaaS, and IaaS models
- Experience building high-performance, scalable, robust, 24x7 environments and/or applications
- Experience creating scripts or automation, such as Perl, PowerShell, Python, TCL/TK, Ruby or similar, for cloud orchestration (required; PowerShell preferred)
- Available on a 24x7x365 basis when needed for production-impacting incidents or key customer events
- Ability to create quality code that is secure and operable at scale

Stay up to date on everything Blackbaud. Blackbaud is a digital-first company which embraces a flexible remote or hybrid work culture. Blackbaud supports hiring and career development for all roles from the location you are in today!

Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status or any other basis protected by federal, state, or local law.
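The vulnerability-remediation duties above lend themselves to simple triage automation. Below is a minimal, hedged sketch that assumes scanner findings have been exported to a CSV with `host`, `cve`, and `severity` columns (a hypothetical format, not one named in the posting) and summarizes the backlog by severity so remediation can be prioritized.

```python
# Minimal sketch: group vulnerability-scanner findings by severity.
# The CSV layout (host, cve, severity) is an illustrative assumption.
import csv
from collections import Counter, defaultdict


def summarize_findings(path: str) -> None:
    by_severity: Counter = Counter()
    hosts_by_cve: dict = defaultdict(set)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            severity = row["severity"].strip().lower()
            by_severity[severity] += 1
            hosts_by_cve[row["cve"]].add(row["host"])

    for severity in ("critical", "high", "medium", "low"):
        print(f"{severity:>8}: {by_severity.get(severity, 0)} findings")

    widest = max(hosts_by_cve, key=lambda cve: len(hosts_by_cve[cve]), default=None)
    if widest:
        print(f"Most widespread CVE: {widest} on {len(hosts_by_cve[widest])} hosts")


if __name__ == "__main__":
    summarize_findings("scanner_export.csv")  # placeholder filename
```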
Posted 2 days ago
10.0 years
5 - 9 Lacs
Hyderābād
Remote
Job Posting Description

About the role
As a Principal DevOps Engineer focused on Vulnerability Remediation within Infrastructure Engineering and Cloud Operations (IECO), you will contribute to the development of our solution delivery platforms supporting our web-based applications on the latest cloud technologies within a DevSecOps culture. You will have the opportunity to utilize automation technologies and private/public cloud technologies to provide world-class solutions that serve the non-profit industry and ensure the security of the environment.

What you'll do
- Build automation leveraging CI/CD processes, automated testing, unit testing, code coverage and other software development best practices
- Contribute to reusable automation scripts, libraries, services, and tools to increase system and process efficiencies
- Partner with the security teams and tools to continually review and understand new industry security threats and associated technologies, and quickly address vulnerabilities
- Partner with the application management teams to continually review and understand the impact of resolving open vulnerabilities, and execute those resolutions
- Pursue opportunities to further operational excellence by increasing efficiency and reducing risk, complexity, waste and cost
- Partner with key stakeholders to establish technical direction and negotiate technical decision points to drive innovative solutions
- Drive technical design and validation, while ensuring implementation aligns with our technical strategies and strategic business goals
- Develop architectural designs for applications, building something to delight clients while managing the costs of delivering these applications

What you'll bring
- 10+ years of experience with common web technologies required – C#, .NET, Java or another equivalent object-oriented language
- 10+ years of experience in the implementation of cloud technologies (Microsoft Azure) and an understanding of SaaS, PaaS, and IaaS models
- Experience building high-performance, scalable, robust, 24x7 environments and/or applications
- Experience creating scripts or automation, such as Perl, PowerShell, Python, TCL/TK, Ruby or similar, for cloud orchestration (required; PowerShell preferred)
- Available on a 24x7x365 basis when needed for production-impacting incidents or key customer events
- Ability to create quality code that is secure and operable at scale

Stay up to date on everything Blackbaud. Blackbaud is a digital-first company which embraces a flexible remote or hybrid work culture. Blackbaud supports hiring and career development for all roles from the location you are in today!

Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status or any other basis protected by federal, state, or local law.
Posted 2 days ago
5.0 - 7.0 years
4 - 10 Lacs
Hyderābād
On-site
Description
The U.S. Pharmacopeial Convention (USP) is an independent scientific organization that collaborates with the world's top authorities in health and science to develop quality standards for medicines, dietary supplements, and food ingredients. USP's fundamental belief that Equity = Excellence manifests in our core value of Passion for Quality through our more than 1,300 hard-working professionals across twenty global locations to deliver the mission to strengthen the supply of safe, quality medicines and supplements worldwide.

At USP, we value inclusivity for all. We recognize the importance of building an organizational culture with meaningful opportunities for mentorship and professional growth. From the standards we create, the partnerships we build, and the conversations we foster, we affirm the value of Diversity, Equity, Inclusion, and Belonging in building a world where everyone can be confident of quality in health and healthcare. USP is proud to be an equal employment opportunity employer (EEOE) and affirmative action employer. We are committed to creating an inclusive environment in all aspects of our work—an environment where every employee feels fully empowered and valued irrespective of, but not limited to, race, ethnicity, physical and mental abilities, education, religion, gender identity, and expression, life experience, sexual orientation, country of origin, regional differences, work experience, and family status. We are committed to working with and providing reasonable accommodation to individuals with disabilities.

Brief Job Overview
The Digital & Innovation group at USP is seeking Full Stack Developers with programming skills in cloud technologies to build innovative digital products. We are seeking someone who understands the power of digitization and helps drive an amazing digital experience for our customers.

How will YOU create impact here at USP?
In this role at USP, you contribute to USP's public health mission of increasing equitable access to high-quality, safe medicine and improving global health through public standards and related programs. In addition, as part of our commitment to our employees, Global, People, and Culture, in partnership with the Equity Office, regularly invests in the professional development of all people managers. This includes training in inclusive management styles and other competencies necessary to ensure engaged and productive work environments.

The Sr. Software Engineer/Software Engineer has the following responsibilities:
- Build scalable applications/platforms using cutting-edge cloud technologies.
- Constantly review and upgrade the systems based on governance principles and security policies.
- Participate in code reviews, architecture discussions, and agile development processes to ensure high-quality, maintainable, and scalable code.
- Document and communicate technical designs, processes, and solutions to both technical and non-technical stakeholders.

Who is USP Looking For?
The successful candidate will have a demonstrated understanding of our mission, commitment to excellence through inclusive and equitable behaviors and practices, and ability to quickly build credibility with stakeholders, along with the following competencies and experience:

Education
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field

Experience
- Sr. Software Engineer: 5-7 years of experience in software development, with a focus on cloud computing
- Software Engineer: 2-4 years of experience in software development, with a focus on cloud computing
- Strong knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud) and services, including compute, storage, networking, and security
- Extensive knowledge of Java Spring Boot applications and design principles
- Strong programming skills in languages such as Python
- Good experience with AWS/Azure services, such as EC2, S3, IAM, Lambda, RDS, DynamoDB, API Gateway, and CloudFormation (a brief illustrative sketch of this kind of AWS usage appears after this listing)
- Knowledge of cloud architecture patterns, best practices, and security principles
- Familiarity with data pipeline/ETL/orchestration tools, such as Apache NiFi, AWS Glue, or Apache Airflow
- Good experience with front-end technologies like React.js/Node.js etc.
- Strong experience in microservices and automated testing practices
- Experience leading initiatives related to continuous improvement or implementation of new technologies
- Works independently on most deliverables
- Strong analytical and problem-solving skills, with the ability to develop creative solutions to complex problems
- Ability to manage multiple projects and priorities in a fast-paced, dynamic environment

Additional Desired Preferences
- Experience with scientific chemistry nomenclature or prior work experience in life sciences, chemistry, or hard sciences, or a degree in sciences
- Experience with pharmaceutical datasets and nomenclature
- Experience with containerization technologies, such as Docker and Kubernetes, is a plus
- Experience working with knowledge graphs
- Ability to explain complex technical issues to a non-technical audience
- Self-directed and able to handle multiple concurrent projects and prioritize tasks independently
- Able to make tough decisions when trade-offs are required to deliver results
- Strong communication skills required: verbal, written, and interpersonal

Supervisory Responsibilities
No

Benefits
USP provides the benefits to protect yourself and your family today and tomorrow. From company-paid time off and comprehensive healthcare options to retirement savings, you can have peace of mind that your personal and financial well-being is protected.

Who is USP?
The U.S. Pharmacopeial Convention (USP) is an independent scientific organization that collaborates with the world's top authorities in health and science to develop quality standards for medicines, dietary supplements, and food ingredients. USP's fundamental belief that Equity = Excellence manifests in our core value of Passion for Quality through our more than 1,300 hard-working professionals across twenty global locations to deliver the mission to strengthen the supply of safe, quality medicines and supplements worldwide. At USP, we value inclusivity for all. We recognize the importance of building an organizational culture with meaningful opportunities for mentorship and professional growth. From the standards we create, the partnerships we build, and the conversations we foster, we affirm the value of Diversity, Equity, Inclusion, and Belonging in building a world where everyone can be confident of quality in health and healthcare. USP is proud to be an equal employment opportunity employer (EEOE) and affirmative action employer.
We are committed to creating an inclusive environment in all aspects of our work—an environment where every employee feels fully empowered and valued irrespective of, but not limited to, race, ethnicity, physical and mental abilities, education, religion, gender identity, and expression, life experience, sexual orientation, country of origin, regional differences, work experience, and family status. We are committed to working with and providing reasonable accommodation to individuals with disabilities.
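As referenced in the Experience section above, hands-on AWS usage (S3, Lambda, and related services) is central to this role. A minimal, hedged sketch using `boto3` is shown below; the bucket, key, and function names are placeholders, and credentials are assumed to come from the standard AWS environment/profile chain.

```python
# Minimal sketch: store an artifact in S3 and invoke a Lambda function.
# Bucket, key, and function names are hypothetical; boto3 resolves
# credentials from the usual environment/profile chain.
import json

import boto3


def upload_and_invoke(local_path: str) -> dict:
    s3 = boto3.client("s3")
    s3.upload_file(local_path, "example-usp-artifacts", "inbound/report.csv")

    lam = boto3.client("lambda")
    response = lam.invoke(
        FunctionName="process-inbound-report",  # placeholder function name
        Payload=json.dumps({"bucket": "example-usp-artifacts",
                            "key": "inbound/report.csv"}).encode(),
    )
    return json.loads(response["Payload"].read())


if __name__ == "__main__":
    print(upload_and_invoke("report.csv"))
```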
Posted 2 days ago
2.0 years
5 - 9 Lacs
Hyderābād
Remote
About the role
As a Staff DevOps Engineer focused on Site Reliability within Infrastructure Engineering and Cloud Operations (IECO), you will contribute to the development of our solution delivery platforms supporting our web-based applications on the latest cloud technologies within a DevSecOps culture. You will have the opportunity to utilize automation technologies and private/public cloud technologies to provide world-class solutions that serve the non-profit industry.

What you'll do
- Build automation leveraging CI/CD processes, automated testing, unit testing, code coverage and other software development best practices
- Contribute to reusable automation scripts, libraries, services, and tools to increase system and process efficiencies
- Partner with the security teams and tools to continually review and understand new industry security threats and associated technologies, and quickly address vulnerabilities
- Pursue opportunities to further operational excellence by increasing efficiency and reducing risk, complexity, waste and cost
- Partner with key stakeholders to establish technical direction and negotiate technical decision points to drive innovative solutions
- Drive technical design and validation, while ensuring implementation aligns with our technical strategies and strategic business goals
- Develop architectural designs for applications, building something to delight clients while managing the costs of delivering these applications

What you'll bring
- 2+ years of experience with common web technologies required – JavaScript, C#, .NET, HTML, AJAX or another equivalent object-oriented language
- 2+ years of experience in the implementation of cloud technologies (Microsoft Azure) and an understanding of SaaS, PaaS, and IaaS models
- Experience building high-performance, scalable, robust, 24x7 environments and/or applications
- Experience creating scripts or automation, such as Perl, PowerShell, Python, TCL/TK, Ruby or similar, for cloud orchestration (required; PowerShell preferred)
- Available on a 24x7x365 basis when needed for production-impacting incidents or key customer events
- Ability to develop quality code that is secure and operable at scale

Stay up to date on everything Blackbaud. Blackbaud is a digital-first company which embraces a flexible remote or hybrid work culture. Blackbaud supports hiring and career development for all roles from the location you are in today!

Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status or any other basis protected by federal, state, or local law.
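For the "automated testing, unit testing, code coverage" item above, a tiny illustrative pytest example is sketched below; the helper function and test module name are assumptions used purely for illustration. A CI stage could run it with `pytest --cov` (via the pytest-cov plugin) to produce coverage data.

```python
# test_slug.py - minimal sketch of a unit test a CI stage could run for
# coverage reporting. The slugify helper below is hypothetical.
import re

import pytest


def slugify(text: str) -> str:
    """Lowercase, trim, and replace runs of non-alphanumerics with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.strip().lower()).strip("-")


@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("Hello World", "hello-world"),
        ("  Already-clean  ", "already-clean"),
        ("Symbols *&^% removed", "symbols-removed"),
    ],
)
def test_slugify(raw: str, expected: str) -> None:
    assert slugify(raw) == expected
```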
Posted 2 days ago
3.0 - 5.0 years
3 - 6 Lacs
Hyderābād
On-site
Job Description: GCP DevOps – Senior Programmer Analyst
We are looking for a highly skilled and experienced GCP DevOps engineer to work as a Senior Programmer Analyst.

Technical Expertise
- 3 to 5 years of Cloud DevOps experience.
- Proven experience with GCP services, particularly BigQuery.
- Strong understanding of cloud solution design patterns and DevOps best practices.
- Hands-on experience with CI/CD pipelines using Jenkins, GitLab, or similar tools.
- Proficiency in Python and Bash scripting.
- Solid knowledge of container orchestration tools like Kubernetes.
- Familiarity with monitoring and logging tools (Grafana, Prometheus).
- Excellent communication and client-facing skills.
- Demonstrated experience working in an Agile/SCRUM environment.

Key Responsibilities
- Design, build, and maintain scalable infrastructure solutions using GCP services including BigQuery (mandatory) and Dataflow.
- Implement Infrastructure as Code (IaC) using tools such as Terraform.
- Automate deployment pipelines using Jenkins, GitLab CI/CD, Nexus, or equivalent tools.
- Develop and maintain automation scripts using Python, Bash, and optionally Ansible.
- Manage and orchestrate containerized applications using Docker and Kubernetes.
- Write efficient SQL scripts, preferably for BigQuery.
- Operate within Unix/Linux environments, including shell scripting.
- Collaborate in an Agile/SCRUM development environment.
- Present solutions and technical designs to internal stakeholders and executive leadership.
- Design and implement cloud-native architectures using microservices, distributed caching, messaging, IAM, and other cloud design patterns.
- Experience with Google Dataproc.
- Experience with Ansible for configuration management.
- Familiarity with other CI tools like Travis CI and CircleCI.
- Monitor systems using tools such as Grafana and Prometheus.
- Understanding of the SDLC.
- Understanding of Agile methodologies.
- Communicate with the customer and produce the daily status report.
- Should have good oral and written communication.
- Should be a good team player.
- Should be proactive and adaptive.
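BigQuery is called out as mandatory above. The following minimal sketch, assuming the `google-cloud-bigquery` client library and application-default credentials, shows the kind of scripted query a pipeline health check might run. The project, dataset, and column names are placeholders.

```python
# Minimal sketch: run a BigQuery query from Python and print row counts per day.
# Assumes google-cloud-bigquery is installed and application-default
# credentials are configured; table and column names are hypothetical.
from google.cloud import bigquery

QUERY = """
SELECT DATE(ingested_at) AS day, COUNT(*) AS row_count
FROM `my-project.analytics.events`
WHERE ingested_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY day
ORDER BY day
"""


def main() -> None:
    client = bigquery.Client()
    for row in client.query(QUERY).result():
        print(f"{row.day}: {row.row_count} rows")


if __name__ == "__main__":
    main()
```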
Posted 2 days ago
5.0 - 7.0 years
3 - 6 Lacs
Hyderābād
On-site
Job Description: GCP DevOps – Lead Programmer Analyst
We are looking for a highly skilled and experienced GCP DevOps engineer to work as a Lead Programmer Analyst.

Technical Expertise
- 5 to 7 years of Cloud DevOps experience.
- Proven experience with GCP services, particularly BigQuery.
- Strong understanding of cloud solution design patterns and DevOps best practices.
- Hands-on experience with CI/CD pipelines using Jenkins, GitLab, or similar tools.
- Proficiency in Python and Bash scripting.
- Solid knowledge of container orchestration tools like Kubernetes.
- Familiarity with monitoring and logging tools (Grafana, Prometheus).
- Excellent communication and client-facing skills.
- Demonstrated experience working in an Agile/SCRUM environment.

Key Responsibilities
- Design, build, and maintain scalable infrastructure solutions using GCP services including BigQuery (mandatory) and Dataflow.
- Implement Infrastructure as Code (IaC) using tools such as Terraform.
- Automate deployment pipelines using Jenkins, GitLab CI/CD, Nexus, or equivalent tools.
- Develop and maintain automation scripts using Python, Bash, and optionally Ansible.
- Manage and orchestrate containerized applications using Docker and Kubernetes.
- Write efficient SQL scripts, preferably for BigQuery.
- Operate within Unix/Linux environments, including shell scripting.
- Collaborate in an Agile/SCRUM development environment.
- Present solutions and technical designs to internal stakeholders and executive leadership.
- Design and implement cloud-native architectures using microservices, distributed caching, messaging, IAM, and other cloud design patterns.
- Experience with Google Dataproc.
- Experience with Ansible for configuration management.
- Familiarity with other CI tools like Travis CI and CircleCI.
- Monitor systems using tools such as Grafana and Prometheus.
- Understanding of the SDLC.
- Understanding of Agile methodologies.
- Communicate with the customer and produce the daily status report.
- Should have good oral and written communication.
- Should be a good team player.
- Should be proactive and adaptive.
Posted 2 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
TransUnion's Job Applicant Privacy Notice

What We'll Bring
Experienced Sr. Analyst responsible for providing application operations support for critical business applications, ensuring system stability and resolving incidents within SLA. Collaborates with cross-functional teams to troubleshoot issues, monitor performance, and implement process improvements. Mentors the junior team and is proficient in leveraging the latest DevOps tools and practices (Docker, K8s, containerization, cloud) and various monitoring tools to enhance efficiency.

What You'll Bring
- Provide application operations support for critical business applications, ensuring high availability, quick incident resolution, and minimal business disruption.
- Proactively monitor application and system health using tools like Grafana, Splunk, and AppDynamics; respond to alerts and system anomalies.
- Troubleshoot and resolve incidents, perform root cause analysis, and work collaboratively with development and infrastructure teams on permanent fixes. Excellent working knowledge of Linux, SQL, Splunk, Grafana and various other monitoring tools (AppDynamics, Spotfire) is expected.
- Document knowledge base articles, RCA reports, and support runbooks to streamline operational workflows and ensure team alignment.
- Participate in a 24x7 shift and on-call support rotation, ensuring timely handling of high-priority incidents and escalations.
- Follow ITIL processes such as Incident, Problem, and Change Management; experience with tools like ServiceNow or BMC Remedy is preferred.
- Support deployments, release coordination, and post-deployment validation as part of the release and change management cycle.
- Work with modern DevOps tools like Git, Jenkins, Docker, Kubernetes, and CI/CD pipelines in cloud-based environments (AWS/Azure).
- Mentor and guide junior support analysts, fostering knowledge sharing and best practices for consistent service delivery.
- Communicate clearly and professionally with stakeholders, providing timely updates, impact assessments, and issue resolution plans.
- Bachelor's degree in Computer Science, IT, or a related field.
- Certifications: ITIL Foundation (required); any of the following are a plus: AWS Cloud Practitioner, Microsoft Azure Fundamentals, Docker/Kubernetes certifications, or DevOps-related credentials.
- Excellent written and verbal communication skills, with a focus on clarity, responsiveness, and stakeholder engagement.

Impact You'll Make
- Strong hands-on expertise in Linux/Unix environments is mandatory, including shell scripting and system troubleshooting.
- Experienced in ITSM tools like BMC Remedy and ServiceNow for incident, problem, and service request tracking.
- Hands-on experience in containerization and orchestration using Docker and Kubernetes; working knowledge of monitoring/logging tools (Grafana, Splunk).
- Familiarity with cloud-based applications and environments, with the ability to support and troubleshoot distributed systems.
- Proficiency in SQL for data investigation and support, with the ability to write queries and analyze logs for issue resolution.
- Automation experience is an added advantage.

This is a hybrid position and involves regular performance of job responsibilities virtually as well as in person at an assigned TU office location for a minimum of two days a week.

TransUnion Job Title: Sr Analyst, Applications Support
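Much of the troubleshooting described above (Linux, Splunk-style log review, SQL-based investigation) starts with summarizing application logs. A minimal, standard-library-only sketch is below; the log path and the leading timestamp format are assumptions for illustration.

```python
# Minimal sketch: count ERROR lines per hour in an application log.
# The log path and the "YYYY-MM-DD HH:MM:SS" timestamp prefix are
# illustrative assumptions.
import re
from collections import Counter

LINE_RE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}):\d{2}:\d{2}.*\bERROR\b")


def errors_per_hour(log_path: str) -> Counter:
    counts: Counter = Counter()
    with open(log_path, errors="replace") as fh:
        for line in fh:
            match = LINE_RE.match(line)
            if match:
                counts[match.group(1)] += 1  # key like "2024-05-01 13"
    return counts


if __name__ == "__main__":
    for hour, n in sorted(errors_per_hour("/var/log/app/application.log").items()):
        print(f"{hour}:00  {n} errors")
```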
Posted 2 days ago
15.0 years
0 Lacs
Hyderābād
On-site
Project Role: DevOps Engineer
Project Role Description: Responsible for building and setting up new development tools and infrastructure utilizing knowledge in continuous integration, delivery, and deployment (CI/CD), cloud technologies, container orchestration and security. Build and test end-to-end CI/CD pipelines, ensuring that systems are safe against security threats.
Must-have skills: Kubernetes
Good-to-have skills: Ansible on Microsoft Azure, Terraform, Jenkins
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary
As a DevOps Engineer, you will be responsible for building and setting up new development tools and infrastructure. A typical day involves utilizing your expertise in continuous integration, delivery, and deployment, as well as cloud technologies and container orchestration. You will work on ensuring that systems are secure against potential threats while collaborating with various teams to enhance the development process and streamline operations.

Roles & Responsibilities
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and optimize the performance of CI/CD pipelines.

Technical Skills
- Must-have skills: proficiency in DevOps, EKS, Helm charts, Ansible, Terraform and Docker.
- Experience and skills in setting up infrastructure on AWS cloud with EKS and Helm charts.
- Proficient in developing CI/CD pipelines using Jenkins/GitHub or other CI/CD tools.
- Ability to debug and fix issues in environment setup and in CI/CD pipelines.
- Knowledge and experience automating infrastructure and application setup using Ansible and Terraform.
- Good-to-have skills: experience with continuous integration and continuous deployment tools.
- Strong understanding of cloud services and infrastructure management.
- Familiarity with containerization technologies such as OpenShift or similar.
- Experience in scripting languages for automation and configuration management.

Additional Information
- The candidate should have a minimum of 5 years of experience in DevOps.
- This position is based in Hyderabad.
- 15 years of full-time education is required.
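One way to read the "CI/CD pipelines with EKS and Helm charts" requirement above is as scripted deploy-and-verify steps. The hedged Python sketch below shells out to the `helm` and `kubectl` CLIs, both assumed to be installed and already authenticated against the target EKS cluster; the release, chart path, namespace, and image tag are placeholders.

```python
# Minimal sketch: upgrade/install a Helm release, then wait for the rollout.
# Assumes helm and kubectl are on PATH and authenticated to the target
# cluster; all names below are placeholders.
import subprocess


def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def deploy(release: str, chart_path: str, namespace: str, image_tag: str) -> None:
    run([
        "helm", "upgrade", "--install", release, chart_path,
        "--namespace", namespace, "--create-namespace",
        "--set", f"image.tag={image_tag}",  # chart-specific value (assumed)
    ])
    run([
        "kubectl", "rollout", "status", f"deployment/{release}",
        "--namespace", namespace, "--timeout=180s",
    ])


if __name__ == "__main__":
    deploy("payments-api", "./charts/payments-api", "staging", "1.4.2")
```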
Posted 2 days ago
18.0 years
2 - 7 Lacs
Hyderābād
On-site
AI-First. Future-Driven. Human-Centered.
At OpenText, AI is at the heart of everything we do—powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.

Your Impact
We are looking for an execution-focused Director of Product Management to lead our strategy and roadmap spanning multi-tenant cloud and off-cloud solutions. The ideal candidate has deep product knowledge in automation technologies including RPA, workflow orchestration, low-code and AI-driven process automation.

What the Role Offers
The Director of Product Management will be responsible for defining and evolving the product vision and roadmap for our process automation and low-code solutions to align with business goals. This includes a mature on-premises solution as well as a SaaS-based solution. You will identify opportunities to leverage machine learning and low-code/no-code capabilities to enhance automation outcomes. You will lead the strategy for a team of product managers spanning multiple solutions. You will drive a high-performing environment that thrives on innovation, working closely with engineering, UX, Sales and Solutions Consultants to deliver a scalable, highly performant solution. You will engage with Sales and customers to understand pain points and develop solutions that address them through automation. You will understand industry trends and conduct regular competitive analysis to deliver best-in-class solutions. You will define and track KPIs such as customer adoption and win/loss analysis to inform priorities and product improvement.

What You Need to Succeed
- 18+ years in software product management with at least three years in a leadership role.
- Strong understanding of automation technologies.
- Excellent communication skills with the ability to present to all levels of management.
- Experience with Agile methodologies.
- Bachelor's Degree in Computer Science, Engineering or Business.

OpenText is an equal opportunity employer that hires and attracts talent regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, marital status, sex, age, veteran status, or sexual orientation. At OpenText we acknowledge, value and respect diversity. We draw on diversity of thought and experience to reflect the rich array of cultures representing our broad global customer base. OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please submit a ticket at Ask HR. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.
Posted 2 days ago
3.0 years
4 - 10 Lacs
Hyderābād
On-site
About the job
Sanofi is a global pharmaceuticals and biologics company headquartered in Paris, France, and a leader in the research and development, manufacturing, and marketing of pharmaceutical drugs principally in the prescription market. The firm also develops well-known over-the-counter medication. The company covers seven major therapeutic areas: cardiovascular, central nervous system, diabetes, internal medicine, oncology, thrombosis and vaccines. It is the world's largest producer of vaccines.

Sanofi has recently embarked on a vast and ambitious digital transformation program. A first step in this transformation was bringing all IT, Digital and Data functions under a Global Chief Digital Officer reporting to Sanofi's CEO. The new Digital organization is implementing a 3-year strategy that will drive business growth, operating income and cost efficiency from enterprise-wide agile digital transformation. The digital roadmap will facilitate the acceleration of R&D drug discovery, intelligent supply chain, manufacturing digital factory of the future and commercial performance, bringing better drugs and vaccines to patients faster, to improve health and save lives. It is our aspiration to be a leader in biopharmaceuticals, driven by world-class digital technology, to improve people's lives everywhere. We put our colleagues on the highest-value work, where they can best build their industry-leading technical and business expertise in digital technology (digital experience, automation, software-defined networks, cloud technologies, integration technologies, network security, digital workplace). We make Sanofi a great place to work with digital capabilities. We leverage the best and brightest leaders and technical talent to build systems, rearchitect business processes, generate value and drive competitive advantage.

Candidate Profile
The ServiceNow Administrator will create governance standards and processes, validate data accuracy, and develop documentation for multiple modules. The Administrator will work closely with the Architect to take direction and help create an environment of empowerment for the internal team. This position involves frequent interaction and collaboration with a variety of IT and business team members, assisting with processes, developments, requirements gathering, upgrades and cloning, and providing any needed guidance, support, and maintenance on the ServiceNow platform. The role(s) will take direction from the platform architect and platform leader.

What you will be doing
- Configure and enhance core applications including, but not limited to, Service Catalog, Service Portal, Knowledge Base, Platform, and Reporting.
- Understand core modules within ServiceNow including, but not limited to, ITSM, ITAM, ITBM, ITOM, HRSD, CSM and App Engine.
- Conduct Incident and Request Management: resolve business incident and request ServiceNow tickets independently.
- Support implemented and proposed solutions on the ServiceNow platform.
- Load, manipulate, and maintain data between ServiceNow and other systems.
- Participate in deployment of features and any ServiceNow releases.
- Perform code reviews and ensure development standards are met.
- Work closely with business stakeholders to draft requirements and solve business problems.
- Multitask and work with multiple products.
- Identify opportunities to improve overall quality of the platform using health scan, ATF, etc.
- CSA, CAD or a mainline certification is a plus.

Qualifications
- Bachelor's Degree in Computer Science, Information Technology, Architecting, or a related field/certification preferred.
- 5+ years of applied experience and certification across an array of critical ServiceNow IT modules (i.e. ITSM, ITOM, ITBM, HRSD, CSM, IRM, SecOps, Vulnerability Response, Service Portal, SAM Pro, Integration Hub, and/or Performance Analytics).
- Extensive experience using Flow Designer and Integration Hub.
- Prior development experience using JavaScript/Perl/PHP on the ServiceNow platform.
- Extensive applied experience in the design and architecture of ServiceNow HR Service modules.
- Experience with functional ServiceNow integrations (e.g., REST APIs, LDAP, Active Directory, JDBC, Orchestration, etc.).
- ServiceNow certification a plus.
- ITIL process familiarity; certification a plus.
- Base understanding of cloud.
- 4+ years of experience with Agile Scrum/Kanban methodology.

Why choose us?
Bring the miracles of science to life alongside a supportive, future-focused team. Discover endless opportunities to grow your talent and drive your career, whether it's through a promotion or lateral move, at home or internationally. Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact. Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs and at least 14 weeks' gender-neutral parental leave. You will have the opportunity to work in an international environment, collaborating with diverse business teams and vendors, working in a dynamic team, and fully empowered to propose and implement innovative ideas.

Pursue Progress. Discover Extraordinary.
Progress doesn't happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. You can be one of those people. Chasing change, embracing new ideas and exploring all the opportunities we have to offer. Let's pursue progress. And let's discover extraordinary together. At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!
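Several of the duties above (loading and maintaining data between ServiceNow and other systems, REST API integrations) typically go through the ServiceNow Table API. A minimal, hedged sketch is below; the instance URL, credential handling, and query string are illustrative assumptions, and a production integration would normally use OAuth or a dedicated service account.

```python
# Minimal sketch: pull recent open incidents via the ServiceNow Table API.
# Instance name, credential source, and the query string are placeholders.
import os

import requests

INSTANCE = "https://example-instance.service-now.com"  # placeholder instance


def open_incidents(limit: int = 10) -> list:
    response = requests.get(
        f"{INSTANCE}/api/now/table/incident",
        params={"sysparm_query": "active=true^ORDERBYDESCsys_created_on",
                "sysparm_limit": limit},
        auth=(os.environ["SN_USER"], os.environ["SN_PASSWORD"]),
        headers={"Accept": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["result"]


if __name__ == "__main__":
    for inc in open_incidents():
        print(inc["number"], inc.get("short_description", ""))
```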
Posted 2 days ago
0 years
3 - 18 Lacs
Hyderābād
On-site
Interested candidates can share their resume at hr@globalitfamily.com

Position: Backend Engineer
Location: Bangalore / Hyderabad / Chennai (Onsite)
Experience: 4 - 6 yrs
Notice Period: Immediate (Max 15 days)

Requirements
- Backend Development: proficiency in server-side languages such as Java and Kotlin.
- Database Management: experience with relational databases (e.g., SQL Server, PostgreSQL) and NoSQL databases (e.g., MongoDB).
- API Development: skilled in designing, developing, and consuming RESTful APIs and microservices.
- Authentication & Security: understanding of security best practices and authentication mechanisms, including OAuth and JWT.
- DevOps: experience with CI/CD pipelines, containerization (e.g., Docker), and orchestration tools (e.g., Kubernetes).
- Performance Optimization: ability to optimize application performance and troubleshoot performance issues.
- Version Control: proficiency with version control systems, particularly Git.
- Problem-Solving: strong analytical and problem-solving skills.
- Collaboration: ability to work effectively with front-end developers, designers, and other team members.
- Education: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience.
- Communication: good communication skills for discussing project requirements, updates, and technical issues.
- Cloud Experience: hands-on experience with cloud platforms, preferably Microsoft Azure, is desired, including familiarity with Azure services such as Azure Cosmos DB.

Summary: Looking for a strong Java/backend developer. Kotlin and Python knowledge is highly desired.

Job Type: Full-time
Pay: ₹388,989.14 - ₹1,816,394.11 per year
Location Type: In-person
Work Location: In person
Posted 2 days ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Locations: Noida/ Gurgaon/ Indore/ Bangalore/ Pune/ Hyderabad. This is the same DevOps Architect (Docker/Kubernetes) posting described in full earlier in this list; see that listing for the job description, responsibilities, and required skills.
Posted 2 days ago
7.0 years
6 - 9 Lacs
Thiruvananthapuram
On-site
7 - 9 Years | 2 Openings | Trivandrum

Role description
Senior Data Engineer – Azure/Snowflake Migration

Key Responsibilities
- Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services.
- Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets.
- Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including:
  - Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches.
  - Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines.
  - Migrating Redshift workloads to Snowflake with schema conversion and performance optimization.
  - Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage.
  - Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe (see the brief sketch after this listing).
- Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale.
- Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing.
- Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation.
- Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies.
- Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching.
- Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning.
- Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.

Required Qualifications
- 7+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise.
- Proficiency in:
  - Python for scripting and ETL orchestration
  - SQL for complex data transformation and performance tuning in Snowflake
  - Azure Data Factory and Synapse Analytics (SQL Pools)
- Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK.
- Strong understanding of cloud architecture and hybrid data environments across AWS and Azure.
- Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS.
- Familiarity with Azure Event Hubs, Logic Apps, and Key Vault.
- Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.

Preferred Qualifications
- Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing.
- Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake.
- Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments.
- Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.

Skills: AWS, Azure Data Lake, Python

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation.
Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
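To make the ELT step above concrete, here is a minimal sketch of landing files from Azure Blob Storage into a Snowflake table with an external stage and COPY INTO, run through the Snowflake Python connector. Every identifier (account, warehouse, MIGRATION_DB, the container URL, the SAS token) is a hypothetical placeholder rather than something specified in the posting; Snowpipe would automate the same COPY on file arrival.

```python
# Minimal sketch: batch ingestion from Azure Blob / ADLS Gen2 into Snowflake.
# All names below are hypothetical placeholders, not values from the posting.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="LOAD_WH",
    database="MIGRATION_DB",
    schema="RAW",
)

statements = [
    # External stage pointing at a Blob/ADLS container of Parquet files.
    """
    CREATE STAGE IF NOT EXISTS raw_orders_stage
      URL = 'azure://myaccount.blob.core.windows.net/landing/orders/'
      CREDENTIALS = (AZURE_SAS_TOKEN = '***')
      FILE_FORMAT = (TYPE = PARQUET)
    """,
    # Landing table for the migrated data.
    """
    CREATE TABLE IF NOT EXISTS orders_raw (
      order_id NUMBER, customer_id NUMBER, amount NUMBER(12,2), order_ts TIMESTAMP_NTZ
    )
    """,
    # Bulk load; a Snowpipe on the same stage would run this COPY automatically.
    """
    COPY INTO orders_raw
    FROM @raw_orders_stage
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """,
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()
```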
Posted 2 days ago
6.0 years
3 - 6 Lacs
Cochin
On-site
Minimum Required Experience: 6 years | Full Time
Skills: SQL, Microservices, Java, Kubernetes, Linux, Spring Boot, Docker

Job Description – SSE (Java)
Experience Range & Quantity: 6 - 10 YOE
Location Requirement: Bangalore – Whitefield / Kochi (Hybrid)
Fulfil by date: ASAP

Responsibilities
Provide technology leadership in:
Working in an agile development environment
Translating business requirements into low-level application design
Application code development through a collaborative approach
Performing full-scale unit testing
Applying test-driven and behavior-driven development (TDD/BDD) QA concepts (a test-first sketch follows this posting)
Applying continuous integration and continuous deployment (CI/CD) concepts

Mandatory Soft Skills
Able to contribute as an individual contributor
Able to execute his/her responsibilities independently
Focus on self-planning activities

Mandatory Skills
Practical knowledge of the following tools & technologies:
Java, Spring Boot, microservices
Git
Container orchestration (Kubernetes, Docker)
Basic knowledge of Linux & SQL

Nice-to-have Skills
BDD

Mandatory Experience
Design, implementation, and optimization of the following: Golang stack-based, microservices-oriented application development, and deployment of the same using container orchestration in a cloud environment
Understanding of CI/CD pipelines & the related system development environment
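The TDD/BDD responsibility above, illustrated with a minimal test-first sketch. It is written in Python with pytest purely for brevity; the role itself is Java/Spring Boot, where the same loop maps to JUnit. The apply_discount function and its rules are hypothetical.

```python
# Minimal TDD/BDD-flavoured sketch: tests state the expected behaviour first,
# and the function exists to make them pass. Everything here is hypothetical.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by `percent`; reject percentages outside 0-100."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_applies_ten_percent_discount():
    # Given a price of 200 and a 10% discount, when applied, then the result is 180
    assert apply_discount(200.0, 10) == 180.0

def test_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```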
Posted 2 days ago
9.0 years
5 - 10 Lacs
Thiruvananthapuram
On-site
9 - 12 Years | 1 Opening | Trivandrum

Role description: Tech Lead – Azure/Snowflake & AWS Migration

Key Responsibilities
Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services.
Build robust, efficient SQL- and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets.
Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including:
- Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches.
- Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines.
- Migrating Redshift workloads to Snowflake with schema conversion and performance optimization.
- Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage.
- Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe (a Streams & Tasks sketch follows this posting).
Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale.
Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing.
Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation.
Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies.
Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching.
Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning.
Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.

Required Qualifications
9+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise.
Proficiency in:
- Python for scripting and ETL orchestration
- SQL for complex data transformation and performance tuning in Snowflake
- Azure Data Factory and Synapse Analytics (SQL Pools)
Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK.
Strong understanding of cloud architecture and hybrid data environments across AWS and Azure.
Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS.
Familiarity with Azure Event Hubs, Logic Apps, and Key Vault.
Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.

Preferred Qualifications
Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing.
Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake.
Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments.
Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.

Skills: Azure, AWS Redshift, Athena, Azure Data Lake

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation.
Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
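A minimal sketch of the Streams & Tasks pattern this posting calls out: a stream captures newly landed rows and a scheduled task merges the delta into a curated table, issued through the Snowflake Python connector. All object names (MIGRATION_DB, raw.orders_raw, curated.orders, TRANSFORM_WH) and the schedule are hypothetical placeholders.

```python
# Minimal sketch: incremental processing with a Snowflake stream plus a task.
# Names and schedule are hypothetical, not taken from the posting.
import snowflake.connector

statements = [
    # Change-capture stream over the raw landing table.
    "CREATE STREAM IF NOT EXISTS raw.orders_stream ON TABLE raw.orders_raw",
    # Task that runs every 5 minutes and applies the delta to the curated table.
    """
    CREATE TASK IF NOT EXISTS curated.merge_orders
      WAREHOUSE = TRANSFORM_WH
      SCHEDULE = '5 MINUTE'
    AS
      MERGE INTO curated.orders AS t
      USING raw.orders_stream AS s
        ON t.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET t.amount = s.amount, t.order_ts = s.order_ts
      WHEN NOT MATCHED THEN INSERT (order_id, customer_id, amount, order_ts)
        VALUES (s.order_id, s.customer_id, s.amount, s.order_ts)
    """,
    # Tasks are created suspended; resume to start the schedule.
    "ALTER TASK curated.merge_orders RESUME",
]

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="TRANSFORM_WH", database="MIGRATION_DB",
)
try:
    cur = conn.cursor()
    for stmt in statements:
        cur.execute(stmt)
    cur.close()
finally:
    conn.close()
```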
Posted 2 days ago
5.0 - 7.0 years
0 Lacs
Thiruvananthapuram
Remote
5 - 7 Years | 1 Opening | Trivandrum

Role description
Role Proficiency: Resolve enterprise trouble tickets within the agreed SLA, raise problem tickets for permanent resolution, and/or provide mentorship (hierarchical or lateral) to junior associates.

Outcomes:
1) Update SOPs with revised troubleshooting instructions and process changes
2) Mentor new team members in understanding customer infrastructure and processes
3) Perform analysis to drive incident reduction
4) Escalate high-priority incidents to customer and organization stakeholders for quicker resolution
5) Contribute to planning and successful migration of platforms
6) Resolve enterprise trouble tickets within the agreed SLA and raise problem tickets for permanent resolution
7) Provide inputs for root cause analysis after major incidents to define preventive and corrective actions

Measures of Outcomes:
1) SLA adherence
2) Time-bound resolution of elevated tickets – OLA
3) Management of ticket backlog timelines – OLA
4) Adherence to defined process – number of NCs in internal/external audits
5) Number of KB articles created
6) Number of incident and change tickets handled
7) Number of elevated tickets resolved
8) Number of successful change tickets
9) % completion of all mandatory training requirements

Outputs Expected:
Resolution: Understand Priority and Severity based on ITIL practice; resolve trouble tickets within the agreed resolution SLA; execute change control tickets as documented in the implementation plan.
Troubleshooting: Troubleshoot based on available information from previous tickets or by consulting with seniors; participate in online knowledge forums for reference; convert new steps into KB articles; perform logical/analytical troubleshooting.
Escalation/Elevation: Escalate within the organization/to customer peers in case of resolution delay; understand the OLA between delivery layers (L1, L2, L3, etc.) and adhere to it; elevate to the next level and work on elevated tickets from L1.
Tickets Backlog/Resolution: Follow up on tickets based on agreed timelines; manage ticket backlog/last activity as per the defined process; resolve incidents and SRs within agreed timelines; execute change tickets for infrastructure.
Installation: Install and configure tools, software, and patches.
Runbook/KB: Update the KB with new findings; document and record troubleshooting steps in the knowledge base.
Collaboration: Collaborate with different delivery towers for ticket resolution within SLA; resolve L1 tickets with help from the respective tower; collaborate with other team members for timely resolution of tickets; actively participate in team/organization-wide initiatives; coordinate with UST ISMS teams to resolve connectivity-related issues.
Stakeholder Management: Lead customer and vendor calls; organize meetings with different stakeholders; take ownership of the function's internal communications and related change management.
Strategic: Define the strategy for data management, policy management, and data retention management; support definition of the IT strategy for the function's relevant scope and be accountable for ensuring the strategy is tracked, benchmarked, and updated for the area owned.
Process Adherence: Maintain a thorough understanding of organization- and customer-defined processes; suggest process improvements and CSI ideas; adhere to the organization's policies and business conduct.
Process/Efficiency Improvement: Proactively identify opportunities to increase service levels and mitigate issues in service delivery within the function or across functions.
Take accountability for overall productivity efforts within the function, including coordination of function-specific tasks and close collaboration with Finance.
Process Implementation: Coordinate and monitor IT process implementation within the function.
Compliance: Support information governance activities and audit preparations within the function; act as the function SPOC for IT audits at local sites (including preparation, interface to the local organization, mitigation of findings, etc.) and work closely with ISRM (Information Security Risk Management); coordinate overall objective-setting preparation and facilitate the process to achieve consistent objective setting across the function; provide coordination support for CSI across all services in CIS and beyond.
Training: Complete all mandatory organization and customer training requirements on time; provide on-floor training and one-to-one mentorship for new joiners; complete certification for the respective career path.
Performance Management: Update FAST goals in NorthStar, track and report progress, and seek continuous feedback from peers and manager; set goals for team members and mentees and provide feedback; assist new team members in understanding the customer environment.

Skill Examples:
1) Good communication skills (written, verbal, and email etiquette) to interact with different teams and customers.
2) Modify/create runbooks based on suggested changes from juniors or newly identified steps.
3) Ability to work on and resolve an elevated server ticket.
4) Networking:
a. Troubleshooting skills in static and dynamic routing protocols
b. Capable of running NetFlow analyzers across different product lines
5) Server:
a. Skills in installing and configuring Active Directory, DNS, DHCP, DFS, IIS, and patch management
b. Excellent troubleshooting skills in technologies such as AD replication and DNS issues
c. Skills in managing high-availability solutions such as failover clustering and VMware clustering
6) Storage and Backup:
a. Ability to give recommendations to customers; perform storage and backup enhancements; perform change management
b. Skilled in core fabric technology, storage design and implementation; hands-on experience with backup and storage command-line interfaces
c. Perform hardware upgrades, firmware upgrades, vulnerability remediation, storage and backup commissioning and decommissioning, and replication setup and management
d. Skilled in server, network, and virtualization technologies; integration of virtualization, storage, and backup technologies
e. Review technical and architecture diagrams and modify SOPs and documentation based on business requirements
f. Ability to perform ITSM functions for the storage and backup team and review the quality of the ITSM process followed by the team
7) Cloud:
a. Skilled in any one of the cloud technologies – AWS, Azure, GCP
8) Tools:
a. Skilled in administration and configuration of monitoring tools such as CA UIM, SCOM, SolarWinds, Nagios, ServiceNow, etc.
b. Skilled in SQL scripting
c. Skilled in building custom reports on the availability and performance of IT infrastructure based on customer requirements
9) Monitoring:
a. Skills in monitoring infrastructure and application components
10) Database:
a. Data modeling and database design; database schema creation and management
b. Identify data integrity violations so that only accurate and appropriate data is entered and maintained
c. Backup and recovery
d. Web-specific technology expertise for e-Biz, Cloud, etc.
Examples of this type of technology include XML, CGI, Java, Ruby, firewalls, SSL, and so on.
e. Migrating database instances to new hardware and new versions of software, from on-premise to cloud-based databases and vice versa
11) Quality Analysis:
a. Ability to drive service excellence and continuous improvement within the framework defined by IT Operations

Knowledge Examples:
1) Good understanding of customer infrastructure and related CIs.
2) ITIL Foundation certification
3) Thorough hardware knowledge
4) Basic understanding of capacity planning
5) Basic understanding of storage and backup
6) Networking:
a. Hands-on experience with routers, switches, and firewalls
b. Minimum knowledge of and hands-on experience with BGP
c. Good understanding of load balancers and WAN optimizers
d. Advanced backup-and-restore knowledge of backup tools
7) Server:
a. Basic to intermediate PowerShell/Bash/Python scripting knowledge and demonstrated experience in script-based tasks
b. Knowledge of AD group policy management, group policy tools, and troubleshooting GPOs
c. Basic AD object creation, DNS concepts, DHCP, DFS
d. Knowledge of tools such as SCCM and SCOM administration
8) Storage and Backup:
a. Subject matter expert in any of the storage & backup technologies
9) Tools:
a. Proficient in understanding and troubleshooting the Windows and Linux families of operating systems
10) Monitoring:
a. Strong knowledge of ITIL processes and functions
11) Database:
a. Knowledge of general database management
b. Knowledge of OS, system, and networking skills

Additional Comments:
Role – Cloud Engineer

Primary Responsibilities
• Engineer and support a portfolio of tools including:
- HashiCorp Vault (HCP Dedicated), Terraform (HCP), Cloud Platform
- GitHub Enterprise Cloud (Actions, Advanced Security, Copilot)
- Ansible Automation Platform, Env0, Docker Desktop
- Elastic Cloud, Cloudflare, Datadog, PagerDuty, SendGrid, Teleport
• Manage infrastructure using Terraform, Ansible, and scripting languages such as Python and PowerShell
• Enable security controls including dynamic secrets management, secrets scanning workflows, and cloud access quotas (a minimal Vault-client sketch follows this posting)
• Design and implement automation for self-service adoption, access provisioning, and compliance monitoring
• Respond to user support requests via ServiceNow and continuously improve platform support documentation and onboarding workflows
• Participate in Agile sprints, sprint planning, and cross-team technical initiatives
• Contribute to evaluation and onboarding of new tools (e.g., remote developer access, artifact storage)

Key Projects You May Lead or Support
• GitHub secrets scanning and remediation with integration to HashiCorp Vault
• Lifecycle management of developer access across tools like GitHub and Teleport
• Upgrades to container orchestration environments and automation platforms (EKS, AKS)

Technical Skills and Experience
• Proficiency with Terraform (IaC) and Ansible
• Strong scripting experience in Python, PowerShell, or Bash
• Experience operating in cloud environments (AWS, Azure, or GCP)
• Familiarity with secure development practices and DevSecOps tooling
• Exposure to or experience with:
- CI/CD automation (GitHub Actions)
- Monitoring and incident management platforms (Datadog, PagerDuty)
- Identity providers (Azure AD, Okta)
- Containers and orchestration (Docker, Kubernetes)
- Secrets management and vaulting platforms

Soft Skills and Attributes
• Strong cross-functional communication skills with technical and non-technical stakeholders
• Ability to work independently while knowing when to escalate
or align with other engineers or teams
• Comfort managing complexity and ambiguity in a fast-paced environment
• Ability to balance short-term support needs with longer-term infrastructure automation and optimization
• Proactive, service-oriented mindset focused on enabling secure and scalable development
• Detail-oriented, structured approach to problem-solving with an emphasis on reliability and repeatability

Skills: Terraform, Ansible, Python, PowerShell or Bash, AWS, Azure or GCP, CI/CD automation

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
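A minimal sketch of the secrets-management integration this posting mentions, using the hvac Python client to read a KV v2 secret from HashiCorp Vault. The Vault address, token handling, and secret path are hypothetical placeholders.

```python
# Minimal sketch: fetch an application secret from HashiCorp Vault (KV v2)
# with the hvac client. Address, auth, and path are hypothetical.
import os
import hvac

client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "https://vault.example.com:8200"),
    token=os.environ["VAULT_TOKEN"],  # in practice, prefer short-lived auth (e.g. OIDC/AppRole)
)

if not client.is_authenticated():
    raise SystemExit("Vault authentication failed")

# KV v2 reads return {'data': {'data': {...}, 'metadata': {...}}, ...}
secret = client.secrets.kv.v2.read_secret_version(
    path="ci/github-actions/deploy",  # hypothetical secret path
    mount_point="secret",
)
db_password = secret["data"]["data"]["db_password"]
print("fetched secret version:", secret["data"]["metadata"]["version"])
```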
Posted 2 days ago
0 years
4 - 7 Lacs
Gurgaon
On-site
Job Purpose
The UI Automation Engineer will be responsible for front-office application testing, leveraging tools such as Playwright, Node.js, and related frameworks. This role involves close collaboration with the QA team to automate test cases transitioned from manual testing. The engineer will focus on developing and executing test scripts, with a particular emphasis on Fixed Income trading workflows.

Desired Skills and Experience
Strong hands-on experience with Playwright or similar modern web automation tools, with a proven ability to design and implement robust UI test automation for complex web applications (a minimal Playwright sketch follows this posting).
Proficiency in Node.js, with working knowledge of Cucumber for behavior-driven development and Jenkins for continuous integration and test execution.
Experience in building and maintaining UI automation frameworks, including reusable components, test data management, and reporting mechanisms.
Familiarity with test case management tools such as JIRA and Xray, including test planning, execution tracking, and defect lifecycle management.
Clear and effective communication skills, both written and verbal, with the ability to collaborate across teams and articulate technical concepts to non-technical stakeholders.
Self-driven and proactive, capable of working independently with minimal supervision while aligning with broader team objectives and timelines.

Nice to have:
Exposure to the Eggplant automation tool, with an understanding of its scripting and testing capabilities.
Experience working in Agile, sprint-based delivery teams, with a strong grasp of iterative development, sprint planning, and backlog grooming.
Understanding of test orchestration and regression planning, including test suite optimization, scheduling, and integration into CI/CD pipelines for scalable test execution.

Key Responsibilities
Automate UI test cases based on requirements defined by the manual QA team.
Integrate with test case management and reporting tools.
Contribute to improving the automation framework as per architectural guidance.
Deliver consistent scripts in alignment with sprint goals.
Establish and implement comprehensive QA strategies and test plans from scratch.
Develop and execute test cases with a focus on Fixed Income trading workflows.
Collaborate with development, business analysts, and project managers to ensure quality throughout the SDLC.
Provide clear and concise reporting on QA progress and metrics to management.
Bring strong subject matter expertise in the Financial Services industry, particularly fixed income trading products and workflows.
Ensure effective, efficient, and continuous communication (written and verbal) with global stakeholders.
Independently troubleshoot difficult and complex issues in different environments.
Take responsibility for end-to-end delivery of projects, coordination between the client and internal offshore teams, and managing client queries.
Demonstrate high attention to detail, work in a dynamic environment while maintaining high quality standards, show a natural aptitude for developing good internal working relationships, and maintain a flexible work ethic.
Take responsibility for quality checks and adhere to the agreed Service Level Agreement (SLA) / Turnaround Time (TAT).
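A minimal sketch of the kind of UI check described above, shown with Playwright's Python binding for brevity; the role itself targets the Node.js API, but the concepts (locators, auto-waiting assertions) carry over directly. The URL, selectors, and test data are hypothetical.

```python
# Minimal sketch: an automated UI check with Playwright (Python binding).
# The application URL and selectors are hypothetical placeholders.
from playwright.sync_api import sync_playwright, expect

def test_order_ticket_search():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.com/fixed-income/blotter")  # hypothetical app URL
        page.fill("#isin-search", "US912828U816")               # hypothetical selector/value
        page.click("button#search")
        # expect() retries until the matching row appears or the timeout is hit.
        expect(page.locator("table#results tr")).to_have_count(1)
        browser.close()
```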
Posted 2 days ago
4.0 years
0 Lacs
Gurgaon
On-site
Global Position Description
Title: Test Automation (Test Suite) Engineer – UiPath
Hierarchical Level: Professional
Division/Department: IT
Reports to: Supervisor, Automation Delivery
Number of Direct Reports: 0
Travel: 0%
Revision Date: 7/4/2025
Job Level: HR to determine
FLSA (US Only): HR to determine
Type of Position: Salary

Summary:
Ready to join us in transforming testing through intelligent automation? Apply now or reach out for a detailed discussion about this opportunity! We’re seeking a hands-on Test Engineer/Developer to drive automation efforts using the UiPath Test Suite. You’ll focus on designing and executing automated tests for web-based applications and Microsoft Dynamics 365 modules. Your work will integrate test automation into CI/CD pipelines, ensuring high-quality releases and rapid feedback loops.

Responsibilities:
Test Automation Environment Setup: Define and configure test automation environments according to project requirements. Install and configure UiPath Test Suite, Studio, and Orchestrator for test automation purposes. Integrate external systems such as SAP and manage data access for test execution. Integrate Azure DevOps (ADO) with Test Manager.
Test Automation Design & Development: Analyze and comprehend detailed test cases for various functionalities. Design and develop reusable, structured automated test scripts using UiPath Test Suite. Integrate assertions and verification points to validate application behaviors. Implement robust exception-handling mechanisms within test scripts. Conduct unit testing and dry runs to ensure script reliability before execution.
Test Execution & Reporting: Execute automated test scripts and generate detailed reports summarizing results. Develop and maintain automated scripts for change requests, optimizing regression testing. Identify areas for enhancing test execution performance and provide recommendations.
Collaboration & Maintenance: Collaborate with development teams to address test case feedback, technical issues, and improvements. Maintain and update test scripts to accommodate application changes and new functionalities. Identify and implement technical solutions to enhance test automation efficiency and accuracy.
Version Control & Integration: Maintain version control for all test scripts using Azure Repos. Integrate test automation scripts with CI/CD pipelines and implement API automation where necessary (a minimal API-test sketch follows this posting).
Preferred Qualifications:
UiPath certifications (e.g., UiPath Test Automation Certified Professional)
Experience with performance or load testing tools
Knowledge of RPA orchestration and advanced automation patterns
Exposure to Agile/Scrum methodologies and DevOps practices
4+ years' experience in testing, with proven hands-on experience with UiPath Test Suite in enterprise environments
Solid background in web-based test development (HTML, JavaScript, REST APIs)
Experience testing Microsoft Dynamics or Dynamics 365 applications
Familiarity with Azure DevOps (ADO) and connecting it to UiPath Test Suite
Practical use of UiPath Autopilot for intelligent test automation
Strong scripting, debugging, and problem-solving skills
Excellent communication and documentation abilities

Work Experience Requirements
Number of Overall Years Necessary: 2-5
Minimum of 4 years of UiPath Test Automation experience, with a bachelor's degree

Certification and Training:
Microsoft Certified: Dynamics 365 Fundamentals or Functional Consultant Associate
ISTQB (International Software Testing Qualifications Board) certification
Azure DevOps Engineer or related Azure certifications
Advanced training or badges from UiPath Academy

Specialized Skills/Technical Knowledge/Soft Skills & Team Attributes
Experience with custom connector development or workflow automation in Microsoft Power Platform
Knowledge of testing frameworks such as Selenium, Cypress, or Postman (for API testing)
Familiarity with source control systems such as Git or GitHub, especially when used alongside UiPath
Understanding of test data management and virtualization strategies
Background in setting up or maintaining test environments and virtual machines
Strong stakeholder communication and ability to collaborate with cross-functional teams
Analytical mindset with a knack for troubleshooting edge-case issues
Agile thinking and willingness to iterate in fast-paced sprints
Exposure to product lifecycle management, especially in enterprise SaaS or ERP environments

Local Specifications (English and Local Language):
** To comply with the Americans with Disabilities Act (ADA), the principal duties in job descriptions must be essential to the job. To identify essential functions, focus on the purpose and the result of the duties rather than the manner in which they are performed. The following definition applies: a job function is essential if removal of that function would fundamentally change the job.

Location: Gurugram
Mode: Hybrid
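A minimal sketch of the "API automation where necessary" item above, written with pytest and requests rather than UiPath (UiPath test cases themselves are authored in Studio, not Python). The endpoint, payload, and expected responses are hypothetical.

```python
# Minimal sketch: API-level checks that could sit alongside UI automation in a
# CI/CD pipeline. The service URL, payloads, and status codes are hypothetical.
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_create_sales_order_returns_id():
    payload = {"customer": "CUST-001", "lines": [{"sku": "A-100", "qty": 2}]}
    resp = requests.post(f"{BASE_URL}/v1/sales-orders", json=payload, timeout=10)
    assert resp.status_code == 201
    body = resp.json()
    assert body.get("orderId")

def test_rejects_order_without_lines():
    resp = requests.post(f"{BASE_URL}/v1/sales-orders", json={"customer": "CUST-001"}, timeout=10)
    assert resp.status_code == 400
```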
Posted 2 days ago
2.0 years
4 - 10 Lacs
Gurgaon
On-site
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us?
To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us.

Why Join Us?
Are you a technologist who is passionate about building robust, scalable, and performant applications and data products? This is exactly what we do: join the Data Engineering & Tooling Team! The Data Engineering & Tooling Team (part of Enterprise Data Products at Expedia) is responsible for making traveler, partner, and supply data accessible, unlocking insights and value! Our mission is to build and manage the travel industry's premier Data Products and SDKs.

Software Development Engineer II
Introduction to team
Our team is looking for a Software Engineer who applies engineering principles to build and improve existing systems. We follow Agile principles, and we're proud to offer a dynamic, diverse and collaborative environment where you can play an impactful role and build your career. Would you like to be part of a global tech company that does travel? Don't wait, apply now!

In this role, you will:
Implement products and solutions that are highly scalable, with high-quality, clean, maintainable, optimized, modular and well-documented code across the technology stack.
Craft APIs, and develop and test applications and services to ensure they meet design requirements.
Work collaboratively with all members of the technical staff and other partners to build and ship outstanding software in a fast-paced environment.
Apply knowledge of software design principles and Agile methodologies and tools.
Resolve problems and roadblocks as they occur with help from peers or managers; follow through on details and drive issues to closure.
Assist with supporting production systems (investigate issues and work towards resolution).

Experience and qualifications:
Bachelor's or Master's degree in Computer Science & Engineering or a related technical field, or equivalent related professional experience.
2+ years of software development or data engineering experience in an enterprise-level engineering environment.
Proficient with object-oriented programming concepts, with a strong understanding of data structures, algorithms, data engineering (at scale), and computer science fundamentals.
Experience with Java, Scala, the Spring framework, microservice architecture, and orchestration of containerized applications, along with a good grasp of OO design and strong design-patterns knowledge.
Solid understanding of different API types (e.g. REST, GraphQL, gRPC), access patterns, and integration.
Prior knowledge and experience of NoSQL databases (e.g. Elasticsearch, ScyllaDB, MongoDB).
Prior knowledge and experience of big data platforms, batch processing (e.g. Spark, Hive), stream processing (e.g.
Kafka, Flink)), and cloud-computing platforms such as Amazon Web Services (a minimal consumer sketch follows this posting).
Knowledge and understanding of monitoring tools, testing (performance, functional), and application debugging and tuning.
Good communication skills, written and verbal, with the ability to present information in a clear and concise manner.

Accommodation requests
If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others.

Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50

Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.
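A minimal sketch of the stream-processing piece of the stack above, consuming a Kafka topic with the kafka-python client; it is shown in Python for brevity, while the role itself centres on Java/Scala. The topic, broker, group, and field names are hypothetical.

```python
# Minimal sketch: consume events from a Kafka topic. Names are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "traveler-events",                    # hypothetical topic
    bootstrap_servers=["broker-1:9092"],  # hypothetical broker
    group_id="data-products-demo",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Downstream, events like this would feed a Spark/Flink job or a data product.
    print(message.topic, message.partition, message.offset, event.get("eventType"))
```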
Posted 2 days ago
0 years
0 Lacs
India
Remote
Company Description
At Trigonal AI, we specialize in building and managing end-to-end data ecosystems that empower businesses to make data-driven decisions with confidence. From data ingestion to advanced analytics, we offer the expertise and technology to transform data into actionable insights. Our core services include data pipeline orchestration, real-time analytics, and business intelligence & visualization. We use modern technologies such as Apache Airflow, Kubernetes, Apache Druid, Kafka, and leading BI tools to create reliable and scalable solutions (a minimal Airflow sketch follows this posting). Let us help you unlock the full potential of your data.

Role Description
This is a full-time remote role for a Business Development Specialist. The specialist will focus on day-to-day tasks including lead generation, market research, customer service, and communication with potential clients. The role also includes analytical tasks and collaborating with the sales and marketing teams to develop and implement growth strategies.

Qualifications
Strong analytical skills for data-driven decision-making
Effective communication skills for engaging with clients and team members
Experience in lead generation and market research
Proficiency in customer service to maintain client relationships
Proactive and independent work style
Experience in the tech or data industry is a plus
Bachelor's degree in Business, Marketing, or a related field
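A minimal sketch of the data pipeline orchestration the company description refers to: a tiny Apache Airflow DAG (Airflow 2.4+ style) with an ingest-then-transform dependency. The DAG id, task names, and schedule are hypothetical illustrations only.

```python
# Minimal sketch: an Airflow DAG with two dependent tasks. Names are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull raw events from the source API")

def transform():
    print("clean and aggregate the ingested events")

with DAG(
    dag_id="example_ingest_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ parameter; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task   # transform runs only after ingest succeeds
```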
Posted 2 days ago
9.0 - 13.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Consulting – AI Enabled Automation – GenAI/Agentic – Manager
We are looking to hire people with strong AI-enabled automation skills who are interested in applying AI in the process automation space – Azure, AI, ML, Deep Learning, NLP, GenAI, Large Language Models (LLMs), RAG, Vector DB, Graph DB, Python.

Responsibilities:
Development and implementation of AI-enabled automation solutions, ensuring alignment with business objectives.
Design and deploy Proofs of Concept (POCs) and Points of View (POVs) across various industry verticals, demonstrating the potential of AI-enabled automation applications.
Ensure seamless integration of optimized solutions into the overall product or system.
Collaborate with cross-functional teams to understand requirements, integrate solutions into cloud environments (Azure, GCP, AWS, etc.), and ensure alignment with business goals and user needs.
Educate the team on best practices and stay current with the latest tech advancements to bring innovative solutions to the project.

Technical Skills Requirements
9 to 13 years of relevant professional experience.
Proficiency in Python and frameworks like PyTorch, TensorFlow, Hugging Face Transformers.
Strong foundation in ML algorithms, feature engineering, and model evaluation. (Must)
Strong foundation in Deep Learning, Neural Networks, RNNs, CNNs, LSTMs, Transformers (BERT, GPT), and NLP. (Must)
Experience in GenAI technologies — LLMs (GPT, Claude, LLaMA), prompting, fine-tuning.
Experience with LangChain, LlamaIndex, LangGraph, AutoGen, or CrewAI (agentic frameworks).
Knowledge of retrieval-augmented generation (RAG) (a minimal retrieval sketch follows this posting).
Knowledge of Knowledge Graph RAG.
Experience with multi-agent orchestration, memory, and tool integrations.
Experience implementing MLOps practices and tools (CI/CD for ML, containerization, orchestration, model versioning and reproducibility). (Good to have)
Experience with cloud platforms (AWS, Azure, GCP) for scalable ML model deployment.
Good understanding of data pipelines, APIs, and distributed systems.
Build observability into AI systems — latency, drift, performance metrics.
Strong written and verbal communication, presentation, client service and technical writing skills in English for both technical and business audiences.
Strong analytical, problem solving and critical thinking skills.
Ability to work under tight timelines for multiple project deliveries.

What we offer:
At EY GDS, we support you in achieving your unique potential both personally and professionally. We give you stretching and rewarding experiences that keep you motivated, working in an atmosphere of integrity and teaming with some of the world's most successful companies. And while we encourage you to take personal responsibility for your career, we support you in your professional development in every way we can.
You enjoy the flexibility to devote time to what matters to you, in your business and personal lives. At EY you can be who you are and express your point of view, energy and enthusiasm, wherever you are in the world. It’s how you make a difference.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
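A minimal sketch of the retrieval step in retrieval-augmented generation (RAG) listed above: embed a question, rank a small in-memory document store by cosine similarity, and build a grounded prompt. The toy embedding and the documents are stand-ins; a real system would call an embedding model and send the resulting prompt to an LLM.

```python
# Minimal RAG retrieval sketch with a toy embedding. Everything here is a
# placeholder illustration, not a production pipeline.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy bag-of-letters embedding; a real system would use an embedding model."""
    vec = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "Invoices are approved by the finance controller within two business days.",
    "Claims denied for missing authorization can be resubmitted within 30 days.",
    "Employees accrue 1.5 leave days per month of service.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(question: str, k: int = 2) -> list[str]:
    scores = doc_vectors @ embed(question)  # cosine similarity (vectors are unit-normed)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

question = "How long do I have to resubmit a denied claim?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be passed to an LLM
```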
Posted 2 days ago