3.0 years
0 Lacs
Hyderābād
On-site
JOB DESCRIPTION Join us to lead technology support in a dynamic environment, enhancing your career with growth opportunities.
Job Summary As a Technology Support Lead at JPMorgan Chase within the Consumer & Community Banking division, you will play a pivotal leadership role in maintaining the operational stability, availability, and performance of our production services. Your responsibilities will include identifying, troubleshooting, and resolving issues to guarantee a seamless user experience.
Job Responsibilities Provide end-to-end application and infrastructure service delivery for successful business operations. Execute policies and procedures ensuring engineering and operational stability. Monitor production environments for anomalies and address issues using standard observability tools. Escalate and communicate issues and solutions to business and technology stakeholders. Lead incident, problem, and change management in support of full-stack technology systems.
Required Qualifications, Capabilities, and Skills Formal training or certification on software engineering concepts and 3+ years of applied experience. Proficiency with the AWS Cloud Platform, including system design, application development, testing, and operational stability. Hands-on experience with infrastructure-as-code tools such as Terraform and Helm charts. Experience in designing, deploying, and managing Kubernetes clusters across various environments. At least 4 years of experience with Kubernetes, Terraform, Python, and shell scripting. Experience with Continuous Integration and Delivery tools like Jenkins.
Preferred Qualifications, Capabilities, and Skills Ability to lead by example and guide the team with technical expertise. Ability to identify risks/issues for the project and manage them accordingly. Experience with PostgreSQL, AWS RDS, Aurora, or Teradata preferred.
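For illustration only (not part of the posting): the production-monitoring duty described above, watching Kubernetes-based environments for anomalies, is often scripted with the official Kubernetes Python client. This is a minimal sketch; it assumes a configured kubeconfig and the `kubernetes` package, and reports whatever clusters and namespaces your context points at.

```python
# Minimal sketch: flag pods that are not Running or Succeeded.
# Assumes `pip install kubernetes` and a valid kubeconfig (or in-cluster config).
from kubernetes import client, config

def report_unhealthy_pods() -> None:
    config.load_kube_config()              # use config.load_incluster_config() inside a cluster
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")

if __name__ == "__main__":
    report_unhealthy_pods()
```

In practice a check like this would feed an observability or paging tool rather than be run by hand.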
Posted 5 days ago
3.0 years
6 - 9 Lacs
Hyderābād
On-site
Role Specific Responsibilities 3-4 years of working experience with VDI and DaaS technologies. Knowledge of Microsoft AD, domain-level GPOs, and the Windows operating system. Scripting in any programming language is preferred. Troubleshooting VMware VDI issues within the SLA. Installation, configuration, and management of VMware Horizon infrastructure. Technical experience managing IaaS on any of the major cloud platforms, i.e., Azure, AWS, or GCP. Technical experience managing VMware vSphere infrastructure. Technical knowledge of Azure Virtual Desktop and its related services. Technical knowledge of AWS WorkSpaces and its related services is preferred. Strong interpersonal, written, and oral communication skills; ability to actively listen and clearly communicate with other teams globally.
Qualifications Bachelor’s degree in Computer Science, Business Information Systems, or relevant experience and accomplishments. 2-4 years of experience in the IT field. 2+ years of experience with virtual desktop services such as VMware Horizon View, Citrix XenDesktop, or AVD. 0-1 years of experience with Python, Terraform, PowerShell, PowerCLI, or other scripting languages. Experience with the Scrum Agile methodology and working on Scrum teams. 1+ years of direct, hands-on experience with automated cloud provisioning and management on any of the cloud platforms – Azure, AWS, GCP (must have). Knowledge of Microsoft AD, domain-level GPOs, and the Windows operating system. Knowledge of networking, firewalls, load balancers, etc.
Education (degree): Bachelor’s degree in Computer Science, Business Information Systems, or relevant experience and accomplishments
Years of Experience: 3-5 Years
Technical Skills VDI and DaaS administration; Python, PowerShell, and Ansible; cloud provisioning and management – Azure, AWS, GCP; Citrix, Microsoft RDS, AVD, AWS WorkSpaces, and VMware Horizon; Active Directory; JIRA or Azure Boards
Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.
Benefits to help you thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Requisition code: 307864
Posted 5 days ago
0 years
7 - 9 Lacs
Hyderābād
On-site
India Information Technology (IT) Group Functions Job Reference # 322748BR City Hyderabad, Pune Job Type Full Time Your role Are you innovative and passionate about building secure and reliable solutions? We are looking for Tech Engineers specializing in either DevSecOps, Data Engineering or Full-Stack web development to join our team in building firmwide Data Observability Components on Azure. We are open to adapting the role suited to your career aspirations and skillset. Responsibilities include: Design/document, develop, review, test, release, support Data Observability components/platforms/environments. Contribute to agile ceremonies e.g. daily stand-ups, backlog refinement, iteration planning, iteration reviews, retrospectives. Comply with the firm’s applicable policies and processes. Collaborate with other teams and divisions using Data Observability services, related guilds and other Data Mesh Services teams. Ensure delivery deadlines are met. Your team You will be part of a diverse global team consisting of data scientists, data engineers, full-stack developers, DevSecOps engineers and knowledge engineers within Group CTO working primarily in a local team with some interactions with other teams and divisions. We are providing Data Observability services as part of our firmwide Data Mesh strategy to automate and scale data management to improve time-to-market for data and reduce data downtime. We provide learning opportunities and a varied technology landscape. Technologies include Azure Cloud, AI (ML and GenAI models), web user interface (React), data storage (Postgres, Azure), REST APIs, Kafka, Great Expectations, ontology models. Your expertise Experience in the following (or similar transferrable skills): Hands-on delivery in any of the following (or related): full-stack web development (e.g. React, APIs), data transformations, Spark, python, database design and development in any database, CI/CD pipelines, security risk mitigation, infrastructure as code (e.g. Terraform), monitoring, Azure development. Agile software practices and tools, performance testing, unit and integration testing. Identifying root-causes and designing and implementing the solution. Collaborating with other teams to achieve common goals. Learning and reskilling in new technologies. About us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How we hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. 
Disclaimer / Policy statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
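For illustration only (not part of the posting): the UBS role above builds Data Observability services, and its stack names Great Expectations. Below is a minimal, hypothetical sketch of the kind of data-quality check such a service might run. It assumes the classic pandas-dataset API of Great Expectations (pre-1.0 releases); newer releases use a data-context/validator workflow instead, and the column names here are invented.

```python
# Minimal sketch of a data-quality check, assuming the classic Great Expectations pandas API.
import pandas as pd
import great_expectations as ge  # assumes a pre-1.0 release exposing ge.from_pandas

# Hypothetical batch of records to validate before publishing to downstream consumers.
df = pd.DataFrame({"trade_id": [1, 2, None], "notional": [100.0, 250.5, 75.0]})

gdf = ge.from_pandas(df)
results = [
    gdf.expect_column_values_to_not_be_null("trade_id"),
    gdf.expect_column_values_to_be_between("notional", min_value=0, max_value=1_000_000),
]
for result in results:
    # Each validation result reports success/failure and unexpected-value details,
    # which an observability service could log, alert on, or publish as metrics.
    print(result)
```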
Posted 5 days ago
0 years
0 Lacs
Hyderābād
On-site
India Information Technology (IT) Group Functions Job Reference # 322747BR City Hyderabad Job Type Full Time Your role Are you innovative and passionate about building secure and reliable solutions? We are looking for Tech Engineers specializing in either DevSecOps, Data Engineering or Full-Stack web development to join our team in building firmwide Data Observability Components on Azure. We are open to adapting the role suited to your career aspirations and skillset. Responsibilities include: Design/document, develop, review, test, release, support Data Observability components/platforms/environments. Contribute to agile ceremonies e.g. daily stand-ups, backlog refinement, iteration planning, iteration reviews, retrospectives. Comply with the firm’s applicable policies and processes. Collaborate with other teams and divisions using Data Observability services, related guilds and other Data Mesh Services teams. Ensure delivery deadlines are met. Your team You will be part of a diverse global team consisting of data scientists, data engineers, full-stack developers, DevSecOps engineers and knowledge engineers within Group CTO working primarily in a local team with some interactions with other teams and divisions. We are providing Data Observability services as part of our firmwide Data Mesh strategy to automate and scale data management to improve time-to-market for data and reduce data downtime. We provide learning opportunities and a varied technology landscape. Technologies include Azure Cloud, AI (ML and GenAI models), web user interface (React), data storage (Postgres, Azure), REST APIs, Kafka, Great Expectations, ontology models. Your expertise Experience in the following (or similar transferrable skills): Hands-on delivery in any of the following (or related): full-stack web development (e.g. React, APIs), data transformations, Spark, python, database design and development in any database, CI/CD pipelines, security risk mitigation, infrastructure as code (e.g. Terraform), monitoring, Azure development. Agile software practices and tools, performance testing, unit and integration testing. Identifying root-causes and designing and implementing the solution. Collaborating with other teams to achieve common goals. Learning and reskilling in new technologies. About us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How we hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. 
Disclaimer / Policy statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 5 days ago
0 years
0 Lacs
Hyderābād
On-site
India Information Technology (IT) Group Functions Job Reference # 322746BR City Hyderabad, Pune Job Type Full Time Your role Are you innovative and passionate about building secure and reliable solutions? We are looking for Data Engineers and DevSecOps Engineers to join our team in building the Enterprise Data Mesh at UBS. We are open to adapting the role suited to your career aspirations and skillset. Responsibilities include: Design/document, develop, review, test, release, support Data Mesh components/platforms/environments. Contribute to agile ceremonies e.g. daily stand-ups, backlog refinement, iteration planning, iteration reviews, retrospectives. Comply with the firm’s applicable policies and processes. Collaborate with other teams and divisions using Data Mesh services, related guilds and other Data Mesh Services teams. Ensure delivery deadlines are met. Your team You will be part of a diverse global team consisting of data scientists, data engineers, full-stack developers, DevSecOps engineers and knowledge engineers within Group CTO working primarily in a local team with some interactions with other teams and divisions. We are providing many services as part of our Data Mesh strategy firmwide to automate and scale data management to improve time-to-market for data and reduce data downtime. We provide learning opportunities and a varied technology landscape. Technologies include Azure Cloud, AI (ML and GenAI models), web user interface (React), data storage (Postgres, Azure), REST APIs, Kafka, Great Expectations, ontology models. Your expertise Experience in the following (or similar transferrable skills): Hands-on delivery in any of the following (or related): data transformations, Spark, python, database design and development in any database, CI/CD pipelines, security risk mitigation, infrastructure as code (e.g. Terraform), monitoring, Azure development. Agile software practices and tools, performance testing, unit and integration testing. Identifying root-causes and designing and implementing the solution. Collaborating with other teams to achieve common goals. Learning and reskilling in new technologies. About us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How we hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy statements UBS is an Equal Opportunity Employer. 
We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 5 days ago
7.0 years
6 - 9 Lacs
Thiruvananthapuram
On-site
7 - 9 Years 2 Openings Trivandrum
Role description: Senior Data Engineer – Azure/Snowflake Migration
Key Responsibilities Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services. Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets. Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including: rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches; migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines; migrating Redshift workloads to Snowflake with schema conversion and performance optimization; transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage; and redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe. Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale. Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing. Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation. Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies. Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching. Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning. Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.
Required Qualifications 7+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise. Proficiency in: Python for scripting and ETL orchestration; SQL for complex data transformation and performance tuning in Snowflake; Azure Data Factory and Synapse Analytics (SQL Pools). Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK. Strong understanding of cloud architecture and hybrid data environments across AWS and Azure. Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS. Familiarity with Azure Event Hubs, Logic Apps, and Key Vault. Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.
Preferred Qualifications Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing. Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake. Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments. Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.
Skills: AWS, Azure Data Lake, Python
About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation.
Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
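For illustration only (not part of the posting): the role above calls for Python-based transformations and Snowflake-native ELT. Here is a minimal, hypothetical sketch using the snowflake-connector-python package to run a staging-to-curated transformation; all connection parameters, schema, table, and column names are placeholders.

```python
# Minimal sketch of a Snowflake ELT step. Assumes `pip install snowflake-connector-python`
# and that the placeholder account/credentials below are replaced with real values.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="etl_user",           # placeholder
    password="...",            # placeholder; a real pipeline would pull this from a vault
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="CURATED",
)
try:
    cur = conn.cursor()
    # Cleanse records from the staging (raw) zone and load them into the curated zone.
    cur.execute("""
        INSERT INTO curated.orders (order_id, customer_name, order_date)
        SELECT order_id, TRIM(customer_name), TRY_TO_DATE(order_date)
        FROM staging.orders_raw
        WHERE order_id IS NOT NULL
    """)
finally:
    conn.close()
```

A step like this would normally be scheduled by Azure Data Factory, a Snowflake Task, or a CI/CD pipeline rather than run ad hoc.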
Posted 5 days ago
12.0 years
1 - 8 Lacs
Cochin
On-site
Job Information We are looking for a highly skilled and experienced .NET Architect to lead the design, development, and deployment of enterprise-grade applications using Microsoft technologies. The ideal candidate will have deep expertise in .NET architecture, cloud computing, microservices, and secure API development. You will collaborate with cross-functional teams to drive innovation, scalability, and performance.
Your Responsibilities Design end-to-end architecture for scalable and maintainable enterprise applications using .NET (Core/Framework). Provide technical leadership and guidance to development teams, ensuring adherence to best practices. Define architectural standards, design patterns, and governance processes. Lead solution design using Microservices, Clean Architecture, and Domain-Driven Design (DDD). Review code and architecture, ensuring quality, performance, and security compliance. Architect and deploy applications on Azure (App Services, Functions, API Gateway, Key Vault, etc.). Collaborate with product owners, business analysts, and stakeholders to convert business needs into technical solutions. Implement DevOps pipelines for continuous integration and deployment (CI/CD) using Azure DevOps or GitHub Actions. Oversee security architecture including authentication (OAuth 2.0, OpenID Connect) and data protection. Develop proof-of-concepts (PoCs) and technical prototypes to validate solution approaches.
Required Skills 12+ years of experience in software development using Microsoft technologies. 3+ years in an architectural or senior design role. Proficiency in C#, ASP.NET Core, Web API, Entity Framework, LINQ. Strong experience in Microservices architecture and distributed systems. Expertise in Azure services (App Services, Azure Functions, Blob Storage, Key Vault, etc.). Hands-on with CI/CD, DevOps, Docker, Kubernetes. Deep understanding of SOLID principles, design patterns, and architectural best practices. Experience in secure coding practices and API security (JWT, OAuth2, IdentityServer). Strong background in relational and NoSQL databases (SQL Server, Cosmos DB, MongoDB). Excellent communication, leadership, and documentation skills.
Preferred Qualifications Microsoft Certified: Azure Solutions Architect Expert or equivalent certification. Experience with frontend frameworks (React, Angular, Blazor) is a plus. Knowledge of event-driven architecture and message queues (e.g., Kafka, RabbitMQ). Exposure to Infrastructure as Code (Terraform, ARM, Bicep). Experience working in Agile/Scrum environments.
Experience 12+ Years
Work Location Kochi
Work Type Full Time
Please send your resume to careers@cabotsolutions.com
Posted 5 days ago
9.0 years
5 - 10 Lacs
Thiruvananthapuram
On-site
9 - 12 Years 1 Opening Trivandrum
Role description: Tech Lead – Azure/Snowflake & AWS Migration
Key Responsibilities Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services. Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets. Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including: rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches; migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines; migrating Redshift workloads to Snowflake with schema conversion and performance optimization; transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage; and redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe. Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale. Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing. Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation. Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies. Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching. Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning. Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.
Required Qualifications 9+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise. Proficiency in: Python for scripting and ETL orchestration; SQL for complex data transformation and performance tuning in Snowflake; Azure Data Factory and Synapse Analytics (SQL Pools). Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK. Strong understanding of cloud architecture and hybrid data environments across AWS and Azure. Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS. Familiarity with Azure Event Hubs, Logic Apps, and Key Vault. Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.
Preferred Qualifications Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing. Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake. Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments. Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.
Skills: Azure, AWS Redshift, Athena, Azure Data Lake
About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation.
Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
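For illustration only (not part of the posting): one migration step named above is redirecting Kinesis/MSK streams to Azure Event Hubs before Snowflake ingestion. Here is a rough, hypothetical sketch of only the Event Hubs publishing side using the azure-eventhub Python SDK; the connection string and hub name are placeholders.

```python
# Minimal sketch of publishing a record to Azure Event Hubs. Assumes `pip install azure-eventhub`.
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str="Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...",  # placeholder
    eventhub_name="trades",  # hypothetical hub name
)
with producer:
    batch = producer.create_batch()
    batch.add(EventData('{"trade_id": 1, "qty": 100}'))  # one JSON record, for demonstration
    producer.send_batch(batch)
```

Downstream ingestion into Snowflake (via Snowpipe or Streams & Tasks) is a separate step, as the posting describes.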
Posted 5 days ago
5.0 - 7.0 years
0 Lacs
Thiruvananthapuram
Remote
5 - 7 Years 1 Opening Trivandrum
Role description
Role Proficiency: Resolve enterprise trouble tickets within the agreed SLA, raise problem tickets for permanent resolution, and/or provide mentorship (hierarchical or lateral) to junior associates.
Outcomes:
1) Update SOPs with revised troubleshooting instructions and process changes
2) Mentor new team members in understanding customer infrastructure and processes
3) Perform analysis for driving incident reduction
4) Escalate high-priority incidents to customer and organization stakeholders for quicker resolution
5) Contribute to planning and successful migration of platforms
6) Resolve enterprise trouble tickets within the agreed SLA and raise problem tickets for permanent resolution
7) Provide inputs for root cause analysis after major incidents to define preventive and corrective actions
Measures of Outcomes:
1) SLA adherence
2) Time-bound resolution of elevated tickets - OLA
3) Manage ticket backlog timelines - OLA
4) Adherence to defined process – number of NCs in internal/external audits
5) Number of KB articles created
6) Number of incidents and change tickets handled
7) Number of elevated tickets resolved
8) Number of successful change tickets
9) % completion of all mandatory training requirements
Outputs Expected:
Resolution: Understand priority and severity based on ITIL practice; resolve trouble tickets within the agreed resolution SLA; execute change control tickets as documented in the implementation plan.
Troubleshooting: Troubleshoot based on available information from previous tickets or by consulting with seniors; participate in online knowledge forums for reference; convert new steps into KB articles; perform logical/analytical troubleshooting.
Escalation/Elevation: Escalate within the organization/to customer peers in case of resolution delay; understand the OLAs between delivery layers (L1, L2, L3, etc.) and adhere to them; elevate to the next level and work on elevated tickets from L1.
Tickets Backlog/Resolution: Follow up on tickets based on agreed timelines and manage ticket backlogs/last activity as per the defined process; resolve incidents and SRs within agreed timelines; execute change tickets for infrastructure.
Installation: Install and configure tools, software, and patches.
Runbook/KB: Update the KB with new findings; document and record troubleshooting steps as knowledge base articles.
Collaboration: Collaborate with different delivery towers for ticket resolution within SLA (resolve L1 tickets with help from the respective tower); collaborate with other team members for timely resolution of tickets; actively participate in team/organization-wide initiatives; coordinate with UST ISMS teams to resolve connectivity-related issues.
Stakeholder Management: Lead customer calls and vendor calls; organize meetings with different stakeholders; take ownership of the function's internal communications and related change management.
Strategic: Define the strategy for data management, policy management, and data retention management; support definition of the IT strategy for the function's relevant scope and be accountable for ensuring the strategy is tracked, benchmarked, and updated for the area owned.
Process Adherence: Maintain a thorough understanding of organization- and customer-defined processes; suggest process improvements and CSI ideas; adhere to the organization's policies and business conduct.
Process/Efficiency Improvement: Proactively identify opportunities to increase service levels and mitigate issues in service delivery within the function or across functions; take accountability for overall productivity efforts within the function, including coordination of function-specific tasks and close collaboration with Finance.
Process Implementation: Coordinate and monitor IT process implementation within the function.
Compliance: Support information governance activities and audit preparations within the function; act as the function SPOC for IT audits at local sites (including preparation, interfacing with the local organization, mitigation of findings, etc.) and work closely with ISRM (Information Security Risk Management); coordinate overall objective-setting preparation and facilitate the process to achieve consistent objective setting in the function; provide coordination support for CSI across all services in CIS and beyond.
Training: Complete all mandatory organization and customer training requirements on time; provide on-floor training and one-to-one mentorship for new joiners; complete certification for the respective career path.
Performance Management: Update FAST goals in NorthStar, track and report them, and seek continuous feedback from peers and managers; set goals for team members and mentees and provide feedback; assist new team members in understanding the customer environment.
Skill Examples:
1) Good communication skills (written, verbal, and email etiquette) to interact with different teams and customers
2) Modify/create runbooks based on suggested changes from juniors or newly identified steps
3) Ability to work on and resolve an elevated server ticket
4) Networking: a. Troubleshooting skills in static and dynamic routing protocols b. Capable of running NetFlow analyzers across different product lines
5) Server: a. Skills in installing and configuring Active Directory, DNS, DHCP, DFS, IIS, and patch management b. Excellent troubleshooting skills in technologies such as AD replication and DNS issues c. Skills in managing high-availability solutions such as failover clustering and VMware clustering
6) Storage and Backup: a. Ability to give recommendations to customers; perform storage and backup enhancements; perform change management b. Skilled in core fabric technology and storage design and implementation; hands-on experience with backup and storage command-line interfaces c. Perform hardware upgrades, firmware upgrades, vulnerability remediation, storage and backup commissioning and decommissioning, and replication setup and management d. Skilled in server, network, and virtualization technologies; integration of virtualization, storage, and backup technologies e. Review technical and architecture diagrams and modify SOPs and documentation based on business requirements f. Ability to perform the ITSM functions for the storage and backup team and review the quality of the ITSM process followed by the team
7) Cloud: a. Skilled in any one of the cloud technologies - AWS, Azure, GCP
8) Tools: a. Skilled in administration and configuration of monitoring tools such as CA UIM, SCOM, SolarWinds, Nagios, ServiceNow, etc. b. Skilled in SQL scripting c. Skilled in building custom reports on availability and performance of IT infrastructure based on customer requirements
9) Monitoring: a. Skills in monitoring infrastructure and application components
10) Database: a. Data modeling and database design; database schema creation and management b. Identify data integrity violations so that only accurate and appropriate data is entered and maintained c. Backup and recovery d. Web-specific technology expertise for e-Biz, Cloud, etc.; examples of this type of technology include XML, CGI, Java, Ruby, firewalls, SSL, and so on e. Migrating database instances to new hardware and new versions of software, from on-premise to cloud-based databases and vice versa
11) Quality Analysis: a. Ability to drive service excellence and continuous improvement within the framework defined by IT Operations
Knowledge Examples:
1) Good understanding of customer infrastructure and related CIs
2) ITIL Foundation certification
3) Thorough hardware knowledge
4) Basic understanding of capacity planning
5) Basic understanding of storage and backup
6) Networking: a. Hands-on experience with routers, switches, and firewalls b. At least basic knowledge of and hands-on experience with BGP c. Good understanding of load balancers and WAN optimizers d. Advanced backup-and-restore knowledge in backup tools
7) Server: a. Basic to intermediate PowerShell/Bash/Python scripting knowledge and demonstrated experience in script-based tasks b. Knowledge of AD group policy management, group policy tools, and troubleshooting GPOs c. Basic AD object creation, DNS concepts, DHCP, DFS d. Knowledge of tools like SCCM and SCOM administration
8) Storage and Backup: a. Subject matter expert in any of the storage and backup technologies
9) Tools: a. Proficient in understanding and troubleshooting the Windows and Linux families of operating systems
10) Monitoring: a. Strong knowledge of ITIL processes and functions
11) Database: a. Knowledge of general database management b. Knowledge of OS, system, and networking skills
Additional Comments:
Role - Cloud Engineer
Primary Responsibilities
• Engineer and support a portfolio of tools including: HashiCorp Vault (HCP Dedicated), Terraform (HCP), Cloud Platform; GitHub Enterprise Cloud (Actions, Advanced Security, Copilot); Ansible Automation Platform, Env0, Docker Desktop; Elastic Cloud, Cloudflare, Datadog, PagerDuty, SendGrid, Teleport
• Manage infrastructure using Terraform, Ansible, and scripting languages such as Python and PowerShell
• Enable security controls including dynamic secrets management, secrets scanning workflows, and cloud access quotas
• Design and implement automation for self-service adoption, access provisioning, and compliance monitoring
• Respond to user support requests via ServiceNow and continuously improve platform support documentation and onboarding workflows
• Participate in Agile sprints, sprint planning, and cross-team technical initiatives
• Contribute to evaluation and onboarding of new tools (e.g., remote developer access, artifact storage)
Key Projects You May Lead or Support
• GitHub secrets scanning and remediation with integration to HashiCorp Vault
• Lifecycle management of developer access across tools like GitHub and Teleport
• Upgrades to container orchestration environments and automation platforms (EKS, AKS)
Technical Skills and Experience
• Proficiency with Terraform (IaC) and Ansible
• Strong scripting experience in Python, PowerShell, or Bash
• Experience operating in cloud environments (AWS, Azure, or GCP)
• Familiarity with secure development practices and DevSecOps tooling
• Exposure to or experience with: CI/CD automation (GitHub Actions); monitoring and incident management platforms (Datadog, PagerDuty); identity providers (Azure AD, Okta); containers and orchestration (Docker, Kubernetes); secrets management and vaulting platforms
Soft Skills and Attributes
• Strong cross-functional communication skills with technical and non-technical stakeholders
• Ability to work independently while knowing when to escalate or align with other engineers or teams
• Comfort managing complexity and ambiguity in a fast-paced environment
• Ability to balance short-term support needs with longer-term infrastructure automation and optimization
• Proactive, service-oriented mindset focused on enabling secure and scalable development
• Detail-oriented, structured approach to problem-solving with an emphasis on reliability and repeatability
Skills: Terraform, Ansible, Python, PowerShell or Bash, AWS, Azure or GCP, CI/CD automation
About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
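For illustration only (not part of the posting): the Cloud Engineer role above centers on HashiCorp Vault and secrets-remediation workflows. Here is a minimal, hypothetical sketch of one such step using the hvac Python client, writing a rotated credential into a KV v2 path and reading it back. The Vault address, token handling, and secret path are placeholders.

```python
# Minimal sketch of a Vault KV v2 write/read. Assumes `pip install hvac` and a reachable Vault.
import hvac

client = hvac.Client(url="https://vault.example.com:8200", token="...")  # placeholders

# Store a credential that a GitHub secrets-scanning remediation workflow has just rotated.
client.secrets.kv.v2.create_or_update_secret(
    path="ci/github-token",                      # hypothetical path
    secret={"token": "new-rotated-value"},
)

# Read it back to confirm the rotation landed.
read = client.secrets.kv.v2.read_secret_version(path="ci/github-token")
print(read["data"]["data"]["token"])
```

In a real pipeline the token would come from an auth method (AppRole, OIDC, etc.) rather than a static value.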
Posted 5 days ago
5.0 years
10 Lacs
Thiruvananthapuram
Remote
Technology: Cloud Infrastructure Engineer with Azure, Kubernetes, Terraform
Experience: 5+ Years
Location: 100% Remote
Duration: 6 months
Cost: 80K per month
Working Time: 4:30 PM to 12:30 AM IST or 7:30 PM to 3:30 AM IST
PRIMARY SKILLS 5+ years of experience in cloud engineering, infrastructure architecture, or platform engineering roles. Experience with Kubernetes operations and architecture in production environments. Strong knowledge of cloud IaaS and PaaS services, and how to design reliable solutions leveraging them (e.g., VMs, load balancers, managed databases, identity platforms, messaging queues, etc.). Advanced proficiency in Terraform and Git-based infrastructure workflows. Experience building and maintaining CI/CD pipelines. Solid scripting abilities in Python, Bash, or PowerShell. A strong understanding of infrastructure security, governance, and identity best practices. Ability to work collaboratively across engineering teams.
SECONDARY SKILLS (IF ANY) Familiarity with GitOps tooling. Experience with policy-as-code and container security best practices. Experience with Microsoft Power Platform (Dynamics 365). Google Cloud knowledge.
Posted 5 days ago
3.0 - 5.0 years
2 - 5 Lacs
Gurgaon
On-site
Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.
Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us.
As an Infrastructure Engineer, you will be responsible for the technical design, planning, implementation, and optimization of performance tuning and recovery procedures for critical enterprise systems and applications. You will serve as the technical authority in system administration for complex SaaS, local, and cloud-based environments. Your role is critical in ensuring the high availability, reliability, and scalability of our infrastructure components. You will also be involved in designing philosophies, tools, and processes to enable the rapid delivery of evolving products.
In this role you will: Design, configure, and document cloud-based infrastructures using AWS Virtual Private Cloud (VPC) and EC2 instances in AWS. Secure and monitor hosted production SaaS environments provided by third-party partners. Define, document, and manage network configurations within AWS VPCs and between VPCs and data center networks, including firewall, DNS, and ACL configurations. Lead the design and review of developer work on DevOps tools and practices. Ensure high availability and reliability of infrastructure components through monitoring and performance tuning. Implement and maintain security measures to protect infrastructure from threats. Collaborate with cross-functional teams to design and deploy scalable solutions. Automate repetitive tasks and improve processes using scripting languages such as Python, PowerShell, or Bash. Support Airflow DAGs in the Data Lake, utilizing the Spark framework and Big Data technologies. Provide support for infrastructure-related issues and conduct root cause analysis. Develop and maintain documentation for infrastructure configurations and procedures. Administer databases, handle data backups, monitor databases, and manage data rotation. Work with RDBMS and NoSQL systems, leading stateful data migration between different data systems.
Experience & Qualifications: Bachelor’s or Master’s degree in Information Science, Computer Science, Business, or equivalent work experience. 3-5 years of experience with Amazon Web Services, particularly VPC, S3, EC2, and EMR. Experience in setting up new VPCs and integrating them with existing networks is highly desirable. Experience in maintaining infrastructure for Data Lake/Big Data systems built on the Spark framework and Hadoop technologies. Experience with Active Directory and LDAP setup, maintenance, and policies. Workday certification is preferred but not required. Exposure to Workday Integrations and Configuration is preferred. Strong knowledge of networking concepts and technologies. Experience with infrastructure automation tools (e.g., Terraform, Ansible, Chef).
Familiarity with containerization technologies like Docker and Kubernetes. Excellent problem-solving skills and attention to detail. Strong verbal and written communication skills. Understanding of Agile project methodologies, including Scrum and Kanban, is required. Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.
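For illustration only (not part of the posting): the Expedia role above covers documenting AWS VPC/EC2 configurations and automating repetitive tasks in Python. A minimal boto3 sketch of that kind of inventory task might look like the following; the region, credentials, and VPC ID are placeholders.

```python
# Minimal sketch of an AWS inventory script. Assumes `pip install boto3` and configured credentials.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # placeholder region

# Inventory VPCs and their CIDR blocks.
for vpc in ec2.describe_vpcs()["Vpcs"]:
    print(vpc["VpcId"], vpc["CidrBlock"])

# List running instances in one (hypothetical) VPC.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]},     # placeholder VPC ID
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]
for reservation in reservations:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance.get("PrivateIpAddress"))
```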
Posted 5 days ago
0 years
6 - 10 Lacs
Gurgaon
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. We are seeking a proactive and technically skilled DevOps & Java Support Engineer to join our Core Operations team. The ideal candidate will have hands-on experience in Java-based application support, CI/CD pipelines, infrastructure automation, and production monitoring. This role is critical in ensuring the stability, scalability, and performance of our core systems. Primary Responsibilities: Provide L2 support for Java & microservices based applications in production and staging environments Monitor system health and performance using tools like Prometheus, Grafana, ELK, or equivalent Troubleshoot and resolve incidents, perform root cause analysis, and implement preventive measures Develop and maintain CI/CD pipelines using Jenkins, GitLab CI, or similar tools Automate infrastructure provisioning and configuration using tools like Ansible, Terraform, or CloudFormation Collaborate with development, QA, and infrastructure teams to ensure smooth deployments and releases Participate in on-call rotations and incident response processes Maintain documentation for operational procedures, runbooks, and support guides Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualification: B.E/B.Tech At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
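For illustration only (not part of the posting): the Optum role above includes monitoring system health with tools like Prometheus. A minimal, hypothetical sketch of querying the Prometheus HTTP API for down targets is shown below; the endpoint URL is a placeholder.

```python
# Minimal sketch of querying Prometheus for targets reporting up == 0. Assumes `pip install requests`.
import requests

PROM_URL = "http://prometheus.internal:9090"  # placeholder endpoint

def report_down_targets() -> None:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": "up == 0"}, timeout=10)
    resp.raise_for_status()
    for result in resp.json()["data"]["result"]:
        labels = result["metric"]
        print("DOWN:", labels.get("job"), labels.get("instance"))

if __name__ == "__main__":
    report_down_targets()
```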
Posted 5 days ago
10.0 years
8 - 10 Lacs
Gurgaon
On-site
Additional Locations: India-Haryana, Gurgaon Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions.
Senior Software Engineer - MLOps We are looking for a highly skilled Senior Software Engineer – MLOps with deep expertise in building and managing production-grade ML pipelines in AWS and Azure cloud environments. This role requires a strong foundation in software engineering, DevOps principles, and ML model lifecycle automation to enable reliable and scalable machine learning operations across the organization.
Key Responsibilities include: Design and build robust MLOps pipelines for model training, validation, deployment, and monitoring Automate workflows using CI/CD tools such as GitLab Actions, Azure DevOps, Jenkins, or Argo Workflows Build and manage ML workloads on AWS (SageMaker Unified Studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage) Design secure and cost-efficient ML architecture leveraging cloud-native services Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation Implement cost optimization and performance tuning for cloud workloads Package ML models using Docker, and orchestrate deployments with Kubernetes on EKS/AKS Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation Integrate observability tools for model performance, drift detection, and lineage tracking (e.g., Fiddler, MLflow, Prometheus, Grafana, Azure Monitor, CloudWatch) Ensure model reproducibility, versioning, and compliance with audit and regulatory requirements Collaborate with data scientists, software engineers, DevOps, and cloud architects to operationalize AI/ML use cases Mentor junior MLOps engineers and evangelize MLOps best practices across teams
Required Qualifications: Bachelor's/Master’s in Computer Science, Engineering, or related discipline 10 years in DevOps, with 2+ years in MLOps. Proficient with MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git. Experience with feature stores (e.g., Feast), model registries, and experiment tracking. Proficiency in DevOps and MLOps automation with CloudFormation/Terraform/Bicep.
Requisition ID: 610750
As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
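For illustration only (not part of the posting): the MLOps role above calls for experiment tracking and model registries with MLflow. A minimal, hypothetical sketch of logging a run and registering a model is shown below; the tracking URI, experiment name, and model name are placeholders, and registration assumes the tracking server backs a model registry.

```python
# Minimal sketch of MLflow tracking + registry. Assumes `pip install mlflow scikit-learn`.
import mlflow
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://mlflow.internal:5000")  # placeholder tracking server
mlflow.set_experiment("churn-model")                    # hypothetical experiment name

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
with mlflow.start_run() as run:
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")
    # Register the logged model so a deployment pipeline can promote it by version.
    mlflow.register_model(f"runs:/{run.info.run_id}/model", "churn-model")
```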
Posted 5 days ago
11.0 years
2 - 7 Lacs
Gurgaon
On-site
Company Description We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That is where you come in!
Job Description REQUIREMENTS: Total experience 11+ years Strong working experience with architecture and development in C#, .NET Core, .NET Framework, Entity Framework, ASP.NET MVC, ASP.NET Web API, and unit testing. Well versed in front-end technologies like HTML5, CSS, JavaScript, and the React framework. Strong knowledge of object-oriented programming, microservices architecture (MSA), REST, and service-oriented architecture. Should have experience with Azure DevOps or CI/CD tools such as Docker, Kubernetes, Jenkins, Git, Azure DevOps, PowerShell, NPM, Terraform, ARM, IIS. Hands-on experience with databases like SQL Server, Oracle, and MySQL. Good understanding of design patterns, concurrent design, and multithreading. Strong troubleshooting skills in different disparate technologies and environments. Enthusiastic about different areas of work and exploring new technologies. Clarity of thought and strong communication skills to effectively pitch solutions. Ability to explore and grasp new technologies. Mentoring your team members in projects and helping them keep up with new technologies. Empowering the team members to be solution providers and enabling a flat environment where everyone’s point of view is considered and feedback is encouraged.
RESPONSIBILITIES: Writing and reviewing great quality code Understanding the client’s business use cases and technical requirements and being able to convert them into a technical design that elegantly meets the requirements Mapping decisions with requirements and being able to translate the same to developers Identifying different solutions and being able to narrow down the best option that meets the client’s requirements Defining guidelines and benchmarks for NFR considerations during project implementation Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed Developing and designing the overall solution for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks to materialize it Understanding and relating technology integration scenarios and applying these learnings in projects Resolving issues that are raised during code review through exhaustive, systematic analysis of the root cause, and being able to justify the decision taken Carrying out POCs to make sure that suggested design/technologies meet the requirements
Qualifications Bachelor’s or master’s degree in Computer Science, Information Technology, or a related field.
Additional Information Click here to access the application privacy notice
Posted 5 days ago
4.0 years
4 - 9 Lacs
Gurgaon
On-site
Job Description: POSITION RESPONSIBILITES Monitor the ServiceNow ticket queue and event monitoring tools (Zenoss) for incoming incidents & requests Perform initial investigation and/or troubleshooting of systems (windows/ Linux/ AWS) and network issues to resolve issue basis SOPs available Process all support incidents and Task requests within SLA by following procedural requirements Escalate to secondary support teams in timely manner, where necessary, to ensure timely resolution Thoroughly document steps taken to resolve or escalate incidents within ServiceNow tickets Participate in Bridge calls to help resolve system outages and restore service to users and Guardian partners Identify and address repeating alert trends or non-actionable alerts to streamline and optimize services Suggest defects and product/infrastructure enhancements to improve stability and automation Perform Incident management based on ITIL principles Participate in periodic skills enhancement sessions and training courses Prepare and deliver standard scheduled reports to support service trending and optimization Develop, document and update standard operating procedures and knowledgebase articles. REPORTING RELATIONSHIPS This position reports to the EOC Manager. CANDIDATE QUALIFICATIONS Functional Skills EOC team needs to perform on 4 Technologies primarily, and candidate needs to one expertise in 1 of these and working knowledge in others: The technologies are: Windows Server Administration Linux and Unix Server Administration Network Administration and Telecom services AWS DevOps Working knowledge of the following industry standard technologies is required for this role, including: Server Hardware (Cisco UCS, IBM P-Series) Cloud Technologies (Amazon Web Services (AWS) Core Services, Terraform, Security Groups, Jenkins) Citrix Microsoft Active Directory Networking (TCP/IP, QIP (DNS), Wireless, F5, Riverbed) Security (Anti-virus (Trend Micro, Symantec), SSL Certificate Management) Strong experience working with ticketing tools such as ServiceNow, Zenoss or any other monitoring tool, Cloud monitoring tools (CloudWatch, CloudTrail), AppDynamics (or similar APM tool) Strong problem-solving and troubleshooting skills Keen analytical and structured approach to problem solving Ability to follow instructions and Standard Operating Procedures (SOPs) Excellent written and spoken English language skills with an ability to speak loudly and clearly Outstanding customer service skills and dedication to customer satisfaction Excellent documentation skills Proven ability to work independently Ability to work well in a team environment Ability to accommodate flexible work schedules Ability to triage outage bridge calls and drive calls to closure. 
Comfortable with “crisis” situations that require critical thinking, problem definition, and diagnosis skills
Ability to speak confidently with Developers, Engineers, and Management
Leadership Behaviors
Takes ownership and accountability for actions and results
Takes action to resolve customer problems promptly and to ensure customer satisfaction
Demonstrates high standards of professionalism, integrity, and customer service
POSITION QUALIFICATIONS
Total of 4+ years of experience, including a minimum of 2 years of experience in a 24x7 Network Operations Center and Service Management role
Strong Microsoft Word, Excel, and PowerPoint skills
Bachelor’s degree or similar required
A+, Network+, Security+, Microsoft, and Cisco certifications preferred
Flexibility to work in 24x7x365 shifts on a rotational basis
Must be comfortable working in a highly critical, fast-paced environment with shifting priorities
The EOC is available 24x7x365 and requires onsite coverage. Shifts can vary across a 24-hour clock. Shifts may change periodically to vary workdays.
Guardian - https://youtu.be/QEtkY6EkEuQ
Location: This position can be based in any of the following locations: Gurgaon
Current Guardian Colleagues: Please apply through the internal Jobs Hub in Workday
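As a rough illustration of the kind of routine check this EOC role automates (the CloudWatch check referenced in the posting above), the sketch below uses boto3 to list CloudWatch alarms currently in the ALARM state so they can be triaged against the ticket queue. It is a minimal example and not part of the posting: the region and the idea of printing to stdout, rather than opening a ServiceNow ticket, are assumptions for illustration only.

```python
# Minimal sketch: list CloudWatch alarms currently firing, for EOC triage.
# Assumes AWS credentials are already configured; the region is an example.
import boto3

def alarms_in_alarm_state(region: str = "us-east-1") -> list[dict]:
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    alarms = []
    # DescribeAlarms is paginated; walk every page and keep only firing alarms.
    paginator = cloudwatch.get_paginator("describe_alarms")
    for page in paginator.paginate(StateValue="ALARM"):
        for alarm in page["MetricAlarms"]:
            alarms.append(
                {
                    "name": alarm["AlarmName"],
                    "reason": alarm.get("StateReason", ""),
                    "since": str(alarm.get("StateUpdatedTimestamp", "")),
                }
            )
    return alarms

if __name__ == "__main__":
    for item in alarms_in_alarm_state():
        # In a real EOC workflow this would feed a ticket or a bridge-call summary.
        print(f"{item['name']} | since {item['since']} | {item['reason']}")
```

A check like this would typically run on a schedule and feed the alert-trend reporting mentioned in the responsibilities.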
Posted 5 days ago
10.0 years
4 - 9 Lacs
Gurgaon
On-site
Additional Locations: India-Haryana, Gurgaon
Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance
At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions.
Manager, AI Platform
We are seeking a results-driven AI/ML Platform Manager with a strong background in cloud technologies (AWS, Azure) to lead the strategic development and delivery of enterprise-grade AI/ML platforms. This role is pivotal in enabling scalable, secure, and resilient business applications, integrating cloud-based systems, and driving digital transformation initiatives. This role will lead teams to achieve performance objectives and provide deep insights into best practices for solving complex problems.
Key Responsibilities include:
Lead a platform organization of Cloud Administrators, Support Engineers, GenAI Ops Engineers, and GenAI Architects
Collaborate with business unit heads, PMOs, and product managers to translate requirements into reliable platform capabilities
Lead engagement delivery and manage client relationships on a daily basis
Standardize platform services across cloud and on-prem environments, ensuring alignment with enterprise architecture
Be accountable for program/project management and engagement economics
Implement cost optimization and performance tuning for cloud workloads
Lead cross-functional teams in developing APIs, integrations, and microservices that support data flow across systems
Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation
Implement observability tools (e.g., Fiddler, Datadog, Prometheus, Splunk) across enterprise workloads
Enforce zero trust principles, encryption standards, and cloud security baselines
Strong knowledge of microservices deployment architecture, with Kubernetes (K8s) experience
Required Qualifications:
Bachelor's/Master’s in Computer Science, Engineering, or a related discipline
10+ years of experience in enterprise platform tools, with 4 years of strong AI/ML platform experience and 6+ years of cloud experience
Proven experience in managing infrastructure and workloads on AWS, Azure, or GCP
Strong communication and stakeholder management skills, with the ability to collaborate effectively across diverse teams and functions
Strong understanding of business, budget, vendor management, financial management, and team management
Requisition ID: 610749
As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen.
So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
Posted 5 days ago
7.0 years
8 - 10 Lacs
Gurgaon
On-site
Additional Locations: India-Haryana, Gurgaon
Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance
At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions.
Software Engineer-MLOps
We are seeking an enthusiastic and detail-oriented MLOps Engineer to support the development, deployment, and monitoring of machine learning models in production environments. This is a hands-on role ideal for candidates looking to grow their skills at the intersection of data science, software engineering, and DevOps. You will work closely with senior MLOps engineers, data scientists, and software developers to build scalable, reliable, and automated ML workflows across cloud platforms like AWS and Azure.
Key Responsibilities include:
Assist in building and maintaining ML pipelines for data preparation, training, testing, and deployment
Support the automation of model lifecycle tasks, including versioning, packaging, and monitoring
Build and manage ML workloads on AWS (SageMaker Unified Studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage)
Assist with containerizing ML models using Docker, and deploying using Kubernetes or cloud-native orchestrators
Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation
Participate in implementing CI/CD pipelines for ML workflows using GitHub Actions, Azure DevOps, or Jenkins
Contribute to testing frameworks for ML models and data validation (e.g., pytest, Great Expectations)
Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation
Participate in diagnosing issues related to model accuracy, latency, or infrastructure bottlenecks
Continuously improve knowledge of MLOps tools, ML frameworks, and cloud practices
Required Qualifications:
Bachelor's/Master’s in Computer Science, Engineering, or a related discipline
7 years in DevOps, with 2+ years in MLOps
Good understanding of MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git
Proficient in Python and familiar with Bash scripting
Exposure to MLOps platforms or tools such as SageMaker Studio, Azure ML, or GCP Vertex AI
Requisition ID: 610751
As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen.
So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
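Since this MLOps posting names MLflow alongside Python, the sketch below shows the sort of model-lifecycle bookkeeping the role supports: logging parameters, metrics, and a trained model to an MLflow tracking server so it can later be versioned and packaged. It is a minimal illustration only; the tracking URI, experiment name, and toy dataset are placeholders, not anything specified in the posting.

```python
# Minimal sketch: track a training run with MLflow so the model can be versioned later.
# The tracking URI and experiment name below are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://localhost:5000")   # assumed local tracking server
mlflow.set_experiment("demo-iris-classifier")

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

with mlflow.start_run():
    params = {"C": 1.0, "max_iter": 200}
    model = LogisticRegression(**params).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Everything logged here becomes queryable run metadata in the tracking UI.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, artifact_path="model")
```

In the CI/CD pipelines mentioned above, a step like this would typically run after training so deployment tooling can pull a specific, reproducible model version.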
Posted 5 days ago
6.0 - 8.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
We deliver the world’s most complex projects.
Work as part of a collaborative and inclusive team.
Enjoy a varied & challenging role.
Building on our past. Ready for the future.
Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we’re bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals, and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects.
The Role
As a DevOps Engineer with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience. A DevOps Engineer with 6-8 years of experience on Azure DevOps is a highly skilled professional responsible for designing, implementing, and managing DevOps practices and workflows within an organization's Azure DevOps environment. He/she plays a critical role in streamlining software delivery, automation, and collaboration between development and operations teams.
Azure DevOps Strategy and Execution: Execute a comprehensive Azure DevOps strategy and roadmap that aligns with the organization's goals and objectives. Collaborate with stakeholders to define and prioritize DevOps initiatives using Azure DevOps tools and services.
Continuous Integration and Deployment (CI/CD): Design, implement, and maintain robust CI/CD pipelines using Azure DevOps tools and services. Automate the build, test, and deployment processes for applications running on Azure (an illustrative pipeline-status check is sketched after this posting).
Azure DevOps Configuration and Administration: Configure and manage Azure DevOps tools and services, including source control, build agents, release pipelines, and artifact repositories. Set up and maintain project structures, access controls, and security configurations.
Infrastructure as Code (IaC) and Release Automation: Implement Infrastructure as Code (IaC) principles using Azure Resource Manager (ARM) templates or other relevant tools. Automate the release and deployment of infrastructure components using Azure DevOps release pipelines.
Azure DevOps Toolchain Integration: Integrate and orchestrate various Azure DevOps tools and services, including source control, work item tracking, test management, and release management. Ensure smooth collaboration and data synchronization between different toolchain components.
Monitoring and Alerting: Implement monitoring and alerting solutions within Azure DevOps using tools such as Azure Monitor, Application Insights, or custom integrations. Configure alerts and notifications for proactive issue identification and resolution.
Security and Compliance: Implement Azure DevOps security best practices, including access controls, secure coding practices, and vulnerability management. Collaborate with security teams to address vulnerabilities, perform security assessments, and ensure compliance with industry regulations.
Troubleshooting and Incident Management: Respond to and resolve critical incidents within the Azure DevOps environment. Apply troubleshooting skills to identify root causes and implement preventive measures.
Participate in the on-call rotation to provide 24/7 support when required.
Documentation and Knowledge Sharing: Create and maintain technical documentation, guidelines, and runbooks specific to Azure DevOps workflows and configurations. Share knowledge and best practices with team members, contributing to the development of a learning culture.
About You
To be considered for this role it is envisaged you will possess the following attributes:
Bachelor's or master's degree in Computer Science, Engineering, or a related field.
6-8 years of experience in DevOps or a related role, with a strong focus on Azure DevOps.
In-depth knowledge of Azure DevOps tools and services, including source control, build pipelines, release pipelines, and artifact repositories.
Proficiency in scripting and automation using PowerShell, Azure CLI, or other relevant scripting languages.
Experience with implementing CI/CD pipelines and release management processes using Azure DevOps.
Strong understanding of Infrastructure as Code (IaC) principles and tools such as Terraform and Azure Resource Manager (ARM) templates.
Experience with Azure services, including Azure App Services, Azure Functions, and Azure Kubernetes Service (AKS).
Familiarity with software development practices and Agile methodologies.
Experience with configuring and managing test management and work item tracking systems within Azure DevOps.
Strong problem-solving and troubleshooting skills, with the ability to analyze complex systems and identify practical solutions.
Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
Relevant certifications will be an added advantage.
Good-to-have knowledge and skills: Visual Studio, Maven, Python pip, PHP Composer, NuGet, MSBuild, Trivy, Prisma Cloud, Microsoft Cloud Defender, service mesh tools like Istio, and Jenkins.
Moving forward together
We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. We’re building a diverse, inclusive and respectful workplace. Creating a space where everyone feels they belong, can be themselves, and are heard. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change.
Worley takes personal data protection seriously and respects EU and local data protection laws. You can read our full Recruitment Privacy Notice here.
Please note: If you are being represented by a recruitment agency you will not be considered; to be considered you will need to apply directly to Worley.
Company: Worley
Primary Location: IND-MM-Navi Mumbai
Job: Digital Solutions
Schedule: Full-time
Employment Type: Employee
Job Level: Experienced
Job Posting: Jul 31, 2025
Unposting Date: Aug 30, 2025
Reporting Manager Title: Senior Principal Digital Solutions Consultant
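As a rough companion to the CI/CD responsibilities above (the pipeline-status check referenced in the posting), the sketch below queries the Azure DevOps REST API for the most recent builds in a project so an engineer can script a quick pipeline health summary. The organization name, project name, and personal access token are placeholders, and the API version is an assumption; this is an illustrative sketch, not an official Worley workflow.

```python
# Minimal sketch: summarize recent Azure DevOps build results via the REST API.
# ORGANIZATION, PROJECT, and the PAT are placeholders; api-version may need adjusting.
import os
import requests

ORGANIZATION = "my-org"        # placeholder
PROJECT = "my-project"         # placeholder
PAT = os.environ["AZDO_PAT"]   # personal access token supplied via environment

def recent_builds(top: int = 10) -> list[dict]:
    url = f"https://dev.azure.com/{ORGANIZATION}/{PROJECT}/_apis/build/builds"
    response = requests.get(
        url,
        params={"$top": top, "api-version": "7.0"},
        auth=("", PAT),   # basic auth: empty username plus the PAT
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("value", [])

if __name__ == "__main__":
    for build in recent_builds():
        print(
            f"{build.get('definition', {}).get('name')}: "
            f"{build.get('status')} / {build.get('result')}"
        )
```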
Posted 5 days ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Reference # 323129BR
Job Type Full Time
Your role
The individual in this role will be accountable for the successful and timely delivery of projects in an agile environment where digital products are designed and built using cutting-edge technology for WMA clients and Advisors. It is a DevOps role that entails working with teams located in Budapest (Hungary), Wroclaw (Poland), Pune (India), and New Jersey (US). This role will include, but not be limited to, the following:
maintain and build CI/CD pipelines
migrate applications to the cloud environment
build scripts and dashboards for monitoring application health (a minimal health-check script is sketched after this posting)
build tools to reduce the occurrence of errors and improve customer experience
deploy changes in prod and non-prod environments
follow release management processes for application releases
maintain stability of non-prod environments
work with development, QA, and support groups in troubleshooting environment issues
Your team
You'll be working as an engineering leader in the Client Data and Onboarding Team in India. We are responsible for WMA (Wealth Management Americas) client-facing technology applications. This leadership role entails working with teams in the US and India. You will play an important role in ensuring that a scalable development methodology is followed across multiple teams, and you will participate in strategy discussions with the business and in technology strategy discussions with architects. Our culture centers around innovation, partnership, transparency, and passion for the future. Diversity helps us grow, together. That’s why we are committed to fostering and advancing diversity, equity, and inclusion. It strengthens our business and brings value to our clients.
Your expertise
You should have 8+ years of experience and be able to:
develop, build, and maintain GitLab CI/CD pipelines
use containerization technologies, orchestration tools (Kubernetes), build tools (Maven, Gradle), VCS (GitLab), and Sonar and Fortify tools to build robust deploy and release infrastructure
deploy changes to prod and non-prod Azure cloud infrastructure using Helm, Terraform, and Ansible, and set up appropriate observability measures
build scripts (Bash, Python, Puppet) and dashboards for monitoring the health of applications (AppDynamics, Splunk, AppInsights)
possess basic networking knowledge (load balancing, SSH, certificates) and middleware knowledge (MQ, Kafka, Azure Service Bus, Event Hub)
follow release management processes for application releases
maintain stability of non-prod environments
work with development, QA, and support groups in troubleshooting environment issues
About Us
UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.
How We Hire
We may request you to complete one or more assessments during the application process. Learn more
Join us
At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone.
That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves.
We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us.
Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
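To illustrate the "scripts and dashboards for monitoring application health" responsibility referenced in the posting above, here is a minimal Python sketch that polls a set of HTTP health endpoints and prints a JSON summary that a dashboard or scheduled job could consume. The endpoint names and URLs are invented placeholders, not UBS systems.

```python
# Minimal sketch: poll application health endpoints and emit a JSON summary.
# The endpoint names and URLs are placeholders for illustration only.
import json
import sys
import requests

ENDPOINTS = {
    "accounts-api": "https://accounts.example.internal/health",
    "onboarding-ui": "https://onboarding.example.internal/health",
}

def check_all(timeout_seconds: float = 5.0) -> dict:
    results = {}
    for name, url in ENDPOINTS.items():
        try:
            response = requests.get(url, timeout=timeout_seconds)
            results[name] = {
                "status": "UP" if response.ok else "DOWN",
                "http_status": response.status_code,
            }
        except requests.RequestException as exc:
            results[name] = {"status": "DOWN", "error": str(exc)}
    return results

if __name__ == "__main__":
    summary = check_all()
    print(json.dumps(summary, indent=2))
    # A non-zero exit code lets a scheduler or pipeline step flag the failure.
    sys.exit(0 if all(item["status"] == "UP" for item in summary.values()) else 1)
```

In practice, output like this would usually be shipped to Splunk or AppDynamics rather than printed, but the shape of the check is the same.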
Posted 5 days ago
8.0 years
0 Lacs
India
On-site
Business Summary
The Deltek Global Cloud team focuses on the delivery of first-class services and solutions for our customers. We are an innovative and dynamic team that is passionate about transforming the Deltek cloud services that power our customers' project success. Our diverse, global team works cross-functionally to make an impact on the business. If you want to work in a transformational environment, where education and training are encouraged, consider Deltek as the next step in your career!
Position Responsibilities
As a Senior Manager for the DevOps Engineering and Automation team, you will lead a team of skilled DevOps engineers responsible for automating infrastructure provisioning, configuration, and CI/CD pipelines for a portfolio of enterprise solutions. With a strong DevOps transformation background, you will leverage your expertise in DevOps practices, tools, and public clouds (AWS, OCI) to develop strategic initiatives that enhance the efficiency, scalability, and reliability of our deployment processes. Additionally, you will have significant experience in people management, strategy development, and cross-functional collaboration.
Key Responsibilities:
Strategic Leadership: Help develop and implement a strategic roadmap for DevOps practices, automation, and infrastructure management. Identify and prioritize opportunities for process improvements, cost efficiencies, and technological advancements. Collaborate with senior leadership to align DevOps strategies with business objectives and goals.
Team Management: Lead, mentor, and develop a team of DevOps engineers, fostering a culture of collaboration, innovation, and continuous improvement. Manage team performance, set clear goals, and provide regular feedback and professional development opportunities. Recruit and onboard top talent to build a high-performing DevOps team.
Infrastructure Provisioning and Configuration: Oversee the development and maintenance of infrastructure as code (IaC) using Terraform for provisioning cloud resources. Ensure the creation and maintenance of Ansible playbooks for automated configuration and management of infrastructure and applications. Implement best practices for infrastructure scalability, security, and cost management (a minimal tag-audit sketch follows this posting).
CI/CD Pipeline Implementation: Guide and support the design, implementation, and management of CI/CD pipelines to automate the build, testing, and deployment of applications and services. Ensure integration of CI/CD pipelines with version control systems, build tools, and monitoring solutions. Promote practices that support automated testing, security scans, and compliance checks.
Cloud Deployment and Management: Direct the deployment and management of applications and services in public cloud environments such as AWS and OCI. Utilize cloud-native services and tools to enhance application performance and reliability. Implement robust monitoring, troubleshooting, and disaster recovery solutions for cloud deployments.
Cross-Functional Collaboration: Work closely with Engineering and Delivery stakeholders to ensure alignment and successful deployments. Facilitate design and code reviews, ensuring adherence to high standards of quality and performance. Drive cross-functional initiatives to improve process efficiency and project outcomes.
Qualifications
Education: Bachelor’s degree in Computer Science (strongly preferred), Information Technology, or a related field. Master’s degree preferred.
Experience: Minimum of 8 years of experience in DevOps, cloud infrastructure, and automation, with at least 3 years in a leadership role.
Skills:
Expertise in infrastructure provisioning and automated configuration management tools.
Proven experience in designing and implementing CI/CD pipelines using tools such as Jenkins, Azure DevOps, GitLab CI, or CircleCI.
Extensive hands-on experience with AWS and OCI, including services like EC2, S3, Lambda, VCN, and OCI Compute.
Strong understanding of containerization and orchestration tools like Docker and Kubernetes.
Knowledge of Oracle and SQL Server, including clustering, replication, partitioning, and indexing.
Excellent scripting skills in languages such as Python, Bash, or PowerShell.
Proficiency in monitoring and logging tools like Prometheus, Grafana, the ELK stack, or CloudWatch.
Strong leadership, communication, and interpersonal skills.
Preferred Qualifications:
Certifications: AWS Certified DevOps Engineer, Terraform Certified Associate, or similar.
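As a small illustration of the security and cost-management best practices called out above (the tag-audit sketch referenced in the posting), the snippet below uses boto3 to flag EC2 instances missing a cost-allocation tag. The tag key and region are assumptions for illustration; no Deltek-specific conventions are implied.

```python
# Minimal sketch: flag EC2 instances missing a cost-allocation tag.
# The tag key and region are illustrative assumptions.
import boto3

REQUIRED_TAG = "CostCenter"

def untagged_instances(region: str = "us-east-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {tag["Key"] for tag in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    for instance_id in untagged_instances():
        print(f"{instance_id} is missing the {REQUIRED_TAG} tag")
```

A check like this is the kind of compliance step that can be wired into the CI/CD pipelines described above.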
Posted 5 days ago
0 years
3 - 8 Lacs
Mohali
On-site
Role Summary: A DevOps Developer bridges the gap between developers and IT operations. They write code, automate infrastructure, and ensure smooth CI/CD pipelines, enabling faster and more reliable software delivery.
Key Responsibilities:
Develop and maintain CI/CD pipelines (e.g., Jenkins, GitLab, GitHub Actions).
Write automation scripts using Shell, Python, or Groovy.
Configure and manage cloud infrastructure (AWS, Azure, GCP).
Use Infrastructure as Code (IaC) tools like Terraform or Ansible.
Monitor performance using tools like Prometheus, Grafana, and the ELK Stack (see the sketch after this posting).
Support containerized deployments using Docker and Kubernetes.
Collaborate with development teams to optimize build and deployment processes.
Job Type: Full-time
Pay: ₹50,000.00 - ₹80,000.00 per year
Schedule: Day shift, Monday to Friday, Morning shift, Weekend availability
Work Location: In person
Speak with the employer: +91 8195806334
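As an illustration of the monitoring work listed above (the sketch referenced in the responsibilities), this snippet queries the standard Prometheus HTTP API for the built-in `up` metric to see which scrape targets are currently down. The Prometheus base URL is a placeholder; the API path and response shape follow the documented Prometheus query endpoint.

```python
# Minimal sketch: ask Prometheus which scrape targets are down (up == 0).
# The Prometheus base URL is a placeholder.
import requests

PROMETHEUS_URL = "http://localhost:9090"

def down_targets() -> list[str]:
    response = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": "up == 0"},
        timeout=10,
    )
    response.raise_for_status()
    payload = response.json()
    # Each result carries the labels of a target whose 'up' value is 0.
    return [
        result["metric"].get("instance", "<unknown>")
        for result in payload["data"]["result"]
    ]

if __name__ == "__main__":
    targets = down_targets()
    if targets:
        print("Down targets:", ", ".join(targets))
    else:
        print("All scrape targets are up.")
```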
Posted 5 days ago
1.0 years
2 - 3 Lacs
India
On-site
Key Responsibilities:
Design, implement, and manage CI/CD pipelines using tools like Jenkins, GitLab CI, or GitHub Actions.
Maintain and monitor cloud infrastructure (AWS, Azure, GCP).
Automate infrastructure provisioning using tools such as Terraform, Ansible, or CloudFormation.
Manage containerization and orchestration using Docker and Kubernetes.
Implement system monitoring, logging, and alerting (e.g., Prometheus, Grafana, ELK stack).
Collaborate with development and QA teams to ensure smooth release processes.
Ensure system availability, scalability, and performance.
Enforce security best practices across all DevOps processes.
Job Type: Full-time
Pay: ₹18,000.00 - ₹25,000.00 per month
Experience: DevOps: 1 year (Preferred)
Work Location: In person
Posted 5 days ago
0 years
3 - 7 Lacs
Chennai
On-site
Site Reliability Engineer III
Are you passionate about working with a wide variety of products and technologies in an environment that values collaboration, support, and inclusion? Do you enjoy sharing knowledge and learning alongside colleagues from diverse backgrounds?
About the Business
LexisNexis Risk Solutions supports organizations around the world with innovative tools for risk assessment. Our insurance division delivers advanced technology and analytics that help drive better decision-making throughout the insurance policy lifecycle. We’re committed to creating positive outcomes for our clients by improving efficiency and reducing risk. Learn more about our Insurance Risk Solutions on our website: https://risk.lexisnexis.com/insurance.
About the Team
You will be a vital member of a collaborative and forward-thinking team that works with a diverse range of technologies. We foster an inclusive, supportive environment and invest in team skill-building and cross-training, leveraging each other’s strengths. This position is highly visible across many teams, working closely with groups such as Development, Quality Assurance, IT Operations, and Customer Operations.
About the Role
We are seeking a Site Reliability Engineer (SRE) with experience in Azure and a track record of success in cloud migration project initiatives. The successful candidate will help design and coordinate the implementation of cloud infrastructure, including Kubernetes clusters, databases and storage, serverless functions, CI/CD pipelines, and solutions for monitoring, alerting, and security. In this role, you will work to understand business needs and technical solutions, provide input based on evidence, communicate effectively with people of different technical backgrounds, influence decision-making, implement best-practice solutions, and maintain them, all within a fast-paced environment managing multiple projects in parallel.
Responsibilities
Develop, deploy, and maintain scalable and highly available systems on Kubernetes.
Design and implement automation processes for system deployments and scaling.
Monitor system performance, troubleshoot issues, and drive ongoing improvements (a minimal pod-health sketch follows this posting).
Collaborate with development teams to enhance infrastructure, including CI/CD pipelines.
Respond to and resolve operational incidents, provide detailed reports, and participate in post-incident reviews.
Manage code deployments, updates, and processes across multiple environments.
Requirements
Hands-on experience with Azure solutions and observability tools (preferably Grafana), including designing and implementing observability pipelines for logs, metrics, and traces, along with setting up dashboards and alerts.
Understanding of authentication and authorization mechanisms in Azure, including Microsoft Entra ID.
Experience with Infrastructure-as-Code (IaC) tools such as Terraform (Ansible, Puppet, ARM templates also valued).
Knowledge of automated CI/CD pipelines (GitHub Actions preferred; Jenkins, Argo CD also relevant).
Familiarity with containerized workloads (EKS, other Kubernetes distributions, Docker, JFrog).
Exposure to serverless solutions (e.g., Logic Apps, Function Apps, Functions, WebJobs).
Experience with logging and monitoring tools (Azure Monitor, Log Analytics, Metrics Explorer, Activity Log).
Preferred Experience and Skills
Enthusiasm for technology, and a broad understanding of cloud solutions.
Willingness to share expertise and advise on best practices.
Experience with budget management and cost control is a plus.
Skills in system integration and troubleshooting.
Experience with performance analysis and optimization.
Knowledge of Kubernetes service meshes (Linkerd preferred; Istio, Traefik Mesh also valued).
Ability to code or script (for example: Linux/Bash/Sh, Windows/PowerShell/Batch, Python, Java).
Familiarity with load balancing and service proxies (Nginx, Traefik, HAProxy, F5).
Experience with tools such as Jira, Confluence, MySQL Workbench, Maven.
Professional certifications for Cloud Developers or Architects (Azure preferred; AWS also beneficial).
Accessibility and Inclusion
We are committed to fostering an inclusive workplace where everyone feels welcome. If you require any accommodations or adjustments during the application or interview process, please let us know. Candidates from all backgrounds are encouraged to apply, including those with non-traditional career paths, gaps in employment, or alternative educational experiences. We value diversity and are dedicated to providing equal opportunities for all. Learn more about our team and culture here.
We are committed to providing a fair and accessible hiring process. If you have a disability or other need that requires accommodation or adjustment, please let us know by completing our Applicant Request Support Form or contact 1-855-833-5120.
Criminals may pose as recruiters asking for money or personal information. We never request money or banking details from job applicants. Learn more about spotting and avoiding scams here.
Please read our Candidate Privacy Policy.
We are an equal opportunity employer: qualified applicants are considered for and treated during employment without regard to race, color, creed, religion, sex, national origin, citizenship status, disability status, protected veteran status, age, marital status, sexual orientation, gender identity, genetic information, or any other characteristic protected by law.
USA Job Seekers: EEO Know Your Rights.
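Because the role above centers on keeping Kubernetes workloads healthy (the pod-health sketch referenced in the responsibilities), here is a minimal Python example using the official Kubernetes client to list pods that are not in the Running or Succeeded phase. It assumes a local kubeconfig and is purely illustrative, not part of the posting.

```python
# Minimal sketch: list pods that are not Running/Succeeded, cluster-wide.
# Assumes a kubeconfig is available locally (e.g., from `az aks get-credentials`).
from kubernetes import client, config

HEALTHY_PHASES = {"Running", "Succeeded"}

def unhealthy_pods() -> list[tuple[str, str, str]]:
    config.load_kube_config()
    core_v1 = client.CoreV1Api()
    problems = []
    for pod in core_v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase or "Unknown"
        if phase not in HEALTHY_PHASES:
            problems.append((pod.metadata.namespace, pod.metadata.name, phase))
    return problems

if __name__ == "__main__":
    for namespace, name, phase in unhealthy_pods():
        print(f"{namespace}/{name}: {phase}")
```

A report like this would typically feed the Grafana dashboards and alerts mentioned in the requirements rather than be read from a terminal.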
Posted 5 days ago
0 years
0 Lacs
Chennai
On-site
Dear Candidate,
Greetings from Genworx.ai
About Us: Genworx.ai is a pioneering startup at the forefront of generative AI innovation, dedicated to transforming how enterprises harness artificial intelligence. We specialize in developing sophisticated AI agents and platforms that bridge the gap between cutting-edge AI technology and practical business applications.
We have an opening for the Full Stack Developer (Generative AI) position at Genworx.ai. Please find below a detailed job description.
Required Skills and Qualifications:
Job Title: Full Stack Developer (Generative AI)
Experience: 5+ yrs
No. of Openings: 5
Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, Artificial Intelligence, or a related field
Work Location: Chennai
Job Type: Full-Time
Website: https://genworx.ai/
Key Responsibilities:
1. Problem Solving and Design: Analyse and address complex challenges in generative AI and application development. Debug and troubleshoot application and AI workflow issues for optimal performance. Propose and implement scalable, efficient solutions tailored to real-world use cases.
2. Software Development and Integration: Coding: Design, develop, test, and maintain front-end and back-end components for generative AI applications. Integration: Implement and manage APIs to integrate AI models into web and mobile platforms (a minimal serving sketch follows this posting). Debugging: Debug and resolve application issues, ensuring smooth operation and integration. Build and maintain CI/CD pipelines for seamless deployment of AI-driven applications. Implement DevOps best practices to ensure efficient and secure operations.
3. Generative AI Development: Collaborate with machine learning teams to integrate pre-trained AI models into scalable applications. Work with state-of-the-art models, including large language models (LLMs), image generators, and more. Integrate pre-trained models, including large language models (LLMs). Optimize AI solutions for performance, scalability, and reliability. Experiment with new generative AI frameworks and technologies to enhance application capabilities.
4. Operational Excellence: Maintain high standards of performance, security, and scalability in design and implementation. Monitor and troubleshoot system performance to ensure reliability. Document technical workflows and ensure team adherence to operational guidelines.
5. Team Collaboration and Mentorship: Work closely with cross-functional teams, including product managers, ML engineers, and designers, to deliver high-quality solutions. Mentor junior engineers, fostering a collaborative and learning-oriented environment. Participate in code reviews, brainstorming sessions, and team discussions.
6. Continuous Learning and Innovation: Stay updated on the latest trends in generative AI, cloud platforms, and full-stack development. Experiment with emerging tools and frameworks to enhance system capabilities.
Technical Skills:
Proficiency in front-end technologies: HTML, CSS, JavaScript (React, Angular, or Vue.js).
Proficiency in back-end technologies: Node.js, Python, or Java.
Strong understanding of databases: SQL (MySQL/PostgreSQL) and NoSQL (MongoDB).
Hands-on experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, CircleCI).
Familiarity with AI frameworks such as TensorFlow, PyTorch, or Hugging Face.
Nice-to-Have Skills:
Experience with cloud platforms such as AWS, Azure, or Google Cloud.
Knowledge of containerization tools (Docker, Kubernetes) and infrastructure as code (Terraform, CloudFormation).
Understanding of transformer models like GPT, BERT, or similar architectures.
Exposure to Agile development practices.
Problem Solver: Ability to address and solve technical challenges effectively.
Team Player: Collaborative and proactive attitude in achieving team goals.
Strong communication skills to explain technical concepts clearly.
Interested candidates, kindly send your updated resume and a link to your portfolio to anandraj@genworx.ai.
Thank you.
Regards,
Anandraj B
Lead Recruiter
Mail ID: anandraj@genworx.ai
Contact: 9656859037
Website: https://genworx.ai/
Job Type: Full-time
Pay: ₹500,000.00 - ₹4,000,000.00 per year
Benefits: Cell phone reimbursement, Food provided, Health insurance, Life insurance, Paid sick time, Provident Fund
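As a small illustration of the "integrate pre-trained AI models via APIs" responsibility mentioned in the posting (the serving sketch referenced above), here is a minimal service that wraps a Hugging Face text-generation pipeline behind a single REST endpoint. Hugging Face appears in the listed AI frameworks; FastAPI, the model name, and the route are assumptions chosen only to keep the sketch small.

```python
# Minimal sketch: expose a pre-trained text-generation model behind a REST endpoint.
# Model name and route are placeholders; run with: uvicorn app:app --reload
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="demo-genai-service")

# Small model chosen only so the sketch runs quickly; swap in any causal LM.
generator = pipeline("text-generation", model="distilgpt2")

class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 40

@app.post("/generate")
def generate(prompt: Prompt) -> dict:
    outputs = generator(
        prompt.text,
        max_new_tokens=prompt.max_new_tokens,
        num_return_sequences=1,
    )
    return {"completion": outputs[0]["generated_text"]}
```

A web or mobile front end would call this endpoint over HTTP, which keeps the model integration behind a stable API as the posting describes.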
Posted 5 days ago
0 years
5 - 9 Lacs
Chennai
On-site
Site Reliability Engineer III
Are you passionate about working with a wide variety of products and technologies in an environment that values collaboration, support, and inclusion? Do you enjoy sharing knowledge and learning alongside colleagues from diverse backgrounds?
About the Business
LexisNexis Risk Solutions supports organizations around the world with innovative tools for risk assessment. Our insurance division delivers advanced technology and analytics that help drive better decision-making throughout the insurance policy lifecycle. We’re committed to creating positive outcomes for our clients by improving efficiency and reducing risk. Learn more about our Insurance Risk Solutions on our website: https://risk.lexisnexis.com/insurance.
About the Team
You will be a vital member of a collaborative and forward-thinking team that works with a diverse range of technologies. We foster an inclusive, supportive environment and invest in team skill-building and cross-training, leveraging each other’s strengths. This position is highly visible across many teams, working closely with groups such as Development, Quality Assurance, IT Operations, and Customer Operations.
About the Role
We are seeking a Site Reliability Engineer (SRE) with experience in Azure and a track record of success in cloud migration project initiatives. The successful candidate will help design and coordinate the implementation of cloud infrastructure, including Kubernetes clusters, databases and storage, serverless functions, CI/CD pipelines, and solutions for monitoring, alerting, and security. In this role, you will work to understand business needs and technical solutions, provide input based on evidence, communicate effectively with people of different technical backgrounds, influence decision-making, implement best-practice solutions, and maintain them, all within a fast-paced environment managing multiple projects in parallel.
Responsibilities
Develop, deploy, and maintain scalable and highly available systems on Kubernetes.
Design and implement automation processes for system deployments and scaling.
Monitor system performance, troubleshoot issues, and drive ongoing improvements.
Collaborate with development teams to enhance infrastructure, including CI/CD pipelines.
Respond to and resolve operational incidents, provide detailed reports, and participate in post-incident reviews.
Manage code deployments, updates, and processes across multiple environments.
Requirements
Hands-on experience with Azure solutions and observability tools (preferably Grafana), including designing and implementing observability pipelines for logs, metrics, and traces, along with setting up dashboards and alerts.
Understanding of authentication and authorization mechanisms in Azure, including Microsoft Entra ID.
Experience with Infrastructure-as-Code (IaC) tools such as Terraform (Ansible, Puppet, ARM templates also valued).
Knowledge of automated CI/CD pipelines (GitHub Actions preferred; Jenkins, Argo CD also relevant).
Familiarity with containerized workloads (EKS, other Kubernetes distributions, Docker, JFrog).
Exposure to serverless solutions (e.g., Logic Apps, Function Apps, Functions, WebJobs).
Experience with logging and monitoring tools (Azure Monitor, Log Analytics, Metrics Explorer, Activity Log).
Preferred Experience and Skills
Enthusiasm for technology, and a broad understanding of cloud solutions.
Willingness to share expertise and advise on best practices.
Experience with budget management and cost control is a plus.
Skills in system integration and troubleshooting.
Experience with performance analysis and optimization.
Knowledge of Kubernetes service meshes (Linkerd preferred; Istio, Traefik Mesh also valued).
Ability to code or script (for example: Linux/Bash/Sh, Windows/PowerShell/Batch, Python, Java).
Familiarity with load balancing and service proxies (Nginx, Traefik, HAProxy, F5).
Experience with tools such as Jira, Confluence, MySQL Workbench, Maven.
Professional certifications for Cloud Developers or Architects (Azure preferred; AWS also beneficial).
Accessibility and Inclusion
We are committed to fostering an inclusive workplace where everyone feels welcome. If you require any accommodations or adjustments during the application or interview process, please let us know. Candidates from all backgrounds are encouraged to apply, including those with non-traditional career paths, gaps in employment, or alternative educational experiences. We value diversity and are dedicated to providing equal opportunities for all. Learn more about our team and culture here.
We are committed to providing a fair and accessible hiring process. If you have a disability or other need that requires accommodation or adjustment, please let us know by completing our Applicant Request Support Form or contact 1-855-833-5120.
Criminals may pose as recruiters asking for money or personal information. We never request money or banking details from job applicants. Learn more about spotting and avoiding scams here.
Please read our Candidate Privacy Policy.
We are an equal opportunity employer: qualified applicants are considered for and treated during employment without regard to race, color, creed, religion, sex, national origin, citizenship status, disability status, protected veteran status, age, marital status, sexual orientation, gender identity, genetic information, or any other characteristic protected by law.
USA Job Seekers: EEO Know Your Rights.
Posted 5 days ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
Accenture
39817 Jobs | Dublin
Wipro
19388 Jobs | Bengaluru
Accenture in India
15458 Jobs | Dublin 2
EY
14907 Jobs | London
Uplers
11185 Jobs | Ahmedabad
Amazon
10459 Jobs | Seattle,WA
IBM
9256 Jobs | Armonk
Oracle
9226 Jobs | Redwood City
Accenture services Pvt Ltd
7971 Jobs |
Capgemini
7704 Jobs | Paris,France