
14240 Orchestration Jobs - Page 8

Set up a job alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

18.0 years

2 - 7 Lacs

Hyderābād

On-site

AI-First. Future-Driven. Human-Centered. At OpenText, AI is at the heart of everything we do: powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.

Your Impact: We are looking for an execution-focused Director of Product Management to lead our strategy and roadmap spanning multi-tenant cloud and off-cloud solutions. The ideal candidate has deep product knowledge in automation technologies, including RPA, workflow orchestration, low-code, and AI-driven process automation.

What the Role Offers:
- Define and evolve the product vision and roadmap for our process automation and low-code solutions to align with business goals, covering a mature on-premises product as well as a SaaS-based solution.
- Identify opportunities to leverage machine learning and low-code/no-code capabilities to enhance automation outcomes.
- Lead the strategy for a team of product managers spanning multiple solutions.
- Drive a high-performing environment that thrives on innovation, working closely with engineering, UX, Sales, and Solutions Consultants to deliver a scalable, highly performant solution.
- Engage with Sales and customers to understand pain points and develop solutions that address them through automation.
- Track industry trends and conduct regular competitive analysis to deliver best-in-class solutions.
- Define and track KPIs such as customer adoption and win/loss analysis to inform priorities and product improvement.

What You Need to Succeed:
- 18+ years in software product management, with at least three years in a leadership role.
- Strong understanding of automation technologies.
- Excellent communication skills, with the ability to present to all levels of management.
- Experience with Agile methodologies.
- Bachelor's degree in Computer Science, Engineering, or Business.
OpenText is an equal opportunity employer that hires and attracts talent regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, marital status, sex, age, veteran status, or sexual orientation. At OpenText we acknowledge, value and respect diversity. We draw on diversity of thought and experience to reflect the rich array of cultures representing our broad global customer base. OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please submit a ticket at Ask HR. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.

Posted 1 day ago

Apply

3.0 years

4 - 10 Lacs

Hyderābād

On-site

About the job: Sanofi is a global pharmaceuticals and biologics company headquartered in Paris, France, and a leader in the research and development, manufacturing, and marketing of pharmaceutical drugs principally in the prescription market. The firm also develops well-known over-the-counter medication. The company covers seven major therapeutic areas: cardiovascular, central nervous system, diabetes, internal medicine, oncology, thrombosis and vaccines. It is the world's largest producer of vaccines.

Sanofi has recently embarked on a vast and ambitious digital transformation program. A first step in this transformation was bringing all IT, Digital and Data functions under a Global Chief Digital Officer reporting to Sanofi's CEO. The new Digital organization is implementing a 3-year strategy that will drive business growth, operating income and cost efficiency from enterprise-wide agile digital transformation. The digital roadmap will facilitate the acceleration of R&D drug discovery, intelligent supply chain, manufacturing digital factory of the future and commercial performance, bringing better drugs and vaccines to patients faster, to improve health and save lives. It is our aspiration to be a leader in biopharmaceuticals, driven by world-class digital technology, to improve people's lives everywhere. We put our colleagues on the highest value work, where they can best build their industry-leading technical and business expertise in digital technology (digital experience, automation, software-defined networks, cloud technologies, integration technologies, network security, digital workplace). We make Sanofi a great place to work with Digital capabilities. We leverage the best and brightest leaders and technical talent to build systems, rearchitect business processes, generate value and drive competitive advantage.
Candidate Profile: The ServiceNow Administrator will create governance standards and processes, validate data accuracy, and develop documentation for multiple modules. The Administrator will work closely with the Architect to take direction and help create an environment of empowerment for the internal team. This position involves frequent interaction and collaboration with a variety of IT and business team members; the Administrator will assist with processes, development, requirements gathering, upgrades, and cloning, and provide any needed guidance, support, and maintenance on the ServiceNow platform. The role(s) will take direction from the platform architect and platform leader.

What you will be doing:
- Configure and enhance core applications including, but not limited to, Service Catalog, Service Portal, Knowledge Base, Platform, and Reporting.
- Understand core modules within ServiceNow, including but not limited to ITSM, ITAM, ITBM, ITOM, HRSD, CSM, and App Engine.
- Conduct Incident and Request Management: resolve business incident and request ServiceNow tickets independently.
- Support implemented and proposed solutions on the ServiceNow platform.
- Load, manipulate, and maintain data between ServiceNow and other systems.
- Participate in deployment of features and any ServiceNow releases.
- Perform code reviews and ensure development standards are met.
- Work closely with business stakeholders to draft requirements and solve business problems.
- Multitask and work across multiple products.
- Identify opportunities to improve overall quality of the platform using health scan, ATF, etc.
- CSA, CAD, or a mainline certification is a plus.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Architecture, or a related field; certification preferred.
- 5+ years of applied experience and certification across an array of critical ServiceNow IT modules (e.g., ITSM, ITOM, ITBM, HRSD, CSM, IRM, SecOps, Vulnerability Response, Service Portal, SAM Pro, Integration Hub, and/or Performance Analytics).
- Extensive experience using Flow Designer and Integration Hub.
- Prior development experience using JavaScript/Perl/PHP on the ServiceNow platform.
- Extensive applied experience in the design and architecture of ServiceNow HR Service modules.
- Experience with functional ServiceNow integrations (e.g., REST APIs, LDAP, Active Directory, JDBC, Orchestration, etc.).
- ServiceNow certification a plus.
- ITIL process familiarity; certification a plus.
- Basic understanding of cloud.
- 4+ years of experience with Agile Scrum/Kanban methodology.

Why choose us? Bring the miracles of science to life alongside a supportive, future-focused team. Discover endless opportunities to grow your talent and drive your career, whether it's through a promotion or lateral move, at home or internationally. Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact. Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs and at least 14 weeks' gender-neutral parental leave. You will have the opportunity to work in an international environment, collaborating with diverse business teams and vendors, working in a dynamic team, and fully empowered to propose and implement innovative ideas.

Pursue Progress. Discover Extraordinary. Progress doesn't happen without people: people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. You can be one of those people, chasing change, embracing new ideas and exploring all the opportunities we have to offer. Let's pursue Progress. And let's discover Extraordinary together. At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity.
Watch our ALL IN video and check out our Diversity Equity and Inclusion actions at sanofi.com!

Posted 1 day ago

Apply

0 years

3 - 18 Lacs

Hyderābād

On-site

Interested candidates may share their resume at hr@globalitfamily.com.

Position: Backend Engineer
Location: Bangalore / Hyderabad / Chennai (Onsite)
Experience: 4 - 6 yrs
Notice Period: Immediate (max 15 days)

Requirements:
- Backend Development: Proficiency in server-side languages such as Java and Kotlin.
- Database Management: Experience with relational databases (e.g., SQL Server, PostgreSQL) and NoSQL databases (e.g., MongoDB).
- API Development: Skilled in designing, developing, and consuming RESTful APIs and microservices.
- Authentication & Security: Understanding of security best practices and authentication mechanisms, including OAuth and JWT.
- DevOps: Experience with CI/CD pipelines, containerization (e.g., Docker), and orchestration tools (e.g., Kubernetes).
- Performance Optimization: Ability to optimize application performance and troubleshoot performance issues.
- Version Control: Proficiency with version control systems, particularly Git.
- Problem-Solving: Strong analytical and problem-solving skills.
- Collaboration: Ability to work effectively with front-end developers, designers, and other team members.
- Education: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience.
- Communication: Good communication skills for discussing project requirements, updates, and technical issues.
- Cloud Experience: Hands-on experience with cloud platforms, preferably Microsoft Azure, is desired, including familiarity with Azure services such as Azure Cosmos DB.

Summary: Looking for a strong Java/backend developer. Kotlin and Python knowledge highly desired.

Job Type: Full-time
Pay: ₹388,989.14 - ₹1,816,394.11 per year
Location Type: In-person
Work Location: In person

Posted 1 day ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Locations: Noida / Gurgaon / Indore / Bangalore / Pune / Hyderabad

Job Description: DevOps Architect with Docker and Kubernetes expertise. Seeking a highly skilled DevOps Architect with deep expertise in Linux, Kubernetes, Docker, and related technologies. The ideal candidate will design, implement, and manage scalable, secure, and automated infrastructure solutions, ensuring the seamless integration of development and operational processes. You will be a key player in the architecture and implementation of CI/CD pipelines, managing infrastructure, container orchestration, and system monitoring.

Key Responsibilities:
- Design and implement DevOps solutions that automate software delivery pipelines and infrastructure provisioning.
- Architect and maintain scalable Kubernetes clusters to manage containerized applications across multiple environments.
- Leverage Docker to build, deploy, and manage containerized applications in development, staging, and production environments.
- Optimize and secure Linux-based environments for application performance, reliability, and security.
- Collaborate with development teams to implement CI/CD pipelines using tools like Jenkins, GitLab CI, CircleCI, or similar.
- Monitor, troubleshoot, and improve system performance, security, and availability through effective monitoring and logging solutions (e.g., Prometheus, Grafana, ELK Stack).
- Automate configuration management and system provisioning tasks in on-premises environments.
- Implement security best practices and compliance measures, including secrets management, network segmentation, and vulnerability scanning.
- Mentor and guide junior DevOps engineers and promote best practices in DevOps, automation, and cloud-native architecture.
- Stay up to date with industry trends and evolving DevOps tools and technologies to continuously improve systems and processes.

Required Skills and Experience:
- 10+ years of experience in IT infrastructure, DevOps, or systems engineering.
- Strong experience with Linux systems administration (Red Hat, Ubuntu, CentOS).
- 3+ years of hands-on experience with Kubernetes in production environments, including managing and scaling clusters.
- Extensive knowledge of Docker for building, deploying, and managing containers.
- Proficiency with CI/CD tools such as Jenkins, GitLab CI, Bamboo, or similar.
- Familiarity with monitoring and logging solutions (Prometheus, Grafana, ELK Stack, etc.).
- Strong understanding of networking, security best practices, and cloud-based security solutions.
- Hands-on experience with scripting and automation tools like Bash and Python.
- Excellent troubleshooting, problem-solving, and analytical skills.
- Experience with Git or other version control systems.

Good-to-have Skills:
- Experience with service mesh technologies (e.g., Istio, Linkerd) and API gateways.
- Familiarity with container security tools such as Aqua Security, Twistlock, or similar.
- Familiarity with Kafka, RabbitMQ, and SOLR.
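Day-to-day Kubernetes troubleshooting of the kind this role describes often means inspecting cluster state programmatically. A minimal sketch, assuming the JSON shape returned by `kubectl get pods -o json` (in practice you would usually use the official Kubernetes Python client; the sample data below is invented):

```python
import json

def not_ready_pods(pod_list: dict) -> list[str]:
    """Return names of pods with no container status or any container not Ready."""
    failing = []
    for pod in pod_list.get("items", []):
        statuses = pod.get("status", {}).get("containerStatuses", [])
        if not statuses or any(not s.get("ready", False) for s in statuses):
            failing.append(pod["metadata"]["name"])
    return failing

# Example input mimicking `kubectl get pods -o json` output (made up):
sample = {
    "items": [
        {"metadata": {"name": "web-1"},
         "status": {"containerStatuses": [{"ready": True}]}},
        {"metadata": {"name": "web-2"},
         "status": {"containerStatuses": [{"ready": False}]}},
    ]
}
```

A helper like this is the kind of building block that feeds an alerting or auto-remediation pipeline instead of eyeballing `kubectl` output.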

Posted 1 day ago

Apply

7.0 years

6 - 9 Lacs

Thiruvananthapuram

On-site

7 - 9 Years | 2 Openings | Trivandrum

Role description: Senior Data Engineer – Azure/Snowflake Migration

Key Responsibilities:
- Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services.
- Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets.
- Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including:
  - Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches.
  - Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines.
  - Migrating Redshift workloads to Snowflake with schema conversion and performance optimization.
  - Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage.
  - Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe.
- Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale.
- Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing.
- Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation.
- Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies.
- Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching.
- Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning.
- Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.

Required Qualifications:
- 7+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise.
- Proficiency in:
  - Python for scripting and ETL orchestration
  - SQL for complex data transformation and performance tuning in Snowflake
  - Azure Data Factory and Synapse Analytics (SQL Pools)
- Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK.
- Strong understanding of cloud architecture and hybrid data environments across AWS and Azure.
- Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS.
- Familiarity with Azure Event Hubs, Logic Apps, and Key Vault.
- Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.

Preferred Qualifications:
- Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing.
- Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake.
- Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments.
- Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.

Skills: AWS, Azure Data Lake, Python

About UST: UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations.
With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
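The schema-conversion work described above (Redshift to Snowflake) is at its core a type-translation exercise. A toy sketch of such a helper; the mapping table below is a simplified illustration of commonly cited equivalences, not a complete or authoritative conversion reference:

```python
# Simplified Redshift -> Snowflake column type mapping (illustrative subset only)
TYPE_MAP = {
    "SMALLINT": "NUMBER(5,0)",
    "INTEGER": "NUMBER(10,0)",
    "BIGINT": "NUMBER(19,0)",
    "DOUBLE PRECISION": "FLOAT",
    "VARCHAR": "VARCHAR",        # kept as-is; length suffix is preserved below
    "TIMESTAMP": "TIMESTAMP_NTZ",
}

def translate_column(name: str, redshift_type: str) -> str:
    """Render one column of Snowflake DDL from a Redshift column type."""
    base = redshift_type.upper().split("(")[0].strip()
    target = TYPE_MAP.get(base, redshift_type.upper())  # pass unknown types through
    # preserve a length/precision suffix such as VARCHAR(256)
    if "(" in redshift_type and "(" not in target:
        target += "(" + redshift_type.split("(", 1)[1]
    return f"{name} {target}"
```

A real migration would layer on constraint handling, encoding/compression differences, and data validation, but the shape of the tooling is the same.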

Posted 1 day ago

Apply

6.0 years

3 - 6 Lacs

Cochin

On-site

Minimum Required Experience: 6 years | Full Time

Skills: SQL, Microservices, Java, Kubernetes, Linux, Spring Boot, Docker

Description: Job Description – SSE Java
Experience Range & Quantity: 6 - 10 YOE
Location Requirement: Bangalore – Whitefield / Kochi (Hybrid)
Fulfil by date: ASAP

Responsibilities: Provide technology leadership in:
- Working in an agile development environment
- Translating business requirements into low-level application design
- Application code development through a collaborative approach
- Performing full-scale unit testing
- Applying test-driven and behavior-driven development (TDD/BDD) QA concepts
- Applying continuous integration and continuous deployment (CI/CD) concepts

Mandatory Soft Skills:
- Should be able to contribute as an individual contributor
- Should be able to execute his/her responsibilities independently
- Focus on self-planning activities

Mandatory Skills: Practical knowledge of the following tools & technologies:
- Java, Spring Boot, microservices
- Git
- Container orchestration (Kubernetes, Docker)
- Basic knowledge of Linux & SQL

Nice-to-have Skills: BDD

Mandatory Experience: Design, implementation, and optimization of the following: Golang-stack-based, microservices-oriented application development and deployment using container orchestration in a cloud environment; understanding of CI/CD pipelines and the related system development environment.
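The TDD workflow called out above (write a failing test first, then the minimal code that makes it pass) can be sketched in a few lines. The `parse_version` function and its spec are invented for illustration, and the sketch uses Python's unittest purely as an example even though the posting itself is Java/Spring Boot-centric:

```python
import unittest

def parse_version(tag: str) -> tuple[int, int, int]:
    """Parse a 'v1.2.3'-style tag into (major, minor, patch)."""
    if not tag.startswith("v"):
        raise ValueError(f"not a version tag: {tag!r}")
    major, minor, patch = (int(p) for p in tag[1:].split("."))
    return major, minor, patch

class ParseVersionTest(unittest.TestCase):
    # In TDD these tests are written first and fail until parse_version exists.
    def test_parses_well_formed_tag(self):
        self.assertEqual(parse_version("v1.2.3"), (1, 2, 3))

    def test_rejects_missing_prefix(self):
        with self.assertRaises(ValueError):
            parse_version("1.2.3")
```

BDD tools such as Cucumber follow the same idea but phrase the specification in business-readable Given/When/Then steps instead of assertion methods.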

Posted 1 day ago

Apply

9.0 years

5 - 10 Lacs

Thiruvananthapuram

On-site

9 - 12 Years | 1 Opening | Trivandrum

Role description: Tech Lead – Azure/Snowflake & AWS Migration

Key Responsibilities:
- Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services.
- Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets.
- Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including:
  - Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches.
  - Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines.
  - Migrating Redshift workloads to Snowflake with schema conversion and performance optimization.
  - Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage.
  - Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe.
- Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale.
- Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing.
- Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation.
- Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies.
- Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching.
- Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning.
- Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.

Required Qualifications:
- 9+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise.
- Proficiency in:
  - Python for scripting and ETL orchestration
  - SQL for complex data transformation and performance tuning in Snowflake
  - Azure Data Factory and Synapse Analytics (SQL Pools)
- Experience in migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK.
- Strong understanding of cloud architecture and hybrid data environments across AWS and Azure.
- Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS.
- Familiarity with Azure Event Hubs, Logic Apps, and Key Vault.
- Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.

Preferred Qualifications:
- Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing.
- Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake.
- Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments.
- Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.

Skills: Azure, AWS Redshift, Athena, Azure Data Lake

About UST: UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations.
With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.

Posted 1 day ago

Apply

5.0 - 7.0 years

0 Lacs

Thiruvananthapuram

Remote

5 - 7 Years | 1 Opening | Trivandrum

Role description
Role Proficiency: Resolve enterprise trouble tickets within agreed SLA and raise problem tickets for permanent resolution, and/or provide mentorship (hierarchical or lateral) to junior associates.

Outcomes:
1) Update SOPs with updated troubleshooting instructions and process changes
2) Mentor new team members in understanding customer infrastructure and processes
3) Perform analysis for driving incident reduction
4) Escalate high-priority incidents to customer and organization stakeholders for quicker resolution
5) Contribute to planning and successful migration of platforms
6) Resolve enterprise trouble tickets within agreed SLA and raise problem tickets for permanent resolution
7) Provide inputs for root cause analysis after major incidents to define preventive and corrective actions

Measures of Outcomes:
1) SLA adherence
2) Time-bound resolution of elevated tickets (OLA)
3) Manage ticket backlog timelines (OLA)
4) Adherence to defined process (number of NCs in internal/external audits)
5) Number of KB articles created
6) Number of incidents and change tickets handled
7) Number of elevated tickets resolved
8) Number of successful change tickets
9) % completion of all mandatory training requirements

Outputs Expected:
- Resolution: Understand priority and severity based on ITIL practice; resolve trouble tickets within the agreed resolution SLA. Execute change control tickets as documented in the implementation plan.
- Troubleshooting: Troubleshoot based on available information from previous tickets or by consulting with seniors. Participate in online knowledge forums for reference. Convert new steps into KB articles. Perform logical/analytical troubleshooting.
- Escalation/Elevation: Escalate within the organization/customer peers in case of resolution delay. Understand OLAs between delivery layers (L1, L2, L3, etc.) and adhere to them.
- Elevation: Elevate to the next level; work on elevated tickets from L1.
- Tickets Backlog/Resolution: Follow up on tickets based on agreed timelines; manage ticket backlogs/last activity as per the defined process. Resolve incidents and SRs within agreed timelines. Execute change tickets for infrastructure.
- Installation: Install and configure tools, software, and patches.
- Runbook/KB: Update the KB with new findings; document and record troubleshooting steps as a knowledge base.
- Collaboration: Collaborate with different towers of delivery for ticket resolution within SLA; resolve L1 tickets with help from the respective tower. Collaborate with other team members for timely resolution of tickets. Actively participate in team/organization-wide initiatives. Coordinate with UST ISMS teams for resolving connectivity-related issues.
- Stakeholder Management: Lead customer calls and vendor calls. Organize meetings with different stakeholders. Take ownership of the function's internal communications and related change management.
- Strategic: Define the strategy on data management, policy management, and data retention management. Support definition of the IT strategy for the function's relevant scope and be accountable for ensuring the strategy is tracked, benchmarked, and updated for the area owned.
- Process Adherence: Thorough understanding of organization- and customer-defined processes. Suggest process improvements and CSI ideas. Adhere to the organization's policies and business conduct.
- Process/Efficiency Improvement: Proactively identify opportunities to increase service levels and mitigate any issues in service delivery within the function or across functions. Take accountability for overall productivity efforts within the function, including coordination of function-specific tasks and close collaboration with Finance.
- Process Implementation: Coordinate and monitor IT process implementation within the function.
- Compliance: Support information governance activities and audit preparations within the function.
Act as a function SPOC for IT audits in local sites (incl. preparation interface to local organization mitigation of findings etc.) and work closely with ISRM (Information Security Risk Management). Coordinate overall objective setting preparation and facilitate process in order to achieve consistent objective setting in function Job Description. Coordination Support for CSI across all services in CIS and beyond. Training: On time completion of all mandatory training requirements of organization and customer. Provide On floor training and one to one mentorship for new joiners. Complete certification of respective career paths. Performance Management: Update FAST Goals in NorthStar track report and seek continues feedback from peers and manager. Set goals for team members and mentees and provide feedback Assist new team members to understand the customer environment Skill Examples: 1) Good communication skills (Written verbal and email etiquette) to interact with different teams and customers. 2) Modify / Create runbooks based on suggested changes from juniors or newly identified steps3) Ability to work on an elevated server ticket and solve4) Networking:a. Trouble shooting skills in static and Dynamic routing protocolsb. Should be capable of running netflow analyzers in different product lines5) Server:a. Skills in installing and configuring active directory DNS DHCP DFS IIS patch managementb. Excellent troubleshooting skills in various technologies like AD replication DNS issues etc.c. Skills in managing high availability solutions like failover clustering Vmware clustering etc.6) Storage and Back up:a. Ability to give recommendations to customers. Perform Storage & backup enhancements. Perform change management.b. Skilled in in core fabric technology Storage design and implementation. Hands on experience on backup and storage Command Line Interfacesc. 
Perform hardware upgrades, firmware upgrades, vulnerability remediation, storage and backup commissioning and de-commissioning, replication setup and management.
d. Skilled in server, network and virtualization technologies; integration of virtualization, storage and backup technologies.
e. Review technical and architecture diagrams and modify SOPs and documentation based on business requirements.
f. Ability to perform ITSM functions for the storage & backup team and review the quality of the ITSM process followed by the team.
7) Cloud: a. Skilled in any one of the cloud technologies - AWS, Azure, GCP.
8) Tools: a. Skilled in administration and configuration of monitoring tools like CA UIM, SCOM, SolarWinds, Nagios, ServiceNow etc. b. Skilled in SQL scripting. c. Skilled in building custom reports on availability and performance of IT infrastructure based on customer requirements.
9) Monitoring: a. Skills in monitoring of infrastructure and application components.
10) Database: a. Data modeling and database design; database schema creation and management. b. Identify data integrity violations so that only accurate and appropriate data is entered and maintained. c. Backup and recovery. d. Web-specific technology expertise for e-Biz, Cloud etc.; examples include XML, CGI, Java, Ruby, firewalls, SSL and so on. e. Migrating database instances to new hardware and new software versions, from on-premises to cloud-based databases and vice versa.
11) Quality Analysis: a. Ability to drive service excellence and continuous improvement within the framework defined by IT Operations.
Knowledge Examples:
1) Good understanding of customer infrastructure and related CIs.
2) ITIL Foundation certification.
3) Thorough hardware knowledge.
4) Basic understanding of capacity planning.
5) Basic understanding of storage and backup.
6) Networking: a. Hands-on experience with routers, switches and firewalls. b. Should have minimum knowledge and hands-on experience with BGP. c. Good understanding of load balancers and WAN optimizers. d. Advanced backup and restore knowledge in backup tools.
7) Server: a. Basic to intermediate PowerShell/Bash/Python scripting knowledge and demonstrated experience with script-based tasks. b. Knowledge of AD group policy management, group policy tools and troubleshooting GPOs. c. Basic AD object creation, DNS concepts, DHCP, DFS. d. Knowledge of tools like SCCM and SCOM administration.
8) Storage and Backup: a. Subject matter expert in any Storage & Backup technology.
9) Tools: a. Proficient in understanding and troubleshooting the Windows and Linux families of operating systems.
10) Monitoring: a. Strong knowledge of ITIL processes and functions.
11) Database: a. Knowledge of general database management. b. Knowledge of OS, system and networking skills.
Additional Comments: Role - Cloud Engineer
Primary Responsibilities
• Engineer and support a portfolio of tools including: o HashiCorp Vault (HCP Dedicated), Terraform (HCP), Cloud Platform o GitHub Enterprise Cloud (Actions, Advanced Security, Copilot) o Ansible Automation Platform, Env0, Docker Desktop o Elastic Cloud, Cloudflare, Datadog, PagerDuty, SendGrid, Teleport
• Manage infrastructure using Terraform, Ansible, and scripting languages such as Python and PowerShell
• Enable security controls including dynamic secrets management, secrets scanning workflows, and cloud access quotas
• Design and implement automation for self-service adoption, access provisioning, and compliance monitoring
• Respond to user support requests via ServiceNow and continuously improve platform support documentation and onboarding workflows
• Participate in Agile sprints, sprint planning, and cross-team technical initiatives
• Contribute to evaluation and onboarding of new tools (e.g., remote developer access, artifact storage)
Key Projects You May Lead or Support
• GitHub secrets scanning and remediation with integration to HashiCorp Vault
• Lifecycle management of developer access across tools like GitHub and Teleport
• Upgrades to container orchestration environments and automation platforms (EKS, AKS)
Technical Skills and Experience
• Proficiency with Terraform (IaC) and Ansible
• Strong scripting experience in Python, PowerShell, or Bash
• Experience operating in cloud environments (AWS, Azure, or GCP)
• Familiarity with secure development practices and DevSecOps tooling
• Exposure to or experience with: o CI/CD automation (GitHub Actions) o Monitoring and incident management platforms (Datadog, PagerDuty) o Identity providers (AzureAD, Okta) o Containers and orchestration (Docker, Kubernetes) o Secrets management and vaulting platforms
Soft Skills and Attributes
• Strong cross-functional communication skills with technical and non-technical stakeholders
• Ability to work independently while knowing when to escalate or align with other engineers or teams
• Comfort managing complexity and ambiguity in a fast-paced environment
• Ability to balance short-term support needs with longer-term infrastructure automation and optimization
• Proactive, service-oriented mindset focused on enabling secure and scalable development
• Detail-oriented, structured approach to problem-solving with an emphasis on reliability and repeatability
Skills: Terraform, Ansible, Python, PowerShell or Bash, AWS, Azure or GCP, CI/CD automation
About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
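One of the key projects above is GitHub secrets scanning with remediation into Vault. The detection step can be sketched in a few lines of pure Python; the regex patterns below are simplified illustrations, not the provider-maintained pattern sets a real scanner such as GitHub Advanced Security ships:

```python
import re

# Illustrative patterns only; production scanners use large,
# provider-maintained pattern sets with far better precision.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_assignment": re.compile(
        r"(?i)\b(password|secret|token)\s*=\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) for each suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

In the remediation workflow described above, each finding would then be rotated and moved into Vault rather than left in the repository.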

Posted 1 day ago

Apply

0 years

4 - 7 Lacs

Gurgaon

On-site

Job Purpose The UI Automation Engineer will be responsible for front-office application testing, leveraging tools such as Playwright, Node.js, and related frameworks. This role involves close collaboration with the QA team to automate test cases transitioned from manual testing. The engineer will focus on developing and executing test scripts, with a particular emphasis on Fixed Income trading workflows. Desired Skills and experience Strong hands-on experience with Playwright or similar modern web automation tools, with a proven ability to design and implement robust UI test automation for complex web applications. Proficiency in Node.js, with working knowledge of Cucumber for behavior-driven development and Jenkins for continuous integration and test execution. Experience in building and maintaining UI automation frameworks, including reusable components, test data management, and reporting mechanisms. Familiarity with test case management tools such as JIRA and XRay, including test planning, execution tracking, and defect lifecycle management. Clear and effective communication skills, both written and verbal, with the ability to collaborate across teams and articulate technical concepts to non-technical stakeholders. Self-driven and proactive, capable of working independently with minimal supervision while aligning with broader team objectives and timelines. Nice to have: Exposure to Eggplant automation tool, with an understanding of its scripting and testing capabilities. Experience working in Agile, sprint-based delivery teams, with a strong grasp of iterative development, sprint planning, and backlog grooming. Understanding of test orchestration and regression planning, including test suite optimization, scheduling, and integration into CI/CD pipelines for scalable test execution. 
Key Responsibilities Automate UI test cases based on requirements defined by the manual QA team Integrate with test case management and reporting tools Contribute to improving the automation framework as per architectural guidance Deliver consistent scripts in alignment with sprint goals Establish and implement comprehensive QA strategies and test plans from scratch. Develop and execute test cases with a focus on Fixed Income trading workflows. Collaborate with development, business analysts, and project managers to ensure quality throughout the SDLC. Provide clear and concise reporting on QA progress and metrics to management. Bring strong subject matter expertise in the Financial Services Industry, particularly fixed income trading products and workflows. Ensure effective, efficient, and continuous communication (written and verbal) with global stakeholders Independently troubleshoot difficult and complex issues on different environments Responsible for end-to-end delivery of projects, coordination between client and internal offshore teams and managing client queries Demonstrate high attention to detail, work in a dynamic environment whilst maintaining high quality standards, and show a natural aptitude for developing good internal working relationships and a flexible work ethic Responsible for quality checks and adhering to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT)
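The "reusable components" called out in the framework responsibilities above usually take the shape of the page-object pattern. Below is a minimal, framework-agnostic Python sketch: `LoginPage` hides hypothetical selectors behind intent-level methods, and `FakePage` stands in for the small slice of a real browser-page API (such as Playwright's `page.fill`/`page.click`) that the object uses. Nothing here is taken from an actual application under test.

```python
class LoginPage:
    """Page object wrapping the locators and actions of a login screen.

    `page` must expose fill/click/text_content, the subset of a
    Playwright-style Page API used here. Selectors are hypothetical.
    """
    USER = "#username"
    PASS = "#password"
    SUBMIT = "button[type=submit]"
    BANNER = ".welcome-banner"

    def __init__(self, page):
        self.page = page

    def login(self, user: str, password: str) -> None:
        self.page.fill(self.USER, user)
        self.page.fill(self.PASS, password)
        self.page.click(self.SUBMIT)

    def welcome_text(self) -> str:
        return self.page.text_content(self.BANNER)


class FakePage:
    """Minimal stand-in for a real browser page, for demonstration only."""
    def __init__(self):
        self.values = {}

    def fill(self, selector, value):
        self.values[selector] = value

    def click(self, selector):
        # Pretend a successful login renders a welcome banner.
        if self.values.get("#username") and self.values.get("#password"):
            self.values[".welcome-banner"] = f"Welcome, {self.values['#username']}!"

    def text_content(self, selector):
        return self.values.get(selector, "")
```

Because test scripts talk only to `LoginPage`, a selector change touches one class instead of every test, which is what keeps a growing suite maintainable.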

Posted 1 day ago

Apply

4.0 years

0 Lacs

Gurgaon

On-site

Global Position Description Title: Test Automation (Test Suite) Engineer - UiPath Hierarchical Level: Professional Division/Department: IT Reports to: Supervisor Automation Delivery Number of Direct Reports: 0 Travel: 0% Revision Date: 7/4/2025 Job Level: HR To Determine FLSA (US Only): HR To Determine Type of Position: Salary Summary: Ready to join us in transforming testing through intelligent automation? Apply now or reach out for a detailed discussion about this opportunity! We’re seeking a hands-on Test Engineer/Developer to drive automation efforts using the UiPath Test Suite. You’ll focus on designing and executing automated tests for web-based applications and Microsoft Dynamics 365 modules. Your work will integrate test automation into CI/CD pipelines, ensuring high-quality releases and rapid feedback loops. Responsibilities: Test Automation Environment Setup: Define and configure test automation environments according to project requirements. Install and configure UiPath Test Suite, Studio, and Orchestrator for test automation purposes. Integrate external systems like SAP and manage data access for test execution. Integrate Azure DevOps (ADO) with Test Manager. Test Automation Design & Development: Analyze and comprehend detailed test cases for various functionalities. Design and develop reusable, structured automated test scripts using UiPath Test Suite. Integrate assertions and verification points to ensure the validation of application behaviors. Implement robust exception handling mechanisms within test scripts. Conduct unit testing and dry runs to ensure script reliability before execution. Test Execution & Reporting: Execute automated test scripts and generate detailed reports summarizing results. Develop and maintain automated scripts for change requests, optimizing regression testing. Identify areas for enhancing test execution performance and provide recommendations.
Collaboration & Maintenance: Collaborate with development teams to address test case feedback, technical issues, and improvements. Maintain and update test scripts to accommodate application changes and new functionalities. Identify and implement technical solutions to enhance test automation efficiency and accuracy. Version Control & Integration: Maintain version control for all test scripts using Azure Repos. Integrate test automation scripts with CI/CD pipelines and implement API automation where necessary. Preferred Qualifications: UiPath certifications (e.g., UiPath Test Automation Certified Professional) Experience with performance or load testing tools Knowledge of RPA orchestration and advanced automation patterns Exposure to Agile/Scrum methodologies and DevOps practices 4+ years’ experience in testing with proven hands-on experience with UiPath Test Suite in enterprise environments Solid background in web-based test development (HTML, JavaScript, REST APIs) Experience testing Microsoft Dynamics or Dynamics 365 applications Familiarity with Azure DevOps (ADO) and connecting it to UiPath Test Suite Practical use of UiPath Autopilot for intelligent test automation Strong scripting, debugging, and problem-solving skills Excellent communication and documentation abilities Work Experience Requirements Number of Overall Years Necessary: 2-5 Minimum of 4 years of UiPath Test Automation experience with bachelor’s degree Certification and Training: Microsoft Certified: Dynamics 365 Fundamentals or Functional Consultant Associate ISTQB (International Software Testing Qualifications Board) certification Azure DevOps Engineer or related Azure certifications Advanced training or badges from UiPath Academy Specialized Skills/Technical Knowledge/ Soft Skills & Team Attributes Experience with custom connector development or workflow automation in Microsoft Power Platform Knowledge of testing frameworks like Selenium, Cypress, or Postman (for API
testing) Familiarity with source control systems such as Git or GitHub, especially when used alongside UiPath Understanding of test data management and virtualization strategies Background in setting up or maintaining test environments and virtual machines Strong stakeholder communication and ability to collaborate with cross-functional teams Analytical mindset with a knack for troubleshooting edge-case issues Agile thinking and willingness to iterate in fast-paced sprints Exposure to product lifecycle management, especially in enterprise SaaS or ERP environments Location - Gurugram Mode - Hybrid
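The "detailed reports summarizing results" responsibility above comes down to aggregating raw test outcomes into the figures a run report shows. A small, tool-agnostic Python sketch (the `TestResult` shape is hypothetical, not a UiPath or Test Manager API):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class TestResult:
    name: str
    status: str       # "passed" | "failed" | "skipped"
    duration_s: float

def summarize(results: list[TestResult]) -> dict:
    """Aggregate raw results into headline report figures."""
    counts = Counter(r.status for r in results)
    total = len(results)
    executed = total - counts.get("skipped", 0)
    passed = counts.get("passed", 0)
    return {
        "total": total,
        "passed": passed,
        "failed": counts.get("failed", 0),
        "skipped": counts.get("skipped", 0),
        # Pass rate over executed tests only; skips are excluded.
        "pass_rate": round(100.0 * passed / executed, 1) if executed else 0.0,
        "duration_s": round(sum(r.duration_s for r in results), 2),
        "failures": [r.name for r in results if r.status == "failed"],
    }
```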

Posted 1 day ago

Apply

2.0 years

4 - 10 Lacs

Gurgaon

On-site

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success. Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated and know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us. Are you a technologist who is passionate about building robust, scalable, and performant applications & data products? This is exactly what we do, join Data Engineering & Tooling Team! Data Engineering & Tooling Team (part of Enterprise Data Products at Expedia) is responsible for making traveler, partner & supply data accessible, unlocking insights and value! Our Mission is to build and manage the travel industry's premier Data Products and SDKs. Software Development Engineer II Introduction to team Our team is looking for a Software Engineer who applies engineering principles to build & improve existing systems. We follow Agile principles, and we're proud to offer a dynamic, diverse and collaborative environment where you can play an impactful role and build your career. Would you like to be part of a Global Tech company that does Travel? Don't wait, Apply Now! In this role, you will - Implement products and solutions that are highly scalable with high-quality, clean, maintainable, optimized, modular and well-documented code across the technology stack.
Crafting APIs, developing and testing applications and services to ensure they meet design requirements. Work collaboratively with all members of the technical staff and other partners to build and ship outstanding software in a fast-paced environment. Applying knowledge of software design principles and Agile methodologies & tools. Resolve problems and roadblocks as they occur with help from peers or managers. Follow through on details and drive issues to closure. Assist with supporting production systems (investigate issues and work towards resolution). Experience and qualifications: Bachelor's degree or Masters in Computer Science & Engineering, or a related technical field; or equivalent related professional experience. 2+ years of software development or data engineering experience in an enterprise-level engineering environment. Proficient with Object Oriented Programming concepts with a strong understanding of Data Structures, Algorithms, Data Engineering (at scale), and Computer Science fundamentals. Experience with Java, Scala, Spring framework, Micro-service architecture, Orchestration of containerized applications along with a good grasp of OO design with strong design patterns knowledge. Solid understanding of different API types (e.g. REST, GraphQL, gRPC), access patterns and integration. Prior knowledge & experience of NoSQL databases (e.g. ElasticSearch, ScyllaDB, MongoDB). Prior knowledge & experience of big data platforms, batch processing (e.g. Spark, Hive), stream processing (e.g. Kafka, Flink) and cloud-computing platforms such as Amazon Web Services. Knowledge & Understanding of monitoring tools, testing (performance, functional), application debugging & tuning. Good communication skills in written and verbal form with the ability to present information in a clear and concise manner.
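As a taste of the stream-processing concepts in the qualifications above (Kafka, Flink), here is a pure-Python sketch of tumbling-window aggregation; real engines additionally handle out-of-order events, watermarks and checkpointed state:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_s: int):
    """Group (timestamp_s, key) events into fixed-size windows, count per key.

    A toy model of the tumbling-window aggregation that stream processors
    such as Flink or Kafka Streams perform over event streams.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Every event lands in exactly one non-overlapping window.
        window_start = (ts // window_s) * window_s
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}
```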
Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Posted 1 day ago

Apply

0 years

0 Lacs

India

Remote

Company Description At Trigonal AI, we specialize in building and managing end-to-end data ecosystems that empower businesses to make data-driven decisions with confidence. From data ingestion to advanced analytics, we offer the expertise and technology to transform data into actionable insights. Our core services include data pipeline orchestration, real-time analytics, and business intelligence & visualization. We use modern technologies such as Apache Airflow, Kubernetes, Apache Druid, Kafka, and leading BI tools to create reliable and scalable solutions. Let us help you unlock the full potential of your data. Role Description This is a full-time remote role for a Business Development Specialist. The specialist will focus on day-to-day tasks including lead generation, market research, customer service, and communication with potential clients. The role also includes analytical tasks and collaborating with the sales and marketing teams to develop and implement growth strategies. Qualifications Strong Analytical Skills for data-driven decision-making Effective Communication skills for engaging with clients and team members Experience in Lead Generation and Market Research Proficiency in Customer Service to maintain client relationships Proactive and independent work style Experience in the tech or data industry is a plus Bachelor's degree in Business, Marketing, or related field

Posted 1 day ago

Apply

9.0 - 13.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY-Consulting – AI Enabled Automation - GenAI/Agentic – Manager We are looking to hire people with strong AI Enabled Automation skills who are interested in applying AI in the process automation space – Azure, AI, ML, Deep Learning, NLP, GenAI, Large Language Models (LLMs), RAG, Vector DB, Graph DB, Python. Responsibilities: Development and implementation of AI enabled automation solutions, ensuring alignment with business objectives. Design and deploy Proof of Concepts (POCs) and Points of View (POVs) across various industry verticals, demonstrating the potential of AI enabled automation applications. Ensure seamless integration of optimized solutions into the overall product or system Collaborate with cross-functional teams to understand requirements, to integrate solutions into cloud environments (Azure, GCP, AWS, etc.) and ensure it aligns with business goals and user needs Educate team on best practices and keep updated on the latest tech advancements to bring innovative solutions to the project Technical Skills Requirements 9 to 13 years of relevant professional experience Proficiency in Python and frameworks like PyTorch, TensorFlow, Hugging Face Transformers.
Strong foundation in ML algorithms, feature engineering, and model evaluation. (Must) Strong foundation in Deep Learning, Neural Networks, RNNs, CNNs, LSTMs, Transformers (BERT, GPT), and NLP. (Must) Experience in GenAI technologies — LLMs (GPT, Claude, LLaMA), prompting, fine-tuning. Experience with LangChain, LlamaIndex, LangGraph, AutoGen, or CrewAI (agentic frameworks). Knowledge of retrieval augmented generation (RAG) Knowledge of Knowledge Graph RAG Experience with multi-agent orchestration, memory, and tool integrations Experience implementing MLOps practices and tools (CI/CD for ML, containerization, orchestration, model versioning and reproducibility). (Good to have) Experience with cloud platforms (AWS, Azure, GCP) for scalable ML model deployment. Good understanding of data pipelines, APIs, and distributed systems. Build observability into AI systems — latency, drift, performance metrics. Strong written and verbal communication, presentation, client service and technical writing skills in English for both technical and business audiences. Strong analytical, problem solving and critical thinking skills. Ability to work under tight timelines for multiple project deliveries. What we offer: At EY GDS, we support you in achieving your unique potential both personally and professionally. We give you stretching and rewarding experiences that keep you motivated, working in an atmosphere of integrity and teaming with some of the world's most successful companies. And while we encourage you to take personal responsibility for your career, we support you in your professional development in every way we can. You enjoy the flexibility to devote time to what matters to you, in your business and personal lives. At EY you can be who you are and express your point of view, energy and enthusiasm, wherever you are in the world. It's how you make a difference.
EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
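The retrieval step of the RAG pattern named in the skills list above can be reduced to nearest-neighbour search over embeddings. A toy, pure-Python sketch follows; the hand-made 3-dimensional vectors and document IDs stand in for real embedding-model output, and a production system would use a vector database rather than a linear scan:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, corpus, top_k=2):
    """corpus: list of (doc_id, embedding). Return top_k doc_ids by similarity."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]
```

The retrieved passages are then placed into the LLM prompt, which is what grounds the generation step in RAG.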

Posted 1 day ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Reference # 323129BR Job Type Full Time Your role The individual in this role will be accountable for successful and timely delivery of projects in an agile environment where digital products are designed and built using cutting-edge technology for WMA clients and Advisors. It is a DevOps role that entails working with teams located in Budapest – Hungary, Wroclaw - Poland, Pune - India and New Jersey, US. This role will include, but not be limited to, the following: maintain and build CI/CD pipelines migrate applications to cloud environment build scripts and dashboards for monitoring health of application build tools to reduce occurrence of errors and improve customer experience deployment of changes in prod and non-prod environments follow release management processes for application releases maintain stability of non-prod environments work with development, QA and support groups in troubleshooting environment issues Your team You'll be working as an engineering leader in the Client Data and Onboarding Team in India. We are responsible for WMA (Wealth Management Americas) client facing technology applications. This leadership role entails working with teams in US and India. You will play an important role of ensuring scalable development methodology is followed across multiple teams and participate in strategy discussions with business, and technology strategy discussions with architects. Our culture centers around innovation, partnership, transparency, and passion for the future. Diversity helps us grow, together. That’s why we are committed to fostering and advancing diversity, equity, and inclusion. It strengthens our business and brings value to our clients.
Your expertise You should have 8+ years of experience and be able to: develop, build and maintain GitLab CI/CD pipelines; use containerization technologies, orchestration tools (Kubernetes), build tools (Maven, Gradle), VCS (GitLab), and Sonar and Fortify tools to build robust deploy and release infrastructure; deploy changes to prod and non-prod Azure cloud infrastructure using Helm, Terraform and Ansible, and set up appropriate observability measures; build scripts (Bash, Python, Puppet) and dashboards for monitoring the health of applications (AppDynamics, Splunk, AppInsights); possess basic networking knowledge (load balancing, SSH, certificates) and middleware knowledge (MQ, Kafka, Azure Service Bus, Event Hub); follow release management processes for application releases; maintain stability of non-prod environments; work with development, QA and support groups in troubleshooting environment issues. About Us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How We Hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves.
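Health-monitoring scripts of the kind described above often need a poll-with-backoff loop while a deployment settles. A small Python sketch; the injectable `probe` and `sleep` callables are illustrative stand-ins, not any specific tooling:

```python
import time

def wait_until_healthy(probe, attempts: int = 5, base_delay_s: float = 1.0,
                       sleep=time.sleep) -> bool:
    """Poll `probe` (a zero-arg callable returning True when the service is up)
    with exponential backoff. Return True once healthy, False after giving up.

    `sleep` is injectable so the loop can be unit-tested without real delays.
    """
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:
            # Double the wait after each failed probe: 1s, 2s, 4s, ...
            sleep(base_delay_s * (2 ** attempt))
    return False
```

In a real pipeline the probe would typically hit an application health endpoint, and the return value would gate promotion of the release.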
We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.

Posted 1 day ago

Apply

8.0 years

0 Lacs

India

On-site

Business Summary The Deltek Global Cloud team focuses on the delivery of first-class services and solutions for our customers. We are an innovative and dynamic team that is passionate about transforming the Deltek cloud services that power our customers' project success. Our diverse, global team works cross-functionally to make an impact on the business. If you want to work in a transformational environment, where education and training are encouraged, consider Deltek as the next step in your career! Position Responsibilities As a Senior Manager for the DevOps Engineering and Automation team, you will lead a team of skilled DevOps engineers responsible for automating infrastructure provisioning, configuration, and CI/CD pipelines for a portfolio of Enterprise solutions. With a strong DevOps transformational background, you will leverage your expertise in DevOps practices and tools and public clouds (AWS, OCI) to develop strategic initiatives that enhance the efficiency, scalability, and reliability of our deployment processes. Additionally, you will have significant experience in people management, strategy development, and cross-functional collaboration. Key Responsibilities: Strategic Leadership: Help develop and implement a strategic roadmap for DevOps practices, automation, and infrastructure management. Identify and prioritize opportunities for process improvements, cost efficiencies, and technological advancements. Collaborate with senior leadership to align DevOps strategies with business objectives and goals. Team Management: Lead, mentor, and develop a team of DevOps engineers, fostering a culture of collaboration, innovation, and continuous improvement. Manage team performance, set clear goals, and provide regular feedback and professional development opportunities. Recruit and onboard top talent to build a high-performing DevOps team. 
Infrastructure Provisioning and Configuration: Oversee the development and maintenance of infrastructure as code (IaC) using Terraform for provisioning cloud resources. Ensure the creation and maintenance of Ansible playbooks for automated configuration and management of infrastructure and applications. Implement best practices for infrastructure scalability, security, and cost management. CI/CD Pipeline Implementation: Guide and support the design, implementation, and management of CI/CD pipelines to automate the build, testing, and deployment of applications and services. Ensure integration of CI/CD pipelines with version control systems, build tools, and monitoring solutions. Promote practices that support automated testing, security scans, and compliance checks. Cloud Deployment and Management: Direct the deployment and management of applications and services in public cloud environments such as AWS and OCI. Utilize cloud-native services and tools to enhance application performance and reliability. Implement robust monitoring, troubleshooting, and disaster recovery solutions for cloud deployments. Cross-Functional Collaboration: Work closely with Engineering and Delivery stakeholders to ensure alignment and successful deployments. Facilitate design and code reviews, ensuring adherence to high standards of quality and performance. Drive cross-functional initiatives to improve process efficiency and project outcomes. Qualifications Qualifications: Education: Bachelor’s degree in Computer Science (strongly preferred), Information Technology, or a related field. Master’s degree preferred. Experience: Minimum of 8 years of experience in DevOps, cloud infrastructure, and automation, with at least 3 years in a leadership role. Skills: Expertise in Infrastructure and automated configuration tools for infrastructure provisioning or automated configuration management. 
Proven experience in designing and implementing CI/CD pipelines using tools such as Jenkins, Azure DevOps, GitLab CI, or CircleCI. Extensive hands-on experience with AWS and OCI, including services like EC2, S3, Lambda, VCN, and OCI Compute. Strong understanding of containerization and orchestration tools like Docker and Kubernetes. Knowledge of Oracle and SQL Server, including clustering, replication, partitioning, and indexing. Excellent scripting skills in languages such as Python, Bash, or PowerShell. Proficiency in monitoring and logging tools like Prometheus, Grafana, ELK stack, or CloudWatch. Strong leadership, communication, and interpersonal skills. Preferred Qualifications: Certifications: AWS Certified DevOps Engineer, Terraform Certified Associate, or similar.
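The infrastructure-as-code work described above rests on comparing desired state against actual state, roughly what `terraform plan` reports as create/update/delete actions. A toy Python sketch; the resource maps are hypothetical stand-ins for real provider state:

```python
def plan(desired: dict, actual: dict) -> dict:
    """Compare desired vs actual resource maps (name -> attribute dict)
    and return the create/update/delete sets, as an IaC planner would."""
    return {
        "create": sorted(set(desired) - set(actual)),
        "delete": sorted(set(actual) - set(desired)),
        "update": sorted(
            name for name in set(desired) & set(actual)
            if desired[name] != actual[name]   # attribute drift
        ),
    }
```

Running such a diff on a schedule is also the basis of drift detection: a non-empty result on an unchanged configuration means someone modified infrastructure outside the pipeline.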

Posted 1 day ago

Apply

12.0 - 15.0 years

4 - 8 Lacs

Mohali

On-site

Responsibilities & Key Deliverables Manage and design Siemens PLM, computing support via HPC and other RnD related software like Nastran, Hypermesh, NX etc. and their configurations. Design and architect cloud-based solutions using AWS, Azure, Google Cloud, or other cloud platforms. Develop and implement cloud migration strategies, cloud governance frameworks, and cloud best practices. Ensure the optimal performance, reliability, and security of the IT systems and software used by the RnD team. Lead and mentor the RnD IT staff and provide technical guidance and support. Coordinate with other teams and departments to align the RnD IT strategy with the business goals and customer needs. Research and evaluate new technologies and tools to improve the RnD IT processes and capabilities. Manage the budget and resources of the RnD IT department and report on the progress and outcomes of the projects. Education Qualification Bachelor's degree or higher in computer engineering or related field. General Experience Minimum 12-15 years of experience in R&D IT, with at least 5 years in a leadership role. Traits & Skills Required Excellent communication, collaboration, and problem-solving skills. Ability to work under pressure and manage multiple tasks and deadlines. Passion for innovation and continuous improvement. Strong knowledge and experience in Siemens PLM, computing support via HPC and other RnD related software like Nastran, Hypermesh, NX etc. and their configurations. Strong knowledge and experience in AWS, Azure, Google Cloud, or other cloud platforms. Experience in cloud migration, cloud security, cloud networking, cloud storage, and cloud monitoring. Experience in DevOps, CI/CD, automation, and orchestration tools. Experience in web services, APIs, microservices, and serverless architectures. Experience in programming languages such as .NET Core, Java, C#, Angular etc.
Certification in Siemens PLM and computational software management is preferred. Certification in AWS, Azure, Google Cloud, or other cloud platforms is a plus.

Posted 1 day ago

Apply

1.0 years

2 - 3 Lacs

India

On-site

Key Responsibilities: Design, implement, and manage CI/CD pipelines using tools like Jenkins, GitLab CI, or GitHub Actions. Maintain and monitor cloud infrastructure (AWS, Azure, GCP). Automate infrastructure provisioning using tools such as Terraform, Ansible, or CloudFormation. Manage containerization and orchestration using Docker and Kubernetes. Implement system monitoring, logging, and alerting (e.g., Prometheus, Grafana, ELK stack). Collaborate with development and QA teams to ensure smooth release processes. Ensure system availability, scalability, and performance. Enforce security best practices across all DevOps processes. Job Type: Full-time Pay: ₹18,000.00 - ₹25,000.00 per month Experience: DevOps: 1 year (Preferred) Work Location: In person
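The CI/CD pipelines this role manages share one core behaviour regardless of tool (Jenkins, GitLab CI, GitHub Actions): stages run in order and fail fast. A minimal, tool-agnostic Python sketch of that ordering — the stage names and `run_pipeline` helper are hypothetical:

```python
# Tool-agnostic toy model of CI/CD stage ordering: each stage runs only if
# every previous stage succeeded, and a failure skips the rest.
def run_pipeline(stages):
    """stages: list of (name, zero-arg callable returning bool) -> (ok, log)."""
    log = []
    for name, step in stages:
        ok = step()
        log.append((name, ok))
        if not ok:  # fail fast
            return False, log
    return True, log

ok, log = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: False),   # failing test gate
    ("deploy", lambda: True),  # never reached
])
print(ok, [name for name, _ in log])  # False ['build', 'test']
```

Real pipeline definitions express the same dependency chain declaratively (e.g. stage ordering in GitLab CI or `needs:` in GitHub Actions) rather than in imperative code.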

Posted 1 day ago

Apply

2.0 years

1 - 3 Lacs

Chennai

On-site

Job Description: 2+ years of experience in DevOps, Site Reliability Engineering, or related field. Proficiency in scripting languages like Bash, Python, or PowerShell. Experience with containerization and orchestration tools (Docker, Kubernetes). Familiarity with monitoring tools (Prometheus, Grafana, ELK Stack). Knowledge of Git and version control systems. Experience with security and compliance best practices is a plus. Key Responsibilities: Design, implement, and manage CI/CD pipelines using tools like Jenkins, GitLab CI, or GitHub Actions. Automate infrastructure provisioning using tools like Terraform, Ansible, or CloudFormation. Monitor system performance and ensure availability, scalability, and security. Collaborate with development teams to improve deployment workflows. Manage and maintain cloud infrastructure (AWS, Azure, GCP). Build and maintain system documentation and runbooks. Troubleshoot and resolve system and deployment issues in development, staging, and production environments. Preferred Skills: Certifications (AWS Certified DevOps Engineer, CKA, etc.) Experience with serverless architectures. Familiarity with Agile/Scrum workflows. Job Type: Full-time Pay: ₹15,000.00 - ₹25,000.00 per month Work Location: In person Speak with the employer +91 7395947629

Posted 1 day ago

Apply

15.0 - 20.0 years

1 - 6 Lacs

Chennai

On-site

Job Description: About Us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence, and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. Process Overview: Reference Data Technology is responsible for the strategy, sourcing, maintenance, and distribution of Reference Data across the Bank. It is also responsible for Global Markets Client Onboarding, Reg W, and FMU Reporting. 
Reference Data comprises three main categories: Client, Instrument, and Book. Reference Data Technology is a provider of data for Front-to-Back Flows, Enterprise Supply Chain, Risk, Banking, GWIM, and Compliance. PME and Bookmap are the firm's Authorized Data Sources for Instrument and Book data. Cesium is the System of Record for Client data in Global Markets. Some of the data domains include but are not limited to: Client Counterparty: Organizations, Individuals, Prospects, Contacts. Client Accounts & SSIs: Cash, Derivatives, Processing. Book: Trading Books, Subledgers, Volcker classifications. Instruments: Listed Products, Cleared Products, EOD Pricing, Holiday Calendars. Job Description Generative AI (GenAI) presents an exciting opportunity to derive valuable insights from data and drive revenue growth, efficiencies, and improved business processes. Technology will collaborate with Global Markets Sales & Trading, Quantitative Strategies & Data Group (QSDG), and Platform teams to design and build out its global GenAI platform. The platform will cater to a rapidly growing number of use cases that harness the power of GenAI. Both proprietary and open-source Large Language Models, and large structured and unstructured data sets, will be leveraged to produce insights for Global Markets and its clients. We are seeking a Software Engineer to build this platform. In this role, you will ensure that software is developed to meet functional, non-functional, and compliance requirements, and that solutions are well designed with maintainability/ease of integration and testing built in from the outset. Hands-on engagement in the full software lifecycle is expected. This includes requirements analysis, architecture design, coding, testing, and deployment. Job expectations include a strong knowledge of development and testing practices common to the industry, as well as design and architectural patterns. Responsibilities: Hands-on people and project manager role.
Responsible for managing a mid-sized team and overseeing their deliverables on a day-to-day basis. Expected to contribute in his/her own individual capacity to software development, design, and code reviews. Design, develop, and modify architecture components, application interfaces, and solution enablers while ensuring principal architecture integrity is maintained. Mentor other software engineers and coach the team on Continuous Integration and Continuous Delivery (CI/CD) practices and the automation tool stack. Code solutions and implement automated unit tests to deliver a requirement/story per the defined acceptance criteria and compliance requirements. Help evaluate and execute a proof of concept as necessary to implement new ideas or mitigate risk. Design, develop, and maintain automated test suites (integration, regression, performance). Ensure the solution meets product acceptance criteria with minimal technical debt. Troubleshoot build and setup failures and facilitate resolution. Ensure execution and delivery meet technology’s expectations in terms of functionality, quality, performance, reliability, and timeline. Communicate status frequently to technology partners. Requirements: Education: BE / BTech / MTech / MCA / MSc Certifications (if any): NA Experience Range: 15 to 20 years in similar roles, preferably in the financial industry. Foundational Skills: Some exposure to managing mid-sized teams. Hands-on experience in AI/GenAI system design, implementation, and scaling, with expertise in large language models (LLMs) and AI frameworks. Expertise in advanced Python development and full-stack technologies. Proven ability to architect enterprise-scale solutions. Hands-on experience in application development in one or more areas: MongoDB, Redis, React Framework, Impala, Autosys, FastAPI services, Containerization. Working in large teams that collaboratively develop on a shared multi-repo codebase using IDEs (e.g.
VS Code rather than Jupyter Notebooks), Continuous Integration (CI), Continuous Deployment (CD), and Continuous Testing. Hands-on DevOps experience with one or more of the following enterprise development tools: Version Control (Git/Bitbucket), Build Orchestration (Jenkins), Code Quality (SonarQube and pytest unit testing), Artifact Management (Artifactory), and Deployment (Ansible). Experience with agile development methodologies and building supportability into applications. Excellent analytical and problem-solving skills. Experience with developing frameworks and tools specific to AI/ML applications. Familiarity with cloud platforms and development in cloud environments. Ability to communicate clearly and effectively to a wide range of audiences (business stakeholders, developer & support teams). Self-starter, able to break down complex problems into smaller problems, manage dependencies, and efficiently drive through to a solution. Detail-oriented and highly organized. Adaptable to shifting and competing priorities. Desired Skills: Experience in the Global Markets domain. Work Timings: 11:30am to 8:30pm IST Job Location: Chennai
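The pytest unit-testing requirement in this posting can be sketched with a toy example — `classify_book` and its Volcker default are invented here purely to show the test style, not the bank's actual code:

```python
# Toy pytest-style tests: pytest collects functions named test_* and treats a
# bare assert as the check, rewriting it to give a readable failure message.
def classify_book(book):
    """Return the book's Volcker classification, defaulting when absent."""
    return book.get("volcker", "UNCLASSIFIED")

def test_default_classification():
    assert classify_book({"name": "EMEA Rates"}) == "UNCLASSIFIED"

def test_explicit_classification():
    assert classify_book({"name": "US Credit", "volcker": "TRADING"}) == "TRADING"
```

Run with `pytest <file>.py`; acceptance criteria like the default above map one-to-one onto such test functions.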

Posted 1 day ago

Apply

8.0 years

47 Lacs

Coimbatore

On-site

Responsibilities: Develop cloud design patterns and the architecture framework required for project implementation. Define architecture standards for the development team, encourage best practices, and prepare the technical documentation for project deliverables. Translate the design into implementable patterns for the DevOps engineering team. Act as the single point of contact for any technical delivery related to a project. Provide technical thought leadership to the team on solution decisions, technical issues, and internal organization initiatives. Provide technical guidance and assistance on effort estimation to pre-sales teams for client proposals. Project management activities include project estimation, work breakdown structure, setting project schedules, and sending project status reports to clients. Assist the Project Management team in defining and organizing tasks across sprints by participating in sprint planning and story point estimation. Mentor and guide team members on cloud technologies. Location: Chennai, Coimbatore, Bangalore & Hyderabad Must have skills: Bachelor's degree in Engineering or a related technical field, or equivalent practical experience. 8+ years of experience in the industry with good exposure to Cloud, DevOps, and Automation. 4+ years of experience in designing and architecting public/private cloud, preferably AWS. Certified in AWS Solutions Architect Professional and/or an AWS Specialty. Must have good knowledge of core infrastructure domains such as Networking, Compute, Security, Backup, and Monitoring. Proficient in one or more of the following IaC tools, e.g., AWS CloudFormation, Terraform, or Ansible. Proficient in one or more of the following scripting languages: Shell, Python, Ruby, Golang, PowerShell, etc. Specialist in one or more CI/CD tools: Azure DevOps, Jenkins, GitLab CI, GitHub Actions, Bitbucket Pipelines. Proficient in various cloud security tools and their usage.
Exposure to containers and container orchestration using Docker, Kubernetes, or managed container services such as AWS ECS, Fargate, or EKS. Exposure to AWS serverless technology: Lambda, Step Functions, etc. Exposure to cloud security posture assessment (CSPA) and providing recommendations (AWS Security Hub). Exposure to the AWS Well-Architected Framework and its application and recommendations. Expert in Business Continuity plans and related designs (DR). Exposure to cloud infrastructure cost management and optimization. Experience in setting up data centre to AWS Cloud connectivity using Site-to-Site VPN and/or Direct Connect. Experience in large-scale migrations to cloud, with exposure to end-to-end migration methodology across the Discovery, Build, and Migration phases. Nice to have skills: Seasoned in migrating and transforming legacy solutions to cloud. Experience in database configuration and administration. Experience in implementing Landing Zones and Control Tower. Deployment of applications or microservices via DevOps tools. K8s Lens or equivalent tools for administration and monitoring. Cross-platform application integrations. You are cloud-ready: You are familiar with various aspects of cloud technologies and have successfully helped customers adopt the usage of the cloud, public, private, and hybrid. You have advised on and architected solutions that involve industry-leading IaaS and PaaS solutions. You are a great listener: Our goal is to build solutions that last for years and continuously adapt to changing needs of the industry. You are a great listener to your customers, peers, and industry trends, and are receptive to what the market needs. You are a collaborator: You enjoy working with multi-cultural teams, both domestic and international, and find ways to get the best out of your people. You also work well with a variety of roles including core development, design, operations, and support.
You believe in continuous learning - Things change in our industry continuously, and you always love to learn both the underlying technology and the business motivations of our customers, constantly finding new ways to improve our solution, processes to add value for our customers. You are ready to contribute to rapid growth within a dynamic, small company culture. Job Type: Permanent Pay: Up to ₹4,700,000.00 per year Experience: total: 8 years (Required) Work Location: In person

Posted 1 day ago

Apply

3.0 years

11 - 24 Lacs

Chennai

On-site

Job Description Data Engineer, Chennai We’re seeking a highly motivated Data Engineer to join our agile, cross-functional team and drive end-to-end data pipeline development in a cloud-native, big data ecosystem. You’ll leverage ETL/ELT best practices and data lakehouse paradigms to deliver scalable solutions. Proficiency in SQL, Python, Spark, and modern data orchestration tools (e.g. Airflow) is essential, along with experience in CI/CD, DevOps, and containerized environments like Docker and Kubernetes. This is your opportunity to make an impact in a fast-paced, data-driven culture. Responsibilities Responsible for data pipeline development and maintenance. Contribute to development, maintenance, testing strategy, design discussions, and operations of the team. Participate in all aspects of agile software development including design, implementation, and deployment. Responsible for the end-to-end lifecycle of new product features / components. Ensuring application performance, uptime, and scale, maintaining high standards of code quality and thoughtful application design. Work with a small, cross-functional team on products and features to drive growth. Learning new tools, languages, workflows, and philosophies to grow. Research and suggest new technologies for boosting the product. Have an impact on product development by making important technical decisions, influencing the system architecture, development practices and more. Qualifications Excellent team player with strong communication skills. B.Sc. in Computer Sciences or similar. 3-5 years of experience in Data Pipeline development. 3-5 years of experience in PySpark / Databricks. 3-5 years of experience in Python / Airflow. Knowledge of OOP and design patterns. Knowledge of server-side technologies such as Java and Spring. Experience with Docker containers, Kubernetes, and cloud environments. Expertise in testing methodologies (Unit-testing, TDD, mocking). Fluent with large scale SQL databases.
Good problem-solving and analysis abilities. Requirements - Advantage Experience with Azure cloud services. Experience with Agile Development methodologies. Experience with Git. Additional Information Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. 
Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
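The ETL/ELT pipeline development this Data Engineer posting centres on can be sketched minimally in plain Python — a hypothetical transform step with a toy schema, shown here independent of the Spark/Airflow machinery that would host such logic at scale:

```python
# Toy transform step: normalise raw records and drop rows failing validation,
# the shape of a unit-testable function inside a larger pipeline.
def transform(records):
    out = []
    for r in records:
        if not r.get("id"):
            continue  # drop rows without a usable id
        out.append({"id": int(r["id"]), "name": r.get("name", "").strip().lower()})
    return out

raw = [
    {"id": "1", "name": " Alice "},
    {"id": None, "name": "bad row"},
    {"id": "2", "name": "BOB"},
]
print(transform(raw))  # [{'id': 1, 'name': 'alice'}, {'id': 2, 'name': 'bob'}]
```

Keeping transforms as pure functions like this is what makes the posting's testing requirements (unit testing, TDD, mocking) straightforward to satisfy.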

Posted 1 day ago

Apply

3.0 - 8.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Job Summary: We are seeking a skilled and experienced QA Engineer with a strong technical background in networking, automation, API testing, and performance testing. The ideal candidate will have proficiency in Postman API testing, Java programming, and testing frameworks like JMeter, Selenium, REST Assured, and Robot Framework. Familiarity with network architecture, including ORAN, SMO, RIC, and OSS/BSS, is a plus. Key Responsibilities: Perform functional, performance, and load testing of web applications using tools such as JMeter and Postman. Develop, maintain, and execute automated test scripts using Selenium with Java for web application testing. Design and implement tests for RESTful APIs using REST Assured (Java library) for testing HTTP responses and ensuring proper API functionality. Collaborate with development teams to identify and resolve software defects through effective debugging and testing. Utilize the Robot Framework with Python for acceptance testing and acceptance test-driven development. Conduct end-to-end testing and ensure that systems meet all functional requirements. Ensure quality and compliance of software releases by conducting thorough test cases and evaluating product quality. Required Skill Set: Experience Range: 3 to 8 years. Postman API Testing: Experience in testing RESTful APIs and web services using Postman. Java: Strong knowledge of Java for test script development, particularly with Selenium and REST Assured. JMeter: Experience in performance, functional, and load testing using Apache JMeter. Selenium with Java: Expertise in Selenium WebDriver for automated functional testing, including script development and maintenance using Java. REST Assured: Proficient in using the REST Assured framework (Java library) for testing REST APIs and validating HTTP responses. Robot Framework: Hands-on experience with the Robot Framework for acceptance testing and test-driven development (TDD) in Python.
ORAN/SMO/RIC/OSS Architecture: In-depth knowledge of ORAN (Open Radio Access Network), SMO (Service Management and Orchestration), RIC (RAN Intelligent Controller), and OSS (Operations Support Systems) architectures. Good-to-have Skill Set: Networking Knowledge: Deep understanding of networking concepts, specifically around RAN elements and network architectures (ORAN, SMO, RIC, OSS). Monitoring Tools: Experience with Prometheus, Grafana, and Kafka for real-time monitoring and performance tracking of applications and systems. Keycloak: Familiarity with Keycloak for identity and access management.
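As a hedged illustration of the API-testing skills this posting lists, here is the kind of check a Postman test script or REST Assured chain expresses, written as a plain-Python toy (the field names and `check_response` helper are invented for the example):

```python
# Toy response check: status-code and required-JSON-field assertions of the
# kind an API test tool makes against an HTTP response.
import json

def check_response(status, body, required_fields):
    """Return a list of failure messages; an empty list means the check passed."""
    failures = []
    if status != 200:
        failures.append(f"expected HTTP 200, got {status}")
    payload = json.loads(body)
    for field in required_fields:
        if field not in payload:
            failures.append(f"missing field: {field}")
    return failures

body = json.dumps({"id": 7, "state": "CONNECTED"})
print(check_response(200, body, ["id", "state"]))   # []
print(check_response(500, body, ["id", "cellId"]))  # two failures
```

Returning a list of failures rather than raising on the first one mirrors how API test tools report every broken assertion per request.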

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What You’ll Be Doing… The Orchestration & Automation Planning team is looking for an Automation Engineer to maintain and advance our automation platform to support in-house and 3rd party RAN automation applications. This includes interfacing with external suppliers and internal stakeholders. In this role, the candidate will be responsible for leading feature designs and will work with engineering and operations for solutions to improve reliability, scalability, and performance. You will partner with key stakeholders to ensure new features and/or fixes are swiftly tested and rolled out with no negative impact to customers. In this role you will have the opportunity to envision and operationalize system and network variables to benchmark the platform and to ensure reliability and performance metrics are met. Ongoing assessment of the deployment process as well as implementation of new procedures to streamline the process. Designing architecture and specifying requirements for platform features to support SON applications with new capabilities, and meeting capacity and reliability requirements. Designing platform solutions to evolve toward a standard solution. Designing platform solutions with high availability and geo-redundancy, including distributed architecture, and meeting internal logging, alarming, auditing, and security practices.
Executing solution plans and working with internal engineering and operations teams to bring the solutions to production. What We’re Looking For... You’re a technical specialist with solid credentials and a remarkable ability to find, break down, and solve problems. You work effectively with a wide range of internal and external stakeholders, and you’re phenomenal at partnering with the groups you support. You constantly look for ways to make a great network design better, and you’re a natural mentor to junior engineers looking to develop their technical skills. You’ll Need To Have Bachelor’s degree or four or more years of work experience. Four or more years of relevant work experience. Experience in automation technology. Experience in creating architecture and understanding its principles, including availability, reliability, scalability, redundancy, high availability, and serviceability. Experience in specifying requirements for solutions and features. Experience in the use of AI/ML and knowledge of basic AI/ML principles. Knowledge of 3GPP 4G and 5G technologies, particularly in the RAN domain. Knowledge of IP network services and protocols. Experience in software development including design, implementation, and test. Knowledge of OpenStack-related network functions. Knowledge of container-related network functions (Kubernetes), along with OpenShift and SPK. Even better if you have one or more of the following: A degree in Engineering, Computer Science, or related discipline. Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Posted 1 day ago

Apply

15.0 years

0 Lacs

Noida

On-site

Project Role: Application Tech Support Practitioner Project Role Description: Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge. Must have skills: Python (Programming Language) Good to have skills: Generative AI Minimum 5 year(s) of experience is required Educational Qualification: 15 years full time education Summary: We are seeking a highly motivated and technically skilled GenAI & Prompt Engineering Specialist to join our Automation & Asset Development & Deployment team. This role will focus on designing, developing, and optimizing generative AI solutions using Python and large language models (LLMs). You will be instrumental in building intelligent automation workflows, refining prompt strategies, and ensuring scalable, secure AI deployments. Roles & Responsibilities: • Design, test, and optimise prompts for LLMs to support use cases that benefit infra & application managed services. • Build and maintain Python-based microservices and scripts for data processing, API integration, and model orchestration. • Collaborate with SMEs to convert business requirements into GenAI-powered workflows, including chunking logic, token optimisation, and schema transformation. • Work with foundation models and APIs (e.g., OpenAI, Vertex AI, Claude Sonnet) to embed GenAI capabilities into enterprise platforms. • Ensure all AI solutions comply with internal data privacy, PII masking, and security standards. • Conduct A/B testing of prompts, evaluate model outputs, and iterate based on SME feedback. • Maintain clear documentation of prompt strategies, model behaviors, and solution architectures.
Professional & Technical Skills: • Strong proficiency in Python, including experience with REST APIs, data parsing, and automation scripting. • Deep understanding of LLMs, prompt engineering, and GenAI frameworks (e.g., LangChain, RAG pipelines). • Familiarity with data modelling, SQL, and RDBMS concepts. • Experience with agentic workflows, token optimization, and schema chunking. Additional Information: - The candidate should have a minimum of 5 years of experience in Python (Programming Language). - This position is based at our Noida office. - A 15 years full time education is required.
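The "chunking logic" responsibility this posting names can be sketched in a few lines — a word-based splitter with overlap (the sizes and the `chunk_text` name are illustrative assumptions; production chunkers typically count model tokens, not words):

```python
# Toy chunker: word-based windows with overlap so context carries across
# chunk boundaries before text is sent to an LLM.
def chunk_text(text, chunk_size=50, overlap=10):
    words = text.split()
    step = chunk_size - overlap  # advance by less than a full window
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), step)]

doc = " ".join(f"w{i}" for i in range(120))
chunks = chunk_text(doc, chunk_size=50, overlap=10)
print(len(chunks))           # 3 chunks of up to 50 words each
print(chunks[1].split()[0])  # w40 -> chunk 2 starts 10 words before chunk 1 ends
```

The overlap is the design choice worth noting: it trades a little duplicated token budget for retrieval answers that do not get cut off at an arbitrary boundary.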

Posted 1 day ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

LiveRamp is the data collaboration platform of choice for the world’s most innovative companies. A groundbreaking leader in consumer privacy, data ethics, and foundational identity, LiveRamp is setting the new standard for building a connected customer view with unmatched clarity and context while protecting precious brand and consumer trust. LiveRamp offers complete flexibility to collaborate wherever data lives to support the widest range of data collaboration use cases—within organizations, between brands, and across its premier global network of top-quality partners. Hundreds of global innovators, from iconic consumer brands and tech giants to banks, retailers, and healthcare leaders turn to LiveRamp to build enduring brand and business value by deepening customer engagement and loyalty, activating new partnerships, and maximizing the value of their first-party data while staying on the forefront of rapidly evolving compliance and privacy requirements. LiveRamp is looking for a Staff Backend Engineer to join our team and help build the Unified Segment Builder (USB) — the next-generation, comprehensive segmentation solution for creating precise, real-time, and meaningful audiences. USB is a foundational pillar in LiveRamp’s product ecosystem. It empowers customers to create powerful audience segments using 1st, 2nd, and 3rd-party data, with support for combining, excluding, and overlapping datasets. The solution is designed for scale, performance, and usability — replacing legacy segmentation tools and delivering a unified, world-class user experience. We are also rolling out AI-powered segment building capabilities based on USB, aiming to boost efficiency and expand the use cases beyond traditional campaign planners. You Will Collaborate with APAC engineers, and partner closely with US-based product and UX teams. Design and implement scalable backend systems, APIs, and infrastructure powering the USB and other core LiveRamp products. 
Lead cross-functional technical discussions, drive architectural decisions, and evangelize engineering best practices across teams. Mentor engineers and contribute to the technical leadership of the local team. Ensure operational excellence by building reliable, observable, and maintainable production systems. Help rearchitect our existing systems to provide a more powerful and flexible data processing environment at scale. Your Team Will Design, build, and scale USB and related segment-building products critical to LiveRamp’s success. Collaborate with engineering, product, DevOps, SRE, and QA teams to deliver new features and improvements. Build systems that integrate with the broader LiveRamp Data Collaboration Platform. Continuously improve quality, performance, and developer experience for internal tools and services. About You 8+ years of experience writing and deploying production-grade backend code. Strong programming skills in Java, Python, Kotlin, or Go. 3+ years of experience working with big data technologies such as Apache Spark, Hadoop/MapReduce, and Kafka. Extensive experience with containerization and orchestration technologies, including Docker and Kubernetes, for building and managing scalable, reliable services. Proven experience designing and delivering large-scale distributed systems in production environments. Strong track record of contributing to or leading architectural efforts for complex systems. Hands-on experience with cloud platforms, ideally GCP (AWS or Azure also acceptable). Proficiency with Spring Boot and modern backend frameworks. Experience working with distributed databases (e.g., SingleStore, ClickHouse, etc.). Bonus Points Familiarity with building AI-enabled applications, especially those involving LLMs or generative AI workflows. Experience with LangChain or LangGraph frameworks for orchestrating multi-step AI agents is a strong plus.
Benefits Flexible paid time off, paid holidays, options for working from home, and paid parental leave. Comprehensive Benefits Package: LiveRamp offers a comprehensive benefits package designed to help you be your best self in your personal and professional lives. Our benefits package offers medical, dental, vision, accident, life and disability, an employee assistance program, voluntary benefits as well as perks programs for your healthy lifestyle, career growth, and more. Your medical benefits extend to your dependents including parents. More About Us LiveRamp’s mission is to connect data in ways that matter, and doing so starts with our people. We know that inspired teams enlist people from a blend of backgrounds and experiences. And we know that individuals do their best when they not only bring their full selves to work but feel like they truly belong. Connecting LiveRampers to new ideas and one another is one of our guiding principles—one that informs how we hire, train, and grow our global team across nine countries and four continents. Click here to learn more about Diversity, Inclusion, & Belonging (DIB) at LiveRamp. To all recruitment agencies : LiveRamp does not accept agency resumes. Please do not forward resumes to our jobs alias, LiveRamp employees or any other company location. LiveRamp is not responsible for any fees related to unsolicited resumes.

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
