
14240 Orchestration Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

18 Lacs

Mohali

On-site

Key Responsibilities:
- Design and develop full-stack web applications using the MERN (MongoDB, Express, React, Node.js) stack.
- Build RESTful APIs and integrate front-end and back-end systems.
- Deploy and manage applications using AWS services such as EC2, S3, Lambda, API Gateway, DynamoDB, CloudFront, RDS, etc.
- Implement CI/CD pipelines using AWS CodePipeline, CodeBuild, or other DevOps tools.
- Monitor, optimize, and scale applications for performance and availability.
- Ensure security best practices in both code and AWS infrastructure.
- Write clean, modular, and maintainable code with proper documentation.
- Work closely with product managers, designers, and QA to deliver high-quality products on schedule.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 3+ years of professional experience with MERN stack development.
- Strong knowledge of JavaScript (ES6+), React.js (Hooks, Redux), and Node.js.
- Hands-on experience with MongoDB, including writing complex queries and aggregations.
- Proficiency in deploying and managing applications on AWS.
- Experience with AWS services such as EC2, S3, Lambda, API Gateway, RDS, CloudWatch, etc.
- Knowledge of Git, Docker, and CI/CD pipelines.
- Understanding of RESTful API design, microservices architecture, and serverless computing (a minimal handler sketch follows this listing).
- Strong debugging and problem-solving skills.

Preferred Qualifications:
- AWS certification (e.g., AWS Certified Developer – Associate).
- Experience with Infrastructure as Code (IaC) using Terraform or AWS CloudFormation.
- Experience with GraphQL and WebSockets.
- Familiarity with container orchestration tools such as Kubernetes or AWS ECS/EKS.
- Exposure to Agile/Scrum methodologies.

Company overview: smartData is a leader in the global software business space for business consulting and technology integrations, making business easier, accessible, secure, and meaningful for its target segment of startups to small and medium enterprises. As your technology partner, we provide both domain and technology consulting; our in-house products and unique productized-service approach help us act as business integrators, saving substantial time to market for our esteemed customers. With 8000+ projects and 20+ years of experience, backed by offices in the US, Australia, and India providing next-door assistance and round-the-clock connectivity, we ensure continual business growth for all our customers. Our business consulting and integrator services focus on the important industries of healthcare, B2B, B2C, and B2B2C platforms, online delivery services, video platform services, and IT services. Strong expertise in Microsoft, LAMP stack, and MEAN/MERN stack with a mobility-first approach via native (iOS, Android, Tizen) or hybrid (React Native, Flutter, Ionic, Cordova, PhoneGap) stacks, mixed with AI and ML, helps us deliver on the ongoing needs of customers continuously.

Job Type: Full-time
Pay: Up to ₹1,800,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Supplemental Pay: Performance bonus
Work Location: In person
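The serverless computing this listing asks about usually starts with a Lambda handler behind API Gateway. A minimal sketch is shown below — written in Python for consistency with the other sketches in this document, though the role itself centers on Node.js (Lambda supports both runtimes). The handler/response shape follows AWS's documented proxy-integration contract; the query parameter accessed is an assumption for illustration.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler returning an API Gateway proxy response.

    `event` and `context` are supplied by the Lambda runtime; the
    "queryStringParameters" key follows the API Gateway proxy event shape.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```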

Posted 4 hours ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Why This Role Matters:
Data is the foundation of our business, and your work will ensure that we continue to deliver high-quality competitive intelligence at scale. Web platforms are constantly evolving and deploying sophisticated anti-bot measures—your job is to stay ahead of them. If you thrive on solving complex technical challenges and enjoy working with real-world data at immense scale, this role is for you.

We seek a Software Development Engineer with expertise in cloud infrastructure, Big Data, and web crawling technologies. This role bridges site reliability engineering with scalable data extraction solutions, ensuring our infrastructure remains robust and capable of handling high-volume data collection. You will design resilient systems, optimize automation pipelines, and tackle challenges posed by advanced bot-detection mechanisms.

Key Responsibilities:
- Architect, deploy, and manage scalable cloud environments (AWS/GCP/DO) supporting distributed data processing that handles terabyte-scale datasets and billions of records efficiently.
- Automate infrastructure provisioning, monitoring, and disaster recovery using tools like Terraform, Kubernetes, and Prometheus.
- Optimize CI/CD pipelines to ensure seamless deployment of web scraping workflows and infrastructure updates.
- Develop and maintain stealthy web scrapers using Puppeteer, Playwright, and headless Chromium browsers (see the sketch after this listing).
- Reverse-engineer bot-detection mechanisms (e.g., TLS fingerprinting, CAPTCHA solving) and implement evasion strategies.
- Monitor system health, troubleshoot bottlenecks, and ensure 99.99% uptime for data collection and processing pipelines.
- Implement security best practices for cloud infrastructure, including intrusion detection, data encryption, and compliance audits.
- Partner with data collection, ML, and SaaS teams to align infrastructure scalability with evolving data needs.
- Research emerging technologies to stay ahead of anti-bot vendors such as Kasada, PerimeterX, Akamai, and Cloudflare.

Required Skills:
- 4–6 years of experience in site reliability engineering and cloud infrastructure management.
- Proficiency in Python and JavaScript for scripting and automation.
- Hands-on experience with Puppeteer/Playwright, headless browsers, and anti-bot evasion techniques.
- Knowledge of networking protocols, TLS fingerprinting, and CAPTCHA-solving frameworks.
- Experience with monitoring and observability tools such as Grafana, Prometheus, and Elasticsearch, and familiarity with monitoring and optimizing resource utilization in distributed systems.
- Experience with data lake architectures and optimizing storage using formats such as Parquet, Avro, or ORC.
- Strong proficiency in cloud platforms (AWS, GCP, or Azure) and containerization/orchestration (Docker, Kubernetes).
- Deep understanding of infrastructure-as-code tools (Terraform, Ansible).
- Deep experience designing resilient data systems with a focus on fault tolerance, data replication, and disaster recovery strategies in distributed environments.
- Experience implementing observability frameworks, distributed tracing, and real-time monitoring tools.
- Excellent problem-solving abilities, with a collaborative mindset and strong communication skills.
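As a flavor of the headless-browser work described above, a minimal fetch with Playwright's Python sync API might look like the sketch below. The target URL, user-agent string, and viewport are placeholders, and real evasion engineering is far more involved than overriding context settings.

```python
from playwright.sync_api import sync_playwright

URL = "https://example.com/products"  # placeholder target

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    # A custom context controls fingerprint-relevant settings such as
    # user agent, locale, and viewport (all placeholder values here).
    context = browser.new_context(
        user_agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
        locale="en-US",
        viewport={"width": 1366, "height": 768},
    )
    page = context.new_page()
    page.goto(URL, wait_until="networkidle")
    html = page.content()  # rendered DOM, after client-side JS has run
    print(len(html), "bytes fetched")
    browser.close()
```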

Posted 4 hours ago

Apply

2.0 years

0 - 1 Lacs

India

Remote

About The Role:
Masai, in academic collaboration with a premier institute, is seeking a Teaching Assistant (TA) for its New Age Software Engineering program. This advanced 90-hour course equips learners with Generative AI foundations, production-grade AI engineering, serverless deployments, agentic workflows, and vision-enabled AI applications. The TA will play a key role in mentoring learners, resolving queries, sharing real-world practices, and guiding hands-on AI engineering projects. This role is perfect for professionals who want to contribute to next-generation AI-driven software engineering education while keeping their technical skills sharp.

Key Responsibilities (KRAs):
- Doubt-Solving Sessions: Conduct or moderate weekly sessions to clarify concepts across Generative AI and prompt engineering; AI lifecycle management and observability; serverless and edge AI deployments; and agentic workflows and vision-language models (VLMs). Share industry insights and practical examples to reinforce learning.
- Q&A and Discussion Forum Support: Respond to student questions through forums, chat, or email with detailed explanations and actionable solutions. Facilitate peer-to-peer discussions on emerging tools, frameworks, and best practices in AI engineering.
- Research & Project Support: Assist learners in capstone project design and integration, including vector databases, agent orchestration, and performance tuning. Collaborate with the academic team to research emerging AI frameworks like LangGraph, CrewAI, Hugging Face models, and WebGPU deployments.
- Learner Engagement: Drive engagement via assignment feedback, interactive problem-solving, and personalized nudges to keep learners motivated. Encourage learners to adopt best practices for responsible and scalable AI engineering.
- Content Feedback Loop: Collect learner feedback and recommend updates to curriculum modules for continuous course improvement.

Candidate Requirements:
- 2+ years of experience in software engineering, AI engineering, or full-stack development.
- Strong knowledge of Python/Node.js, cloud platforms (AWS Lambda, Vercel, Cloudflare Workers), and modern AI tools.
- Hands-on experience with LLMs, vector databases (Pinecone, Weaviate), agentic frameworks (LangGraph, ReAct), and AI observability tools.
- Understanding of AI deployment, prompt engineering, model fine-tuning, and RAG pipelines (a toy RAG sketch follows this listing).
- Excellent communication and problem-solving skills; mentoring experience is a plus.
- Familiarity with online learning platforms or LMS tools is advantageous.

Engagement Details:
- Time Commitment: 6 to 8 hours per week
- Location: Remote (online)
- Compensation: ₹8,000 to ₹10,000 per month

Why Join Us? Benefits and Perks:
- Contribute to a cutting-edge AI & software engineering program with a leading ed-tech platform.
- Mentor learners on next-generation AI applications and engineering best practices.
- Engage in flexible remote working while influencing future technological innovations.
- Access continuous professional development and faculty enrichment programs.
- Network with industry experts and professionals in the AI and software engineering domain.

Skills: LLMs, RAG pipelines, AWS Lambda, Cloudflare Workers, Vercel, vector databases, prompt engineering, model fine-tuning, agentic frameworks, AI observability tools, Python, Node.js, mentoring, problem-solving, communication.
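Since the curriculum covers RAG pipelines, here is a deliberately toy illustration of the retrieve-then-generate pattern. The embed() and generate() helpers are hypothetical stand-ins for a real embedding model and LLM call; everything else is plain Python.

```python
# Toy retrieval-augmented generation (RAG) flow. embed() and generate()
# are hypothetical stand-ins for a real embedding model and LLM API.
import math

def embed(text: str) -> list[float]:
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch) / 1000.0  # placeholder for a real embedding
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def generate(prompt: str) -> str:
    return f"[an LLM would answer here, given:\n{prompt}]"  # stand-in

docs = ["Lambda functions scale to zero.", "Vector DBs index embeddings."]
index = [(d, embed(d)) for d in docs]                       # 1. embed corpus

question = "How do serverless functions scale?"
q_vec = embed(question)                                     # 2. embed query
best = max(index, key=lambda pair: cosine(q_vec, pair[1]))  # 3. retrieve
prompt = f"Context: {best[0]}\nQuestion: {question}"        # 4. augment
print(generate(prompt))                                     # 5. generate
```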

Posted 4 hours ago

Apply

3.0 years

0 Lacs

Hyderābād

Remote

Vertafore is a leading technology company whose innovative software solutions are advancing the insurance industry. Our suite of products helps our customers better manage their business, boost their productivity and efficiency, and lower costs while strengthening relationships. Our mission is to move InsurTech forward by putting people at the heart of the industry. We are leading the way with product innovation, technology partnerships, and a focus on customer success. Our fast-paced and collaborative environment inspires us to create, think, and challenge each other in ways that make our solutions and our teams better. Whether you're interested in engineering or development, marketing or sales, or something else, if this sounds like you, then we'd love to hear from you! We are headquartered in Denver, Colorado, with offices across the US, Canada, and India.

JOB DESCRIPTION
We are seeking a highly motivated ServiceNow expert to join our IT Service Management team. The ideal candidate should possess relevant experience and be ready to hit the ground running on daily administration, support, and automation of workflows using the platform and its integration functionalities. Besides being the ServiceNow subject matter expert, the Technical Lead ServiceNow Administrator will own and see through resolution of requests and issues related to the platform.

Core Requirements and Responsibilities:
Essential job functions include but are not limited to the following:
- Proficiently develop workflows, business rules, UI rules, form updates, and other platform features to tailor ServiceNow to the needs of the organization.
- Continuously improve workflow orchestrations within ServiceNow, based on ITIL, to support efficient incident, problem, change, task, project, and resource management.
- A minimum of 3 years' experience in programming/scripting to integrate ServiceNow with different systems and perform routine automation (see the integration sketch after this listing).
- Collaborate with ServiceNow contacts on a regular basis and stay up to date on platform updates, upcoming new features, and pertinent security issues.
- Create, maintain, and monitor the health of integrations with other systems, including but not limited to Salesforce and Rally.
- Keep the Service Catalog current and easily accessible.
- Build and maintain an up-to-date configuration management database (CMDB) using asset management in ServiceNow.
- Monitor platform performance daily, assist end users, fix problems, and provide training when needed.
- Ensure security and compliance are met with user roles, permissions, and data protection in ServiceNow; adopt security best practices when designing workflow orchestration and related automations.
- Drive continuous platform improvement, including finding and fixing configuration gaps, data inconsistencies, and unused features, and follow through on all improvement-related action items.
- Understand ServiceNow Portfolio Management; implement, configure, and support Strategic Portfolio Management in ServiceNow.
- Plan and carry out platform upgrades, including preparing for platform modifications or improvements that affect end users.
- Create and keep up-to-date, thorough documentation for runbooks, processes, and configurations.
- Test all ServiceNow modifications in the lower environment prior to rollout in production to enforce low-risk platform changes.
- Periodically audit user licenses to ensure usage is under control.
- Partner with other teams to take advantage of ServiceNow automation opportunities.
- Adhere to Vertafore Change Management policies for code deployments.

Why Vertafore is the place for you: *Canada Only
- The opportunity to work in a space where modern technology meets a stable and vital industry
- Medical, vision & dental plans
- Life, AD&D
- Short Term and Long Term Disability
- Pension Plan & Employer Match
- Maternity, Paternity and Parental Leave
- Employee and Family Assistance Program (EFAP)
- Education Assistance
- Additional programs: Employee Referral and Internal Recognition

Why Vertafore is the place for you: *US Only
- The opportunity to work in a space where modern technology meets a stable and vital industry
- A Flexible First work environment! Our North America team members use our offices for collaboration, community, and team-building, with members sometimes asked to come into an office and/or travel depending on job responsibilities. Other times, our teams work from home or a similar environment.
- Medical, vision & dental plans (PPO & high-deductible options)
- Health Savings Account & Flexible Spending Account options: Health Care FSA, Dental & Vision FSA, Dependent Care FSA, Commuter FSA
- Life, AD&D (Basic & Supplemental), and Disability
- 401(k) Retirement Savings Plan & Employer Match
- Supplemental plans: pet insurance, Hospital Indemnity, and Accident Insurance
- Parental Leave & Adoption Assistance
- Employee Assistance Program (EAP)
- Education & Legal Assistance
- Additional programs: Tuition Reimbursement, Employee Referral, Internal Recognition, and Wellness
- Commuter Benefits (Denver)

The selected candidate must be legally authorized to work in the United States.

The above statements are intended to describe the general nature and level of work being performed by people assigned to this job. They are not intended to be an exhaustive list of all the job responsibilities, duties, skills, or working conditions. In addition, this document does not create an employment contract, implied or otherwise, other than an "at will" relationship.

Vertafore strongly supports equal employment opportunity for all applicants regardless of race, color, religion, sex, gender identity, pregnancy, national origin, ancestry, citizenship, age, marital status, physical disability, mental disability, medical condition, sexual orientation, genetic information, or any other characteristic protected by state or federal law.

The Professional Services (PS) and Customer Success (CX) bonus plans are quarterly monetary bonus plans based on individual and practice performance against specific business metrics. Eligibility is determined by several factors, including start date, good standing in the company, and active status at time of payout. The Vertafore Incentive Plan (VIP) is an annual monetary bonus for eligible employees based on both individual and company performance; eligibility is determined by the same factors. Commission plans are tailored to each sales role, but common components include quota, MBO's, and ABPMs. Salespeople receive their formal compensation plan within 30 days of hire.

Vertafore is a drug-free workplace and conducts pre-employment drug and background screenings. We do not accept resumes from agencies, headhunters, or other suppliers who have not signed a formal agreement with us. We want to make sure our recruiting process is accessible for everyone. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact recruiting@vertafore.com. (This contact information is for accommodation requests only.)

Knowledge, Skills and Abilities:
- Around 6 years of outstanding practical experience orchestrating workflows using ServiceNow Application Engine, and establishing and maintaining Integration Hub integrations with other systems
- Advanced knowledge of forms and features for Service Catalog, Incident, Problem, Change, and Projects in ServiceNow
- Strong knowledge of ServiceNow Asset Management for managing the CMDB
- Great grasp of the ITIL framework and best practices
- Exceptional, solution-focused problem solver when handling simple to complex issues
- Strong understanding of, and experience enforcing, a development lifecycle when working on ServiceNow enhancements
- Expert knowledge of the latest ServiceNow features
- Other scripting experience, such as JavaScript, is a plus
- Excellent communication and interpersonal skills, with the ability to work with others from diverse backgrounds
- Established time management skills and the ability to juggle multiple tasks with an enthusiastic sense of urgency and the capability to meet deadlines
- Able to maintain professional composure in any situation
- Strong organizational and planning skills; able to work independently to deliver consistent results

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or an equivalent combination of education and working ServiceNow Administrator experience required
- ServiceNow Certified System Administrator or higher certification
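Integrations of the kind this listing describes often go through ServiceNow's REST Table API. A minimal sketch pulling open incidents with Python's requests library might look like the following; the instance URL and credentials are placeholders, and a production integration would use OAuth rather than basic auth.

```python
import requests

INSTANCE = "https://your-instance.service-now.com"  # placeholder
AUTH = ("integration.user", "password")             # placeholder; prefer OAuth

resp = requests.get(
    f"{INSTANCE}/api/now/table/incident",
    auth=AUTH,
    headers={"Accept": "application/json"},
    params={
        "sysparm_query": "active=true^state=1",  # open, new incidents
        "sysparm_fields": "number,short_description,priority",
        "sysparm_limit": 10,
    },
    timeout=30,
)
resp.raise_for_status()
for inc in resp.json()["result"]:
    print(inc["number"], inc["priority"], inc["short_description"])
```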

Posted 4 hours ago

Apply

5.0 years

1 - 3 Lacs

Hyderābād

On-site

Job Description

Overview:
Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and the corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.
- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities:
- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics (a minimal transform sketch follows this listing).
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications:
- 5+ years of technology work experience in a large-scale global organization; CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
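As a flavor of the pipeline work above, a minimal PySpark batch transform of the kind an ADF pipeline might trigger on Databricks is sketched below. The ADLS paths and column names are placeholders, assumed for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal batch transform: read raw JSON, standardize types, deduplicate,
# write partitioned Parquet. Paths and columns are placeholders for
# ADLS Gen2 locations and an assumed orders schema.
spark = SparkSession.builder.appName("orders-cleanse").getOrCreate()

raw = spark.read.json("abfss://raw@account.dfs.core.windows.net/orders/")
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
)
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("abfss://curated@account.dfs.core.windows.net/orders/"))
```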

Posted 4 hours ago

Apply

12.0 - 16.0 years

2 - 9 Lacs

Hyderābād

On-site

Job description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist, with 12–16 years of experience and the following skills:
- Advanced SQL development: write complex SQL queries for data extraction, transformation, and analysis; optimize SQL queries for performance and scalability.
- SQL tuning and joins: analyze and improve query performance, with a deep understanding of joins, indexing, and query execution plans.
- GCP BigQuery and GCS: work with Google BigQuery for data warehousing and analytics; manage and integrate data using Google Cloud Storage (GCS).
- Airflow DAG development: design, develop, and maintain workflows using Apache Airflow; write custom DAGs to automate data pipelines and processes (a minimal DAG sketch follows this listing).
- Python programming: develop and maintain Python scripts for data processing and automation; debug and optimize Python code for performance and reliability.
- Shell scripting: write and debug basic shell scripts for automation and system tasks.
- Continuous learning: stay updated with the latest tools and technologies in data engineering; demonstrate a strong ability and attitude to learn and adapt quickly.
- Communication: collaborate effectively with cross-functional teams; clearly communicate technical concepts to both technical and non-technical stakeholders.

Requirements
To be successful in this role, you should meet the following requirements:
- Advanced SQL writing and query optimization.
- Strong understanding of SQL tuning, joins, and indexing.
- Hands-on experience with GCP services, especially BigQuery and GCS.
- Proficiency in Python programming and debugging.
- Experience with Apache Airflow and DAG development.
- Basic knowledge of shell scripting.
- Excellent problem-solving skills and a growth mindset.
- Strong verbal and written communication skills.
- Experience with data pipeline orchestration and ETL processes.
- Familiarity with other GCP services like Dataflow or Pub/Sub.
- Knowledge of CI/CD pipelines and version control (e.g., Git).

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
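The Airflow-plus-BigQuery work above typically centers on DAGs like the minimal sketch below (assuming Airflow 2.x with the Google provider package installed). The project, dataset, and table names are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

# Minimal daily DAG running one BigQuery transformation.
# Project/dataset/table names are placeholders.
with DAG(
    dag_id="daily_sales_rollup",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    rollup = BigQueryInsertJobOperator(
        task_id="rollup_sales",
        configuration={
            "query": {
                "query": """
                    SELECT sale_date, SUM(amount) AS total
                    FROM `my-project.sales.transactions`
                    GROUP BY sale_date
                """,
                "useLegacySql": False,
                "destinationTable": {
                    "projectId": "my-project",
                    "datasetId": "sales",
                    "tableId": "daily_totals",
                },
                "writeDisposition": "WRITE_TRUNCATE",
            }
        },
    )
```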

Posted 4 hours ago

Apply

9.0 years

3 - 8 Lacs

Hyderābād

On-site

Job Description

Overview:
We are looking for a self-driven SRE support engineer with a software-engineering mindset, enabling SRE-driven orchestration of all components of the end-to-end ecosystem and preemptively diagnosing anomalies and remediating them through automation. The SRE support engineer is an integral part of the global team, whose main purpose is to provide a delightful customer experience for users of the global consumer, commercial, supply chain, and enablement functions in the PepsiCo digital products application (DPA) portfolio of 260+ applications, enabling a full SRE-practice incident prevention / proactive resolution model. The scope of this role is focused on the modern-architected application portfolio: B2B PepsiConnect, Direct to Customer, and other S&T roadmap applications. The role ensures that PepsiCo DPA applications deliver the service performance, reliability, and availability expected by our customers and internal groups. It requires a blend of technical expertise in SRE tools and modern application architecture, IT operations experience, and analytical and influencing skills.

Responsibilities:
- Reporting directly to the SRE & Modern Operations Associate Director, enable and execute the preemptive diagnosis of PepsiCo applications toward the service performance, reliability, and availability expected by our customers and internal groups.
- Act as a proactive support engineer, diagnosing anomalies before any user reports them and driving the necessary remediation across the teams involved.
- Develop and leverage aggregation/correlation solutions that integrate events across all ecosystem components of the modern architecture, and produce insights to continuously improve the user journey and order-flow experience in collaboration with software engineering teams.
- Drive incident response, root cause analysis (RCA), and post-mortem processes to ensure continuous improvement.
- Develop and maintain robust monitoring, alerting, and observability frameworks using tools like Grafana, ELK, etc. (a monitoring sketch follows this listing).
- Collaborate with product and engineering teams during the design and development phases to embed reliability and operability into new services.
- Participate in architecture reviews and provide SRE input on scalability, fault tolerance, and deployment strategies.
- Define and implement SLOs/SLIs for new services before they go live, ensuring alignment with business objectives.
- Work closely with customer-facing support teams to evolve and empower them with SRE insights.
- Participate in on-call support, orchestrate blameless post-mortems, and encourage the practice within the organization.
- Provide input to the definition, collection, and analysis of data on relevant product systems and their interactions, toward business process resiliency, especially where customer satisfaction is impacted.
- Actively engage in and drive AIOps adoption across teams.

Qualifications:
- 9–11 years of work experience evolving into an SRE engineer, with 3–5 years of experience in continuously improving and transforming IT operations ways of working.
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- The ideal engineer will be highly quantitative, have great judgment, be able to connect dots across ecosystems, and work efficiently cross-functionally to ensure SRE orchestration solutions meet customer/end-user expectations.
- Takes a pragmatic approach to resolving incidents, including the ability to systematically triangulate root causes and work effectively with external and internal teams to meet objectives.
- A firm understanding of SRE (Site Reliability Engineering) and IT Service Management (ITSM) processes, with a track record of improving service offerings: proactively resolving incidents, providing a seamless customer/end-user experience, and proactively identifying and mitigating areas of risk.
- Proven experience as an SRE in designing event diagnostics, performance measures, and alerting solutions to meet SLAs/SLOs/SLIs.
- Hands-on experience with Python, SQL, relational or non-relational databases, and AppDynamics, Grafana, Splunk, Dynatrace, or other SRE Ops toolsets.
- Deep hands-on technical expertise and excellent verbal and written communication skills.

Differentiating Competencies:
- Driving for results: demonstrates perseverance and resilience in the pursuit of goals; confronts and works to resolve tough issues; exhibits a "can-do" attitude and a willingness to take on significant challenges.
- Decision making: quickly analyses complex problems to find actionable, pragmatic solutions; sees connections in data, events, and trends; consistently works against the right priorities.
- Collaborating: collaborates well with others to deliver results; keeps others informed so there are no unnecessary surprises; effectively listens to and understands what other people are saying.
- Communicating and influencing: builds convincing, persuasive, and logical storyboards; strong executive presence; communicates effectively and succinctly, both verbally and on paper.
- Motivating and inspiring others: demonstrates passion, enjoyment, and pride in their work; demonstrates a positive attitude in the workplace; embraces and adapts well to change; creates a work environment that makes work rewarding and enjoyable.
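For the proactive anomaly diagnosis described above, SRE teams often poll metrics programmatically. A minimal sketch querying a Prometheus server's HTTP API for an error-rate SLI follows; the server URL, metric names, and error budget are placeholders.

```python
import requests

PROM = "http://prometheus.internal:9090"  # placeholder server
# PromQL: ratio of 5xx responses to all responses over 5 minutes.
# The metric name http_requests_total is an assumed instrumentation.
QUERY = (
    'sum(rate(http_requests_total{status=~"5.."}[5m]))'
    " / sum(rate(http_requests_total[5m]))"
)
SLO_ERROR_BUDGET = 0.001  # placeholder: 99.9% success objective

resp = requests.get(f"{PROM}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
result = resp.json()["data"]["result"]
error_rate = float(result[0]["value"][1]) if result else 0.0

if error_rate > SLO_ERROR_BUDGET:
    # In a real setup this would page on-call or open an incident.
    print(f"ALERT: error rate {error_rate:.4%} exceeds budget")
else:
    print(f"OK: error rate {error_rate:.4%}")
```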

Posted 4 hours ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderābād

On-site

Job Description

Overview:
DataOps L3. The role will leverage and enhance existing technologies in the area of data and analytics solutions, such as Power BI, Azure data engineering technologies, ADLS, ADB, Synapse, and other Azure services. The role will be responsible for developing and supporting IT products and solutions using these technologies and deploying them for business users.

Responsibilities:
- 5 to 10 years of IT and Azure data engineering technologies experience.
- Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services.
- Working experience in Python, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats like JSON and Parquet.
- Experience creating ADF pipelines to source and process data sets.
- Experience creating Databricks notebooks to cleanse, transform, and enrich data sets.
- Development experience in orchestration of pipelines.
- Good understanding of SQL, databases, and data warehouse systems, preferably Teradata.
- Experience in deployment and monitoring techniques (a data-quality gate sketch follows this listing).
- Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources.
- Experience in handling operations and integration with source repositories.
- Good knowledge of data warehouse concepts and data warehouse modelling is a must.
- Working knowledge of ServiceNow (SNOW), including resolving incidents, handling change requests/service requests, and reporting on metrics to provide insights.
- Collaborate with the project team to understand tasks, model tables using data warehouse best practices, and develop data pipelines to ensure the efficient delivery of data.
- Strong expertise in performance tuning and optimization of data processing systems.
- Proficient in Azure Data Factory, Azure Databricks, Azure SQL Database, and other Azure data services.
- Develop and enforce best practices for data management, including data governance and security.
- Work closely with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
- Proficient in implementing a DataOps framework.

Qualifications:
- Azure Data Factory
- Azure Databricks
- Azure Synapse
- PySpark/SQL
- ADLS
- Azure DevOps with CI/CD implementation

Nice-to-Have Skills:
- Business Intelligence tools (preferably Power BI)
- DP-203 certified
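Operations roles like this one typically automate basic data-quality gates after each load. A minimal PySpark sketch is below; the table name, key column, and thresholds are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal data-quality gate of the kind run after an ADF/Databricks load:
# fail the job if key columns are too sparse or the row count collapses.
spark = SparkSession.builder.appName("dq-gate").getOrCreate()

df = spark.table("curated.orders")  # placeholder table

row_count = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()

MIN_ROWS = 1_000           # placeholder threshold
MAX_NULL_KEY_RATIO = 0.01  # placeholder threshold

assert row_count >= MIN_ROWS, f"row count {row_count} below {MIN_ROWS}"
assert null_keys / row_count <= MAX_NULL_KEY_RATIO, (
    f"{null_keys} null order_id values out of {row_count} rows"
)
print(f"DQ gate passed: {row_count} rows, {null_keys} null keys")
```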

Posted 4 hours ago

Apply

7.0 years

0 Lacs

India

On-site

About Us:
MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.

Are You The One?
As a Technical Lead Engineer - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services (see the sketch after this listing).
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema-drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

MatchMove Culture:
We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication. We focus on employee development, supporting continuous learning and growth through training programs, learning on the job, and mentorship. We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans Asia, fostering a rich exchange of perspectives and experiences. Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives.

Personal Data Protection Act:
By submitting your application for this job, you authorize MatchMove to: collect and use your personal data, and disclose such data to any third party with whom MatchMove or any of its related corporations has service arrangements, in each case for all purposes in connection with your job application and employment with MatchMove; and retain your personal data for one year for consideration of future job opportunities (where applicable).
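The open-table-format work mentioned above often comes down to Spark's DataFrameWriterV2. A minimal Iceberg table write might look like the sketch below; the catalog name, namespace, and S3 path are placeholders, and an Iceberg-enabled Spark session (e.g., configured against the Glue Data Catalog) is assumed.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Assumes a Spark session already configured with an Iceberg catalog
# named "glue_catalog" (placeholder), e.g., via Glue Data Catalog setup.
spark = SparkSession.builder.appName("events-to-iceberg").getOrCreate()

events = spark.read.parquet("s3://raw-bucket/events/")  # placeholder path

# DataFrameWriterV2: create (or replace) a partitioned Iceberg table.
(events.writeTo("glue_catalog.fraud.events")
       .using("iceberg")
       .partitionedBy(col("event_date"))
       .createOrReplace())

# Later incremental loads can append to the same table:
# new_events.writeTo("glue_catalog.fraud.events").append()
```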

Posted 4 hours ago

Apply

5.0 years

2 - 7 Lacs

Hyderābād

Remote

At Meazure Learning, we believe in transforming learning and assessment experiences to unlock human potential. As a global leader in online testing and exam services, we support credentialing, licensure, workforce education, and higher education through purpose-built solutions that are secure, accessible, and deeply human-centered. With a global footprint across the U.S., Canada, India, and the U.K., our team is united by a passion for innovation and a commitment to integrity, quality, and learner success.

About the Role
We are looking for a seasoned Sr. DevOps Engineer to help us scale, secure, and optimize our infrastructure and deployment processes. This role is critical to enabling fast, reliable, and high-quality software delivery across our global engineering teams. You'll be responsible for designing and maintaining cloud-based systems, automating operational workflows, and collaborating across teams to improve performance, observability, and uptime. The ideal candidate is hands-on, proactive, and passionate about creating resilient systems that support product innovation and business growth.

Join Us and You'll:
- Help define and elevate the user experience for learners and professionals around the world
- Collaborate with talented, mission-driven colleagues across regions
- Work in a culture that values trust, innovation, and transparency
- Have the opportunity to grow, lead, and make your mark in a high-impact, global organization

Key Responsibilities:
- Design, implement, and maintain scalable, secure, and reliable CI/CD pipelines
- Manage and optimize cloud infrastructure (e.g., AWS, Azure) and container orchestration (e.g., Kubernetes)
- Drive automation across infrastructure and development workflows
- Build and maintain monitoring, alerting, and logging systems to ensure reliability and observability
- Collaborate with Engineering, QA, and Security teams to deliver high-performing, compliant solutions
- Troubleshoot complex system issues in staging and production environments
- Guide and mentor junior engineers and contribute to DevOps best practices

Desired Attributes and Key Skills:
- 5+ years of experience in a DevOps or Site Reliability Engineering role
- Deep knowledge of cloud infrastructure (AWS, Azure, or GCP)
- Proficiency with containerization (Docker, Kubernetes) and Infrastructure as Code tools (Terraform, CloudFormation)
- Hands-on experience writing code
- Hands-on experience with CI/CD platforms (Jenkins, GitHub Actions, or similar)
- Strong scripting capabilities (Bash, Python, or PowerShell)
- Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK, or Datadog)
- A problem-solver with excellent communication and collaboration skills

The Total Rewards - The Benefits:
- Company Sponsored Health Insurance
- Competitive Pay
- Healthy Work Culture
- Career Growth Opportunities
- Learning and Development Opportunities
- Referral Award Program
- Company Provided IT Equipment (for remote team members)
- Transportation Program (on-site team members)
- Company Provided Meals (on-site team members)
- 14 Company Provided Holidays
- Generous Leave Program

Learn more at www.meazurelearning.com

Meazure Learning is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind. Meazure Learning is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. All employment decisions at Meazure Learning are based on business needs, job requirements, and individual qualifications, without regard to race, color, religion or belief, national, social or ethnic origin, sex (including pregnancy), age, physical, mental or sensory disability, HIV status, sexual orientation, gender identity and/or expression, marital, civil union or domestic partnership status, past or present military service, family medical history or genetic information, family or parental status, or any other status protected by the laws or regulations in the locations where we operate. Meazure Learning will not tolerate discrimination or harassment based on any of these characteristics.

Posted 4 hours ago

Apply

15.0 years

3 - 7 Lacs

Hyderābād

On-site

Job Description

Overview:
PepsiCo is seeking a strategic and visionary Generative AI Solutions leader to lead transformative AI initiatives across Consumer, Commercial, and Reporting functions. This role will focus on designing scalable AI-driven business solutions, driving global change management, and aligning AI initiatives with enterprise goals. The ideal candidate brings deep domain experience, cross-functional leadership, and the ability to translate AI capabilities into measurable business outcomes—without managing the underlying AI platforms.

Responsibilities:

AI Transformation Strategy & Roadmapping
- Lead the definition and execution of enterprise-wide strategies for Consumer AI, Commercial AI, and Reporting AI use cases.
- Identify, prioritize, and solution complex AI-powered business opportunities aligned with PepsiCo's digital agenda.
- Translate market trends, AI capabilities, and business needs into an actionable Generative AI roadmap.

Solution Design & Cross-Functional Orchestration
- Drive cross-functional solutioning using PepsiCo's Gen-AI and agentic AI capabilities and platforms.
- Collaborate with business, data, and engineering teams to craft impactful AI agent-based solutions for commercial and consumer-facing functions, including Marketing and R&D.
- Architect and design future AI solutions leveraging agentic frameworks.
- Collaborate with engineering teams to provide the features necessary for building them.
- Work closely with Enterprise Architecture and Cloud Architecture teams to build scalable architecture.

Leadership, Influence, and Governance
- Act as the face of Generative AI solutioning for senior executives and transformation leaders.
- Drive alignment across global and regional teams for solution design, prioritization, and scale-up.
- Provide technical leadership and mentorship to the AI engineering team.
- Stay up to date with the latest advancements in AI and related technologies.
- Drive innovation and continuous improvement in AI platform development.
- Ensure solutions meet enterprise standards for Responsible AI, data privacy, and business continuity.

Qualifications:
- 15+ years of experience in enterprise AI, digital transformation, or solution architecture, with a track record of leading AI-powered business programs.
- Candidates must hold a BE/B.Tech/M.Tech/MS degree (full-time) in Engineering or a related technical field.
- Strong understanding of consumer/commercial business functions and how to apply AI to transform them (sales, marketing, supply chain, insights, reporting).
- Demonstrated experience designing Gen-AI or multi-agent solutions using orchestration frameworks like LangGraph, CrewAI, AutoGen, or Temporal.
- Deep capability in AI-powered reporting, scenario modeling, insight generation, and intelligent automation.
- Proven success in change management, stakeholder engagement, and global rollout of strategic programs.
- Excellent communication and influencing skills.

Posted 4 hours ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

Remote

Every day, tens of millions of people come to Roblox to explore, create, play, learn, and connect with friends in 3D immersive digital experiences—all created by our global community of developers and creators. At Roblox, we're building the tools and platform that empower our community to bring any experience that they can imagine to life. Our vision is to reimagine the way people come together, from anywhere in the world, and on any device. We're on a mission to connect a billion people with optimism and civility, and we're looking for amazing talent to help us get there. A career at Roblox means you'll be working to shape the future of human interaction, solving unique technical challenges at scale, and helping to create safer, more civil shared experiences for everyone.

Roblox Operating System (ROS) is our internal productivity platform that governs how Roblox operates as a company. Through an integrated suite of tools, ROS shapes how we make talent and personnel decisions, plan and organize work, discover knowledge, and scale efficiently. We are seeking a Senior Data Engineer to enhance our data posture and architecture, synchronizing data across vital third-party systems like Workday, Greenhouse, GSuite, and JIRA, as well as our internal Roblox OS application database. Our Roblox OS app suite encompasses internal tools and third-party applications for People Operations, Talent Acquisition, Budgeting, Roadmapping, and Business Analytics. We envision an integrated platform that streamlines processes while providing employees and leaders with the information they need to support the business. This is a new team in our Roblox India location, working closely with data scientists and analysts, product and engineering, and other stakeholders in India and the US. You will report to the Engineering Manager of the Roblox OS team in your local location and collaborate with Roblox internal teams globally.

Work Model: This role is based in Gurugram and follows a hybrid structure—3 days from the office (Tuesday, Wednesday & Thursday) and 2 days work from home.

Shift Time: 2:00pm - 10:30pm IST (cabs will be provided)

You Will:
- Design and Build Scalable Data Pipelines: Architect, develop, and maintain robust, scalable data pipelines using orchestration frameworks like Airflow to synchronize data between internal systems.
- Implement and Optimize ETL Processes: Apply a strong understanding of ETL (Extract, Transform, Load) processes and best practices for seamless data integration and transformation.
- Develop Data Solutions with SQL: Use your proficiency in SQL and relational databases (e.g., PostgreSQL) for advanced querying, data modeling, and optimizing data solutions.
- Contribute to Data Architecture: Actively participate in data architecture and implementation discussions, ensuring data integrity and efficient data transposition. Manage and optimize data infrastructure, including databases, cloud storage solutions, and API endpoints.
- Write High-Quality Code: Focus on developing clear, readable, testable, modular, and well-monitored code for data manipulation, automation, and software development, with a strong emphasis on data integrity.
- Troubleshoot and Optimize Performance: Apply excellent analytical and problem-solving skills to diagnose data issues and optimize pipeline performance.
- Collaborate Cross-Functionally: Work effectively with cross-functional teams, including data scientists, analysts, and business stakeholders, to translate business needs into technical data solutions.
- Ensure Data Governance and Security: Implement data anonymization and pseudonymization techniques to protect sensitive data (a minimal sketch follows this listing), and contribute to master data management (MDM) concepts, including data quality, lineage, and governance frameworks.

You Have:
- Data Engineering Expertise: At least 6 years of proven experience designing, building, and maintaining scalable data pipelines, coupled with a strong understanding of ETL processes and best practices for data integration.
- Database and Data Warehousing Proficiency: Deep proficiency in SQL and relational databases (e.g., PostgreSQL), and familiarity with at least one cloud-based data warehouse solution (e.g., Snowflake, Redshift, BigQuery).
- Technical Acumen: Strong scripting skills for data manipulation and automation. Familiarity with data streaming platforms (e.g., Kafka, Kinesis), and knowledge of containerization (e.g., Docker) and cloud infrastructure (e.g., AWS, Azure, GCP) for deploying and managing data solutions.
- Data & Cloud Infrastructure Management: Experience managing and optimizing data infrastructure, including databases, cloud storage solutions, and configuring API endpoints.
- Software Development Experience: Experience in software development with a focus on data integrity and transposition, and a commitment to writing clear, readable, testable, modular, and well-monitored code.
- Problem-Solving & Collaboration Skills: Excellent analytical and problem-solving abilities to troubleshoot complex data issues, combined with strong communication and collaboration skills to work effectively across teams.
- Passion for Data: A genuine passion for working with large volumes of data from various sources, and an understanding of the critical impact of data quality on company strategy at an executive level.
- Adaptability: Ability to thrive and deliver results in a fast-paced environment with competing priorities.

Roles that are based in an office are onsite Tuesday, Wednesday, and Thursday, with optional presence on Monday and Friday (unless otherwise noted). Roblox provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Roblox also provides reasonable accommodations for all candidates during the interview process.
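On the pseudonymization point in the listing above, a minimal keyed-hash approach using only the Python standard library might look like this. The key handling is a placeholder; a real system would pull the key from a secrets manager and rotate it.

```python
import hashlib
import hmac

# Placeholder key: in practice, load from a secrets manager, never code.
PSEUDONYM_KEY = b"rotate-me-regularly"

def pseudonymize(value: str) -> str:
    """Deterministically map a PII value to a stable pseudonym.

    HMAC (rather than a bare hash) prevents dictionary attacks by anyone
    who does not hold the key, while keeping joins across tables possible
    because the same input always yields the same token.
    """
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()

print(pseudonymize("jane.doe@example.com")[:16], "...")
```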

Posted 4 hours ago

Apply

8.0 - 12.0 years

2 - 9 Lacs

Hyderābād

On-site

Job description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist, with 8–12 years of experience and the following skills:
- Advanced SQL development: write complex SQL queries for data extraction, transformation, and analysis; optimize SQL queries for performance and scalability (a client-library sketch follows this listing).
- SQL tuning and joins: analyze and improve query performance, with a deep understanding of joins, indexing, and query execution plans.
- GCP BigQuery and GCS: work with Google BigQuery for data warehousing and analytics; manage and integrate data using Google Cloud Storage (GCS).
- Airflow DAG development: design, develop, and maintain workflows using Apache Airflow; write custom DAGs to automate data pipelines and processes.
- Python programming: develop and maintain Python scripts for data processing and automation; debug and optimize Python code for performance and reliability.
- Shell scripting: write and debug basic shell scripts for automation and system tasks.
- Continuous learning: stay updated with the latest tools and technologies in data engineering; demonstrate a strong ability and attitude to learn and adapt quickly.
- Communication: collaborate effectively with cross-functional teams; clearly communicate technical concepts to both technical and non-technical stakeholders.

Requirements
To be successful in this role, you should meet the following requirements:
- Advanced SQL writing and query optimization.
- Strong understanding of SQL tuning, joins, and indexing.
- Hands-on experience with GCP services, especially BigQuery and GCS.
- Proficiency in Python programming and debugging.
- Experience with Apache Airflow and DAG development.
- Basic knowledge of shell scripting.
- Excellent problem-solving skills and a growth mindset.
- Strong verbal and written communication skills.
- Experience with data pipeline orchestration and ETL processes.
- Familiarity with other GCP services like Dataflow or Pub/Sub.
- Knowledge of CI/CD pipelines and version control (e.g., Git).

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
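Beyond Airflow orchestration (a DAG sketch appears under the similar 12–16 year listing above), ad-hoc BigQuery work from Python usually goes through the official client library. A minimal parameterized query might look like the following; the project, dataset, and table names are placeholders, and application-default credentials are assumed.

```python
from google.cloud import bigquery

# Assumes application-default credentials are configured locally.
client = bigquery.Client(project="my-project")  # placeholder project

job_config = bigquery.QueryJobConfig(
    query_parameters=[
        # Parameterized queries avoid string interpolation entirely.
        bigquery.ScalarQueryParameter("min_amount", "INT64", 100),
    ]
)
sql = """
    SELECT customer_id, SUM(amount) AS total
    FROM `my-project.sales.transactions`
    WHERE amount >= @min_amount
    GROUP BY customer_id
    ORDER BY total DESC
    LIMIT 10
"""
for row in client.query(sql, job_config=job_config).result():
    print(row.customer_id, row.total)
```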

Posted 4 hours ago

Apply

7.0 years

0 Lacs

Hyderābād

On-site

Job Description Overview As a key member of the team, you will be responsible for designing, building, and maintaining the data pipelines and platforms that support analytics, machine learning, and business intelligence. You will lead a team of data engineers and collaborate closely with cross-functional stakeholders to ensure that data is accessible, reliable, secure, and optimized for AI-driven applications. Responsibilities Architect and implement scalable data solutions to support LLM training, fine-tuning, and inference workflows. Lead the development of ETL/ELT pipelines for structured and unstructured data across diverse sources. Ensure data quality, governance, and compliance with industry standards and regulations. Collaborate with Data Scientists, MLOps, and product teams to align data infrastructure with GenAI product goals. Mentor and guide a team of data engineers, promoting best practices in data engineering and DevOps. Optimize data workflows for performance, cost-efficiency, and scalability in cloud environments. Drive innovation by evaluating and integrating modern data tools and platforms (e.g., Databricks, Azure, etc.). Qualifications Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related technical field. 7+ years of experience in data engineering, with at least 2 years in a leadership or senior role. Proven experience designing and managing data platforms and pipelines in cloud environments (Azure, AWS, or GCP). Experience supporting AI/ML workloads, especially involving Large Language Models (LLMs). Strong proficiency in SQL and Python. Hands-on experience with data orchestration tools.

Posted 4 hours ago

Apply

3.0 years

2 - 3 Lacs

Hyderābād

On-site

India Information Technology (IT) Group Functions Job Reference # 322652BR City Hyderabad Job Type Full Time Your role If you are a highly motivated and skilled DevOps Engineer, we are excited to hear from you. We are looking for strong candidates to develop and execute a comprehensive DevOps strategy aligned with organizational goals and objectives, driving continuous improvement across all aspects of development and operations. Objectives: Implement scalable infrastructure solutions applying the right design principles & UBS practices. Collaborate and work with application development teams to design and implement required infrastructure solutions. Your team You'll be building & working within the Group Chief Technology Organization, focusing on the delivery of the enterprise data mesh. You will be joining a team that is helping to scale, build and leverage data products in the firm. The team partners with different divisions and functions across the Bank to develop innovative digital solutions and expand our technical expertise into new areas. As a DevOps engineer, you will be part of a committed, quality-driven technical group working on cutting-edge technologies using Azure. Your expertise 3+ years of experience in software development, system administration, and cloud infrastructure management. Experience with advanced DevOps practices: designing, implementing, and maintaining scalable and secure cloud infrastructure, preferably using Azure. Managing infrastructure as code (IaC) using tools like Terraform, Bicep, or ARM templates. Experience with CI/CD pipelines to deploy IaC using tools like Azure DevOps. Automating deployment processes for cloud-based applications and services and ensuring high availability and performance of CI/CD systems. Experience with cloud-based logging, monitoring and alerting solutions like Azure Monitor, Application Insights, Grafana, and Prometheus. Your Education Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience) Your Technical Skills Proficiency in scripting languages such as Python, Bash, or PowerShell. Extensive hands-on experience with containerization and orchestration (e.g., Docker, Kubernetes). Advanced knowledge of public cloud services, preferably Azure services including AKS, App Services, and Networking. Strong experience with monitoring tools like Grafana and Prometheus. Proficiency with version control systems (e.g., Git). Advanced knowledge of networking, Linux/Unix systems, and cloud architecture. Good to have: exposure to implementing cluster mesh & service mesh topology patterns. Knowledge of Istio, managed Kubernetes environments, and microservices architecture. Certifications: Relevant certifications (e.g., Azure DevOps Engineer Expert, Kubernetes certifications) are highly preferred. About us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How we hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success.
We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.

Posted 4 hours ago

Apply

2.0 years

9 - 10 Lacs

Hyderābād

On-site

Job Description Summary The embedded software quality test engineer is part of a research and development team responsible for designing and testing software for industrial control applications, primarily for the electrical transmission and distribution industry. Product testing includes a variety of automated, manual and simulation procedures designed to validate the quality and performance of the products in line with design and industry requirements. Job Description Essential Responsibilities Be part of an agile development team that develops embedded software applications. Familiarize with GE controllers and develop a good understanding of their functionality. Collaborate with development and system teams to test containerized microservices (Docker, Kubernetes) in complex simulation environments. Own and execute test cases for each requirement as part of an agile iteration schedule. Identify and ensure requirements traceability to test cases. Identify and report defects detected during testing. Assist in prioritization of reported defects and work with software developers to facilitate timely closure. Verify resolution of fixed defects. Record and report test results in an effective manner. Design functional verification test plans to validate performance, boundary and negative testing. Qualifications/Requirements Bachelor's degree in STEM. Minimum 2 years of experience in software development and test, SCADA communications or system integration for control systems. Knowledge of basic electronic engineering fundamentals, electrical protection, substation automation and SCADA. Ability to learn and apply test tools such as protocol analyzers, software simulation applications, device configuration tools. Able to work both as part of a team and independently, utilizing agile execution tools. Familiarity with Substation Automation and SCADA applications and protocols. Understanding of utility/SCADA communication protocol concepts, networking and interaction between Intelligent Electronic Devices. Hands-on with systems designed based on industrial communication protocols, technologies and standards such as DNP3, Modbus, IEC 60870, IEC 61850, IEEE 1588, Ethernet communications and cyber security. Hands-on experience with container technologies (e.g., Docker) and orchestration tools (e.g., Kubernetes). Desired Characteristics Capacity to listen, understand and synthesize end-user requirements in a multi-cultural environment. Ability to multi-task and stay organized. High energy, self-starter, with a proven track record in delivering results. Establishes a sense of urgency to complete tasks in an efficient, timely, and effective manner. Strong team player, able to foster good working relationships with other functional areas. Familiar with fundamental program tools and processes. Strong problem-solving skills. Ability to work independently. Strong oral and written communication skills. Familiarity with Substation Automation and SCADA applications and protocols will be an asset. Understanding of utility/SCADA communication protocol concepts, networking and interaction between Intelligent Electronic Devices will be an asset. Experience with industrial applications will be an asset. Experience in validating and troubleshooting software within containerized or virtualized environments will be an asset. Additional Information Relocation Assistance Provided: Yes

Posted 4 hours ago

Apply

3.0 years

3 - 7 Lacs

Cochin

On-site

Minimum Required Experience : 3 years Full Time Skills Azure Cloud Kubernetes Helm Charts Git Docker Description Job Title: Software DevOps Engineer (3-5 Years Experience) or Senior Software DevOps Engineer (5-10 Years Experience) Job Description: Responsibilities: Design, implement, and maintain CI/CD pipelines to ensure efficient and reliable software delivery. Collaborate with Development, QA, and Operations teams to streamline the deployment and operation of applications. Monitor system performance, identify bottlenecks, and troubleshoot issues to ensure high availability and reliability. Automate repetitive tasks and processes to improve efficiency and reduce manual intervention. Participate in code reviews and contribute to the improvement of best practices and standards. Implement and manage infrastructure as code (IaC) using Terraform. Document processes, configurations, and procedures for future reference. Stay updated with the latest industry trends and technologies to continuously improve DevOps processes. Create POCs for the latest tools and technologies. Requirements: Bachelor's degree in Computer Science, Information Technology, or a related field. 1-3 years of experience in a DevOps or related role. Proficiency with version control systems (e.g., Git). Experience with scripting languages (e.g., Python, Bash). Strong understanding of CI/CD concepts and tools (e.g., Azure DevOps, Jenkins, GitLab CI). Experience with cloud platforms (e.g., AWS, Azure, GCP). Familiarity with containerization technologies (e.g., Docker, Kubernetes). Basic understanding of networking and security principles. Strong problem-solving skills and attention to detail. Excellent communication and teamwork skills. Ability to learn and adapt to new technologies and methodologies. Ready to work with clients directly. Mandatory Skills: Azure Cloud, Azure DevOps, CI/CD pipelines, version control (Git); Linux commands, Bash scripting; Docker, Kubernetes, Helm charts; any monitoring tools such as Grafana, Prometheus, ELK Stack, Azure Monitoring; Azure, AKS, Azure Storage, Virtual Machines; understanding of microservices architecture, orchestration, SQL Server. Optional Skills: Ansible scripting, Kafka, MongoDB, Key Vault, Azure CLI
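
Given the emphasis on monitoring stacks like Grafana and Prometheus in this listing, here is a minimal Python sketch of exposing custom application metrics for Prometheus to scrape. It assumes the prometheus_client package (pip install prometheus-client); the metric name and port are illustrative.

    # Expose a demo counter at http://localhost:8000/metrics for Prometheus.
    import random
    import time

    from prometheus_client import Counter, start_http_server

    REQUESTS = Counter("demo_requests_total", "Total demo requests handled")

    if __name__ == "__main__":
        start_http_server(8000)      # serve the /metrics endpoint
        while True:
            REQUESTS.inc()           # count one unit of simulated work
            time.sleep(random.uniform(0.1, 0.5))

A Prometheus scrape job pointed at port 8000 would then feed dashboards and alerts in Grafana.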

Posted 4 hours ago

Apply

1.0 - 2.0 years

2 - 3 Lacs

Cochin

On-site

We are looking for a skilled Junior DevOps Engineer to join our team and help us streamline our development and deployment processes. In this role, you will work closely with software developers, IT operations, and system administrators to build and maintain scalable infrastructure, automate deployment pipelines, and ensure the reliability and efficiency of our systems. You will play a key role in implementing best practices for continuous integration and continuous deployment (CI/CD), monitoring, and cloud services. Experience: 1-2 years as a DevOps Engineer Location: Kochi, Infopark Phase II Immediate Joiners Preferred Key Responsibility Area Exposure to version control systems such as Git, SVN (Subversion), and Mercurial. Experience in CI/CD tools like Jenkins, Travis CI, CircleCI, and GitLab CI/CD. Proficiency in configuration management tools such as Ansible, Puppet, Chef, and SaltStack. Knowledge of containerization platforms such as Docker and container orchestration tools like Kubernetes. Exposure to Infrastructure as Code (IaC) tools like Terraform, AWS CloudFormation, Azure Resource Manager, and Google Cloud Deployment Manager. Experience in monitoring and logging solutions such as Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, and Datadog. Knowledge of collaboration and communication platforms such as Slack and Atlassian Jira. Qualifications Bachelor’s degree in Computer Science, Information Technology, or a related field. Proven experience as a DevOps Engineer or in a similar role. Job Types: Full-time, Permanent Pay: ₹240,000.00 - ₹350,000.00 per year Benefits: Health insurance Provident Fund Schedule: Day shift Monday to Friday Supplemental Pay: Performance bonus Yearly bonus Application Question(s): Are you willing to relocate to Kochi? What is your notice period? Work Location: In person

Posted 4 hours ago

Apply

5.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

We deliver the world’s most complex projects. Work as part of a collaborative and inclusive team. Enjoy a varied & challenging role. Building on our past. Ready for the future. Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we’re bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects. The Role As an Applications Technical Specialist II with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience. Ensure project deployment as per conceptual design documentation and architecture. Collaborate with various Information Technology and business stakeholder groups to ensure deployed solutions meet all agreed-upon criteria. Primary resource responsible for development of enhancements & fixes, and ongoing support of ServiceNow HRSD. Design, develop and implement service portal related enhancements/fixes within the ServiceNow HRSD module. Develop integrations on the ServiceNow platform with various modules (ITSM, HRSD, custom apps, etc.). Build and maintain Service Catalogues/Record Producers inclusive of workflow and Orchestration. Create and maintain client scripts, business rules, UI Policies, widgets, service portal, jobs, etc. (JavaScript/HTML/CSS). Troubleshoot and resolve any potential technical application issues. Adhere to ServiceNow best practices (code best practices, update sets, table relationships, application customization, etc.). Adhere to Worley Change Management principles to ensure the stability of sub-production and production environments. Proactive, responsive and focused on anticipating future requirements and/or issues. Recover quickly after change, disruptions, or mistakes and remain productive and focused. Is adaptable and can apply lessons learned in one situation to another situation. Develop clear and concise technical/process documentation. Global reports creation and administration with platform analytics or performance analytics features. Provide HRSD application training to business teams and help desks (train the trainer). About You To be considered for this role it is envisaged you will possess the following attributes: Excellent interpersonal and presentation skills. Fluent in spoken and written English. 5+ years experience as a ServiceNow Administrator. 5+ years experience using JavaScript in ServiceNow. 5+ years experience as an administrator for ServiceNow Service Catalogs and Service Portal. 5+ years experience using web services in ServiceNow (REST and SOAP). 5+ years experience integrating ServiceNow with other platforms via all available options (automated flat file loads and transform maps, web services, connectors, etc). Experience implementing and maintaining SLAs. Experience using Integration Hub and Service Graph connectors. Experience acting as an administrator for all ITSM modules. Experience acting as the primary regression testing resource for a ServiceNow upgrade.
Strong understanding of the Users, Groups, Roles, and Security Groups implementation in ServiceNow and the automated methods used to maintain them. Sound knowledge of industry standards and methodologies. Broad understanding of software applications in use at Worley including but not limited to Peoplelink, Oracle eBusiness Suite, Windows Operating Systems, Citrix, Systems Centre Suite of Products, Active Directory, Azure, Office 365, SharePoint, MS Teams. Ability to work with globally dispersed virtual teams across a number of disciplines; experience with Finance Service Management, HAM, HRSD, and ITOM applications (Discovery, Event Management, Operational Intelligence, Orchestration, Service Mapping, CMDB) is highly desirable. Personal Qualities/Behaviours: Strong work ethic. Detail oriented and able to solve problems with efficient troubleshooting. Self-driven and takes responsibility. Moving forward together We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. We’re building a diverse, inclusive and respectful workplace. Creating a space where everyone feels they belong, can be themselves, and are heard. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change. Worley takes personal data protection seriously and respects EU and local data protection laws. You can read our full Recruitment Privacy Notice Here. Please note: If you are being represented by a recruitment agency you will not be considered; to be considered you will need to apply directly to Worley. Company Worley Primary Location IND-MM-Mumbai Other Locations IND-KR-Bangalore, IND-AP-Hyderabad, IND-MM-Pune, IND-TN-Chennai, IND-MM-Navi Mumbai Job Applications Schedule Full-time Employment Type Employee Job Level Experienced Job Posting Jul 4, 2025 Unposting Date Aug 3, 2025 Reporting Manager Title Senior General Manager

Posted 4 hours ago

Apply

0 years

0 Lacs

Gurgaon

On-site

MongoDB's mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. We enable organizations of all sizes to easily build, scale, and run modern applications by helping them modernize legacy workloads, embrace innovation, and unleash AI. Our industry-leading developer data platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available in more than 115 regions across AWS, Google Cloud, and Microsoft Azure. Atlas allows customers to build and run applications anywhere—on premises, or across cloud providers. With offices worldwide and over 175,000 new developers signing up to use MongoDB every month, it's no wonder that leading organizations, like Samsung and Toyota, trust MongoDB to build next-generation, AI-powered applications. The Enterprise Advanced team is a diverse group of individuals across Europe and India, who develop software to run MongoDB on any type of infrastructure at global scale. Our software and services allow users to deploy fault-tolerant, globally distributed MongoDB clusters in minutes. The main focus of this team is to adapt our software to manage MongoDB clusters which are deployed in data centers or private cloud platforms. You will work on the core functionality for all of our products, mainly on the Ops Manager, Cloud Manager, and Automation products. Our team's end users are some of the largest businesses in the world, deploying massive clusters and processing huge amounts of data. We are looking to speak to candidates who are based in Gurugram for our hybrid working model. This role will report to the Engineering Manager, also based in our Gurgaon office. What you'll do... Implement, test, and release features for Cloud Manager and Ops Manager. Test and incrementally ship elements of complex projects. Apply our core values to your work, in planning, design, and coding. Assist with troubleshooting bugs in customer deployments. A great fit for this role will be Someone who loves programming! Someone who enjoys working with others to achieve a common goal! You're flexible! You're willing to take on a wide variety of responsibilities, learning as you go. You're a self-starter!
You're comfortable organizing your own time, acting on feedback and prioritizing with guidance from senior members of your team. Requirements: Experience working as a backend engineer. Experience building multi-threaded, asynchronous, distributed systems. Good knowledge of Computer Science fundamentals (data structures and algorithms). Good understanding of Object Orientation concepts. Nice to haves: Familiarity running services on Cloud Infrastructure (Amazon AWS, Google Cloud Platform, Microsoft Azure), using containers and/or container orchestration platforms (Docker, Kubernetes, Openshift). Experience working directly with production systems. Experience with multiple programming languages. Experience with the MongoDB Server (specialized, in-depth training will be provided upon joining). Experience or interest in full-stack web application development. What is in it for you: Generous compensation package (top-range salary, equity, comprehensive benefits). Flexible working options. Opportunities to learn on the job (time to upskill in new technologies). Team budget for attending industry-specific conferences and training. High level of independence in your day-to-day work. Engineers in our team have the chance to work with multiple programming languages (Java, JavaScript/React, Golang). You'll be joining a good-humored supportive team that works well together! To drive the personal growth and business impact of our employees, we're committed to developing a supportive and enriching culture for everyone. From employee affinity groups, to fertility assistance and a generous parental leave policy, we value our employees' wellbeing and want to support them along every step of their professional and personal journeys. Learn more about what it's like to work at MongoDB, and help us make an impact on the world! MongoDB is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request an accommodation due to a disability, please inform your recruiter. MongoDB is an equal opportunities employer. Requisition ID 2263207586

Posted 4 hours ago

Apply

5.0 years

8 - 9 Lacs

Gurgaon

On-site

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success. Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We’re building a more open world. Join us. Introduction to team: Our Supply and Market Place division is sourcing the best possible inventory and content from our partners, generating the best prices and customer experience, and ensuring our supply is transacted fairly across our marketplace. This division builds innovative products, services, and tools to deliver high-quality experiences for both partners and travellers. The goal of the Supply Coaching Foundation org is to delight partners by connecting them to the right travellers. We’ll do that by building an adaptive experience that provides data- and ML-driven opportunities to our partners to help them grow their business. As part of the Scout team, we compute, organize and stream the recommended actions for EG's supply partners, with the ultimate goal of maximizing the returns on their time investment in the Expedia Marketplace. We also track partners' reactions to these recommendations to continuously learn and evolve. Our team works very closely with Machine Learning Scientists in a fast-paced Agile environment to create and productionize algorithms that directly impact the partners of Expedia. In this role, you will: Work in a cross-functional, geographically distributed team of Machine Learning engineers and ML Scientists to design and code large-scale batch and real-time pipelines on the Cloud. Prototype creative solutions quickly by developing minimum viable products, and work with seniors and peers in crafting and implementing the technical vision of the team. Act as a point of contact for junior team members, offering advice and direction. Actively participate in all phases of the end-to-end ML model lifecycle (includes feature engineering, model training, model scoring, model validation) for enterprise applications projects to tackle sophisticated business problems in production environments. Collaborate with a global team of data scientists, administrators, data analysts, data engineers, and data architects on production systems and applications. Collaborate with cross-functional teams to integrate generative AI solutions into existing workflow systems. Participate in code reviews to assess overall code quality and flexibility. Define, develop and maintain artifacts like technical design or partner documentation. Maintain, monitor, support and improve our solutions and systems with a focus on service excellence. Experience and qualifications: Degree in software engineering, computer science, informatics or a similar field. Experience: 5+ years with a Bachelor's, 3+ years with a Master's. Comfortable programming in Python (primary) and Scala (secondary).
Hands-on experience with OOAD, design patterns, SQL and NoSQL. Must have experience in big data technologies, in particular Spark, Hive, Hue and Databricks. Experience in developing and deploying Batch and Real Time Inferencing applications. You have a good understanding of machine learning pipelines and the ML lifecycle. Familiarity with the basics of both traditional ML and Gen-AI algorithms and tools. Experience of using cloud services (e.g. AWS). Experience with workflow orchestration tools (e.g. Airflow). Passionate about learning, especially in the areas of micro-services, system architecture, Data Science and Machine Learning. Experience working with Agile/Scrum methodologies. Accommodation requests If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others. Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50 Employment opportunities and job offers at Expedia Group will always come from Expedia Group’s Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you’re confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.
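
As a rough illustration of the batch side of this role, here is a minimal PySpark sketch that aggregates raw events into per-partner features. The paths and column names are hypothetical placeholders, not Expedia's schema.

    # Minimal PySpark batch job: aggregate raw events into features.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("partner_feature_batch").getOrCreate()

    # Read raw events, compute simple per-partner aggregates, write them out.
    events = spark.read.parquet("s3://example-bucket/raw/events/")
    features = events.groupBy("partner_id").agg(
        F.count("*").alias("event_count"),
        F.max("event_ts").alias("last_seen"),
    )
    features.write.mode("overwrite").parquet("s3://example-bucket/features/daily/")
    spark.stop()

In production, a job like this would typically be scheduled by a workflow orchestrator such as Airflow, as the listing suggests.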

Posted 4 hours ago

Apply

0 years

4 - 10 Lacs

Chennai

Remote

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What you’ll be doing... You'll be part of the “Verizon Global Infrastructure (VGI), Network & Information Security” group working on security and automation tools to protect against cyber threats within the VGS Technology organization. You will work with a team of cybersecurity engineers with a network & infrastructure background and threat intelligence analysts to automate the use case recommendations provided in order to secure the network infrastructure. You will develop code and scripts to provide technology solutions that address security and automation opportunities. Some of your daily responsibilities would be the following. Developing and maintaining Network & Infrastructure Security reporting dashboards and scorecards. Automating network platform integration with logging tools & systems. Maintaining the automation platforms supporting VGS Enterprise Network, On-Prem Infrastructure, Datacenters and Cloud organizations in their cyber practice. Performing automation and orchestration of all security tools for visibility and prevention. Driving a culture of security by design and automation to scale cyber security practices. Identifying opportunities and use cases for automation to remediate vulnerabilities, implement controls, orchestrate between tools and automate security practices. Ensuring effectiveness and coverage of security, policies and controls of VGS Network & Infrastructure, prioritizing risk level. Leveraging industry-proven tools to identify and reduce cyber risks. Ensuring the security posture of VGS Network & Infrastructure, e.g., access management, vulnerability remediation, etc. Assisting in Crisis Management, Ransomware Recovery and Business Continuity planning. What we’re looking for... You are passionate about cloud network security and automation as a career. You are self-driven and motivated, with good communication and analytical skills. You’re a sought-after team member that thrives in a dynamic work environment. You will be working with multiple partners from the business groups, so networking and managing effective working relationships should be your topmost priority. You have an understanding of industry trends in all areas of Information Security. You'll need to have: Bachelor’s degree in Computer Science / Information Technology Engineering or one or more years of experience. Two or more years of relevant work experience in IT or automation. Strong knowledge of one or more programming languages. Good knowledge of DevOps, CI/CD and tools like Git, Jenkins, Docker, Netmiko etc. Knowledge of network or security fundamentals. Strong communication and collaboration abilities. Even better if you have one or more of the following: Network-relevant certifications like CCNA. Cloud-relevant certifications like CSPM, CNAPP, CWPP will be an added plus. Strong expertise in at least one operating system, Windows or Linux.
Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Posted 4 hours ago

Apply

0 years

2 - 6 Lacs

Chennai

On-site

TransUnion's Job Applicant Privacy Notice What We'll Bring: Experienced Sr. Analyst responsible for providing application operations support for critical business applications, ensuring system stability and resolving incidents within SLA. Collaborates with cross-functional teams to troubleshoot issues, monitor performance, and implement process improvements. Mentors junior team members and is proficient in leveraging the latest DevOps tools and practices (Docker, Kubernetes, containerization, cloud) and various monitoring tools to enhance efficiency. What You'll Bring: Provide application operations support for critical business applications, ensuring high availability, quick incident resolution, and minimal business disruption. Proactively monitor application and system health using tools like Grafana, Splunk, and AppDynamics; respond to alerts and system anomalies. Troubleshoot and resolve incidents, perform root cause analysis, and work collaboratively with development and infrastructure teams for permanent fixes (excellent working knowledge of Linux, SQL, Splunk, Grafana and various other monitoring tools such as AppDynamics and Spotfire). Document knowledge base articles, RCA reports, and support runbooks to streamline operational workflows and ensure team alignment. Participate in a 24x7 shift, on-call support rotation, ensuring timely handling of high-priority incidents and escalations. Follow ITIL processes such as Incident, Problem, and Change Management; experience with tools like ServiceNow or BMC Remedy is preferred. Support deployments, release coordination, and post-deployment validation as part of the release and change management cycle. Work with modern DevOps tools like Git, Jenkins, Docker, Kubernetes, and CI/CD pipelines in cloud-based environments (AWS/Azure). Mentor and guide junior support analysts, fostering knowledge sharing and best practices for consistent service delivery. Communicate clearly and professionally with stakeholders, providing timely updates, impact assessments, and issue resolution plans. Bachelor’s degree in Computer Science, IT, or a related field. Certifications: ITIL Foundation (required), and any of the following are a plus: AWS Cloud Practitioner, Microsoft Azure Fundamentals, Docker/Kubernetes certifications, or DevOps-related credentials. Excellent written and verbal communication skills, with a focus on clarity, responsiveness, and stakeholder engagement. Impact You'll Make: Strong hands-on expertise in Linux/Unix environments is mandatory, including shell scripting and system troubleshooting. Experienced in ITSM tools like BMC Remedy and ServiceNow for incident, problem, and service request tracking. Hands-on experience in containerization and orchestration using Docker and Kubernetes; working knowledge of monitoring/logging tools (Grafana, Splunk). Familiarity with cloud-based applications and environments, with the ability to support and troubleshoot distributed systems. Proficiency in SQL for data investigation and support, with the ability to write queries and analyze logs for issue resolution. Additional automation experience is an added advantage. This is a hybrid position and involves regular performance of job responsibilities virtually as well as in-person at an assigned TU office location for a minimum of two days a week. TransUnion Job Title Sr Analyst, Applications Support

Posted 4 hours ago

Apply

10.0 - 12.0 years

6 - 8 Lacs

Chennai

On-site

The Applications Development Senior Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities. Responsibilities: Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas. Monitor and control all phases of the development process and analysis, design, construction, testing, and implementation, as well as provide user and operational support on applications to business users. Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, provide evaluation of business process, system process, and industry standards, and make evaluative judgement. Recommend and develop security measures in post implementation analysis of business usage to ensure successful system design and functionality. Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems. Ensure essential procedures are followed and help define operating standards and processes. Serve as advisor or coach to new or lower level analysts. Has the ability to operate with a limited level of direct supervision. Can exercise independence of judgement and autonomy. Acts as SME to senior stakeholders and/or other team members. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. Qualifications: 10-12 years of relevant experience. Experience in systems analysis and programming of software applications. Experience in managing and implementing successful projects. Working knowledge of consulting/project management techniques/methods. Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements. Required Skills: Experience designing and developing robust backend applications using Spring Boot, Spring Batch and other Spring ecosystem modules. Experience architecting, developing, and deploying microservices solutions on cloud platforms using containerization and orchestration tools. Experience in Lightspeed. Experience in Kafka or any messaging tools. Experience with Java-RDBMS (Oracle) development. Experience in client reporting like Advice and Statements is a plus. Knowledge of operating systems – Linux/Unix (SUN/IBM), Windows. Working experience with application servers - WebLogic, WebSphere. Any experience with ISIS Papyrus and ETL tools would be a plus. Education: Bachelor’s degree/University degree or equivalent experience. This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. - Job Family Group: Technology - Job Family: Applications Development - Time Type: Full time - Most Relevant Skills Please see the requirements listed above.
- Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. - Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 4 hours ago

Apply

5.0 years

0 Lacs

India

Remote

Experience: 5+ years Salary: Confidential (based on experience) Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Netskope) What do you need for this opportunity? Must have skills required: Java, Python, Golang, AWS, Google Cloud, Azure, MongoDB, PostgreSQL, Yugabyte, AuroraDB Netskope is looking for: About Netskope Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter @Netskope. About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. Netskope's API Protection team is responsible for designing and implementing a scalable and elastic architecture to provide protection for enterprise SaaS and IaaS application data. This is achieved by ingesting high-volume activity events in near real-time and analyzing data to provide security risk management for our customers, including data security, access control, threat prevention, data loss prevention, user coaching and more. What’s In It For You As a member of this team, you will work in an innovative, fast-paced environment with other experts to build Cloud-Native solutions using technologies like Kubernetes, Helm, Prometheus, Grafana, Jaeger (open tracing), persistent messaging queues, SQL/NO-SQL databases, key-value stores, etc. You will solve complex scale problems, and deploy and manage the solution in production. If you are driven by high-quality, high-velocity software delivery challenges, and using innovative and cutting edge solutions to achieve these goals, we would like to speak with you. What you will be doing Architect and implement critical software infrastructure for distributed large-scale multi-cloud environments. Review architectures and designs across the organization to help guide other engineers to build scalable cloud services. Provide technical leadership and strategic direction for large-scale distributed cloud-native solutions. Be a catalyst for improving engineering processes and ownership. Research, incubate, and drive new technologies to ensure we are leveraging the latest innovations.
Required Skills And Experience 5 to 15 years of experience in the field of software development. Excellent programming experience with Go, C/C++, Java, Python. Experience building and delivering cloud microservices at scale. Expert understanding of distributed systems, data structures, and algorithms. A skilled problem solver well-versed in considering and making technical tradeoffs. A strong communicator who can quickly pick up new concepts and domains. Bonus points for Golang knowledge. Production experience with building, deploying and managing microservices in Kubernetes or similar technologies is a bonus. Production experience with Cloud-native concepts and technologies related to CI/CD, orchestration (e.g. Helm charts), observability (e.g. Prometheus, Opentracing), distributed databases, messaging (REST, gRPC) is a bonus. Education: BSCS or equivalent required; MSCS or equivalent strongly preferred. How to apply for this opportunity? Step 1: Click on Apply! and register or login on our portal. Step 2: Complete the screening form & upload your updated resume. Step 3: Increase your chances to get shortlisted & meet the client for the interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 4 hours ago

Apply

Exploring Orchestration Jobs in India

India has seen significant growth in demand for orchestration professionals in recent years, with many companies embracing automation and cloud technologies. As a result, job seekers with orchestration skills have ample opportunities to explore in the Indian job market.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Chennai

Average Salary Range

The average salary range for orchestration professionals in India varies based on experience levels. Entry-level positions can expect to earn around ₹4-6 lakhs per annum, while experienced professionals can earn upwards of ₹15 lakhs per annum.

Career Path

In the field of orchestration, a typical career progression may include roles such as Junior Orchestration Engineer, Orchestration Specialist, Senior Orchestration Architect, and Orchestration Manager.

Related Skills

Alongside orchestration, professionals in this field are often expected to have skills in cloud computing, automation, scripting (e.g., Python, Shell), containerization technologies (e.g., Docker, Kubernetes), and infrastructure as code tools (e.g., Terraform).
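
To make the idea concrete before the interview questions below, here is a toy Python sketch of the core mechanic of orchestration: running automated tasks in dependency order. Real orchestrators such as Airflow or Kubernetes add scheduling, retries, and distribution on top of this; the task names here are made up.

    # Toy orchestrator: run tasks in dependency order (Python 3.9+).
    from graphlib import TopologicalSorter

    # Each task maps to the tasks it depends on.
    tasks = {
        "provision": [],
        "deploy": ["provision"],
        "migrate_db": ["provision"],
        "smoke_test": ["deploy", "migrate_db"],
    }

    def run(name: str) -> None:
        print(f"running {name}")  # a real runner would invoke actual tooling

    for name in TopologicalSorter(tasks).static_order():
        run(name)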

Interview Questions

  • What is orchestration and why is it important in cloud computing? (basic)
  • Can you explain the difference between orchestration and automation? (basic)
  • How familiar are you with container orchestration tools like Kubernetes? (medium)
  • Describe a scenario where you had to troubleshoot a complex orchestration issue. (medium)
  • What are some best practices for implementing orchestration in a cloud environment? (medium)
  • How do you ensure scalability and reliability in an orchestrated system? (advanced; see the sketch after this list)
  • Can you discuss the challenges of orchestration in a multi-cloud environment? (advanced)
  • How do you handle security concerns in an orchestrated infrastructure? (advanced)
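
For the reliability question flagged in the list above, one pattern worth being able to sketch in an interview is retrying a health check with exponential backoff, as an orchestrator or operator process might. The endpoint URL below is a hypothetical placeholder.

    # Retry a health check with capped exponential backoff.
    import time
    import urllib.request

    def wait_until_healthy(url: str, attempts: int = 5) -> bool:
        delay = 1.0
        for _ in range(attempts):
            try:
                with urllib.request.urlopen(url, timeout=3) as resp:
                    if resp.status == 200:
                        return True
            except OSError:
                pass  # connection refused, DNS failure, timeout, etc.
            time.sleep(delay)
            delay = min(delay * 2, 30.0)  # back off, capped at 30 seconds
        return False

    if __name__ == "__main__":
        print(wait_until_healthy("http://localhost:8080/healthz"))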

Closing Remark

As the demand for orchestration professionals continues to rise in India, now is the perfect time for job seekers to hone their skills, prepare for interviews, and confidently apply for exciting opportunities in this field. Best of luck on your job search journey!
