Role & responsibilities: PostgreSQL Database Admin
- Proficient with Postgres installation and configuration, particularly Postgres Plus Advanced Server from EnterpriseDB and AWS RDS Aurora PostgreSQL.
- Proficient with Postgres monitoring and alerting tools/processes, specifically PEM from EnterpriseDB.
- Proficient in the setup, configuration, and monitoring of PostgreSQL binary and logical data replication solutions (binary streaming, XDB, Bi-Directional Replication (BDR), etc.).
- Proficient with collecting diagnostics and tuning PostgreSQL, as well as SQL tuning.
- Proficient with Postgres procedural languages (PL/pgSQL, PL/Tcl, PL/Perl, PL/Python) and SQL.
- Proficient in designing and supporting Postgres clustered environments.
- Monitoring and observability with tools like Splunk and Dynatrace.
- Experience with Postgres replication technologies.
- Perform debugging, tuning, and performance enhancement, as well as automation of operational and continuous-integration aspects of the Postgres platforms.
- Proficient with the Linux operating system, specifically Oracle Enterprise Linux.
- Intermediate understanding of logical and physical data models.
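Monitoring the streaming replication named above usually comes down to comparing write-ahead-log positions. A minimal sketch (the LSN values below are hypothetical examples, not taken from the posting) of computing replication lag in bytes from two pg_lsn strings:

```python
def lsn_to_bytes(lsn: str) -> int:
    """Convert a PostgreSQL pg_lsn string like '0/16B3748' to an absolute byte position."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) + int(lo, 16)

def replication_lag_bytes(primary_lsn: str, replica_lsn: str) -> int:
    """Bytes of WAL the replica still has to replay."""
    return lsn_to_bytes(primary_lsn) - lsn_to_bytes(replica_lsn)

# Hypothetical LSNs from primary and standby:
print(replication_lag_bytes("0/16B3748", "0/16B3000"))
# 1864
```

In practice the two values would come from `pg_current_wal_lsn()` on the primary and `pg_last_wal_replay_lsn()` on the standby, or from the `pg_stat_replication` view.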
Role & responsibilities:
- Strong expertise in MPS, DPS/DS, and DRP modules.
- Strong understanding of Solvers and system integration frameworks.
- Deep understanding of master and transactional data required for Demand and Supply Planning.
- Experience designing, developing, and testing functional specifications for customizations, integrations, and reporting.
- Experience with OPAL configuration.
- Familiarity with SAP ERP systems (ECC, S/4HANA) and SAP APO (preferred).
Preferred candidate profile:
- OMP Supply Planning (SP/OPR) certification is mandatory.
Role & responsibilities:
Must-have skills: Linux; CI/CD (Git Bash / Bitbucket); ITIL; Unix shell scripting; SQL; ITSM change management. L2 support experience is a must.
Good-to-have skills: Jenkins; automation.
JD as below:
- Perform detailed root cause analysis for high-severity incidents and act on fixing the underlying cause of those issues; take necessary preventive actions as well.
- Support the customer in filling in the Post Incident Report (PIR) when any high-impact incident affecting customers occurs.
- Participate in or initiate War Room calls for issues that impact application availability or have a customer impact.
- Willing to work in shifts (morning and afternoon) and provide weekend support.
Tools used:
- Remedy (ticketing tool)
- Rally (story and bug tracking)
- Splunk and Dynatrace (monitoring)
- WinSCP (file movement/validation)
- CyberArk / PuTTY
- Toad (DB querying tool)
Role & responsibilities:
- Design, implement, configure, and maintain both the Moogsoft AIOps platform and the Apex AIOps Incident Management platform.
- Develop and manage integrations between Moogsoft, Apex AIOps, and other relevant IT monitoring, logging, and ITSM tools.
- Develop and implement automation workflows and runbooks within both AIOps platforms to streamline incident management and remediation processes.
- Collaborate with various IT teams (monitoring, application, infrastructure, security, operations) to understand their AIOps requirements and translate them into effective solutions on both platforms.
- Define and enforce best practices for the configuration, utilization, and administration of both Moogsoft and Apex AIOps.
- Troubleshoot issues related to platform performance, data ingestion, correlation logic, integrations, and automation within both environments.
- Create and maintain comprehensive documentation for configurations, integrations, workflows, and operational procedures for both Moogsoft and Apex AIOps.
- Develop and implement metrics and dashboards within both platforms to provide visibility into IT health, incident trends, and the effectiveness of AIOps implementations.
- Ensure the security and compliance of both Moogsoft and Apex AIOps platforms and their integrations.
Preferred candidate profile:
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
- Proven experience (typically 8+ years) in a technical role focused on AIOps platforms, event management, or systems integration, with specific experience in both Moogsoft and Apex AIOps.
Role & responsibilities:
- Provide best-in-class production support for Data Science engineering, application development, and production line-of-business teams.
- Design, engineer, and implement critical infrastructure to support Data Science platforms such as Alteryx, Dataiku, and Azure Machine Learning.
- Act as a Problem Manager to troubleshoot and resolve complex technical incidents, with an eye on developing sustainable automation.
- Collaborate with cross-functional teams such as engineering, hardware, platform services, cloud, and operations.
- Monitor and analyze MongoDB performance, with a focus on automation and reliability.
- Ensure compliance with operational, risk, and change management guidelines.
- Provide occasional weekend or after-hours support.
- Develop and document best practices.
Your team: You will be working in the Data Platform SRE team in Hyderabad, focusing on reliability, operations, and efficiency in collaboration with our Global Data Science Service team. We enable solutions for numerous lines of business by implementing modern data science initiatives across Wealth Management, Investment Banking, and Corporate Center divisions, including Risk and Finance, HR, and other Technology Service teams. We offer flexibility in the workplace and equal opportunities for all team members.
Preferred candidate profile:
- Hands-on experience with UNIX/Windows administration, ideally 5+ years.
- Alteryx administration experience is desirable.
- Knowledge of analytic products such as Dataiku, Alteryx, Azure Synapse, and Databricks is a plus.
- Ability to solve complex issues with solution-design thinking.
- DevOps experience with GitLab CI/CD, and PowerShell scripting skills or other programming languages.
- Track record of influencing IT stakeholders and business partners.
- A confident communicator who can explain technology to business stakeholders.
Job Description: Intune SME Engineer
Role & Responsibilities:
- Design, deploy, and manage Microsoft Intune and Enterprise Mobility + Security (EMS) solutions.
- Provide L3-level support for endpoint management, configuration policies, application deployment, and device compliance.
- Automate routine tasks and workflows using PowerShell scripting to improve operational efficiency.
- Troubleshoot complex issues related to device management, enrollment, and security policies.
- Collaborate with cross-functional teams to implement enterprise mobility strategies and ensure policy compliance.
- Maintain documentation of configurations, procedures, and best practices.
- Ensure security standards are met for mobile and endpoint devices through proactive monitoring and policy enforcement.
Preferred Candidate Profile:
- Strong hands-on experience with Microsoft Intune and EMS.
- Proven expertise in providing L3-level technical support for endpoint management.
- Proficiency in PowerShell scripting for automation and administrative tasks.
- Experience in designing automated workflows for device management and compliance reporting.
- Good understanding of enterprise mobility, device security, and cloud management concepts.
- Strong problem-solving skills with the ability to handle escalated issues independently.
Role & responsibilities:
- Perform data migration activities using Informatica (ETL tools), participate in multiple workshops to understand data structures, and analyze new-system data requirements.
- Understand source and target applications, and how the migration will fit into the new system.
- Work closely with Data, Business, and Technology teams to create, unit test, and support integration workflows and mappings using IICS/IDMC. This includes, but is not limited to, the use of mapplets, rule specifications, joins, task flows, and parameters within Data Quality and ETL frameworks.
- Work with Data Analysts to understand source-system data quality issues and the technical requirements of the source-to-target mappings; design jobs following Informatica best practices (including, but not limited to, performance, efficiency, scalability, re-usability, and transformations).
- Ensure data accuracy, integrity, and security in all data integration activities.
- Write T-SQL queries and scripts for data transformation and validation; perform tuning and optimization of ETL processes.
- Develop shell scripts for automation and orchestration.
- Document technical mappings, mapplets, and rules in detail, as required.
Preferred candidate profile:
- 5+ years' experience with ETL concepts and methodologies, and a strong functional understanding of RDBMS.
- Proficient in T-SQL and Python for data manipulation and validation; experience with performance tuning/optimization of ETL processes.
- Excellent problem-solving and troubleshooting skills.
- Experience using, configuring, and scheduling Control-M jobs.
- Experience in large data migration programs is highly desired.
- Experience in financial services is highly desired, in particular retail & business banking product data and data models.
- Ability to work collaboratively, and independently, with various teams in a dynamic development environment and determine solutions that meet business, technology, and data requirements.
- 5+ years of experience with ETL tools such as Informatica Cloud, IICS, and Informatica PowerCenter is highly desired. ETL tool certifications preferred.
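Migration validation of the kind described above typically starts with a simple source-vs-target reconciliation. A minimal sketch (table names and row counts are invented for illustration) that flags per-table count mismatches, with the counts assumed to come from SELECT COUNT(*) queries run on each side:

```python
def reconcile_counts(source: dict, target: dict) -> dict:
    """Return per-table mismatches between source and target row counts.

    Tables missing on either side appear with a count of None.
    """
    mismatches = {}
    for table in sorted(set(source) | set(target)):
        s, t = source.get(table), target.get(table)
        if s != t:
            mismatches[table] = {"source": s, "target": t}
    return mismatches

# Hypothetical counts collected from source and target databases:
src = {"accounts": 120_000, "transactions": 4_500_132, "products": 310}
tgt = {"accounts": 120_000, "transactions": 4_500_130, "products": 310}
print(reconcile_counts(src, tgt))
# {'transactions': {'source': 4500132, 'target': 4500130}}
```

Count checks only catch gross loss or duplication; column-level checksums or row-hash comparisons would normally follow for the mismatched tables.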
Role Overview: We are seeking an experienced Windows Production Support Engineer to manage, monitor, and optimize Windows-based production environments. The ideal candidate will have strong expertise in clustering and scripting, with a proven track record of resolving critical production issues and ensuring high availability.
Responsibilities:
- Provide end-to-end Windows production support, ensuring system stability, availability, and performance.
- Troubleshoot and resolve incidents, problems, and outages within defined SLAs.
- Implement and manage Windows clustering solutions for high availability and disaster recovery.
- Develop and maintain PowerShell scripts for automation, monitoring, and administrative tasks.
- Perform patching, upgrades, and performance tuning across production servers.
- Collaborate with infrastructure, application, and security teams to ensure seamless operations.
- Prepare documentation for processes, changes, and issue resolutions.
- Participate in on-call rotations and provide 24x7 production support as required.
Requirements:
- 9-10 years of experience in Windows Server administration and production support.
- Strong hands-on knowledge of Windows clustering technologies.
- Proficiency in PowerShell scripting for automation and operational efficiency.
- Experience in performance tuning, monitoring tools, and incident response.
- Ability to work under pressure and manage high-priority incidents effectively.
- Good communication and collaboration skills to work with cross-functional teams.
Role & responsibilities:
- Senior Windows SA required, with at least 9+ years of experience.
- Exceptional hands-on working experience with Windows Server operating system products.
- Must demonstrate a background in troubleshooting, identifying root causes, implementing solutions, and fine-tuning systems in a complex Windows network environment, including Active Directory.
- Minimum 5 years of production support experience.
- Minimum 3 years of support experience with Windows Server clusters.
- Experience in PowerShell scripting and ADO (Azure DevOps) would be an added advantage.
- Understand automation frameworks.
- Must understand, in detail, supporting technologies such as databases (MSSQL), networks, storage (EMC, HDS), and hardware (x86).
- Background in VMware, AWS Cloud, or consolidated environments desired.
- Demonstrated ability to overcome operational challenges.
- Lead ongoing production activities, identify potential areas of improvement, implement such changes, and maintain system configurations.
- Can work independently and as a technical team member.
- Can operate in an international and multicultural environment.
- Can demonstrate problem-solving abilities.
- Can demonstrate willingness to pursue creative solutions to platform support problems.
- Must be able to contribute to the wider team, including operations and controls.
- Must be willing to work 24x7 (including weekends) on request.
Regulatory & Business Conduct:
- Display exemplary conduct and live by the Group's Values and Code of Conduct.
- Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines, and the Group Code of Conduct.
- Lead to achieve the outcomes set out in the Bank's Conduct Principles: Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment.
- Effectively and collaboratively identify, escalate, mitigate, and resolve risk, conduct, and compliance matters.
Role & responsibilities:
- Experience in administration/operations in a GCP cloud environment.
- Experience with implementations in hybrid and pure cloud environments.
- Experience with GCP services such as Compute Engine, App Engine, VPCs, Cloud NAT, Load Balancing, Cloud Storage, Cloud SQL, and Cloud Logging and Monitoring.
- Ability to deploy, manage, and operate scalable, highly available, and fault-tolerant systems using GCP.
- Experience with IAM, GKE, Pub/Sub, Cloud Run, the GCP operations suite, and database/analytics services such as BigQuery, Dataflow, Cloud SQL, Looker, etc.
- Ensure governance and security via IAM, organization policies, service accounts, and the resource hierarchy.
- Automate infrastructure provisioning using Terraform, integrating with Azure DevOps Pipelines and GitHub Actions for CI/CD workflows.
- Intermediate to advanced system/network administrator knowledge of Linux/Unix or Windows systems.
- Strong knowledge of computer networking (VPC networks, subnets, private access, VPC peering, VPN, hybrid connectivity, routing, Interconnect, load balancing, firewalls, HTTP/HTTPS, SSL, DNS, etc.).
- Monitor infrastructure using Cloud Monitoring, set up alerts and dashboards, and optimize for cost and performance.
- Collaborate with application, DevOps, and security teams for end-to-end delivery and operational excellence.
- Ability to work in a 24x7 operations environment.
- Outstanding troubleshooting, attention to detail, and communication skills.
- Manage and troubleshoot GKE (Google Kubernetes Engine) clusters, including networking, ingress, autoscaling, and security policies.
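The subnet planning implied by the networking bullet above can be sanity-checked programmatically before anything is provisioned. A small sketch using Python's standard ipaddress module (the CIDR ranges are hypothetical) that flags overlapping subnets within a planned VPC layout:

```python
import ipaddress

def overlapping_subnets(cidrs):
    """Return pairs of CIDR strings whose address ranges overlap."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    pairs = []
    for i in range(len(nets)):
        for j in range(i + 1, len(nets)):
            if nets[i].overlaps(nets[j]):
                pairs.append((cidrs[i], cidrs[j]))
    return pairs

# Hypothetical subnet plan for one VPC:
plan = ["10.0.0.0/24", "10.0.1.0/24", "10.0.0.128/25"]
print(overlapping_subnets(plan))
# [('10.0.0.0/24', '10.0.0.128/25')]
```

Catching a collision like this at plan time is cheaper than debugging it after VPC peering or hybrid connectivity is in place, since peered ranges must not overlap.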
Role & responsibilities: QA/Testers
- T-SQL is a must. Experience with large finance (banking) data migration projects is highly regarded.
- 5+ years' experience in a Test Analyst role (in a client-facing environment).
- Experience in financial services is highly desired, in particular retail & business banking product data and data models.
- Experience in large data migration & integration programs is desired.
- Strong analytical skills, attention to detail, and the ability to write clear and concise test cases and reports.
- Proficiency in software testing methodologies, tools, and techniques, with an understanding of data management, governance, and security.
- Knowledge of test automation frameworks and scripting languages; knowledge of Jira Xray is desirable.
- Ability to write SQL for data selection and validation (scenario identification, etc.).
- Familiarity with defect tracking tools and Agile project management tools (e.g., Confluence, Jira) is desirable.
- Experience of working in an Agile environment.
- Willing to work a minimum of 2 days a week in the office.
Responsibilities:
- Lead the full Software Development Lifecycle (SDLC): designing, coding, testing, deploying, and maintaining applications using Java and React technologies.
- Architect and implement scalable microservices-based backend systems using Java (Spring Boot, RESTful APIs), along with modern front-end components built with React.js.
- Collaborate cross-functionally with product owners, UX/UI designers, QA, and other engineering teams to deliver seamless and high-quality user experiences.
- Provide technical guidance and mentorship to junior engineers; conduct peer reviews and uphold best coding practices.
- Apply BFSI domain knowledge to guide design decisions, ensure compliance, and anticipate domain-specific risk and performance constraints.
- Troubleshoot and resolve production issues, perform root cause analysis, and enhance system resilience and performance.
- Participate and contribute actively within Agile/Scrum environments, including stand-ups, sprint reviews, and planning sessions.
Qualifications:
- 6-10 years of experience, with a minimum of 4 years in Java & React development.
- Strong backend (Java, Spring, microservices) and frontend (React, JavaScript, HTML/CSS) skills.
- Familiarity with Redux or similar state management, and DevOps tools (CI/CD, Git, testing frameworks).
- Cloud platform experience (AWS/Azure/GCP).
- Excellent troubleshooting and analytical skills.
- BFSI domain expertise required.
Key Responsibilities:
- Design and develop automated test cases using Tricentis Tosca for web, desktop, and API applications.
- Execute, monitor, and analyze automation test results to ensure system stability and accuracy.
- Collaborate with development and QA teams to define test automation strategies and frameworks.
- Integrate Tosca automation with CI/CD pipelines (e.g., Jenkins, Azure DevOps).
- Maintain and update test suites as applications evolve.
- Identify, log, and track defects using defect management tools such as JIRA or ALM.
- Ensure test data management and environment setup for smooth automation execution.
- Contribute to continuous process improvement and best practices for automation.
Required Skills and Qualifications:
- Bachelor's degree in Computer Science, IT, or equivalent.
- 4-5 years of hands-on experience in automation testing using Tosca.
- Strong understanding of Tosca Modules, Test Cases, Test Steps, and Execution Lists.
- Experience in API, web, and mobile automation using Tosca.
- Knowledge of Tosca Commander, Tosca BI, and Tosca API Scan tools.
- Familiarity with CI/CD tools (Jenkins, Azure DevOps, GitLab).
- Exposure to Agile/Scrum methodologies.
- Strong analytical, debugging, and problem-solving skills.
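Integrating automated test runs into a CI/CD pipeline, as described above, usually means the pipeline consumes the exported execution results and gates the build on them. A minimal sketch, assuming the results have been exported in JUnit-style XML (the XML below is an invented example, not Tosca's native result format), that decides whether the pipeline stage should fail:

```python
import xml.etree.ElementTree as ET

def run_passed(junit_xml: str) -> bool:
    """Return True if the test suite reports no failures or errors."""
    suite = ET.fromstring(junit_xml)
    failures = int(suite.get("failures", "0"))
    errors = int(suite.get("errors", "0"))
    return failures == 0 and errors == 0

# Hypothetical exported results:
results = """<testsuite name="regression" tests="3" failures="1" errors="0">
  <testcase name="login"/>
  <testcase name="checkout"><failure message="timeout"/></testcase>
  <testcase name="logout"/>
</testsuite>"""
print(run_passed(results))
# False
```

In a Jenkins or Azure DevOps stage, a False result would translate to a non-zero exit code so the pipeline marks the build as failed.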
Key Responsibilities:
- Design, automate, and maintain CI/CD pipelines using Jenkins, GitLab CI, or AWS CodePipeline.
- Build, deploy, and manage cloud infrastructure on AWS using Terraform (IaC).
- Implement and manage Kubernetes clusters (EKS) and Docker-based containerized applications.
- Manage configuration automation using Ansible, CloudFormation, or similar tools.
- Monitor system performance, troubleshoot issues, and optimize cloud resources.
- Implement security best practices, backup strategies, and compliance policies.
- Collaborate with developers to improve deployment processes and delivery efficiency.
- Use monitoring tools (Prometheus, Grafana, ELK, CloudWatch) to ensure high availability and reliability.
Required Skills and Qualifications:
- Bachelor's degree in Computer Science, IT, or equivalent.
- 4-5 years of relevant experience in DevOps, cloud, or infrastructure management.
- Proficiency in AWS services (EC2, S3, RDS, Lambda, IAM, VPC, CloudFront, etc.).
- Strong experience with Terraform for Infrastructure as Code.
- Expertise in Kubernetes (EKS) and Docker.
- Hands-on with CI/CD tools (Jenkins, GitLab CI/CD, CircleCI, or AWS CodePipeline).
- Solid understanding of Linux administration, networking, and security.
- Scripting experience in Bash, shell, or Python.
- Familiarity with monitoring, alerting, and logging tools.