
17543 Terraform Jobs - Page 15

Set up a Job Alert
JobPe aggregates listings so they are easy to find, but applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

JR0126086 Associate, Technology Operations – Pune, India

Want to work on global strategic initiatives with a FinTech company that is poised to revolutionize the industry? Join the team and help shape our company’s digital capabilities and revolutionize an industry! Join Western Union as an Associate, Technology Operations. Western Union powers your pursuit. The Associate, Technology Operations is expected to own solution and service delivery for Database Engineering and Operations across both system-level and application-level databases, on-premises and in the cloud, using insight from customers and colleagues worldwide to improve financial services for families, small businesses, multinational corporations, and non-profit organizations.

Role Responsibilities
• Own and manage the database portfolio (Oracle, MSSQL, DB2, AWS RDS MySQL/PostgreSQL/Aurora MySQL/Aurora PostgreSQL/Redis/Couchbase/Cassandra).
• Enable a high quality of service for database infrastructure support and ensure service support and delivery processes are in place to meet business needs.
• Lead virtual teams, 3rd parties and 3rd-party services; handle internal and third-party service review meetings covering performance, service improvements, quality and processes.
• Design and implement highly available (HA), scalable, fit-for-use large database solutions and advocate best practices across on-prem and AWS Cloud.
• Collaborate with architecture, engineering and support teams in designing and deploying scalable database solutions.
• Handle multiple projects and deadlines independently in a fast-paced environment.
• Apply advanced troubleshooting skills: database performance tuning, issue resolution, ongoing replication issues.
• Automate day-to-day administration and maintenance in cloud/on-premises environments and adopt emerging engineering best practices in CI/CD implementations.
• Define and manage best practices around database security and help ensure security and compliance across all database systems.
This position is a stakeholder-facing role and requires that you establish and manage expectations with the business teams and drive your team to achieve the expected service levels. Identify and drive methodologies and processes that support world-class standards for production stability and identify and manage key targeted areas for improvement. You will conduct regular team huddles, periodic problem-solving workshops, process confirmation reviews and 1x1 coaching sessions.

Role Requirements
• 3+ years of experience working in a FinTech, e-commerce, IT or consulting organization, of which at least 1 year designing and implementing database systems on-premises and in the AWS cloud. Hands-on AWS experience is a must.
• Professional experience working on on-prem/AWS Oracle / SQL Server / DB2 / MySQL / PostgreSQL / Couchbase / Cassandra databases.
• Hands-on experience managing at least one database replication technology: Oracle GoldenGate / IBM MQ / HVR (Fivetran) / AWS DMS.
• Prior experience leading global virtual teams, 3rd parties and 3rd-party services.
• Proven experience using SQL & NoSQL datastores.
• Hands-on experience on AWS Cloud; certification in AWS preferred.
• Experience with database DevOps practices and infrastructure-as-code automation tools such as Terraform and AWS CloudFormation.
• Experience implementing solutions across various flavors of operating systems including Linux, load balancers, Liquibase DB change automation, HA/DR & storage architecture.
• Experience in database migrations from on-prem to AWS Cloud.

We make financial services accessible to humans everywhere. Join us for what’s next. Western Union is positioned to become the world’s most accessible financial services company, transforming lives and communities. To support this, we have launched a Digital Banking Service and Wallet across several European markets to enhance our customers’ experiences by offering a state-of-the-art digital ecosystem. More than moving money, we design easy-to-use products and services for our digital and physical financial ecosystem that help our customers move forward. Just as we help our global customers prosper, we support our employees in achieving their professional aspirations. You’ll have plenty of opportunities to learn new skills and build a career, as well as receive a great compensation package. If you’re ready to help drive the future of financial services, it’s time for Western Union. Learn more about our purpose and people at https://careers.westernunion.com.

Benefits
You will also have access to short-term incentives, multiple health insurance options, accident and life insurance, and access to best-in-class development platforms, to name a few (https://careers.westernunion.com/global-benefits/). Please see the location-specific benefits below and note that your Recruiter may share additional role-specific benefits during your interview process or in an offer of employment.

Your India-Specific Benefits Include
• Employees’ Provident Fund (EPF)
• Gratuity payment
• Public holidays
• Annual leave, sick leave, compensatory leave, and maternity/paternity leave
• Annual health check-up
• Hospitalization insurance coverage (Mediclaim)
• Group life insurance, group personal accident insurance coverage, business travel insurance
• Relocation benefit

Western Union values in-person collaboration, learning, and ideation whenever possible. We believe this creates value through common ways of working and supports the execution of enterprise objectives, which will ultimately help us achieve our strategic goals. By connecting face-to-face, we are better able to learn from our peers, problem-solve together, and innovate. Our Hybrid Work Model categorizes each role into one of three categories. Western Union has determined the category of this role to be Hybrid. This is defined as a flexible working arrangement that enables employees to divide their time between working from home and working from an office location; the expectation is to work from the office a minimum of three days a week. We are passionate about diversity. Our commitment is to provide an inclusive culture that celebrates the unique backgrounds and perspectives of our global teams while reflecting the communities we serve. We do not discriminate based on race, color, national origin, religion, political affiliation, sex (including pregnancy), sexual orientation, gender identity, age, disability, marital status, or veteran status. The company will provide accommodation to applicants, including those with disabilities, during the recruitment process, following applicable laws.
Estimated Job Posting End Date 08-05-2025 This application window is a good-faith estimate of the time that this posting will remain open. This posting will be promptly updated if the deadline is extended or the role is filled.
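Editor's note: the day-to-day RDS administration and automation duties described in this posting are typically scripted against the AWS APIs. The sketch below is a minimal, hedged illustration only (the region is an assumption, not anything from the posting), using Python and boto3 to inventory RDS instances and flag any that are not Multi-AZ.

```python
# Hedged sketch: inventory AWS RDS instances and flag single-AZ ones.
# Assumes AWS credentials are already configured; region is an assumption.
import boto3

def list_rds_instances(region_name: str = "ap-south-1"):
    """Return basic facts about every RDS instance in the region."""
    rds = boto3.client("rds", region_name=region_name)
    instances = []
    for page in rds.get_paginator("describe_db_instances").paginate():
        for db in page["DBInstances"]:
            instances.append({
                "id": db["DBInstanceIdentifier"],
                "engine": db["Engine"],
                "multi_az": db["MultiAZ"],
                "status": db["DBInstanceStatus"],
            })
    return instances

if __name__ == "__main__":
    for db in list_rds_instances():
        flag = "" if db["multi_az"] else "  <-- single-AZ, review HA posture"
        print(f'{db["id"]:30} {db["engine"]:12} {db["status"]:10}{flag}')
```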

Posted 3 days ago

Apply

10.0 years

1 - 2 Lacs

India

Remote

Our client is seeking a skilled OpenShift Infrastructure Administrator for a 6-month, extendable, onsite engagement at their Dubai office (banking client). This role is focused on managing, securing, and optimizing enterprise-grade Red Hat OpenShift environments in a hybrid cloud infrastructure. The ideal candidate should possess deep expertise in OpenShift administration, infrastructure automation, and Azure cloud services. Experience with Terraform, CI/CD pipelines, and AWS will be considered a plus.

Key Responsibilities
• Administer, maintain, and scale Red Hat OpenShift clusters in alignment with enterprise security and performance standards.
• Design and implement automation for provisioning and managing infrastructure using Infrastructure as Code (IaC) tools such as Terraform.
• Integrate OpenShift with CI/CD pipelines for application deployments, configuration management, and infrastructure updates.
• Monitor system performance, perform root cause analysis, and apply performance tuning and resource optimization.
• Enforce security policies and high availability across OpenShift workloads using Azure-native and Kubernetes-native tooling.
• Collaborate with DevOps and platform engineering teams to support seamless deployment workflows and system uptime.
• Provide ongoing support for OpenShift platform upgrades, patching, and configuration management.
• Manage Terraform state and remote backends, and ensure governance across multi-environment cloud infrastructure.
• Stay current with Red Hat OpenShift updates, Kubernetes ecosystem developments, and cloud-native tools.

Required Qualifications & Skills
• 10+ years of experience in infrastructure administration and DevOps, with a strong focus on Red Hat OpenShift.
• Hands-on expertise in Azure services, Docker, Kubernetes, and Terraform.
• Proficient in CI/CD pipeline design, automation, and integration with OpenShift environments.
• Solid understanding of IaC principles, cloud infrastructure governance, and security best practices.
• Bachelor's degree in Computer Science, IT, or a related field.
• Experience working in regulated industries such as banking or fintech is a plus.

Compensation & Benefits
• Salary: AED 17,000 per month (fixed and non-negotiable)
• Visa sponsorship
• Medical insurance
• Emirates ID provided
• Note: Accommodation is not offered, and a family visa is not provided.

Skills: Terraform, OpenShift administration, Azure cloud services, Docker, OpenShift, CI/CD pipelines, infrastructure, Kubernetes, performance tuning, security best practices, infrastructure automation, Red Hat
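Editor's note: managing Terraform state, remote backends and multi-environment governance (as described above) usually comes down to running the same init/plan sequence with per-environment backend and variable files. The following is a hedged Python sketch only; the directory layout, file names and environment names are invented for illustration.

```python
# Hedged sketch: drive `terraform init`/`plan` per environment from Python.
# Directory layout, backend settings and var-file names are assumptions.
import subprocess
from pathlib import Path

def run(cmd: list[str], cwd: Path) -> None:
    """Run a command in the given directory, failing loudly on error."""
    print(f"+ {' '.join(cmd)}")
    subprocess.run(cmd, cwd=cwd, check=True)

def plan_environment(stack_dir: Path, env: str) -> None:
    """Initialise the remote backend for one environment and produce a plan file."""
    run(["terraform", "init", "-input=false",
         f"-backend-config=environments/{env}/backend.hcl"], cwd=stack_dir)
    run(["terraform", "plan", "-input=false",
         f"-var-file=environments/{env}/terraform.tfvars",
         f"-out=plan-{env}.tfplan"], cwd=stack_dir)

if __name__ == "__main__":
    for env in ("dev", "uat", "prod"):          # hypothetical environment names
        plan_environment(Path("infrastructure/openshift"), env)
```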

Posted 3 days ago

Apply

12.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job ID: 1369 | Location: Gurgaon, IN | Job Family: Engineering Services | Job Type: Full Time | Working Mode: Fully On-Site

About Us
Innovation. Sustainability. Productivity. This is how we are Breaking New Ground in our mission to sustainably advance the noble work of farmers and builders everywhere. With a growing global population and increased demands on resources, our products are instrumental to feeding and sheltering the world. From developing products that run on alternative power to productivity-enhancing precision tech, we are delivering solutions that benefit people – and they are possible thanks to people like you. If the opportunity to build your skills as part of a collaborative, global team excites you, you’re in the right place. Grow a Career. Build a Future! Be part of this company at the forefront of agriculture and construction, one that passionately innovates to drive customer efficiency and success. And we know innovation can’t happen without collaboration. So, everything we do at CNH Industrial is about reaching new heights as one team, always delivering for the good of our customers.

Job Purpose
The Solution Engineer will be responsible for providing technical leadership and people management, leading product development for a Connectivity platform that supports IoT platforms such as Azure IoT Hub & ThingWorx. This role involves coordinating with internal teams, program managers, and external teams to ensure that engineering solutions are delivered efficiently and meet organizational requirements.

Key Responsibilities
• Develop a strategic technical vision and roadmap for the product, ensuring alignment with market needs and stakeholder objectives while maintaining a focus on solution-level decisions and their impact on overall business goals.
• Lead the design and management of technical solutions, ensuring scalability, security, and alignment with business strategy.
• Align product architecture with product vision and goals; evaluate technology options, lead decision-making processes, and communicate implications effectively to stakeholders.
• Collaborate closely with different development teams to deliver features on time and in accordance with CNHI’s standards, processes, and policies.
• Provide detailed technical specifications and accurate time/effort estimates; ensure non-functional requirements (e.g., performance, scalability, security) are met.
• Monitor development progress and communicate status updates to relevant stakeholders.
• Proactively identify and resolve project constraints, risks, and technical challenges across teams.
• Facilitate collaboration across cross-functional teams through clearly defined frameworks and coordination structures.
• Architect and oversee integration of IoT platforms such as Azure IoT Hub & ThingWorx with enterprise systems.
• Guide the development of secure, real-time, data-intensive IoT applications focusing on telemetry, edge computing, and data streaming pipelines.

Experience Required
• 12+ years of industry experience, including 3+ years in a Solution/Enterprise Architect role.
• Strong fundamentals in software architecture and engineering (OOP, RESTful APIs, Design Patterns, Data Structures, Algorithms).
• Skilled in business analysis, stakeholder collaboration, and strategic technology planning.
• Expertise in security, encryption, API design, integration patterns, data architecture, messaging systems, and asynchronous programming.
• Experience developing RESTful APIs using OpenAPI specifications.
• Proven experience in microservices architecture with Docker and Kubernetes.
• Hands-on experience with Microsoft Azure cloud services, including PaaS components (Service Bus, Event Hub, Blob Storage, Key Vault, API Management, Function Apps).
• Experience with performance tuning and application optimization.
• Expertise in IoT integration, including device provisioning, edge connectivity, data ingestion, and telemetry.

Technical Skills
• Languages & Frameworks: C#, ASP.NET Core, LINQ, JavaScript, HTML5, Angular, Java
• Authentication & Security: Azure AD, OAuth 2.0 (JWT), SSL/TLS
• Databases: SQL Server, PostgreSQL, Cosmos DB
• Caching: Azure Redis Cache
• Cloud & DevOps: Microsoft Azure (AKS, Blob Storage, Key Vault, APIM, Event Hub, Service Bus, Functions), Azure DevOps, Terraform, Git, GitLab
• Monitoring: App Insights, Datadog, ELK
• IoT Platforms: Azure IoT Hub and/or ThingWorx
• Messaging & Streaming: MQTT, Azure Event Hub, Service Bus Topics
• Testing & Automation: NUnit, xUnit, K6, PowerShell, Python

Preferred Qualifications
Bachelor’s or master’s degree in Computer Science, Computer Engineering, or a related field.

What We Offer
We offer dynamic career opportunities across an international landscape. As an equal opportunity employer, we are committed to delivering value for all our employees and fostering a culture of respect.

Benefits
At CNH, we understand that the best solutions come from the diverse experiences and skills of our people. Here, you will be empowered to grow your career, to follow your passion, and help build a better future. To support our employees, we offer comprehensive regional benefits, including:
• Flexible work arrangements
• Savings & retirement benefits
• Tuition reimbursement
• Parental leave
• Adoption assistance
• Fertility & family building support
• Employee Assistance Programs
• Charitable contribution matching and Volunteer Time Off
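Editor's note: since the role centres on IoT platforms such as Azure IoT Hub, a tiny device-side telemetry example may help frame the work. This is a hedged sketch using the azure-iot-device Python SDK; the connection string and payload fields are placeholders, not anything from the posting.

```python
# Hedged sketch: send one telemetry message to Azure IoT Hub from a device.
# Requires the `azure-iot-device` package; the connection string is a placeholder.
import json
import os
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = os.environ["IOTHUB_DEVICE_CONNECTION_STRING"]  # assumed env var

def send_telemetry() -> None:
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    try:
        payload = {"temperature": 21.7, "humidity": 54.2}   # example fields only
        msg = Message(json.dumps(payload))
        msg.content_type = "application/json"
        msg.content_encoding = "utf-8"
        client.send_message(msg)
        print("telemetry sent")
    finally:
        client.shutdown()

if __name__ == "__main__":
    send_telemetry()
```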

Posted 3 days ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Overview
TekWissen is a global workforce management provider operating in India and many other countries around the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place – one that benefits lives, communities and the planet.

Job Title: Specialty Development Senior
Location: Chennai
Work Type: Hybrid

Position Description
• Bachelor's Degree
• 2+ years in GCP services – BigQuery, Dataflow, Dataproc, Dataplex, Data Fusion, Terraform, Tekton, Cloud SQL, Redis (Memorystore), Airflow, Cloud Storage
• 2+ years in data transfer utilities
• 2+ years in Git or any other version control tool
• 2+ years in Confluent Kafka
• 1+ years of experience in API development
• 2+ years in an Agile framework
• 4+ years of strong experience in Python and PySpark development
• 4+ years of shell scripting to develop ad hoc jobs for data importing/exporting

Skills Required: Python, Dataflow, Dataproc, GCP Cloud Run, Dataform, Agile software development, BigQuery, Terraform, Data Fusion, Cloud SQL, GCP, Kafka
Skills Preferred: Java
Experience Required: 8+ years
Education Required: Bachelor's Degree

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
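Editor's note: for readers new to the GCP stack listed above, here is a minimal, hedged BigQuery example using the google-cloud-bigquery client. The query runs against a public sample dataset and assumes application-default credentials are configured; it is not code from the client.

```python
# Hedged sketch: run a parameterised BigQuery query with google-cloud-bigquery.
# Assumes application-default credentials; the table is a public sample dataset.
from google.cloud import bigquery

def top_names(limit: int = 5) -> None:
    client = bigquery.Client()
    sql = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name
        ORDER BY total DESC
        LIMIT @limit
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[bigquery.ScalarQueryParameter("limit", "INT64", limit)]
    )
    for row in client.query(sql, job_config=job_config).result():
        print(row.name, row.total)

if __name__ == "__main__":
    top_names()
```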

Posted 3 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Overview
TekWissen is a global workforce management provider operating in India and many other countries around the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place – one that benefits lives, communities and the planet.

Job Title: Software Engineer Senior
Location: Chennai
Work Type: Hybrid

Position Description
As part of the client's DP&E Platform Observability team, you'll help build a top-tier monitoring platform focused on latency, traffic, errors, and saturation. You'll design, develop, and maintain a scalable, reliable platform, improving MTTR/MTTX, creating dashboards, and optimizing costs. Experience with large systems, monitoring tools (Prometheus, Grafana, etc.), and cloud platforms (AWS, Azure, GCP) is ideal. The focus is a centralized observability source for data-driven decisions and faster incident response.

Skills Required: Spring Boot, Angular, Cloud Computing
Skills Preferred: Google Cloud Platform – BigQuery, Dataflow, Dataproc, Data Fusion, Terraform, Tekton, Cloud SQL, Airflow, Postgres, PySpark, Python, API

Experience Required
• 5+ years of overall experience with proficiency in Java, Angular or any JavaScript technology, with experience designing and deploying cloud-based data pipelines and microservices using GCP tools like BigQuery, Dataflow, and Dataproc.
• Ability to leverage best-in-class data platform technologies (Apache Beam, Kafka, ...) to deliver platform features, and design & orchestrate platform services to deliver data platform capabilities.
• Service-Oriented Architecture and Microservices: strong understanding of SOA, microservices, and their application within a cloud data platform context. Develop robust, scalable services using Java Spring Boot, Python, Angular, and GCP technologies.
• Full-Stack Development: knowledge of front-end and back-end technologies, enabling collaboration on data access and visualization layers (e.g., React, Node.js). Design and develop RESTful APIs for seamless integration across platform services. Implement robust unit and functional tests to maintain high standards of test coverage and quality.
• Database Management: experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases, as well as columnar databases like BigQuery.
• Data Governance and Security: understanding of data governance frameworks and implementing RBAC, encryption, and data masking in cloud environments.
• CI/CD and Automation: familiarity with CI/CD pipelines, Infrastructure as Code (IaC) tools like Terraform, and automation frameworks. Manage code changes with GitHub and troubleshoot and resolve application defects efficiently. Ensure adherence to SDLC best practices, independently managing feature design, coding, testing, and production releases.
• Problem-Solving: strong analytical skills with the ability to troubleshoot complex data platform and microservices issues.

Experience Preferred: GCP Data Engineer, GCP Professional Cloud
Education Required: Bachelor's Degree

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
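Editor's note: the posting's latency/traffic/errors/saturation focus maps onto the standard "golden signals". As a hedged, generic illustration only (metric names, labels and the port are arbitrary choices), the prometheus_client library can expose such signals from a Python service for Prometheus to scrape.

```python
# Hedged sketch: expose request latency/traffic/error metrics for Prometheus.
# Metric names, labels and the port are illustrative, not from the posting.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests", ["route", "status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency", ["route"])

def handle_request(route: str) -> None:
    start = time.perf_counter()
    status = "500" if random.random() < 0.05 else "200"   # simulated outcome
    time.sleep(random.uniform(0.01, 0.1))                 # simulated work
    LATENCY.labels(route=route).observe(time.perf_counter() - start)
    REQUESTS.labels(route=route, status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)   # metrics served at http://localhost:8000/metrics
    while True:
        handle_request("/checkout")
```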

Posted 3 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Overview
TekWissen is a global workforce management provider operating in India and many other countries around the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place – one that benefits lives, communities and the planet.

Job Title: Software Engineer Senior
Location: Chennai
Work Type: Hybrid

Position Description
As a Software Development Engineer in the client's Credit IT organization, you will join a team that develops REST API / microservices-based digital products for core platform engineering. You will work on a balanced product team to define, design, develop and deploy innovative software solutions. Additionally, you will conduct proof-of-concepts to support new features and ensure quality and timely delivery using Agile Extreme Programming practices.
• Write production-quality code that delivers great customer experiences
• Work on a small agile team to deliver working, tested software
• Work effectively with product owners, product designers and other technical experts
• Review, comment on and accept code base contributions
• Contribute to Service Level Objectives

Skills Required: Google Cloud Platform – BigQuery, Dataflow, Dataproc, Data Fusion, Terraform, Tekton, Cloud SQL, Airflow, Postgres, PySpark, Python, API
Experience Required: 7+ years of software engineering experience in web, mobile, API or full-stack development
Education Required: Bachelor's Degree

TekWissen® Group is an equal opportunity employer supporting workforce diversity.

Posted 3 days ago

Apply

3.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
• Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code
• Consult with users, clients, and other technology groups on issues; recommend programming solutions; and install and support customer exposure systems
• Apply fundamental knowledge of programming languages for design specifications
• Analyze applications to identify vulnerabilities and security issues, as well as conduct testing and debugging
• Serve as advisor or coach to new or lower-level analysts
• Identify problems, analyze information, and make evaluative judgements to recommend and implement solutions
• Resolve issues by identifying and selecting solutions through the application of acquired technical experience, guided by precedents
• Operate with a limited level of direct supervision, exercising independence of judgement and autonomy; act as SME to senior stakeholders and/or other team members
• Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:
• 3-6 years of relevant experience in the Financial Services industry
• Intermediate-level experience in an Applications Development role
• Consistently demonstrates clear and concise written and verbal communication
• Demonstrated problem-solving and decision-making skills
• Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements

Skills: CI/CD scripting, DevOps, IaC (Terraform), OpenShift, Python
Education: Bachelor’s degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills: Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Anupgarh, Rajasthan, India

On-site

35851BR | Hyderabad

Job Summary: We are looking for a skilled Ansible Automation Expert to design, implement, and maintain infrastructure and configuration automation solutions using Ansible, Ansible Tower (or AWX), and other related tools. The ideal candidate will play a key role in driving infrastructure-as-code (IaC), improving deployment consistency, and enhancing operational efficiency through automation.

Key Responsibilities
• Automation Development: Design, write, and manage Ansible playbooks, roles, and modules for infrastructure provisioning, configuration management, and application deployment. Build scalable automation frameworks for Linux and Windows environments.
• Platform Integration & Management: Integrate Ansible with CI/CD pipelines (e.g., GitLab, Jenkins, Azure DevOps) for automated deployments. Manage Ansible Tower / AWX for job scheduling, RBAC, workflows, and auditing.
• Infrastructure as Code (IaC): Define and implement IaC best practices using Ansible in conjunction with Terraform, CloudFormation, or similar tools. Automate provisioning in on-prem, hybrid, or cloud (AWS, Azure, GCP) environments.
• Operations & Support: Develop monitoring and logging solutions for automated tasks. Troubleshoot automation issues and provide root cause analysis. Create and maintain detailed documentation and version-controlled code repositories (Git).
• Collaboration & Enablement: Collaborate with system admins, DevOps, cloud, and security teams to identify automation opportunities. Provide knowledge transfer, training, and mentorship to team members on Ansible usage and automation standards.

Required Skills & Experience
• 5+ years of experience in infrastructure automation with strong expertise in Ansible and YAML scripting.
• Proficiency in managing Ansible Tower / AWX environments.
• Deep understanding of Linux system administration and basic Windows automation.
• Experience integrating Ansible with CI/CD tools (e.g., Jenkins, GitLab, Azure DevOps).
• Working knowledge of cloud platforms (AWS, Azure, GCP) and automation in hybrid environments.
• Familiarity with version control (Git) and code review practices.
• Strong scripting skills (e.g., Python, Bash, PowerShell).

Preferred Qualifications
• Red Hat Certified Specialist in Ansible Automation or equivalent certification.
• Experience with Terraform, Packer, Docker, or Kubernetes.
• Familiarity with security automation, patching, and compliance enforcement using Ansible.
• Knowledge of REST APIs and custom Ansible modules.

Soft Skills
• Strong problem-solving and debugging skills.
• Excellent communication and documentation abilities.
• Ability to work independently in a fast-paced environment.
• Passion for automation, efficiency, and continuous improvement.

Qualifications: B.E/B.Tech
Experience range: 5-10 years
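Editor's note: much of the Ansible Tower/AWX work described above can also be driven programmatically. The snippet below is a hedged sketch using the ansible-runner Python library; the playbook name, inventory path and extra variables are invented for illustration and assume the ansible-runner package is installed.

```python
# Hedged sketch: launch an Ansible playbook from Python via ansible-runner.
# Playbook, inventory and extravars are placeholders for illustration only.
import ansible_runner

def patch_web_tier() -> int:
    result = ansible_runner.run(
        private_data_dir="./runner",        # working dir holding project/, inventory/, env/
        playbook="patch_webservers.yml",    # hypothetical playbook
        inventory="inventory/production",   # hypothetical inventory file
        extravars={"reboot_allowed": False},
    )
    print(f"status={result.status} rc={result.rc}")
    for host, failures in (result.stats or {}).get("failures", {}).items():
        print(f"failed on {host}: {failures} task(s)")
    return result.rc

if __name__ == "__main__":
    raise SystemExit(patch_web_tier())
```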

Posted 3 days ago

Apply

8.0 years

25 - 39 Lacs

Hyderabad, Telangana, India

On-site

Location: Hyderabad (Work from Office)
Experience: 8+ Years
Employment Type: Full-Time

Job Summary: We are seeking a highly skilled and experienced Senior PKI & Identity Infrastructure Engineer to lead the design, implementation, and maintenance of our organization's critical identity and security infrastructure. This role focuses on enterprise Public Key Infrastructure (PKI), Active Directory (AD), and cloud-based infrastructure in hybrid environments. The ideal candidate will bring deep expertise in PKI, Windows Certificate Services, Keyfactor, and cloud platforms such as AWS and Azure. This project is scheduled for one year and may be extended up to three years.

Key Areas of Responsibility

PKI Infrastructure
• Design and maintain enterprise PKI architecture using Windows Certificate Authority.
• Administer and optimize the Keyfactor platform for certificate lifecycle management.
• Configure and manage Hardware Security Modules (HSMs).
• Automate certificate discovery, issuance, and renewal processes.
• Develop PKI policies, procedures, and disaster recovery plans.
• Monitor PKI environments to ensure compliance with security standards.

Active Directory & Identity Management
• Architect and secure enterprise Active Directory infrastructure.
• Lead Active Directory hardening and consolidation initiatives.
• Configure and manage Microsoft Entra ID (formerly Azure AD).
• Design and manage enterprise SSO solutions and application integrations.
• Implement Zero Trust Architecture and identity lifecycle management.
• Establish security monitoring and alerting for AD environments.

Cloud Infrastructure
• Design and maintain hybrid environments across AWS EC2 and Azure.
• Develop Infrastructure as Code (IaC) solutions using Terraform.
• Implement cloud security best practices and compliance frameworks.
• Manage cloud identity federation and networking.
• Design disaster recovery solutions and optimize cloud resource utilization.

Required Technical Skills
• PKI Expertise: Advanced experience with Windows Certificate Authority. Hands-on with the Keyfactor platform. Deep understanding of HSMs and certificate lifecycle management. Knowledge of PKI standards and compliance requirements.
• Active Directory & Identity: Expert-level understanding of Active Directory architecture. AD security hardening, consolidation, and remediation. Experience with Microsoft Entra ID (Azure AD). Familiarity with SSO, application federation, and SIEM integration.
• Cloud & Automation: Proficient in Terraform scripting for AWS/Azure infrastructure. Strong understanding of AWS EC2, Azure VM, networking, and identity. Automation using PowerShell and Python, and integration with CI/CD pipelines.

Required Qualifications
• 8+ years of experience in IT infrastructure and security.
• 5+ years of specialized experience in PKI and Keyfactor.
• Strong cloud infrastructure knowledge (AWS and Azure).
• Proven track record of securing and managing enterprise-scale AD environments.

Certifications Preferred
• Microsoft (MCSE, Azure Security Engineer)
• AWS (Associate or Professional level)
• Security (CISSP, CISM)

Additional Skills
• Strong project management and leadership abilities
• Excellent communication and problem-solving skills
• Experience in technical documentation and change management
• Ability to explain complex concepts to both technical and non-technical stakeholders

Key Projects & Tasks
• PKI Infrastructure: Design and deploy enterprise PKI. Automate certificate lifecycle with Keyfactor. Configure HSMs and ensure compliance.
• Active Directory: Implement AD hardening and security monitoring. Manage Entra ID and enterprise SSO. Establish identity governance.
• Cloud Infrastructure: Develop Terraform modules for hybrid cloud. Implement cloud security controls and DR planning. Optimize cloud costs and automate deployments.
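Editor's note: certificate lifecycle automation of the sort described above (discovery, renewal, expiry monitoring) often starts with checking how long live endpoints have left. The sketch below is a hedged, generic illustration only; the host list is hypothetical and it requires a recent version of the cryptography package.

```python
# Hedged sketch: report days until TLS certificate expiry for a list of endpoints.
# Host list is hypothetical; requires `cryptography` >= 42 for not_valid_after_utc.
import socket
import ssl
from datetime import datetime, timezone
from cryptography import x509

def days_until_expiry(host: str, port: int = 443) -> int:
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    return (cert.not_valid_after_utc - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    for host in ("example.com", "example.org"):     # placeholder hosts
        try:
            print(f"{host}: {days_until_expiry(host)} days remaining")
        except (ssl.SSLError, socket.error) as exc:
            print(f"{host}: check failed ({exc})")
```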

Posted 3 days ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Responsibilities Collaborate with clients to understand their business requirements and translate them into cloud-based solutions. Design and architect scalable, reliable, and secure cloud solutions on platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP). Conduct cloud assessments and provide recommendations on cloud migration strategies, including lift-and-shift, re-platforming, or refactoring. Develop architecture blueprints, diagrams, and documentation that clearly articulate the proposed solutions. Work closely with development teams to ensure cloud architecture best practices are followed throughout the software development lifecycle. Assist in the evaluation and selection of cloud services, tools, and frameworks to meet project requirements. Implement and configure cloud services, infrastructure, and networking components as required. Ensure security and compliance of cloud solutions by implementing appropriate security controls and following industry best practices. Collaborate with operations teams to optimize cloud resource utilization, monitor performance, and troubleshoot issues. Stay up to date with the latest cloud technologies, trends, and best practices, and share knowledge with the team and clients. Candidate requirements: Bachelor’s degree in computer science, Information Technology, or a related field. Proven experience as a Cloud Solution Architect or a similar role, with a deep understanding of cloud technologies and architectures. Strong knowledge of cloud platforms such as AWS, Azure, or GCP, including their core services and capabilities. Experience designing and implementing scalable, reliable, and secure cloud solutions. Familiarity with cloud migration strategies and patterns. Knowledge of cloud security principles and best practices. Proficiency in infrastructure-as-code (IaC) tools and techniques, such as Terraform or CloudFormation. Strong problem-solving and analytical skills, with the ability to understand complex requirements and propose appropriate cloud solutions. Excellent communication and presentation skills, with the ability to effectively communicate technical concepts to both technical and non-technical stakeholders. Relevant certifications such as AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect, or Google Cloud Certified – Professional Cloud Architect. Experience with DevOps practices and CI/CD pipelines is a plus

Posted 3 days ago

Apply

7.0 years

0 Lacs

Kochi, Kerala, India

On-site

Responsibilities and Duties:
• 7+ years of professional experience as a DevOps engineer
• A team player with zeal and a passion for learning new technologies
• Design, deployment, and support of OCI infrastructure and applications, using CI/CD
• Help map existing patterns/standards from AWS into OCI
• Collaborate with external teams and stakeholders to drive project deliverables
• Work with engineers and provide skills transfer so they can deliver their best in an inclusive and psychologically safe environment
• Ability to rapidly absorb new technologies and work in a fast-paced environment
• Ability to identify and communicate risks/impacts and then adjust as necessary

Requirements
• Experience in the implementation and design of cloud infrastructure environments using modern CI/CD deployment patterns with Terraform, Jenkins, and Git.
• Demonstrable knowledge and expertise in operational support of systems hosted on OCI.
• Support multiple application teams to achieve project delivery on OCI and AWS (AWS experience an advantage).
• Experience contributing to technical and planning meetings to identify the best outcomes.
• Strong Agile, problem-solving, and time management skills.
• Experience in deployment/support of Oracle databases within OCI (deep DBA skills NOT required, however).

Qualification: B.Tech/MCA/M.Tech/MSc (Computers)/Equivalent
Experience: 5+ Years
Skills: OCI (Oracle Cloud Infrastructure), Oracle Database, Terraform, DevOps, CI/CD, Jenkins, Bitbucket, GitHub

Posted 3 days ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

As a FHIR Solutions Engineer, you will play a pivotal role in designing, developing, and implementing FHIR-based solutions. You will leverage your extensive experience and expertise in FHIR to enhance our clinical systems, ensuring seamless interoperability and data exchange. Day-to-day duties will include the design and development of FHIR-based clinical knowledge solutions, EMR integration, knowledge transfer, and participation in the team's agile process.

Primary Responsibility
• Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
• Bachelor’s or master’s degree in Computer Science, Information Technology, or a related field
• 3+ years of experience in healthcare technology with a focus on FHIR-based solutions
• 3+ years of experience with the FHIR specification and operations
• 3+ years of experience in the healthcare industry
• 2+ years of experience in Azure infrastructure and architecture
• Solid understanding of clinical data standards and interoperability
• Technical skills: TypeScript/JavaScript, NPM/Node.js, FHIR experience, Bash shell scripting, GitHub/Git, GitHub Actions/Workflows
• Proven track record of successfully implementing FHIR-based solutions in large healthcare organizations
• Proven excellent problem-solving skills and attention to detail
• Proven solid communication and collaboration skills

Preferred Qualifications
• Experience or knowledge in Terraform and cloud architecture

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
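Editor's note: for readers unfamiliar with the FHIR specification referenced above, a FHIR server is essentially a REST API over typed resources. The hedged sketch below reads Patient resources with the requests library; the base URL points at a public FHIR test server (an assumption for illustration), not any Optum system.

```python
# Hedged sketch: search FHIR Patient resources over the REST API.
# Base URL is a public test server (an assumption), not any Optum system.
import requests

BASE = "https://hapi.fhir.org/baseR4"
HEADERS = {"Accept": "application/fhir+json"}

def search_patients(family_name: str, count: int = 3) -> None:
    resp = requests.get(
        f"{BASE}/Patient",
        params={"family": family_name, "_count": count},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()                       # a FHIR Bundle resource
    for entry in bundle.get("entry", []):
        patient = entry["resource"]
        print(patient["id"], patient.get("name", [{}])[0].get("family"))

if __name__ == "__main__":
    search_patients("Smith")
```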

Posted 3 days ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are seeking a talented and motivated Data Engineer to join our growing data team. You will play a key role in building scalable data pipelines, optimizing data infrastructure, and enabling data-driven solutions.

Primary Responsibilities
• Design, develop, and maintain scalable ETL/ELT pipelines for batch and real-time data processing
• Build and optimize data models and data warehouses to support analytics and reporting
• Collaborate with analysts and software engineers to deliver high-quality data solutions
• Ensure data quality, integrity, and security across all systems
• Monitor and troubleshoot data pipelines and infrastructure for performance and reliability
• Contribute to internal tools and frameworks to improve data engineering workflows
• Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
• 5+ years of experience working on commercially available software and/or healthcare platforms as a Data Engineer
• 3+ years of solid experience designing and building enterprise data solutions on the cloud
• 1+ years of experience developing solutions hosted within public cloud providers such as Azure or AWS, or private cloud/container-based systems using Kubernetes/OpenShift
• Experience with modern relational databases
• Experience with data warehousing services, preferably Snowflake
• Experience using modern software engineering and product development tools including Agile/SAFe, Continuous Integration, Continuous Delivery, DevOps, etc.
• Solid experience operating in a quickly changing environment and driving technological innovation to meet business requirements
• Skilled at optimizing SQL statements
• Subject matter expert on cloud technologies (preferably Azure) and the Big Data ecosystem

Preferred Qualifications
• Experience with real-time data streaming and event-driven architectures
• Experience building Big Data solutions on public cloud (Azure)
• Experience building data pipelines on Azure using Databricks (Spark, Scala), Azure Data Factory, Kafka and Kafka Streams, App Services, and Azure Functions
• Experience developing RESTful services in .NET, Java or any other language
• Experience with DevOps in data engineering
• Experience with microservices architecture
• Exposure to DevOps practices and infrastructure-as-code (e.g., Terraform, Docker)
• Knowledge of data governance and data lineage tools
• Ability to establish repeatable processes and best practices and implement version control software in a cloud team environment

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
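Editor's note: as a generic illustration of the batch ETL work described above (not tied to any Optum pipeline; the paths and column names are invented), a minimal PySpark job might look like this.

```python
# Hedged sketch: minimal PySpark batch ETL step with invented paths and columns.
from pyspark.sql import SparkSession, functions as F

def run_job(input_path: str, output_path: str) -> None:
    spark = SparkSession.builder.appName("claims-daily-aggregate").getOrCreate()
    claims = spark.read.json(input_path)                 # e.g. raw landing-zone JSON
    daily = (
        claims
        .withColumn("service_date", F.to_date("service_date"))
        .groupBy("service_date", "provider_id")
        .agg(F.count("*").alias("claim_count"),
             F.sum("billed_amount").alias("billed_total"))
    )
    daily.write.mode("overwrite").partitionBy("service_date").parquet(output_path)
    spark.stop()

if __name__ == "__main__":
    run_job("/landing/claims/", "/curated/claims_daily/")   # placeholder paths
```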

Posted 3 days ago

Apply

5.0 years

0 Lacs

Greater Chennai Area

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: DevOps
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving discussions and provide guidance to team members, ensuring that best practices are followed throughout the development process.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and provide regular updates to stakeholders.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in DevOps.
- Strong understanding of continuous integration and continuous deployment practices.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Familiarity with containerization technologies like Docker and Kubernetes.
- Ability to implement infrastructure as code using tools like Terraform or Ansible.

Additional Information:
- The candidate should have a minimum of 3 years of experience in DevOps.
- This position is based at our Chennai office.
- 15 years of full-time education is required.

Posted 3 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position Overview
Job Title: Cloud DevOps Engineer
Location: Pune, India

Role Description
Our Corporate Bank Technology team is a global team of 3,000 across 30 countries. The primary businesses that we support within Corporate Bank are Cash Management, Securities Services, Trade Finance and Trust & Agency Services. CB Technology supports these businesses through CIO-aligned teams and also through ‘horizontals’ such as Client Connectivity, Surveillance and Regulatory, Infrastructure, Architecture, Production, and Risk & Control. Deutsche Bank’s Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What We’ll Offer You
As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:
• Best-in-class leave policy
• Gender-neutral parental leave
• 100% reimbursement under the childcare assistance benefit (gender neutral)
• Sponsorship for industry-relevant certifications and education
• Employee Assistance Program for you and your family members
• Comprehensive hospitalization insurance for you and your dependents
• Accident and term life insurance
• Complimentary health screening for 35 yrs. and above

Your Key Responsibilities
• Design, build, and maintain scalable, secure, and high-availability infrastructure on the GCP Cloud platform
• Manage infrastructure as code using tools like Terraform
• Develop and optimize CI/CD pipelines to ensure smooth, efficient and reliable data workload releases
• As part of our Data Platform team, work on various components as a DevOps Engineer
• Deliver high-quality software and be passionate about software engineering
• Enable the adoption of practices such as SRE and DevSecOps to minimise toil and manual tasks and increase automation and stability
• Automate repetitive tasks and processes to improve efficiency and reduce errors
• Develop and maintain scripts for infrastructure automation, monitoring and deployment
• Implement and enforce security best practices across the infrastructure and deployment processes

Your Skills And Experience
• Proficiency in Infrastructure as Code - Terraform (must)
• Proficiency in cloud platforms such as Google Cloud (preferred), AWS or Azure
• Able to design and develop Terraform modules, templates and scripts to provision and manage GCP infrastructure resources at enterprise level
• Usage of enterprise security management solutions, including GCP Secret Manager
• Network fundamentals – firewalls and ingress/egress patterns
• Skills in at least one of: GCP Networking, GCP Data Services, GCP Serverless
• Experience of security configuration management via guardrails
• Experience in CI/CD tools; GitHub Actions CI/CD experience would be a plus
• Experience with Docker/Kubernetes (creating images, deployment)
• Knowledge of GCP Data Services such as BigQuery, Dataproc and Composer would be a plus
• Experience with GCP-based security hardening, including IAM, ACLs and firewall rules
• Exposure to delivering good-quality code within enterprise-scale development
• Working knowledge of environment monitoring tools such as GCO, New Relic, Prometheus, Grafana

How We’ll Support You
• Training and development to help you excel in your career
• Coaching and support from experts in your team
• A culture of continuous learning to aid progression
• A range of flexible benefits that you can tailor to suit your needs

About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 3 days ago

Apply

0 years

0 Lacs

India

Remote

About Us
We are a cutting-edge technology company specializing in media forensics. Our mission is to develop advanced AI solutions that detect and localize tampering attempts in digital media such as images and PDFs, ensuring the authenticity and integrity of our clients' content.

The Role
We are seeking a Senior DevOps Engineer to design, implement, and maintain robust cloud infrastructure that underpins mission-critical workflows for our international clients. The successful candidate will collaborate closely with development teams to automate deployments, ensure scalability, and maintain high availability across our services.

Key Responsibilities
• Infrastructure as Code: Define and manage cloud resources with Terraform or AWS CDK, adhering to strict version-control and peer-review practices.
• Continuous Integration / Continuous Deployment (CI/CD): Build and maintain pipelines that enable reliable, zero-downtime deployments.
• Scalability and Reliability: Architect and optimise serverless and containerised solutions (AWS Lambda, ECS/EKS) capable of handling variable workloads.
• Observability: Implement comprehensive logging, metrics, and tracing to facilitate proactive incident detection and resolution.
• Collaboration: Work with backend engineers to streamline deployment processes, improve system performance, and uphold security best practices.

Required Qualifications
• Fluent in English
• Extensive experience with Amazon Web Services (Cognito, API Gateway, Lambda, S3, DynamoDB, ECS/EKS, IAM, and CloudWatch)
• Proficiency with Terraform or AWS CDK
• Proficiency in Python
• Knowledge of Kubernetes (managed via EKS or self-hosted)
• CI/CD using Bitbucket Pipelines and Bitbucket Pipes, ECR (Elastic Container Registry) for Docker images, and a CodeArtifact repository for hosting private libraries

Nice-to-Haves
• Proficiency in Node.js
• Expertise in designing and optimising DynamoDB architectures

Why Photocert?
• Fully remote work
• Salary based on experience
• Coaching and learning opportunities
• Social virtual Fridays
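Editor's note: since the posting allows either Terraform or AWS CDK for infrastructure as code, here is a hedged CDK-in-Python sketch of a tiny serverless stack. The stack name, resource names and inline handler are invented for illustration and do not describe the company's actual infrastructure.

```python
# Hedged sketch: a tiny AWS CDK (v2) stack with an S3 bucket and a Lambda function.
# Names and the inline handler are illustrative only.
from aws_cdk import App, Stack, aws_lambda as _lambda, aws_s3 as s3
from constructs import Construct

class ForensicsIngestStack(Stack):                      # hypothetical stack name
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        uploads = s3.Bucket(self, "UploadsBucket", versioned=True)

        handler = _lambda.Function(
            self, "TamperCheckFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_inline(
                "def handler(event, context):\n    return {'status': 'queued'}\n"
            ),
        )
        uploads.grant_read(handler)   # least-privilege read access to the bucket

app = App()
ForensicsIngestStack(app, "ForensicsIngestStack")
app.synth()
```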

Posted 3 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Head of Architecture and Technology (Hands-On, High-Ownership)
Company: Elysium PTE. LTD.
Location: Chennai, Tamil Nadu — at office
Employment Type: Full-time, permanent
Compensation: ₹15 L fixed CTC + up to 5 % ESOP (performance-linked vesting, 4-year schedule with 1-year cliff)
Reports to: Founding Team
________________________________________
About Elysium
Elysium is a founder-led studio headquartered in Singapore with its delivery hub in Chennai. We are currently building a global gaming-based mar-tech platform while running a premium digital-services practice (branding, immersive web, SaaS MVPs, AI-powered solutions). We thrive on speed, experimentation and shared ownership.
________________________________________
The opportunity
We’re looking for a hungry technologist who can work in an early-stage start-up alongside the founders to build ambitious global products & services. You’ll code hands-on every week, shape product architecture, and grow a lean engineering pod—owning both our flagship product and client deliveries.
________________________________________
What you will achieve in your first 12 months
• Coordinate & develop the in-house products with internal & external teams.
• Build and mentor a six-to-eight-person engineering/design squad that hits ≥ 85 % on-time delivery for IT-service clients.
• Cut mean time-to-deployment to under 30 minutes through automated CI/CD and Infrastructure-as-Code.
• Implement GDPR-ready data flows and a zero-trust security baseline across all projects.
• Publish quarterly tech radars and internal playbooks that keep the team learning and shipping fast.
________________________________________
Day-to-day responsibilities
• Resource management & planning across the internal & external teams with respect to our products & client deliveries.
• Pair-program and review pull requests to enforce clean, testable code.
• Translate product/user stories into domain models, sprint plans and staffing forecasts.
• Design cloud architecture (AWS / GCP) that balances cost and scale; own IaC, monitoring and on-call until an SRE is hired.
• Evaluate and manage specialist vendors for parts of the flagship app; hold them accountable on quality and deadlines.
• Scope and pitch technical solutions in client calls; draft SoWs and high-level estimates with founders.
• Coach developers and designers, set engineering KPIs, run retrospectives and post-mortems.
• Prepare technical artefacts for future fundraising and participate in VC diligence.
________________________________________
Must-have Requirements
• 5 – 8 years of modern full-stack development with at least one product shipped to >10 k MAU or comparable B2B scale.
• Expert knowledge of modern full-stack ecosystems: Node.js, Python or Go; React/Next.js; distributed data stores (PostgreSQL, DynamoDB, Redis, Kafka or similar).
• Deep familiarity with AWS, GCP or Azure, including cost-optimized design, autoscaling, serverless patterns, container orchestration and IaC tools such as Terraform or CDK.
• Demonstrated ownership of DevSecOps practices: CI/CD, automated testing matrices, vulnerability scanning, SRE dashboards and incident post-mortems.
• Excellent communication skills, able to explain complex trade-offs to founders, designers, marketers and non-technical investors.
• Hunger to learn, ship fast, and own meaningful equity in lieu of a senior-corporate paycheck.
________________________________________
Nice-to-have extras
• Prior work in fintech, ad-tech or loyalty.
• Experience with WebGL/Three.js, real-time event streaming (Kafka, Kinesis), LLM pipelines & blockchain.
• Exposure to seed- or Series-A fundraising, investor tech diligence or small-team leadership.
________________________________________
What we offer
• ESOP of up to 5 % on a 4-year vest (1-year cliff) with performance accelerators tied to product milestones.
• Direct influence on tech stack, culture and product direction—your code and decisions will shape the company’s valuation.
• A team that values curiosity, transparency and shipping beautiful work at start-up speed.
________________________________________

Posted 3 days ago

Apply

75.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Bring more to life. Are you ready to accelerate your potential and make a real difference within life sciences, diagnostics and biotechnology? At Pall Corporation, one of Danaher’s 15+ operating companies, our work saves lives—and we’re all united by a shared commitment to innovate for tangible impact. You’ll thrive in a culture of belonging where you and your unique viewpoint matter. And by harnessing Danaher’s system of continuous improvement, you help turn ideas into impact – innovating at the speed of life.

As a global leader in high-tech filtration, separation, and purification, Pall Corporation thrives on helping our customers solve their toughest challenges. Our products serve diverse, global customer needs across a wide range of applications to advance health, safety and environmentally responsible technologies. From airplane engines to hydraulic systems, scotch to smartphones, OLED screens to paper—every day, Pall is there, helping protect critical operating assets, improve product quality, minimize emissions and waste, and safeguard health. For the exponentially curious, Pall is a place where you can thrive and amplify your impact on the world. Find what drives you on a team with a more than 75-year history of discovery, determination, and innovation. Learn about the Danaher Business System, which makes everything possible.

Job Description:
• Design and implement secure, scalable, and cost-effective cloud infrastructure using AWS services.
• Develop and maintain Infrastructure as Code (IaC) using tools such as Terraform or AWS CloudFormation.
• Lead or support Windows Server upgrades and migrations to AWS, ensuring compliance with security and operational standards.
• Automate cloud operations and deployments through scripting in PowerShell, Python, or Bash.
• Collaborate with network, security, and client technology teams to align cloud solutions with business and compliance requirements.

The essential requirements of the job include:
• AWS Certified Solutions Architect – Associate (or higher).
• Proficiency in Infrastructure as Code (IaC) using Terraform or AWS CloudFormation.
• Experience with Windows Server administration, including upgrades and cloud migrations.
• Strong scripting skills in PowerShell, Python, or Bash.
• Hands-on experience with core AWS services such as EC2, VPC, IAM, S3, and CloudWatch.

Join our winning team today. Together, we’ll accelerate the real-life impact of tomorrow’s science and technology. We partner with customers across the globe to help them solve their most complex challenges, architecting solutions that bring the power of science to life. For more information, visit www.danaher.com.

At Danaher, we value diversity and the existence of similarities and differences, both visible and not, found in our workforce, workplace and throughout the markets we serve. Our associates, customers and shareholders contribute unique and different perspectives as a result of these diverse attributes.
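Editor's note: as a small, hedged example of the kind of AWS scripting this role calls for (the required tag key and region are assumptions), Python with boto3 can audit EC2 instances for a missing tag.

```python
# Hedged sketch: list EC2 instances missing a required tag. Tag key is an assumption.
import boto3

REQUIRED_TAG = "CostCenter"   # hypothetical tagging policy

def untagged_instances(region_name: str = "ap-south-1"):
    ec2 = boto3.client("ec2", region_name=region_name)
    missing = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"] for t in inst.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(inst["InstanceId"])
    return missing

if __name__ == "__main__":
    for instance_id in untagged_instances():
        print(f"{instance_id} is missing the {REQUIRED_TAG} tag")
```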

Posted 3 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Greetings from TCS! TCS has long fuelled the passion of technologists like you. We are a global leader in the technology arena, and there's nothing that can stop us from growing together. Your role is of key importance, as it lays down the foundation for the entire project. Make sure you have a valid EP number before the interview. To create an EP number, please visit https://ibegin.tcs.com/iBegin/register and complete the registration if you have not done so yet.

Role: GCP DevOps Engineer
Experience: 10-12 years overall, with a minimum of 5+ years in GCP
Location: Hyderabad
Primary Skills: GCP DevOps, Terraform, Kubernetes, Docker, Jenkins

Responsibilities:
Work in line with Agile practices.
Stay up to date with emerging trends and technologies in software development and apply them to improve the quality and performance of applications.
Bring good knowledge of Java-based technologies as well as fair knowledge of cloud (GCP) migration.
Demonstrate a solid GCP foundation and hands-on GCP experience, and help support the project's CI/CD pipeline.
Apply good problem-solving skills and an understanding of microservices-based architecture.
GCP fundamentals and hands-on experience with GCP managed services, especially Kubernetes (GKE).
Must have hands-on experience with Terraform, Docker, Jenkins, and Kubernetes.
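As a generic sketch of how a CI/CD step might drive Terraform for GCP (not TCS's actual pipeline), the snippet below wraps the standard terraform init/plan/apply commands from Python; the module path and workspace name are assumptions.

# Illustrative CI helper: run terraform init/plan/apply via subprocess.
# The directory and workspace names are hypothetical; real pipelines add auth, state locking, approvals, etc.
import subprocess

def run(cmd, cwd="infra/gcp"):                     # "infra/gcp" is a placeholder module path
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

run(["terraform", "init", "-input=false"])
run(["terraform", "workspace", "select", "dev"])   # assumes the "dev" workspace already exists
run(["terraform", "plan", "-input=false", "-out=tfplan"])
run(["terraform", "apply", "-input=false", "tfplan"])

A Jenkins stage or Cloud Build step would typically call a script like this, with service-account credentials injected through the environment.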

Posted 3 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position: DevOps Engineer
Location: Hyderabad
Experience: 5+ years

Skills:
5+ years of experience developing AWS applications.
Bachelor's or master's degree in Computer Science or a related field of study, or equivalent experience.
Experience developing solutions on AWS using tools like Terraform, Terragrunt, Jenkins, CloudFormation, etc.
Familiarity with Bitbucket, Terraform Cloud, Jira, and Agile methodologies.
Knowledge of Python, Shell, and PowerShell scripting.
Advanced-level knowledge of AWS Cloud and platform architecture.
Strong debugging and troubleshooting skills.
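To make the CloudFormation-plus-Python requirement concrete, here is an illustrative boto3 sketch of creating a stack from a local template; the stack name, template path, parameters, and tags are hypothetical and not tied to this employer's environment.

# Illustrative sketch: creating a CloudFormation stack from a local template with boto3.
# Stack name, template path, region, and parameter values are hypothetical placeholders.
import boto3

cfn = boto3.client("cloudformation", region_name="ap-south-1")

with open("templates/network.yaml") as f:          # placeholder template path
    template_body = f.read()

cfn.create_stack(
    StackName="demo-network",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "dev"}],
    Capabilities=["CAPABILITY_NAMED_IAM"],
    Tags=[{"Key": "ManagedBy", "Value": "devops"}],
)
cfn.get_waiter("stack_create_complete").wait(StackName="demo-network")
print("Stack created")

Terraform or Terragrunt would cover the same ground declaratively; the boto3 route is shown only because the posting also asks for Python scripting.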

Posted 3 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Title: Software Engineer - Backend (Python)
Experience: 7+ years
Location: Hyderabad

About the Role: Our team is responsible for building the backend components of the GenAI Platform. The platform offers safe, compliant and cost-efficient access to LLMs, including open-source and commercial ones, adhering to Experian standards and policies, as well as reusable tools, frameworks and coding patterns for the various functions involved in fine-tuning an LLM or developing a RAG-based application.

What you'll do here:
Design and build backend components of our GenAI platform on AWS.
Collaborate with geographically distributed cross-functional teams.
Participate in the on-call rotation with the rest of the team to handle production incidents.

What you'll need to succeed (must-have skills):
At least 7 years of professional backend web development experience with Python.
Experience with AI and RAG-based applications.
Experience with DevOps and IaC tools such as Terraform, Jenkins, etc.
Experience with MLOps platforms such as AWS SageMaker, Kubeflow or MLflow.
Experience with web development frameworks such as Flask, Django or FastAPI.
Experience with concurrent programming designs such as AsyncIO.
Experience with any of the public cloud platforms (AWS, Azure, GCP), preferably AWS.
Experience with CI/CD practices, tools, and frameworks.

Nice-to-have skills:
Experience with Apache Kafka and developing Kafka client applications in Python.
Experience with big data processing frameworks, preferably Apache Spark.
Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
Experience with unit and functional testing frameworks.
Experience with various Python packaging options such as Wheel, PEX or Conda.
Experience with metaprogramming techniques in Python.
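Since the role centres on Python backends with FastAPI and AsyncIO for RAG-style workloads, a minimal, generic sketch follows; the retrieve() and generate() stubs are hypothetical placeholders, not Experian's platform APIs.

# Generic sketch of an async FastAPI endpoint for a RAG-style query flow.
# retrieve() and generate() are stand-in stubs, not any real platform's API.
import asyncio
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    question: str

async def retrieve(question: str) -> list[str]:
    await asyncio.sleep(0)                 # placeholder for a vector-store lookup
    return ["context snippet 1", "context snippet 2"]

async def generate(question: str, context: list[str]) -> str:
    await asyncio.sleep(0)                 # placeholder for an LLM call
    return f"Answer to '{question}' using {len(context)} snippets"

@app.post("/query")
async def query(q: Query) -> dict:
    context = await retrieve(q.question)
    answer = await generate(q.question, context)
    return {"answer": answer, "sources": context}

Run locally with, for example, uvicorn app:app --reload, assuming the file is saved as app.py.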

Posted 3 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Title: Software Engineer - Backend (Python)

About the Role: Our team is responsible for building the backend components of an MLOps platform on AWS. The backend components we build are the fundamental blocks for feature engineering, feature serving, model deployment and model inference in both batch and online modes.

What you'll do here:
Design and build backend components of our MLOps platform on AWS.
Collaborate with geographically distributed cross-functional teams.
Participate in the on-call rotation with the rest of the team to handle production incidents.

What you'll need to succeed (must-have skills):
Experience with web development frameworks such as Flask, Django or FastAPI.
Experience working with WSGI and ASGI web servers such as Gunicorn, Uvicorn, etc.
Experience with concurrent programming designs such as AsyncIO.
Experience with unit and functional testing frameworks.
Experience with any of the public cloud platforms (AWS, Azure, GCP), preferably AWS.
Experience with CI/CD practices, tools, and frameworks.

Nice-to-have skills:
Experience with Apache Kafka and developing Kafka client applications in Python.
Experience with MLOps platforms such as AWS SageMaker, Kubeflow or MLflow.
Experience with big data processing frameworks, preferably Apache Spark.
Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
Experience with DevOps and IaC tools such as Terraform, Jenkins, etc.
Experience with various Python packaging options such as Wheel, PEX or Conda.
Experience with metaprogramming techniques in Python.

Primary Skills: Python development (Flask, Django or FastAPI); WSGI and ASGI web servers (Gunicorn, Uvicorn, etc.); AWS.
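As an illustrative sketch of the AsyncIO concurrency this posting highlights, the snippet below fans out several feature lookups for one entity and merges the results; fetch_features() is a hypothetical stub standing in for a real feature-store call.

# Minimal AsyncIO sketch: fetch several feature groups concurrently for one entity.
# fetch_features() is a hypothetical stub, not a real feature-store client.
import asyncio

async def fetch_features(group: str, entity_id: str) -> dict:
    await asyncio.sleep(0.1)               # simulate I/O latency
    return {group: f"features-for-{entity_id}"}

async def build_feature_vector(entity_id: str) -> dict:
    groups = ["credit", "transactions", "devices"]
    results = await asyncio.gather(*(fetch_features(g, entity_id) for g in groups))
    merged: dict = {}
    for r in results:
        merged.update(r)
    return merged

if __name__ == "__main__":
    print(asyncio.run(build_feature_vector("customer-123")))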

Posted 3 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary: We are looking for a highly skilled AWS DevOps Engineer with hands-on expertise across both AWS and Microsoft Azure. You will be responsible for building secure, scalable, and automated infrastructure while supporting end-to-end CI/CD processes, containerization, and monitoring in a hybrid cloud environment. This is a hands-on engineering role with a strong focus on performance, security, and automation.

Key Responsibilities:
Architect, implement, and maintain cloud-native and hybrid infrastructure using AWS and Azure services.
Build and manage CI/CD pipelines for automated build, test, and deployment using tools like GitHub Actions, Jenkins, GitLab CI, or Azure DevOps.
Leverage Terraform, CloudFormation, and ARM templates for repeatable infrastructure as code (IaC) deployments.
Containerize applications and orchestrate environments using Docker and Kubernetes (EKS, AKS).
Set up and manage monitoring, logging, and alerting using CloudWatch, Azure Monitor, Prometheus, Grafana, or the ELK stack.
Automate system-level operations and workflows using Python, Bash, or PowerShell.
Continuously track and optimize cloud cost, performance, and security posture.
Collaborate across Dev, QA, and Security teams to enforce DevSecOps practices.
Diagnose and troubleshoot infrastructure and deployment issues across development and production environments.

Required Skills & Qualifications:
6+ years of experience as a DevOps Engineer in a production-grade cloud-native environment.
Strong expertise in AWS core services (EC2, IAM, S3, RDS, Lambda, CloudFormation, etc.).
Proficient with Microsoft Azure services and hybrid cloud deployment.
Deep working knowledge of Infrastructure as Code tools like Terraform, CloudFormation, or ARM templates.
Practical experience with CI/CD tools such as Jenkins, GitLab CI, Azure DevOps, GitHub Actions.
Experience building and managing Kubernetes clusters (EKS, AKS) and container pipelines.
Proficiency in scripting with Python, Bash, or PowerShell.
Strong understanding of cloud security, IAM policies, firewall configuration, and networking fundamentals.
Experience working in agile, fast-paced DevOps environments.

Preferred Qualifications:
Bachelor's or master's degree in Computer Science, Engineering, or a related technical field.
AWS or Azure certifications (e.g., AWS DevOps Engineer – Professional, Azure Administrator Associate).
Familiarity with GitOps tools such as ArgoCD or Flux.
Exposure to hybrid or multi-cloud strategy and deployment scenarios.
Experience working in Agile/Scrum-based teams.
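For a concrete flavour of the monitoring-and-automation side of this role, here is a small, generic boto3 sketch that publishes a custom CloudWatch metric; the namespace, metric name, and dimensions are assumptions rather than any real deployment's conventions.

# Illustrative monitoring sketch: publish a custom deployment-health metric to CloudWatch.
# The namespace, metric name, dimension values, and region are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_data(
    Namespace="Platform/Deployments",
    MetricData=[{
        "MetricName": "FailedDeployments",
        "Dimensions": [{"Name": "Environment", "Value": "prod"}],
        "Value": 0,
        "Unit": "Count",
    }],
)
print("Published FailedDeployments metric")

An Azure-side equivalent would push the same signal through Azure Monitor custom metrics or Application Insights.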

Posted 3 days ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Azure Developer

About the Company/Team: Oracle FSGIU's Finergy division is a specialized team dedicated to the Banking, Financial Services, and Insurance (BFSI) industry, offering innovative solutions and deep domain expertise. With a focus on accelerated implementation, Finergy helps financial institutions rapidly deploy multi-channel platforms, ensuring a seamless customer experience. Our team excels in providing end-to-end banking solutions, leveraging integrated dashboards and analytics to enhance operational efficiency. Finergy's consulting services provide strategic guidance, aligning technology with business goals.

Job Summary: We are seeking a talented Azure Developer with 4–6 years of experience to join our dynamic team and contribute to the development of cutting-edge cloud-native applications on the Microsoft Azure platform. The role involves close collaboration with architects, DevOps, and cross-functional teams to deliver secure and scalable solutions, utilizing modern development practices.

Key Responsibilities:
Design and develop cloud-based applications using Azure App Services, Azure Functions, and Logic Apps, ensuring scalability and performance.
Build and maintain RESTful APIs and backend services, integrating with databases and external systems to create robust solutions.
Implement CI/CD pipelines using Azure DevOps and Git to ensure efficient and automated deployment processes.
Utilize Infrastructure as Code (IaC) tools such as ARM, Bicep, or Terraform to provision and manage Azure resources effectively.
Create automated workflows and integrations using Azure Logic Apps, Event Grid, and Service Bus, optimizing business processes.
Implement security measures using Azure Key Vault, Azure AD, and other security services to protect sensitive data.
Monitor application performance and health using Azure Monitor, Application Insights, and Log Analytics, ensuring timely issue resolution.
Collaborate actively with DevOps, QA, and fellow developers in an Agile/Scrum environment to deliver high-quality software.

Qualifications & Skills
Mandatory:
4–6 years of software development experience, including at least 2 years in Azure development.
Proficiency in .NET Core / C# or other backend programming languages like Node.js or Python.
In-depth knowledge of Azure PaaS services: App Services, Azure Functions, Azure SQL, Cosmos DB, Storage, and API Management.
Hands-on experience with CI/CD implementation using Azure DevOps or GitHub Actions.
Understanding of microservices architecture and containerization using Docker.
Familiarity with scripting languages (PowerShell or Azure CLI) for automation.
Strong debugging and troubleshooting abilities in distributed cloud environments.
Good-to-Have:
Microsoft certifications such as Azure Developer Associate (AZ-204) or Azure Fundamentals (AZ-900) are preferred.
Experience with Azure Kubernetes Service (AKS) and containerized deployments is advantageous.
Knowledge of Agile methodologies, DevOps practices, and Test-Driven Development (TDD).
Excellent communication skills and a collaborative mindset for effective teamwork.

Self-Assessment Questions:
Describe a cloud-native application you developed on Azure. What Azure services did you use, and how did you ensure scalability and security?
Explain your experience with CI/CD pipelines. How have you utilized Azure DevOps or similar tools to automate the deployment process?
Share a scenario where you implemented microservices architecture. What challenges did you face, and how did you ensure effective communication between services?
Discuss a complex debugging scenario in a distributed cloud environment. How did you identify and resolve the issue?

Career Level: IC2

ABOUT US: As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
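To ground the Azure Functions portion of this role, here is a generic HTTP-trigger sketch using the azure-functions Python programming model; the payload shape and response format are assumptions, and the route and bindings would live in the function's configuration (function.json) rather than in code.

# Generic Azure Functions HTTP trigger sketch (Python programming model).
# The payload shape and response format are hypothetical placeholders.
import json
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    try:
        payload = req.get_json()
    except ValueError:
        return func.HttpResponse("Expected a JSON body", status_code=400)

    account_id = payload.get("accountId", "unknown")   # hypothetical field
    result = {"accountId": account_id, "status": "received"}
    return func.HttpResponse(
        json.dumps(result),
        mimetype="application/json",
        status_code=200,
    )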

Posted 4 days ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

About This Role
Are you interested in building innovative technology that shapes the financial markets? Do you like working at the speed of a startup, and solving some of the world's most exciting challenges? Do you want to work with, and learn from, hands-on leaders in technology and finance? At BlackRock, we are looking for Software Engineers who like to innovate and solve sophisticated problems. We recognize that strength comes from diversity, and will embrace your outstanding skills, curiosity, and passion while giving you the opportunity to grow technically and as an individual. We invest and protect over $9 trillion (USD) of assets and have an extraordinary responsibility to our clients all over the world. Our technology empowers millions of investors to save for retirement, pay for college, buy a home, and improve their financial well-being. Being a technologist at BlackRock means you get the best of both worlds: working for one of the most sophisticated financial companies and being part of a software development team responsible for next generation technology and solutions.

What are Aladdin and Aladdin Engineering? You will be working on BlackRock's investment operating system called Aladdin. Aladdin is used both internally within BlackRock and externally by many financial institutions. Aladdin combines sophisticated risk analytics with comprehensive portfolio management, trading, and operations tools on a single platform to power informed decision-making and create a connective tissue for thousands of users investing worldwide. Our development teams reside inside the Aladdin Engineering group. We collaboratively build the next generation of technology that changes the way information, people, and technology intersect for global investment firms. We build and package tools that manage trillions in assets and support millions of financial instruments. We perform risk calculations and process millions of transactions for thousands of users every day worldwide!

As a member of Aladdin Engineering, you will be:
Tenacious: Work in a fast-paced and highly complex environment.
A creative thinker: Analyse multiple solutions and deploy technologies in a flexible way.
A great teammate: Think and work collaboratively and communicate effectively.
A fast learner: Pick up new concepts and apply them quickly.

Responsibilities include:
Collaborate with team members in a multi-office, multi-country environment.
Deliver high efficiency, high availability, concurrent and fault tolerant software systems.
Significantly contribute to development of Aladdin's global, multi-asset trading platform.
Work with product management and business users to define the roadmap for the product.
Design and develop innovative solutions to complex problems, identifying issues and roadblocks.
Apply validated quality software engineering practices through all phases of development.
Ensure resilience and stability through quality code reviews, unit, regression and user acceptance testing, DevOps and level-two production support.
Be a leader with vision and a partner in brainstorming solutions for team productivity, efficiency, guiding and motivating others.
Drive a strong culture by bringing principles of inclusion and diversity to the team and setting the tone through specific recruiting, management actions and employee engagement.
For VP level: In addition to the above, a VP-level candidate should be able to lead individual projects' priorities, deadlines and deliverables.

Qualifications:
B.S./M.S. degree in Computer Science, Engineering, or a related subject area, or B.E./B.Tech./MCA or any other relevant engineering degree from a reputed university.
For VP level: 8+ years of proven experience.

Skills and experience:
A proven foundation in C++ and related technologies in a multiprocess distributed UNIX environment.
Knowledge of Java, Perl, and/or Python is a plus.
Track record building high quality software with design-focused and test-driven approaches.
Experience working with an extensive legacy code base (e.g., C++98).
Understanding of performance issues (memory, processing time, I/O, etc.).
Understanding of relational databases is a must.
Great analytical, problem-solving and communication skills.
Some experience or a real interest in finance, investment processes, and/or an ability to translate business problems into technical solutions.
For VP level: In addition to the above, a VP-level candidate should have experience leading development teams or projects, or being responsible for the design and technical quality of a significant application, system, or component, along with the ability to form positive relationships with partnering teams, sponsors, and user groups.

Nice to have and opportunities to learn:
Expertise in building distributed applications using SQL and/or NoSQL technologies like MS SQL, Sybase, Cassandra or Redis.
Real-world practice applying cloud-native design patterns to event-driven microservice architectures.
Exposure to high-scale distributed technologies like Kafka, Mongo, Ignite, Redis.
Exposure to building microservices and APIs, ideally with REST, Kafka or gRPC.
Experience with optimization, algorithms or related quantitative processes.
Experience with cloud platforms like Microsoft Azure, AWS, Google Cloud.
Experience with cloud deployment technology (Docker, Ansible, Terraform, etc.) is also a plus.
Experience with DevOps and tools like Azure DevOps.
Experience with AI-related projects/products or experience working in an AI research environment.
Exposure to Docker, Kubernetes, and cloud services is beneficial.
A degree, certifications or open-source track record that shows you have a mastery of software engineering principles.

Our benefits: To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model: BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock: At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses.
Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
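Although this role is primarily C++, Kafka and Python both appear among the pluses, so here is a minimal, illustrative kafka-python producer sketch; the broker address, topic name, and message fields are placeholders, not Aladdin's actual event schema.

# Minimal kafka-python producer sketch; broker address, topic, and fields are placeholders.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("trade-events", {"orderId": "42", "side": "BUY", "qty": 100})
producer.flush()   # block until buffered messages are actually delivered
producer.close()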

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies