0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview
Job Title: Senior DevOps Engineer, AVP
Location: Pune, India

Role Description
DB Global Technology is Deutsche Bank's technology centre in Pune. The team is made up of enthusiastic professionals who work in an international environment, adapting to different contexts and learning new technologies and parts of Deutsche Bank's businesses. Every day we look at what needs to be done to support continuous business operations and how to improve current activities. Changing the Bank is a challenging endeavour that we tackle every day, and we enjoy our success when our efforts fundamentally change how Deutsche Bank works.

We are seeking a highly skilled and proactive DevOps Engineer with strong technical and operational expertise and a deep understanding of private cloud infrastructures. You will be part of a dynamic team responsible for designing, deploying, automating, and maintaining a private cloud solution based on Kubernetes running on GCP clusters. This role is critical in ensuring the efficiency and reliability of our cloud operations.

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel.

You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best in class leave policy
Gender neutral parental leave
100% reimbursement under the childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive hospitalization insurance for you and your dependents
Accident and term life insurance
Complementary health screening for those aged 35 and above

Your Key Responsibilities
Gather requirements, articulate the problem statement, and own capacity estimation, planning, design, implementation, quality, security, compliance and delivery; broadly, all functional and non-functional responsibilities.
Contribute to team design and the execution of deliveries and releases.
Develop Terraform scripts, Kubernetes YAML manifests and GitHub Actions workflows, with a focus on reusability.
Understand the end-to-end deployment process and the infrastructure landscape on the cloud.
Understand network firewalls and debug deployment-related issues.
Verify developed Terraform scripts, GitHub Actions and Kubernetes YAML through reviews (four-eyes principle).
Configure monitoring and alerting around the health of applications deployed in GCP.
Design infrastructure for targeted deployable components of the application.
Ensure architectural changes (as defined by Architects) are implemented.
Ensure resiliency of deployments and security of the application at the code, build and deploy levels.
Provides Level 3 support for technical infrastructure components of the application (i.e., databases, middleware and user interfaces), and contributes to problem and root cause analysis. Integrates software components following the integration strategy and verifies integrated software components after deployment. Executes the rollback plan precisely when required. Ensures that all infrastructure-as-code changes are captured as Change Items (CIs) and, where applicable, develops routines to deploy CIs to the target environments. Provides release deployments on non-Production-Management-controlled environments. Supports creation of Software Product Training Materials, Software Product User Guides, and Software Product Deployment Instructions, and checks consistency of documents with the respective Software Product Release. Where applicable, manages maintenance of applications and performs technical change requests scheduled according to Release Management processes. Fixes software defects/bugs, and measures and analyses code for quality. Collaborates with colleagues participating in other stages of the Software Development Lifecycle (SDLC). Identifies dependencies between software product components, between technical components, and between applications and interfaces, and identifies the product integration verifications to be performed based on the integration sequence and relevant dependencies.

Your Skills And Experience
Educated to degree level or above
Experience of working in a dynamic, collaborative environment
Uses initiative to proactively prioritize workload
Comfortable working with everyone from junior engineering staff through to senior business stakeholders

How We'll Support You
Training and development to help you excel in your career
Coaching and support from experts in your team
A culture of continuous learning to aid progression
A range of flexible benefits that you can tailor to suit your needs

About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
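The responsibilities above include verifying Kubernetes deployments and configuring health monitoring for applications running on GCP clusters. The sketch below is a minimal illustration of that kind of post-deployment check using the official Kubernetes Python client; the namespace and deployment names are hypothetical placeholders, and a real pipeline would typically run a check like this as a step after applying manifests or a Helm release.

```python
# Minimal post-deployment readiness check (illustrative sketch).
# Assumes the official `kubernetes` Python client is installed and a valid
# kubeconfig is available; the namespace/deployment names are placeholders.
import sys
from kubernetes import client, config

NAMESPACE = "payments-dev"                           # hypothetical namespace
DEPLOYMENTS = ["api-gateway", "settlement-worker"]   # hypothetical deployments


def deployment_ready(apps_api: client.AppsV1Api, name: str, namespace: str) -> bool:
    """Return True when all desired replicas report ready."""
    dep = apps_api.read_namespaced_deployment(name=name, namespace=namespace)
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    print(f"{namespace}/{name}: {ready}/{desired} replicas ready")
    return desired > 0 and ready == desired


def main() -> int:
    config.load_kube_config()   # or config.load_incluster_config() inside a pod
    apps_api = client.AppsV1Api()
    failures = [d for d in DEPLOYMENTS if not deployment_ready(apps_api, d, NAMESPACE)]
    if failures:
        print(f"Not ready: {', '.join(failures)}", file=sys.stderr)
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

In practice the same check could gate a rollback step: if a deployment never reaches its desired replica count within a timeout, the pipeline reverts to the previous image tag.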
Posted 5 days ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

The Solutions Architect will be responsible for designing and implementing scalable, efficient, and robust systems that meet our business and technical requirements. This role requires a deep understanding of Unified Modeling Language (UML) for system modeling, proficiency in integrating modern technologies and APIs, expertise in cloud infrastructure, and the ability to drive platform development and comprehensive system design.

Primary Responsibilities
System Design & Architecture: Develop comprehensive system architectures that align with business objectives and technical requirements. Create detailed UML diagrams (class diagrams, sequence diagrams, use case diagrams, etc.) to model system components and workflows. Design scalable, secure, and maintainable architectures for new and existing applications.
UML & Flow Creation: Use UML effectively to visualize, specify, construct, and document software systems. Develop clear and comprehensive flowcharts and process diagrams to illustrate system operations and integrations.
Modern Technology & API Integrations: Integrate modern technologies and third-party APIs to enhance system functionality and performance. Design and implement RESTful and/or GraphQL APIs, ensuring seamless interaction between different system components and external services. Evaluate and recommend new technologies and tools to improve system integrations and the overall architecture.
Cloud Infrastructure: Design and manage cloud-based infrastructure on platforms such as AWS, Azure, or Google Cloud Platform. Ensure cloud solutions are optimized for performance, cost, and security. Implement infrastructure as code (IaC) using tools like Terraform, AWS CloudFormation, or similar.
Platform Development: Lead the development of platform solutions that support multiple applications and services. Ensure the platform architecture supports scalability, reliability, and ease of maintenance. Collaborate with development teams to integrate platform services into various applications.
Collaboration & Communication: Work closely with stakeholders, including product managers, developers, and business analysts, to gather requirements and translate them into technical specifications. Present architectural designs and system proposals to technical and non-technical audiences. Mentor and guide junior architects and developers in best practices and architectural standards.
Quality Assurance & Compliance: Ensure all architectural solutions comply with industry standards, security protocols, and regulatory requirements. Conduct architectural reviews and provide recommendations for improvements.
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field
10+ years of experience in software architecture or a related role
Hands-on experience with modern data technologies and API integrations
Proven experience with UML and creating detailed system flow diagrams
Experience in platform development and comprehensive system design
Knowledge of containerization and orchestration tools like Docker and Kubernetes
Understanding of DevOps practices and CI/CD pipelines
Familiarity with API design and development (RESTful, GraphQL)
Solid background in cloud infrastructure design and management (AWS, Azure, GCP)
Proficiency in UML tools such as Microsoft Visio, Lucidchart, or similar
Solid programming skills in languages such as Java, Python, C#, or JavaScript
Exceptional problem-solving and analytical skills
Solid communication and interpersonal abilities
Ability to work independently and collaboratively in a fast-paced environment
Detail-oriented, with a focus on quality and efficiency

Preferred Qualifications
AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect, or Google Professional Cloud Architect
Certified UML Professional (CUP) or similar certifications
Experience with microservices architecture
Knowledge of cybersecurity best practices
Familiarity with Agile/Scrum methodologies

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
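The responsibilities above call for designing RESTful and GraphQL APIs and integrating third-party services. As a small illustration (not part of the posting), the sketch below shows the standard GraphQL-over-HTTP pattern, a POST carrying a query document and a variables object, using the requests library; the endpoint URL, schema fields, and token are hypothetical.

```python
# Illustrative GraphQL-over-HTTP call; endpoint, schema fields, and token are hypothetical.
# Requires the `requests` package.
import os
import requests

GRAPHQL_URL = "https://api.example.com/graphql"       # placeholder endpoint
API_TOKEN = os.environ.get("EXAMPLE_API_TOKEN", "")   # placeholder credential

QUERY = """
query MemberBenefits($memberId: ID!) {
  member(id: $memberId) {
    id
    plan { name effectiveDate }
  }
}
"""


def fetch_member_benefits(member_id: str) -> dict:
    """POST a GraphQL query and return the `data` payload, raising on transport or GraphQL errors."""
    resp = requests.post(
        GRAPHQL_URL,
        json={"query": QUERY, "variables": {"memberId": member_id}},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()
    if body.get("errors"):
        raise RuntimeError(f"GraphQL errors: {body['errors']}")
    return body["data"]


if __name__ == "__main__":
    print(fetch_member_benefits("12345"))
```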
Posted 5 days ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
JOB DESCRIPTION:
The Fanatical Support for AWS team provides industry-leading Fanatical Support™ to Rackspace customers as part of a global team. Rackspace is hiring AWS Cloud Engineers to deliver Fanatical Support with Amazon Web Services. Fanatical Support for AWS includes a wide range of services and features to help customers make the most out of their chosen hosting strategy. Using your deep technical expertise, you will help customers optimize their workloads by providing application-focused assistance to build, deploy, integrate, scale and heal using native AWS and third-party tool-chains and automation-oriented agile principles. Through both hands-on and consultative approaches, you will be responsible for supporting customers with tasks including provisioning and modifying cloud environments, performing upgrades, and addressing day-to-day customer deployment issues via phone and ticket. At Rackspace we pride ourselves on our ability to deliver a fanatical experience: our support team blends technical expertise with strong customer-oriented professional skills.

Being successful in this role requires:
Working knowledge of Amazon Web Services products and services, relational and NoSQL databases, caching, object and block storage, scaling, load balancing, CDNs, Terraform, networking, etc.
Excellent working knowledge of Windows or Linux operating systems, with experience supporting and troubleshooting issues and performance
Intermediate understanding of central networking concepts: VLANs, layer 2/3 routing, access lists and load balancing
Good understanding of the design of native cloud applications, and of cloud application design patterns and practices
Hands-on knowledge of CloudFormation and/or Terraform

JOB REQUIREMENTS:
Key Accountabilities
Build, operate and support AWS Cloud environments
Assist customers in the configuration of backup, patching and monitoring of servers and services
Build customer solutions, leveraging automation and delivery mechanisms for efficiency and scalability
Respond to customer support requests via tickets and phone calls within response-time SLAs
Manage and triage the ticket queue, escalating to senior engineers when required
Troubleshoot performance degradation or loss of service as time-critical incidents as needed
Drive strong customer satisfaction (NPS) through Fanatical Support
Take ownership of issues, including collaboration with other teams and escalation
Support the success and development of others in the team

Key Performance Indicators:
Customer satisfaction scores (NPS)
Speed to online: meeting required delivery times
Performance indicators: ticket queues, response times
Quality indicators: peer review, customer feedback

PERSON SPECIFICATION:
Technical achiever with a strong work ethic; creative, collaborative, a team player
A strong background in AWS and/or demonstrable hosting-specific technical skills:
Compute and Networking
Storage and Content Delivery
Database
Administration and Security
Deployment and Management
Application Services
Analytics
Mobile Services
CloudFormation/Terraform
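One of the accountabilities above is assisting customers with backup configuration for their AWS servers. The sketch below is illustrative rather than part of the posting: it creates tagged EBS snapshots for an instance's attached volumes with boto3. The region, instance ID, and tag values are placeholders, and production backup is more commonly handled by AWS Backup or Data Lifecycle Manager policies than by ad-hoc scripts.

```python
# Illustrative backup helper: snapshot every EBS volume attached to an instance.
# Region, instance ID, and tags are placeholders; requires boto3 and AWS credentials.
import boto3

REGION = "us-east-1"                  # placeholder
INSTANCE_ID = "i-0123456789abcdef0"   # placeholder


def snapshot_instance_volumes(instance_id: str, region: str) -> list[str]:
    """Create a snapshot for each EBS volume attached to the instance and return the snapshot IDs."""
    ec2 = boto3.client("ec2", region_name=region)
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]
    snapshot_ids = []
    for vol in volumes:
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"Backup of {vol['VolumeId']} from {instance_id}",
            TagSpecifications=[{
                "ResourceType": "snapshot",
                "Tags": [{"Key": "CreatedBy", "Value": "backup-script"}],
            }],
        )
        snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids


if __name__ == "__main__":
    print(snapshot_instance_volumes(INSTANCE_ID, REGION))
```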
Posted 5 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position Overview
Job Title: Cloud Engineer, AS
Location: Pune, India

Role Description
A Google Cloud Platform (GCP) Engineer is responsible for designing, implementing, and managing cloud infrastructure and services on Google Cloud. The Platform Engineering Team is responsible for building and maintaining the foundational infrastructure, tooling, and automation that enable efficient, secure, and scalable software development and deployment. The team focuses on creating a self-service platform for developers and operational teams, ensuring reliability, security, and compliance while improving developer productivity. In this role you will: design and manage scalable, secure, and cost-effective cloud infrastructure (GCP, AWS, Azure); implement Infrastructure as Code (IaC) using Terraform; implement security best practices for IAM, networking, encryption, and secrets management; ensure regulatory compliance (SOC 2, ISO 27001, PCI-DSS) by automating security checks; manage API gateways, service meshes, and secure service-to-service communication; and enable efficient workload orchestration using Kubernetes and serverless technologies.

What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best in class leave policy
Gender neutral parental leave
100% reimbursement under the childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive hospitalization insurance for you and your dependents
Accident and term life insurance
Complementary health screening for those aged 35 and above

Your Key Responsibilities
Cloud Infrastructure Management – design, deploy, and manage scalable, secure, and cost-effective cloud environments on GCP.
Automation & Scripting – develop Infrastructure as Code (IaC) using Terraform, Deployment Manager, or other tools.
Security & Compliance – implement security best practices and IAM policies, and ensure compliance with organizational and regulatory standards.
Networking & Connectivity – configure and manage VPCs, subnets, firewalls, VPNs, and interconnects for secure cloud networking.
CI/CD & DevOps – set up CI/CD pipelines using Cloud Build, Jenkins, GitHub Actions, or similar tools for automated deployments.
Monitoring & Logging – implement monitoring and alerting using Stackdriver (Cloud Operations), Prometheus, or third-party tools.
Cost Optimization – analyze and optimize cloud spending by leveraging committed use discounts, autoscaling, and right-sizing of resources.
Disaster Recovery & Backup – design backup, high availability, and disaster recovery strategies using Cloud Storage, snapshots, and multi-region deployments.
Database Management – deploy and manage GCP databases such as Cloud SQL, BigQuery, Firestore, and Spanner.
Containerization & Kubernetes – deploy and manage containerized applications using GKE (Google Kubernetes Engine) and Cloud Run.

Your Skills And Experience
Strong experience with GCP services such as Compute Engine, Cloud Storage, IAM, networking, Kubernetes, and serverless technologies.
Proficiency in scripting (Python, Bash) and Infrastructure as Code (Terraform, CloudFormation).
Knowledge of DevOps practices, CI/CD tools, and GitOps workflows.
Understanding of security, IAM, networking, and compliance in cloud environments.
Experience with monitoring tools such as Stackdriver, Prometheus, or Datadog.
Strong problem-solving skills and the ability to troubleshoot cloud-based infrastructure.
Google Cloud certifications (e.g., Associate Cloud Engineer, Professional Cloud Architect, or Professional DevOps Engineer) are a plus. How We’ll Support You Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs. About Us And Our Teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
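The responsibilities in this posting include implementing monitoring and alerting with Stackdriver (Cloud Operations) or Prometheus. As a minimal, illustrative sketch (not from the posting), the snippet below exposes a custom health gauge with the prometheus_client library so a Prometheus server, or a managed Prometheus service, could scrape it; the metric names, port, and probed endpoint are assumptions.

```python
# Minimal Prometheus exporter sketch; metric names, port, and target URL are placeholders.
# Requires the `prometheus_client` and `requests` packages.
import time
import requests
from prometheus_client import Gauge, start_http_server

TARGET_URL = "https://app.example.internal/healthz"   # hypothetical health endpoint
SCRAPE_PORT = 9100                                    # port Prometheus will scrape

app_up = Gauge("app_up", "1 if the application health endpoint responds OK, else 0")
app_latency = Gauge("app_health_latency_seconds", "Latency of the last health probe in seconds")


def probe() -> None:
    """Probe the health endpoint once and update the exported metrics."""
    start = time.time()
    try:
        ok = requests.get(TARGET_URL, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    app_latency.set(time.time() - start)
    app_up.set(1 if ok else 0)


if __name__ == "__main__":
    start_http_server(SCRAPE_PORT)   # exposes /metrics for Prometheus to scrape
    while True:
        probe()
        time.sleep(30)
```

An alerting rule in Prometheus (or a Cloud Monitoring policy) would then page when app_up stays at 0 or the latency gauge exceeds an agreed threshold.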
Posted 5 days ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Requirements
Bachelor's degree in computer science or equivalent
3 to 5 years of experience with the AWS cloud
3 to 5 years of experience architecting and implementing fully automated, secure, reliable, scalable and resilient multi-cloud/hybrid-cloud solutions
History of developing scripts to automate infrastructure tasks
Seasoned Infrastructure as Code developer (Terraform is strongly preferred)
Experience with Identity and Access Management
Practical experience with version control systems (Git is preferred)
Production-level experience with containerisation (Docker) and orchestration (Kubernetes)
Good scripting skills (e.g. Bash, Python)
Strong written and verbal communication skills
Able to thrive in a collaborative and cross-functional environment
Familiar with Agile methodology concepts
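This posting emphasizes Infrastructure as Code with Terraform and scripting to automate infrastructure tasks. The sketch below is an illustrative (not posting-supplied) Python wrapper around the standard Terraform CLI workflow of init, plan with a saved plan file, then apply of that plan; the working directory and variable file are placeholders, and real pipelines usually add remote state, policy checks, and approval gates.

```python
# Illustrative Terraform automation wrapper; directory and var-file are placeholders.
# Requires the `terraform` binary on PATH and valid cloud credentials.
import subprocess
import sys

WORKDIR = "./envs/dev"    # hypothetical Terraform root module
VAR_FILE = "dev.tfvars"   # hypothetical variable file


def run(args: list[str]) -> None:
    """Run a Terraform command in WORKDIR, echoing the invocation and failing fast on errors."""
    print("+ terraform " + " ".join(args))
    subprocess.run(["terraform", *args], cwd=WORKDIR, check=True)


def main() -> int:
    run(["init", "-input=false"])
    run(["plan", "-input=false", f"-var-file={VAR_FILE}", "-out=tfplan"])
    # Applying the saved plan guarantees that exactly what was reviewed is what gets applied.
    run(["apply", "-input=false", "tfplan"])
    return 0


if __name__ == "__main__":
    try:
        sys.exit(main())
    except subprocess.CalledProcessError as exc:
        print(f"Terraform step failed with exit code {exc.returncode}", file=sys.stderr)
        sys.exit(exc.returncode)
```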
Posted 5 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description:
About the Company: Join AT&T and reimagine the communications and technologies that connect the world. Our Chief Security Office ensures that our assets are safeguarded through truthful transparency, enforced accountability and mastery of cybersecurity to stay ahead of threats. Bring your bold ideas and fearless risk-taking to redefine connectivity and transform how the world shares stories and experiences that matter. When you step into a career with AT&T, you won't just imagine the future, you'll create it.

About the Job: We are seeking a highly experienced Senior Specialist in Digital Certificate Management Operations to join our Cybersecurity team. The ideal candidate will have deep functional and operational expertise in Public Key Infrastructure (PKI), cryptography, and certificate lifecycle management to ensure the secure issuance, renewal, revocation, and overall management of digital certificates across the enterprise. This role will collaborate with developers, network engineers, and security teams to maintain a robust and compliant certificate ecosystem that supports secure communications and data protection. This role requires hands-on experience with the relevant tools and compliance frameworks.

Experience Level: 8+ years.
Location: Hyderabad / Bengaluru

Responsibilities Include:
Manage certificate lifecycle operations including issuance, renewal, revocation, and cross-certification within complex CA hierarchies.
Enforce cryptographic key management policies including key generation, escrow, rotation, and destruction.
Monitor certificate status and proactively address expirations to prevent service disruptions.
Troubleshoot and resolve certificate-related issues across multiple platforms and applications.
Automate certificate management processes using scripting languages and certificate management tools.
Maintain accurate documentation of certificate inventories, configurations, and operational procedures.
Collaborate closely with developers, security teams, network administrators, and other stakeholders to ensure secure and compliant certificate deployments.
Ensure compliance with PKI best practices, industry standards, and regulatory requirements.
Establish monitoring and alerting mechanisms for certificate expiration and operational health.
Participate in periodic reviews and checks, and respond to certificate-management-related queries.
Stay current with emerging trends, threats, and technologies in digital certificate management.
Support incident response efforts related to certificate compromise or misuse.
Lead PKI-related operations, mentor junior team members, and facilitate cross-team collaboration with security, DevOps, and infrastructure groups.
Produce comprehensive documentation and communicate complex technical concepts clearly to diverse stakeholders.
Provide training and support to internal teams on certificate best practices.
Attention to detail is crucial.
Should be flexible to provide coverage in US morning hours.
Should be flexible with shifts and with supporting on weekends.

Required skills:
Overall, at least 8+ years of experience in performing Digital Certificate Management Operations, including:

Core PKI & Security Skills
Advanced understanding of X.509 certificates, CRLs, OCSP, and complex CA hierarchies (root, intermediate, issuing).
Expertise in certificate lifecycle management at scale, cross-certification, and trust model architectures.
Strong cryptographic knowledge including symmetric/asymmetric encryption, digital signatures, and hashing algorithms.
Proven experience with key management policies covering generation, escrow, rotation, and secure destruction.
Demonstrated ability to lead complex PKI operations and guide junior team members.
Excellent collaboration skills working with security, DevOps, infrastructure, and application teams.
Ability to operationalize secure PKI systems integrated with IAM, SSO and MFA, and compliant with standards such as NIST, FIPS 140-2, and ISO 27001.
In-depth knowledge of networking protocols relevant to certificate distribution and validation: SSH, TLS/SSL, HTTPS, S/MIME, IPsec, VPNs, DNS, LDAP, HTTP.
Proven experience leveraging automation for certificate lifecycle management using scripting tools like PowerShell and Python.

Tools & Technologies:
Hands-on experience with OpenSSL, Keytool, and Certutil.
Familiarity with Microsoft AD CS, KeyFactor, Venafi, HashiCorp Vault, and EJBCA.
Experience managing Hardware Security Modules (HSMs) such as Thales and SafeNet.
ACME protocol for automated certificate lifecycle management.

Monitoring, Logging and Compliance:
Lead and operationalize certificate expiration monitoring and alerting systems to prevent outages.
Maintain thorough logging and auditing of all certificate operations for security and compliance purposes.
Proven ability to troubleshoot complex certificate-related issues across diverse platforms.
Strong documentation skills to support audit readiness and operational transparency.

Automation
Python with libraries like cryptography, pyOpenSSL, requests, and subprocess for PKI automation and API integration.
PowerShell for Windows PKI environments (e.g., AD CS).
Bash scripting for Linux-based PKI tools and OpenSSL automation.
Java for working with PKI tools such as EJBCA and integrations like HashiCorp Vault.
Other automation tools: Ansible, Terraform, and CI/CD systems (GitHub Actions, Jenkins).
RESTful API integrations for DigiCert, HashiCorp Vault, and ACME protocol platforms.

Desirable skills:
Bachelor's or Master's degree in computer science, mathematics, information systems, engineering, or cybersecurity.
Industry certifications such as CEH, CISSP, SANS and/or other relevant certifications.
Ability to prioritize individual and group work in a high-stress, time-bound environment.
Excellent communication, problem-solving, and analytical skills.
Ability to work independently and as part of a team.

Additional information (if any):
Should be flexible to provide coverage in US morning hours.
Should be flexible with shifts and with supporting on weekends.
#Cybersecurity

Weekly Hours: 40
Time Type: Regular
Location: IND:AP:Hyderabad / Argus Bldg 4f & 5f, Sattva, Knowledge City- Adm: Argus Building, Sattva, Knowledge City

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
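A recurring theme in this role is monitoring certificate status and alerting on upcoming expirations before they cause outages. The snippet below is a minimal, illustrative expiry check using only the Python standard library (ssl and socket); the hostnames and the 30-day threshold are assumptions, and an enterprise deployment would more likely pull inventory from a CLM platform such as Venafi or Keyfactor than probe hosts directly.

```python
# Minimal TLS certificate expiry check using only the standard library.
# Hostnames and the warning threshold are placeholders for illustration.
import socket
import ssl
from datetime import datetime, timezone

HOSTS = ["example.com", "internal-app.example.net"]   # hypothetical endpoints
WARN_DAYS = 30                                        # assumed alert threshold, in days


def days_until_expiry(host: str, port: int = 443) -> float:
    """Return the number of days until the server certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    not_after = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (not_after - datetime.now(timezone.utc)).total_seconds() / 86400


if __name__ == "__main__":
    for host in HOSTS:
        remaining = days_until_expiry(host)
        status = "WARN" if remaining < WARN_DAYS else "OK"
        print(f"{status}: {host} expires in {remaining:.1f} days")
```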
Posted 5 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description:
Role: Manager – Cybersecurity – Certificate Management Operations

About the Company: Join AT&T and reimagine the communications and technologies that connect the world. Our Chief Security Office ensures that our assets are safeguarded through truthful transparency, enforced accountability and mastery of cybersecurity to stay ahead of threats. Bring your bold ideas and fearless risk-taking to redefine connectivity and transform how the world shares stories and experiences that matter. When you step into a career with AT&T, you won't just imagine the future, you'll create it.

About the Job: We are seeking an experienced Cybersecurity Manager in Digital Certificate Management Operations to join our Cybersecurity team. The individual in this role will be responsible for leading and managing a team of 10+ experienced cybersecurity professionals in AT&T India, and will lead, manage and work with team members on Digital Certificate Management operations. The ideal candidate will have deep functional and operational expertise in Public Key Infrastructure (PKI), cryptography, and certificate lifecycle management to ensure the secure issuance, renewal, revocation, and overall management of digital certificates across the enterprise. This role will collaborate with developers, network engineers, and security teams to maintain a robust and compliant certificate ecosystem that supports secure communications and data protection. This role requires hands-on experience with the relevant tools and compliance frameworks.

Experience Level: 10+ years.
Location: Hyderabad / Bengaluru

Responsibilities Include:
Functionally lead and manage a team of 10+ experienced professionals in AT&T India.
Manage PKI-related operations, mentor team members, and facilitate cross-team collaboration with security, DevOps, and infrastructure groups.
Manage certificate lifecycle operations including issuance, renewal, revocation, and cross-certification within complex CA hierarchies.
Enforce cryptographic key management policies including key generation, escrow, rotation, and destruction.
Monitor certificate status and proactively address expirations to prevent service disruptions.
Troubleshoot and resolve certificate-related issues across multiple platforms and applications.
Automate certificate management processes using scripting languages and certificate management tools.
Maintain accurate documentation of certificate inventories, configurations, and operational procedures.
Collaborate closely with developers, security teams, network administrators, and other stakeholders to ensure secure and compliant certificate deployments.
Ensure compliance with PKI best practices, industry standards, and regulatory requirements.
Establish monitoring and alerting mechanisms for certificate expiration and operational health.
Participate in periodic reviews and checks, and respond to certificate-management-related queries.
Stay current with emerging trends, threats, and technologies in digital certificate management.
Lead incident response efforts related to certificate compromise or misuse.
Produce comprehensive documentation and communicate complex technical concepts clearly to diverse stakeholders.
Provide training and support to internal teams on certificate best practices.
Attention to detail and a sense of urgency are crucial.
Collaborate with leadership teams, providing subject matter expertise and insights.
Support and guide team members in providing high-quality services and deliverables.
Support, guide and mentor team members in technical and functional matters.
Should be flexible to provide coverage in US morning hours.
Should be flexible with shifts and with supporting on weekends.

Required skills:
Overall, at least 10+ years of experience in performing Digital Certificate Management Operations, including:

Core PKI & Security Skills
Advanced understanding of X.509 certificates, CRLs, OCSP, and complex CA hierarchies (root, intermediate, issuing).
Expertise in certificate lifecycle management at scale, cross-certification, and trust model architectures.
Strong cryptographic knowledge including symmetric/asymmetric encryption, digital signatures, and hashing algorithms.
Proven experience with key management policies covering generation, escrow, rotation, and secure destruction.
Demonstrated ability to lead complex PKI operations and guide junior team members.
Excellent collaboration skills working with security, DevOps, infrastructure, and application teams.
Ability to operationalize secure PKI systems integrated with IAM, SSO and MFA, and compliant with standards such as NIST, FIPS 140-2, and ISO 27001.
In-depth knowledge of networking protocols relevant to certificate distribution and validation: SSH, TLS/SSL, HTTPS, S/MIME, IPsec, VPNs, DNS, LDAP, HTTP.
Proven experience leveraging automation for certificate lifecycle management using scripting tools like PowerShell and Python.

Tools & Technologies:
Hands-on experience with OpenSSL, Keytool, and Certutil.
Familiarity with Microsoft AD CS, KeyFactor, Venafi, HashiCorp Vault, and EJBCA.
Experience managing Hardware Security Modules (HSMs) such as Thales and SafeNet.
ACME protocol for automated certificate lifecycle management.

Monitoring, Logging and Compliance:
Lead and operationalize certificate expiration monitoring and alerting systems to prevent outages.
Maintain thorough logging and auditing of all certificate operations for security and compliance purposes.
Proven ability to troubleshoot complex certificate-related issues across diverse platforms.
Strong documentation skills to support audit readiness and operational transparency.

Automation
Python with libraries like cryptography, pyOpenSSL, requests, and subprocess for PKI automation and API integration.
PowerShell for Windows PKI environments (e.g., AD CS).
Bash scripting for Linux-based PKI tools and OpenSSL automation.
Java for working with PKI tools such as EJBCA and integrations like HashiCorp Vault.
Other automation tools: Ansible, Terraform, and CI/CD systems (GitHub Actions, Jenkins).
RESTful API integrations for DigiCert, HashiCorp Vault, and ACME protocol platforms.

Desirable skills:
Bachelor's or Master's degree in computer science, mathematics, information systems, engineering, or cybersecurity.
Industry certifications such as CEH, CISSP, SANS and/or other relevant certifications.
Ability to prioritize individual and group work in a high-stress, time-bound environment.
Excellent communication, problem-solving, and analytical skills.
Ability to work independently and as part of a team.
Additional information (if any): Should be flexible to provide coverage in US morning hours Should be flexible with shifts and supporting on weekends Weekly Hours: 40 Time Type: Regular Location: IND:AP:Hyderabad / Argus Bldg 4f & 5f, Sattva, Knowledge City- Adm: Argus Building, Sattva, Knowledge City It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
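The key-management responsibilities in this posting cover key generation and rotation alongside certificate issuance. As a small illustration (assumptions, not posting content), the sketch below uses the cryptography package to generate a new RSA key and a certificate signing request that could be submitted to an issuing CA during a rotation; the subject fields and file names are placeholders, and in production the private key would normally be generated and held inside an HSM or vault rather than written to disk.

```python
# Illustrative key + CSR generation with the `cryptography` package.
# Subject fields and output paths are placeholders; real keys belong in an HSM or vault.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate a fresh key pair for the rotation.
key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

# Build a CSR carrying the subject and SAN the issuing CA should certify.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "app.example.internal"),   # placeholder CN
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Org"),      # placeholder O
    ]))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("app.example.internal")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

with open("app.key.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),   # illustration only; encrypt or keep in an HSM in practice
    ))
with open("app.csr.pem", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))

print("CSR ready for submission to the issuing CA")
```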
Posted 5 days ago
0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Position Type: Full time
Type Of Hire: Experienced (relevant combination of work and education)
Education Desired: Bachelor of Computer Engineering
Travel Percentage: 0%

Are you curious, motivated, and forward-thinking? At FIS you'll have the opportunity to work on some of the most challenging and relevant issues in financial services and technology. Our talented people empower us, and we believe in being part of a team that is open, collaborative, entrepreneurial, passionate and, above all, fun.

Pune (two days in-office, three days virtual)

What You Will Be Doing
The Site Reliability Engineer will play a critical role in driving innovation and growth for the Banking Solutions, Payments and Capital Markets business. In this role, the candidate will have the opportunity to make a lasting impact on the company's transformation journey, drive customer-centric innovation and automation, and position the organization as a leader in the competitive banking, payments and investment landscape. Specifically, the Site Reliability Engineer will be responsible for the following:
Design and maintain monitoring solutions for infrastructure, application performance, and user experience
Implement automation tools to streamline tasks, scale infrastructure, and ensure seamless deployments
Ensure application reliability, availability, and performance, minimizing downtime and optimizing response times
Lead incident response, including identification, triage, resolution, and post-incident analysis
Conduct capacity planning, performance tuning, and resource optimization
Collaborate with security teams to implement best practices and ensure compliance
Manage deployment pipelines and configuration management for consistent and reliable application deployments
Develop and test disaster recovery plans and backup strategies
Collaborate with development, QA, DevOps, and product teams to align on reliability goals and incident response processes
Participate in on-call rotations and provide 24/7 support for critical incidents

What You Bring
Proficiency in development technologies, architectures, and platforms (web, API)
Experience with cloud platforms (AWS, Azure, Google Cloud) and IaC tools
Knowledge of monitoring tools (Prometheus, Grafana, DataDog) and logging frameworks (Splunk, ELK Stack)
Experience in incident management and post-mortem reviews
Strong troubleshooting skills for complex technical issues
Proficiency in scripting languages (Python, Bash) and automation tools (Terraform, Ansible)
Experience with CI/CD pipelines (Jenkins, GitLab CI/CD, Azure DevOps)
An ownership approach to engineering and product outcomes
Excellent interpersonal communication, negotiation, and influencing skills

What We Offer You
A work environment built on collaboration, flexibility and respect
A competitive salary and an attractive range of benefits designed to help support your lifestyle and wellbeing
Varied and challenging work to help you grow your technical skillset

Privacy Statement
FIS is committed to protecting the privacy and security of all personal information that we process in order to provide services to our clients. For specific information on how FIS protects personal information online, please see the Online Privacy Notice.

Sourcing Model
Recruitment at FIS works primarily on a direct sourcing model; a relatively small portion of our hiring is through recruitment agencies.
FIS does not accept resumes from recruitment agencies which are not on the preferred supplier list and is not responsible for any related fees for resumes submitted to job postings, our employees, or any other part of our company. #pridepass
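The SRE responsibilities above center on reliability, availability, and minimizing downtime. One common way to make that concrete is an error budget derived from an SLO; the short sketch below, an illustration with assumed numbers rather than FIS policy, computes the budget for a 99.9% availability target over a 30-day window and how much of it a given amount of downtime consumes.

```python
# Illustrative SLO error-budget calculation; the SLO target and observed downtime are assumptions.
from datetime import timedelta

SLO_TARGET = 0.999                            # assumed 99.9% availability objective
WINDOW = timedelta(days=30)                   # assumed rolling window
OBSERVED_DOWNTIME = timedelta(minutes=12)     # assumed downtime measured by monitoring


def error_budget_report(slo: float, window: timedelta, downtime: timedelta) -> str:
    """Summarize how much of the error budget the observed downtime has consumed."""
    budget = window * (1 - slo)                 # total allowed downtime in the window
    consumed = downtime / budget                # fraction of the budget already spent
    remaining = budget - downtime
    return (
        f"Error budget: {budget.total_seconds() / 60:.1f} min; "
        f"consumed: {consumed:.0%}; "
        f"remaining: {remaining.total_seconds() / 60:.1f} min"
    )


if __name__ == "__main__":
    # With a 99.9% SLO over 30 days the budget is about 43.2 minutes, so 12 minutes of
    # downtime consumes roughly 28% of it.
    print(error_budget_report(SLO_TARGET, WINDOW, OBSERVED_DOWNTIME))
```

Teams typically alert on the burn rate of this budget rather than on individual failures, which keeps paging aligned with the availability target.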
Posted 5 days ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EY, we're all in to shape your future with confidence. We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Designation: DevSecOps Architect

Job Description
Architect, deploy, and manage complex services on one or more major cloud platforms (AWS, Azure, GCP), optimizing cloud resources for performance and cost-efficiency.
Design, implement, and manage scalable and resilient cloud infrastructure using Terraform, developing reusable infrastructure modules and ensuring consistent deployment across environments.
Bring strong hands-on experience in deploying, configuring, and troubleshooting Kubernetes environments (on-premise and cloud), and in deploying and scaling microservices applications on Kubernetes clusters.
Design, build, and maintain robust CI/CD pipelines using tools like Jenkins, GitLab CI, CircleCI, Argo CD, or Azure DevOps, and monitor and troubleshoot CI/CD pipeline issues to ensure high availability and reliability.
Deploy, manage, and scale applications using Kubernetes, and create and maintain Helm charts for application deployment.
Manage the container lifecycle, including image builds, deployments, and scaling.
Write, test, and maintain scripts in Python, Bash, or PowerShell for extensive automation tasks, and develop automation scripts to streamline operational workflows and reduce manual intervention.
Implement monitoring solutions using tools like Prometheus, Grafana, or cloud-native services, and set up logging and alerting to proactively identify and resolve issues.
Collaborate with cross-functional teams including development, QA, and operations to improve the development and deployment processes.
Implement and manage security best practices for cloud infrastructure and applications, embedding DevSecOps principles.
Bring strong hands-on experience with configuration management tools (Ansible, Puppet, Chef, etc.).
Lead and architect large-scale cloud migration initiatives, including re-platforming and re-architecting applications.
Design and execute strategies for upgrading existing infrastructure, platforms, and applications with minimal downtime and risk.
Contribute significantly to Requests for Proposals (RFPs) and Requests for Information (RFIs), articulating technical solutions and value propositions.
Provide expert guidance and solutioning, leading technical discussions with various stakeholders, business groups, and senior leadership.
Mentor and lead a team of DevOps engineers, fostering their technical growth and ensuring adherence to architectural standards.
Demonstrate strong written and verbal communication skills for effective stakeholder management and team leadership.

Desired Profile
Seeking a DevOps Manager/Architect with 10+ years of hands-on cloud and DevOps experience, including significant leadership. Requires a Bachelor's or Master's degree in Computer Science. Must have expert proficiency in Terraform and extensive experience across at least two major cloud platforms (AWS, Azure, GCP). Strong hands-on experience with Kubernetes, Helm charts, and designing/optimizing CI/CD pipelines (e.g., Jenkins, GitLab CI) is essential. Proficiency in Python and scripting (Bash/PowerShell) is also a must. Valued experience includes leading cloud migrations, contributing to RFP/RFI processes, and mentoring teams. Excellent problem-solving, communication, and collaboration skills are critical. Experience with configuration management (Ansible, Puppet) and DevSecOps principles is required; OpenShift is a plus.

Experience: 10 years and above
Education: B.Tech. / BS in Computer Science
Technical Skills & Certifications: Certifications in cloud platforms (e.g., AWS Certified Solutions Architect, Azure Administrator, Google Professional Cloud Architect); Terraform, Kubernetes, Python, CI/CD, Ansible, security tools, monitoring tools.

EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
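The role above calls for embedding DevSecOps principles into CI/CD pipelines. A common pattern is a pipeline gate that fails the build when an image or dependency scan reports too many high-severity findings. The sketch below is illustrative only: the JSON report format, file name, and thresholds are assumptions (real scanners such as Trivy or Grype each have their own output schema), but the gating logic is the part that matters.

```python
# Illustrative CI security gate; the report format and thresholds are assumptions.
import json
import sys
from collections import Counter

REPORT_PATH = "scan-report.json"            # hypothetical scanner output file
MAX_ALLOWED = {"CRITICAL": 0, "HIGH": 3}    # assumed policy thresholds


def load_findings(path: str) -> list[dict]:
    """Load a flat list of findings; assumes each entry carries a 'severity' field."""
    with open(path, encoding="utf-8") as f:
        return json.load(f).get("findings", [])


def main() -> int:
    counts = Counter(f.get("severity", "UNKNOWN").upper() for f in load_findings(REPORT_PATH))
    violations = {
        sev: counts[sev] for sev, limit in MAX_ALLOWED.items() if counts.get(sev, 0) > limit
    }
    print("Severity counts:", dict(counts))
    if violations:
        print(f"Security gate failed, over threshold: {violations}", file=sys.stderr)
        return 1   # non-zero exit fails the CI job
    print("Security gate passed")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```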
Posted 5 days ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Cloud DevOps Integration Specialist - Senior Job Summary: We are seeking a Manager with up to 8 years of experience in DevOps practices and integration. The ideal candidate will have a strong technical background in CI/CD pipelines, automation, and cloud technologies, along with the ability to lead teams in implementing DevOps solutions that enhance software delivery processes. Key Responsibilities: Design, implement, and manage DevOps practices and integration solutions, ensuring optimal performance and efficiency. Collaborate with development and operations teams to understand their requirements and develop tailored DevOps strategies. Lead and mentor DevOps teams, providing guidance and support in the deployment of automation and integration solutions. Stay informed about trends in DevOps technologies and practices to drive service improvements and strategic decisions. Provide technical insights for proposals and engage in client discussions to support business development efforts. Qualifications: Up to 8 years of experience in DevOps practices and integration, with a focus on CI/CD and automation. Proven experience in implementing DevOps solutions in cloud environments (AWS, Azure, GCP). Strong communication and client engagement skills, with the ability to translate technical requirements into effective DevOps strategies. Bachelor's degree in Information Technology, Computer Science, or a related field. Relevant certifications in DevOps practices are highly desirable. Preferred Skills: Proficiency in automation tools (Jenkins, Ansible, Terraform) and cloud platforms. Knowledge of security practices and compliance standards related to DevOps. Strong analytical and problem-solving skills, with the ability to work effectively in a team environment. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 5 days ago
6.0 years
0 Lacs
India
Remote
Senior Developer - Experience: 6-10 years
Technical Lead - Experience: 8+ years
Architect - Experience: 10+ years
Location: Remote

Key Responsibilities:
Lead the design and implementation of cloud-native microservices using Python FastAPI, Pydantic, and Async I/O.
Architect, build, and optimize APIs, worker services, and event-driven systems leveraging Confluent Kafka.
Define and enforce coding standards, testing strategies, and development best practices.
Implement CI/CD pipelines using GitHub Actions or other tools and manage secure and scalable deployments.
Work with Docker, Terraform, and GCP infrastructure services including Cloud Run, Pub/Sub, Secret Manager, Artifact Registry, and Eventarc.
Guide the integration of monitoring and observability tools such as New Relic, Cloud Logging, and Cloud Monitoring.
Drive initiatives around performance tuning, caching (Redis), and data transformation, including XSLT and XML/XSD processing.
Support version control and code collaboration using Git/GitHub.
Mentor team members, conduct code reviews, and ensure quality through unit testing frameworks like Pytest or unittest.
Collaborate with stakeholders to translate business needs into scalable and maintainable solutions.

Mandatory Skills:
Programming & Frameworks: Expert in Python and experienced with FastAPI or equivalent web frameworks. Strong knowledge of Async I/O and Pydantic settings. Hands-on with Pytest or unittest. Experience with Docker, Terraform, and Kafka (Confluent).
Version Control & DevOps: Experience with a version control system and proven CI/CD pipeline implementation experience.
Cloud & Infrastructure: Hands-on experience with any major cloud provider.
Data Processing: Knowledge of XSLT transformations and XML/XSD processing.
Monitoring & Observability: Familiarity with integrating monitoring/logging solutions; New Relic preferred.
Databases & Storage: Experience with any database/storage solution and an understanding of caching mechanisms.
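Since the stack above centers on FastAPI, Pydantic, and Async I/O, here is a minimal illustrative service skeleton; the model fields, route path, and currency check are assumptions rather than anything taken from the posting. In a fuller version the handler would publish the validated event to Kafka or Pub/Sub instead of echoing an acknowledgement.

```python
# Minimal FastAPI + Pydantic sketch; model fields and routes are illustrative placeholders.
# Assuming this file is saved as app.py, run with: uvicorn app:app --reload
# Requires fastapi, uvicorn, and pydantic.
from datetime import datetime, timezone

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI(title="orders-service (illustrative)")


class OrderEvent(BaseModel):
    order_id: str = Field(min_length=1)
    amount: float = Field(gt=0)
    currency: str = "USD"


class Ack(BaseModel):
    order_id: str
    accepted_at: datetime


@app.post("/orders", response_model=Ack)
async def accept_order(event: OrderEvent) -> Ack:
    """Validate the incoming event; a real handler would publish it to Kafka or Pub/Sub here."""
    if event.currency not in {"USD", "EUR", "INR"}:
        raise HTTPException(status_code=422, detail="unsupported currency")
    return Ack(order_id=event.order_id, accepted_at=datetime.now(timezone.utc))


@app.get("/healthz")
async def healthz() -> dict:
    """Liveness endpoint for Cloud Run or Kubernetes probes."""
    return {"status": "ok"}
```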
Posted 5 days ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
We are currently in the market for an experienced Senior Oracle Cloud Infrastructure (OCI) Consultant to join our growing Managed Services Practice. You will have the opportunity to work with the latest cloud technology and on projects across a multiplicity of sectors and industries.

Qualifications
You will have experience in the following areas:
Required:
Oracle Cloud Infrastructure 2023 Architect Associate certification or higher
OCI support, including user management, IAM policies, and AD SSO integrations
Experience with IPSec VPNs, network routing, and security group issues in OCI
Cloud Guard experience and alert resolution
DevOps skills, particularly Terraform scripting
Basic Linux administration skills (training provided for less experienced candidates)
Desirable:
Oracle Linux Certification and ITIL Foundation
Advanced DevOps skills (Terraform for full infrastructure builds, Ansible for Linux configurations)
FastConnect configuration experience
Oracle Database knowledge (basic experience with TNS connections and database links)

Responsibilities
Scope of Work:
Build and deploy Docker images
Manage Helm charts and deployments across all three environments: sandbox, staging, and production
Update LLD/HLD documentation (infrastructure-related only)
Maintain and update CIQ documents
Provide support for issues with observability and monitoring of the system

Required Technical Skills for TA:
Cloud/Containerization: Kubernetes, Docker
DevOps Tools: Jenkins, Git, CI/CD workflows
Scripting Languages: Shell scripting
Package Management: Helm (Kubernetes)
Monitoring Tools: Prometheus, Grafana
Operating Systems: Linux / Oracle Linux

Qualifications
Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
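The scope of work above starts with building and deploying Docker images across the sandbox, staging, and production environments. As a small illustration (not Oracle-provided), the sketch below uses the Docker SDK for Python to build an image and push it to a registry; the registry URL, repository name, and tag are placeholders, and in OCI the target would typically be the OCI Container Registry with credentials handled by the CI system.

```python
# Illustrative image build-and-push using the Docker SDK for Python (`docker` package).
# Registry, repository, and tag are placeholders; assumes a local Docker daemon and prior login.
import docker

REGISTRY = "phx.ocir.io/mytenancy"          # hypothetical registry namespace
REPOSITORY = f"{REGISTRY}/sample-service"   # hypothetical repository
TAG = "1.0.0"                               # hypothetical version tag


def build_and_push(context_dir: str = ".") -> str:
    client = docker.from_env()

    # Build the image from the Dockerfile in the given build context.
    image, build_logs = client.images.build(path=context_dir, tag=f"{REPOSITORY}:{TAG}")
    for entry in build_logs:
        if "stream" in entry:
            print(entry["stream"], end="")

    # Push the tagged image; credentials come from `docker login` or the CI runner.
    for line in client.images.push(REPOSITORY, tag=TAG, stream=True, decode=True):
        if "status" in line:
            print(line["status"])
    return f"{REPOSITORY}:{TAG}"


if __name__ == "__main__":
    print("Pushed", build_and_push())
```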
Posted 5 days ago
6.0 years
0 Lacs
India
Remote
What You'll Do
Avalara is an AI-first company. We expect every engineer and manager to use AI to enhance productivity, quality, innovation, and customer value. AI is embedded in our workflows and products — and success at Avalara requires embracing AI as an essential capability, not an optional tool.

We are looking for experienced Machine Learning Engineers with a background in software development and a deep enthusiasm for solving complex problems. You will lead a dynamic team dedicated to designing and implementing a large language model framework to power diverse applications across Avalara. Your responsibilities will span the entire development lifecycle, including conceptualization, prototyping and delivery of the LLM platform features. You will build core agent infrastructure—A2A orchestration and MCP-driven tool discovery—so teams can launch secure, scalable agent workflows. You will report to the Senior Manager, Machine Learning.

What Your Responsibilities Will Be
We are looking for engineers who can think quickly and have a background in implementation. Your responsibilities will include:
Build on top of the foundational framework for supporting Large Language Model applications at Avalara; experience with LLMs such as GPT, Claude, Llama and other Bedrock models is expected.
Leverage best practices in software development, including Continuous Integration/Continuous Deployment (CI/CD) along with appropriate functional and unit testing.
Promote innovation by researching and applying the latest technologies and methodologies in machine learning and software development.
Write, review, and maintain high-quality code that meets industry standards, contributing to the project's overall quality.
Lead code review sessions, ensuring good code quality and documentation.
Mentor junior engineers, encouraging a culture of collaboration.
Develop and debug software proficiently, preferably in Python, though familiarity with additional programming languages is valued and encouraged.

What You'll Need To Be Successful
6+ years of experience building Machine Learning models and deploying them in production environments as part of creating solutions to complex customer problems.
Proficiency working in cloud computing environments (AWS, Azure, GCP), Machine Learning frameworks, and software development best practices.
Experience working with technological innovations in AI and ML (especially GenAI) and applying them.
Experience with design patterns and data structures.
Good analytical, design and debugging skills.

Technologies You Will Work With
Python, LLMs, Agents, A2A, MCP, MLflow, Docker, Kubernetes, Terraform, AWS, GitLab, Postgres, Prometheus, and Grafana. We are the AI & ML enablement group in Avalara. This is a remote position.

How We'll Take Care Of You
Total Rewards: In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses.
Health & Wellness: Benefits vary by location but generally include private medical, life, and disability insurance.
Inclusive culture and diversity: Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship.

What You Need To Know About Avalara
We're defining the relationship between tax and tech.
We’ve already built an industry-leading cloud compliance platform, processing over 54 billion customer API calls and over 6.6 million tax returns a year. Our growth is real - we're a billion dollar business - and we’re not slowing down until we’ve achieved our mission - to be part of every transaction in the world. We’re bright, innovative, and disruptive, like the orange we love to wear. It captures our quirky spirit and optimistic mindset. It shows off the culture we’ve designed, that empowers our people to win. We’ve been different from day one. Join us, and your career will be too. We’re An Equal Opportunity Employer Supporting diversity and inclusion is a cornerstone of our company — we don’t want people to fit into our culture, but to enrich it. All qualified candidates will receive consideration for employment without regard to race, color, creed, religion, age, gender, national orientation, disability, sexual orientation, US Veteran status, or any other factor protected by law. If you require any reasonable adjustments during the recruitment process, please let us know.
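The technology list for this role includes MLflow alongside the LLM stack. As a brief, illustrative example (the experiment name, parameters, and metric values are assumptions), the snippet below shows the core MLflow tracking pattern used when iterating on models or prompt and evaluation runs: start a run, log parameters and metrics, and log an artifact for later comparison.

```python
# Minimal MLflow tracking sketch; experiment name, params, and metrics are placeholders.
# Requires the `mlflow` package; without MLFLOW_TRACKING_URI set, runs are stored locally in ./mlruns.
import json
import mlflow

mlflow.set_experiment("llm-eval-demo")   # hypothetical experiment name

params = {"model": "example-llm", "temperature": 0.2, "max_tokens": 512}   # assumed settings
metrics = {"exact_match": 0.81, "latency_p95_s": 1.7}                      # assumed eval results

with mlflow.start_run(run_name="baseline"):
    mlflow.log_params(params)
    mlflow.log_metrics(metrics)

    # Persist the raw evaluation summary as an artifact for later inspection.
    with open("eval_summary.json", "w", encoding="utf-8") as f:
        json.dump({"params": params, "metrics": metrics}, f, indent=2)
    mlflow.log_artifact("eval_summary.json")

print("Run logged; inspect it with the MLflow UI")
```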
Posted 5 days ago
15.0 years
0 Lacs
India
Remote
What You'll Do
Avalara is an AI-first company. We expect every engineer, manager, and leader to use AI to enhance productivity, quality, innovation, and customer value. AI is embedded in our workflows and products — and success at Avalara requires embracing AI as an essential capability, not an optional tool.

As a Principal Engineer, you will apply your vision and drive to create market-leading technology. We have a phenomenal team working in an open, collaborative environment that makes taxes and compliance less taxing to deal with. It will be up to you to convert product vision and requirements into the finished product. Avalara is a global company with dev teams across multiple locations in the world. You will report to the Senior Director of Engineering.

What Your Responsibilities Will Be
Automation vs. people power: computers are great for process automation, but there is a limit to what they can do, and you know where that limit is. You will solve the unique challenges that occur at the intersection of software and people-driven tasks and apply these solutions to the novel business automation that Avalara aims to create.
Industry leadership: Avalara is the market leader, and we intend on staying that way. That means we cannot be complacent. We encourage everyone to make bold moves and keep testing their limits. You will improve and produce ideas to make things better.
Collaborate with teams to align integration efforts with product and team goals.
Lead, mentor, and inspire multiple teams, providing guidance on best practices, architecture, and development methodologies.
Foster a culture of innovation, collaboration, and accountability within the engineering teams.
Ensure seamless data flow and real-time synchronization between systems, minimizing latency and ensuring data integrity for an ever-growing client base and increasing data volumes.
Guide the design of high-quality, scalable, and maintainable integration solutions.
Focus on security aspects, observability, scalability, and telemetry.
Perform code reviews and ensure coding standards are followed.

What You'll Need To Be Successful
Bachelor's or Master's degree in computer science or equivalent.
15+ years of full-stack experience in software design.
Experience with object-oriented programming languages, APIs, data models, and authentication mechanisms.
Good experience with RESTful APIs, JSON, XML, and other data interchange formats.
Familiarity with authentication protocols like OAuth and token-based authentication.
Experience working on AWS Cloud and DevOps (Terraform, Docker, ECS) would be beneficial.
Experience building scalable, resilient, and observable distributed systems.
Experience delivering high-quality software projects.
Proficiency in CI/CD tools (Jenkins, GitLab, etc.).

Work Location: India, Remote

How We'll Take Care Of You
Total Rewards: In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses.
Health & Wellness: Benefits vary by location but generally include private medical, life, and disability insurance.
Inclusive culture and diversity: Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship.

What You Need To Know About Avalara
We're defining the relationship between tax and tech.
Posted 5 days ago
6.0 years
0 Lacs
India
Remote
What You’ll Do Avalara is an AI-first company. We expect every engineer, manager, and leader to actively leverage AI to enhance productivity, quality, innovation, and customer value. AI is embedded in our workflows, decision-making, and products — and success at Avalara requires embracing AI as an essential capability, not an optional tool. We are looking for accomplished Machine Learning Engineers with a background in software development and a deep enthusiasm for solving complex problems. You will lead a dynamic team dedicated to designing and implementing a large language model framework to power diverse applications across Avalara. Your responsibilities will span the entire development lifecycle, including conceptualization, prototyping, development, and delivery of the LLM platform features. You will build core agent infrastructure—A2A orchestration and MCP-driven tool discovery—so teams can launch secure, scalable agent workflows. You will be reporting to the Senior Manager, ML Engineering. What Your Responsibilities Will Be We are looking for engineers who can think quickly and have a background in implementation. Your responsibilities will include: Build on top of the foundational framework for supporting Large Language Model Applications at Avalara Experience with LLMs such as GPT, Claude, Llama, and other Bedrock models Leverage best practices in software development, including Continuous Integration/Continuous Deployment (CI/CD), with appropriate functional and unit testing in place. Drive innovation by researching and applying the latest technologies and methodologies in machine learning and software development. Write, review, and maintain high-quality code that meets industry standards, contributing to the project's technical excellence. Lead code review sessions, ensuring good code quality and documentation. Mentor junior engineers, promoting a culture of collaboration and engineering expertise. Proficiency in developing and debugging software with a preference for Python, though familiarity with additional programming languages is valued and encouraged. What You’ll Need To Be Successful 6+ years of experience building Machine Learning models and deploying them in production environments as part of creating solutions to complex customer problems. Proficiency working in cloud computing environments (AWS, Azure, GCP), Machine Learning frameworks, and software development best practices. Demonstrated experience staying current with breakthroughs in AI/ML, with a focus on GenAI. Experience with design patterns and data structures. Technologies You Will Work With Python, LLMs, Agents, A2A, MCP, MLflow, Docker, Kubernetes, Terraform, AWS, GitLab, Postgres, Prometheus, and Grafana We are the AI & ML enablement group at Avalara. We empower Avalara's Product and Engineering teams with the latest AI & ML capabilities, driving easy-to-use, automated compliance solutions that position Avalara as the industry AI technology leader and the go-to choice for all compliance needs. This is a remote position. How We’ll Take Care Of You Total Rewards In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses. Health & Wellness Benefits vary by location but generally include private medical, life, and disability insurance. Inclusive culture and diversity Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture.
We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship.
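As an illustration of the model-fallback behaviour the LLM platform role above calls for, here is a minimal, vendor-neutral Python sketch. The provider callables are stand-ins, not real Bedrock or OpenAI SDK calls; a production version would wrap the vendor clients behind the same interface.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-fallback")

# Each provider is modelled as a plain callable: prompt -> completion text.
# Real implementations would wrap the relevant vendor SDK (Bedrock, OpenAI, etc.).
Provider = Callable[[str], str]


def complete_with_fallback(prompt: str, providers: dict[str, Provider]) -> str:
    """Try each configured model in order and return the first successful completion."""
    errors: dict[str, str] = {}
    for name, call in providers.items():
        try:
            log.info("calling model %s", name)
            return call(prompt)
        except Exception as exc:  # any provider failure triggers fallback to the next model
            errors[name] = str(exc)
            log.warning("model %s failed: %s", name, exc)
    raise RuntimeError(f"all providers failed: {errors}")


if __name__ == "__main__":
    # Toy providers for illustration only.
    def flaky_primary(prompt: str) -> str:
        raise TimeoutError("primary model timed out")

    def stable_secondary(prompt: str) -> str:
        return f"echo: {prompt}"

    print(complete_with_fallback("Summarise this invoice.", {
        "claude-primary": flaky_primary,
        "llama-fallback": stable_secondary,
    }))
```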
Posted 5 days ago
3.0 years
0 Lacs
India
Remote
What You'll Do Join a collaborative team focused on building scalable, cloud-native tax compliance solutions that simplify complex challenges for businesses around the world. You'll help create customer-facing features by contributing to primary product development across distributed systems and modern web technologies. You'll work with experienced engineers and partners who value innovation, ownership, and learning. What Your Responsibilities Will Be You will develop scalable, cloud-native applications using C#, ASP.NET Core, React, and AWS. On a typical day, you'll write high-quality code, participate in design and code reviews, and contribute to feature development within a microservices architecture. You'll use tools like Git, Kubernetes, Terraform, and CI/CD pipelines to ensure efficient and reliable delivery. Collaboration with teams will be important for translating requirements into working software that powers Avalara's tax automation solutions. You will report to an Engineering Manager. This is a remote role. What You’ll Need To Be Successful Bachelor's degree in Computer Science, Engineering, or a related field Minimum of 3 years of professional experience in software development, preferably in product development Strong proficiency in C# and ASP.NET Core, with a solid grasp of data structures, algorithms, and design patterns Hands-on experience building cloud-native, distributed systems with a DevOps mindset, preferably using AWS Proven experience working with microservices architecture Familiarity with Infrastructure as Code (IaC) tools such as Terraform Experience deploying and managing containerized applications using Kubernetes Proficient in working with relational databases like SQL Server or PostgreSQL Experience with front-end frameworks or libraries such as React and Next.js Proficient with version control systems like Git, as well as CI/CD pipelines and Agile development practices Strong problem-solving skills and keen attention to detail Excellent collaboration skills, with the ability to gather requirements and align with stakeholders Effective communicator and strong team player Capable of working independently and managing time and priorities efficiently Good to have: knowledge of GenAI implementation. How We’ll Take Care Of You Total Rewards In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses. Health & Wellness Benefits vary by location but generally include private medical, life, and disability insurance. Inclusive culture and diversity Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship.
Posted 5 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You'll Do The Global Analytics & Insights (GAI) team is seeking a Data & Analytics Engineering Manager to lead our team in designing, developing, and maintaining data pipelines and analytics infrastructure. As a Data & Analytics Engineering Manager, you will play a pivotal role in empowering a team of engineers to build and enhance analytics applications and a modern data platform using Snowflake, dbt (Data Build Tool), Python, Terraform, and Airflow. You will become an expert in Avalara’s financial, marketing, sales, and operations data. The ideal candidate will have deep SQL experience, an understanding of modern data stacks and technology, demonstrated leadership and mentoring experience, and an ability to drive innovation and manage complex projects. This position will report to a Senior Manager. What Your Responsibilities Will Be Mentor a team of data engineers, providing guidance and support to ensure a high level of quality and career growth Lead a team of data engineers in the development and maintenance of data pipelines, data modelling, code reviews, and data products Collaborate with cross-functional teams to understand requirements and translate them into scalable data solutions Drive innovation and continuous improvements within the data engineering team Build maintainable and scalable processes and playbooks to ensure consistent delivery and quality across projects Drive adoption of best practices in data engineering and data modelling Be the visible lead of the team: coordinate communication, releases, and status to various stakeholders What You’ll Need To Be Successful Bachelor's degree in Computer Science, Engineering, or related field 10+ years of experience in the data engineering field, with deep SQL knowledge 2+ years of management experience, including direct technical reports 5+ years of experience with data warehousing concepts and technologies 4+ years working with Git, and demonstrated experience using it to facilitate the growth of engineers 4+ years working with Snowflake 3+ years working with dbt (dbt Core preferred) Preferred Qualifications Snowflake, dbt, or AWS certified 3+ years working with Infrastructure as Code, preferably Terraform 2+ years working with CI/CD, and demonstrated ability to build and operate pipelines Experience and understanding of Snowflake administration and security principles Demonstrated experience with Airflow How We’ll Take Care Of You Total Rewards In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses. Health & Wellness Benefits vary by location but generally include private medical, life, and disability insurance. Inclusive culture and diversity Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship.
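For readers unfamiliar with how the Snowflake/dbt/Airflow stack named above is commonly wired together, here is a hedged sketch of a minimal Airflow DAG that triggers dbt on a schedule. The project path, target, and schedule are assumptions for illustration, not details from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Illustrative project location, not a real layout.
DBT_PROJECT_DIR = "/opt/analytics/dbt_project"

with DAG(
    dag_id="daily_dbt_build",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",          # `schedule_interval` on older Airflow 2.x releases
    catchup=False,
    tags=["analytics", "dbt"],
) as dag:
    dbt_deps = BashOperator(
        task_id="dbt_deps",
        bash_command=f"cd {DBT_PROJECT_DIR} && dbt deps",
    )
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_PROJECT_DIR} && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_PROJECT_DIR} && dbt test --target prod",
    )

    # Resolve packages, build models, then run the tests dbt defines against them.
    dbt_deps >> dbt_run >> dbt_test
```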
Posted 5 days ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Experience: 4+ years Salary: Confidential (based on experience) Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Office (Ahmedabad) Placement Type: Full-time Permanent Position (*Note: This is a requirement for one of Uplers' clients - Attri) What do you need for this opportunity? Must-have skills required: Azure, Docker, TensorFlow, Python, Shell Scripting Attri is Looking for: About Attri Attri is an AI organization that helps businesses initiate and accelerate their AI efforts. We offer the industry’s first end-to-end enterprise machine learning platform, empowering teams to focus on ML development rather than infrastructure. From ideation to execution, our global team of AI experts supports organizations in building scalable, state-of-the-art ML solutions. Our mission is to redefine businesses by harnessing cutting-edge technology and a unique, value-driven approach. With team members across continents, we celebrate diversity, curiosity, and innovation. We’re now looking for a Senior DevOps Engineer to join our fast-growing, remote-first team. If you're passionate about automation, scalable cloud systems, and supporting high-impact AI workloads, we’d love to connect. What You'll Do (Responsibilities): Design, implement, and manage scalable, secure, and high-performance cloud-native infrastructure across Azure. Build and maintain Infrastructure as Code (IaC) using Terraform or CloudFormation. Develop event-driven and serverless architectures using AWS Lambda, SQS, and SAM. Architect and manage containerized applications using Docker, Kubernetes, ECR, ECS, or AKS. Establish and optimize CI/CD pipelines using GitHub Actions, Jenkins, AWS CodeBuild & CodePipeline. Set up and manage monitoring, logging, and alerting using Prometheus + Grafana, Datadog, and centralized logging systems. Collaborate with ML Engineers and Data Engineers to support MLOps pipelines (Airflow, ML Pipelines) and Bedrock with TensorFlow or PyTorch. Implement and optimize ETL/data streaming pipelines using Kafka, EventBridge, and Event Hubs. Automate operations and system tasks using Python and Bash, along with Cloud CLIs and SDKs. Secure infrastructure using IAM/RBAC and follow best practices in secrets management and access control. Manage DNS and networking configurations using Cloudflare, VPC, and PrivateLink. Lead architecture implementation for scalable and secure systems, aligning with business and AI solution needs. Conduct cost optimization through budgeting, alerts, tagging, right-sizing resources, and leveraging spot instances. Contribute to backend development in Python (Web Frameworks), REST/Socket and gRPC design, and testing (unit/integration). Participate in incident response, performance tuning, and continuous system improvement. Good to Have: Hands-on experience with ML lifecycle tools like MLflow and Kubeflow Previous involvement in production-grade AI/ML projects or data-intensive systems Startup or high-growth tech company experience Qualifications: Bachelor’s degree in Computer Science, Information Technology, or a related field. 5+ years of hands-on experience in a DevOps, SRE, or Cloud Infrastructure role. Proven expertise in multi-cloud environments (AWS, Azure, GCP) and modern DevOps tooling. Strong communication and collaboration skills to work across engineering, data science, and product teams.
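As a small illustration of the event-driven, serverless pattern listed above (AWS Lambda consuming SQS), here is a hedged Python sketch. The payload field and business logic are placeholders; the batch-failure response format assumes the event source mapping has ReportBatchItemFailures enabled.

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def handler(event, context):
    """AWS Lambda entry point for an SQS event source mapping.

    SQS delivers a batch under event["Records"]; each record's `body` is the
    raw message string. The `order_id` field below is a placeholder payload.
    """
    failed = []
    for record in event.get("Records", []):
        try:
            message = json.loads(record["body"])
            logger.info("processing order %s", message.get("order_id"))
            # ... business logic goes here ...
        except Exception:
            logger.exception("failed to process message %s", record.get("messageId"))
            # Report partial batch failures so only bad messages are retried.
            failed.append({"itemIdentifier": record.get("messageId")})
    return {"batchItemFailures": failed}
```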
Benefits: Competitive Salary 💸 Support for continual learning (free books and online courses) 📚 Leveling Up Opportunities 🌱 Diverse team environment 🌍 How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 5 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Python + AWS/DataBricks Developer 📍 Hyderabad (Work from Office) 📅 5+ years of experience | Immediate joiners preferred 🔹 Must-have Skills: Expert Python programming (3.7+) Strong AWS (EC2, S3, Lambda, Glue, CloudFormation) DataBricks platform experience ETL pipeline development SQL/NoSQL databases PySpark/Pandas proficiency 🔹 Good-to-have: AWS certifications Terraform knowledge Airflow experience Interested candidates can send their profiles to shruti.pandey@codeethics.in Please mention the position you're applying for! #Hiring #ReactJS #Python #AWS #DataBricks #HyderabadJobs #TechHiring #WFO
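For context on the ETL skills listed above, here is a hedged PySpark sketch of a simple S3-to-S3 job (read CSV, clean, write partitioned Parquet). The bucket paths and column names are invented for illustration only.

```python
from pyspark.sql import SparkSession, functions as F

# Bucket names, paths, and column names below are illustrative placeholders.
SOURCE = "s3://example-raw-bucket/sales/2024/*.csv"
TARGET = "s3://example-curated-bucket/sales_clean/"

spark = SparkSession.builder.appName("sales-etl").getOrCreate()

raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv(SOURCE)
)

clean = (
    raw
    .dropDuplicates(["order_id"])                          # de-duplicate on the business key
    .withColumn("amount", F.col("amount").cast("double"))  # normalise the numeric column
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .filter(F.col("amount") > 0)                           # drop refunds/invalid rows
)

clean.write.mode("overwrite").partitionBy("order_date").parquet(TARGET)
spark.stop()
```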
Posted 5 days ago
0.0 - 5.0 years
8 - 12 Lacs
Musheerabad, Hyderabad, Telangana
On-site
Senior DevOps Engineer – Technical Support Focus Hyderabad, India (Onsite Role) Experience Level: 5+ years Employment Type: Full-time About the Role We are seeking a seasoned Senior DevOps Engineer to bridge the gap between development and operations, ensuring seamless software delivery and robust infrastructure. This role emphasizes technical support, requiring close collaboration with support teams to resolve complex issues and enhance system reliability. Key Responsibilities Infrastructure Management: Design, implement, and maintain scalable infrastructure solutions using tools like Terraform and Ansible. CI/CD Pipelines: Develop and optimize continuous integration and deployment pipelines to ensure efficient software delivery. Monitoring & Incident Response: Implement monitoring solutions and lead incident response efforts to maintain system uptime and performance. Collaboration: Work closely with development, QA, and support teams to address technical challenges and improve operational workflows. Automation: Automate repetitive tasks to enhance efficiency and reduce manual intervention. Qualifications Education: Bachelor's degree in Computer Science, Engineering, or a related field. Experience: Minimum of 5 years in DevOps roles, with a focus on infrastructure and technical support. Technical Skills: Proficiency in cloud platforms (AWS, Azure, or GCP), containerization tools (Docker, Kubernetes), and scripting languages (Python, Bash). Soft Skills: Strong problem-solving abilities, excellent communication skills, and a collaborative mindset. Job Types: Full-time, Permanent, Contractual / Temporary Contract length: 12 months Pay: ₹800,000.00 - ₹1,200,000.00 per year Benefits: Paid sick time Ability to commute/relocate: Musheerabad, Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required) Education: Bachelor's (Preferred) Experience: DevOps: 5 years (Required) Language: English (Required) Shift availability: Night Shift (Required) Work Location: In person
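To illustrate the kind of repetitive check this support-focused DevOps role would typically automate, here is a hedged Python sketch of an HTTP health sweep that a scheduler or pipeline could run. The service endpoints and alert webhook are placeholders, not real systems.

```python
import sys

import requests

# Endpoints and webhook URL are placeholders for illustration.
SERVICES = {
    "orders-api": "https://orders.internal.example.com/healthz",
    "billing-api": "https://billing.internal.example.com/healthz",
}
ALERT_WEBHOOK = "https://chat.example.com/hooks/devops-alerts"


def check(url: str) -> bool:
    """Return True when the service answers 200 within the timeout."""
    try:
        return requests.get(url, timeout=5).status_code == 200
    except requests.RequestException:
        return False


def main() -> int:
    failures = [name for name, url in SERVICES.items() if not check(url)]
    if failures:
        # Post a short alert to a chat webhook; a real setup might page instead.
        requests.post(
            ALERT_WEBHOOK,
            json={"text": f"Health check failed: {', '.join(failures)}"},
            timeout=5,
        )
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```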
Posted 5 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
At Seismic, we're proud of our engineering culture where technical excellence and innovation drive everything we do. We're a remote-first data engineering team responsible for the critical data pipeline that powers insights for over 2,300 customers worldwide. Our team manages all data ingestion processes, leveraging technologies like Apache Kafka, Spark, various C# microservices, and a shift-left data mesh architecture to transform diverse data streams into the valuable reporting models that our customers rely on daily to make data-driven decisions. Additionally, we're evolving our analytics platform to include AI-powered agentic workflows. Who You Are Have working knowledge of one OO language, preferably C#, but we won’t hold your Java expertise against you (you’re the type of person who’s interested in learning and becoming an expert at new things). Additionally, we’ve been using Python more and more, and bonus points if you’re familiar with Scala. Have experience with architecturally complex distributed systems. Highly focused on operational excellence and quality – you have a passion to write clean and well-tested code and believe in the testing pyramid. Outstanding verbal and written communication skills with the ability to work with others at all levels, effective at working with geographically remote and culturally diverse teams. You enjoy solving challenging problems, all while having a blast with equally passionate team members. Conversant in AI engineering. You’ve been experimenting with building AI solutions/integrations using LLMs, prompts, Copilots, Agentic ReAct workflows, etc. At Seismic, we’re committed to providing benefits and perks for the whole self. To explore our benefits available in each country, please visit the Global Benefits page. Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page. Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at seismic.com. Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here. Collaborating with experienced software engineers, data scientists and product managers to rapidly build, test, and deploy code to create innovative solutions and add value to our customers' experience. Building large scale platform infrastructure and REST APIs serving machine learning driven content recommendations to Seismic products. Leveraging the power of context in third-party applications such as CRMs to drive machine learning algorithms and models. Helping build next-gen Agentic tooling for reporting and insights Processing large amounts of internal and external system data for analytics, caching, modeling and more. Identifying performance bottlenecks and implementing solutions for them.
Participating in code reviews, system design reviews, agile ceremonies, bug triage and on-call rotations. BS or MS in Computer Science, similar technical field of study, or equivalent practical experience. 3+ years of software development experience within a SaaS business. Must have familiarity with .NET Core, C#, and related frameworks. Experience in data engineering - building and managing Data Pipelines, ETL processes, and familiarity with various technologies that drive them: Kafka, Fivetran (Optional), Spark/Scala (Optional), etc. Data warehouse experience with Snowflake, or similar (AWS Redshift, Apache Iceberg, Clickhouse, etc). Familiarity with RESTful microservice-based APIs Experience in modern CI/CD pipelines and infrastructure (Jenkins, GitHub Actions, Terraform, Kubernetes, or equivalent) is a big plus Experience with Scrum and the Agile development process. Familiarity developing in cloud-based environments Optional: Experience with 3rd party integrations Optional: familiarity with meeting systems like Zoom, WebEx, MS Teams Optional: familiarity with CRM systems like Salesforce, Microsoft Dynamics 365, HubSpot. If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here. Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement, backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries including John Thompson, former Chairman of the Board for Microsoft. Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law. Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
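As an example of the Kafka-based ingestion described in this posting, here is a hedged Python consumer sketch using the confluent-kafka client (the team's production services are largely C#, so this is illustrative only). Broker addresses, topic, consumer group, and the event payload are placeholders.

```python
import json

from confluent_kafka import Consumer

# Broker addresses, topic, and consumer group are placeholders.
conf = {
    "bootstrap.servers": "broker-1:9092,broker-2:9092",
    "group.id": "reporting-model-builder",
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,
}

consumer = Consumer(conf)
consumer.subscribe(["content.engagement.events"])

try:
    while True:
        msg = consumer.poll(1.0)          # block up to 1s waiting for a record
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())
        # ... transform and upsert into the reporting model here ...
        print(f"ingested event {event.get('event_id')} from partition {msg.partition()}")
        consumer.commit(msg)              # commit only after successful processing
finally:
    consumer.close()
```

Disabling auto-commit and committing after processing gives at-least-once delivery, which suits reporting models that can tolerate an occasional reprocessed event.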
Posted 5 days ago
150.0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Description We are looking for someone who embraces new technologies and exudes passion for solving complex puzzles. Your responsibilities focus on three critical areas of the global communications team: Network Security, Network Infrastructure, and Network Optimization. Additionally, you will participate with your global team members in acquisitions, technology evaluations, design, and, of course, resolving intellectually challenging technical issues. If you enjoy traveling and exploring cultures around the world, you are in luck. With 600+ locations in every time zone in over 65 countries on every continent except Antarctica, the sun never sets for NOV. We are a globally diverse happy family, driven to power the people who power the world. Responsibilities You like to engage directly with business groups in architecture discussions and standard methodology. Possess strong communication skills and the ability to build trust and integrity in your relationships with our business partners. Compelling presenter with superb communication skills to represent the capabilities and services IT provides. Ability to communicate deftly to both technical and non-technical audiences. Ability to quickly learn, understand, and work with new emerging technologies, methodologies and solutions in the cloud/IT technology space. Your deep understanding of network application protocols such as DNS, HTTP, and encryption, and your skill at tackling problems with network packet capture, make you an ideal team member. Deep understanding of IT security as a whole with a proven focus on firewalls, application and endpoint security. A rich understanding of wired or wireless network protocols. Required understanding of global wide area networking and the impact of latency. What do you think of a wireless-only office? A proven knowledge of datacenter and cloud networking. Required scripting/programming knowledge for automation, and an understanding of redundancy concepts. Solid grasp of optimization technologies including compression, caching, load balancing, and distributed topology. Required experience working with business teams to dissect applications and identify optimization opportunities. Requirements You should have experience in or exposure to the following technologies. NGFW, Cloud security, WAF, DNS RPZ, SD-WAN, SDN, Microsegmentation using Zscaler or Cloudflare. Current popular Network OSes, Network virtualization, Hybrid Clouds like Azure/AWS/Google, Linux, Docker, etc. Datacenter technology EVPN/VXLAN, MP-BGP, MLAG, VARP, spine-leaf topology using Arista Networks. IPSec, MP-BGP, OSPF, IS-IS, MPLS, VRF, VXLAN, STP, IRF/Clustering, LACP, 802.1x, 802.1q, 802.11, DNS, DHCP, HTTP, SSL x509, QoS. DevSecOps, REST APIs, Python, Perl, C, JavaScript, PowerShell, Bash scripting, Ansible, Terraform, etc. Zero Trust principles and least-privilege access. WAN: SD-WAN and optimization using Versa Networks. Bachelor of Science in Computer Science or Computer Engineering Certifications Current CCNP or CCIE (must) Aruba ACMP or ACSA professional certification (plus) Azure solution architect expert (plus) Zscaler certification (plus) Palo Alto PCNSE / PCSAE (plus) Wireshark WCNA (plus) About Us Every day, the oil and gas industry’s best minds put more than 150 years of experience to work to help our customers achieve lasting success.
We Power the Industry that Powers the World Throughout every region in the world and across every area of drilling and production, our family of companies has provided the technical expertise, advanced equipment, and operational support necessary for success—now and in the future. Global Family We are a global family of thousands of individuals, working as one team to create a lasting impact for ourselves, our customers, and the communities where we live and work. Purposeful Innovation Through purposeful business innovation, product creation, and service delivery, we are driven to power the industry that powers the world better. Service Above All This drives us to anticipate our customers’ needs and work with them to deliver the finest products and services on time and on budget.
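Tying back to the scripting and WAN-latency requirements above, here is a hedged Python sketch that resolves each site's hostname and times a single HTTPS request, the sort of quick measurement a network engineer might script before reaching for packet capture. The hostnames are placeholders, not real NOV systems.

```python
import socket
import time

import requests

# Site endpoints are placeholders for a handful of global locations.
SITES = {
    "houston": "https://houston.ping.example.com/health",
    "stavanger": "https://stavanger.ping.example.com/health",
    "singapore": "https://singapore.ping.example.com/health",
}


def measure(url: str) -> tuple[str, float]:
    """Resolve the host, then time one HTTPS request (DNS + TCP + TLS + HTTP)."""
    host = url.split("/")[2]
    addr = socket.getaddrinfo(host, 443)[0][4][0]   # first resolved address
    start = time.perf_counter()
    requests.get(url, timeout=10)
    return addr, (time.perf_counter() - start) * 1000


if __name__ == "__main__":
    for site, url in SITES.items():
        try:
            addr, ms = measure(url)
            print(f"{site:<12} {addr:<16} {ms:7.1f} ms")
        except Exception as exc:
            print(f"{site:<12} unreachable: {exc}")
```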
Posted 5 days ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
We are seeking a DevOps Engineer to join our dynamic team. In this role, you’ll play a key role in building and managing resilient, secure, and scalable cloud infrastructure while driving automation and optimizing development and operations workflows. If you are passionate about cloud technologies, automation, and continuous improvement, this is the perfect opportunity for you. What You’ll Do As a DevOps Engineer , you will be a crucial member of our team, responsible for the seamless integration between software development and IT operations. You’ll focus on automation, continuous delivery, and enhancing system performance, reliability, and scalability. Key Responsibilities : CI/CD Pipeline Management: Design, implement, and maintain automated CI/CD pipelines using tools such as Jenkins, GitHub Actions, and Nexus to streamline development and accelerate deployment cycles. Cloud Infrastructure Management: Architect, deploy, and optimize scalable, highly available, and secure cloud environments on Azure, ensuring cost-efficiency and operational excellence. Infrastructure Automation: Utilize Infrastructure-as-Code (IaC) tools like Terraform and Azure Resource Manager (ARM) templates to automate the provisioning and management of cloud infrastructure. Collaborative Development: Work alongside development and operations teams to design and maintain infrastructure solutions that align with business and software requirements. System Monitoring & Performance Optimization: Monitor system performance proactively, troubleshoot production issues, and resolve them quickly to ensure uninterrupted service. Security & Compliance: Partner with security teams to implement cloud security best practices and ensure compliance with industry regulations and Allianz’s internal policies. Disaster Recovery & High Availability: Develop and manage high availability (HA) and disaster recovery (DR) strategies to ensure critical systems remain available and resilient. Containerization & Microservices: Support the development and deployment of microservices using Docker and Kubernetes for container orchestration. Continuous Improvement: Continuously identify opportunities to optimize infrastructure, enhance system reliability, and improve operational efficiency. Environment Support: Maintain and support development, staging, and production environments to ensure consistent application performance. What Will Make You Successful in This Role? We are looking for a DevOps Engineer with extensive hands-on experience in Azure cloud platforms, CI/CD pipelines , and infrastructure automation . You should have a solid technical foundation, a passion for automation, and the ability to work effectively in a collaborative and fast-paced environment. Required Skills & Qualifications: Experience: Proven experience as a DevOps Engineer or similar roles, with a focus on Azure cloud environments and CI/CD workflows. CI/CD Tools: Hands-on experience with CI/CD tools such as Jenkins, GitHub Actions, Nexus, or similar platforms to automate build, test, and deployment processes. Cloud Expertise: Expertise in managing Azure cloud services such as Azure Kubernetes Service (AKS), Azure App Services, Azure Functions, and Azure Data Factory. Infrastructure Automation: Proficient in using Terraform, ARM templates, and other IaC tools to automate infrastructure provisioning and management. 
Containerization & Orchestration: Hands-on experience with Docker and Kubernetes to manage containerized applications and orchestrate microservices-based architectures. Scripting & Automation: Strong scripting skills in Python, Bash, Groovy, Shell, or similar languages for task automation and environment management. Linux/Unix Proficiency: Solid experience with Linux/Unix system administration, shell scripting, and automation. Monitoring & Logging: Familiarity with Dynatrace, Prometheus, or similar tools for monitoring, logging, and troubleshooting system performance. Version Control: Expertise in Git for version control and collaboration on code repositories. Networking & Infrastructure: In-depth understanding of networking concepts, including firewalls, load balancing, DNS configurations, and general cloud network architecture. Agile & DevOps Best Practices: Solid understanding of Agile methodologies and DevOps principles to promote collaboration and optimize workflows. Problem-Solving & Troubleshooting: Excellent problem-solving skills with the ability to resolve complex issues in live production environments. Collaboration & Communication: Strong communication skills and the ability to collaborate effectively with cross-functional teams, including developers, security specialists, and stakeholders. Bachelor’s degree in Computer Science, Information Technology, or a related field. Preferred Qualifications: Familiarity with disaster recovery (DR) strategies and implementing high availability (HA) solutions. Knowledge of security best practices for cloud infrastructure and application deployment. Why Join Us? At Allianz Technology, you’ll work with a passionate, diverse, and talented team of over 10,000 professionals across more than 55 countries. We believe in empowering our people to lead and innovate, offering opportunities to learn and grow while working on cutting-edge technologies. You’ll play a key role in the future of digital transformation within Allianz, contributing to meaningful projects that drive business success. You’ll be part of a collaborative and dynamic work environment that values your ideas, expertise, and contributions. We offer: Competitive salary and benefits package. Opportunities for growth and career advancement in a global company. A flexible, hybrid working model that promotes work-life balance. A diverse, inclusive environment that celebrates different perspectives and backgrounds. Diversity & Inclusion Statement Allianz Technology is proud to be an equal opportunity employer, promoting diversity and inclusion within our workforce. We welcome applicants from all backgrounds, and we encourage applications regardless of gender identity, sexual orientation, ethnicity, cultural background, age, nationality, religion, disability, or philosophy of life. We are committed to fostering an inclusive environment where everyone has the opportunity to thrive. Join Us. Let’s Care for Tomorrow. Allianz Group is one of the most trusted insurance and asset management companies in the world. Caring for our employees, their ambitions, dreams and challenges, is what makes us a unique employer. Together we can build an environment where everyone feels empowered and has the confidence to explore, to grow and to shape a better future for our customers and the world around us. We at Allianz believe in a diverse and inclusive workforce and are proud to be an equal opportunity employer.
We encourage you to bring your whole self to work, no matter where you are from, what you look like, who you love or what you believe in. We therefore welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation. Join us. Let's care for tomorrow.
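As an example of a small automation piece in the CI/CD and monitoring duties described in this posting, here is a hedged Python post-deployment smoke check that a Jenkins or GitHub Actions stage could run. The health endpoint, expected version, and retry budget are assumptions, not details from the role.

```python
import sys
import time

import requests

# The endpoint and expected version are illustrative pipeline parameters.
HEALTH_URL = "https://myapp.example.azurewebsites.net/health"
EXPECTED_VERSION = "1.4.2"
RETRIES = 10
DELAY_SECONDS = 15


def deployment_is_healthy() -> bool:
    """Poll the health endpoint until it reports the expected version or retries run out."""
    for attempt in range(1, RETRIES + 1):
        try:
            payload = requests.get(HEALTH_URL, timeout=5).json()
            if payload.get("status") == "ok" and payload.get("version") == EXPECTED_VERSION:
                print(f"healthy after {attempt} attempt(s)")
                return True
            print(f"attempt {attempt}: not ready yet ({payload})")
        except requests.RequestException as exc:
            print(f"attempt {attempt}: {exc}")
        time.sleep(DELAY_SECONDS)
    return False


if __name__ == "__main__":
    # A non-zero exit code fails the pipeline stage and can trigger a rollback job.
    sys.exit(0 if deployment_is_healthy() else 1)
```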
Posted 5 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role Overview: Fort Hill LogiX is redefining capital program oversight by integrating advanced AI systems. We are seeking a Platform Engineer with experience in DevOps, Generative AI systems, and infrastructure automation who can help scale and stabilize our next-gen intelligence platform. You will build the foundational systems that power our large language model integrations (Claude, Gemini, GPT, Bedrock), support real-time data workflows, and enable intelligent decision-making across complex financial and operational environments. Key Responsibilities Infrastructure & Cloud Engineering Design and maintain resilient infrastructure on AWS, including core services like EC2, ECS, RDS, Lambda, IAM, and VPC networking. Manage and scale containerized applications using Docker and Kubernetes (EKS) in multi-environment production workflows. Implement Infrastructure-as-Code (IaC) using Terraform or CloudFormation. Build and maintain CI/CD pipelines using GitHub Actions, GitLab CI, or Jenkins, enabling frequent, secure deployments. Configure observability tooling (CloudWatch, Grafana, Prometheus, ELK) for system reliability and performance monitoring. Generative AI Infrastructure Deploy and orchestrate integrations with AWS Bedrock (Claude, Titan), Google Cloud Gemini Enterprise, and OpenAI GPT APIs (via Azure or the OpenAI platform). Build secure, scalable AI inference flows, including throttling, fallback, and caching mechanisms. Support Retrieval-Augmented Generation (RAG) pipelines using LangChain, LangGraph, and vector databases. Optimize latency, cost, and throughput for AI-enabled services. Security & Reliability Work with platform and product teams to enforce secure deployments and encryption (TLS, KMS, IAM). Automate role-based access controls and service provisioning. Lead infrastructure incident response, root cause diagnostics, and uptime management in production environments. Automation & Operations Automate testing, deployment, and rollback of AI services and platform features. Implement event-driven and lazy approval workflows for secure and efficient platform operations. Scale internal tools and platforms to support rapid prototyping and stable production delivery. Qualifications Required: 3+ years in DevOps, Cloud Engineering, or Platform Engineering roles. Deep understanding of AWS, Kubernetes (EKS), Terraform, and CI/CD workflows. Experience integrating Generative AI models/APIs such as Claude, GPT, or Gemini. Strong scripting in Python, Bash, or similar. Understanding of scalable APIs, distributed systems, and cloud-native microservices. Familiarity with LangGraph, LangChain, vector stores (e.g., Pinecone, PGVector, Weaviate). Experience deploying LLM applications with real-world data pipelines and observability. Certification in AWS (e.g., DevOps Engineer, Solutions Architect). Prior work in platform-as-a-service or AI tool development environments. About Fort Hill LogiX: LogiX is a pioneering platform built to deliver intelligent process review, assurance, and transparency for capital programs. Powered by AI and cloud automation, LogiX empowers enterprises with smart tools that streamline reviews, highlight anomalies, and accelerate decision-making.
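To make the throttling and caching requirement above concrete, here is a hedged, framework-free Python sketch of a rate-limited, cached wrapper around a model call. The model callable is a stand-in, not a Bedrock, Gemini, or OpenAI SDK call, and the rate limit is an arbitrary example.

```python
import hashlib
import time
from collections import deque
from typing import Callable

ModelCall = Callable[[str], str]   # prompt -> completion; real code would wrap a vendor SDK


class ThrottledCachedLLM:
    """Rate-limit model calls and cache completions keyed by a prompt hash."""

    def __init__(self, call: ModelCall, max_calls_per_minute: int = 30):
        self._call = call
        self._max = max_calls_per_minute
        self._window: deque[float] = deque()
        self._cache: dict[str, str] = {}

    def _throttle(self) -> None:
        now = time.monotonic()
        while self._window and now - self._window[0] > 60:
            self._window.popleft()                     # drop calls older than the window
        if len(self._window) >= self._max:
            time.sleep(max(60 - (now - self._window[0]), 0))
        self._window.append(time.monotonic())

    def complete(self, prompt: str) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self._cache:
            return self._cache[key]                    # cache hit: no model call at all
        self._throttle()
        result = self._call(prompt)
        self._cache[key] = result
        return result


if __name__ == "__main__":
    llm = ThrottledCachedLLM(lambda p: f"stub completion for: {p}", max_calls_per_minute=5)
    print(llm.complete("Summarise change order #42"))
    print(llm.complete("Summarise change order #42"))  # served from cache
```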
Posted 5 days ago
4.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Role: We are looking for an experienced and highly skilled Software Development Engineer (SDE) with 4-5 years of experience who will work alongside the Genworx team. This role involves contributing to the development of cutting-edge generative AI applications and systems, integrating advanced machine learning models, and ensuring high standards in design, implementation, and deployment. Key Responsibilities: Problem Solving and Design: Propose and implement scalable, efficient solutions tailored to real-world use cases. Software Development: Develop both front-end and back-end components (full stack) for generative AI applications. Debug and resolve systemic issues in a timely manner. Generative AI Development: Integrate pre-trained models, including large language models (LLMs). Optimize AI solutions for performance, scalability, and reliability. Leverage generative AI frameworks and technologies. CI/CD and DevOps Practices: Build and maintain CI/CD pipelines for seamless deployment of AI-driven applications. Implement DevOps best practices to ensure efficient and secure operations. Operational Excellence: Maintain high standards of performance, security, and scalability in design and implementation. Document technical workflows and ensure adherence to operational guidelines. Team Collaboration and Mentorship: Work closely with cross-functional teams, including product managers, ML engineers, and designers, to deliver high-quality solutions. Participate in code reviews, brainstorming sessions, and team discussions. Requirements Required Skills and Qualifications: Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, Artificial Intelligence, or a related field. Technical Skills: Proficiency in front-end technologies: HTML, CSS, JavaScript (React, Angular, or Vue.js). Proficiency in back-end technologies: Node.js, Python, or Java. Strong understanding of databases: SQL (MySQL/PostgreSQL) and NoSQL (MongoDB). Hands-on experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, CircleCI). Familiarity with AI frameworks such as TensorFlow, PyTorch, or Hugging Face. Attributes: Strong problem-solving and system design skills. Ability to mentor team members and collaborate effectively in a team environment. Excellent communication skills for explaining technical concepts. Cloud / Infrastructure: Experience with cloud platforms such as AWS, Azure, or Google Cloud. Knowledge of containerization tools (Docker, Kubernetes) and infrastructure as code (Terraform, CloudFormation). Benefits What We Offer: A vibrant and innovative work environment focused on generative AI solutions. Competitive salary and benefits package tailored for mid-level engineers. Opportunities to work on state-of-the-art AI projects with industry experts. A culture that values creativity, collaboration, and continuous learning.
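Since the role above involves integrating pre-trained models via frameworks like Hugging Face, here is a hedged minimal Python sketch using the transformers pipeline helper. The model name (gpt2) is chosen only because it is small and publicly available, not because the posting specifies it.

```python
from transformers import pipeline

# "gpt2" is used here only because it is small and freely downloadable;
# a real application would load whichever model the team has selected.
generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI can help engineering teams by"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```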
Posted 5 days ago