5.0 - 8.0 years
0 Lacs
India
On-site
Responsibilities:
- Design, implement, and maintain CI/CD pipelines using Jenkins to support automated builds, testing, and deployments.
- Manage and optimize AWS infrastructure for scalability, reliability, and cost-effectiveness.
- Develop automation scripts and tools using shell scripting and other programming languages to streamline operational workflows.
- Collaborate with cross-functional teams (Development, QA, Operations) to ensure seamless software delivery and deployment.
- Monitor and troubleshoot infrastructure, build failures, and deployment issues to ensure high availability and performance.
- Implement and maintain robust configuration management practices and infrastructure-as-code principles.
- Document processes, systems, and configurations to ensure knowledge sharing and maintain operational consistency.
- Perform ongoing maintenance and upgrades (production and non-production).

Qualifications:
- Experience: 5-8 years in DevOps or a similar role.
- Cloud Expertise: Proficient in AWS services such as EC2, S3, RDS, Lambda, IAM, CloudFormation, or similar.
- CI/CD Tools: Hands-on experience with Jenkins pipelines (declarative and scripted).
- Scripting Skills: Proficiency in shell scripting or PowerShell.
- Programming Knowledge: Familiarity with at least one programming language (e.g., Python, Java, or Go). Note: scripting/programming is integral to this role and will be a key focus in the interview process.
- Version Control: Experience with Git and Git-based workflows.
- Monitoring Tools: Familiarity with tools like CloudWatch, Prometheus, or similar.
- Problem-solving: Strong analytical and troubleshooting skills in a fast-paced environment.
- CDK: Knowledge of AWS CDK.
- Tools: Experience with Terraform and Kubernetes.
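Since scripting is called out as a key interview focus, here is a minimal sketch of the kind of operational automation this role describes, written in Python with boto3. The retention window and dry-run default are assumptions for illustration, not details from the posting.

```python
"""Hypothetical ops-automation sketch: delete EBS snapshots older than a cutoff."""
import datetime
import boto3

RETENTION_DAYS = 30  # assumed retention policy

def delete_stale_snapshots(dry_run: bool = True) -> None:
    ec2 = boto3.client("ec2")
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(
        days=RETENTION_DAYS
    )
    # Only look at snapshots owned by this account.
    paginator = ec2.get_paginator("describe_snapshots")
    for page in paginator.paginate(OwnerIds=["self"]):
        for snap in page["Snapshots"]:
            if snap["StartTime"] < cutoff:
                print(f"Deleting {snap['SnapshotId']} ({snap['StartTime']:%Y-%m-%d})")
                if not dry_run:
                    ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])

if __name__ == "__main__":
    delete_stale_snapshots(dry_run=True)  # flip to False to actually delete
```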
Posted 6 days ago
0 years
0 Lacs
India
Remote
About Company
Our client is a trusted global innovator of IT and business services. We help clients transform through consulting, industry solutions, business process services, digital & IT modernization, and managed services. Our client enables them, as well as society, to move confidently into the digital future. We are committed to our clients’ long-term success and combine global reach with local client attention to serve them in over 50 countries around the globe.

Job Title: Python Developer with Azure & AKS
Location: Noida / Remote
Experience: 7+ years
Job Type: Contract to hire
Notice Period: Immediate joiner

Mandatory Skills:
- Hands-on experience as a Python developer working with Azure and AKS.
- Hands-on experience with Azure Kubernetes Service (AKS): deploying, managing, and troubleshooting applications on AKS.
- Strong knowledge of containerization using Docker and orchestration using Kubernetes with Python.
- Familiarity with Azure services such as Azure Blob Storage, Azure Functions, Azure Service Bus, and Azure Key Vault.
- Experience implementing CI/CD pipelines using Azure DevOps, GitHub Actions, or similar tools.
- Knowledge of infrastructure-as-code (IaC) tools like Terraform, Bicep, or ARM templates.
- Familiarity with monitoring and logging tools in Azure, e.g., Application Insights, Log Analytics, and Azure Monitor.
- Understanding of cloud security, networking, and resource management best practices in a production Azure environment.
- Experience working in DevOps-enabled teams following Agile and iterative development.

Responsibilities:
- Write clean, high-quality, high-performance, maintainable code.
- Develop and support software including applications, database integration, interfaces, and new functionality enhancements.
- Coordinate cross-functionally to ensure the project meets business objectives and compliance standards.
- Support testing and deployment of new products and features.
- Participate in code reviews.

Qualifications: Bachelor's degree in Computer Science (or related field).
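As a rough illustration of the AKS troubleshooting work described above, here is a minimal sketch using the official Kubernetes Python client. It assumes cluster credentials are already in your kubeconfig (e.g., fetched with `az aks get-credentials`); the namespace is a placeholder.

```python
"""Hypothetical health check: list pods in an AKS namespace that are not running."""
from kubernetes import client, config

def report_unhealthy_pods(namespace: str = "default") -> None:
    config.load_kube_config()  # reads the local kubeconfig
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.name}: {pod.status.phase}")

if __name__ == "__main__":
    report_unhealthy_pods()
```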
Posted 6 days ago
18.0 - 25.0 years
6 - 8 Lacs
Noida
On-site
Company Description
SBS is a global financial technology company that’s helping banks and the financial services industry to reimagine how to operate in an increasingly digital world. SBS is a trusted partner of more than 1,500 financial institutions and large-scale lenders in 80 countries worldwide, including Santander, Société Générale, KCB Bank, Kensington Mortgages, Mercedes-Benz, and Toyota FS. Its cloud platform offers clients a composable architecture to digitize operations, ranging from banking, lending, and compliance to payments and consumer and asset finance. With 3,400 employees in 50 offices, SBS is recognized as a Top 10 European Fintech company by IDC and as a leader in Omdia’s Universe: Digital Banking Platforms.

Job Description
What You’ll Do:
- Design and implement test automation strategy: develop and own a comprehensive test automation strategy covering all levels of testing (unit, component, integration, and end-to-end) across the platform’s microservices and user interfaces. Ensure that each layer of the tech stack has appropriate automated test coverage for fast, high-quality releases.
- Drive full-stack test coverage: proactively identify functional coverage gaps and under-tested areas, especially in the UI, API, and caching layers (e.g., Redis), and address them with improved automated tests. Continuously raise the bar on test effectiveness by expanding coverage and improving test scenarios for edge cases and failure conditions.
- Build and enhance automation frameworks: architect and implement robust test automation frameworks for different domains of testing as required. This includes improving our UI automation (using Cypress or similar), strengthening API testing frameworks (using K6 or similar), and establishing performance testing to simulate load and stress. You will ensure these frameworks are scalable, maintainable, and aligned with a modern JVM/Spring Boot and Angular tech stack.
- Select and integrate testing tools: evaluate and implement or enhance the right set of automation tools and libraries that best fit our stack (Java/Kotlin, Spring Boot backend, Angular frontend). If needed, introduce new tools or testing approaches (e.g., BDD, contract testing) to improve quality. Ensure that our choice of tools (testing frameworks, assertion libraries, reporting tools) maximizes efficiency and developer friendliness.
- Embed testing in CI/CD pipelines: integrate automated tests into our GitLab CI/CD pipelines as quality gates. Implement continuous testing practices so that every code commit triggers automated test suites (unit, API, UI, performance), providing rapid feedback on failures. You will lead the evolution of our continuous testing strategy within the CI/CD pipeline, ensuring that no code reaches production without passing the necessary checks. A minimal API-test sketch follows this posting.
- Manage test environments and data: oversee test environment provisioning and test data management. Use AWS cloud infrastructure and Infrastructure-as-Code (Terraform) to set up and tear down test environments on demand, automate test data creation/seeding, and ensure test environments mimic production for reliable results. Maintain data integrity and compliance (GDPR, PCI DSS, etc.) in test datasets, given the regulatory environment.
- Collaborate and champion quality: work closely with developers, DevOps engineers, product managers, and other stakeholders to instill an automation-first mindset. Through design reviews, code reviews, and regular sync-ups, ensure testing considerations are part of planning and development. Act as a quality evangelist, coaching teams on best practices and helping to troubleshoot testing challenges. Influence and improve the overall engineering quality culture, making sure that quality is a shared responsibility across the team.
- Ensure compliance and reliability: in a SaaS, cloud-native environment with rapid sprint cycles, ensure our test processes and frameworks account for the strict regulatory constraints and security requirements of the banking domain. Design test scenarios for regulatory compliance (e.g., PSD2, GDPR, PCI) and fail-safes for sensitive financial workflows, so that our platform remains compliant and reliable under all conditions.
- Monitor, report, and improve: define and track quality KPIs such as automated test coverage, test pass rates, defect leakage, and performance benchmarks. Regularly report on quality status to stakeholders. Use these insights to continually improve test strategies: optimize test execution time, enhance CI/CD feedback loops, and ensure that automation delivers tangible value in catching issues early.

Qualifications
Minimum Qualifications:
- Extensive QA and automation experience: Bachelor’s/Master’s degree in Computer Science or a related field (or equivalent experience). 18 to 25 years in software testing/QA, with at least a few years in a test automation architect or lead role for complex software products. You have a track record of designing automation solutions for large-scale, distributed systems.
- Hands-on automation skills: proven experience building and maintaining automated test frameworks for web applications and APIs. You are a hands-on coder with deep programming skills in Java or other JVM languages, comfortable scripting in JavaScript/TypeScript or Python when needed. You write clean, maintainable test code and are familiar with design patterns for test automation.
- Testing framework expertise: in-depth knowledge of modern testing tools and frameworks. You have worked with UI automation (e.g., Cypress, Selenium, or Playwright), API testing (e.g., K6, RestAssured, Postman/Newman, or similar), and performance testing tools (e.g., k6, JMeter, Gatling). You understand testing across different layers, including contract testing of microservices and database validation, and can even script tests around caching layers like Redis if required.
- CI/CD and DevOps know-how: solid experience integrating test automation into CI/CD pipelines. You are familiar with Git-based workflows and tools like GitLab CI (or Jenkins, Azure DevOps, etc.), and can write pipeline scripts/jobs to run tests, handle artifacts, and report results. Knowledge of Docker/Kubernetes for containerized test execution is a plus.
- Cloud and infrastructure skills: experience working in cloud environments (AWS) and using Terraform or other IaC tools to manage infrastructure. You understand how to set up test environments in the cloud, manage configurations (perhaps using Docker Compose or Kubernetes manifests), and utilize cloud services for testing (S3, databases, etc.).
- Quality mindset and soft skills: an automation-first mindset; you consistently look to automate repetitive testing tasks and reduce manual effort. Excellent analytical and problem-solving abilities to debug test failures and pinpoint issues across complex systems. Strong collaboration and communication skills to work with cross-functional teams and advocate for quality practices. You are comfortable leading discussions on testing strategy, providing constructive feedback, and influencing without authority when necessary.
- Attention to detail and accountability: a keen eye for detail in identifying edge cases, race conditions, and potential failure points that others might miss. A high sense of ownership and accountability for product quality; you take pride in catching issues early and ensuring the customer experience is flawless.

Preferred Qualifications:
- Domain expertise: experience in banking, fintech, or financial services, especially in core banking, payments, or digital lending. Understanding of banking workflows and regulations helps you design better test scenarios and compliance checks.
- Performance and security testing: advanced experience with performance testing (analyzing throughput, latency, bottlenecks) and exposure to security testing in financial applications. Familiarity with tools for security scanning or vulnerability testing in CI/CD is a plus.
- Leadership and certifications: prior experience leading a QA/automation team or mentoring other QA engineers. Relevant certifications (e.g., ISTQB Advanced Test Manager/Architect, Certified Agile Testing, AWS Cloud Practitioner) can be a plus, but proven skills matter more.
- Additional tools: exposure to monitoring/observability tools (e.g., Grafana, Kibana) to correlate test results with system metrics. Experience with contract testing (e.g., Pact) or service virtualization in complex integrations. Any experience using AI/ML tools for testing or predictive quality analytics is an extra bonus.

Additional Information
Secondary Location: Noida Campus
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
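To make the automated API test layer above concrete, here is a minimal sketch in Python using pytest and requests. The endpoint, URL, and payload shape are invented placeholders, not details of the SBS platform.

```python
"""Hypothetical API regression test: contract-style assertions on a JSON payload."""
import requests

BASE_URL = "https://api.example.test"  # placeholder service under test

def test_account_endpoint_returns_valid_payload():
    resp = requests.get(f"{BASE_URL}/v1/accounts/123", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    # Assert shape and types rather than exact values, as a contract test would.
    assert {"id", "currency", "balance"} <= body.keys()
    assert isinstance(body["balance"], (int, float))
```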
Posted 6 days ago
3.0 years
3 - 10 Lacs
Noida
On-site
Job Description
Job ID: SENIO015160
Employment Type: Regular
Work Style: On-site
Location: Noida, UP, India
Role: Senior Site Reliability Engineer

Senior Site Reliability Engineers at UKG are critical team members with a breadth of knowledge encompassing all aspects of service delivery. They develop software solutions to enhance, harden, and support our service delivery processes. This can include building and managing CI/CD deployment pipelines, automated testing, capacity planning, performance analysis, monitoring, alerting, chaos engineering, and auto-remediation. Site Reliability Engineers must be passionate about learning and evolving with current technology trends. They strive to innovate and are relentless in pursuing a flawless customer experience. They have an “automate everything” mindset, helping us bring value to our customers by deploying services with incredible speed, consistency, and availability.

Job Responsibilities:
- Engage in and improve the lifecycle of services from conception to EOL, including system design consulting and capacity planning.
- Define and implement standards and best practices related to system architecture, service delivery, metrics, and the automation of operational tasks.
- Support services, product, and engineering teams by providing common tooling and frameworks to deliver increased availability and improved incident response.
- Improve system performance, application delivery, and efficiency through automation, process refinement, postmortem reviews, and in-depth configuration analysis.
- Collaborate closely with engineering professionals within the organization to deliver reliable services.
- Increase operational efficiency, effectiveness, and quality of services by treating operational challenges as a software engineering problem (reduce toil).
- Guide junior team members and serve as a champion for Site Reliability Engineering.
- Actively participate in incident response, including on-call responsibilities.
- Partner with stakeholders to influence and help drive the best possible technical and business outcomes.

Required Qualifications:
- Engineering degree, a related technical discipline, or equivalent work experience.
- Experience coding in higher-level languages (e.g., Python, JavaScript, C++, or Java).
- Knowledge of cloud-based applications and containerization technologies.
- Demonstrated understanding of best practices in metric generation and collection, log aggregation pipelines, time-series databases, and distributed tracing (a minimal metrics sketch follows this posting).
- Working experience with industry standards like Terraform and Ansible.
- Demonstrable fundamentals in two of the following: computer science, cloud architecture, security, or network design.
- At least 3 years of hands-on experience working in engineering or cloud.
- Minimum 3 years' experience with public cloud platforms (e.g., GCP, AWS, Azure).
- Minimum 3 years' experience in configuration and maintenance of applications and/or systems infrastructure for a large-scale, customer-facing company.
- Experience with distributed system design and architecture.

Who We Are
Here at UKG, Our Purpose Is People. UKG combines the strength and innovation of Ultimate Software and Kronos, uniting two award-winning, employee-centered cultures.
Our U Krewers are an extraordinary group of talented, innovative, and collaborative individuals who care about more than just work. We strive to create a culture of belonging and an employee experience filled with meaningful recognition and best-in-class rewards and benefits. UKG has 14,000 employees around the globe and is known for its inclusive and supportive workplace culture. Ready to join the U Krew? ukg.com/careers
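As a sketch of the metric-generation work referenced in the qualifications, here is a minimal Python example using the prometheus_client library. The metric name and the queue being measured are assumptions for illustration.

```python
"""Hypothetical custom metric: expose a gauge for Prometheus to scrape."""
import random
import time

from prometheus_client import Gauge, start_http_server

QUEUE_DEPTH = Gauge("worker_queue_depth", "Jobs waiting in the worker queue")

def poll_queue_depth() -> int:
    # Stand-in for a real queue query; returns fake data for illustration.
    return random.randint(0, 50)

if __name__ == "__main__":
    start_http_server(9100)  # metrics served at http://localhost:9100/metrics
    while True:
        QUEUE_DEPTH.set(poll_queue_depth())
        time.sleep(15)
```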
Posted 6 days ago
12.0 years
0 Lacs
Noida
On-site
About Aeris:
For more than three decades, Aeris has been a trusted cellular IoT leader enabling the biggest IoT programs and opportunities across Automotive, Utilities and Energy, Fleet Management and Logistics, Medical Devices, and Manufacturing. Our IoT technology expertise serves a global ecosystem of 7,000 enterprise customers, 30 mobile network operator partners, and 80 million IoT devices across the world. Aeris powers today’s connected smart world with innovative technologies and borderless connectivity that simplify management, enhance security, optimize performance, and drive growth.

Built from the ground up for IoT and road-tested at scale, Aeris IoT Services are based on the broadest technology stack in the industry, spanning connectivity up to vertical solutions. As veterans of the industry, we know that implementing an IoT solution can be complex, and we pride ourselves on making it simpler. Our company is in an enviable spot: we’re profitable, and both our bottom line and our global reach are growing rapidly. We’re playing in an exploding market where technology evolves daily and new IoT solutions and platforms are being created at a fast pace.

A few things to know about us:
- We put our customers first. When making decisions, we always seek to do what is right for our customer first, our company second, our teams third, and our individual selves last.
- We do things differently. As a pioneer in a highly competitive industry that is poised to reshape every sector of the global economy, we cannot fall back on old models. Rather, we must chart our own path and strive to out-innovate, out-learn, out-maneuver, and out-pace the competition on the way.
- We walk the walk on diversity. We’re a brilliant and eclectic mix of ethnicities, religions, industry experiences, sexual orientations, generations, and more, and that’s by design. We see diverse perspectives as a core competitive advantage.
- Integrity is essential. We believe in doing things well, and doing them right. Integrity is a core value here: you’ll see it embodied in our staff, our management approach, and our growing social impact work (we have a VP devoted to it). You’ll also see it embodied in the way we manage people and our HR issues: we expect employees and managers to deal with issues directly, immediately, and with the utmost respect for each other and for the Company.
- We are owners. Strong managers enable and empower their teams to figure out how to solve problems. You will be no exception, and will have the ownership, accountability, and autonomy needed to be truly creative.

Job Title: Senior Oracle Database Administrator (DBA) – GCP
Location: Noida, India

We are seeking a highly skilled and experienced Senior Oracle DBA to manage and maintain our critical Oracle 12c, 18c, 19c, and 21c single-instance (with Data Guard) and RAC databases, hosted on Google Cloud Platform (GCP). The ideal candidate will possess deep expertise in Oracle database administration, including installation, configuration, patching, performance tuning, security, and backup/recovery strategies within a cloud environment, along with expertise in optimizing the underlying operating system and database parameters for maximum performance and stability.

Responsibilities:

Database Administration:
- Install, configure, and maintain Oracle 12c, 18c, 19c, and 21c single-instance (with Data Guard) and RAC databases on GCP Compute Engine.
- Implement and manage Oracle Data Guard for high availability and disaster recovery, including switchovers, failovers, and broker configuration.
- Perform database upgrades, patching, and migrations.
- Develop and implement backup and recovery strategies, including RMAN configuration and testing.
- Monitor database performance and proactively identify and resolve performance bottlenecks.
- Troubleshoot database issues and provide timely resolution.
- Implement and maintain database security measures, including user access control, auditing, and encryption.
- Automate routine database tasks using scripting languages (e.g., Shell, Python, PL/SQL).
- Create and maintain database documentation.

Database Parameter Tuning:
- In-depth knowledge of Oracle database initialization parameters and their impact on performance, with a particular focus on memory management parameters.
- Expertise in tuning Oracle memory structures (SGA, PGA) for optimal performance in a GCP environment, including:
  - Precisely sizing the SGA components (Buffer Cache, Shared Pool, Large Pool, Java Pool, Streams Pool) based on workload characteristics and available GCP Compute Engine memory resources.
  - Optimizing PGA allocation (PGA_AGGREGATE_TARGET, PGA_AGGREGATE_LIMIT) to prevent excessive swapping and ensure efficient SQL execution.
  - Understanding the interaction between SGA and PGA memory regions and how they are affected by GCP instance memory limits.
  - Tuning the RESULT_CACHE parameters for optimal query performance, considering the available memory and workload patterns.
- Proficiency in using Automatic Memory Management (AMM) and Automatic Shared Memory Management (ASMM) features, and knowing when manual tuning is required for optimal results.
- Knowledge of how GCP instance memory limits can impact Oracle's memory management and the appropriate adjustments to make.
- Experience analyzing AWR reports and identifying areas for database parameter optimization, with a strong emphasis on memory-related bottlenecks (e.g., high buffer busy waits, excessive direct path reads/writes); a small illustrative query sketch follows this posting.
- Proficiency in tuning SQL queries using tools like SQL Developer and Explain Plan, particularly identifying queries that consume excessive memory or exhibit inefficient memory access patterns.
- Knowledge of Oracle performance tuning methodologies and best practices, specifically as they apply to memory management in a cloud environment.
- Experience with database indexing strategies and index optimization, including the impact of indexes on memory utilization.
- Solid understanding of Oracle partitioning and its benefits for large databases, including how partitioning can affect memory usage and query performance.
- Ability to perform proactive performance tuning based on workload analysis and trending, with a focus on memory usage patterns and potential memory-related performance issues.
- Expertise in diagnosing and resolving memory leaks or excessive memory consumption within the Oracle database.
- Deep understanding of how shared memory segments are managed within the Linux OS on GCP Compute Engine and how to optimize them for Oracle.

Data Guard Expertise:
- Deep understanding of Oracle Data Guard architectures (Maximum Performance, Maximum Availability, Maximum Protection).
- Expertise in configuring and managing the Data Guard broker for automated switchovers and failovers.
- Experience troubleshooting Data Guard issues and ensuring data consistency.
- Knowledge of Data Guard best practices for performance and reliability.
- Proficiency in performing Data Guard role transitions (switchover, failover) with minimal downtime.
- Experience with Active Data Guard is a plus.

Operating System Tuning:
- Deep expertise in Linux operating systems (e.g., Oracle Linux, Red Hat, CentOS) and their interaction with Oracle databases.
- Performance tuning of the Linux operating system for optimal Oracle database performance, including:
  - Kernel parameter tuning (e.g., shared memory settings, semaphores, file descriptor limits).
  - Memory management optimization (e.g., HugePages configuration).
  - I/O subsystem tuning (e.g., disk scheduler selection, filesystem optimization).
  - Network configuration optimization (e.g., TCP/IP parameters).
- Monitoring and analysis of OS performance metrics using tools like vmstat, iostat, top, and sar.
- Identifying and resolving OS-level resource contention (CPU, memory, I/O).

Good to Have:

GCP Environment Management:
- Provision and manage GCP Compute Engine instances for Oracle databases, including selecting appropriate instance types and storage configurations.
- Configure and manage GCP networking components (VPCs, subnets, firewalls) for secure database access.
- Utilize GCP Cloud Monitoring and Logging for database monitoring and troubleshooting.
- Implement and manage GCP Cloud Storage for database backups.
- Experience with Infrastructure as Code (IaC) tools like Terraform or Cloud Deployment Manager to automate GCP resource provisioning.
- Cost optimization of Oracle database infrastructure on GCP.

Other Products and Platforms:
- Experience with other cloud platforms (AWS, Azure).
- Experience with NoSQL databases.
- Experience with Agile development methodologies.
- Experience with DevOps practices and tools (e.g., Ansible, Chef, Puppet).
- Experience with GoldenGate.

Qualifications:
- Bachelor's degree in Computer Science or a related field.
- Minimum 12+ years of experience as an Oracle DBA.
- Proven experience managing Oracle 12c, 18c, 19c, and 21c single-instance (with Data Guard) and RAC databases in a production environment, with strong Data Guard expertise.
- Extensive experience with Oracle database performance tuning, including OS-level and database parameter optimization.
- Hands-on experience with Oracle databases hosted on Google Cloud Platform (GCP).
- Strong understanding of Linux operating systems.
- Excellent troubleshooting and problem-solving skills.
- Strong communication and collaboration skills.
- Oracle Certified Professional (OCP) certification is highly preferred.
- GCP certifications (e.g., Cloud Architect, Cloud Engineer) are a plus.

Aeris may conduct background checks to verify the information provided in your application and assess your suitability for the role. The scope and type of checks will comply with the applicable laws and regulations of the country where the position is based. Additional detail will be provided via the formal application process.

Aeris walks the walk on diversity. We’re a brilliant mix of varying ethnicities, religions, cultures, sexual orientations, gender identities, ages and professional/personal/military experiences, and that’s by design. Diverse perspectives are essential to our culture, innovative process and competitive edge. Aeris is proud to be an equal opportunity employer.
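As a sketch of the memory-tuning workflow referenced above, here is a minimal Python example (assuming the python-oracledb driver and suitable privileges) that pulls the SGA and PGA figures a DBA would review before adjusting parameters. Connection details are placeholders.

```python
"""Hypothetical check of Oracle memory usage via v$ views."""
import oracledb

DSN = "dbhost.example.internal/ORCLPDB1"  # placeholder host/service

def show_memory_usage() -> None:
    with oracledb.connect(user="system", password="change_me", dsn=DSN) as conn:
        cur = conn.cursor()
        cur.execute("SELECT name, value / 1024 / 1024 FROM v$sga")
        for name, mb in cur:
            print(f"SGA {name}: {mb:.0f} MB")
        cur.execute(
            "SELECT value / 1024 / 1024 FROM v$pgastat "
            "WHERE name = 'total PGA allocated'"
        )
        print(f"PGA allocated: {cur.fetchone()[0]:.0f} MB")

if __name__ == "__main__":
    show_memory_usage()
```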
Posted 6 days ago
5.0 - 7.0 years
0 Lacs
Noida
On-site
Job Information
Date Opened: 29/07/2025
Job Type: Full time
Industry: Technology
Work Experience: 5-7 years
City: Noida
Province: Uttar Pradesh
Country: India
Postal Code: 201303

Job Description
Key Responsibilities:
- Design, implement, and manage scalable, secure, and reliable cloud infrastructure on Azure.
- Perform regular system monitoring, verify the integrity and availability of all cloud-based resources, and troubleshoot issues as needed.
- Automate and streamline operations and processes using DevOps tools and methodologies, including Jenkins.
- Collaborate with development teams to ensure seamless integration and continuous delivery.
- Manage and optimize performance, utilization, and costs in the Azure cloud environment.
- Conduct root cause analysis for incidents; identify and implement corrective actions to prevent recurrence.
- Ensure compliance with security policies, standards, and best practices.

Requirements
Required Skills and Qualifications:
- Extensive experience with Azure cloud services, including compute, storage, networking, and security.
- Proficiency in scripting and automation using tools like PowerShell, shell scripts, cron jobs, Azure CLI, ARM templates, Terraform, and Ansible.
- Strong understanding of DevOps practices, including CI/CD pipelines with Jenkins, version control (e.g., Git), and configuration management.
- Experience with monitoring and logging tools such as Graylog, Nagios, and Azure Monitor.
- Excellent troubleshooting skills with a systematic approach to problem-solving.
- Hands-on experience with Linux systems.
- Familiarity with network analysis tools like Wireshark.
- Knowledge of security tools such as Vault.
- Python proficiency is a plus.
- Ability to work in a fast-paced, collaborative environment and manage multiple priorities.
- Strong communication and interpersonal skills.

Educational Background: Bachelor’s degree in Computer Science, Information Technology, or a related field.

Preferred Qualifications:
- Azure certifications (e.g., Azure Administrator, Azure DevOps Engineer).
- Experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of other cloud platforms like AWS or Google Cloud is a plus.

Shift Details: Open to rotational shifts with 12x7 support; 5 working days a week, with 9-hour shifts.
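As a small illustration of the Azure automation this role involves, here is a sketch using the azure-identity and azure-mgmt-compute SDKs to enumerate VMs. The subscription ID is a placeholder, and authentication is assumed to be configured (environment variables, managed identity, or `az login`).

```python
"""Hypothetical inventory check: list all VMs in an Azure subscription."""
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

def list_vms() -> None:
    credential = DefaultAzureCredential()
    compute = ComputeManagementClient(credential, SUBSCRIPTION_ID)
    for vm in compute.virtual_machines.list_all():
        print(vm.name, vm.location)

if __name__ == "__main__":
    list_vms()
```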
Posted 6 days ago
5.0 years
4 - 9 Lacs
Noida
On-site
Qualifications:
- Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field.
- 5-7 years of experience in a DevOps role.
- Strong understanding of the SDLC and experience working on fully Agile teams.
- Proven coding and scripting experience: DevOps tooling, Ant/Maven, Groovy, Terraform, shell scripting, and Helm charts.
- Working experience with IaC tools like Terraform, CloudFormation, or ARM templates.
- Strong experience with cloud computing platforms (e.g., Oracle Cloud Infrastructure (OCI), AWS, Azure, Google Cloud).
- Experience with containerization technologies (e.g., Docker, Kubernetes/EKS/AKS).
- Experience with continuous integration and delivery tools (e.g., Jenkins, GitLab CI/CD).
- Kubernetes: experience managing Kubernetes clusters and using kubectl for Helm chart deployments, ingress services, and troubleshooting pods.
- OS services: basic knowledge of managing, configuring, and troubleshooting Linux operating system issues, storage (block and object), and networking (VPCs, proxies, and CDNs).
- Monitoring and instrumentation: implement metrics in Prometheus, Grafana, Elastic, log management and related systems, and Slack/PagerDuty/Sentry integrations.
- Strong know-how of modern distributed version control systems (e.g., Git, GitHub, GitLab).
- Strong troubleshooting and problem-solving skills, and the ability to work well under pressure.
- Excellent communication and collaboration skills, and the ability to lead and mentor junior team members.

Career Level: IC3

Responsibilities:
- Design, implement, and maintain automated build, deployment, and testing systems.
- Take application code and third-party products and build fully automated pipelines for Java applications to build, test, and deploy complex systems for delivery in the cloud.
- Containerize applications, i.e., create Docker containers and push them to an artifact repository for deployment on containerization solutions with OKE (Oracle Container Engine for Kubernetes) using Helm charts.
- Lead efforts to optimize the build and deployment processes for high-volume, high-availability systems.
- Monitor production systems to ensure high availability and performance, and proactively identify and resolve issues.
- Support and troubleshoot cloud deployment and environment issues.
- Create and maintain CI/CD pipelines using tools such as Jenkins and GitLab CI/CD.
- Continuously improve the scalability and security of our systems, and lead efforts to implement best practices.
- Participate in the design and implementation of new features and applications, and provide guidance on best practices for deployment and operations.
- Work with the security team to ensure compliance with industry and company standards, and implement security measures to protect against threats.
- Keep up to date with emerging trends and technologies in DevOps, and make recommendations for improvement.
- Lead and mentor junior DevOps engineers, and collaborate with cross-functional teams to ensure successful delivery of projects.
- Analyze, design, develop, troubleshoot, and debug software programs for commercial or end-user applications; write code, complete programming, and perform testing and debugging of applications.

As a member of the software engineering division, you will analyze and integrate external customer specifications; specify, design, and implement modest changes to existing software architecture; build new products and development tools; and build and execute unit tests and unit test plans. You will review integration and regression test plans created by QA and communicate with QA and porting engineering to discuss major changes to functionality. Work is non-routine and very complex, involving the application of advanced technical/business skills in the area of specialization. You will be a leading contributor individually and as a team member, providing direction and mentoring to others. BS or MS degree or equivalent experience relevant to the functional area; 6+ years of software engineering or related experience.
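As a sketch of the OCI-side automation the role implies, here is a minimal Python example using the oci SDK to list OKE clusters. It assumes a configured ~/.oci/config profile; the compartment OCID is a placeholder.

```python
"""Hypothetical OKE inventory: list clusters in one compartment."""
import oci

COMPARTMENT_ID = "ocid1.compartment.oc1..example"  # placeholder OCID

def list_oke_clusters() -> None:
    config = oci.config.from_file()  # reads ~/.oci/config
    ce = oci.container_engine.ContainerEngineClient(config)
    for cluster in ce.list_clusters(compartment_id=COMPARTMENT_ID).data:
        print(cluster.name, cluster.lifecycle_state)

if __name__ == "__main__":
    list_oke_clusters()
```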
Posted 6 days ago
4.0 years
9 - 20 Lacs
Noida
Remote
Lead Assistant Manager, EXL/LAM/1432691
Global Technology, Noida
Posted On: 29 Jul 2025 | End Date: 12 Sep 2025 | Required Experience: 4-9 Years

Basic Section
Number of Positions: 2
Band: B2 (Lead Assistant Manager)
Cost Code: G070102
Campus/Non Campus: Non Campus
Employment Type: Permanent
Requisition Type: New
Max CTC: 900000 - 2000000
Complexity Level: Not Applicable
Work Type: Hybrid (working partly from home and partly from office)

Organisational
Group: Enabling
Sub Group: Global Technology
Organization: Global Technology
LOB: Global Technology
SBU: Technology Operations
Country: India
City: Noida
Center: Noida - Centre 59

Skills: Full Stack Developer, TypeScript, JavaScript, React.js, Node.js, Express.js, Azure Cloud
Minimum Qualification: B.Tech / BCA
Certification: No data available

Job Description
Job Title: Full Stack Developer
Location: Noida (Remote)
Experience: 4-10 years
Department: Global Technology
Reports to: Senior Manager

Role Overview
We are looking for a talented Full Stack Developer with deep expertise in backend and frontend development using modern JavaScript and Python stacks. You will architect, build, and deliver scalable web applications and APIs, collaborate closely with cross-functional teams, and drive engineering best practices in a cloud-native environment.

Key Responsibilities:
- Design, develop, test, and deploy robust backend services and APIs using Node.js, Express.js, TypeScript, and Python.
- Build intuitive, responsive, and performant frontends using React.js and Next.js.
- Implement and maintain data storage solutions (SQL/NoSQL) and integrate third-party services.
- Ensure application security, performance, and scalability using best engineering practices.
- Work with cloud platforms (AWS, Azure, GCP) for application deployment, monitoring, and scaling.
- Collaborate with Product, Design, and QA to deliver seamless user experiences.
- Write clean, maintainable, and well-documented code following industry standards.
- Participate in code reviews, architecture discussions, and process improvements.
- Troubleshoot, debug, and optimize application performance.
- Stay current with emerging technologies, trends, and best practices in full stack and cloud development.

Required Skills & Experience:
- 4-10 years of professional experience as a Full Stack Developer or similar role.
- Strong proficiency in Node.js, Express.js, TypeScript, JavaScript (ES6+), and Python.
- Hands-on experience with React.js and Next.js for frontend development.
- Experience building RESTful APIs, microservices, and serverless architectures.
- Good understanding of SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) databases.
- Solid knowledge of cloud platforms (AWS, Azure, or GCP) for deploying and managing web applications.
- Familiarity with CI/CD, containerization (Docker), and infrastructure as code (Terraform, CloudFormation) is a plus.
- Experience with version control systems (Git) and modern development workflows.
- Strong problem-solving, debugging, and analytical skills.
- Excellent communication and teamwork abilities.

Preferred Qualifications:
- Experience with GraphQL, WebSockets, or real-time applications.
- Familiarity with DevOps practices and site reliability engineering.
- Exposure to testing frameworks (Jest, Mocha, Cypress) and automation tools.
- Previous work in Agile/Scrum teams.

Workflow Type: Digital Solution Center
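On the Python side of this stack, a backend endpoint might look like the following minimal sketch. FastAPI is an assumption (the posting names Python but no specific framework), and the in-memory "database" is a stand-in.

```python
"""Hypothetical REST endpoint with FastAPI and pydantic validation."""
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
_DB = {1: {"id": 1, "name": "Ada"}}  # stand-in for a real database

class User(BaseModel):
    id: int
    name: str

@app.get("/users/{user_id}", response_model=User)
def get_user(user_id: int) -> User:
    if user_id not in _DB:
        raise HTTPException(status_code=404, detail="User not found")
    return User(**_DB[user_id])

# Run locally with: uvicorn main:app --reload
```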
Posted 6 days ago
6.0 years
0 Lacs
Noida
On-site
Aristocrat operates a highly skilled team of DevOps engineers who regularly support various teams with CI/CD processes and infrastructure setup. They collaborate closely with development and studio teams to create and maintain CI/CD pipelines, automate manual support tasks, and build and manage the environments needed for application deployment and operations. As a DevOps team member, you will work closely with the development teams to produce CI/CD pipelines, help with code deployments, and optimize the flow of software from development to production.

What you will do:
- Take care of the GCP, AWS, and Azure cloud infrastructure (provisioning, alerting, monitoring, etc.).
- Create private networks and establish networking; must have handled firewalls and VPN tunnels.
- Design and document processes for versioning, deployment, and the migration of code between environments.
- Excellent knowledge of Docker and Kubernetes.
- Excellent knowledge of Terraform and Ansible.
- Good knowledge of a scripting language: Python/Shell/Bash.
- Good knowledge of CI/CD, including GitOps.
- Good knowledge of Jenkins pipelines/Groovy and Azure Pipelines (a small automation sketch follows this posting).
- Strong experience with Linux OS.
- Experience working on production 24x7 support (L2/L3).
- Experience with server, storage, and network operations.
- Knowledge of monitoring tools like Grafana, Prometheus, and Datadog.
- Knowledge of logging tools: Coralogix, ELK, Splunk.
- Intermediate experience with VMware.
- Experience with JIRA/Confluence or other defect tracking/wiki systems.
- Good experience with Istio or a service mesh (good to have).
- Ability to work with a geographically dispersed team.
- Able to grasp functional aspects well (quickly and with minimal guidance).

What We're Looking For:
- B.Tech. / B.E. / MCA in Computer Science with 6+ years of experience.
- Strong analytical and creative problem-solving skills.
- Able to challenge the status quo and constantly suggest improvements.
- Demonstrates an extremely high level of accuracy and attention to detail.
- Strong communication skills and the ability to work with a team.
- Ability to drive discussions toward conclusions.
- Articulate; able to express ideas and issues without inhibitions.

Why Aristocrat?
Aristocrat is a world leader in gaming content and technology, and a top-tier publisher of free-to-play mobile games. We deliver great performance for our B2B customers and bring joy to the lives of the millions of people who love to play our casino and mobile games. And while we focus on fun, we never forget our responsibilities. We strive to lead the way in responsible gameplay, and to lift the bar in company governance, employee wellbeing and sustainability. We're a diverse business united by shared values and an inspiring mission to bring joy to life through the power of play. We aim to create an environment where individual differences are valued, and all employees have the opportunity to realize their potential. We welcome and encourage applications from all people regardless of age, gender, race, ethnicity, cultural background, disability status or LGBTQ+ identity. EEO M/F/D/V

World Leader in Gaming Entertainment. Robust benefits package. Global career opportunities.

Our Values: All about the Player. Talent Unleashed. Collective Brilliance. Good Business Good Citizen.

Travel Expectations: None

Additional Information
Depending on the nature of your role, you may be required to register with the Nevada Gaming Control Board (NGCB) and/or other gaming jurisdictions in which we operate.
At this time, we are unable to sponsor work visas for this position. Candidates must be authorized to work in the job posting location for this position on a full-time basis without the need for current or future visa sponsorship.
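As a sketch of the pipeline glue work mentioned in the Jenkins bullet above, here is a minimal Python example using the python-jenkins library. The server URL, credentials, job name, and parameter are all placeholders.

```python
"""Hypothetical automation: trigger a parameterized Jenkins job from a script."""
import jenkins

server = jenkins.Jenkins(
    "https://jenkins.example.internal",  # placeholder URL
    username="automation-bot",
    password="api-token",  # placeholder credential
)

def deploy(branch: str) -> None:
    # Queue the (hypothetical) deploy job with a branch parameter.
    server.build_job("game-service-deploy", parameters={"BRANCH": branch})
    print(f"Queued deploy for {branch}")

if __name__ == "__main__":
    deploy("main")
```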
Posted 6 days ago
5.0 years
0 Lacs
West Bengal
On-site
Job Information
Date Opened: 30/07/2025
Job Type: Full time
Industry: IT Services
Work Experience: 5+ Years
City: Kolkata
Province: West Bengal
Country: India
Postal Code: 700091

About Us
We are a fast-growing technology company specializing in current and emerging internet, cloud, and mobile technologies.

Job Description
CodelogicX is a forward-thinking tech company dedicated to pushing the boundaries of innovation and delivering cutting-edge solutions. We are seeking a Senior DevOps Engineer with at least 5 years of hands-on experience in building, managing, and optimizing scalable infrastructure and CI/CD pipelines. The ideal candidate will play a crucial role in automating deployment workflows, securing cloud environments, and managing container orchestration platforms. You will leverage your expertise in AWS, Kubernetes, ArgoCD, and CI/CD to streamline our development processes, ensure the reliability and scalability of our systems, and drive the adoption of best practices across the team.

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines using GitHub Actions and Bitbucket Pipelines.
- Develop and manage Infrastructure as Code (IaC) using Terraform for AWS-based infrastructure.
- Set up and administer SFTP servers on cloud-based VMs using chroot configurations, and automate file transfers to S3-backed Glacier (see the sketch after this posting).
- Manage SNS for alerting and notification integration.
- Ensure cost optimization of AWS services through billing reviews and usage audits.
- Implement and maintain secure secrets management using AWS KMS, Parameter Store, and Secrets Manager.
- Configure, deploy, and maintain a wide range of AWS services, including but not limited to:
  - Compute: provision and manage compute resources using EC2, EKS, AWS Lambda, and EventBridge for compute-driven, serverless, and event-driven architectures.
  - Storage and content delivery: manage data storage and archival solutions using S3 and Glacier, and content delivery through CloudFront.
  - Networking and connectivity: design and manage secure network architectures with VPCs, load balancers, security groups, VPNs, and Route 53 for DNS routing and failover; ensure proper functioning of network services like TCP/IP and reverse proxies (e.g., NGINX).
  - Monitoring and observability: implement monitoring, logging, and tracing solutions using CloudWatch, Prometheus, Grafana, ArgoCD, and OpenTelemetry to ensure system health and performance visibility.
  - Databases: deploy and manage relational databases via RDS for MySQL, PostgreSQL, Aurora, and healthcare-specific FHIR database configurations.
  - Security and compliance: enforce security best practices using IAM (roles, policies), AWS WAF, Amazon Inspector, GuardDuty, Security Hub, and Trusted Advisor to monitor, detect, and mitigate risks.
  - GitOps: apply excellent knowledge of GitOps practices, ensuring all infrastructure and application configuration changes are tracked and versioned through Git commits.
- Architect and manage Kubernetes environments (EKS), implementing Helm charts, ingress controllers, autoscaling (HPA/VPA), and service meshes (Istio); troubleshoot advanced issues related to pods, services, DNS, and kubelets.
- Apply best practices in Git workflows (trunk-based, feature branching) in both monorepo and multi-repo environments.
- Maintain, troubleshoot, and optimize Linux-based systems (Ubuntu, CentOS, Amazon Linux).
- Support the engineering and compliance teams by addressing requirements for HIPAA, GDPR, ISO 27001, and SOC 2, and ensuring infrastructure readiness.
- Perform rollback and hotfix procedures with minimal downtime.
- Collaborate with developers to define release and deployment processes.
- Manage and standardize build environments across dev, staging, and production.
- Manage release and deployment processes across dev, staging, and production.
- Work cross-functionally with development and QA teams.
- Lead incident postmortems and drive continuous improvement.
- Perform root cause analysis and implement corrective/preventive actions for system incidents.
- Set up automated backups/snapshots, disaster recovery plans, and incident response strategies.
- Ensure on-time patching.
- Mentor junior DevOps engineers.

Requirements
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
- 5+ years of proven DevOps engineering experience in cloud-based environments.
- Advanced knowledge of AWS, Terraform, CI/CD tools, and Kubernetes (EKS).
- Strong scripting and automation mindset.
- Solid experience with Linux system administration and networking.
- Excellent communication and documentation skills.
- Ability to collaborate across teams and lead DevOps initiatives independently.

Preferred Qualifications:
- Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
- Experience with GitHub Actions is a plus.
- Certifications in AWS (e.g., AWS DevOps Engineer, AWS SysOps Administrator) or Kubernetes (CKA/CKAD).
- Experience working in regulated environments (e.g., healthcare or fintech).
- Exposure to container security tools and cloud compliance scanners.

Experience: 5-10 Years
Working Mode: Hybrid
Job Type: Full-Time
Location: Kolkata

Benefits:
- Health insurance
- Hybrid working mode
- Provident Fund
- Parental leave
- Yearly bonus
- Gratuity
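As a sketch of the S3-to-Glacier archival flow called out in the responsibilities, here is a minimal boto3 example that attaches a lifecycle rule to a bucket. The bucket name, prefix, and 30-day window are assumptions.

```python
"""Hypothetical lifecycle rule: transition uploaded files to Glacier."""
import boto3

def archive_sftp_uploads(bucket: str = "sftp-dropzone-example") -> None:
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-uploads",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "uploads/"},
                    # Move objects to Glacier 30 days after upload.
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )

if __name__ == "__main__":
    archive_sftp_uploads()
```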
Posted 6 days ago
2.0 years
0 Lacs
Andhra Pradesh
On-site
We are seeking an experienced and innovative Generative AI Developer to join our AWAC team. In this role, you will lead the design and development of GenAI and agentic AI applications using state-of-the-art LLMs and AWS-native services. You will work on both R&D-focused proofs of concept and production-grade implementations, collaborating with cross-functional teams to bring intelligent, scalable solutions to life.

Key Responsibilities:
- Design, develop, and deploy Generative AI and agentic AI applications using LLMs such as Claude, Cohere, Titan, and others.
- Lead the development of proof-of-concept (PoC) solutions to explore new use cases and validate AI-driven innovations.
- Architect and implement retrieval-augmented generation (RAG) pipelines using LangChain and vector databases like OpenSearch (see the sketch after this posting).
- Integrate with AWS services including the Bedrock API, SageMaker, SageMaker JumpStart, Lambda, EKS/ECS, Amazon Connect, and Amazon Q.
- Apply few-shot, one-shot, and zero-shot learning techniques to fine-tune and prompt LLMs effectively.
- Collaborate with data scientists, ML engineers, and business stakeholders to translate complex requirements into scalable AI solutions.
- Implement CI/CD pipelines and infrastructure as code using Terraform, and follow DevOps best practices.
- Optimize performance, cost, and reliability of AI applications in production environments.
- Document architecture, workflows, and best practices to support knowledge sharing and onboarding.

Required Skills & Technologies:
- Experience in Python development, with at least 2 years in AI/ML or GenAI projects.
- Strong hands-on experience with LLMs and Generative AI frameworks.
- Proficiency in LangChain, vector DBs (e.g., OpenSearch), and prompt engineering.
- Deep understanding of the AWS AI/ML ecosystem: Bedrock, SageMaker, Lambda, EKS/ECS.
- Experience with serverless architectures, containerization, and cloud-native development.
- Familiarity with DevOps tools: Git, CI/CD, Terraform.
- Strong debugging, performance tuning, and problem-solving skills.

Preferred Qualifications:
- Experience with Amazon Q, Amazon Connect, or Amazon Titan.
- Familiarity with Claude, Cohere, or other foundation models.
- Bachelor's or Master's degree in Computer Science, AI/ML, or a related field.
- Experience building agentic workflows and multi-agent orchestration is a plus.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us.

Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
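As a sketch of the Bedrock integration step in a RAG pipeline like the one described above, here is a minimal boto3 example. The model ID, region, and prompt format are assumptions (Claude's Bedrock messages schema), and the retrieved context is hard-coded where a vector search would supply it.

```python
"""Hypothetical RAG generation step: one Claude call on Amazon Bedrock."""
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def answer_with_context(question: str, context: str) -> str:
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [
            {
                "role": "user",
                "content": f"Use this context to answer.\n\n{context}\n\nQ: {question}",
            }
        ],
    }
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
        body=json.dumps(body),
    )
    return json.loads(resp["body"].read())["content"][0]["text"]

if __name__ == "__main__":
    print(answer_with_context("What is AKS?", "AKS is Azure Kubernetes Service."))
```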
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Senior – Senior Data Scientist

Role Overview:
We are seeking a highly skilled and experienced Senior Data Scientist with 3-7 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, optimization techniques, and AI solution architecture. In this role, you will play a key part in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role.

Responsibilities:
- Contribute to the design and implementation of state-of-the-art AI solutions.
- Assist in the development and implementation of AI models and systems, leveraging techniques such as Language Models (LLMs) and generative AI.
- Collaborate with stakeholders to identify business opportunities and define AI project goals.
- Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges.
- Utilize generative AI techniques, such as LLMs, to develop innovative solutions for enterprise industry use cases.
- Integrate with relevant APIs and libraries, such as Azure OpenAI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities.
- Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment.
- Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs.
- Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs (see the sketch after this posting).
- Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly.
- Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency.
- Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases.
- Ensure compliance with data privacy, security, and ethical considerations in AI applications.
- Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus.
- Minimum 3-7 years of experience in Data Science and Machine Learning.
- In-depth knowledge of machine learning, deep learning, and generative AI techniques.
- Proficiency in programming languages such as Python and R, and frameworks like TensorFlow or PyTorch.
- Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models.
- Familiarity with computer vision techniques for image recognition, object detection, or image generation.
- Experience with cloud platforms such as Azure, AWS, or GCP, and deploying AI solutions in a cloud environment.
- Expertise in data engineering, including data curation, cleaning, and preprocessing.
- Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems.
- Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models.
- Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions.
- Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels.
- Understanding of data privacy, security, and ethical considerations in AI applications.
- Track record of driving innovation and staying updated with the latest AI research and advancements.

Good to Have Skills:
- Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models and systems.
- Utilize optimization tools and techniques, including MIP (Mixed Integer Programming).
- Drive DevOps and MLOps practices, covering continuous integration, deployment, and monitoring of AI models.
- Implement CI/CD pipelines for streamlined model deployment and scaling processes.
- Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines.
- Apply infrastructure-as-code (IaC) principles, employing tools like Terraform or CloudFormation.
- Implement monitoring and logging tools to ensure AI model performance and reliability.
- Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
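As a sketch of the similarity-search responsibility flagged above, here is a minimal NumPy example of brute-force cosine retrieval, the operation a vector database performs at scale. The embeddings are random toy data rather than a real model's output.

```python
"""Hypothetical similarity search: top-k cosine matches over toy embeddings."""
import numpy as np

def top_k(query: np.ndarray, corpus: np.ndarray, k: int = 3) -> np.ndarray:
    # Normalize rows so dot products equal cosine similarities.
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q
    return np.argsort(scores)[::-1][:k]  # indices of the k closest documents

rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 384))  # 100 fake 384-dimension document embeddings
print(top_k(rng.normal(size=384), docs))
```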
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Senior Data Scientist

Role Overview:
We are seeking a highly skilled Senior Data Scientist with 3-7 years of experience in Data Science and Machine Learning, preferably including NLP, Generative AI, LLMs, MLOps, optimization techniques, and AI solution architecture. You will play a key part in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate has a deep understanding of AI technologies and experience designing and implementing cutting-edge AI models and systems. Expertise in data engineering, DevOps, and MLOps practices will also be valuable in this role.

Responsibilities:
- Contribute to the design and implementation of state-of-the-art AI solutions.
- Assist in the development and implementation of AI models and systems, leveraging techniques such as Large Language Models (LLMs) and generative AI.
- Collaborate with stakeholders to identify business opportunities and define AI project goals.
- Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges.
- Utilize generative AI techniques, such as LLMs, to develop innovative solutions for enterprise industry use cases.
- Integrate with relevant APIs and libraries, such as Azure OpenAI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities.
- Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment.
- Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs.
- Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs.
- Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly.
- Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency.
- Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases.
- Ensure compliance with data privacy, security, and ethical considerations in AI applications.
- Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus.
- 3-7 years of experience in Data Science and Machine Learning.
- In-depth knowledge of machine learning, deep learning, and generative AI techniques.
- Proficiency in programming languages such as Python or R, and frameworks like TensorFlow or PyTorch.
- Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models.
- Familiarity with computer vision techniques for image recognition, object detection, or image generation.
- Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment.
- Expertise in data engineering, including data curation, cleaning, and preprocessing.
- Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems.
- Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models.
- Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions.
- Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels.
- Understanding of data privacy, security, and ethical considerations in AI applications.
- Track record of driving innovation and staying updated with the latest AI research and advancements.

Good to Have Skills:
- Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models and systems.
- Utilize optimization tools and techniques, including MIP (Mixed Integer Programming).
- Drive DevOps and MLOps practices, covering continuous integration, deployment, and monitoring of AI models.
- Implement CI/CD pipelines for streamlined model deployment and scaling processes.
- Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines.
- Apply infrastructure-as-code (IaC) principles, employing tools like Terraform or CloudFormation.
- Implement monitoring and logging tools to ensure AI model performance and reliability.
- Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 6 days ago
2.0 years
0 Lacs
Andhra Pradesh, India
On-site
We are seeking an experienced and innovative Generative AI Developer to join our AWAC team. In this role, you will lead the design and development of GenAI and agentic AI applications using state-of-the-art LLMs and AWS-native services. You will work on both R&D-focused proofs of concept and production-grade implementations, collaborating with cross-functional teams to bring intelligent, scalable solutions to life.

Key Responsibilities:
- Design, develop, and deploy Generative AI and agentic AI applications using LLMs such as Claude, Cohere, Titan, and others.
- Lead the development of proof-of-concept (PoC) solutions to explore new use cases and validate AI-driven innovations.
- Architect and implement retrieval-augmented generation (RAG) pipelines using LangChain and vector databases such as OpenSearch.
- Integrate with AWS services including the Bedrock API, SageMaker, SageMaker JumpStart, Lambda, EKS/ECS, Amazon Connect, and Amazon Q (a Bedrock invocation sketch follows this posting).
- Apply few-shot, one-shot, and zero-shot learning techniques to fine-tune and prompt LLMs effectively.
- Collaborate with data scientists, ML engineers, and business stakeholders to translate complex requirements into scalable AI solutions.
- Implement CI/CD pipelines and infrastructure as code using Terraform, following DevOps best practices.
- Optimize performance, cost, and reliability of AI applications in production environments.
- Document architecture, workflows, and best practices to support knowledge sharing and onboarding.

Required Skills & Technologies:
- Experience in Python development, with at least 2 years in AI/ML or GenAI projects.
- Strong hands-on experience with LLMs and Generative AI frameworks.
- Proficiency in LangChain, vector DBs (e.g., OpenSearch), and prompt engineering.
- Deep understanding of the AWS AI/ML ecosystem: Bedrock, SageMaker, Lambda, EKS/ECS.
- Experience with serverless architectures, containerization, and cloud-native development.
- Familiarity with DevOps tools: Git, CI/CD, Terraform.
- Strong debugging, performance-tuning, and problem-solving skills.

Preferred Qualifications:
- Experience with Amazon Q, Amazon Connect, or Amazon Titan.
- Familiarity with Claude, Cohere, or other foundation models.
- Bachelor's or Master's degree in Computer Science, AI/ML, or a related field.
- Experience building agentic workflows and multi-agent orchestration is a plus.
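As an illustration of the Bedrock integration this role describes, below is a minimal, hedged sketch of invoking a Claude model through boto3. The region, model ID, and prompt body are placeholder assumptions; check which models are enabled in your account and the current Bedrock request schema before relying on this.

```python
"""Minimal sketch: calling Claude on Amazon Bedrock via boto3 (assumptions noted)."""
import json
import boto3

# Bedrock runtime client; the region is assumed for illustration.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",  # version string assumed
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Summarize our Q3 support tickets."}
    ],
})

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example/placeholder model ID
    body=body,
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```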
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – Senior – Senior Data Scientist Role Overview: We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 3 - 7 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, Optimization techniques, and AI solution Architecture. In this role, you will play a key role in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role. Responsibilities: Your technical responsibilities: Contribute to the design and implementation of state-of-the-art AI solutions. Assist in the development and implementation of AI models and systems, leveraging techniques such as Language Models (LLMs) and generative AI. Collaborate with stakeholders to identify business opportunities and define AI project goals. Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges. Utilize generative AI techniques, such as LLMs, to develop innovative solutions for enterprise industry use cases. Integrate with relevant APIs and libraries, such as Azure Open AI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities. Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment. Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs. Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs. Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly. Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency. Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases. Ensure compliance with data privacy, security, and ethical considerations in AI applications. Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications. Requirements: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus. Minimum 3-7 years of experience in Data Science and Machine Learning. In-depth knowledge of machine learning, deep learning, and generative AI techniques. Proficiency in programming languages such as Python, R, and frameworks like TensorFlow or PyTorch. Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models. 
Familiarity with computer vision techniques for image recognition, object detection, or image generation. Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment. Expertise in data engineering, including data curation, cleaning, and preprocessing. Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems. Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models. Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions. Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels. Understanding of data privacy, security, and ethical considerations in AI applications. Track record of driving innovation and staying updated with the latest AI research and advancements. Good to Have Skills: Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models and systems. Utilize optimization tools and techniques, including MIP (Mixed Integer Programming). Drive DevOps and MLOps practices, covering continuous integration, deployment, and monitoring of AI models. Implement CI/CD pipelines for streamlined model deployment and scaling processes. Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines. Apply infrastructure as code (IaC) principles, employing tools like Terraform or CloudFormation. Implement monitoring and logging tools to ensure AI model performance and reliability. Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment. Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 6 days ago
3.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position: Cloud Engineer
Experience: 3-5 years
Location: Noida
Work Mode: WFO

The ideal candidate must be self-motivated, with a proven track record as a Cloud Engineer (AWS) who can help with the implementation, adoption, and day-to-day support of an AWS cloud infrastructure environment distributed across multiple regions and business units. The individual in this role must be a technical expert on AWS who understands and practices the AWS Well-Architected Framework and is familiar with multi-account strategy deployment using a Control Tower/Landing Zone setup. The ideal candidate can manage day-to-day operations, troubleshoot problems, perform routine maintenance, and enhance system health monitoring on the cloud stack. Must have excellent written and verbal communication skills.

Technical Skills:
- Strong experience with AWS IaaS architectures.
- Hands-on experience deploying and supporting AWS services such as EC2, Auto Scaling, AMI management, snapshots, ELB, S3, Route 53, VPC, RDS, SES, SNS, CloudFormation, CloudWatch, IAM, Security Groups, CloudTrail, Lambda, etc.
- Experience building and supporting AWS WorkSpaces.
- Experience deploying and troubleshooting Windows or Linux operating systems.
- Experience with AWS SSO and RBAC.
- Understanding of DevOps tools such as Terraform, GitHub, and Jenkins.
- Experience working with ITSM processes and tools such as Remedy and ServiceNow.
- Ability to operate at all levels within the organization and cross-functionally within multiple client organizations.

Responsibilities:
- Plan, automate, implement, and maintain the AWS platform and its associated services.
- Provide SME/L2-and-above technical support.
- Carry out deployment and migration activities.
- Mentor and provide technical guidance to L1 engineers.
- Monitor AWS infrastructure and perform routine maintenance and operational tasks (see the alarm sketch after this posting).
- Work on ITSM tickets and ensure adherence to support SLAs.
- Work on change management processes.
- Excellent analytical and problem-solving skills; exhibits excellent service to others.

Qualifications:
- At least 2 to 3 years of relevant experience on AWS.
- Overall 3-5 years of IT experience working for a global organization.
- Bachelor's degree or higher in Information Systems, Computer Science, or equivalent experience.
- AWS Cloud Practitioner certification is preferred.

Location: Noida - UI, Noida, Uttar Pradesh, India
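For a flavor of the proactive health monitoring this role calls for, here is a minimal boto3 sketch that creates a CloudWatch CPU alarm on an EC2 instance. The instance ID, SNS topic ARN, region, and threshold are placeholder assumptions.

```python
"""Minimal sketch: a CloudWatch high-CPU alarm for one EC2 instance."""
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")  # region assumed

cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,               # evaluate 5-minute averages
    EvaluationPeriods=2,      # require two consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # placeholder topic
)
```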
Posted 6 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About us
Bain & Company is a global management consulting firm that helps the world’s most ambitious change makers define the future. Across 65 offices in 40 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition and redefine industries. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry. In 2004, the firm established its presence in the Indian market by opening the Bain Capability Center (BCC) in New Delhi. The BCC is now known as BCN (Bain Capability Network), with its nodes across various geographies. BCN is an integral part, and the largest unit, of Expert Client Delivery (ECD). ECD plays a critical role as it adds value to Bain's case teams globally by supporting them with analytics and research solutioning across all industries, specific domains for corporate cases, client development, private equity diligence or Bain intellectual property. The BCN comprises Consulting Services, Knowledge Services and Shared Services.

Who you will work with
Pyxis leverages a broad portfolio of 50+ alternative datasets to provide real-time market intelligence and customer insights through a unique business model that enables us to provide our clients with competitive intelligence unrivaled in the market today. We provide insights and data via custom one-time projects or ongoing subscriptions to data feeds and visualization tools. We also offer custom data and analytics projects to suit our clients’ needs. Pyxis can help teams answer core questions about market dynamics, products, customer behavior, and ad spending on Amazon, with a focus on providing our data and insights to clients in the way that best suits their needs. Refer to: www.pyxisbybain.com

What you’ll do
- Set up tools and required infrastructure.
- Define and set development, test, release, update, and support processes for DevOps operation.
- Review, verify, and validate the software code developed in the project.
- Troubleshoot and fix code bugs.
- Monitor processes across the entire lifecycle for adherence, updating or creating new processes for improvement and minimizing waste.
- Encourage and build automated processes wherever possible.
- Identify and deploy cybersecurity measures by continuously performing vulnerability assessment and risk management.
- Incident management and root cause analysis.
- Select and deploy appropriate CI/CD tools.
- Strive for continuous improvement and build a continuous integration, continuous delivery, and continuous deployment (CI/CD) pipeline.
- Mentor and guide team members.
- Manage periodic reporting on progress to management.

About you
- A Bachelor’s or Master’s degree in Computer Science or a related field.
- 4+ years of software development experience, with 3+ years as a DevOps engineer.
- High proficiency in cloud management (AWS heavily preferred), including networking, API gateways, infra deployment automation, and cloud ops.
- Knowledge of DevOps/code/infra management tools: GitHub, SonarQube, Snyk, AWS X-Ray, Docker, Datadog, and containerization.
- Infra automation using Terraform, environment creation and management, and containerization using Docker (a wrapper sketch follows this posting).
- Proficiency with Python.
- Disaster recovery, implementation of high-availability apps/infra, and business continuity planning.

What makes us a great place to work
We are proud to be consistently recognized as one of the world's best places to work, a champion of diversity and a model of social responsibility. We are currently ranked the #1 consulting firm on Glassdoor’s Best Places to Work list, and we have maintained a spot in the top four on Glassdoor's list for the last 12 years. We believe that diversity, inclusion and collaboration are key to building extraordinary teams. We hire people with exceptional talents, abilities and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally. We are publicly recognized by external parties such as Fortune, Vault, Mogul, Working Mother, Glassdoor and the Human Rights Campaign for being a great place to work for diversity and inclusion, women, LGBTQ and parents.
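As a small example of the Python-plus-Terraform automation this role mentions, here is a hedged sketch of a wrapper that drives an init/plan/apply cycle from Python. The working directory and module layout are assumptions; a real pipeline would add remote state and approval gates.

```python
"""Minimal sketch: driving Terraform from Python with subprocess."""
import subprocess

def terraform(*args: str, workdir: str = "infra/") -> None:
    # Run a terraform subcommand in the module directory; fail loudly on error.
    subprocess.run(["terraform", *args], cwd=workdir, check=True)

terraform("init", "-input=false")
terraform("plan", "-out=tfplan", "-input=false")
terraform("apply", "tfplan")  # applying a saved plan does not prompt
```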
Posted 1 week ago
4.0 - 12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Title: Google Cloud DevOps Engineer
Location: PAN India

The Opportunity:
Publicis Sapient is looking for a Cloud & DevOps Engineer to join our team of bright thinkers and enablers. You will use your problem-solving skills, craft, and creativity to design and develop infrastructure interfaces for complex business applications, and contribute ideas for improvements in Cloud and DevOps practices, delivering innovation through automation. We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.

Your Impact:
- Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients’ business.
- Lead and support the engineering side of digital business transformations, with cloud, multi-cloud, security, observability and DevOps as technology enablers.
- Build immutable infrastructure and maintain highly scalable, secure, and reliable cloud infrastructure that is optimized for performance and cost and compliant with security standards to prevent security breaches.
- Enable our customers to accelerate their software development lifecycle and reduce the time-to-market for their products or services.

Your Skills & Experience:
- 4 to 12 years of experience in Cloud & DevOps, with a full-time Bachelor’s/Master’s degree (Science or Engineering preferred).
- Expertise in the DevOps and cloud tools below.

Cloud:
- GCP (Compute, IAM, VPC, Storage, Serverless, Database, Kubernetes, Pub/Sub, Operations Suite).
- Configuring and monitoring DNS, app servers, load balancers, and firewalls for high-volume traffic.
- Extensive experience designing, implementing, and maintaining infrastructure as code, preferably using Terraform, or CloudFormation/ARM Templates/Deployment Manager/Pulumi.

Container Infrastructure:
- Experience managing container infrastructure (on-prem and managed, e.g., AWS ECS, EKS, or GKE).
- Design, implement and upgrade container infrastructure, e.g., K8s clusters and node pools (a rolling-update sketch follows this posting).
- Create and maintain deployment manifest files for microservices using Helm.
- Use the Istio service mesh to create gateways, virtual services, traffic routing and fault injection.
- Troubleshoot and resolve container infrastructure and deployment issues.

Continuous Integration & Continuous Deployment:
- Develop and maintain CI/CD pipelines for software delivery using Git and tools such as Jenkins, GitLab, CircleCI, Bamboo and Travis CI.
- Automate build, test, and deployment processes to ensure efficient release cycles and enforce software development best practices, e.g., quality gates and vulnerability scans.
- Automate build and deployment processes using Groovy, Go, Python, Shell, or PowerShell.
- Implement DevSecOps practices and tools to integrate security into the software development and deployment lifecycle.
- Manage artifact repositories such as Nexus and JFrog Artifactory for version control and release management.

Observability, Monitoring & Logging:
- Design, implement, and maintain observability, monitoring, logging and alerting using tools such as:
- Observability: Jaeger, Kiali, CloudTrail, OpenTelemetry, Dynatrace.
- Logging: Elastic Stack (Elasticsearch, Logstash, Kibana), Fluentd, Splunk.
- Monitoring: Prometheus, Grafana, Datadog, New Relic.

Good to Have:
- Associate-level public cloud certifications.
- Terraform Associate-level certification.

Benefits of Working Here:
- Gender-neutral policy.
- 18 paid holidays throughout the year for NCR/BLR (22 for Mumbai).
- Generous parental leave and new-parent transition program.
- Flexible work arrangements.
- Employee Assistance Programs to support your wellness and well-being.

Learn more about us at www.publicissapient.com or explore other career opportunities here.
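To illustrate the kind of GKE deployment automation described above, below is a minimal sketch using the official kubernetes Python client to trigger a rolling image update by patching a Deployment. The kubeconfig source, namespace, and deployment/container names are placeholder assumptions.

```python
"""Minimal sketch: rolling image update on a Kubernetes Deployment."""
from kubernetes import client, config

# Assumes credentials from `gcloud container clusters get-credentials` (or any kubeconfig).
config.load_kube_config()
apps = client.AppsV1Api()

# Patch only the container image; Kubernetes performs the rolling update itself.
patch = {"spec": {"template": {"spec": {"containers": [
    {"name": "web", "image": "gcr.io/example-project/web:1.4.2"}  # placeholders
]}}}}
apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)
```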
Posted 1 week ago
0.0 - 2.0 years
0 - 0 Lacs
Turbhe Khurd, Navi Mumbai, Maharashtra
On-site
JD: DevOps
Experience required: 2-3 yrs
Max salary/month: 23,000

The ideal candidate will:
- Design and implement scalable, reliable AWS infrastructure.
- Develop and maintain automation tools and CI/CD pipelines using Jenkins or GitHub Actions.
- Build, operate, and maintain Kubernetes clusters for container orchestration.
- Leverage Infrastructure as Code tools like Terraform or CloudFormation for consistent environment provisioning.
- Automate system tasks using Python or Golang scripting.
- Collaborate closely with developers and SREs to ensure systems are resilient, scalable, and efficient.
- Monitor and troubleshoot system performance using observability tools like Datadog, Prometheus, and Grafana (see the sketch after this posting).

Primary Skills:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 6+ years of experience as a DevOps Engineer with a strong focus on AWS.
- Hands-on experience with containerization tools like Docker.
- Expertise in managing Kubernetes clusters in production.
- Create Helm charts for the applications.
- Proficient in creating and managing CI/CD workflows with Jenkins or GitHub Actions.
- Strong background in Infrastructure as Code (Terraform or CloudFormation).
- Automation and scripting skills using Python or Golang.
- Strong analytical and problem-solving abilities.
- Excellent communication and collaboration skills.
- Certifications in Kubernetes or Terraform are a plus.

Good to Have Skills:
- Configuration management using Ansible.
- Basic understanding of AI & ML concepts.

Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹26,000.00 per month
Benefits: Paid sick time, paid time off, Provident Fund
Schedule: Day shift, fixed shift, Monday to Friday, morning shift, weekend availability
Supplemental Pay: Performance bonus, yearly bonus
Ability to commute/relocate: Turbhe Khurd, Navi Mumbai, Maharashtra: reliably commute or plan to relocate before starting work (Required)
Application Questions:
- Are you open to negotiating based on your current salary?
- Are you willing to sign an 18-month service bond if selected?
Experience: DevOps: 2 years (Required)
Work Location: In person
Speak with the employer: +91 7087738773
Expected Start Date: 04/08/2025
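As a taste of the observability work in this role, here is a hedged sketch that queries Prometheus's HTTP API for instances that are currently down. The in-cluster Prometheus address and the PromQL expression are assumptions.

```python
"""Minimal sketch: query Prometheus for down instances via its HTTP API."""
import requests

PROM_URL = "http://prometheus.monitoring.svc:9090"  # placeholder in-cluster address

resp = requests.get(
    f"{PROM_URL}/api/v1/query",
    params={"query": 'up{job="kubernetes-pods"} == 0'},  # targets currently down (query assumed)
    timeout=10,
)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    print("DOWN:", result["metric"].get("instance", "unknown"))
```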
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Job
We are seeking a skilled Data Platform Engineer with expertise in High-Performance Computing (HPC) and cloud computing to support our scientific research activities globally (mostly Canada, the US, France and Germany). The ideal candidate will have experience in managing and optimizing Linux-based HPC environments, as well as proficiency in AWS cloud services. This role involves collaborating with various R&D groups to provide technical support and drive continuous improvements in our computing infrastructure. The candidate should be adept at handling both open-source and commercial software across different R&D fields.

What You Will Be Doing:
- Support in-silico activities in the Boston area, including the installation, configuration, and optimization of Linux workstations and applications.
- Provide continuous improvements and maintenance of the current Linux environments.
- Manage and optimize AWS cloud resources, including key services such as Amazon FSx for Lustre, EC2, S3…
- Collaborate with research teams to understand their computational needs and provide tailored solutions.
- Ensure the security, scalability, and efficiency of cloud-based scientific workflows.
- Troubleshoot and resolve technical issues in both on-premises and cloud environments.
- Handle the compilation, installation, and maintenance of open-source software and commercial applications.
- Stay updated with the latest advancements in HPC and cloud technologies to recommend and implement improvements.

Main requirements:
- Proven experience in managing and optimizing Linux-based HPC environments.
- Strong proficiency in AWS cloud services, particularly Amazon FSx for Lustre, EC2, and S3.
- Knowledge of cloud architecture, including network design, storage solutions, and security best practices.
- Familiarity with scripting languages such as Bash, Python, or Perl for automation and system administration tasks.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Ability to compile, install, and maintain open-source software and commercial applications.
- Strong problem-solving skills and the ability to work independently and in a team.
- Excellent communication skills to collaborate effectively with researchers and technical teams.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Preferred Qualifications:
- Experience with other cloud platforms (e.g., Google Cloud, Azure).
- Knowledge of bioinformatics or scientific computing workflows.
- Experience working with HPC schedulers (SLURM, PBS, Grid Engine, etc.) — a submission sketch follows this posting.
- Familiarity with Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
- Certifications in AWS or other cloud platforms.
- Experience with software tools in various R&D fields, such as:
  - Drug Design and Molecular Modeling: Schrödinger, MOE, Amber, Gromacs, NAMD, AlphaFold.
  - Genomics and Data Analysis: NGS pipelines (Cell Ranger), KNIME, R/RStudio/RShiny.
  - Pharmacokinetics and Clinical Simulations: Monolix, Matlab, R/RStudio, Julia.
  - Structural Biology and Imaging: CryoSPARC, Relion, CCP4, PyMOL.

Why choose us?
- Bring the miracles of science to life alongside a supportive, future-focused team.
- Discover endless opportunities to grow your talent and drive your career, whether it’s through a promotion or lateral move, at home or internationally.
- Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact.
- Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs and at least 14 weeks’ gender-neutral parental leave.
- Opportunity to work in an international environment, collaborating with diverse business teams and vendors, working in a dynamic team, and fully empowered to propose and implement innovative ideas.

Pursue Progress. Discover Extraordinary.
Progress doesn’t happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. You can be one of those people. Chasing change, embracing new ideas and exploring all the opportunities we have to offer. Let’s pursue progress. And let’s discover extraordinary together. Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!

Join Sanofi and step into a new era of science, where your growth can be just as transformative as the work we do. We invest in you to reach further, think faster, and do what’s never-been-done-before. You’ll help push boundaries, challenge convention, and build smarter solutions that reach the communities we serve. Ready to chase the miracles of science and improve people’s lives?

At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, protected veteran status or other characteristics protected by law.
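To make the HPC-scheduler experience concrete, here is a minimal sketch (assuming a SLURM cluster) that writes a batch script and submits it with sbatch from Python. The partition name, resource numbers, and the GROMACS command are placeholders.

```python
"""Minimal sketch: submit an HPC job to SLURM from Python."""
import subprocess
from pathlib import Path

script = Path("run_md.sh")
script.write_text("""#!/bin/bash
#SBATCH --job-name=md-example
#SBATCH --partition=compute
#SBATCH --ntasks=8
#SBATCH --time=02:00:00
module load gromacs           # assumes an environment-modules setup
gmx mdrun -deffnm production  # placeholder GROMACS run
""")

result = subprocess.run(["sbatch", str(script)],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())  # e.g. "Submitted batch job 12345"
```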
Posted 1 week ago
3.0 years
0 Lacs
India
On-site
We need an experienced DevOps Engineer to single-handedly build our Automated Provisioning Service on Google Cloud Platform. You'll implement infrastructure automation that provisions complete cloud environments for B2B customers in under 10 minutes.

Core Responsibilities:

Infrastructure as Code Implementation
- Develop Terraform modules for automated GCP resource provisioning.
- Create reusable templates for: GKE cluster deployment with predefined node pools; Cloud Storage bucket configuration; Cloud DNS and SSL certificate automation; IAM roles and service account setup.
- Implement state management and version control for IaC.

Automation & Orchestration
- Build Cloud Functions or Cloud Build triggers for provisioning workflows.
- Create automation scripts (Bash/Python) for deployment orchestration.
- Deploy containerized Node.js applications to GKE using Helm charts.
- Configure automated SSL certificate provisioning via Certificate Manager.

Security & Access Control
- Implement IAM policies and RBAC for customer isolation.
- Configure secure service accounts with minimal required permissions.
- Set up audit logging and monitoring for all provisioned resources.

Integration & Deployment
- Create webhook endpoints to receive provisioning requests from the frontend (a sketch follows this posting).
- Implement provisioning status tracking and error handling.
- Document deployment procedures and troubleshooting guides.
- Ensure the 5-10 minute provisioning-time SLA is met.

Required Skills & Certifications:

Mandatory certification (must hold one of the following):
- Google Cloud Associate Cloud Engineer (minimum requirement)
- Google Cloud Professional Cloud DevOps Engineer (preferred)
- Google Cloud Professional Cloud Architect (preferred)

Technical Skills (Must Have):
- 3+ years of hands-on experience with Google Cloud Platform.
- Strong Terraform expertise with a proven track record.
- GKE/Kubernetes deployment and management experience.
- Proficiency in Bash and Python scripting.
- Experience with CI/CD pipelines (Cloud Build preferred).
- Knowledge of GCP IAM and security best practices.
- Ability to work independently with minimal supervision.

Nice to Have:
- Experience developing RESTful APIs for service integration.
- Experience with multi-tenant architectures.
- Node.js/Docker containerization experience.
- Helm chart creation and management.

Deliverables (2-Month Timeline):

Month 1:
- Complete Terraform modules for all GCP resources.
- Working prototype of the automated provisioning flow.
- Basic IAM and security implementation.
- Integration with webhook triggers.

Month 2:
- Production-ready deployment with error handling.
- Performance optimization (achieve <10 min provisioning).
- Complete documentation and runbooks.
- Handover and knowledge transfer.

Technical Environment:
- Primary tools: Terraform, GCP (GKE, Cloud Storage, Cloud DNS, IAM)
- Languages: Bash, Python (automation scripts)
- Orchestration: Cloud Build, Cloud Functions
- Containerization: Docker, Kubernetes, Helm

Ideal Candidate:
- A self-starter who can own the entire DevOps scope independently.
- A strong problem-solver comfortable with ambiguity.
- Excellent time management skills to meet tight deadlines.
- A clear communicator who documents their work thoroughly.

Important Note: Google Cloud certification is mandatory for this position due to partnership requirements. Please include your certification details and ID number in your application.

Application Requirements:
- Proof of a valid Google Cloud certification.
- Examples of similar GCP automation projects.
- GitHub/GitLab links to relevant Terraform modules (if available).
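As one possible shape for the webhook endpoint described above, here is a hedged Flask sketch that receives a provisioning request and shells out to Terraform with a per-customer workspace. The endpoint path, payload field, module directory, and variable names are all assumptions, and the `-or-create` workspace flag assumes a recent Terraform release.

```python
"""Minimal sketch: a provisioning webhook that triggers Terraform per customer."""
import subprocess
from flask import Flask, jsonify, request

app = Flask(__name__)
MODULE_DIR = "modules/customer-env"  # assumed module layout

@app.post("/provision")
def provision():
    payload = request.get_json(force=True)
    customer = payload["customer_id"]  # assumed payload field
    # One Terraform workspace per customer keeps tenant state isolated.
    subprocess.run(["terraform", "workspace", "select", "-or-create", customer],
                   cwd=MODULE_DIR, check=True)
    subprocess.run(["terraform", "apply", "-auto-approve",
                    f"-var=customer_id={customer}"],
                   cwd=MODULE_DIR, check=True)
    return jsonify({"status": "provisioned", "customer": customer})

if __name__ == "__main__":
    app.run(port=8080)
```

A production version would run the apply asynchronously (e.g., via Cloud Build) and report status through a tracking store rather than blocking the request.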
Posted 1 week ago
6.0 years
18 - 30 Lacs
India
On-site
Role: Senior Database Administrator (DevOps)
Experience: 7+ years
Type: Contract

Job Summary
We are seeking a highly skilled and experienced Database Administrator with a minimum of 6 years of hands-on experience managing complex, high-performance, and secure database environments. This role is pivotal in maintaining and optimizing our multi-platform database infrastructure, which includes PostgreSQL, MariaDB/MySQL, MongoDB, MS SQL Server, and AWS RDS/Aurora instances. You will be working primarily within Linux-based production systems (e.g., RHEL 9.x) and will play a vital role in collaborating with DevOps, Infrastructure, and Data Engineering teams to ensure seamless database performance across environments. The ideal candidate has strong experience with infrastructure automation tools like Terraform and Ansible, is proficient with Docker, and is well-versed in cloud environments, particularly AWS. This is a critical role where your efforts will directly impact system stability, scalability, and security across all environments.

Key Responsibilities
- Design, deploy, monitor, and manage databases across production and staging environments.
- Ensure high availability, performance, and data integrity for mission-critical systems.
- Automate database provisioning, configuration, and maintenance using Terraform and Ansible.
- Administer Linux-based systems for database operations with an emphasis on system reliability and uptime.
- Establish and maintain monitoring systems, set up proactive alerts, and rapidly respond to performance issues or incidents (a replication-lag check is sketched after this posting).
- Work closely with DevOps and Data Engineering teams to integrate infrastructure with MLOps and CI/CD pipelines.
- Implement and enforce database security best practices, including data encryption, user access control, and auditing.
- Conduct root cause analysis and tuning to continuously improve database performance and reduce downtime.

Required Technical Skills

Database Expertise:
- PostgreSQL: advanced skills in replication, tuning, backup/recovery, partitioning, and logical/physical architecture.
- MariaDB/MySQL: proven experience in high-availability configurations, schema optimization, and performance tuning.
- MongoDB: strong understanding of NoSQL structures, including indexing strategies, replica sets, and sharding.
- MS SQL Server: capable of managing and maintaining enterprise-grade MS SQL Server environments.
- AWS RDS & Aurora: deep familiarity with provisioning, monitoring, auto-scaling, snapshot management, and failover handling.

Infrastructure & DevOps:
- 6+ years of experience as a Database Administrator or DevOps Engineer in Linux-based environments.
- Hands-on expertise with Terraform, Ansible, and Infrastructure as Code (IaC) best practices.
- Knowledge of networking principles, firewalls, VPCs, and security hardening.
- Experience with monitoring tools such as Datadog, Splunk, SignalFx, and PagerDuty for observability and alerting.
- Strong working experience with AWS cloud services (EC2, VPC, IAM, CloudWatch, S3, etc.).
- Exposure to other cloud providers like GCP, Azure, or IBM Cloud is a plus.
- Familiarity with Docker, container orchestration, and integrating databases into containerized environments.

Preferred Qualifications:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to collaborate in cross-functional teams and drive initiatives independently.
- A passion for automation, observability, and scalability in production-grade environments.

Must Have: AWS, Ansible, DevOps, Terraform
Skills: PostgreSQL, MariaDB, Datadog, containerization, networking, Linux, MongoDB, DevOps, Terraform, AWS Aurora, cloud services, Amazon Web Services (AWS), MS SQL Server, Ansible, AWS, MySQL, AWS RDS, Docker, infrastructure, database
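To illustrate the kind of proactive monitoring this role automates, here is a minimal psycopg2 sketch that measures streaming-replication lag on a PostgreSQL replica. The connection settings are placeholders; a real check would feed the result into an alerting tool such as Datadog or PagerDuty.

```python
"""Minimal sketch: check PostgreSQL streaming-replication lag on a replica."""
import psycopg2

conn = psycopg2.connect(host="replica.example.internal",  # placeholder host
                        dbname="appdb", user="monitor", password="***")
with conn, conn.cursor() as cur:
    # On a streaming replica, this measures how far behind WAL replay is.
    cur.execute("SELECT now() - pg_last_xact_replay_timestamp() AS lag;")
    lag = cur.fetchone()[0]
    print(f"replication lag: {lag}")
conn.close()
```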
Posted 1 week ago
0.0 years
0 - 0 Lacs
Nagercoil, Tamil Nadu
Remote
We’re building agile, mid-sized web applications where your work will have an immediate impact. As our Senior Full-Stack MERN Developer, you’ll own features end-to-end, from concept to deployment, while using AI-assisted coding tools to speed up development without compromising code quality.

Key Responsibilities:
- Lead features from requirements to deployment (Node.js/Express + React).
- Build modular TypeScript services, APIs (REST/GraphQL), and integrations.
- Create responsive React UIs with hooks, Context/Redux and code-splitting.
- Use AI tools (Copilot, Cursor, RooCode, Cline, etc.) to scaffold code, tests, and infra.
- Review and refine AI-generated code for security, edge cases and performance.
- Maintain high quality with unit/integration tests (Jest/Mocha) and linting.
- Containerize services (Docker) and collaborate on CI/CD pipelines.
- Mentor junior devs and run AI best-practice workshops.

Must-Have Skills:
- 5+ years of MERN experience (MongoDB, Express, React, Node.js) with TypeScript.
- Proven track record shipping mid-scale apps (5–15 screens).
- Hands-on experience with AI dev tools (share sample prompts).
- Strong automated-testing and code-quality discipline.

Bonus Points For: GraphQL, AWS (ECS/Lambda), Terraform, or prior remote-startup work.

What We Offer:
- Competitive salary + performance bonuses.
- Remote-first: on-site ramp for 3 months, then partial remote.
- 5% annual salary bump at each work anniversary.
- Learning stipend for courses, conferences and AI tools.
- Internet allowance (₹1,000/month).

How to Apply:
Email your resume + GitHub/portfolio to contact@brownsofts.in with subject: [Sr MERN + AI] Your Name. Include:
✅ A mid-scale MERN project you led
✅ 1–2 sample AI prompts you’ve used in development
For more info, contact: 83000 50033

Job Type: Full-time
Pay: ₹15,000.00 - ₹20,000.00 per month
Benefits: Flexible schedule
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Nagercoil, Tamil Nadu: reliably commute or planning to relocate before starting work (Preferred)
Language: English (Required)
Work Location: In person
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Hi connections, urgent hiring for the role below.

About the Role:
We are seeking a seasoned and highly skilled MLOps Engineer to join our growing team. The ideal candidate will have extensive hands-on experience deploying, monitoring, and retraining machine learning models in production environments. You will be responsible for building and maintaining robust, scalable MLOps pipelines using tools like MLflow, Apache Airflow, Kubernetes, and Databricks or Azure ML. A strong understanding of infrastructure as code using Terraform is essential. You will play a key role in operationalizing AI/ML systems and ensuring high performance, availability, and automation across the ML lifecycle.

Key Responsibilities:
- Design and implement scalable MLOps pipelines for model training, validation, deployment, and monitoring.
- Operationalize machine learning models using MLflow, Airflow, and containerized deployments via Kubernetes (an MLflow tracking sketch follows this posting).
- Automate and manage ML workflows across cloud platforms such as Azure ML or Databricks.
- Develop infrastructure using Terraform for consistent and repeatable deployments.
- Trace API calls to LLMs, Azure OCR and Paradigm.
- Implement performance monitoring, alerting, and logging for deployed models using custom and third-party tools.
- Automate model retraining and continuous deployment pipelines based on data drift and model performance metrics.
- Ensure traceability, reproducibility, and auditability of ML experiments and deployments.
- Collaborate with Data Scientists, ML Engineers, and DevOps teams to streamline ML workflows.
- Apply CI/CD practices and version control to the entire ML lifecycle.
- Ensure secure, reliable, and compliant deployment of models in production environments.

Required Qualifications:
- 5+ years of experience in MLOps, DevOps, or ML engineering roles, with a focus on production ML systems.
- Proven experience deploying machine learning models using MLflow and workflow orchestration with Apache Airflow.
- Hands-on experience with Kubernetes for container orchestration in ML deployments.
- Proficiency with Databricks and/or Azure ML, including model training and deployment capabilities.
- Solid understanding and practical experience with Terraform for infrastructure as code.
- Experience automating model monitoring and retraining processes based on data and model drift.
- Knowledge of CI/CD tools and principles applied to ML systems.
- Familiarity with monitoring tools and observability stacks (e.g., Prometheus, Grafana, Azure Monitor).
- Strong scripting skills in Python.
- Deep understanding of ML lifecycle challenges, including model versioning, rollback, and scaling.
- Excellent communication skills and ability to collaborate across technical and non-technical teams.

Nice to Have:
- Experience with Azure DevOps or GitHub Actions for ML CI/CD.
- Exposure to model performance optimization and A/B testing in production environments.
- Familiarity with feature stores and online inference frameworks.
- Knowledge of data governance and ML compliance frameworks.
- Experience with ML libraries like scikit-learn, PyTorch, or TensorFlow.

Education:
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
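As a small illustration of the MLflow-centered workflow above, here is a hedged sketch that trains a toy model and logs its parameters, metric, and model artifact to a tracking server. The tracking URI and experiment name are placeholders; a deployment pipeline could later fetch the logged model by run ID.

```python
"""Minimal sketch: MLflow experiment tracking for a toy scikit-learn model."""
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

mlflow.set_tracking_uri("http://mlflow.internal:5000")  # placeholder server
mlflow.set_experiment("churn-model")                    # placeholder experiment

X, y = make_classification(n_samples=500, n_features=10, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100).fit(X, y)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Log the artifact so a deployment pipeline can retrieve it later.
    mlflow.sklearn.log_model(model, "model")
```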
Posted 1 week ago