10.0 - 18.0 years
0 Lacs
indore, madhya pradesh
On-site
You should possess a BTech degree in computer science, engineering, or a related field of study, or have 12+ years of related work experience, along with at least 7 years of design and implementation experience with large-scale, data-centric distributed applications. Professional experience architecting and operating cloud-based solutions is essential, with a good understanding of core disciplines such as compute, networking, storage, security, and databases. You should also bring:
- A strong grasp of data engineering concepts such as storage, governance, cataloging, data quality, and data modeling
- Familiarity with architecture patterns such as data lake, data lakehouse, and data mesh
- A good understanding of data warehousing concepts and hands-on experience with tools such as Hive, Redshift, Snowflake, and Teradata
- Experience migrating or transforming legacy customer solutions to the cloud (highly valued)
- Experience with services such as AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, and DataZone
- A thorough understanding of Big Data ecosystem technologies such as Hadoop, Spark, Hive, and HBase, along with other relevant tools and technologies
- Knowledge of designing analytical solutions using AWS AI services such as Textract, Comprehend, Rekognition, and SageMaker (advantageous)
- Experience with modern development workflows: Git, continuous integration/continuous deployment pipelines, static code analysis tooling, and infrastructure-as-code
- Proficiency in a programming or scripting language such as Python, Java, or Scala
An AWS Professional/Specialty certification or relevant cloud expertise is a plus.
In this role, you will drive innovation within the Data Engineering domain by designing reusable, reliable accelerators, blueprints, and libraries. You should be capable of leading a technology team, fostering an innovative mindset, and enabling fast-paced deliveries, while adapting to new technologies, learning quickly, and managing high ambiguity. You will collaborate with business stakeholders, participate in architectural, design, and status calls, and demonstrate good presentation skills when interacting with executives, IT management, and developers. You will also drive technology/software sales or pre-sales consulting discussions, take end-to-end ownership of tasks, and maintain high-quality software development with complete documentation and traceability. Organizational responsibilities include sharing knowledge and experience with other teams/groups, conducting technical training sessions, and producing whitepapers, case studies, and blogs. The ideal candidate should have 10 to 18 years of experience and can reference this job with the number 12895.
Posted 2 days ago
12.0 - 16.0 years
0 Lacs
karnataka
On-site
Join us as a Performance Testing Specialist in this key role, where you will undertake and enable automated testing activities in all delivery models. You will support teams in developing quality solutions and ensuring continuous integration for defect-free deployment of customer value. Working in a fast-paced environment, you will gain exposure by collaborating closely with teams across the bank. This position is offered at the vice president level.

As a Quality Automation Specialist, you will play a crucial role in transforming testing processes by utilizing quality processes, tools, and methodologies to enhance control, accuracy, and integrity. Your responsibilities will include ensuring that new sprint deliveries within a release cycle continue to meet Non-Functional Requirements (NFRs) such as response time, throughput rate, and resource consumption.

In this collaborative role, you will lead debugging sessions with software providers, hardware providers, and internal teams to investigate findings and develop solutions. You will evolve predictive and intelligent testing approaches based on automation and innovative testing products and solutions, and work closely with your team to define and refine the scope of manual and automated testing, creating automated test scripts, user documentation, and artifacts. Your decision-making will be data-driven, focusing on return on investment and value measures that reflect thoughtful cost management. You will also play a key role in enabling the cross-skilling of colleagues in end-to-end automation testing.

To excel in this role, you should have a minimum of twelve years of experience in automated testing, particularly in an Agile development or Continuous Integration/Continuous Delivery (CI/CD) environment.
Proficiency in performance testing tools such as LoadRunner, Apache JMeter, or NeoLoad is essential, as is experience with AWS EKS containers and microservices architecture. Familiarity with monitoring and analyzing performance tests using tools like Grafana, Jaeger, and Graylog is also required. Moreover, we are seeking candidates with expertise in end-to-end and automation testing using the latest tools recommended by the enterprise tooling framework. A background in designing, developing, and implementing automation frameworks in new environments is highly desirable. Effective communication skills for conveying complex technical concepts to management-level colleagues, along with strong collaboration and stakeholder management skills, are essential for success in this role.
Posted 2 days ago
6.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
As an OpenShift Admin with 6 to 8 years of relevant experience, you will be responsible for building automation to support product development and data analytics initiatives. In this role, you will develop and maintain strong customer relationships to ensure effective service delivery and customer satisfaction. Regular interaction with customers will be essential: refining requirements, gaining agreement on solutions and deliverables, providing progress reports, monitoring satisfaction levels, identifying and resolving concerns, and seeking cooperation to achieve mutual objectives.

To be successful in this role, you must have a minimum of 6 years of experience as an OpenShift Admin, with expertise in Kubernetes administration, automation tools such as Ansible, AWS EKS, Argo CD, and Linux administration. Extensive knowledge of and experience with OpenShift and Kubernetes are crucial for this infrastructure-focused position. You should be experienced in deploying new app containers from scratch in OpenShift or Kubernetes, as well as upgrading OpenShift and working with observability in these environments. Experience with Anthos/GKE for hybrid cloud, HashiCorp Terraform, and HashiCorp Vault would be beneficial.

As an OpenShift Admin, you will be expected to create, maintain, and track designs at both high and detailed levels, identify new technologies for adoption, conduct consistent code reviews, and propose changes where necessary. You will also be responsible for provisioning infrastructure, developing automation scripts, monitoring system performance, integrating security and compliance measures, documenting configurations and processes, and deploying infrastructure as code and applications using automation and orchestration tools.

The hiring process will consist of a screening round conducted by HR, followed by two technical rounds and a final HR round. If you have a strong background in OpenShift administration and related technologies, and you are passionate about driving innovation and excellence in infrastructure management, we encourage you to apply for this role in our Pune office.
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
Veeam, the global market leader in data resilience, believes businesses should have full control over their data whenever and wherever they require it. Veeam specializes in data resilience solutions encompassing data backup, recovery, portability, security, and intelligence. Headquartered in Seattle, Veeam serves over 550,000 customers worldwide, who rely on Veeam to ensure the continuity of their operations. As we progress together, learning, growing, and creating a significant impact for some of the world's most renowned brands, we offer you an opportunity to be part of this journey.

We are searching for a Platform Engineer to join the Veeam Data Cloud team. The primary goal of the Platform Engineering team is to provide a secure, dependable, and user-friendly platform that facilitates the development, testing, deployment, and monitoring of the VDC product. This role offers an exceptional opportunity for an individual with expertise in cloud infrastructure and software development to contribute to the development of the most successful and advanced data protection platform globally.

Your responsibilities will include:
- Developing and maintaining code to automate our public cloud infrastructure, software delivery pipeline, enablement tools, and internally consumed platform services
- Documenting system design, configurations, processes, and decisions to support our asynchronous, distributed team culture
- Collaborating with a team of remote engineers to build the VDC platform
- Working with a modern technology stack comprising containers, serverless infrastructure, public cloud services, and other cutting-edge technologies in the SaaS domain
- Participating in an on-call rotation for product operations

Technologies you will work with include Kubernetes, Azure AKS, AWS EKS, Helm, Docker, Terraform, Golang, Bash, Git, and more.

Qualifications we seek from you:
- Minimum of 3 years of experience in production operations for a SaaS or cloud service provider
- Proficiency in automating infrastructure through code using tools like Pulumi or Terraform
- Familiarity with GitHub Actions and a variety of public cloud services
- Background in building and supporting enterprise SaaS products
- Understanding of operational excellence principles in a SaaS environment
- Proficiency in scripting languages such as Bash or Python
- Knowledge and experience in implementing secure design principles in the cloud
- Demonstrated ability to quickly learn new technologies and implement them effectively
- Strong inclination towards taking action and maintaining direct, frequent communication
- A technical degree from a university

Desirable qualifications:
- Experience with Azure
- Proficiency in high-level programming languages like Go, Java, or C/C++

In return, we provide:
- Family medical insurance
- Annual flexible spending allowance for health and well-being
- Life insurance and personal accident insurance
- Employee Assistance Program
- Comprehensive leave package, including parental leave
- Meal Benefit Pass, Transportation Allowance, and Monthly Daycare Allowance
- Veeam Care Days: an additional 24 hours for volunteering activities
- Professional training and education opportunities, including courses, workshops, internal meetups, and access to online learning platforms
- Mentorship through our MentorLab program

Please note: Veeam reserves the right to decline applications from candidates permanently located outside India. Veeam Software is dedicated to promoting diversity and equal opportunities and prohibits discrimination based on various factors. All personal data collected during the recruitment process will be handled in accordance with our Recruiting Privacy Notice; by applying for this position, you consent to the processing of your personal data as described there. Your application and supporting documents should accurately represent your qualifications and experience; any misrepresentation may lead to disqualification from employment consideration, or to termination if discovered after employment commences.
Posted 6 days ago
4.0 - 8.0 years
0 Lacs
maharashtra
On-site
At PwC, the focus in data and analytics revolves around leveraging data to drive insights and make informed business decisions. By utilizing advanced analytics techniques, our team helps clients optimize operations and achieve strategic goals. As a professional in data analysis at PwC, you will specialize in using advanced analytical techniques to extract insights from large datasets and support data-driven decision-making, leveraging skills in data manipulation, visualization, and statistical modeling to help clients solve complex business problems.

PwC US - Acceleration Center is currently seeking individuals with a strong analytical background to join our Analytics Consulting practice. As a Senior Associate, you will be an essential part of business analytics teams in India, collaborating with clients and consultants in the U.S. You will lead teams for high-end analytics consulting engagements and provide business recommendations to project teams.

**Years of Experience:** Candidates should possess 4+ years of hands-on experience.

**Must Have:**
- Experience building ML models in cloud environments (at least one of Azure ML, GCP's Vertex AI platform, or AWS SageMaker)
- Knowledge of predictive/prescriptive analytics, particularly Log-Log, Log-Linear, and Bayesian regression techniques, machine learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks
- Good knowledge of statistics, including statistical tests and distributions
- Experience in data analysis, such as data cleansing, standardization, and data preparation for machine learning use cases
- Experience with machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib)
- Advanced-level programming in SQL or Python/PySpark
- Expertise with visualization tools like Tableau, Power BI, or Amazon QuickSight

**Nice To Have:**
- Working knowledge of containerization (e.g., AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g., Airflow)
- Good communication and presentation skills

**Roles And Responsibilities:**
- Develop and execute project and analysis plans under the guidance of the Project Manager
- Interact with and advise consultants/clients in the U.S. as a subject matter expert to formalize data sources, acquire datasets, and clarify data and use cases for a strong understanding of data and business problems
- Drive and conduct analysis using advanced analytics tools and mentor junior team members
- Implement quality control measures to ensure deliverable integrity
- Validate analysis outcomes and recommendations with stakeholders, including the client team
- Build storylines and deliver presentations to the client team and/or PwC project leadership team
- Contribute to knowledge sharing and firm-building activities

**Professional And Educational Background:** Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
ahmedabad, gujarat
On-site
As the DevOps Lead, you will lead the design, implementation, and management of enterprise container orchestration platforms using Rafey and Kubernetes. Your role will involve overseeing the onboarding and deployment of applications on Rafey platforms, utilizing AWS EKS and Azure AKS. You will develop and maintain CI/CD pipelines using Azure DevOps to ensure efficient and reliable application deployment, and collaborate with cross-functional teams to ensure seamless integration and operation of containerized applications. Your expertise will also be required to implement and manage infrastructure as code using tools such as Terraform, ensuring the security, reliability, and scalability of containerized applications and infrastructure.

In addition to your technical responsibilities, you will mentor and guide junior DevOps engineers, fostering a culture of continuous improvement and innovation within the team. Monitoring and optimizing system performance, troubleshooting issues, and staying up to date with industry trends and best practices are also crucial aspects of this role.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 10+ years of experience in DevOps, with a focus on container orchestration platforms.
- Extensive hands-on experience with Kubernetes, EKS, and AKS; knowledge of the Rafey platform is a plus.
- Proven track record of onboarding and deploying applications on Kubernetes platforms, including AWS EKS and Azure AKS.
- Strong knowledge of Kubernetes manifest files, Ingress, Ingress Controllers, and Azure DevOps CI/CD pipelines.
- Proficiency in infrastructure-as-code tools like Terraform.
- Excellent problem-solving skills, knowledge of secret management and RBAC configuration, and hands-on experience with Helm charts.
- Strong communication and collaboration skills, experience with cloud platforms (AWS, Azure), and security best practices in a DevOps environment.

Preferred Skills:
- Strong cloud knowledge (AWS and Azure) and Kubernetes expertise.
- Experience with other enterprise container orchestration platforms and tools.
- Familiarity with monitoring and logging tools like Datadog, plus an understanding of network topology and system architecture.
- Ability to work in a fast-paced, dynamic environment.

Good to Have:
- Knowledge of the Rafey platform (a Kubernetes management platform) and hands-on experience with GitOps technology.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
noida, uttar pradesh
On-site
As part of our team at our technology company in Noida, Uttar Pradesh, India, you will play a crucial role in designing secure and scalable systems and applications for various industries using AWS/GCP/Azure or similar services. Your responsibilities will include building integrations between different systems, applications, browsers, and networks, as well as analyzing business requirements to select appropriate solutions on the AWS platform. You will deliver high-speed, pixel-perfect web applications and stay updated on the latest technology trends, with hands-on experience in modern architecture, microservices, containers, Kubernetes, etc.

You will be expected to solve design and technical problems, demonstrate proficiency in various design patterns and architectures, and have hands-on experience with the latest tech stacks such as MEAN, MERN, Java, and Lambdas. Experience with CI/CD and DevOps practices is essential for this role. Communication with customers, both business and IT, is a key aspect of the position, along with supporting pre-sales teams in workshops and offer preparation. Knowledge of multiple cloud platforms (AWS, GCP, or Azure) is advantageous, with at least one being a requirement.

Your responsibilities will involve facilitating technical discussions with customers, partners, and internal stakeholders, providing domain expertise around public cloud and enterprise technology, and promoting Google Cloud with customers. Creating and delivering best-practice recommendations, tutorials, blog posts, and presentations to support technical, business, and executive partners will be part of your routine. Furthermore, you will provide feedback to product and engineering teams, contribute to the Solutions Go-to-Market team, and ensure timely delivery of high-quality work from team members.

To succeed in this role, you should be proficient in a diverse application-ecosystem tech stack, including programming languages such as JavaScript/TypeScript (preferred), HTML, and Java (Spring Boot). Knowledge of microservice architecture, PWAs, responsive apps, micro-frontends, Docker, Kubernetes, nginx, HAProxy, Jenkins, LoopBack, Express, Next.js, NestJS, React/Angular, and data modeling for NoSQL or SQL databases is essential. Experience with cloud services such as Amazon CloudFront and AWS Lambda (or their Azure equivalents), Apache Kafka, and Git version control, along with an engineering background (B.Tech / M.Tech / PhD), is required.

Nice-to-have skills include an understanding of SQL or NoSQL databases, experience in architectural solutions for the financial services domain, working with sales teams to design appropriate solutions, detailed exposure to cloud providers such as AWS, GCP, or Azure, designing serverless secure web applications, working in a fast-paced startup environment, and certifications from cloud or data solutions providers such as AWS or GCP.

Joining our team comes with benefits such as group medical policies, equal employment opportunity, maternity leave, skill development, 100% sponsorship for certification, work-life balance, flexible work hours, and zero leave tracking. If you are passionate about designing cutting-edge solutions and collaborating with clients to unlock the potential of AI, we welcome you to apply for this role and be part of our dynamic team.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
hyderabad, telangana
On-site
As a Software Engineer - Backend (Python) with 7+ years of experience, you will design and build the backend components of the GenAI Platform in Hyderabad. Your role will involve collaborating with geographically distributed cross-functional teams and participating in an on-call rotation to handle production incidents. The GenAI Platform offers safe, compliant, and cost-efficient access to LLMs, both open-source and commercial, while adhering to Experian standards and policies. You will build reusable tools, frameworks, and coding patterns for fine-tuning LLMs and developing RAG-based applications.

To succeed in this role, you must possess the following skills:
- 7+ years of professional backend web development experience with Python
- Experience with AI and RAG
- Proficiency in DevOps and IaC tools like Terraform and Jenkins
- Familiarity with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow
- Expertise in web development frameworks such as Flask, Django, or FastAPI
- Knowledge of concurrent programming designs like AsyncIO
- Experience with public cloud platforms like AWS, Azure, or GCP (preferably AWS)
- Understanding of CI/CD practices, tools, and frameworks

The following skills would be nice to have:
- Experience with Apache Kafka and developing Kafka client applications in Python
- Familiarity with big data processing frameworks, especially Apache Spark
- Knowledge of containers (Docker) and container platforms like AWS ECS or AWS EKS
- Proficiency in unit and functional testing frameworks
- Experience with Python packaging options such as Wheel, PEX, or Conda
- Understanding of metaprogramming techniques in Python

Join our team and contribute to the development of cutting-edge technologies in a collaborative and dynamic environment.
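As a loose illustration of the AsyncIO-style concurrent programming the posting asks about, here is a minimal sketch of fanning out several backend calls concurrently; the `call_llm` function and its responses are invented for the example, not part of the platform:

```python
import asyncio

async def call_llm(prompt):
    # Stand-in for a real LLM API call; the sleep simulates network latency.
    await asyncio.sleep(0.01)
    return f"response:{prompt}"

async def fan_out(prompts):
    # Issue all requests concurrently instead of awaiting them one by one;
    # gather preserves the input order in its results.
    return await asyncio.gather(*(call_llm(p) for p in prompts))

results = asyncio.run(fan_out(["a", "b", "c"]))
print(results)
```

With three calls of ~10 ms each, the concurrent version completes in roughly one call's latency rather than three.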
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
As a DevOps Engineer, you will play a crucial role in building and managing a robust, scalable, and reliable zero-downtime platform. You will be actively involved in a newly initiated greenfield project that utilizes modern infrastructure and automation tools to support our engineering teams. This is a valuable opportunity to collaborate with an innovative team, foster a culture of fresh thinking, integrate AI and automation, and contribute to our cloud-native journey. If you are enthusiastic about automation, cloud infrastructure, and delivering high-quality production-grade platforms, this position gives you the opportunity to make a significant impact.

Your primary responsibilities will include:
- **Hands-On Development**: Design, implement, and optimize AWS infrastructure through hands-on development using Infrastructure as Code (IaC) tools.
- **Automation & CI/CD**: Develop and maintain Continuous Integration/Continuous Deployment (CI/CD) pipelines to automate rapid, secure, and seamless deployments.
- **Platform Reliability**: Ensure the high availability, scalability, and resilience of our platform by leveraging managed services.
- **Monitoring & Observability**: Implement and oversee proactive observability using tools like DataDog to monitor system health, performance, and security, ensuring prompt issue identification and resolution.
- **Cloud Security & Best Practices**: Apply cloud and security best practices, including configuring networking, encryption, secrets management, and identity/access management.
- **Continuous Improvement**: Contribute innovative ideas and solutions to enhance our DevOps processes.
- **AI & Future Tech**: Explore opportunities to incorporate AI into our DevOps processes and contribute towards AI-driven development.

Your experience should encompass proficiency in the following technologies and concepts:
- **Tech Stack**: Terraform, Terragrunt, Helm, Python, Bash, AWS (EKS, Lambda, EC2, RDS/Aurora), Linux, and GitHub Actions.
- **Strong Expertise**: Hands-on experience with Terraform, IaC principles, CI/CD, and the AWS ecosystem.
- **Networking & Cloud Configuration**: Proven experience with networking (VPCs, subnets, security groups, API Gateway, load balancing, WAF) and cloud configuration (Secrets Manager, IAM, KMS).
- **Kubernetes & Deployment Strategies**: Comfortable with Kubernetes, Argo CD, Istio, and deployment strategies such as blue/green and canary.
- **Cloud Security Services**: Familiarity with cloud security services such as Security Hub, GuardDuty, and Inspector, plus vulnerability observability.
- **Observability Mindset**: A strong belief in measuring everything, using tools like DataDog for platform health and security visibility.
- **AI Integration**: Experience with embedding AI into DevOps processes is an advantage.

This role presents an exciting opportunity to contribute to cutting-edge projects, collaborate with a forward-thinking team, and drive innovation in DevOps engineering.
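To illustrate the reliability-minded automation described above, here is a minimal, generic retry-with-exponential-backoff helper of the kind often wrapped around flaky infrastructure calls. It is a sketch with invented names (`retry`, `flaky`), not part of the posting's stack:

```python
import time

def retry(func, attempts=4, base_delay=0.01):
    """Call func, retrying with exponential backoff when it raises."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            # Back off 0.01s, 0.02s, 0.04s, ... between attempts.
            time.sleep(base_delay * (2 ** attempt))

# A deliberately flaky call that succeeds on its third invocation.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = retry(flaky)
print(result, calls["n"])  # succeeds after two retries
```

The same pattern underlies the automatic retries built into most deployment and provisioning tools; capping attempts and re-raising keeps persistent failures visible instead of silently swallowed.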
Posted 2 weeks ago
4.0 - 6.0 years
4 - 8 Lacs
Bengaluru
Hybrid
Hiring an AWS Data Engineer for a 6-month hybrid contractual role based in Bellandur, Bengaluru. The ideal candidate will have 4-6 years of experience in data engineering, with strong expertise in AWS services (S3, EC2, RDS, Lambda, EKS), PostgreSQL, Redis, Apache Iceberg, and Graph/Vector Databases. Proficiency in Python or Golang is essential. Responsibilities include designing and optimizing data pipelines on AWS, managing structured and in-memory data, implementing advanced analytics with vector/graph databases, and collaborating with cross-functional teams. Prior experience with CI/CD and containerization (Docker/Kubernetes) is a plus.
Posted 2 weeks ago
7.0 - 12.0 years
10 - 15 Lacs
Bengaluru
Hybrid
Hiring an AWS Data Engineer for a 6-month hybrid contractual role based in Bellandur, Bengaluru. The ideal candidate will have 7+ years of experience in data engineering, with strong expertise in AWS services (S3, EC2, RDS, Lambda, EKS), PostgreSQL, Redis, Apache Iceberg, and Graph/Vector Databases. Proficiency in Python or Golang is essential. Responsibilities include designing and optimizing data pipelines on AWS, managing structured and in-memory data, implementing advanced analytics with vector/graph databases, and collaborating with cross-functional teams. Prior experience with CI/CD and containerization (Docker/Kubernetes) is a plus.
Posted 3 weeks ago
4.0 - 6.0 years
10 - 20 Lacs
Pune
Work from Office
Role Overview
We are looking for experienced DevOps Engineers (4+ years) with a strong background in cloud infrastructure, automation, and CI/CD processes. The ideal candidate will have hands-on experience in building, deploying, and maintaining cloud solutions using Infrastructure-as-Code (IaC) best practices. The role requires expertise in containerization, cloud security, networking, and monitoring tools to optimize and scale enterprise-level applications.

Key Responsibilities
- Design, implement, and manage cloud infrastructure solutions on AWS, Azure, or GCP.
- Develop and maintain Infrastructure-as-Code (IaC) using Terraform, CloudFormation, or similar tools.
- Implement and manage CI/CD pipelines using tools like GitHub Actions, Jenkins, GitLab CI/CD, Bitbucket Pipelines, or AWS CodePipeline.
- Manage and orchestrate containers using Kubernetes, OpenShift, AWS EKS, AWS ECS, and Docker.
- Work on cloud migrations, helping organizations transition from on-premises data centers to cloud-based infrastructure.
- Ensure system security and compliance with industry standards such as SOC 2, PCI, HIPAA, GDPR, and HITRUST.
- Set up and optimize monitoring, logging, and alerting using tools like Datadog, Dynatrace, AWS CloudWatch, Prometheus, ELK, or Splunk.
- Automate deployment, configuration, and management of cloud-native applications using Ansible, Chef, Puppet, or similar configuration management tools.
- Troubleshoot complex networking issues, Linux/Windows server issues, and cloud-related performance bottlenecks.
- Collaborate with development, security, and operations teams to streamline the DevSecOps process.

Must-Have Skills
- 3+ years of experience in DevOps, cloud infrastructure, or platform engineering.
- Expertise in at least one major cloud provider: AWS, Azure, or GCP.
- Strong experience with Kubernetes, ECS, OpenShift, and container orchestration technologies.
- Hands-on experience in Infrastructure-as-Code (IaC) using Terraform, AWS CloudFormation, or similar tools.
- Proficiency in scripting/programming languages like Python, Bash, or PowerShell for automation.
- Strong knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
- Experience with Linux operating systems (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server administration.
- Expertise in networking (VPCs, subnets, load balancing, security groups, firewalls).
- Experience with log management and monitoring tools like Datadog, CloudWatch, Prometheus, ELK, or Dynatrace.
- Strong communication skills for working with cross-functional teams and external customers.
- Knowledge of cloud security best practices, including IAM, WAF, GuardDuty, CVE scanning, and vulnerability management.

Good-to-Have Skills
- Knowledge of cloud-native security solutions (AWS Security Hub, Azure Security Center, Google Security Command Center).
- Experience with compliance frameworks (SOC 2, PCI, HIPAA, GDPR, HITRUST).
- Exposure to Windows Server administration alongside Linux environments.
- Familiarity with centralized logging solutions (Splunk, Fluentd, AWS OpenSearch).
- GitOps experience with tools like Argo CD or Flux.
- Background in penetration testing, intrusion detection, and vulnerability scanning.
- Experience with cost optimization strategies for cloud infrastructure.
- Passion for mentoring teams and sharing DevOps best practices.
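As a tiny sketch of the log-analysis side of the monitoring and alerting tooling listed above: computing an error rate from structured log lines. The log format and sample lines here are invented for the example:

```python
from collections import Counter

# Invented sample lines: timestamp, level, message.
LOG_LINES = [
    "2024-05-01T10:00:00Z INFO request served",
    "2024-05-01T10:00:01Z ERROR upstream timeout",
    "2024-05-01T10:00:02Z INFO request served",
    "2024-05-01T10:00:03Z ERROR db connection refused",
]

def error_rate(lines):
    # Count log levels (second whitespace-separated field)
    # and return the fraction of ERROR entries.
    levels = Counter(line.split()[1] for line in lines)
    total = sum(levels.values())
    return levels["ERROR"] / total if total else 0.0

rate = error_rate(LOG_LINES)
print(f"error rate: {rate:.0%}")
```

In practice a tool like CloudWatch or Datadog would compute such a metric and fire an alert when it crosses a threshold; the script just shows the underlying calculation.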
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru, Bellandur
Hybrid
Hiring an AWS Data Engineer for a 6-month hybrid contractual role based in Bellandur, Bengaluru. The ideal candidate will have 4-6 years of experience in data engineering, with strong expertise in AWS services (S3, EC2, RDS, Lambda, EKS), PostgreSQL, Redis, Apache Iceberg, and Graph/Vector Databases. Proficiency in Python or Golang is essential. Responsibilities include designing and optimizing data pipelines on AWS, managing structured and in-memory data, implementing advanced analytics with vector/graph databases, and collaborating with cross-functional teams. Prior experience with CI/CD and containerization (Docker/Kubernetes) is a plus.
Posted 1 month ago
8.0 - 10.0 years
25 - 35 Lacs
Mangaluru
Hybrid
Work Mode: Hybrid (3 days in the Mangalore office, of which Tuesday/Wednesday are compulsory) Experience: 8-10 years of full-time experience in software engineering or related domains. Proven ability to design and deliver features autonomously, handling all aspects from concept to maintenance. Education: Bachelor's or Master's degree in Computer Science or a related field. The Role This role demands a versatile software professional with strong leadership qualities, a deep sense of ownership, and the ability to tackle challenges across the full stack. Required Skills: Technical Expertise: Proficiency in one or more of the following technologies: Frontend: ReactJS Backend & Serverless: AWS Cognito, GraphQL API (AWS AppSync), AWS Lambda Microservices: Event-driven architecture, AWS EKS, TypeScript/Node.js Database: Aurora Postgres, AWS QLDB Infrastructure: Infrastructure as Code (Terraform, Terragrunt) Familiarity with setting up foundational frameworks and tech stacks from scratch. Key Responsibilities: Build scalable and customer-centric software solutions aligned with business goals. Design, develop, test, deploy, and maintain features across the product lifecycle. Take full ownership of your work, from design and implementation to release and maintenance. Work in a hybrid mode, with in-office collaboration required on Tuesdays and Wednesdays.
Posted 1 month ago
8.0 - 13.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Sr. DevOps Engineer. They will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. If you have a solid background in software engineering and are familiar with AWS EKS, Istio/Service Mesh/Tetrate, Terraform, Helm charts, Kong API Gateway, Azure DevOps, Spring Boot, Ansible, and Kafka/MongoDB, we'd love to speak with you. Objectives of this Role Building and setting up new development tools and infrastructure Understanding the needs of stakeholders and conveying this to developers Working on ways to automate and improve development and release processes Testing and examining code written by others and analyzing results Identifying technical problems and developing software updates and fixes Working with software developers and software engineers to ensure that development follows established processes and works as intended Monitoring the systems and setting up required tools Daily and Monthly Responsibilities Deploy updates and fixes Provide Level 3 technical support Build tools to reduce occurrences of errors and improve customer experience Develop software to integrate with internal back-end systems Perform root cause analysis for production errors Investigate and resolve technical issues Develop scripts to automate visualization Design procedures for system troubleshooting and maintenance Skills and Qualifications B.Tech in Computer Science, Engineering, or a relevant field Experience as a DevOps Engineer or in a similar software engineering role, minimum 5-8 years Proficient with git and git workflows Good knowledge of Kubernetes (EKS), Terraform, CI/CD, and AWS Problem-solving attitude Collaborative team spirit
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Hyderabad
Work from Office
Summary: Are you passionate about cutting-edge cloud-native platforms and driven to build the foundational services that power enterprise-grade products? We're seeking a highly skilled and strategic Senior Product Manager (Technical) to own the Plexus Application Infrastructure Platform, a critical component of our cloud-native ecosystem. This pivotal role within our Platform Engineering organization is central to our mission: to build a durable competitive advantage by providing robust "building blocks" that accelerate value-to-market for all Thomson Reuters' products. Thomson Reuters leads at the intersection of content and technology with trusted data, workflow automation, and AI. You'll be instrumental in shaping the future of our digital product delivery, working closely with the dedicated Plexus Service Mesh team, which engineers and operates our sophisticated microservice platform based on Kubernetes and Istio. If you're ready to Compete to Win by driving innovation and helping us Obsess over our Customers by delivering exceptional infrastructure, we want to hear from you. About the Role As the Senior Product Manager (Technical) for the Plexus Application Infrastructure Platform, you will be the driving force behind our Service Mesh capability, a critical microservice platform built on Kubernetes and Istio. Your responsibilities will be diverse and impactful, requiring a strategic mindset and a collaborative spirit: Define and Champion Product Strategy: Develop and own the product vision, strategy, and roadmap for the Plexus Application Infrastructure Platform, aligning it with overall organizational goals and anticipating future technology trends, especially within the CNCF landscape. Obsess Over Our Customers: Serve as the authoritative voice of the customer for engineering teams, deeply understanding their needs and translating complex infrastructure challenges into clear, actionable requirements. 
You will prioritize the product backlog to maximize business and customer value, driving platform capabilities that foster adoption. Compete to Win: Proactively identify and assess new technologies, market trends, and competitive advantages in the cloud-native infrastructure space to ensure our platform remains at the forefront of innovation. Challenge Your Thinking: Advocate for innovative approaches to microservice architecture and platform design. You'll lead efforts to enhance transparency and collaboration across all product and engineering teams, always seeking better ways to build and deliver value. Act Fast. Learn Fast.: Exhibit extreme ownership of the platform's performance and reliability. You'll participate across the full development lifecycle, from Ideation and Design through Build, Test, and Operate, embracing our DevOps culture where 'you build it, you run it.' You'll continuously iterate, analyze metrics, and rapidly adapt to deliver an exceptional user experience. Stronger Together: Lead cross-functional product discovery and delivery, collaborating seamlessly with development managers, architects, scrum masters, software engineers, DevOps engineers, and other product managers. You will foster an environment where collective expertise achieves shared success. Drive Engineering Excellence: Establish and champion software engineering best practices, advocating for tooling that makes compliance frictionless and embedding a strong emphasis on test and deployment automation within our platform. About You We are looking for a visionary and hands-on leader who embodies a unique blend of deep technical understanding and astute product management expertise. You are someone who thrives in a dynamic, fast-paced environment and is driven to make a significant impact. Product Management: 5+ years of progressive experience in Product Management, with a significant portion dedicated to technical products, platforms, or infrastructure services.
(Candidates with a strong software development background (e.g., 6+ years) looking to transition into a technical product management role for cloud-native platforms will also be highly considered). Cloud-Native Expertise: Deep technical acumen in cloud-native infrastructure, with hands-on experience building or managing platforms on major cloud providers (AWS, Azure, GCP). Containerization & Service Mesh: Expert-level understanding and practical experience with Kubernetes (ideally AWS EKS and/or Azure AKS) and Istio or other Service Mesh technologies. DevOps & Infrastructure-as-Code: Familiarity with container security, supply chain security, declarative infrastructure-as-code (e.g., Terraform), CI/CD automation, and GitOps workflows. Architectural Understanding: Strong understanding of microservice architectures, API design principles, and distributed systems. Programming (Beneficial): An understanding of modern programming paradigms and languages (e.g., Golang, Python, Java) is highly beneficial, enabling effective collaboration with engineering teams. Problem-Solving: Exceptional problem-solving abilities, capable of dissecting complex technical challenges and translating them into clear product opportunities and solutions. Communication & Influence: Outstanding communication skills, with the ability to articulate complex technical concepts and product strategies clearly and concisely to diverse audiences, from engineers to executive leadership. Collaboration: A collaborative spirit and a history of successfully leading cross-functional teams, fostering an environment where every voice contributes to building the best possible platform. Strategic & Agile: Strategic thinking with a talent for balancing long-term vision with short-term execution. A strong sense of urgency, an agile mindset, and an insatiable curiosity that drives continuous learning and innovation. 
You're unafraid to challenge assumptions and push boundaries, constantly seeking better ways to build and deliver value. Customer Empathy: A customer-centric approach with a passion for understanding and addressing internal and external customer needs. Education: A bachelor's degree in business administration, computer science, computer engineering, a related technical field, or equivalent work experience. Relevant certifications (e.g., Certified Kubernetes Administrator (CKA), Product Management certifications, or cloud platform certifications) are a plus.
Posted 1 month ago
3.0 - 5.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Role Purpose The purpose of the role is to resolve, maintain, and manage the client's software/hardware/network based on the service requests raised by end-users, as per the defined SLAs, ensuring client satisfaction Do Ensure timely response to all tickets raised by the client end-user Provide service request solutioning while maintaining quality parameters Act as a custodian of the client's network/server/system/storage/platform/infrastructure and other equipment to keep track of their proper functioning and upkeep Keep a check on the number of tickets raised (dial home/email/chat/IMS), ensuring right solutioning as per the defined resolution timeframe Perform root cause analysis of the tickets raised and create an action plan to resolve the problem to ensure client satisfaction Provide acceptance and immediate resolution to high-priority tickets/service requests Install and configure software/hardware based on service requests 100% adherence to timeliness as per the priority of each issue, to manage client expectations and ensure zero escalations Provide application/user access as per client requirements and requests to ensure timely solutioning Track all tickets from acceptance to resolution stage as per the resolution time defined by the customer Maintain timely backup of important data/logs and management resources to ensure the solution is of acceptable quality to maintain client satisfaction Coordinate with the on-site team for complex problem resolution and ensure timely client servicing Review the logs which chatbots gather and ensure all service requests/issues are resolved in a timely manner
Deliver
No. | Performance Parameter | Measure
1. | 100% adherence to SLA/timelines | Multiple cases of red time; zero customer escalations; client appreciation emails
Mandatory Skills: AWS EKS Admin.
Posted 1 month ago
2.0 - 4.0 years
4 - 6 Lacs
Bengaluru
Work from Office
Role Purpose The purpose of the role is to resolve, maintain, and manage the client's software/hardware/network based on the service requests raised by end-users, as per the defined SLAs, ensuring client satisfaction Do Ensure timely response to all tickets raised by the client end-user Provide service request solutioning while maintaining quality parameters Act as a custodian of the client's network/server/system/storage/platform/infrastructure and other equipment to keep track of their proper functioning and upkeep Keep a check on the number of tickets raised (dial home/email/chat/IMS), ensuring right solutioning as per the defined resolution timeframe Perform root cause analysis of the tickets raised and create an action plan to resolve the problem to ensure client satisfaction Provide acceptance and immediate resolution to high-priority tickets/service requests Install and configure software/hardware based on service requests 100% adherence to timeliness as per the priority of each issue, to manage client expectations and ensure zero escalations Provide application/user access as per client requirements and requests to ensure timely solutioning Track all tickets from acceptance to resolution stage as per the resolution time defined by the customer Maintain timely backup of important data/logs and management resources to ensure the solution is of acceptable quality to maintain client satisfaction Coordinate with the on-site team for complex problem resolution and ensure timely client servicing Review the logs which chatbots gather and ensure all service requests/issues are resolved in a timely manner
Deliver
No. | Performance Parameter | Measure
1. | 100% adherence to SLA/timelines | Multiple cases of red time; zero customer escalations; client appreciation emails
Mandatory Skills: AWS EKS Admin.
Posted 1 month ago
1.0 - 3.0 years
4 - 8 Lacs
Bengaluru
Hybrid
Working Mode: Hybrid Payroll: IDESLABS Location: Pan India PF detection is mandatory Job Description: Snowflake Administration Experience: Managing user access, roles, and security protocols Setting up and maintaining database replication and failover procedures Setting up programmatic access OpenSearch Experience: Deploying and scaling OpenSearch domains Managing security and access controls Setting up monitoring and alerting General AWS Skills: Infrastructure as Code (CloudFormation) Experience building cloud-native infrastructure, applications, and services on AWS and Azure Hands-on experience managing Kubernetes clusters (administrative knowledge), ideally AWS EKS and/or Azure AKS Experience with Istio or other Service Mesh technologies Experience with container technology and best practices, including container and supply chain security Experience with declarative infrastructure-as-code tools like Terraform and Crossplane Experience with GitOps tools like ArgoCD
Posted 1 month ago
8.0 - 10.0 years
10 - 15 Lacs
Hyderabad
Work from Office
Summary As an employee at Thomson Reuters, you will play a role in shaping and leading the global knowledge economy. Our technology drives global markets and helps professionals around the world make decisions that matter. As the world's leading provider of intelligent information, we want your unique perspective to create the solutions that advance our business, and your career. About the Role As a Senior DevOps Engineer, you will be responsible for building and supporting AWS infrastructure used to host a platform offering audit solutions. This engineer is constantly looking to optimize systems and services for security, automation, and performance/availability, while ensuring solutions developed adhere and align to architecture standards. This individual is responsible for ensuring that technology systems and related procedures adhere to organizational values. The person will also assist Developers with technical issues in the initiation, planning, and execution phases of projects. These activities include: the definition of needs, benefits, and technical strategy; research & development within the project life cycle; technical analysis and design; and support of operations staff in executing, testing, and rolling out the solutions.
This role will be responsible for: Plan, deploy, and maintain critical business applications in prod/non-prod AWS environments Design and implement appropriate environments for those applications, engineer suitable release management procedures, and provide production support Influence broader technology groups in adopting Cloud technologies, processes, and best practices Drive improvements to processes and design enhancements to automation to continuously improve production environments Maintain and contribute to our knowledge base and documentation Provide leadership, technical support, user support, technical orientation, and technical education activities to project teams and staff Manage change requests between development, staging, and production environments Provision and configure hardware, peripherals, services, settings, directories, storage, etc. in accordance with standards and project/operational requirements Perform daily system monitoring, verifying the integrity and availability of all hardware, server resources, systems, and key processes, reviewing system and application logs, and verifying completion of automated processes Perform ongoing performance tuning, infrastructure upgrades, and resource optimization as required Provide Tier II support for incidents and requests from various constituencies Investigate and troubleshoot issues Research, develop, and implement innovative and, where possible, automated approaches for system administration tasks About you You are fit for the role of Senior DevOps Engineer if your background includes: Required: 8+ years, at Senior DevOps level. Knowledge of the AWS/Azure cloud platforms: S3, CloudFront, CloudFormation, RDS, OpenSearch, ActiveMQ.
Knowledge of CI/CD, preferably with AWS Developer Tools Scripting knowledge, preferably in Python, Bash, or PowerShell Have contributed as a DevOps engineer responsible for planning, building, and deploying cloud-based solutions Knowledge of building and deploying containers on Kubernetes (exposure to AWS EKS is preferable) Knowledge of infrastructure as code tools like Bicep, Terraform, and Ansible Knowledge of GitHub Actions, PowerShell, and GitOps Nice to have: Experience building and deploying .NET Core and Java-based solutions Strong understanding of an API-first strategy Knowledge and some experience implementing a testing strategy in a continuous deployment environment Have owned and operated continuous delivery/deployment. Have set up monitoring tools and disaster recovery plans to ensure business continuity.
Posted 2 months ago
4 - 9 years
6 - 11 Lacs
Bengaluru
Work from Office
Is your passion Cloud Native Platforms? That is, envisioning and building the core services that underpin all Thomson Reuters products? Then we want you on our India-based team! This role is in the Platform Engineering organization, where we build the foundational services that power Thomson Reuters products. We focus on the subset of capabilities that help Thomson Reuters deliver digital products to our customers. Our mission is to build a durable competitive advantage for TR by providing building blocks that get value-to-market faster. About the Role In this opportunity as a Senior Software Engineer, you will Establish software engineering best practices; provide tooling that makes compliance frictionless Drive a strong emphasis on test and deployment automation Participate in all aspects of the development lifecycle: Ideation, Design, Build, Test, and Operate. We embrace a DevOps culture (you build it, you run it); while we have dedicated 24x7 level-1 support engineers, you may be called on to assist with level-2 support Collaborate with all product teams; transparency is a must! Collaborate with development managers, architects, scrum masters, software engineers, DevOps engineers, product managers, and project managers to deliver phenomenal software Ongoing participation in a Scrum team, and an embrace of the agile work model Keep up to date with emerging cloud technology trends, especially in the CNCF landscape About You 4+ years software development experience 2+ years of experience building cloud-native infrastructure, applications, and services on AWS, Azure, or GCP Hands-on experience with Kubernetes, ideally AWS EKS and/or Azure AKS Experience with Istio or other Service Mesh technologies Experience with container security and supply chain security Experience with declarative infrastructure-as-code, CI/CD automation, and GitOps Experience with Kubernetes operators written in Golang A Bachelor's Degree in Computer Science, Computer Engineering, or similar
Posted 2 months ago
7 - 9 years
25 - 32 Lacs
Chennai, Bengaluru
Work from Office
Hiring Cloud Engineers for an 8-month contract role based in Chennai or Bangalore with hybrid/remote flexibility. The ideal candidate will have 8+ years of IT experience, including 4+ years in AWS cloud migrations, with strong hands-on expertise in AWS MGN, EC2, EKS, Terraform, and scripting using Python or Shell. Responsibilities include leading lift-and-shift migrations, automating infrastructure, migrating storage to EBS, S3, and EFS, and modernizing legacy applications. AWS/Terraform certifications and experience with monolithic and microservices architectures are preferred.
Posted 2 months ago
6 - 11 years
15 - 30 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Work from Office
Roles and Responsibilities Design, develop, test, deploy, and maintain scalable cloud-based applications using AWS EKS. Collaborate with cross-functional teams to identify requirements and implement solutions that meet business needs. Ensure high availability, scalability, security, and performance of deployed applications on Amazon EC2 instances. Troubleshoot issues related to containerized applications running on Fargate or Lambda functions. Participate in code reviews to ensure adherence to coding standards and best practices.
Posted 2 months ago