2.0 - 4.0 years
4 - 6 Lacs
Pune
Work from Office
Primary skills: Python, CloudFormation, Lambda, Storage Gateway, File Share, SQS, S3, event rules, API Gateway, CloudWatch, Aurora DB.
Roles & Responsibilities
- Gather and understand requirements.
- Create and document designs encompassing a multi-layered process, infrastructure, and IT services management system.
- CI/CD automation.
- Production support.
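For illustration, a minimal sketch of the kind of Lambda plus SQS plus S3 plumbing this role describes. The bucket name and key prefix are hypothetical placeholders, not values from the posting.
```python
# Minimal sketch: a Lambda handler that drains SQS records and archives them to S3.
# Bucket and prefix names are hypothetical placeholders.
import json
import boto3

s3 = boto3.client("s3")
ARCHIVE_BUCKET = "example-archive-bucket"  # hypothetical

def handler(event, context):
    """Triggered by an SQS event source mapping; one invocation may carry many records."""
    records = event.get("Records", [])
    for record in records:
        body = json.loads(record["body"])
        key = f"incoming/{record['messageId']}.json"
        s3.put_object(Bucket=ARCHIVE_BUCKET, Key=key, Body=json.dumps(body).encode("utf-8"))
    return {"processed": len(records)}
```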
Posted 1 week ago
2.0 - 7.0 years
5 - 15 Lacs
Hyderabad
Work from Office
Role Description
Aderant has an extensive cloud portfolio across multiple cloud providers, with a high requirement for operational visibility and responsive remediation. We are looking for capable and motivated engineers to join the team and participate in the continued maturity of our platform observability and monitoring development, to achieve high levels of production service management.
Responsibilities:
- Apply familiarity with the technical architecture patterns of systems and applications (dependencies, points of failure, impacts, and external and internal interfaces) to provide monitoring recommendations for applications and infrastructure.
- Facilitate observation of platform health through various monitoring platforms and frameworks, and resolve service-impacting issues through triage, runbook execution, and remediating actions.
- Communicate with all stakeholders, ensuring that they are aware of critical incidents and any observed patterns of occurrence.
- Champion monitoring processes and tools. Ensure that monitoring principles, processes, and tools are established and adhered to. Encourage teams to adopt these processes and tools by demonstrating best practices, giving presentations to groups, and creating/updating documentation.
- Consult with and advise senior management and engineers on monitoring topics.
- Create, refine, implement, document, and maintain appropriate monitoring policies and processes.
- Manage observability costs through architecture insight and operational reporting.
Skills & Qualifications:
- A minimum of two years of experience with monitoring, AWS cloud platform preferred.
- A minimum of three years of Windows administration experience.
- Proven skill in the use of AWS CloudWatch, CloudTrail, and Logs Insights.
- The ability to analyze and understand complex technical systems built from many separate software and hardware components, specifically web application architectures.
- Intermediate PowerShell skills.
- Understanding of basic web application architecture resources such as IIS, SQL, file services, etc.
- Understanding of basic AWS resources such as EC2, S3, Load Balancers, EBS, etc.
- Evidence of improving monitoring or other processes across a medium to large organization.
- Excellent verbal and written communication skills.
- Excellent time management and organizational skills.
- Ability to work effectively both independently and in a team-oriented environment.
Culture:
Our purpose is to help law firms excel and to improve people's lives. Our commitment to each other is to foster a culture of innovation, collaboration, and personal growth where challenging the norm is celebrated. Culture unites and defines our global company. We encourage our diverse workforce to bring their whole selves to work, bringing ideas, experience, and passion to drive our business forward. From entry level to experienced professional, our environment supports connection, the human kind.
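As a hedged illustration of the CloudWatch alerting work this role centers on, the sketch below creates a metric alarm that notifies an SNS topic when EC2 CPU stays high. The instance ID, alarm name, and topic ARN are hypothetical placeholders.
```python
# Minimal sketch: a CloudWatch alarm that pages an SNS topic when EC2 CPU stays high.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="example-web01-cpu-high",          # hypothetical name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
    Statistic="Average",
    Period=300,                                   # 5-minute datapoints
    EvaluationPeriods=3,                          # sustained for 15 minutes
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:example-oncall"],   # hypothetical
    TreatMissingData="breaching",                 # missing data counts as unhealthy
)
```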
Posted 2 weeks ago
5.0 - 10.0 years
18 - 30 Lacs
Pune, Bengaluru, Delhi / NCR
Hybrid
Job Location- Delhi NCR/Bangalore/Hyderabad/Pune/Mumbai/Chennai Shift timings- 1:30PM -11:30PM Work mode- Hybrid Exp- 5-8 years We are looking for AWS experts with the following exp- Experience Requirements: candidates with 5+ years of experience in AWS and cloud services. Exp in Serverless app dev. Python knowledge for the role, specifically for scripting to handle infrastructure and manipulate AWS services. Looking for candidates with experience in Python scripting for AWS services, not just web application development . Now is the time to bring your expertise to Insight. We are not just a tech company; we are a people-first company. We believe that by unlocking the power of people and technology, we can accelerate transformation and achieve extraordinary results. As a Fortune 500 Solutions Integrator with deep expertise in cloud, data, AI, cybersecurity, and intelligent edge, we guide organisations through complex digital decisions. Responsibilities Design and implement cloud security solutions using AWS services, ensuring compliance with industry standards Design and implement microservice based solutions using Serverless and Containerization services. Develop and maintain automation scripts and tools using Python to streamline security processes and enhance operational efficiency. Collaborate with DevOps, development, and security teams to integrate security best practices into the software development lifecycle (SDLC). Monitor cloud environments for security incidents and respond to alerts, conduct investigations and implement corrective actions, as required. Stay up to date with the latest cloud security trends, threats, and best practices, and provide recommendations for continuous improvement. Create and maintain documentation related to security policies, procedures, and compliance requirements. Provide mentorship and guidance to junior engineers and team members on cloud security and compliance practices. Key Skills Bachelor/masters degree in computer science, Information Technology, or a related field. 5+ years of experience in cloud engineering, with a focus on AWS services and cloud security. Strong proficiency in Python programming for automation and scripting. Hands-on experience with working on Python Automation testing using Unit tests and BDDs. In-depth knowledge of AWS security services (AWS Lambda, AWS IAM, S3, CloudWatch, SNS, SQS, Step Functions) is a must. Experience with microservice and containerization in AWS using Amazon EKS is a plus. Experience with Infrastructure as Code (IaC) tools such as AWS CloudFormation. Strong understanding of networking, encryption, and security protocols in cloud environments. Basic understanding of tools like Jenkins, artifactory is required. Excellent problem-solving skills and the ability to work independently and collaboratively in a fast-paced environment. Relevant certifications (e.g., AWS Certified Solutions Architect Associate, AWS Certified Developer Associate, AWS) are a plus. Experience on the AI services in the AWS environment in a plus. Excellent communication skills. What you can expect Were legendary for taking care of you, your family and to help you engage with your local community. We want you to enjoy a full, meaningful life and own your career at Insight. Some of our benefits include: Freedom to work from another locationeven an international destination—for up to 30 consecutive calendar days per year.
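To illustrate "Python scripting for AWS services, not just web application development," here is a minimal boto3 sketch that audits running EC2 instances for a required tag. The "Owner" tag convention is a hypothetical example, not a requirement from the posting.
```python
# Minimal sketch: infrastructure scripting with boto3 rather than web development.
# Flags running EC2 instances that are missing a required "Owner" tag (hypothetical convention).
import boto3

ec2 = boto3.client("ec2")

def untagged_instances(required_tag="Owner"):
    """Yield IDs of running instances that lack the required tag."""
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(Filters=[{"Name": "instance-state-name", "Values": ["running"]}])
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if required_tag not in tags:
                    yield instance["InstanceId"]

if __name__ == "__main__":
    for instance_id in untagged_instances():
        print(f"Missing Owner tag: {instance_id}")
```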
Posted 2 weeks ago
6.0 - 11.0 years
19 - 25 Lacs
Chennai, Bengaluru
Hybrid
Role & responsibilities
Title: Senior Backend Developer
Position Type: Full Time
Locations: Any location, but Chennai/Bangalore preferable.
Experience: 6+ years
Description: Seeking a senior-level developer with strong experience in backend development, especially in building and supporting highly available, resilient, and performant systems. Should be a self-starter who can quickly get familiar with our codebase and begin contributing within a few days of onboarding, with the following technical skills:
- Strong proficiency in Java (8 or above), with a solid understanding of functional and asynchronous APIs
- Hands-on experience with Scala 2 and a good understanding of reactive programming concepts
- Familiarity with Akka and Akka Streams
- Experience with messaging frameworks like Kafka, including handling advanced integration use cases
- Solid understanding of AWS services including DynamoDB, Neptune, Lambda, API Gateway, EC2, ECS
- Comfortable with DevOps and observability using CloudWatch logs, metrics, and traces
Posted 2 weeks ago
3.0 - 6.0 years
4 - 8 Lacs
Chennai
Work from Office
Role & responsibilities
Must-have skills:
1. Experience in deploying PHP/Magento-based applications
2. Experience with AWS EC2, ELB, Jenkins, GitLab
3. Experience in CloudWatch, CloudFront, DNS records, and WAF management
Technical Summary
DevOps engineer with hands-on experience in architecting, deploying, and managing secure and high-performance infrastructure for PHP-based applications.
- Operating Systems: Ubuntu, AlmaLinux, CentOS
- Web Servers: Apache, Nginx
- Databases: MySQL, MariaDB
- PHP Versions: 5.x to 8.x
- Monitoring & Observability: AWS CloudWatch, New Relic
- Development Environments: local PHP stack configuration, EC2 instance provisioning and configuration, load balancer (ELB) setup for high availability and traffic distribution
- Version Control: Git setup and handling (branching, merging, conflict resolution, hook scripting)
Log Debugging & Performance Tuning
- Proficient in analyzing system, web server, and application logs to identify critical errors, bottlenecks, or misconfigurations.
- Experienced in isolating slow or failed API/web requests using tools like AWS CloudWatch Logs, New Relic APM, and the ELK Stack.
- Investigates key performance metrics such as Time to First Byte (TTFB), page load time, database query latency, and PHP execution time.
- Identifies and fixes issues related to high TTFB by tuning PHP-FPM, optimizing Apache/Nginx configurations, and managing concurrent requests.
- Implements and audits caching strategies at various layers: OPcache for PHP, Redis/Memcached for object/session caching, and Varnish for full-page caching.
- Monitors and reduces repeated queries, long query execution, and large payload responses by analyzing SQL logs and query plans.
- Performs root cause analysis across infrastructure and application layers to ensure stability and reduce downtime.
- Follows a structured debugging process to replicate and resolve speed-related issues under real load conditions.
Core Competencies
- Infrastructure Design & Deployment: Expertise in designing and deploying scalable infrastructure for PHP applications using the LAMP stack (Linux, Apache, MySQL, PHP). Hands-on experience in provisioning EC2 instances, configuring web servers (Apache, Nginx), and setting up load balancers (ELB) for high availability.
- CI/CD Implementation: Skilled in implementing and managing automated CI/CD pipelines using tools like GitLab CI, Jenkins, and GitHub Actions to streamline application deployment and updates.
- Caching: Extensive knowledge of optimizing application performance through caching techniques, including Redis/Memcached for session and object caching, Varnish for full-page HTTP caching, and OPcache for PHP script caching to improve response times and reduce server load.
- CDN Integration: Proficient in integrating Cloudflare and AWS CloudFront as Content Delivery Networks (CDN) to enhance global content distribution, reduce latency, and improve site performance.
- Session Management: Expertise in configuring centralized session management using Redis in load-balanced environments to ensure session persistence across multiple servers.
- Log Aggregation & Monitoring: Strong experience in configuring log aggregation, monitoring, and alerting systems using AWS CloudWatch and New Relic, ensuring proactive issue detection and real-time response to performance bottlenecks or errors.
- DNS Configuration: Experienced in managing DNS records (A, CNAME, MX, TXT) using platforms like AWS Route 53 and Cloudflare. Configured DNS failover strategies to ensure high availability and minimize downtime, utilizing features like latency-based routing and geo-routing.
- WAF Management: Proficient in configuring AWS WAF, Cloudflare WAF, and ModSecurity to safeguard applications from security vulnerabilities like SQL injection, XSS, and CSRF. Able to create custom WAF rules, monitor traffic, and optimize security without impacting performance.
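As a sketch of the "isolating slow requests with CloudWatch Logs" work described above, the snippet below runs a Logs Insights query and polls for its results. The log group name and the response_time_ms field are hypothetical and depend on the actual access-log format.
```python
# Minimal sketch: use CloudWatch Logs Insights to surface the slowest web requests.
import time
import boto3

logs = boto3.client("logs")

QUERY = """
fields @timestamp, @message, response_time_ms
| filter response_time_ms > 2000
| sort response_time_ms desc
| limit 20
"""

def slowest_requests(log_group="/example/nginx/access", hours=1):
    now = int(time.time())
    query_id = logs.start_query(
        logGroupName=log_group,          # hypothetical log group
        startTime=now - hours * 3600,
        endTime=now,
        queryString=QUERY,
    )["queryId"]
    # Poll until the query finishes (Logs Insights queries are asynchronous).
    while True:
        response = logs.get_query_results(queryId=query_id)
        if response["status"] in ("Complete", "Failed", "Cancelled"):
            return response["results"]
        time.sleep(1)
```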
Posted 3 weeks ago
5 - 10 years
12 - 19 Lacs
Pune, Bengaluru, Delhi / NCR
Hybrid
Job Location: Delhi NCR / Bangalore / Hyderabad / Pune / Mumbai / Chennai
Shift timings: 1:30 PM to 11:30 PM
Work mode: Hybrid
Experience: 5-8 years
We are looking for AWS experts with the following experience.
Experience Requirements:
- 5+ years of experience in AWS and cloud services.
- Python knowledge, specifically scripting to handle infrastructure and manipulate AWS services. We are looking for candidates with experience in Python scripting for AWS services, not just web application development.
Now is the time to bring your expertise to Insight. We are not just a tech company; we are a people-first company. We believe that by unlocking the power of people and technology, we can accelerate transformation and achieve extraordinary results. As a Fortune 500 Solutions Integrator with deep expertise in cloud, data, AI, cybersecurity, and intelligent edge, we guide organisations through complex digital decisions.
Responsibilities
- Design and implement cloud security solutions using AWS services, ensuring compliance with industry standards.
- Design and implement microservice-based solutions using serverless and containerization services.
- Develop and maintain automation scripts and tools using Python to streamline security processes and enhance operational efficiency.
- Collaborate with DevOps, development, and security teams to integrate security best practices into the software development lifecycle (SDLC).
- Monitor cloud environments for security incidents and respond to alerts; conduct investigations and implement corrective actions as required.
- Stay up to date with the latest cloud security trends, threats, and best practices, and provide recommendations for continuous improvement.
- Create and maintain documentation related to security policies, procedures, and compliance requirements.
- Provide mentorship and guidance to junior engineers and team members on cloud security and compliance practices.
Key Skills
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in cloud engineering, with a focus on AWS services and cloud security.
- Strong proficiency in Python programming for automation and scripting.
- Hands-on experience with Python automation testing using unit tests and BDD.
- In-depth knowledge of AWS security services (AWS Lambda, AWS IAM, S3, CloudWatch, SNS, SQS, Step Functions) is a must.
- Experience with microservices and containerization in AWS using Amazon EKS is a plus.
- Experience with Infrastructure as Code (IaC) tools such as AWS CloudFormation.
- Strong understanding of networking, encryption, and security protocols in cloud environments.
- Basic understanding of tools like Jenkins and Artifactory is required.
- Excellent problem-solving skills and the ability to work independently and collaboratively in a fast-paced environment.
- Relevant AWS certifications (e.g., AWS Certified Solutions Architect Associate, AWS Certified Developer Associate) are a plus.
- Experience with AI services in the AWS environment is a plus.
- Excellent communication skills.
What you can expect
We're legendary for taking care of you and your family and helping you engage with your local community. We want you to enjoy a full, meaningful life and own your career at Insight. Some of our benefits include: freedom to work from another location, even an international destination, for up to 30 consecutive calendar days per year.
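For the "Python automation to streamline security processes" angle, here is a hedged compliance-style sketch that flags S3 buckets without a full public access block. Bucket names come from whatever account the credentials point at; nothing here is specific to this employer.
```python
# Minimal sketch: flag S3 buckets whose public access block is missing or incomplete.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_missing_public_access_block():
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(config.values()):       # any of the four settings disabled
                flagged.append(name)
        except ClientError as error:
            if error.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)           # no block configured at all
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print(f"Public access block incomplete: {name}")
```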
Posted 1 month ago
8 - 13 years
12 - 22 Lacs
Gurugram
Work from Office
Data & Information Architecture Lead, 8 to 15 years, Gurgaon
Summary
An excellent opportunity for Data Architect professionals with expertise in Data Engineering, Analytics, AWS, and databases. Location: Gurgaon.
Your Future Employer: A leading financial services provider specializing in delivering innovative and tailored solutions to meet the diverse needs of our clients, offering a wide range of services including investment management, risk analysis, and financial consulting.
Responsibilities
- Design and optimize the architecture of an end-to-end data fabric, inclusive of the data lake, data stores, and EDW, in alignment with EA guidelines and standards for cataloging and maintaining data repositories.
- Undertake detailed analysis of information management requirements across all systems, platforms, and applications to guide the development of information management standards.
- Lead the design of the information architecture across multiple data types, working closely with business partners/consumers, the MIS team, the AI/ML team, and other departments to design, deliver, and govern future-proof data assets and solutions.
- Design and ensure delivery excellence for (a) large and complex data transformation programs, (b) small and nimble data initiatives to realize quick gains, and (c) work with OEMs and partners to bring the best tools and delivery methods.
- Drive data domain modeling, data engineering, and data resiliency design standards across the microservices and analytics application fabric for autonomy, agility, and scale.
Requirements
- Deep understanding of the data and information architecture discipline, processes, concepts, and best practices.
- Hands-on expertise in building and implementing data architecture for large enterprises.
- Proven architecture modelling skills; strong analytics and reporting experience.
- Strong data design, management, and maintenance experience.
- Strong experience with data modelling tools.
- Extensive experience with cloud-native lake technologies, e.g., AWS native lake solutions.
Posted 1 month ago
2 - 4 years
3 - 4 Lacs
Chennai
Work from Office
Planning and designing cloud infrastructure with AWS. Technical experience with cloud and datacenter technologies, including private and public cloud. Deploying new cloud-based solutions such as EC2, VPC, VPN, EFS, FSx, S3, SNS, CloudWatch, and SQS. Call 7397778272.
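As a minimal illustration of the EC2/VPC deployment work this posting lists, the sketch below launches an instance into an existing subnet with boto3. The AMI ID, key pair, and subnet ID are hypothetical placeholders.
```python
# Minimal sketch: launching an EC2 instance into an existing VPC subnet with boto3.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # hypothetical AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="example-keypair",              # hypothetical key pair
    SubnetId="subnet-0123456789abcdef0",    # hypothetical subnet in the target VPC
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "example-app-server"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```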
Posted 2 months ago
6 - 10 years
5 - 15 Lacs
Delhi NCR, Bengaluru, Hyderabad
Hybrid
Here are the various AWS services we need expertise in:
- CDK with Python
- Amazon AppFlow
- Step Functions
- Lambda
- S3
- EventBridge
- CloudWatch / CloudTrail / X-Ray
- GitHub
Preferred candidate profile: immediate joiners only.
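To show how a few of these services fit together in CDK with Python, here is a hedged sketch assuming aws-cdk-lib v2. The stack name, construct IDs, and the local "lambda" asset directory are hypothetical.
```python
# Minimal sketch of a CDK (Python) stack: EventBridge schedule -> Lambda -> S3.
import aws_cdk as cdk
from aws_cdk import (
    Duration,
    Stack,
    aws_events as events,
    aws_events_targets as targets,
    aws_lambda as _lambda,
    aws_s3 as s3,
)
from constructs import Construct


class ExamplePipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        bucket = s3.Bucket(self, "LandingBucket")

        fn = _lambda.Function(
            self, "IngestFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),   # hypothetical local directory
            environment={"BUCKET_NAME": bucket.bucket_name},
        )
        bucket.grant_write(fn)                         # least-privilege write access

        # Run the ingest function every hour via an EventBridge rule.
        rule = events.Rule(self, "HourlySchedule",
                           schedule=events.Schedule.rate(Duration.hours(1)))
        rule.add_target(targets.LambdaFunction(fn))


app = cdk.App()
ExamplePipelineStack(app, "ExamplePipelineStack")
app.synth()
```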
Posted 2 months ago
3 - 8 years
4 - 9 Lacs
Bengaluru
Hybrid
Hi, we have an urgent opening for TechOps-DE-CloudOps-Senior for the Bangalore location.
The opportunity
As a Senior Data Engineer, this role will play a pivotal part in managing and optimizing large-scale data architectures that are crucial for providing valuable insights to business users and downstream systems. We are looking for an innovative and experienced professional who is adept at overseeing data flow from diverse sources and ensuring the continuous operation of production systems. Your expertise will be instrumental in maintaining data platforms that empower front-end analytics, contributing to the effectiveness of Takeda's dashboards and reporting tools. As a key member of the Analytics Production Support team, you will ensure seamless end-to-end data flow and coordinate with stakeholders and team members across various regions, including India and Mexico. The ability to manage major incidents effectively, including handling Major Incident Management (MIM) bridges, is crucial, as is flexibility to work in a 24x7x365 support model.
Your key responsibilities
- Manage and maintain the data pipeline (ETL/ELT layer) to guarantee high availability and performance.
- Resolve data quality issues within Service Level Agreement (SLA) parameters by coordinating with cross-functional teams and stakeholders.
- Proactively monitor the system and take pre-emptive measures against alerts such as Databricks job failures and data quality issues.
- Monitor and maintain AWS data services, including S3, DMS, Step Functions, and Lambda, to ensure efficient and reliable data loading processes.
- Conduct thorough analyses of code repositories to understand Databricks job failures and determine appropriate corrective actions.
- Take ownership of support tickets, ensuring timely and effective resolution.
- Manage major incidents with meticulous attention to detail, ensuring compliance with regulatory requirements and effective data presentation.
- Perform root cause analysis for major and recurring incidents and propose solutions for permanent resolution.
- Identify and execute automation opportunities to enhance operational efficiency.
- Escalate complex issues to the next level of support to ensure swift resolution.
- Mentor junior team members, providing a structured training plan for skill enhancement and professional growth.
Skills and attributes for success
- 3 to 8 years of experience in data analytics, with a focus on maintaining and supporting ETL data pipelines using Databricks and AWS services.
- Proficiency in Databricks and PySpark for code debugging and root cause analysis.
- Proven experience in a production support environment and readiness to work in a 24x7 support model.
- Strong understanding of: relational SQL databases; data engineering programming languages (e.g., Python); distributed data technologies (e.g., PySpark); cloud platform deployment and tools (e.g., Kubernetes); AWS cloud services and technologies (e.g., Lambda, S3, DMS, Step Functions, EventBridge, CloudWatch, RDS); Databricks/ETL processes.
- Familiarity with ITIL principles.
- Effective communication skills for collaborating with multifunctional teams and strategic partners.
- Strong problem-solving and troubleshooting abilities.
- Capacity to thrive in a dynamic environment and adapt to evolving business needs.
- Commitment to continuous integration and delivery principles to automate code deployment and improve code quality.
Familiarity with the following tools and technologies will be considered an added advantage: Power BI, Informatica Intelligent Cloud Services (IICS), Tidal. Must be a proactive learner, eager to cross-skill and advance within our innovative team environment.
To qualify for the role, you must have
- Databricks Associate Certification (required); ITIL 4 Foundation Level Certification is a plus.
- Relational SQL databases.
- Data engineering programming languages (e.g., Python).
- Distributed data technologies (e.g., PySpark).
- Cloud platform deployment and tools (e.g., Kubernetes).
- AWS cloud services and technologies (e.g., Lambda, S3, DMS, Step Functions, EventBridge, CloudWatch, RDS).
- Databricks/ETL processes.
What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations (Argentina, China, India, the Philippines, Poland, and the UK) and with teams from all EY service lines, geographies, and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills, and insights that will stay with you throughout your career.
- Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We'll give you the insights, coaching, and confidence to be the leader the world needs.
- Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.
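As a small example of the pipeline-monitoring side of this role, the sketch below lists recent failed Step Functions executions for triage. The state machine ARN is a hypothetical placeholder for the pipeline being supported.
```python
# Minimal sketch: list recent failed Step Functions executions for triage.
import boto3

sfn = boto3.client("stepfunctions")

STATE_MACHINE_ARN = "arn:aws:states:us-east-1:111122223333:stateMachine:example-data-load"  # hypothetical

def failed_executions(limit=10):
    response = sfn.list_executions(
        stateMachineArn=STATE_MACHINE_ARN,
        statusFilter="FAILED",
        maxResults=limit,
    )
    return [(e["name"], e["startDate"]) for e in response["executions"]]

if __name__ == "__main__":
    for name, started in failed_executions():
        print(f"FAILED: {name} (started {started})")
```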
Posted 2 months ago
10 - 15 years
32 - 37 Lacs
Hyderabad
Work from Office
Position Overview:
Join the team that manages the end-to-end technology and life cycle management of the applications hosted both on premise and in the AWS cloud. We are looking for an experienced Senior Software Engineer with more than 10 years of experience to join the Data and Analytics Engineering (D&AE) operational effectiveness team. The Senior Advisor will be responsible for ensuring the development of reports and data feeds using a SAS-based platform, the migration of reports to a cloud-based platform, and the overall team deliverables. The ideal candidate will have a strong technical background and experience in creating and managing complex data queries and pipelines.
Responsibilities:
- Strong knowledge of SAS with programming experience.
- Develop and manage reports and data feeds involving complex queries.
- Perform root cause analysis on data pipeline failures and data discrepancies.
- Work with business partners to resolve identified defects in a timely manner.
- Understand the requirements and use cases provided.
- Automate application health checks and manual workarounds using Python.
- Enable self-healing capabilities to reduce human intervention and user impact and to increase availability.
- Partner with other teams and stakeholders on any clarifications needed for timely deliverables.
- Ability to provide alternative solutions.
- Experienced with agile methodology.
- Mentor junior members of the team on industry best practices, adoption, and maturity.
- Help team members and users with technical challenges.
- Responsible for the team deliverables.
Qualifications
Required Skills:
- Experience with large dataset management.
- Experience with data pipeline scheduling.
- Experience with AWS services: IAM, EC2, S3, CloudWatch, Step Functions, Lambda, MWAA, SQS, SNS, Glue, Athena, etc.
- Experience with continuous integration and continuous delivery (CI/CD) tools like GitHub, Jenkins, etc.
- Experience with the MS Office suite and VBA, along with good presentation skills.
- Experience with monitoring and logging tools such as Dynatrace or Splunk.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Required Experience & Education:
- Bachelor's degree in a related field.
- 10+ years of experience in a developer role.
- 6+ years of hands-on development experience with Python.
- 6+ years of experience with SAS programming.
- 4+ years with relational database management systems like Oracle, SQL Server, Teradata, MongoDB, or PostgreSQL.
- 3+ years of experience in AWS.
Desired Experience: AWS certification preferred.
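For the "automate application health checks and self-healing" responsibility, a hedged sketch under assumed conditions: the health URL, instance ID, and SNS topic ARN are hypothetical, and a reboot stands in for whatever the documented workaround actually is.
```python
# Minimal sketch: an application health check with a simple self-healing action.
import urllib.error
import urllib.request
import boto3

HEALTH_URL = "https://app.example.internal/health"                      # hypothetical
INSTANCE_ID = "i-0123456789abcdef0"                                     # hypothetical
ALERT_TOPIC = "arn:aws:sns:us-east-1:111122223333:example-app-alerts"   # hypothetical

ec2 = boto3.client("ec2")
sns = boto3.client("sns")

def check_and_heal():
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=10) as response:
            if response.status == 200:
                return "healthy"
    except (urllib.error.URLError, TimeoutError):
        pass
    # Unhealthy: apply the documented workaround (here, a reboot) and notify the team.
    ec2.reboot_instances(InstanceIds=[INSTANCE_ID])
    sns.publish(TopicArn=ALERT_TOPIC, Message=f"Health check failed; rebooted {INSTANCE_ID}")
    return "rebooted"

if __name__ == "__main__":
    print(check_and_heal())
```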
Posted 2 months ago
6 - 8 years
8 - 12 Lacs
Bengaluru
Work from Office
Responsibilities
- Maintaining several critical applications that are part of a big platform (15+ services written in Go/Scala/Java); the applications serve clients all over the world.
- Deploying those services to warehouses.
- Providing 24/7 support (in office and out of office hours) for the applications; paid extra.
- Performing security updates, bug fixes, application improvements, and configuration changes.
- Applying SRE best practices and improving SRE across the organization.
- Creating an advanced observability and alerting framework around those services.
Requirements
- Deep understanding of Go's syntax, core libraries, idioms, and best practices.
- Experience with goroutines, channels, and Go's concurrency patterns to build efficient, scalable applications.
- Ability to design, build, and maintain RESTful APIs or gRPC services, often as part of microservices architectures.
- Familiarity with both SQL (Postgres) and NoSQL (DynamoDB or MongoDB) databases and integrating them within Go applications.
- Amazon AWS services and infrastructure (RDS, ECS, SQS, CloudWatch).
- Prometheus, New Relic, Grafana.
- Familiarity with continuous delivery concepts.
- Experience working in agile environments and effective communication within cross-functional teams.
- A desire to learn from others to enhance your breadth and depth of application knowledge and the WMS domain.
- Aiming to be a versatile asset in SRE, ready to engage with multiple programming languages.
- Self-motivated, results-oriented team player.
- Proficient in communication and writing.
- Strong problem-solving, issue identification, and process optimization skills.
Nice to have
- Tech geek: experience with other languages (Java, C#, Scala, Python).
- DevOps experience: experience with containerization (e.g., Docker), orchestration (e.g., Kubernetes), and continuous integration/continuous deployment (CI/CD) pipelines.
Posted 2 months ago
6 - 10 years
12 - 20 Lacs
Chennai
Work from Office
AWS & GCP Senior Consultants - Digital Transformation
Location: Tiruvanmiyur, Chennai
Experience: 6+ years
Employment Type: Full-time (Onsite)
Notice Period: Immediate joiners or candidates with a notice period of 30 days or less
Background Verification: Required
No. of positions: 2 (1 each)
About the Company
Our client is a leading provider of digital transformation solutions for enterprises. They specialize in cloud modernization, automation, and security across AWS and GCP platforms. We are hiring experienced AWS & GCP Senior Consultants to manage, optimize, and support enterprise cloud environments. If you have a strong technical background in cloud operations and automation, this opportunity is for you.
AWS Senior Consultant
Key Responsibilities
- Manage and maintain AWS infrastructure for enterprise clients.
- Implement AWS security best practices (IAM, Security Groups, WAF, etc.).
- Automate cloud operations using Terraform, CloudFormation, and Ansible.
- Optimize cloud costs and resource utilization.
- Ensure compliance with industry standards and regulations.
- Troubleshoot issues and ensure high availability of cloud services.
- Collaborate with cross-functional teams for seamless cloud operations.
Required Skills & Qualifications
- Expertise in AWS services (EC2, S3, RDS, VPC, Lambda, IAM, Route 53, etc.).
- Hands-on experience with Infrastructure-as-Code (Terraform, CloudFormation).
- Strong knowledge of DevOps tools (CI/CD, Git, Jenkins, Docker, Kubernetes).
- Understanding of networking concepts (VPC, VPN, Direct Connect).
- Familiarity with AWS monitoring and logging (CloudWatch, AWS Config, GuardDuty).
- AWS certifications (AWS Solutions Architect, AWS SysOps) preferred.
GCP Senior Consultant
Key Responsibilities
- Manage and support GCP cloud infrastructure for enterprise clients.
- Implement GCP security best practices (IAM, Firewall, Cloud Armor, VPC Service Controls).
- Automate cloud operations using Terraform, Deployment Manager, or Ansible.
- Optimize cloud resources for cost efficiency and performance.
- Monitor and troubleshoot GCP environments for high availability.
- Ensure compliance with security and regulatory frameworks (ISO, GDPR, etc.).
- Provide technical leadership and mentoring to junior engineers.
Required Skills & Qualifications
- Strong expertise in Google Cloud services (Compute Engine, GKE, Cloud Storage, BigQuery, Cloud SQL, IAM, VPC).
- Hands-on experience with Infrastructure-as-Code (Terraform, Deployment Manager, Ansible).
- Experience with DevOps and CI/CD tools (GitHub, GitLab, Jenkins, Kubernetes, Docker).
- Strong understanding of networking concepts (VPC, VPN, Interconnect, Cloud DNS).
- Experience in GCP monitoring and logging (Cloud Logging, Stackdriver, Operations Suite).
- GCP certifications (Professional Cloud Architect, Professional DevOps Engineer) preferred.
Why Join Us?
- Work on cutting-edge enterprise cloud transformation projects.
- Collaborate with a dynamic and talented cloud engineering team.
- Career growth opportunities in a fast-paced digital environment.
Posted 2 months ago
3 - 5 years
3 - 4 Lacs
Chennai
Work from Office
Planning and designing cloud infrastructure with AWS. Technical experience with cloud and datacenter technologies, including private and public cloud. Deploying new cloud-based solutions such as EC2, VPC, VPN, EFS, FSx, S3, SNS, CloudWatch, and SQS. Call 7397778272.
Posted 3 months ago
4 - 9 years
12 - 22 Lacs
Hyderabad
Work from Office
Hi everyone, we are looking for candidates with experience as .NET developers with AWS.
Mandatory skills: .NET Core, C#, AWS, Lambda, CloudWatch, EC2.
Location: Hyderabad.
Notice period: 0 to 15 days.
Interested candidates can share their resume at the mail ID below: anusha.p@precisiontechcorp.com
Posted 3 months ago
8 - 12 years
30 - 35 Lacs
Hyderabad
Work from Office
S&P Dow Jones Indices is seeking a Lead Database Developer to be a key player in the implementation and support of platforms for S&P Dow Jones Indices. This role requires a seasoned technologist who contributes to application development and maintenance. The candidate should actively evaluate new products and technologies to build solutions that streamline business operations. The candidate must be delivery-focused with solid financial applications experience, and will assist in day-to-day support and operations functions, design, development, and unit testing.
Responsibilities and Impact:
- Lead the design and implementation of database solutions using Postgres and other relational databases.
- Ensure database performance, security, and reliability; coordinate with application teams to define database design.
- Collaborate with cross-functional teams to support data-driven initiatives.
- Mentor junior team members and promote best practices.
- Oversee database maintenance and troubleshooting.
- Drive innovation by evaluating and integrating new technologies.
- Produce system design documents and participate in technical walkthroughs.
- Perform application and system performance tuning and troubleshoot performance issues.
- Effectively interact with global customers, business users, and IT staff.
What We're Looking For:
Basic Required Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or Engineering, or equivalent work experience.
- 8+ years of IT experience in application support or development.
- Strong experience in database environments with SQL and PL/SQL programming.
- In-depth knowledge of Oracle and PostgreSQL architecture.
- Drive end-to-end availability, performance monitoring, and capacity planning for databases using tools such as AWS DMS and CloudWatch.
- Experience with Spark and NoSQL-related database technologies is preferred.
- Experience working with multi-threaded, high-performance, low-latency messaging systems.
- Experience with AWS cloud-based technologies.
- Experience using system tools, source control systems, utilities, and third-party products.
- Experience with financial applications such as index/benchmarks, asset management, portfolio investment modeling, or trading systems is preferred.
- Excellent communication skills, with strong verbal and writing proficiencies.
Additional Preferred Qualifications:
- Proficiency in building data analytics solutions on AWS Cloud.
- Thorough understanding of replication strategies and implementation of HA protocols for 99.99% uptime.
- Experience with implementation of data governance and data lineage.
- Proficiency in implementing data models in SQL and NoSQL databases.
- Experience with microservice and serverless architecture implementation.
Posted 3 months ago
8 - 13 years
6 - 10 Lacs
Hyderabad
Work from Office
Title: AWS Connect, Lambda, Python.
Mandatory skills: AWS Connect, Amazon Connect Agent Workspace, Python, Amazon Lex, Lambda integration, Lambda, Step Functions, S3, API Gateway, DynamoDB, CloudWatch, CloudFormation, IAM, CloudFront.
Must have good communication skills and team-leading experience.
Qualification
1. At least 8 years of AWS Connect experience.
2. Must have experience working with and implementing Amazon Connect Agent Workspace for contact centers.
3. Must have experience working with and implementing AWS Connect and Amazon Connect Agent Workspace.
4. Must have experience customizing Amazon Connect Agent Workspace, integrating with third-party applications, and designing agent workflow guides. Experience in building conversational interfaces for applications using voice and text, leveraging the Lex service.
5. Should be capable of writing CloudFormation templates.
6. Should have experience working with Azure DevOps and GitHub.
Core working hours will be from 1 PM IST to 10:30 PM IST.
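As a hedged sketch of the Connect-to-Lambda integration this role involves, the handler below looks up a caller in DynamoDB and returns flat key/value pairs for use as contact attributes. The table name, key, and attribute names are hypothetical, and the event shape reflects the usual Amazon Connect invocation payload as understood here.
```python
# Minimal sketch: a Lambda function invoked from an Amazon Connect contact flow.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-customer-profiles")  # hypothetical table keyed by phone number

def handler(event, context):
    # Connect passes contact details under event["Details"].
    contact = event["Details"]["ContactData"]
    phone = contact.get("CustomerEndpoint", {}).get("Address", "")

    item = table.get_item(Key={"phone_number": phone}).get("Item")

    # Connect expects a flat dictionary of string values back; these become contact attributes.
    if item:
        return {"customerFound": "true", "customerName": str(item.get("name", ""))}
    return {"customerFound": "false", "customerName": ""}
```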
Posted 3 months ago
7 - 12 years
9 - 14 Lacs
Bengaluru
Work from Office
- Expertise in development using Core Java, J2EE, Spring Boot, microservices, and web services; SOA experience with SOAP as well as RESTful services using JSON formats, with Kafka messaging.
- Working proficiency in enterprise development toolsets like Jenkins, Git/Bitbucket, Sonar, Black Duck, Splunk, Apigee, etc.
- Experience with AWS cloud monitoring tools like Datadog, CloudWatch, and Lambda is needed.
- Experience with XACML authorization policies.
- Experience with NoSQL and SQL databases such as Cassandra, Aurora, and Oracle.
- Good understanding of React JS, the Photon framework, design, and Kubernetes.
- Working with Git/Bitbucket, Maven, Gradle, and Jenkins tools to build and deploy code to production environments.
Posted 3 months ago
4 - 7 years
8 - 14 Lacs
Ahmedabad
Work from Office
- Full-stack development with Node.js, Next.js, and AngularJS.
- AWS services: EC2, S3, IoT Core, Lambda, DynamoDB, API Gateway, and CloudWatch.
- Proficiency in JavaScript, HTML5, CSS3, and modern frontend frameworks.
- Familiarity with RESTful services and cloud architecture.
Required candidate profile
- 4+ years of experience.
- Build, integrate, and maintain APIs and cloud-based infrastructure.
- Optimize performance and scalability across both frontend and backend systems.
Posted 3 months ago