14 - 19 years
16 - 20 Lacs
Bengaluru
Work from Office
Technical Architect - J48767
The Architect will lead the design and solutioning of scalable, high-performance systems. The ideal candidate will have expertise in Microservices Architecture, AWS, and containerization technologies like Docker and Kubernetes, and will be responsible for modernizing production systems, creating POCs, and ensuring system performance.
Key Responsibilities:
- Design and implement Microservices-based architectures.
- Lead solutioning for cloud-based applications, primarily using AWS (EC2, RDS, EKS, S3, etc.).
- Develop and validate POCs to demonstrate architecture viability.
- Optimize system performance and scalability.
- Work with stakeholders to design API gateways and other integrations.
- Modernize live production systems and ensure seamless transitions.
- Create architectural diagrams using Draw.io and collaborate using Confluence and Miro.
Required Candidate Profile:
- Experience: 14 to 20 years
- Degree: BE-Comp/IT, BTech-Comp/IT, ME-Comp/IT
Posted 3 months ago
8 - 12 years
27 - 32 Lacs
Noida
Work from Office
Responsibilities:
- Help us create an end-to-end framework to encompass all the dimensions of non-functional testing.
- Implement a performance test framework using JMeter.
- Own and manage the performance test environment.
- Stay up to date with industry best practices and emerging trends in performance testing and non-functional testing methodologies.
- Understand non-functional requirements (stress, spike, capacity, load, and scalability), develop performance test scripts, and follow the test plan schedule and scope.
- Run performance tests on the application under test.
- Report defects and performance KPIs as part of the performance test report.
- Provide feedback on process improvement when there's an opportunity, and strive to apply testing best practices in performance testing.
- Experience in testing high-volume web and batch-based transaction enterprise applications.
- Experience in setup and management of test environments on different platforms (Windows/Unix).
- Good understanding of troubleshooting performance issues encountered during performance testing.
- Collaborate with development, QA, and business teams to identify performance bottlenecks and drive the implementation of performance tuning and optimization solutions.
- Conduct load testing, stress testing, scalability testing, and endurance testing to assess the system's ability to handle high volumes of traffic and data.
- Establish performance benchmarks and baselines for different environments and configurations.
- Report and communicate performance testing results, findings, and recommendations to stakeholders, including senior management and technical teams.
- Work with the different project stakeholders to help define and document performance SLAs, requirements, and expectations around critical factors such as response time, throughput, transactions per second, concurrent users, CPU utilization, memory, disk, network utilization, thread counts, connection pooling, and hit ratios.
- Develop data-driven test scripts and execute performance and load testing of the applications.
- Review the performance test scripts created or modified by peers to ensure compliance with standards.
- Ability to interpret network/system diagrams and performance test results, and identify improvements.
- Understand business scenarios in depth to define workload modelling for different scenarios.
- Able to accurately analyze performance test results and present the analysis in both technical and business language.
What We're Looking For:
- Education level: Engineering graduate
- Years of experience: 8 to 12 years
- Experience with Azure DevOps and working with Agile teams
- Technologies: C# .NET, .NET Core, Angular, JavaScript, TypeScript, PowerShell, YAML, REST, Git
- Automation: Selenium, WebdriverIO, SpecFlow, Appvance
- Tools: JMeter, BlazeMeter, Lighthouse, WebPageTest, Containers, Grafana, InfluxDB
- Ability to write and understand complex SQL queries
- Proficient in writing and interpreting XML scripts
- Knowledge of containers and microservice API testing
- Familiar with AWS for managing performance scripts and pipelines
- Knowledge of Docker for application deployment in containerized environments
- Ability to analyze logs and use monitoring tools such as CloudWatch and Splunk
- Strong understanding of, and hands-on exposure to, different types of non-functional testing, such as performance (load, stress, scalability, etc.), security, usability, compatibility, reliability, failover, and accessibility
- Fair understanding of AWS (Amazon Web Services) services like S3 buckets, EC2, Lambda, CloudFormation, AMIs, etc.
Good-to-have skills: Exposure to the BFSI domain.
Key soft skills and personal characteristics: Self-motivated team player. Passionate about test automation and technologies. Adaptive to new teams, technologies, and projects. Asks questions and challenges the status quo. Ability to influence development and product counterparts to drive quality. Strong written and verbal communication skills.
About S&P Global Market Intelligence: At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit .
Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing the energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it.
We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
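As a concrete illustration of the KPI-reporting work this posting describes (throughput, response-time percentiles), the sketch below computes transactions per second and a nearest-rank p95 latency from sample response times. The numbers, the 10-second test duration, and the percentile convention are illustrative assumptions, not part of the job description.

```python
# Illustrative only: sample response times (seconds), such as a JMeter run might report.
samples = sorted([0.21, 0.35, 0.30, 1.20, 0.28, 0.45, 0.33, 0.50, 0.27, 0.40])

def percentile(sorted_vals, pct):
    # Nearest-rank percentile, one common convention for latency KPIs.
    k = max(0, int(round(pct / 100.0 * len(sorted_vals))) - 1)
    return sorted_vals[k]

test_duration_s = 10.0                        # assumed test window
throughput = len(samples) / test_duration_s   # transactions per second
p95 = percentile(samples, 95)                 # 95th-percentile response time

print(f"throughput={throughput} tps, p95={p95}s")
```

In a real report these figures would be read from the tool's aggregate listener or result file rather than hard-coded.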
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Bengaluru
Work from Office
Roles and Responsibilities:
- Plan, execute, and deploy multiple software releases to non-production and production environments.
- 5+ years of build automation and release management experience.
- Experience using Docker, Jenkins, and Linux.
- Experience automating build and deployment pipelines using scripts (Ant or shell).
Good to have:
- Extensive experience architecting, designing and programming applications and microservices in an AWS Cloud environment and other cloud platforms.
- Experience designing and building applications using AWS services such as EC2 and AWS Elastic Beanstalk.
- Experience architecting highly available systems that utilize load balancing, horizontal scalability, and high availability.
- Agile software development expertise.
- Experience with continuous integration tools (e.g. Jenkins).
- Hands-on familiarity with Terraform.
- Hands-on familiarity with Kubernetes, Docker and other orchestration tools.
- Experience with configuration management platforms (e.g. Chef, Puppet, Salt, or Ansible).
- Strong scripting skills (e.g. Python, Bash, shell).
- Strong practical application experience on Linux.
Posted 3 months ago
0 - 5 years
10 - 20 Lacs
Bengaluru
Work from Office
Hi, Greetings from Sun Technology Integrators!! This is regarding a job opening with Sun Technology Integrators, Bangalore. Please find the job description below for your reference. Kindly let me know your interest and share your updated CV to nandinis@suntechnologies.com with the below details ASAP: C.CTC, E.CTC, notice period, current location, whether you are serving notice period/immediate, experience in Snowflake, and experience in Matillion. Shift timings: 2:00 PM-11:00 PM (free cab facility for drop, plus food). Please let me know if any of your friends are looking for a job change; kindly share references. Only serving/immediate candidates can apply. Interview process: 2 rounds (virtual) + final round (F2F). Please note: WFO - Work From Office (no hybrid or work from home). Mandatory skills: Snowflake, SQL, ETL, Data Ingestion, Data Modeling, Data Warehouse, Python, Matillion, AWS S3, EC2. Preferred skills: SSIR, SSIS, Informatica, Shell Scripting. Venue details: Sun Technology Integrators Pvt Ltd, No. 496, 4th Block, 1st Stage, HBR Layout (a stop ahead of Nagawara, towards K. R. Puram), Bangalore 560043. Company URL: www.suntechnologies.com. Thanks and regards, Nandini S | Sr. Technical Recruiter, Sun Technology Integrators Pvt. Ltd., nandinis@suntechnologies.com, www.suntechnologies.com
Posted 3 months ago
7 - 10 years
8 - 12 Lacs
Hyderabad
Work from Office
Responsibilities: Demonstrate the ability to acquire new skills and techniques and apply them within assigned engineering tasks. Perform analysis and troubleshooting of highly advanced software constructs. Champion the adoption of standard coding practices and procedures by the team. Participate in the research, design, and development of complex software components. Construct unit tests over complex algorithms to ensure a high degree of quality in code. Collaborate with other engineers on the team and across the technology organization. Provide high-level estimates at a project level. Participate in functional requirements review meetings with Product Owners. Mentor other team members. Challenge the team to think about code quality in terms of long-term maintainability. Take an active role in ensuring the team meets sprint commitments. Participate in cross-functional meetings. Proactively convey details regarding project status and deliverables to key stakeholders. Qualifications: Bachelor's degree in computer science or a related field. Modern Angular is a must-have. 7+ years' experience in the following is required: Agile methodology; C#, ASP.NET 4.5+, .NET Core; MS SQL relational database design and querying; JavaScript frameworks (jQuery, Angular 8+, Node); ORM tools (NHibernate, Dapper); Amazon Web Services (S3, EC2, Lambda, SNS, SQS, etc.); microservice and event-driven architectures; SaaS/multi-tenant platforms; caching platforms (Redis/Memcached). Experience with the following is preferred: Jira, GitHub, Office365, Slack, Zoom, Confluence.
Posted 3 months ago
5 - 8 years
8 - 10 Lacs
Hyderabad
Work from Office
S&P Dow Jones Indices is seeking a Python/Big Data developer to be a key player in the implementation and support of data platforms for S&P Dow Jones Indices. This role requires a seasoned technologist who contributes to application development and maintenance. The candidate should actively evaluate new products and technologies to build solutions that streamline business operations. The candidate must be delivery-focused with solid financial applications experience, and will assist in day-to-day support and operations functions, design, development, and unit testing. Responsibilities and Impact: Lead the design and implementation of EMR/Spark workloads using Python, including data access from relational databases and cloud storage technologies. Implement powerful new functionalities using Python, PySpark, AWS and Delta Lake. Independently come up with optimal designs for business use cases and implement them using big data technologies. Enhance existing functionalities in Oracle/Postgres procedures and functions. Performance-tune existing Spark jobs. Implement new functionalities in Python, Spark, and Hive. Collaborate with cross-functional teams to support data-driven initiatives. Mentor junior team members and promote best practices. Respond to technical queries from the operations and product management teams. What We're Looking For - Basic Required Qualifications: Bachelor's degree in computer science, information systems, or engineering, or equivalent work experience. 5-8 years of IT experience in application support or development. Hands-on development experience writing effective and scalable Python programs. Deep understanding of OOP concepts and development models in Python. Knowledge of popular Python libraries/ORM libraries and frameworks. Exposure to unit testing frameworks like pytest.
Good understanding of Spark architecture, as the system involves data-intensive operations. Solid work experience in Spark performance tuning. Experience/exposure with the Kafka messaging platform. Experience with build technologies like Maven and PyBuilder. Exposure to AWS offerings such as EC2, RDS, EMR, Lambda, S3, and Redis. Hands-on experience with at least one relational database (Oracle, Sybase, SQL Server, PostgreSQL). Hands-on experience with SQL queries and writing stored procedures and functions. A strong willingness to learn new technologies. Excellent communication skills, with strong verbal and writing proficiencies. Additional Preferred Qualifications: Proficiency in building data analytics solutions on AWS Cloud. Experience with microservice and serverless architecture implementation.
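Since the posting asks for scalable Python plus exposure to pytest, here is a minimal, hypothetical example of the style of unit-tested helper involved; `normalize_weights` and its behavior are invented for illustration and are not part of the role description.

```python
# Hypothetical helper of the kind this role would unit-test with pytest:
# rescale index constituent weights so they sum to 1.0.
def normalize_weights(weights):
    total = sum(weights.values())
    if total == 0:
        raise ValueError("weights must not sum to zero")
    return {name: w / total for name, w in weights.items()}

# pytest discovers functions named test_*; plain asserts serve as the checks.
def test_normalize_weights():
    out = normalize_weights({"AAPL": 2.0, "MSFT": 1.0, "AMZN": 1.0})
    assert abs(sum(out.values()) - 1.0) < 1e-9
    assert out["AAPL"] == 0.5

test_normalize_weights()  # runnable directly as well as under `pytest`
```

The same function-plus-`test_*` shape scales to PySpark jobs by testing the pure transformation logic separately from the Spark session.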
Posted 3 months ago
10 - 15 years
25 - 30 Lacs
Chennai, Hyderabad, Noida
Hybrid
Job title: AWS Solution Delivery Expert
Job Summary: We are seeking a highly skilled AWS solution delivery expert to join our team. The ideal candidate should have extensive experience in designing, implementing, and managing AWS solutions for business teams across various regions. This role requires a deep understanding of AWS services, good problem-solving skills, and the ability to communicate technical concepts effectively to the required stakeholders.
Job description:
- Support the AWS environment.
- Understand the business requirements from external collaborators to create end-to-end infrastructure designs and deliver solutions on the AWS cloud platform.
- Assist scientists and external collaborators throughout the entire infrastructure delivery process, guiding them in provisioning necessary AWS services and utilizing the NVS infrastructure efficiently.
- Proficiency in AWS services like EC2, VPC, S3, RDS, Lambda, CloudFront, EBS, EFS, ASG, IAM, ELB, DataSync, Route53, EKS, ECS, etc.
- Experience on Linux/Windows OS and CI/CD implementation using Jenkins, Ansible, AWS CloudFormation, Terraform, and containerization technologies such as Docker, K8s and HPC.
- Proficiency with monitoring, logging, and troubleshooting tools for AWS such as CloudWatch, CloudTrail, Splunk, etc.
- Familiarity with data analysis tools and programming/scripting languages such as SQL, Python, and Bash.
- Technical expertise in virtualization, on-premises storage, network connectivity, data protection (backup and recovery), DR and HA.
- Expertise in designing AWS infrastructure architecture diagrams using Draw.io.
- Hands-on experience implementing data transfer solutions from external vendor accounts to internal accounts and vice versa.
- Collaborate with cross-functional teams, including developers, architects, and project managers, to ensure successful solution delivery.
Qualifications:
- Bachelor's degree and a minimum of 5+ years' experience with a strong technical background.
- Proven experience as an AWS Solution Architect and in DevOps, preferably within the life sciences industry.
- Excellent communication and presentation skills to effectively collaborate with business and internal teams.
- Familiarity with regulatory frameworks and standards applicable to the life sciences industry, such as GxP, HIPAA and FDA regulations.
- Stay up to date with the latest AWS services, features, and best practices.
- AWS Certified Solutions Architect certification is an added advantage.
- Ability to work independently and collaboratively in a team environment.
- Flexibility to work during the US time zone.
Posted 3 months ago
8 - 13 years
8 - 18 Lacs
Bangalore Rural
Work from Office
Hi, We are looking for an AWS administrator. Please check the job description below. Responsibilities: 1. AWS Administration: - Manage and maintain a complex and scalable AWS environment, ensuring optimal performance, security, and reliability. - Implement and monitor AWS services, including EC2 instances, S3 storage, EMR, RDS databases, Lambda functions, and more. - Troubleshoot and resolve infrastructure issues, network connectivity problems, and performance bottlenecks. - Collaborate with cross-functional teams to design and implement solutions that align with business needs. 2. Production Deployment: - Lead and execute the end-to-end process of production deployments and release coordination. - Develop and maintain deployment pipelines using tools such as AWS CodePipeline, Jenkins, or similar. - Implement best practices for version control, configuration management, and infrastructure as code (IaC) using tools like Git and CloudFormation. - Perform release management and rollback procedures, and ensure zero-downtime deployments. - Collaborate with development and QA teams to ensure proper testing and validation of deployment processes. 3. Infrastructure Automation: - Automate routine operational tasks, such as provisioning and scaling of resources, using infrastructure automation tools. - Create and maintain scripts and templates for infrastructure provisioning and configuration management. 4. Security and Compliance: - Implement security best practices for AWS environments, including identity and access management (IAM), encryption, and monitoring. - Ensure compliance with industry standards and regulations, and actively participate in security audits and assessments. 5. Performance Optimization: - Monitor system performance and make recommendations for improvements to ensure high availability and scalability. - Optimize AWS resources to maximize cost efficiency while maintaining performance. 6. Documentation and Knowledge Sharing: - Document system architecture, configurations, and deployment processes. - Share knowledge and mentor junior team members in AWS administration and production deployment practices. Interested candidates, please share your CV to shereena.muthukutty@thakralone.in
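One common pattern behind the automation responsibilities above is generating an S3 lifecycle configuration for backup tiering and expiry (the dict shape accepted by boto3's `put_bucket_lifecycle_configuration`). The sketch below only builds the policy as plain data, with no AWS call; the prefix, rule ID, and retention numbers are illustrative assumptions.

```python
import json

def backup_lifecycle_policy(prefix, glacier_after_days=30, expire_after_days=365):
    # Build an S3 lifecycle configuration: transition backups under `prefix`
    # to Glacier after N days, then expire them after a retention period.
    return {
        "Rules": [{
            "ID": f"backup-tiering-{prefix}",
            "Filter": {"Prefix": prefix},
            "Status": "Enabled",
            "Transitions": [{"Days": glacier_after_days, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": expire_after_days},
        }]
    }

policy = backup_lifecycle_policy("db-backups/")
print(json.dumps(policy, indent=2))
```

In practice the dict would be passed to `s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=policy)`, or expressed equivalently in a CloudFormation template.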
Posted 3 months ago
1 - 4 years
2 - 5 Lacs
Mumbai
Work from Office
Key Responsibilities: Cloud Infrastructure: Design, implement, and manage scalable, secure, and cost-effective AWS infrastructure. Monitor cloud environments, ensuring uptime, security, and performance. Database Optimization (PostgreSQL): Fine-tune PostgreSQL databases for optimal performance. Perform query optimization, indexing strategies, and database health checks. Implement database backup, recovery, and disaster recovery strategies. CI/CD Pipeline Management: Build and maintain automated CI/CD pipelines using tools like Jenkins, GitLab CI, AWS CodePipeline, etc. Implement infrastructure as code (IaC) using Terraform, CloudFormation, or similar tools. Automate testing, deployment, and release processes to ensure faster delivery cycles. Collaboration & Documentation: Collaborate with development, QA, and security teams to streamline deployment and development cycles. Maintain clear documentation of infrastructure, CI/CD processes, and database optimization strategies. Required Skills: Cloud Platforms: Strong experience with AWS services (EC2, RDS, VPC, S3, CloudWatch, IAM, etc.). Database: Expertise in PostgreSQL database management, query performance tuning, replication, and optimization. CI/CD: Proficiency in building and managing CI/CD pipelines (Jenkins, GitLab CI, AWS CodePipeline, or similar). Scripting: Knowledge of scripting languages like Bash, Python, or Shell. IaC Tools: Hands-on experience with Terraform, CloudFormation, or Ansible. Monitoring & Logging: Experience with monitoring tools (CloudWatch, Prometheus, Grafana, ELK stack, etc.). Preferred Qualifications: AWS Certified (Solutions Architect, DevOps Engineer, or similar certification). Experience with Docker & Kubernetes. Familiarity with security best practices in AWS & database environments. Strong problem-solving skills and the ability to work in a fast-paced environment.
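The query-optimization and indexing work described above can be demonstrated end to end. This sketch uses Python's stdlib sqlite3 so it is self-contained; the posting targets PostgreSQL, where `EXPLAIN` plays the same role as SQLite's `EXPLAIN QUERY PLAN`. The table, index, and data are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
cur.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    # Return the query-plan detail strings for a statement.
    return [row[3] for row in cur.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)   # no usable index yet: plan reports a full table scan
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)    # same query now reported as an index search

print(before, after)
```

The same before/after comparison, run in PostgreSQL with `EXPLAIN (ANALYZE)`, is the usual health-check step before and after adding an index.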
Posted 3 months ago
4 - 6 years
7 - 8 Lacs
Noida
Work from Office
Notice Period: Immediate to 15 days Work Mode: Onsite (Client Office) Primary Skills: Backend Development: Node.js, Express.js/Koa.js/Socket.io Frontend: JavaScript, HTML, CSS, AJAX Databases: MongoDB (Expert Level), PostgreSQL, Redis, MySQL Async Programming: Callbacks, Promises, Async/Await Cloud Services: AWS (EC2, ELB, AutoScaling, CloudFront, S3) Queue Systems: Kafka Job Scheduler: Bull Infrastructure: Docker, Kubernetes (K8s) Logging & Monitoring: Expertise in logging, tracing, and application monitoring Secondary Skills: Experience working with large datasets Familiarity with Frontend frameworks like Vue.js (Preferred) Good understanding of Data Structures, Algorithms, and Operating Systems Job Responsibilities: Develop and optimize consumer-facing web and app products Work with Node.js and at least one backend framework (Express.js, Koa.js, or Socket.io) Implement efficient async programming techniques using Callbacks, Promises, and Async/Await Manage databases including MongoDB, PostgreSQL, Redis, and MySQL Utilize AWS services for cloud-based application development Work with Kafka for queue management and Bull for job scheduling Implement frontend functionality using JavaScript, HTML, CSS, and AJAX Monitor and optimize application performance using logging, tracing, and monitoring tools Collaborate with cross-functional teams to ensure smooth project execution To Apply: Share resumes with: Current CTC Expected CTC Location Preference
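The posting's stack is Node.js, but the async/await pattern it asks for is language-agnostic; as a hedged sketch in Python's asyncio, the snippet below runs three simulated I/O calls concurrently, the same shape as `Promise.all` in JavaScript. The names and delays are illustrative, not from the posting.

```python
import asyncio

async def fetch(name, delay):
    # Simulated I/O-bound call (e.g., a database or HTTP request).
    await asyncio.sleep(delay)
    return name

async def main():
    # Run the three "requests" concurrently rather than sequentially;
    # total wall time is roughly the slowest call, not the sum.
    return await asyncio.gather(
        fetch("users", 0.02), fetch("orders", 0.01), fetch("stock", 0.03)
    )

results = asyncio.run(main())
print(results)  # gather() preserves the order of its arguments
```

The equivalent Node.js form would be `const results = await Promise.all([fetchUsers(), fetchOrders(), fetchStock()])`.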
Posted 3 months ago
5 - 10 years
10 - 20 Lacs
Hyderabad
Work from Office
Responsibilities:
- Understand business requirements to create end-to-end infrastructure designs and deliver solutions on the AWS cloud platform.
- Assist scientists and external collaborators throughout the infrastructure delivery process, guiding them in provisioning necessary AWS services and utilizing the AWS infrastructure efficiently.
- Collaborate with cross-functional teams, including developers, architects, and project managers, to ensure successful solution delivery.
- Stay up to date with the latest AWS services, features, and best practices.
Requirements:
- Bachelor's degree and a minimum of 5+ years' experience with a strong technical background.
- Proven experience as an AWS Solution Architect and in DevOps, preferably within the life sciences industry.
- Familiarity with regulatory frameworks and standards applicable to the life sciences industry, such as GxP, HIPAA, and FDA regulations.
- Proficiency in AWS services like EC2, VPC, S3, RDS, Lambda, CloudFront, EBS, EFS, ASG, IAM, ELB, DataSync, Route53, EKS, ECS, etc.
- Experience on Linux/Windows OS and CI/CD implementation using Jenkins, Ansible, AWS CloudFormation, Terraform, and containerization technologies such as Docker, K8s and HPC.
- Proficiency with monitoring, logging, and troubleshooting tools for AWS such as CloudWatch, CloudTrail, Splunk, etc.
- Familiarity with data analysis tools and programming/scripting languages such as SQL, Python, and Bash.
- Technical expertise in virtualization, on-premises storage, network connectivity, data protection (backup and recovery), DR and HA.
- Expertise in designing AWS infrastructure architecture diagrams using Draw.io.
- Hands-on experience implementing data transfer solutions from external vendor accounts to internal accounts and vice versa.
- Flexibility to work during the US time zone.
- Ability to work independently and collaboratively in a team environment.
- Excellent communication and presentation skills to effectively collaborate with business and internal teams.
- B2+ English level proficiency.
Nice to have: AWS Certified Solutions Architect certification is an added advantage.
Posted 3 months ago
2 - 7 years
2 - 6 Lacs
Bengaluru
Work from Office
Job Summary: We are seeking an AWS Engineer with approximately 2 years of experience to join our growing team. The ideal candidate will have hands-on experience in designing, deploying, and managing AWS cloud infrastructure. You will work closely with development and operations teams to ensure the reliability, scalability, and security of our cloud-based applications. Responsibilities: Design, implement, and maintain AWS cloud infrastructure using best practices. Deploy and manage applications on AWS services such as EC2, S3, RDS, VPC, and Lambda. Implement and maintain CI/CD pipelines for automated deployments. Monitor and troubleshoot AWS infrastructure and applications to ensure high availability and performance. Implement security best practices and ensure compliance with security policies. Automate infrastructure tasks using infrastructure-as-code tools (e.g., CloudFormation, Terraform). Collaborate with development and operations teams to resolve technical issues. Document infrastructure configurations and operational procedures. Participate in on-call rotations as needed. Optimize AWS costs and resource utilization. Required Skills and Qualifications: Bachelor's degree in computer science, information technology, or a related field. 2+ years of hands-on experience with AWS cloud services. Proficiency in AWS services such as EC2, S3, RDS, VPC, IAM, and Lambda. Experience with infrastructure-as-code tools (e.g., CloudFormation, Terraform). Experience with CI/CD pipelines and tools (e.g., Jenkins, AWS CodePipeline). Excellent problem-solving and troubleshooting skills. Strong communication and collaboration skills. Desire to learn new technologies. Perks & Benefits: Health and Wellness: Healthcare policy covering your family and parents. Food: Enjoy a scrumptious buffet lunch at the office every day (for Bangalore). Professional Development: Learn and propel your career. We provide workshops, funded online courses and other learning opportunities based on individual needs.
Rewards and Recognitions: Recognition and rewards programs are in place to celebrate your achievements and contributions. Why join Relanto? Health & Family: Comprehensive benefits for you and your loved ones, ensuring well-being. Growth Mindset: Continuous learning opportunities to stay ahead in your field. Dynamic & Inclusive: Vibrant culture fostering collaboration, creativity, and belonging. Career Ladder: Internal promotions and a clear path for advancement. Recognition & Rewards: Celebrate your achievements and contributions. Work-Life Harmony: Flexible arrangements to balance your commitments. To find out more about us, head over to our website.
Posted 3 months ago
4 - 9 years
20 - 35 Lacs
Pune
Hybrid
Job Title: AWS Data Engineer
About Us: Capco, a Wipro company, is a global technology and management consulting firm. We were awarded Consultancy of the Year in the British Bank Awards and have been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence across 32 cities around the globe, we support 100+ clients across the banking, financial services and energy sectors, and are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO? You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, projects that will transform the financial services industry.
MAKE AN IMPACT: Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK: Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT: With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION: We believe that diversity of people and perspective gives us a competitive advantage.
Skill set:
- Design, develop, test, deploy, and maintain large-scale data pipelines using AWS Glue.
- Good understanding of Spark/PySpark.
- Collaborate with cross-functional teams to gather requirements and deliver high-quality ETL solutions.
- Understanding and technical knowledge of AWS services like EC2 and S3, with prior project experience using them.
- Strong AWS development experience for data ETL/pipeline/integration/automation work.
- Deep understanding of the BI & Analytics solution development lifecycle.
- Good understanding of AWS services like Redshift, Glue, Lambda, Athena, S3, EC2.
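As a rough illustration of the filter-then-aggregate shape a Glue/PySpark ETL job typically takes, the sketch below drops failed records and sums amounts per customer. This is plain Python, not the Glue API, and the record fields are invented; a PySpark version would express the same logic as `filter`, `groupBy`, and `sum` on a DataFrame.

```python
# Records as dicts, mimicking rows an ETL job might read from S3.
rows = [
    {"customer": "a", "amount": 120.0, "status": "ok"},
    {"customer": "b", "amount": 80.0,  "status": "failed"},
    {"customer": "a", "amount": 50.0,  "status": "ok"},
]

def transform(records):
    # Filter out bad records, then aggregate amount per customer --
    # the same filter/groupBy/sum shape a Glue/PySpark job would express.
    totals = {}
    for r in records:
        if r["status"] != "ok":
            continue
        totals[r["customer"]] = totals.get(r["customer"], 0.0) + r["amount"]
    return totals

print(transform(rows))  # {'a': 170.0}
```

Keeping the transformation a pure function like this makes it unit-testable independently of the Glue job wrapper that reads from and writes to S3.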
Posted 3 months ago
12 - 18 years
25 - 40 Lacs
Pune
Hybrid
Job Title: Manager-AWS Data Engineer
About Us: Capco, a Wipro company, is a global technology and management consulting firm. We were awarded Consultancy of the Year in the British Bank Awards and have been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence across 32 cities around the globe, we support 100+ clients across the banking, financial services and energy sectors, and are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO? You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, projects that will transform the financial services industry.
MAKE AN IMPACT: Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK: Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT: With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION: We believe that diversity of people and perspective gives us a competitive advantage.
Skill set:
- Design, develop, test, deploy, and maintain large-scale data pipelines using AWS Glue.
- Good understanding of Spark/PySpark.
- Collaborate with cross-functional teams to gather requirements and deliver high-quality ETL solutions.
- Understanding and technical knowledge of AWS services like EC2 and S3, with prior project experience using them.
- Strong AWS development experience for data ETL/pipeline/integration/automation work.
- Deep understanding of the BI & Analytics solution development lifecycle.
- Good understanding of AWS services like Redshift, Glue, Lambda, Athena, S3, EC2.
Posted 3 months ago
6 - 11 years
8 - 14 Lacs
Bengaluru
Work from Office
Role: MSSQL Server & MongoDB Database Administrator (AWS Cloud). We are seeking a highly skilled SQL Server database administrator with expertise in AWS cloud environments. The ideal candidate will have a deep understanding of SQL Server database administration and cloud-native technologies, along with strong hands-on experience managing databases hosted on AWS. This role involves ensuring the performance, availability, and security of SQL Server databases in a cloud-first environment. Key Responsibilities: Hands-on expertise in data migration from on-premises databases to MongoDB Atlas. Experience in creating clusters, databases, and users. SQL Server Administration: Install, configure, upgrade, and manage SQL Server databases hosted on AWS EC2 and RDS. AWS Cloud Integration: Design, deploy, and manage SQL Server instances using AWS services like RDS, EC2, S3, CloudFormation, and IAM. Performance Tuning: Optimize database performance through query tuning, indexing strategies, and resource allocation within AWS environments. High Availability and Disaster Recovery: Implement and manage HA/DR solutions such as Always On Availability Groups, Multi-AZ deployments, or read replicas on AWS. Backup and Restore: Configure and automate backup strategies using AWS services like S3 and lifecycle policies while ensuring database integrity and recovery objectives. Security and Compliance: Manage database security, encryption, and compliance standards (e.g., GDPR, HIPAA) using AWS services like KMS and GuardDuty. Monitoring and Automation: Monitor database performance using AWS CloudWatch, SQL Profiler, and third-party tools; automate routine tasks using PowerShell, AWS Lambda, or AWS Systems Manager. Collaboration: Work closely with development, DevOps, and architecture teams to integrate SQL Server solutions into cloud-based applications. Documentation: Maintain thorough documentation of database configurations, operational processes, and security procedures.
Required Skills and Experience: 6+ years of experience in SQL Server database administration and 3+ years of experience in MongoDB administration. Extensive hands-on experience with AWS cloud services (e.g., RDS, EC2, S3, VPC, IAM). Proficiency in T-SQL programming and query optimization. Strong understanding of SQL Server HA/DR configurations in AWS (Multi-AZ, read replicas). Experience with monitoring and logging tools such as AWS CloudWatch, CloudTrail, or third-party solutions. Knowledge of cloud cost management and database scaling strategies. Familiarity with infrastructure-as-code tools (e.g., CloudFormation, Terraform). Strong scripting skills with PowerShell, Python, or similar languages. Preferred Skills and Certifications: Knowledge of database migration tools like AWS DMS or native backup/restore processes for cloud migrations. Understanding of AWS security best practices and tools such as KMS, GuardDuty, and AWS Config. Certifications such as AWS Certified Solutions Architect, AWS Certified Database Specialty, or Microsoft Certified: Azure Database Administrator Associate. Educational Qualification: Bachelor's degree in Computer Science, Information Technology, or a related field. Skills: PRIMARY COMPETENCY: Data Engineering; PRIMARY SKILL: Microsoft SQL Server APPS DBA; PRIMARY SKILL PERCENTAGE: 70; SECONDARY COMPETENCY: Data Engineering; SECONDARY SKILL: MongoDB APPS DBA; SECONDARY SKILL PERCENTAGE: 30
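The backup-and-restore duties above typically involve a retention (lifecycle) policy deciding which backups to keep. A toy sketch of a simple keep-recent-dailies-plus-weekly-Sundays policy; the 7-daily/4-weekly windows are assumptions for illustration, not the employer's actual policy (which on AWS would usually live in an S3 lifecycle rule):

```python
from datetime import date, timedelta

def backups_to_keep(backup_dates, today, daily=7, weekly=4):
    """Return the subset of backup dates retained under a simplified
    GFS-style policy: keep the last `daily` daily backups, plus Sunday
    backups younger than `weekly` weeks."""
    keep = set()
    for d in backup_dates:
        age = (today - d).days
        if 0 <= age < daily:                          # recent dailies
            keep.add(d)
        elif d.weekday() == 6 and age < weekly * 7:   # Sunday weeklies
            keep.add(d)
    return keep

today = date(2024, 3, 31)  # a Sunday
dates = [today - timedelta(days=i) for i in range(30)]
kept = backups_to_keep(dates, today)
print(sorted(kept))
```

With 30 daily backups, this keeps the 7 most recent plus the three older in-window Sundays, and everything else becomes eligible for deletion or cold-storage transition.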
Posted 3 months ago
10 - 15 years
37 - 45 Lacs
Bengaluru
Work from Office
Bachelor's degree in Computer Science/Information Technology or a related technical field, or equivalent technology experience. 10+ years of experience in software development. 8+ years of experience in DevOps. Experience with the following cloud-native tools: Git, Jenkins, Grafana, Prometheus, Ansible, Artifactory, Vault, Splunk, Consul, Terraform, Kubernetes. Working knowledge of containers, i.e., Docker and Kubernetes, ideally with experience transitioning an organization through its adoption. Demonstrable experience with configuration, orchestration, and automation tools such as Jenkins, Puppet, Ansible, Maven, and Ant to provide full-stack integration. Strong working knowledge of enterprise platforms, tools, and principles including web services, load balancers, shell scripting, authentication, IT security, and performance tuning. Demonstrated understanding of system resiliency, redundancy, failovers, and disaster recovery. Experience working with a variety of vendor APIs including cloud, physical, and logical infrastructure devices. Strong working knowledge of cloud offerings and cloud DevOps services (EC2, ECS, IAM, Lambda, AWS CodeBuild, CodeDeploy, CodePipeline, etc., or Azure DevOps, API Management, PaaS). Experience managing and deploying Infrastructure as Code, using tools like Terraform, Helm charts, etc. Manage and maintain standards for DevOps tools used by the team.
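The CI/CD tooling listed above (Jenkins, CodeBuild/CodeDeploy and friends) is, at its core, an ordered set of fail-fast stages. A minimal sketch of that control flow; the stage names and the failing deploy step are illustrative, not taken from any real pipeline:

```python
# Fail-fast pipeline runner: stages run in order; the first failure
# aborts the run, mirroring how a declarative Jenkins pipeline behaves.

def run_pipeline(stages):
    """stages: list of (name, callable) pairs. Returns (status, log)."""
    log = []
    for name, step in stages:
        try:
            step()
            log.append((name, "ok"))
        except Exception as exc:
            log.append((name, f"failed: {exc}"))
            return "FAILED", log
    return "SUCCESS", log

def deploy():
    # Hypothetical failing step, standing in for a real deploy action.
    raise RuntimeError("no credentials")

status, log = run_pipeline([
    ("build", lambda: None),
    ("test", lambda: None),
    ("deploy", deploy),
])
print(status, log)
```

Real CI servers add retries, parallel stages, and artifact passing, but the fail-fast ordering shown here is the invariant they all preserve.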
Posted 3 months ago
6 - 8 years
8 - 10 Lacs
Hyderabad
Work from Office
Java, J2EE, Spring Boot. Experience in design, Kubernetes, and AWS (EKS, EC2) is needed. Experience with AWS cloud monitoring tools like Datadog, CloudWatch, and Lambda is needed. Experience with XACML authorization policies. Experience with NoSQL and SQL databases such as Cassandra, Aurora, and Oracle. Experience with SOA web services (SOAP as well as RESTful with JSON formats) and with messaging (Kafka). Hands-on with development and test automation tools/frameworks (e.g., BDD and Cucumber).
Posted 3 months ago
7 - 10 years
25 - 35 Lacs
Pune, Bengaluru, Gurgaon
Hybrid
Responsibilities: Monitors and reacts to alerts in real time and triages issues. Executes runbook instructions to resolve routine problems and user requests. Escalates complex or unresolved issues to L2. Documents new findings to improve runbooks and the knowledge base. Participates in shift handovers to ensure seamless coverage. Participates in ceremonies to share operational status. Technical Skills: System Administration: basic troubleshooting, monitoring, and operational support. Cloud Platforms: familiarity with AWS services (e.g., EC2, S3, Lambda, IAM). Infrastructure as Code (IaC): exposure to Terraform, CloudFormation, or similar tools. CI/CD Pipelines: understanding of Git, Jenkins, GitHub Actions, or similar tools. Linux Fundamentals: command-line proficiency, scripting, process management. Programming & Data: Python, SQL, and Spark (nice to have, but not mandatory). Data Engineering Awareness: understanding of data pipelines, ETL processes, and workflow orchestration (e.g., Airflow). DevOps Practices: observability, logging, alerting, and automation. Soft Skills (Most Important): Strong Communication: ability to articulate technical concepts clearly to different audiences. Collaboration & Teamwork: works well with engineering, operations, and business teams. Problem-Solving: logical thinking and a troubleshooting mindset. Documentation & Knowledge Sharing: contributes to runbooks and operational guides.
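The L1 triage flow described above (match an alert to a runbook, escalate to L2 when no runbook applies) can be sketched as a simple dispatcher. The runbook IDs and alert types below are hypothetical, invented purely for the example:

```python
# L1 triage sketch: route an alert to a runbook action if one exists,
# otherwise escalate to L2 - mirroring the workflow in the posting.

RUNBOOKS = {
    "disk_full": "Rotate logs and clear /tmp per runbook RB-101",
    "service_down": "Restart the service and verify health per RB-102",
}

def triage(alert):
    """Return (handled_by, action) for a given alert."""
    action = RUNBOOKS.get(alert["type"])
    if action is not None:
        return "L1", action
    return "L2", f"Escalated unknown alert {alert['type']!r} with context attached"

print(triage({"type": "disk_full"}))
print(triage({"type": "kernel_panic"}))
```

The documentation duty in the posting corresponds to growing the `RUNBOOKS` table over time, so fewer alert types fall through to the escalation branch.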
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Pune, Bengaluru, Hyderabad
Work from Office
Overview: We're looking for an experienced individual (5-10 years) to build and maintain cloud-based APIs and AI/ML models. You'll work with AWS services and focus on creating reliable, scalable systems. What You'll Do: Design and implement AI/ML-based cloud solutions using AWS. Build and maintain Python-based APIs and AI/ML models. Process data and integrate with AI/ML models and external APIs. Automate the building, testing, and packaging of machine learning pipelines. Must-Have Skills: Strong Python programming experience building AI/ML applications. Experience with large language models and prompt engineering. Experience with AWS services (Lambda, EC2, S3). Knowledge of API development (FastAPI or Flask). Database experience (PostgreSQL, vector databases). Understanding of Docker and Kubernetes. Experience with CI/CD tools.
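In practice, the prompt-engineering work mentioned above means composing a structured prompt before each model call. A minimal sketch with the model call stubbed out as an injected callable; the template wording and field names are assumptions for illustration, and a real service would invoke an LLM API at that point:

```python
# Prompt-building sketch: the template and the stubbed model callable
# are hypothetical; swap in a real LLM client call in production.

PROMPT_TEMPLATE = (
    "You are a support assistant.\n"
    "Context: {context}\n"
    "Question: {question}\n"
    "Answer concisely."
)

def build_prompt(context: str, question: str) -> str:
    """Fill the template with cleaned-up inputs."""
    return PROMPT_TEMPLATE.format(context=context.strip(), question=question.strip())

def answer(question, context, model=lambda p: f"[stubbed reply to {len(p)} chars]"):
    """Compose the prompt and invoke the (injected) model callable."""
    return model(build_prompt(context, question))

prompt = build_prompt("Order #123 shipped Tuesday.", "Where is my order?")
print(prompt)
print(answer("Where is my order?", "Order #123 shipped Tuesday."))
```

Injecting the model as a parameter keeps the prompt logic unit-testable without network access, which also helps with the pipeline-automation requirement in the posting.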
Posted 3 months ago
4 - 7 years
12 - 17 Lacs
Hyderabad
Work from Office
What you'll be doing... Verizon is looking for a dynamic and talented individual for the PQC team. The Post-Quantum Cryptography (PQC) project focuses on securing organizational data against emerging quantum threats by identifying security vulnerabilities and enhancing data protection at the IP level. The team will assess and address potential weaknesses, ensuring resilience against quantum-computing-based attacks. By enriching data from every IP within the organization, the project strengthens threat intelligence and fortifies security protocols, enabling a future-proof defense against evolving cyber risks. Design, develop, and maintain front-end applications using Angular. Build and optimize back-end services with Node.js. Work with SQL databases to design schemas, optimize queries, and ensure data integrity. Collaborate with cross-functional teams to define, design, and deliver new features. Implement cloud solutions, preferably using AWS, to enhance scalability and reliability. Ensure best practices in coding, security, and performance optimization. Debug and resolve technical issues while ensuring a seamless user experience. Stay up to date with industry trends and emerging technologies. What we are looking for. You'll need to have: Bachelor's degree or four or more years of work experience. Four or more years of experience in full-stack development. Strong proficiency in Angular (latest versions) for front-end development. Hands-on experience with Node.js for back-end development. Expertise in SQL databases (MySQL, PostgreSQL, or SQL Server). Understanding of cloud technologies, preferably AWS (EC2, S3, Lambda, RDS, etc.). Experience with RESTful APIs and microservices architecture. Strong problem-solving skills and the ability to work independently or in a team. Good understanding of DevOps, CI/CD pipelines, and containerization (Docker/Kubernetes) is a plus. Even better if you have one or more of the following: AWS certification.
Posted 3 months ago
5 - 8 years
4 - 8 Lacs
Maharashtra
Work from Office
Design, develop, and optimize data integration workflows using Apache NiFi. Work on mission-critical projects, ensuring high availability, reliability, and performance of data pipelines. Integrate NiFi with cloud platforms (e.g., AWS, Azure, GCP) for scalable data processing and storage. Develop custom NiFi processors and extensions using Java. Implement real-time data streaming solutions using Apache Kafka. Work with MongoDB for NoSQL data storage and retrieval. Use GoldenGate for real-time data replication and integration. Troubleshoot and resolve complex issues related to NiFi workflows and data pipelines. Collaborate with cross-functional teams to deliver robust, production-ready solutions. Follow best practices in coding, testing, and deployment to ensure high-quality deliverables. Mentor junior team members and provide technical leadership. Mandatory Skills and Qualifications: 5+ years of hands-on experience in Apache NiFi for data integration and workflow automation. Senior-level Java programming knowledge, including experience in developing custom NiFi processors and extensions. Strong knowledge of cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., S3, EC2, Lambda, Azure Data Lake, etc.). Proficiency in Linux environments, including shell scripting and system administration. Experience with Apache Kafka for real-time data streaming and event-driven architectures. Hands-on experience with MongoDB for NoSQL data management. Familiarity with GoldenGate for real-time data replication and integration. Experience in performance tuning and optimization of NiFi workflows. Solid understanding of data engineering concepts, including ETL/ELT, data lakes, and data warehouses. Ability to work independently and deliver results in a fast-paced, high-pressure environment. Excellent problem-solving, debugging, and analytical skills. Good-to-Have Skills: Experience with containerization tools like Docker and Kubernetes.
Knowledge of DevOps practices and CI/CD pipelines. Familiarity with big data technologies like Hadoop, Spark, or Kafka. Understanding of security best practices for data pipelines and cloud environments. Interview Focus Areas: Hands-on NiFi Development: practical assessment of NiFi workflow design and optimization. Java Programming: senior-level coding skills, including custom NiFi processor development. Cloud Integration: understanding of how NiFi integrates with cloud platforms for data processing and storage. Kafka and MongoDB: expertise in real-time data streaming and NoSQL data management. GoldenGate: knowledge of real-time data replication and integration. Linux Proficiency: ability to work in Linux environments and troubleshoot system-level issues. Problem Solving: analytical skills to resolve complex data integration challenges. Shift Requirements: Flexible shift hours, with the shift ending by midday US time. Willingness to adapt to dynamic project needs and timelines.
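A recurring building block in the Kafka/NiFi streaming work above is windowed aggregation over an event stream. A pure-Python sketch of tumbling-window counts; the window size, event shape, and sample events are illustrative assumptions, and a production version would run inside a stream processor consuming a Kafka topic:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp, key) events into fixed non-overlapping windows
    and count occurrences per key - the shape of a streaming aggregation
    over a Kafka topic."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical events: (epoch-seconds offset, event type).
events = [(5, "login"), (30, "login"), (65, "login"), (70, "error")]
print(tumbling_window_counts(events))
```

Integer division by the window size is what makes windows "tumble": every event maps to exactly one window, so counts never double-count across window boundaries.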
Posted 3 months ago
4 - 6 years
7 - 8 Lacs
Bengaluru
Work from Office
What we're looking for... The Device Analytics team in Verizon India is looking for a senior DevOps engineer with the following skill set: Collaborate with engineering, operations, and product teams. Write and deploy automation software to maintain deployments across multiple environments. Create and maintain modules for a configuration management platform across multiple environments. Implement and support user-facing, large-scale, secure tech stacks on cloud or on-prem. Interface with internal customers to consult on requirements and discuss solutions. Write code, integrate systems, and build configurations to drive and innovate around our server-based platform across the data centers. Maintain day-to-day management and administration of projects. Build and maintain CI/CD pipelines in different environments. Write Python, Bash, and shell scripts to automate routine activities and run administrative tasks. Design and build automated code deployment systems that simplify development; create an automation framework by orchestrating environment deployment from the OS all the way through the application layers of a solution. Work in a cloud-based or on-prem environment long enough to understand how to appropriately manage PaaS/IaaS/SaaS and Agile environments. You'll need to have: Bachelor's degree or four or more years of work experience. Experience using Shell or Python (or a similar scripting language); Git or Subversion; Linux system administration (RHEL); and Ansible or Puppet. Experience with microservice architectures running in containers (Docker or other containerization technology), administration and development of CI/CD pipelines using Jenkins, and the AWS cloud platform with EC2, VPC, AMI, and IAM. Experience developing automation pipelines and scripts. Experience with open-source software development in Python, Java, or Node. Knowledge of networking technologies including subnetting, firewalls, and VPNs.
Experience with high-availability distributed systems like Hadoop, Spark, etc. Even better if you have one or more of the following: Strong communication skills to collaborate with cross-functional teams and explain technical concepts. Experience with Agile methodologies and DevOps culture. Knowledge of security best practices in cloud and Kubernetes environments. Excellent problem-solving skills and the ability to debug complex issues across the entire stack.
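Configuration management tools like the Ansible and Puppet named above converge a host toward a desired state and are idempotent: applying the same state twice changes nothing. A toy reconcile sketch; the package names and state values are hypothetical, standing in for real module tasks:

```python
def reconcile(current: dict, desired: dict):
    """Compute and apply the changes needed to move `current` to
    `desired`. Returns the change list; re-running on a converged
    state is a no-op, which is the idempotency property Ansible
    modules are built around."""
    changes = []
    for key, value in desired.items():
        if current.get(key) != value:
            changes.append((key, current.get(key), value))
            current[key] = value
    return changes

host = {"nginx": "absent", "ntp": "installed"}       # hypothetical host facts
desired = {"nginx": "installed", "ntp": "installed"}  # hypothetical playbook state
first = reconcile(host, desired)
second = reconcile(host, desired)  # idempotent: nothing left to do
print(first, second)
```

The first run reports one change (install nginx); the second reports none, the same "changed=1" then "changed=0" pattern an Ansible play summary shows.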
Posted 3 months ago
4 - 8 years
6 - 10 Lacs
Hyderabad
Work from Office
What you'll be doing... Verizon is looking for a dynamic and talented individual for the PQC team. The Post-Quantum Cryptography (PQC) project focuses on securing organizational data against emerging quantum threats by identifying security vulnerabilities and enhancing data protection at the IP level. The team will assess and address potential weaknesses, ensuring resilience against quantum-computing-based attacks. By enriching data from every IP within the organization, the project strengthens threat intelligence and fortifies security protocols, enabling a future-proof defense against evolving cyber risks. Design, develop, and maintain front-end applications using Angular. Build and optimize back-end services with Node.js. Work with SQL databases to design schemas, optimize queries, and ensure data integrity. Collaborate with cross-functional teams to define, design, and deliver new features. Implement cloud solutions, preferably using AWS, to enhance scalability and reliability. Ensure best practices in coding, security, and performance optimization. Debug and resolve technical issues while ensuring a seamless user experience. Stay up to date with industry trends and emerging technologies. What we're looking for. You'll need to have: Bachelor's degree or four or more years of work experience. Four or more years of experience in full-stack development. Strong proficiency in Angular (latest versions) for front-end development. Hands-on experience with Node.js for back-end development. Expertise in SQL databases (MySQL, PostgreSQL, or SQL Server). Understanding of cloud technologies, preferably AWS (EC2, S3, Lambda, RDS, etc.). Experience with RESTful APIs and microservices architecture. Strong problem-solving skills and the ability to work independently or in a team. Good understanding of DevOps, CI/CD pipelines, and containerization (Docker/Kubernetes) is a plus. Even better if you have one or more of the following: AWS certification.
Posted 3 months ago
5 - 8 years
12 - 15 Lacs
Delhi, Mumbai, Kolkata
Work from Office
We are seeking an experienced AWS cloud and DevOps engineer to join our dynamic team. The ideal candidate should have a strong background in cloud technologies, particularly AWS, and possess a deep understanding of DevOps practices. As a cloud and DevOps engineer, you will be responsible for designing, implementing, and maintaining our cloud infrastructure and ensuring smooth deployment and operation of our applications. If you are passionate about cutting-edge cloud technologies and automation and have a proven track record of driving DevOps practices, we would love to hear from you. Job Responsibilities: Design, deploy, and manage scalable and highly available AWS cloud infrastructure to support our applications and services. Collaborate with development teams to define and implement CI/CD pipelines to enable automated application deployment and release management. Develop and maintain automated monitoring, alerting, and logging systems to ensure the health and performance of our cloud environment. Implement security best practices and ensure the security and compliance of our cloud infrastructure and applications. Troubleshoot and resolve infrastructure issues, performance bottlenecks, and application-related incidents. Collaborate with cross-functional teams to optimize the performance and cost-efficiency of our cloud services. Stay up to date with the latest AWS services and related tooling (EC2, ELB, Auto Scaling groups, CloudFront, S3, AWS Lambda, Jenkins, GitHub, RDS, migration), features, and best practices, and evaluate their potential impact on our infrastructure and processes. Participate in an on-call rotation to provide support for critical infrastructure issues. Maintain comprehensive documentation for system configurations, procedures, and troubleshooting guides. Provide 24x7 on-call support for critical infrastructure issues and participate in incident response and resolution efforts.
Requirements: Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience. 5-8 years of hands-on experience working as a cloud engineer, DevOps engineer, or in a similar role. Extensive experience with AWS services such as EC2, S3, Lambda, RDS, VPC, and IAM. Proficiency in scripting languages such as Python, Bash, or PowerShell for automation and infrastructure as code. Strong experience in building and managing CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or AWS CodePipeline. In-depth knowledge of containerization technologies like Docker and container orchestration platforms like Kubernetes. Solid understanding of infrastructure-as-code tools like Terraform or AWS CloudFormation. Familiarity with configuration management tools like Ansible, Puppet, or Chef. Experience with logging and monitoring tools such as CloudWatch, the ELK Stack, or Prometheus/Grafana. Strong understanding of networking concepts, including TCP/IP, DNS, load balancing, and firewalls. Knowledge of security best practices and experience implementing security controls in AWS environments. Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment. Preferred Qualifications: AWS certifications such as AWS Certified Solutions Architect, AWS Certified DevOps Engineer, or AWS Certified SysOps Administrator. Experience with other cloud platforms like Microsoft Azure or Oracle Cloud. Familiarity with serverless computing and event-driven architectures. Previous experience in a software development role or exposure to software development practices. Understanding of Agile and DevOps methodologies. Location: Remote - Delhi/NCR, Bengaluru, Hyderabad/Secunderabad, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
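The automated-alerting responsibility above is typically built on CloudWatch-style alarms, which fire when M of the last N datapoints breach a threshold. A simplified evaluator sketch; the CPU series, threshold, and 3-of-3 evaluation settings are made up for the example:

```python
def alarm_state(datapoints, threshold, periods=3, required_breaches=3):
    """Return 'ALARM' if at least `required_breaches` of the last
    `periods` datapoints exceed `threshold`, else 'OK' - a simplified
    version of CloudWatch's M-out-of-N alarm evaluation."""
    recent = datapoints[-periods:]
    breaches = sum(1 for v in recent if v > threshold)
    return "ALARM" if breaches >= required_breaches else "OK"

# Hypothetical CPU-utilisation datapoints (percent), oldest first.
cpu = [42.0, 55.0, 81.5, 92.0, 88.0]
print(alarm_state(cpu, threshold=80.0))
print(alarm_state(cpu, threshold=90.0))
```

Requiring several consecutive breaching datapoints, rather than alarming on a single spike, is the standard way to keep transient noise from paging the on-call rotation described above.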
Posted 3 months ago
8 - 13 years
15 - 30 Lacs
Hyderabad
Work from Office
Job Overview: We are seeking a Senior Software Engineer to design, develop, and optimize high-performance applications. You will play a crucial role in architecting scalable solutions, leading development efforts, and mentoring junior developers. Key Responsibilities: Architect, develop, and deploy scalable, high-performance software applications. Write clean, efficient, and maintainable code using best practices. Lead and mentor a team of developers, providing technical guidance. Optimize application performance and ensure system reliability. Collaborate with cross-functional teams to understand business requirements and deliver solutions. Perform code reviews and enforce coding standards. Integrate APIs and third-party services for enhanced functionality. Work with cloud services (AWS, Azure, GCP) for scalable deployments. Implement DevOps practices for CI/CD, automation, and monitoring. Skills: Python, JavaScript, JSS, Django, Flask, Tornado, Nginx, SQL Server, CI/CD, Azure DevOps, AWS (APIs, EC2, ALB, Lambda), AI deployment (GPU/NVIDIA), cybersecurity (CloudGuard, UpGuard, Dome9, Contrast), GCISO, CIAM, Medtronic cardiac data, debugging, source control.
Posted 3 months ago