4.0 - 8.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Solution Architect in the Pre-Sales department, with 4-6 years of experience in cloud infrastructure deployment, migration, and managed services, your primary responsibility will be to design AWS Cloud Professional Services and AWS Cloud Managed Services solutions tailored to meet customer needs and requirements. You will engage with customers to analyze their requirements, ensuring cost-effective and technically sound solutions are provided. Your role will also involve developing technical and commercial proposals in response to various client inquiries such as Requests for Information (RFI), Requests for Quotation (RFQ), and Requests for Proposal (RFP). Additionally, you will prepare and deliver technical presentations to clients, highlighting the value and capabilities of AWS solutions. Collaborating closely with the sales team, you will work towards supporting their objectives and closing deals that align with business needs. Your ability to apply creative and analytical problem-solving skills to address complex challenges using AWS technology will be crucial.

Furthermore, you should possess hands-on experience in planning, designing, and implementing AWS IaaS, PaaS, and SaaS services. Experience in executing end-to-end cloud migrations to AWS, including discovery, assessment, and implementation, is required. You must also be proficient in designing and deploying well-architected landing zones and disaster recovery environments on AWS. Excellent communication skills, both written and verbal, are essential for effectively articulating solutions to technical and non-technical stakeholders. Your organizational, time management, problem-solving, and analytical skills will play a vital role in driving consistent business performance and exceeding targets.
Desirable skills include intermediate-level experience with AWS services like AppStream, Elastic Beanstalk, ECS, ElastiCache, and more, as well as IT orchestration and automation tools such as Ansible, Puppet, and Chef. Knowledge of Terraform, Azure DevOps, and AWS development services will be advantageous. In this role based in Noida, Uttar Pradesh, India, you will have the opportunity to collaborate with technical and non-technical teams across the organization, ensuring scalable, efficient, and secure solutions are delivered on the AWS platform.
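The discovery-and-assessment phase of a migration like the one described above often reduces to ordering workloads by dependency: a server can only move once everything it depends on has already been migrated. A minimal Python sketch of wave planning via level-by-level topological sort (the function name and input shape are illustrative, not from any AWS tool):

```python
def plan_migration_waves(deps):
    """Group servers into migration waves: a server moves only after
    everything it depends on has already been migrated.

    deps maps server -> set of servers it depends on.
    """
    # Kahn-style topological sort, emitted level by level
    remaining = {s: set(d) for s, d in deps.items()}
    waves = []
    while remaining:
        # Servers with no unmigrated dependencies form the next wave
        ready = sorted(s for s, d in remaining.items() if not d)
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(ready)
        for s in ready:
            del remaining[s]
        for d in remaining.values():
            d.difference_update(ready)
    return waves

waves = plan_migration_waves({
    "db": set(),
    "app": {"db"},
    "web": {"app"},
    "cache": set(),
})
```

Here the database and the standalone cache migrate first, then the app tier, then the web tier.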
Posted 1 day ago
10.0 - 14.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Cloud Architect - Analytics & Data Products

We're looking for a Cloud Architect / Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.

Key Responsibilities
- Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs.
- Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
- Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
- Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
- Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
- Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
- Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.

Requirements
- 10-14 years of experience in cloud engineering, DevOps, or cloud architecture roles.
- Strong hands-on expertise with the AWS ecosystem and tools listed above.
- Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
- Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
- Familiarity with data engineering and GenAI workflows is a plus.
- AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.
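The Terraform/CloudFormation provisioning duty above ultimately produces declarative templates. As a flavor of what CloudFormation expects, here is a hedged sketch that builds a minimal template for a versioned S3 bucket as a Python dict (real templates would normally be hand-written YAML or generated by Terraform/CDK; the helper and logical ID are invented for illustration):

```python
import json

def s3_bucket_template(bucket_logical_id, versioning=True):
    """Build a minimal CloudFormation template (as a dict) declaring
    one S3 bucket, optionally with versioning enabled."""
    props = {}
    if versioning:
        # Standard CloudFormation property block for bucket versioning
        props["VersioningConfiguration"] = {"Status": "Enabled"}
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            bucket_logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": props,
            }
        },
    }

# Serialize to the JSON form CloudFormation accepts
template_json = json.dumps(s3_bucket_template("AnalyticsBucket"), indent=2)
```

The resulting JSON could be fed to a deploy step; generating templates programmatically like this keeps per-environment variations in code rather than copy-pasted files.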
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
Join us as a Cloud Data Engineer at Barclays, where you'll spearhead the evolution of the digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences. You may be assessed on key critical skills relevant for success in the role, such as risk and control, change and transformations, business acumen, strategic thinking, and digital technology, as well as job-specific skill sets.

To be successful as a Cloud Data Engineer, you should have experience with:
- AWS Cloud technology for data processing and a good understanding of AWS architecture.
- Compute services like EC2, Lambda, and Auto Scaling, and networking services like VPC.
- Storage and container services like ECS, S3, DynamoDB, and RDS.
- Management & Governance services: KMS, IAM, CloudFormation, CloudWatch, and CloudTrail.
- Analytics services such as Glue, Athena, Crawler, Lake Formation, and Redshift.
- Solution delivery for data processing components in larger end-to-end projects.

Desirable skill sets/good to have:
- AWS Certified professional.
- Experience in data processing on Databricks and Unity Catalog.
- Ability to drive projects technically with right-first-time deliveries within schedule and budget.
- Ability to collaborate across teams to deliver complex systems and components and manage stakeholders' expectations well.
- Understanding of different project methodologies, project lifecycles, major phases, dependencies and milestones within a project, and the required documentation needs.
- Experience with planning, estimating, organizing, and working on multiple projects.

This role will be based out of Pune.

Purpose of the role: To build and maintain systems that collect, store, process, and analyze data, such as data pipelines, data warehouses, and data lakes, to ensure that all data is accurate, accessible, and secure.
Accountabilities:
- Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete, and consistent data.
- Design and implement data warehouses and data lakes that manage appropriate data volumes and velocity and adhere to required security measures.
- Develop processing and analysis algorithms fit for the intended data complexity and volumes.
- Collaborate with data scientists to build and deploy machine learning models.

Analyst Expectations:
- Will have an impact on the work of related teams within the area.
- Partner with other functions and business areas.
- Take responsibility for end results of a team's operational processing and activities.
- Escalate breaches of policies/procedures appropriately.
- Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
- Advise and influence decision making within own area of expertise.
- Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organization's products, services, and processes within the function.
- Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organization's sub-function.
- Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
- Guide and persuade team members and communicate complex/sensitive information.
- Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organization.
All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset to Empower, Challenge, and Drive, the operating manual for how we behave.
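The "accurate, accessible, and secure" purpose statement above usually shows up in practice as small, testable pipeline stages. A hedged sketch of one such stage, with invented field names, that drops incomplete records and normalises types before data is loaded downstream:

```python
def clean_records(rows):
    """One pipeline stage: drop rows missing a primary key and
    coerce fields to consistent types. Field names are invented
    for illustration."""
    cleaned = []
    for row in rows:
        if not row.get("id"):
            continue  # incomplete record: exclude rather than guess
        cleaned.append({
            "id": str(row["id"]),            # keys always strings
            "amount": float(row.get("amount", 0)),  # amounts always floats
        })
    return cleaned
```

Keeping each stage a pure function like this makes it trivial to unit test against boundary inputs before it runs inside Glue, Databricks, or any other engine the posting mentions.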
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
kochi, kerala
On-site
As an AWS Cloud Engineer at our company based in Kerala, you will play a crucial role in designing, implementing, and maintaining scalable, secure, and highly available infrastructure solutions on AWS. Your primary responsibility will be to collaborate closely with developers, DevOps engineers, and security teams to support cloud-native applications and business services. Your key responsibilities will include designing, deploying, and maintaining cloud infrastructure using various AWS services such as EC2, S3, RDS, Lambda, and VPC. Additionally, you will be tasked with building and managing CI/CD pipelines, automating infrastructure provisioning using tools like Terraform or AWS CloudFormation, and monitoring and optimizing cloud resources through CloudWatch, CloudTrail, and other third-party tools. Furthermore, you will be responsible for managing user permissions and security policies using IAM, ensuring compliance, implementing backup and disaster recovery plans, troubleshooting infrastructure issues, and responding to incidents promptly. It is essential that you stay updated with AWS best practices and new service releases to enhance our overall cloud infrastructure. To be successful in this role, you should possess a minimum of 3 years of hands-on experience with AWS cloud services, a solid understanding of networking, security, and Linux system administration, as well as experience with DevOps practices and Infrastructure as Code (IaC). Proficiency in scripting languages such as Python and Bash, familiarity with containerization tools like Docker and Kubernetes (EKS preferred), and holding an AWS Certification (e.g., AWS Solutions Architect Associate or higher) would be advantageous. It would be considered a plus if you have experience with multi-account AWS environments, exposure to serverless architecture (Lambda, API Gateway, Step Functions), familiarity with cost optimization, and the Well-Architected Framework. 
Any previous experience in a fast-paced startup or SaaS environment would also be beneficial. Your expertise in AWS CloudFormation, Kubernetes (EKS), AWS services (EC2, S3, RDS, Lambda, VPC), CloudTrail, scripting (Python, Bash), CI/CD pipelines, CloudWatch, Docker, IAM, Terraform, and other cloud services will be invaluable in fulfilling the responsibilities of this role effectively.
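Managing user permissions with IAM, as this role requires, is largely about producing least-privilege policy documents. A minimal sketch that generates a read-only policy for one S3 bucket (the policy `Version` and action/ARN strings follow standard IAM syntax; the helper itself is an invented convenience):

```python
import json

def read_only_bucket_policy(bucket):
    """Least-privilege IAM policy document: read-only access to a
    single S3 bucket. Object access and bucket listing need separate
    statements because their resource ARNs differ."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/*"],  # objects
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [f"arn:aws:s3:::{bucket}"],    # the bucket itself
            },
        ],
    }

policy = json.dumps(read_only_bucket_policy("app-logs"))
```

The serialized `policy` string is what would be attached to a role or user via IaC or the AWS CLI.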
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
The Senior Specialist Test Expert in the Data & Analytics Team at Novartis, located in Hyderabad, plays a crucial role in ensuring the quality and reliability of multiple products. Your responsibilities include validating new products, existing product enhancements, defect fixes, and recommending improvements to enhance the overall user experience. You will collaborate with AWS Analytics Platform team members to define overall test strategies for various products. It will be your responsibility to drive the standard execution of multiple test phases such as system, integration, functional, regression, acceptance, and performance testing. Ensuring all test-related deliverables are of high quality in line with defined GxP and Non-GxP processes is essential. Regular collaboration with eCompliance, Quality Managers, and Security Architects/Managers will be required for reviewing different artifacts for final sign-off. Your role will involve enabling automation with approved tools or in-house developed code-based frameworks as necessary. You will expand the testing scope by including boundary cases, negative cases, edge/corner cases, and integrating testing with the DevOps pipeline using tools like Bitbucket, Jenkins, and Artifactory for seamless test and deployment to higher environments. Additionally, working closely with resources from partner organizations for mentoring, delegation of work, and timely delivery is crucial. To be eligible for this role, you should possess a Bachelor's/Master's degree in Computer Science or a related field and have at least 8 years of experience in quality engineering/testing. Strong expertise in manual and automation testing, particularly with tools like Selenium, Postman, and JMeter, is required. Excellent knowledge of programming languages such as Java and familiarity with Python is preferred. Experience with a broad range of AWS technologies and DevOps tools like Bitbucket, Git, Jenkins, and Artifactory is necessary.
Proficiency in JIRA and Confluence, along with excellent verbal and written communication skills in a global environment, is a must. An open and learning mindset, adaptability to new learning approaches and methods, and the ability to work as part of an Agile development team are also essential.

Novartis is dedicated to reimagining medicine to enhance and extend people's lives, with a vision to become the most valued and trusted medicines company globally. By joining Novartis, you can be part of a mission-driven organization where associates are empowered to drive ambitions forward, in a supportive work environment that values diversity and inclusion. For more information on Novartis and our commitment to diversity and inclusion, visit https://www.novartis.com/about/strategy/people-and-culture. To learn about the benefits and rewards of working at Novartis, read our handbook at https://www.novartis.com/careers/benefits-rewards. To explore career opportunities and stay connected, join the Novartis Network at https://talentnetwork.novartis.com/network.
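The boundary/negative/edge-case expansion this role describes is often expressed as a table-driven test: one validator, one table of inputs straddling every limit. A hedged sketch where the system under test and its limits are invented for illustration:

```python
def is_valid_batch_size(n):
    """Example system under test: batch sizes must be integers in [1, 500]."""
    return isinstance(n, int) and 1 <= n <= 500

# Boundary, negative, and edge cases exercised together, as the role describes.
# Each key is an input; each value is the expected verdict.
cases = {
    0: False,    # just below the lower bound
    1: True,     # lower bound itself
    500: True,   # upper bound itself
    501: False,  # just above the upper bound
    -3: False,   # negative case
}
results = {n: is_valid_batch_size(n) for n in cases}
```

The same table shape plugs directly into pytest's `parametrize`, which keeps boundary coverage visible and easy to extend.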
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
indore, madhya pradesh
On-site
You should possess expert-level proficiency in Python and Python frameworks, or Java. Additionally, you must have hands-on experience with AWS development, PySpark, Lambdas, CloudWatch (alerts), SNS, SQS, CloudFormation, Docker, ECS, Fargate, and ECR. Your deep experience should cover key AWS services such as Compute (PySpark, Lambda, ECS), Storage (S3), Databases (DynamoDB, Snowflake), Networking (VPC, Route 53, CloudFront, API Gateway), DevOps/CI-CD (CloudFormation, CDK), Security (IAM, KMS, Secrets Manager), and Monitoring (CloudWatch, X-Ray, CloudTrail). Moreover, you should be proficient in databases such as Cassandra (NoSQL) and PostgreSQL, and have strong hands-on knowledge of using Python for integrations between systems through different data formats. Your expertise should extend to deploying and maintaining applications in AWS, with hands-on experience in Kinesis streams and auto-scaling. Designing and implementing distributed systems and microservices, and applying best practices for scalability, high availability, and fault tolerance are also key aspects of this role.

You should have strong problem-solving and debugging skills, with the ability to lead technical discussions and mentor junior engineers. Excellent communication skills, both written and verbal, are essential. You should be comfortable working in agile teams with modern development practices, collaborating with business and other teams to understand business requirements and work on project deliverables. Participation in requirements gathering, designing solutions based on available frameworks and code, and experience with data engineering tools or ML platforms (e.g., Pandas, Airflow, SageMaker) are expected. An AWS certification (AWS Certified Solutions Architect or Developer) would be advantageous. This position is based in multiple locations in India, including Indore, Mumbai, Noida, Bangalore, and Chennai.
To qualify, you should hold a Bachelor's degree or a foreign equivalent from an accredited institution. Alternatively, three years of progressive experience in the specialty can be considered in lieu of each year of education. A minimum of 8 years of Information Technology experience is required for this role.
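The fault-tolerance best practices the posting above calls for usually start with retry policy. AWS's recommended pattern for retrying throttled API calls is exponential backoff with "full jitter"; a small sketch that computes the delay schedule (the default `base` and `cap` values are illustrative, not from any SDK):

```python
import random

def backoff_delays(max_retries, base=0.5, cap=30.0, seed=None):
    """Compute 'full jitter' exponential backoff delays: each attempt
    sleeps a uniform random time in [0, min(cap, base * 2**attempt)].
    Jitter spreads retries out so clients don't retry in lockstep."""
    rng = random.Random(seed)  # seedable for reproducible tests
    delays = []
    for attempt in range(max_retries):
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(rng.uniform(0, ceiling))
    return delays
```

In a real client each delay would be passed to `time.sleep` between attempts; computing the schedule separately keeps the policy unit-testable.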
Posted 2 weeks ago
10.0 - 20.0 years
25 - 40 Lacs
Hyderabad
Hybrid
Role & responsibilities: We are seeking dynamic individuals to join our team as individual contributors, collaborating closely with stakeholders to drive impactful results.

Working hours: 5:30 pm to 1:30 am (Hybrid model)

Must-have Skills:
1. 15 years of experience in design and delivery of Distributed Systems capable of handling petabytes of data in a distributed environment.
2. 10 years of experience in the development of Data Lakes with Data Ingestion from disparate data sources, including relational databases, flat files, APIs, and streaming data.
3. Experience in the design and development of Data Platforms and data ingestion from disparate data sources into the cloud.
4. Expertise in core AWS services, including IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, and CloudTrail.
5. Proficiency in programming languages, preferably Python and PySpark, to ensure efficient data processing.
6. Ability to architect and implement robust ETL pipelines using AWS Glue, Lambda, and Step Functions, defining data extraction methods, transformation logic, and data loading procedures across different data sources.
7. Experience in the development of Event-Driven Distributed Systems in the Cloud using Serverless Architecture.
8. Ability to work with the Infrastructure team on AWS service provisioning for databases, services, network design, IAM roles, and AWS clusters.
9. 2-3 years of experience working with Amazon DocumentDB or MongoDB environments.

Nice-to-have Skills:
1. 10 years of experience in the development of Data Audit, Compliance, and Retention standards for Data Governance, and automation of the governance processes.
2. Experience in data modelling with NoSQL databases like DocumentDB.
3. Experience in using column-oriented data file formats like Apache Parquet, and Apache Iceberg as the table format for analytical datasets.
4. Expertise in the development of Retrieval-Augmented Generation (RAG) and Agentic Workflows for providing context to LLMs based on proprietary enterprise data.
5. Ability to develop re-ranking strategies using results from index and vector stores for LLMs to improve the quality of the output.
6. Knowledge of AWS AI services like AWS Entity Resolution and AWS Comprehend.
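The re-ranking skill in the nice-to-have list, combining results from a keyword index and a vector store before handing context to an LLM, is commonly done with Reciprocal Rank Fusion (RRF). A minimal sketch (`k=60` is the constant from the original RRF paper; document ids and list shapes are illustrative):

```python
def rrf_merge(keyword_hits, vector_hits, k=60):
    """Merge two ranked result lists with Reciprocal Rank Fusion:
    each list contributes 1/(k + rank) per document, so documents
    ranked well by both retrievers float to the top."""
    scores = {}
    for hits in (keyword_hits, vector_hits):
        for rank, doc in enumerate(hits, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Highest fused score first; ties broken by doc id for determinism
    return sorted(scores, key=lambda d: (-scores[d], d))

merged = rrf_merge(["a", "b", "c"], ["b", "d"])
```

Document "b" wins because both retrievers returned it; RRF needs no score normalisation, which is why it is a popular first choice for hybrid retrieval.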
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
We are looking for individuals who are risk-takers, collaborators, inspired, and inspirational. We seek those who are courageous enough to work on the cutting edge and develop solutions that will enhance and enrich the lives of people globally. If you aspire to make a difference that wows the world, we are eager to have a conversation with you. If you believe this role aligns with your ambitions and skill set, we invite you to begin the application process. Explore our other available positions as well, as our numerous opportunities can pave the way for endless possibilities.

With 4 to 8 years of experience, the ideal candidate should possess the following primary skills:
- Proficiency in server-side (Java) development and the AWS serverless framework.
- Hands-on experience with a serverless framework is a must.
- Design knowledge and experience in cloud-based web applications, plus familiarity with software design representation tools like Astah and Visio.
- Strong experience in AWS, including but not limited to EC2 volumes, EC2 security groups, EC2 AMIs, Lambda, S3, AWS Backup, CloudWatch, CloudFormation, CloudTrail, IAM, Secrets Manager, Step Functions, Cost Explorer, KMS, and VPC/subnets.
- Ability to understand business requirements concerning UI/UX.
- Work experience on development/staging/production servers.
- Proficiency in testing and verification, plus knowledge of SSL certificates and encryption.
- Familiarity with Docker containerization.

In addition to technical skills, soft skills are also crucial, including:
- Excellent interpersonal, oral, and written communication skills.
- Strong analytical and problem-solving abilities.
- Capability to comprehend and analyze customer requirements and expectations.
- Experience in interacting with customers.
- Previous work with international cross-culture teams is a plus.

Secondary skills include:
- Scripting using Python.
- Knowledge of identity management is advantageous.
- Understanding of UI/UX: ReactJS/TypeScript/Bootstrap.
- Proficiency in business use cases concerning UI/UX.
- Troubleshooting issues related to integration on the cloud (front-end/back-end/system/services APIs).
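The posting's primary stack is Java, but the serverless handler contract it centers on is language-agnostic. A hedged sketch in Python (the event shape loosely follows the API Gateway proxy integration, with the body carried as a JSON string; the field names inside the body are invented):

```python
import json

def handler(event, context=None):
    """Minimal API-Gateway-style Lambda handler: parse a JSON body
    and return a proxy-integration response dict."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        # Proxy integration expects the body as a string, not a dict
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Handlers like this can be exercised locally with a synthetic event
resp = handler({"body": json.dumps({"name": "dev"})})
```

Because the handler is a plain function of a dict, it can be unit tested on development/staging servers without deploying anything.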
Posted 3 weeks ago
4.0 - 6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Title: AWS Developer

About the Company/Team
Oracle FSGIU's Finergy division is a specialized team dedicated to the Banking, Financial Services, and Insurance (BFSI) industry, offering deep domain expertise and innovative solutions. With a focus on accelerated implementation, Finergy helps financial institutions rapidly deploy multi-channel platforms, ensuring an exceptional customer experience. Our team provides end-to-end banking solutions, leveraging integrated analytics and dashboards for improved efficiency. Finergy's consulting services offer strategic guidance, aligning technology investments with business objectives.

Job Summary
We are on the lookout for a skilled AWS Developer with 4-6 years of experience to design and build cutting-edge applications on the Amazon Web Services (AWS) platform. The ideal candidate will have hands-on expertise in developing serverless and containerized applications, integrating various AWS services, and ensuring the performance, security, and scalability of cloud-native solutions.

Key Responsibilities
- Design and develop scalable applications using AWS Lambda, API Gateway, and other AWS services, focusing on serverless architecture.
- Build and manage RESTful APIs, integrating with Amazon DynamoDB, RDS, and S3 for data storage and management.
- Implement Infrastructure as Code (IaC) using CloudFormation or Terraform to provision and manage AWS resources.
- Set up and maintain CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy for efficient software delivery.
- Automate workflows and background processes using Step Functions, SQS, and SNS for enhanced application functionality.
- Utilize CloudWatch, X-Ray, and CloudTrail for logging, monitoring, and troubleshooting, ensuring application health.
- Implement security measures using IAM roles, KMS, and Secrets Manager to protect sensitive data.
- Collaborate closely with DevOps, testers, and product owners in an Agile environment to deliver high-quality solutions.
Qualifications & Skills

Mandatory:
- 4-6 years of software development experience, including at least 2 years in AWS development.
- Proficiency in Node.js, Python, or Java for backend development.
- In-depth knowledge of AWS services: Lambda, API Gateway, S3, DynamoDB, RDS, IAM, SNS/SQS.
- Hands-on experience with CI/CD pipelines and version control systems like Git, GitHub, or Bitbucket.
- Understanding of containerization (Docker) and familiarity with Amazon ECS or EKS.
- Scripting skills using Bash, Python, or the AWS CLI for automation.
- Awareness of cloud security best practices, cost optimization techniques, and performance tuning.

Good-to-Have:
- AWS certification: AWS Certified Developer - Associate or AWS Certified Solutions Architect - Associate.
- Experience with microservices, serverless computing, and event-driven architecture.
- Exposure to multi-cloud or hybrid cloud environments.
- Strong communication and collaboration skills, with a problem-solving mindset.

Self-Assessment Questions:
- Describe a serverless application you developed on AWS. What services did you use, and how did you ensure scalability and security?
- Explain your experience with CI/CD pipelines on AWS. How have you utilized CodePipeline, CodeBuild, and CodeDeploy to automate the deployment process?
- Share your approach to monitoring and troubleshooting AWS-based applications. What tools do you use, and how do you identify and resolve issues?
- Discuss a scenario where you implemented security measures using AWS IAM and other security services.

Career Level - IC2
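The Step Functions workflow automation this role covers is defined in the Amazon States Language (ASL). A hedged sketch that assembles a two-step state machine definition in Python (the ARNs and state names are invented; the `StartAt`/`States`/`Type`/`Next`/`End` keywords are standard ASL):

```python
import json

def two_step_state_machine(first_arn, second_arn):
    """Build a minimal ASL definition chaining two Lambda Task states.
    The caller supplies the Lambda ARNs to invoke."""
    return {
        "StartAt": "FirstTask",
        "States": {
            "FirstTask": {
                "Type": "Task",
                "Resource": first_arn,
                "Next": "SecondTask",   # chain to the next state
            },
            "SecondTask": {
                "Type": "Task",
                "Resource": second_arn,
                "End": True,            # terminal state
            },
        },
    }

# Serialized definition as it would be passed to CreateStateMachine
definition = json.dumps(two_step_state_machine(
    "arn:aws:lambda:us-east-1:123456789012:function:validate",
    "arn:aws:lambda:us-east-1:123456789012:function:persist",
))
```

Generating definitions this way lets the same workflow shape be stamped out per environment with different ARNs.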
Posted 3 weeks ago
7.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking an Apigee Administrator to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Role: Apigee Administrator

Responsibilities:
1. Designing and developing API proxies, implementing security policies (e.g., OAuth, JWT), and creating API product bundles.
2. Supporting users and administering Apigee OPDK, and integrating APIs with various systems and backend services.
3. Participating in and contributing to the migration to Apigee X, including planning and executing API migrations between different Apigee environments.
4. Automation of platform processes.
5. Implementing security measures like authentication, authorization, and mitigation, as well as managing traffic and performance optimization.
6. On-call support: identifying and resolving API-related issues, providing support to developers and consumers, and ensuring high availability.
7. Implementing architecture, including tests/CI-CD/monitoring/alerting/resilience/SLAs/documentation.
8. Collaborating with development teams, product owners, and other stakeholders to ensure seamless API integration and adoption.

Requirements:
1. Bachelor's degree (Computer Science/Information Technology/Electronics & Communication/Information Science/Telecommunications).
2. 7+ years of work experience in the IT industry and strong knowledge of implementing/designing solutions using software application technologies.
3. Good knowledge and experience of the Apigee OPDK platform and troubleshooting.
4. Experience in AWS administration (EC2, Route 53, CloudTrail, AWS WAF, CloudWatch, EKS, AWS Systems Manager).
5. Good hands-on experience in Red Hat Linux administration and shell scripting.
6. Strong understanding of API design principles and best practices.
7. Kubernetes administration, GitHub, Cassandra administration, Google Cloud.
8. Familiarity with managing Dynatrace.

Desirable:
- Jenkins
- API proxy development
- Kafka administration based on SaaS (Confluent)
- Knowledge of Azure
- ELK

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.

NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us; this channel is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
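The JWT security policies in this role's scope often require inspecting a token's claims while troubleshooting a proxy. A hedged sketch that decodes a JWT payload with only the standard library, deliberately skipping signature verification, which in production belongs to Apigee's VerifyJWT policy or a proper JWT library:

```python
import base64
import json
import time

def decode_jwt_claims(token):
    """Decode (WITHOUT verifying the signature) the payload segment of a
    JWT, e.g. to inspect claims while debugging a proxy policy."""
    try:
        _, payload, _ = token.split(".")
    except ValueError:
        raise ValueError("not a three-part JWT")
    padded = payload + "=" * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

def is_expired(claims, now=None):
    """True if the token's exp claim (seconds since epoch) is in the past."""
    now = time.time() if now is None else now
    return "exp" in claims and claims["exp"] < now

# Build a sample unsigned token purely for demonstration
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(b'{"sub":"dev","exp":1700000000}').rstrip(b"=").decode()
claims = decode_jwt_claims(f"{header}.{payload}.")
```

A decoder like this answers "which claims did the client actually send?" during an incident without shipping the token to a third-party website.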
Posted 1 month ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's , our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to , our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at and on , , , and .

Inviting applications for the role of Lead Consultant - Cloud Engineer!

In this role, you will be responsible for designing, provisioning, and securing scalable cloud infrastructure to support AI/ML and Generative AI workloads. A key focus will be ensuring high availability, cost efficiency, and performance optimization of infrastructure through best practices in architecture and automation.

Responsibilities
- Design and implement secure VPC architecture, subnets, NAT gateways, and route tables.
- Build and maintain IaC modules for repeatable infrastructure provisioning.
- Build CI/CD pipelines that support secure, auto-scalable AI deployments using GitHub Actions, AWS CodePipeline, and Lambda triggers.
- Monitor and tune infrastructure health using AWS CloudWatch, GuardDuty, and custom alerting.
- Track and optimize cloud spend using AWS Cost Explorer, Trusted Advisor, and usage dashboards.
- Deploy and manage cloud-native services including SageMaker, Lambda, ECR, API Gateway, etc.
- Implement IAM policies, Secrets Manager, and KMS encryption for secure deployments.
- Enable logging and monitoring using CloudWatch and configure alerts and dashboards.
- Set up and manage CloudTrail, GuardDuty, and AWS Config for audit and security compliance.
- Assist with cost optimization strategies including usage analysis and budget alerting.
- Support multi-cloud or hybrid integration patterns (e.g., data exchange between AWS and Azure/GCP).
- Collaborate with MLOps and Data Science teams to translate ML/GenAI requirements into production-grade, resilient AWS environments.
- Maintain multi-cloud compatibility as needed (e.g., data egress readiness, common abstraction layers).
- Engage in the design, development and maintenance of data pipelines for various AI use cases.
- Actively contribute to key deliverables as part of an agile development team.
- Collaborate with others to source, analyse, test and deploy data processes.

Qualifications we seek in you!

Minimum Qualifications
- AWS infrastructure experience in production environments.
- Degree/qualification in Computer Science or a related field, or equivalent work experience.
- Proficiency in Terraform, AWS CLI, and Python or Bash scripting.
- Strong knowledge of IAM, VPC, ECS/EKS, Lambda, and serverless computing.
- Experience supporting AI/ML or GenAI pipelines in AWS (especially for compute and networking).
- Hands-on experience with multiple AI/ML/RAG/LLM workloads and model deployment infrastructure.
- Exposure to multi-cloud architecture basics (e.g., SSO, networking, blob exchange, shared VPC setups).
- AWS Certified DevOps Engineer or Solutions Architect - Associate/Professional.
- Experience in developing, testing, and deploying data pipelines using public cloud.
- Clear and effective communication skills to interact with team members, stakeholders and end users.

Preferred Qualifications/Skills
- Experience deploying infrastructure in both AWS and another major cloud provider (Azure or GCP).
- Familiarity with multi-cloud tools (e.g., HashiCorp Vault, Kubernetes with cross-cloud clusters).
- Strong understanding of DevSecOps best practices and compliance requirements.
- Exposure to RAG/LLM workloads and model deployment infrastructure.
- Knowledge of governance and compliance policies, standards, and procedures.

Why join Genpact?
- Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
- Make an impact - drive change for global enterprises and solve business challenges that matter.
- Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way.
Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
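The "least privilege" IAM responsibility above usually means generating policies scoped to exactly one resource and the minimum set of actions. A small illustrative sketch (the bucket name and helper are hypothetical, not part of any AWS SDK; the output is a standard IAM policy document you could pass to boto3 or Terraform):

```python
def least_privilege_policy(bucket: str, read_only: bool = True) -> dict:
    """Build a minimal S3 policy document scoped to a single bucket.

    Read-only by default; write/delete actions are added only when
    explicitly requested, keeping the grant as narrow as possible.
    """
    actions = ["s3:GetObject", "s3:ListBucket"]
    if not read_only:
        actions += ["s3:PutObject", "s3:DeleteObject"]
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": sorted(actions),
            # ListBucket applies to the bucket ARN, object actions to bucket/*
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
        }],
    }
```

The two-ARN resource list is the common gotcha: `s3:ListBucket` is authorized against the bucket itself, while object-level actions need the `/*` suffix.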
Posted 1 month ago
6.0 - 8.0 years
20 - 25 Lacs
Hyderabad
Work from Office
Picture Yourself at Pega:

As a Senior Cloud Security Operations Analyst, you will play a critical role in ensuring the confidentiality, integrity, and availability of Pega's commercial cloud infrastructure and assets. You will be key in the continuous monitoring and protection of all global cloud security operations at Pega, as well as an active participant in incident response efforts. As a key member of a team of highly capable and talented problem-solving analysts and engineers, you'll help develop processes that drive proactive, automated detection and incident response tactics to support the quick resolution of cloud security events and incidents. You will accomplish this by collaborating with cross-functional teams, including other security analysts, threat detection engineers, vulnerability analysts, security engineers, system administrators, and developers, to proactively identify potential security risks and vulnerabilities within our cloud environment. You will leverage your strong analytical skills to assess and prioritize threats, applying your knowledge of industry best practices and cloud security frameworks. As a Senior Cloud Security Operations Analyst at Pega, you'll contribute to the success of our globally recognized brand. Your efforts will directly impact the security and trust our clients place in us, as we help them transform their business processes and drive meaningful digital experiences. So, picture yourself at Pega, where your expertise in cloud security is valued, and your passion for protecting data is celebrated. Join us in shaping the future of secure cloud operations and make a lasting impact on the world of technology.
What You'll Do at Pega:
- Perform security monitoring of Pega Cloud commercial environments using multiple security tools/dashboards
- Perform security investigations to identify indicators of compromise (IOCs) and better protect Pega Cloud and our clients from unauthorized or malicious activity
- Actively contribute to incident response activities as we identify, contain, eradicate, and recover
- Contribute to standard operating procedure (SOP) and policy development for CSOC detection and analysis tools and methodologies
- Assist in enhancing security incident response plans, conducting thorough investigations, and recommending remediation measures to prevent future incidents
- Perform threat hunts for adversarial activities within Pega Cloud to identify evidence of attacker presence that may not have been identified by existing detection mechanisms
- Assist the threat detection team in developing high-confidence Splunk notables focused on use cases for known and emerging threats, based on hypotheses derived from the Pega threat landscape
- Assist in the development of dashboards, reports, and other non-alert-based content to maintain and improve situational awareness of Pega Cloud's security posture
- Assist in the development of playbooks for use by analysts to investigate both high-confidence and anomalous activity

Who You Are:
You have an insatiable curiosity with an inborn tenacity for finding creative ways to deter, detect, deny, delay, and defend against bad actors of all shapes and sizes. You have been in the security trenches and you know what an efficient security operations center looks like. You have conducted in-depth analyses of various security events/alerts, contributed to incident response efforts, and developed new methods for detecting and mitigating badness wherever you see it.
You bring a wealth of cloud security experience to the table and are ready to harness that expertise to dive into cloud-centric, technical analysis and incident response to make Pega Cloud the most secure it can be. You have a history of success in the information security industry. Your list of accolades includes:
- SANS, Offensive Security, or other top-tier industry-recognized technical security certifications focused on analysis, detection, and/or incident response
- Industry recognition for identifying security gaps to secure applications or products

What You've Accomplished:
- Minimum of 6+ years of industry-relevant experience, with a demonstrated working knowledge of cloud architecture, infrastructure, and resources, along with the associated services, threats, and mitigations
- Minimum of 4+ years in operational SIEM (Security Information and Event Management) roles, focusing on analysis, investigations, and incident response, with experience in Google Chronicle SIEM being an added advantage
- 3+ years of operational cloud security experience, preferably AWS and/or GCP, including knowledge and analysis of various cloud logs such as CloudTrail, Cloud Audit, GuardDuty, Security Command Center, CloudWatch, Cloud Ops, Trusted Advisor, Recommender, VPC Flow, and WAF logs
- 4+ years of operational experience with EDR/XDR platforms and related analysis and response techniques
- Operational experience performing investigations and incident response within Linux and Windows hosts as well as AWS, GCP, and related Kubernetes environments (EKS/GKE)
- Solid working knowledge of the MITRE ATT&CK framework and the associated TTPs and how to map detections against it, particularly the cloud matrix portion
- Familiarity with the OWASP Top 10 vulnerabilities and best practices for mitigating these security risks
- A solid foundational understanding of computer, OS (Linux/Windows), and network architecture concepts, and various related exploits/attacks
- Experience developing standard operating procedures (SOPs), incident response plans, runbooks/playbooks for repeated actions, and security operations policies
- Experience with Python, Linux shell/bash, and PowerShell scripting is a plus
- Excellent verbal and written communication skills, including poise in high-pressure situations
- A demonstrated ability to work in a team environment and foster a healthy, productive team culture
- A Bachelor's degree in Cybersecurity, Computer Science, Data Science, or a related field
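A detection rule of the kind this role develops (e.g., a Splunk notable mapped to MITRE ATT&CK T1110, Brute Force) is at heart an aggregation over log events. An illustrative sketch over CloudTrail-shaped `ConsoleLogin` events - the event dictionaries here are simplified stand-ins, not Pega's actual detection logic:

```python
from collections import Counter


def flag_bruteforce(events: list[dict], threshold: int = 5) -> set[str]:
    """Flag source IPs with repeated failed ConsoleLogin events.

    Counts CloudTrail-style records whose responseElements report a
    login failure, and returns every IP at or above the threshold.
    """
    failures = Counter(
        e["sourceIPAddress"]
        for e in events
        if e.get("eventName") == "ConsoleLogin"
        and e.get("responseElements", {}).get("ConsoleLogin") == "Failure"
    )
    return {ip for ip, n in failures.items() if n >= threshold}
```

In production the same logic would live as a scheduled SIEM search with a time window; the threshold and window are tuning knobs, not fixed values.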
Posted 1 month ago
3.0 - 8.0 years
5 - 13 Lacs
Chennai, Bengaluru
Work from Office
Job Description

Role: Snowflake DevOps Engineer
Visit our website bmwtechworks.in to know more. Follow us on LinkedIn I Instagram I Facebook I X for exciting updates.
Location: Bangalore/Chennai
Experience: 2 to 8 years
Number of openings: 4

What awaits you / Job Profile
- Supporting Snowflake use cases for BMW-side customers
- Consulting for use-case onboarding
- Monitoring of data synchronization jobs (reconciliation between Cloud Data Hub and Snowflake)
- Cost monitoring reports for use cases
- Further technical implementations such as M2M authentication for applications and data traffic over VPC
- Integrate Snowflake into the use-case application process within Data Portal (automated use-case setup triggered by Data Portal)
- Technical documentation
- Executing standard service requests (service user lifecycle, etc.)
- Compiling user and operational manuals
- Organizing and documenting knowledge regarding incidents/customer cases in a knowledge base
- Enhancing and editing process documentation
- Ability and willingness to coach and train fellow colleagues and users when needed
- Ability to resolve 2nd-level incidents within the Data Analytics Platform (could entail basic code changes)
- Close collaboration with 3rd-level support/development and SaaS vendor teams
- Implementation of new development changes; assist and contribute to development needs

What should you bring along
- Strong understanding and experience with Python
- AWS IAM, S3, KMS, Glue, CloudWatch
- GitHub
- Understanding of APIs
- Understanding of software development and a background in Business Intelligence
- SQL (queries, DDL, materialized views, tasks, procedures, optimization)
- Any Data Portal or Cloud Data Hub experience
- A technical background in operating and supporting IT platforms
- IT Service Management (according to ITIL), 2nd-level support
- Strong understanding of Problem, Incident and Change processes
- High customer orientation
- Working in a highly complex environment (many stakeholders, multi-platform/product environment, mission-critical use cases, high business exposure, complex ticket routing)
- Flexible communication on multiple support channels (ITSM, Teams, email)
- Precise and diligent execution of ops processes
- Working on-call (standby)
- Mindset of continuous learning (highly complex software stack with changing features)
- Proactive communication

Must-have technical skills: Snowflake, Python, Lambda, IAM, S3, KMS, Glue, CloudWatch, Terraform, Scrum
Good-to-have technical skills: AWS VPC, Route53, EventBridge, SNS, CloudTrail, Confluence, Jira
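The reconciliation duty above (comparing Cloud Data Hub with its Snowflake replica) is, in its simplest form, a per-table comparison of summary metrics. An illustrative sketch using row counts - the function and report format are hypothetical, not BMW's actual tooling, and a real job would also compare checksums or watermarks:

```python
def reconcile(source_counts: dict[str, int],
              target_counts: dict[str, int]) -> dict[str, str]:
    """Compare per-table row counts between a source and its replica.

    Reports tables missing on either side and tables whose counts drift.
    """
    report = {}
    for table in sorted(set(source_counts) | set(target_counts)):
        s, t = source_counts.get(table), target_counts.get(table)
        if s is None:
            report[table] = "unexpected in target"
        elif t is None:
            report[table] = "missing in target"
        elif s != t:
            report[table] = f"row-count drift ({s} vs {t})"
        else:
            report[table] = "in sync"
    return report
```

The count dictionaries would come from `SELECT COUNT(*)` queries against each platform; keeping the comparison pure makes the logic easy to test independently of either connection.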
Posted 1 month ago
5.0 - 7.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's , our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to , our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at and on , , , and .

Inviting applications for the role of Lead Consultant - Cloud Engineer!

In this role, you will be responsible for designing, provisioning, and securing scalable cloud infrastructure to support AI/ML and Generative AI workloads. A key focus will be ensuring high availability, cost efficiency, and performance optimization of infrastructure through best practices in architecture and automation.

Responsibilities
- Design and implement secure VPC architecture, subnets, NAT gateways, and route tables.
- Build and maintain IaC modules for repeatable infrastructure provisioning.
- Build CI/CD pipelines that support secure, auto-scalable AI deployments using GitHub Actions, AWS CodePipeline, and Lambda triggers.
- Monitor and tune infrastructure health using AWS CloudWatch, GuardDuty, and custom alerting.
- Track and optimize cloud spend using AWS Cost Explorer, Trusted Advisor, and usage dashboards.
- Deploy and manage cloud-native services including SageMaker, Lambda, ECR, API Gateway, etc.
- Implement IAM policies, Secrets Manager, and KMS encryption for secure deployments.
- Enable logging and monitoring using CloudWatch and configure alerts and dashboards.
- Set up and manage CloudTrail, GuardDuty, and AWS Config for audit and security compliance.
- Assist with cost optimization strategies including usage analysis and budget alerting.
- Support multi-cloud or hybrid integration patterns (e.g., data exchange between AWS and Azure/GCP).
- Collaborate with MLOps and Data Science teams to translate ML/GenAI requirements into production-grade, resilient AWS environments.
- Maintain multi-cloud compatibility as needed (e.g., data egress readiness, common abstraction layers).
- Engage in the design, development and maintenance of data pipelines for various AI use cases.
- Actively contribute to key deliverables as part of an agile development team.
- Collaborate with others to source, analyse, test and deploy data processes.

Qualifications we seek in you!

Minimum Qualifications
- Hands-on AWS infrastructure experience in production environments.
- Degree/qualification in Computer Science or a related field, or equivalent work experience.
- Proficiency in Terraform, AWS CLI, and Python or Bash scripting.
- Strong knowledge of IAM, VPC, ECS/EKS, Lambda, and serverless computing.
- Experience supporting AI/ML or GenAI pipelines in AWS (especially for compute and networking).
- Hands-on experience with multiple AI/ML/RAG/LLM workloads and model deployment infrastructure.
- Exposure to multi-cloud architecture basics (e.g., SSO, networking, blob exchange, shared VPC setups).
- AWS Certified DevOps Engineer or Solutions Architect - Associate/Professional.
- Experience in developing, testing, and deploying data pipelines using public cloud.
- Clear and effective communication skills to interact with team members, stakeholders and end users.

Preferred Qualifications/Skills
- Experience deploying infrastructure in both AWS and another major cloud provider (Azure or GCP).
- Familiarity with multi-cloud tools (e.g., HashiCorp Vault, Kubernetes with cross-cloud clusters).
- Strong understanding of DevSecOps best practices and compliance requirements.
- Exposure to RAG/LLM workloads and model deployment infrastructure.
- Knowledge of governance and compliance policies, standards, and procedures.

Why join Genpact?
- Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
- Make an impact - drive change for global enterprises and solve business challenges that matter.
- Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way.
Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
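The budget-alerting responsibility above reduces to aggregating per-service spend and comparing it against thresholds. An illustrative sketch (the record shape and budget map are made up for the example; real data would come from a Cost Explorer query):

```python
def over_budget(daily_costs: list[dict],
                budgets: dict[str, float]) -> list[str]:
    """Return services whose summed cost exceeds their configured budget.

    Services without a configured budget are never flagged.
    """
    totals: dict[str, float] = {}
    for row in daily_costs:
        totals[row["service"]] = totals.get(row["service"], 0.0) + row["cost"]
    return sorted(
        service for service, total in totals.items()
        if total > budgets.get(service, float("inf"))
    )
```

Hooked to a daily schedule, the returned list would feed an SNS notification or dashboard annotation rather than being printed.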
Posted 1 month ago
8.0 - 13.0 years
15 - 25 Lacs
Gurugram
Remote
• Minimum 6 years of hands-on experience deploying, enhancing, and troubleshooting foundational AWS services (EC2, S3, RDS, VPC, CloudTrail, CloudFront, Lambda, EKS, ECS, etc.)
• 3+ years of experience with serverless technologies, services, and container technologies (Docker, Kubernetes, etc.)
o Manage Kubernetes charts using Helm.
o Manage production application deployments in Kubernetes clusters using kubectl.
o Expertise in deploying distributed apps with containers (Docker) and orchestration (Kubernetes/EKS).
o Experience with infrastructure-as-code tools for provisioning and managing Kubernetes infrastructure.
o (Preferred) Certification in container orchestration systems and/or Certified Kubernetes Administrator.
o Experience with log management and analytics tools such as Splunk/ELK.
• 3+ years of experience writing, debugging, and enhancing Terraform to create infrastructure-as-code scripts for EKS, EC2, S3, and other AWS services.
o Expertise with key Terraform features such as infrastructure as code, execution plans, resource graphs, and change automation.
o Implemented cluster services using Kubernetes and Docker, managing local deployments by building self-hosted Kubernetes clusters with Terraform.
o Managed provisioning of AWS infrastructure using Terraform.
o Develop and maintain infrastructure-as-code solutions using Terraform.
• Ability to write scripts in JavaScript, Bash, Python, TypeScript, or similar languages.
• Able to work independently and as part of a team to architect and implement new solutions and technologies.
• Very strong written and verbal communication skills; the ability to communicate verbally and in writing with all levels of employees and management, capable of successful formal and informal communication, speaking and writing clearly and understandably at the right level.
• Ability to identify, evaluate, learn, and POC new technologies for implementation.
• Experience in designing and implementing highly resilient AWS solutions.
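The "execution plans and change automation" Terraform feature named above can be understood as a diff between current and desired state. A toy Python model of that plan step - this is an illustration of the concept only, not Terraform's actual algorithm, and the resource maps are invented examples:

```python
def plan(current: dict[str, dict],
         desired: dict[str, dict]) -> dict[str, list[str]]:
    """Compute a Terraform-style execution plan from two state maps.

    Resources only in `desired` are created, only in `current` are
    destroyed, and present in both with differing attributes are updated.
    """
    return {
        "create": sorted(k for k in desired if k not in current),
        "destroy": sorted(k for k in current if k not in desired),
        "update": sorted(k for k in desired
                         if k in current and desired[k] != current[k]),
    }
```

Real Terraform additionally orders these operations along a resource dependency graph, which is why `resource graphs` appear alongside execution plans in the requirement.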
Posted 1 month ago
6.0 - 8.0 years
20 - 25 Lacs
Pune
Work from Office
Roles & Responsibilities:
- Design and implement secure AWS cloud architectures aligned with business and compliance requirements
- Automate security controls and integrate them into CI/CD pipelines
- Manage and monitor AWS security tools such as GuardDuty, Security Hub, and CloudTrail
- Develop and enforce IAM policies using least-privilege principles
- Conduct threat modeling, vulnerability assessments, and cloud security posture evaluations
- Ensure compliance with standards like SOC 2, ISO 27001, and NIST
- Support audit readiness and implement risk treatment plans
- Collaborate with DevOps teams to embed security in cloud deployments
- Promote a DevSecOps culture across development and operations teams
- Create and maintain security-as-code using CloudFormation, Terraform, and scripting
- Automate detection, remediation, and incident response processes
- Provide security guidance during cloud migrations and new service adoptions

Qualifications:
- Bachelor's in Cybersecurity, Computer Science, or a related field (Master's preferred)
- 7+ years in cybersecurity, with 5+ years in cloud security
- Strong expertise in AWS security tools (GuardDuty, Security Hub, IAM, KMS, CloudTrail)
- Familiar with cloud security frameworks (AWS Well-Architected, NIST CSF, CSA CCM)
- Experience in securing CI/CD pipelines and implementing IaC security (CloudFormation/Terraform)
- Hands-on with CSPM tools and automated security validation
- Deep understanding of IAM principles and DevSecOps practices
- Proficient in scripting (Python, Bash) for automation
- Strong knowledge of network, container, and serverless security
- Excellent communication skills (verbal and written)
- Certifications: AWS Security Specialty, CCSP, CISSP, or equivalent

Preferred Qualifications:
- Experience with multi-cloud (AWS, Azure, GCP) security
- Understanding of regulatory frameworks (e.g., GDPR, HIPAA, ISO)
- Hands-on with container security (Docker, Kubernetes, ECS/EKS)
- Experience with Zero Trust security models in cloud
- Familiarity with automated incident response and cloud-native tools
- Knowledge of HashiCorp Vault or similar tools for secrets management
- Experience securing data lakes and analytics platforms
- Worked with CWPP and serverless security best practices
- Cloud security experience in energy efficiency/sustainability domains
- Experience in cloud threat modeling and collaborating with global teams
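The CSPM work above is essentially running rules like "no security group may expose SSH to the internet" against resource inventories. An illustrative sketch of one such check - the rule and the simplified security-group shape are invented for the example, loosely modeled on what tools like Checkov or Security Hub evaluate:

```python
def open_ssh_findings(security_groups: list[dict]) -> list[str]:
    """Flag security groups exposing SSH (port 22) to 0.0.0.0/0.

    Each group is a dict with a `name` and a list of `ingress` rules
    carrying `from_port`, `to_port`, and `cidr_blocks`.
    """
    findings = []
    for sg in security_groups:
        for rule in sg.get("ingress", []):
            in_range = rule["from_port"] <= 22 <= rule["to_port"]
            if in_range and "0.0.0.0/0" in rule.get("cidr_blocks", []):
                findings.append(sg["name"])
                break  # one finding per group is enough
    return findings
```

Wired into a pipeline, a non-empty findings list would fail the build, which is the "automated security validation" the posting asks for.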
Posted 1 month ago
14.0 - 20.0 years
25 - 40 Lacs
Hyderabad, Bengaluru
Hybrid
We are Hiring Senior Consultant Cyber Security Solution Architect Location: Bangalore, Hyderabad Experience: 14+ years Are you passionate about designing secure, scalable cloud and enterprise security architectures? Join us as a Cyber Security Solution Architect and be at the forefront of helping clients secure their digital ecosystems. What You’ll Do: Design and deliver end-to-end cybersecurity solutions for enterprise clients Collaborate with infra/application architects to embed security in architecture Lead cloud security (Azure/AWS/GCP), DDoS, SIEM, WAF, and container security design Define KPIs and lead security assessments & compliance initiatives (ISO 27001, NIST) Build client-facing proposals and conduct solution defense with stakeholders What We’re Looking For: 14+ years of overall experience, with 5+ years in complex security engineering projects Strong hands-on with tools: Azure Security Center, GuardDuty, Palo Alto, Qualys, etc. Exposure to CASB, Zero Trust, IAM, and multi-cloud security Excellent communication, client interaction, and solutioning skills Preferred Certifications: CISSP | CISM | CEH | CCSP | TOGAF | AWS/Azure/GCP Security Ready to shape the future of enterprise security? Apply now / email at mary.nancy1@sonata-software.com
Posted 1 month ago
6.0 - 8.0 years
8 - 10 Lacs
Hyderabad
Hybrid
Job Title: Program Manager
Business Unit: Piramal Swasthya
Domain: Social Sector
Location: PSMRI office, Hyderabad
Big Bet: Shared Services
Department: IT

Purpose of Job
Design, implement, and maintain scalable, secure, and high-performance DevOps practices on AWS infrastructure. Drive the integration of security into every phase of the DevOps lifecycle. Support automation, deployment, monitoring, and security hardening to achieve agility, reliability, and compliance.

Essential Qualifications:
- Any graduate, preferably a bachelor's degree in computer science or information technology
- Overall working experience of 6-8 years
- Minimum 4 years of experience in DevOps with AWS and security automation
- AWS Certified DevOps Engineer / AWS Security Specialty (preferred)

Preferred Key Skills/Qualifications:
- Deep understanding of AWS services (EC2, S3, IAM, RDS, Lambda, CloudTrail, CloudWatch, Config, GuardDuty, VPC, EKS, CloudFormation)
- Infrastructure as Code (Terraform, AWS CDK, CloudFormation)
- Network security, firewall configuration, security groups, VPC subnetting
- Identity & Access Management (IAM, RBAC, MFA, SSO)
- Vulnerability assessment, penetration testing, patch management
- CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD)
- Scripting (Python, Bash, Shell)
- Containerization (Docker, Kubernetes/EKS)
- Security-as-Code and automated compliance tools (e.g., Checkov, TFSec, Open Policy Agent)
- Secrets management (HashiCorp Vault, AWS Secrets Manager)
- SAST/DAST tools (SonarQube, OWASP ZAP, Snyk, Fortify)
- IAM roles, policies, encryption, KMS, and secure configurations
- Centralized logging and monitoring (CloudWatch, ELK Stack, Prometheus, Grafana)
- Excellent documentation, communication, and collaboration skills

Essential Experience:
- Automating security checks in CI/CD pipelines
- Implementing least-privilege access control and identity federation
- Securing infrastructure provisioning and deployments
- Conducting threat modeling and vulnerability remediation
- Enforcing compliance and audit readiness (ISO, HIPAA, etc.)
- Working knowledge of secure networking (VPCs, firewalls, VPNs, NACLs, etc.)
- Incident response and root cause analysis for infrastructure issues
- Partnering with infosec teams to roll out security best practices
- Supporting development teams to adopt security-by-design

Competencies
- Strong analytical, problem-solving, and debugging skills
- Security-first mindset across DevOps practices
- Effective communicator and team collaborator
- Continuous learning and proactive approach

Decision Making
- Choice of security tools and DevSecOps frameworks
- Cloud architecture decisions aligned with compliance

Key Roles/Responsibilities:
- Build and maintain AWS cloud environments with infrastructure as code
- Integrate security controls into CI/CD pipelines and DevOps workflows
- Automate security scans, testing, and compliance validation
- Support secure deployment of applications in containers and serverless setups
- Implement cloud-native logging, monitoring, and alerting systems
- Manage secrets, certificates, and access securely
- Work closely with development, infosec, and operations teams to enforce DevSecOps best practices
- Document all configurations, procedures, and known issues
- Conduct internal training on DevOps, secure coding and security best practices
- Respond to security incidents and participate in audits and reviews
- Ensure adherence to SLAs, compliance mandates, and performance KPIs
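"Automating security checks in CI/CD pipelines" often starts with a pre-commit or pipeline step that scans the diff for hardcoded secrets. A minimal illustrative scanner - the two regex patterns are common examples (AWS access key IDs and PEM private-key headers), not an exhaustive rule set, and the AKIA string in the test is Amazon's published dummy example:

```python
import re

# A couple of well-known secret signatures; real tools ship hundreds.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}


def scan_for_secrets(text: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) for each suspected secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits
```

In a pipeline, a non-empty result would fail the job; dedicated tools (gitleaks, trufflehog) add entropy checks and allowlists on top of the same idea.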
Posted 2 months ago
0.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Req ID: 314827

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking an AWS Systems Integration Specialist Advisor to join our team in Bengaluru, Karnataka (IN-KA), India (IN).

Pipeline RR

The following items are considered in scope and may be varied by mutual agreement with Esyasoft, depending on the specific requirements of the assignment:

* Cloud infrastructure services: build, configure, and perform activities related to AWS infrastructure components.
* Design, build, configure, and perform activities related to Oracle Cloud Infrastructure.
* Prepare technotes for TDA approvals.
* Prepare and review technical design documents (HLD, LLD).
* Design, build, and perform activities related to Active Directory (AD) for AWS environments, including AD separation activities.
* Design, build, and perform activities related to Okta and PAM.
* Necessary support (if requested) for Citrix/Maximo infrastructure.
* Project support for AWS environment builds.
* Project support on Oracle Cloud Infrastructure.
* AWS account management.
* AWS security optimization.
* Build new syslog servers/load balancers in Data Staging via CF; debug syslog logs and forward them to Sentinel.
* Onboard CloudTrail/GuardDuty logs into Sentinel.
* Necessary support (if requested) for patching.
* Change reviews and impact assessments.
* Install agents on server/client instances.
* Troubleshooting, diagnosis, and issue resolution.
* Create TRAP/TRM, knowledge KT, and build notes, and onboard into service.

About NTT DATA

NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at .

NTT DATA endeavors to make accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications.

NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
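The scope above includes debugging syslog logs and forwarding them to Sentinel. As a purely illustrative sketch (not part of the listing, and independent of any specific forwarder), a minimal parser for classic RFC 3164-style syslog lines in Python could look like this; the sample line and field names are assumptions for the example:

```python
import re

# Classic RFC 3164-style syslog line: "<PRI>Mon DD HH:MM:SS host message"
SYSLOG_RE = re.compile(
    r"^<(?P<pri>\d{1,3})>"
    r"(?P<timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) "
    r"(?P<message>.*)$"
)

def parse_syslog_line(line):
    """Parse a classic syslog line, decoding facility/severity from the PRI field."""
    m = SYSLOG_RE.match(line)
    if not m:
        return None
    pri = int(m.group("pri"))
    return {
        "facility": pri // 8,   # PRI = facility * 8 + severity
        "severity": pri % 8,
        "timestamp": m.group("timestamp"),
        "host": m.group("host"),
        "message": m.group("message"),
    }

if __name__ == "__main__":
    print(parse_syslog_line("<34>Oct 11 22:14:15 mymachine su: 'su root' failed"))
```

In a real pipeline, records parsed this way would typically be batched and shipped to the SIEM (here, Microsoft Sentinel) via its ingestion API or an agent, which is out of scope for this sketch.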
Posted 2 months ago
3.0 - 6.0 years
4 - 9 Lacs
Chennai
Work from Office
**Position Overview:**

We are seeking an experienced AWS Cloud Engineer with a robust background in Site Reliability Engineering (SRE). The ideal candidate will have 3 to 6 years of hands-on experience managing and optimizing AWS cloud environments with a strong focus on performance, reliability, scalability, and cost efficiency.

**Key Responsibilities:**

* Deploy, manage, and maintain AWS infrastructure, including EC2, ECS Fargate, EKS, RDS Aurora, VPC, Glue, Lambda, S3, CloudWatch, CloudTrail, API Gateway (REST), Cognito, Elasticsearch, ElastiCache, and Athena.
* Implement and manage Kubernetes (K8s) clusters, ensuring high availability, security, and optimal performance.
* Create, optimize, and manage containerized applications using Docker.
* Develop and manage CI/CD pipelines using AWS native services and YAML configurations.
* Proactively identify cost-saving opportunities and apply AWS cost optimization techniques.
* Set up secure access and permissions using IAM roles and policies.
* Install, configure, and maintain application environments, including:
  * Python-based frameworks: Django, Flask, FastAPI
  * PHP frameworks: CodeIgniter 4 (CI4), Laravel
  * Node.js applications
* Install and integrate AWS SDKs into application environments for seamless service interaction.
* Automate infrastructure provisioning, monitoring, and remediation using scripting and Infrastructure as Code (IaC).
* Monitor, log, and alert on infrastructure and application performance using CloudWatch and other observability tools.
* Manage and configure SSL certificates with ACM and load balancing using ELB.
* Conduct advanced troubleshooting and root-cause analysis to ensure system stability and resilience.

**Technical Skills:**

* Strong experience with AWS services: EC2, ECS, EKS, Lambda, RDS Aurora, S3, VPC, Glue, API Gateway, Cognito, IAM, CloudWatch, CloudTrail, Athena, ACM, ELB, ElastiCache, and Elasticsearch.
* Proficiency in container orchestration and microservices using Docker and Kubernetes.
* Competence in scripting (Shell/Bash), configuration with YAML, and automation tools.
* Deep understanding of SRE best practices, SLAs, SLOs, and incident response.
* Experience deploying and supporting production-grade applications in Python (Django, Flask, FastAPI), PHP (CI4, Laravel), and Node.js.
* Solid grasp of CI/CD workflows using AWS services.
* Strong troubleshooting skills and familiarity with logging/monitoring stacks.
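The responsibilities above include setting up secure access with IAM roles and policies. As an illustrative, dependency-free sketch (the bucket name and statement IDs are hypothetical), a least-privilege read-only S3 policy document can be composed as plain JSON in Python:

```python
import json

def s3_read_only_policy(bucket: str) -> str:
    """Return an IAM policy document (JSON) granting read-only access to one bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Listing requires the bucket ARN itself...
                "Sid": "ListBucket",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [f"arn:aws:s3:::{bucket}"],
            },
            {
                # ...while object reads require the /* object ARN pattern.
                "Sid": "GetObjects",
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            },
        ],
    }
    return json.dumps(policy, indent=2)

if __name__ == "__main__":
    print(s3_read_only_policy("example-app-logs"))
```

Generating policy documents in code like this makes them easy to review, diff, and attach to roles via IaC tooling rather than hand-editing them in the console.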
Posted 2 months ago
7.0 - 9.0 years
0 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Req ID: 319099

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Systems Integration Specialist Advisor to join our team in Gurgaon, Haryana (IN-HR), India (IN).

AWS Developer, 7+ years of experience. Primary skills: C# and Python CDK.

Expertise is needed in the following AWS services and tools:

* CDK with Python
* Amazon AppFlow
* Step Functions
* Lambda
* S3
* EventBridge
* CloudWatch/CloudTrail/X-Ray
* GitHub

NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
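The listing above asks for Step Functions alongside CDK with Python. A full CDK stack needs AWS tooling installed, so as a dependency-free sketch, here is the kind of Amazon States Language definition such a workflow ultimately deploys, composed as plain JSON in Python; the state names and Lambda ARN are illustrative assumptions:

```python
import json

def simple_workflow_definition(lambda_arn: str) -> dict:
    """Amazon States Language definition: invoke a Lambda with retry, then succeed."""
    return {
        "Comment": "Minimal Lambda-invoking workflow (illustrative)",
        "StartAt": "InvokeProcessor",
        "States": {
            "InvokeProcessor": {
                "Type": "Task",
                "Resource": lambda_arn,
                # Back off and retry on Lambda throttling before failing the execution.
                "Retry": [
                    {
                        "ErrorEquals": ["Lambda.TooManyRequestsException"],
                        "IntervalSeconds": 2,
                        "MaxAttempts": 3,
                        "BackoffRate": 2.0,
                    }
                ],
                "Next": "Done",
            },
            "Done": {"Type": "Succeed"},
        },
    }

if __name__ == "__main__":
    defn = simple_workflow_definition(
        "arn:aws:lambda:us-east-1:123456789012:function:processor"
    )
    print(json.dumps(defn, indent=2))
```

In CDK, the same shape would typically be expressed with the `aws_stepfunctions` constructs rather than raw JSON, but the deployed state machine compiles down to a definition like this one.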
Posted 2 months ago