
1098 S3 Jobs - Page 44

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

10.0 - 12.0 years

35 - 45 Lacs

Pune

Work from Office

Expert-level experience in backend development using .NET Core, C#, and EF Core. Strong expertise in PostgreSQL and efficient database design. Proficient in building and maintaining RESTful APIs at scale. Strong frontend development experience with ReactJS, JavaScript, and TypeScript.

Required candidate profile: Proficiency in HTML5, CSS3, and responsive design best practices. Hands-on experience with AWS Cloud Services, specifically designing systems with SNS, SQS, EC2, Lambda, and S3 (a sketch of the SNS/SQS fan-out pattern follows).
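The SNS-plus-SQS design work this role calls for is typically a fan-out: a publisher writes to an SNS topic and one or more SQS queues subscribe to it. A minimal boto3 sketch, assuming AWS credentials are configured; the topic and queue names are hypothetical placeholders:

```python
import json
import boto3

# Hypothetical resource names for illustration only.
TOPIC_NAME = "order-events"
QUEUE_NAME = "order-events-worker"

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Create (or fetch) the topic and queue.
topic_arn = sns.create_topic(Name=TOPIC_NAME)["TopicArn"]
queue_url = sqs.create_queue(QueueName=QUEUE_NAME)["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Subscribe the queue so every published message fans out to it.
# (A real setup also needs a queue policy permitting SNS delivery.)
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Publish an event; all subscribed queues receive a copy.
sns.publish(TopicArn=topic_arn, Message=json.dumps({"order_id": 123}))
```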

Posted Date not available

Apply

12.0 - 15.0 years

35 - 45 Lacs

Pune

Hybrid

Strong frontend development experience with ReactJS and JavaScript or TypeScript. Proficiency in HTML5, CSS3, and responsive design best practices. Hands-on experience with AWS Cloud Services, specifically designing systems with SNS, SQS, EC2, Lambda, and S3.

Required candidate profile: Expert-level experience in backend development using .NET Core, C#, and EF Core. Strong expertise in PostgreSQL and efficient database design. Proficient in building and maintaining RESTful APIs at scale.

Posted Date not available

Apply

12.0 - 15.0 years

35 - 45 Lacs

Bengaluru

Hybrid

Strong frontend development experience with ReactJS and JavaScript or TypeScript. Proficiency in HTML5, CSS3, and responsive design best practices. Hands-on experience with AWS Cloud Services, specifically designing systems with SNS, SQS, EC2, Lambda, and S3.

Required candidate profile: Expert-level experience in backend development using .NET Core, C#, and EF Core. Strong expertise in PostgreSQL and efficient database design. Proficient in building and maintaining RESTful APIs at scale.

Posted Date not available

Apply

7.0 - 12.0 years

18 - 27 Lacs

Pune

Work from Office

Technical Skills:
- Cloud & Compute: EC2, Auto Scaling, Elastic Load Balancer (ELB), AWS Lambda, AWS Batch
- Containers & Orchestration: Amazon EKS (Kubernetes), ECS, ECR
- Storage Services: S3 (Standard, Infrequent Access, Glacier), EBS, EFS
- Database Services: RDS (MySQL, PostgreSQL), Aurora, DynamoDB, ElastiCache
- Networking & Security: VPC, subnets, NAT Gateway, Transit Gateway, Route 53, PrivateLink; domain mapping, configuration, and DNS management; CloudFront (CDN), AWS WAF, AWS Shield, third-party SSL integration; IAM, KMS, Secrets Manager, Security Hub, GuardDuty
- Monitoring & Cost Optimization: CloudWatch (logs, metrics, alarms, dashboards), AWS Config, Trusted Advisor, Cost Explorer
- Serverless Technologies: AWS Lambda, API Gateway, EventBridge
- Backup & Disaster Recovery: AWS Backup, cross-region S3 replication, disaster recovery planning
- Operating Systems: Linux (Ubuntu, Amazon Linux, RHEL) with shell scripting and system administration; Windows Server with PowerShell scripting, Active Directory, and IIS hosting
- DevOps Tools: Kubernetes (EKS), Ansible, Terraform (basic); basic infrastructure automation and provisioning
- Networking & Firewalls: core networking (IP addressing, routing, subnets, DNS); basic hands-on experience with Palo Alto firewalls (rules, policies)
- Certification: AWS Certified Solutions Architect – Associate

Demonstrated expertise in designing and deploying scalable, highly available, and fault-tolerant systems on AWS.

Posted Date not available

Apply

3.0 - 8.0 years

5 - 10 Lacs

Hyderabad

Work from Office

Key Responsibilities:
- Administer and maintain AWS environments supporting data pipelines, including S3, EMR, Athena, Glue, Lambda, CloudFormation, and Redshift.
- Cost analysis: use AWS Cost Explorer to analyze service usage and cost, and create dashboards that alert on outliers (see the sketch after this list).
- Performance and audit: use AWS CloudTrail and CloudWatch to monitor system performance and usage.
- Monitor, troubleshoot, and optimize infrastructure performance and availability.
- Provision and manage cloud resources using Infrastructure as Code (IaC) tools (e.g., AWS CloudFormation, Terraform).
- Collaborate with data engineers working in PySpark, Hive, Kafka, and Python to ensure infrastructure alignment with processing needs.
- Support code integration with Git repositories.
- Implement and maintain security policies, IAM roles, and access controls.
- Participate in incident response and support resolution of operational issues, including on-call responsibilities.
- Manage backup, recovery, and disaster recovery processes for AWS-hosted data and services.
- Interface directly with client teams to gather requirements, provide updates, and resolve issues professionally.
- Create and maintain technical documentation and operational runbooks.

Required Qualifications:
- 3+ years of hands-on administration experience managing AWS infrastructure, particularly in support of data-centric workloads.
- Strong knowledge of AWS services including, but not limited to, S3, EMR, Glue, Lambda, Redshift, and Athena.
- Experience with infrastructure automation and configuration management tools (e.g., CloudFormation, Terraform, AWS CLI).
- Proficiency in Linux administration and shell scripting, including installing and managing software on Linux servers.
- Familiarity with Kafka, Hive, and distributed processing frameworks such as Apache Spark.
- Ability to manage and troubleshoot IAM configurations, networking, and cloud security best practices.
- Demonstrated experience with monitoring tools (e.g., CloudWatch, Prometheus, Grafana) and alerting systems.
- Excellent verbal and written communication skills; comfortable working with cross-functional teams and engaging directly with clients.

Preferred Qualifications:
- AWS certification (e.g., Solutions Architect Associate, SysOps Administrator)
- Experience supporting data science or analytics teams
- Familiarity with DevOps practices and CI/CD pipelines
- Familiarity with Apache Iceberg-based data pipelines
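The cost-analysis duty above usually starts with the Cost Explorer API. A minimal boto3 sketch that pulls daily cost per AWS service and flags days above a naive threshold; the date range and cutoff are illustrative assumptions:

```python
import boto3

# Cost Explorer is served out of us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-06-01", "End": "2024-06-08"},  # example window
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

THRESHOLD_USD = 50.0  # arbitrary example outlier cutoff
for day in response["ResultsByTime"]:
    for group in day["Groups"]:
        cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if cost > THRESHOLD_USD:
            print(day["TimePeriod"]["Start"], group["Keys"][0], f"${cost:.2f}")
```

A real alerting setup would feed these numbers into a dashboard or an SNS notification rather than printing them.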

Posted Date not available

Apply

5.0 - 8.0 years

7 - 17 Lacs

Gurugram, Bengaluru

Hybrid

JOB POSITION
Carelon Global Solutions India is seeking a talented and motivated Senior Software Engineer (AWS and Snowflake Developer) to join our team. Reporting to a Team Lead, the Senior Software Engineer will be responsible for developing and maintaining our data infrastructure, ensuring optimal performance, and supporting our data analytics initiatives. This role requires a deep understanding of both AWS cloud services and Snowflake data warehousing solutions, as well as mentoring the team in day-to-day tasks and helping as and when needed.

JOB RESPONSIBILITY
Design, develop, and maintain scalable data pipelines and ETL processes using AWS services and Snowflake (see the sketch following this listing). Implement data models, data integration, and data migration solutions.

QUALIFICATION
Full-time IT engineering or equivalent degree (preferably in computers).

EXPERIENCE
- 5+ years as an AWS and Snowflake developer.
- Proven experience in developing and managing data solutions using AWS services (e.g., S3, Lambda, Redshift, Glue).
- Extensive experience with Snowflake, including data warehousing concepts, performance tuning, and security.
- Experience with other data warehousing and analytics platforms (e.g., BigQuery, Redshift).
- Experience with version control systems (e.g., Git) and CI/CD pipelines.

SKILLS AND COMPETENCIES
- Design and development: design, develop, and maintain scalable data pipelines and ETL processes using AWS services and Snowflake; implement data models, data integration, and data migration solutions.
- Data management: manage and optimize Snowflake environments to ensure efficient performance and cost-effectiveness; develop and implement data governance policies and procedures.
- Collaboration: work closely with data analysts, data scientists, and other stakeholders to understand data requirements and deliver solutions that meet business needs; collaborate with DevOps teams to ensure seamless integration and deployment of data solutions.
- Performance optimization: monitor and optimize the performance of data pipelines and Snowflake queries; troubleshoot and resolve issues related to data processing and storage.
- Security and compliance: ensure data security and compliance with relevant regulations and standards; implement and manage access controls and encryption for sensitive data.
- Technical skills: proficiency in SQL and experience with data modeling; strong programming skills in languages such as Python or Scala; familiarity with data integration tools and ETL processes.
- Good to have: AWS certification (e.g., AWS Cloud Practitioner, AWS Certified Solutions Architect, AWS Certified Big Data Specialty); knowledge of data visualization tools (e.g., Tableau, Power BI).
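The AWS-plus-Snowflake pipeline work described here usually comes down to staging data in S3 and loading it with COPY INTO. A minimal sketch using the snowflake-connector-python package; every connection value, the @raw_stage external stage, and the raw.events table are illustrative assumptions:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical account identifier
    user="etl_user",        # hypothetical service user
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # Assumes an external stage (@raw_stage) already points at the S3 bucket.
    cur.execute("""
        COPY INTO raw.events
        FROM @raw_stage/events/
        FILE_FORMAT = (TYPE = 'JSON')
        ON_ERROR = 'SKIP_FILE'
    """)
    print(cur.fetchall())  # per-file load results returned by COPY INTO
finally:
    conn.close()
```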

Posted Date not available

Apply

5.0 - 10.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Python API development, FastAPI framework, AWS (EC2, S3, Lambda, and RDS), Agile.

Preferred candidate profile:
- Proficiency in Python API development with frameworks such as FastAPI (a minimal sketch follows).
- Understanding of core AWS services like EC2, S3, Lambda, and RDS.
- Knowledge of managing relational databases like AWS RDS; proficiency in SQL for querying, managing data, and performance tuning.
- Experience working in an Agile environment.

Good to have:
- Knowledge of Oracle Applications R12.
- Experience with continuous integration and continuous deployment (CI/CD) pipelines using DevOps tools like Jenkins, Git, and Kompass.
- Knowledge of Kubernetes concepts (pods, services, deployments, namespaces, clusters, scaling, monitoring) and YAML files.
- Experience with Apache NiFi for automating data flows between systems; ability to configure and manage NiFi processors for data ingestion and transformation.
- Experience in Oracle PL/SQL for writing and debugging stored procedures, functions, and triggers.
- Oracle SOA Suite for building, deploying, and managing service-oriented architectures.
- Experience with BPEL (Business Process Execution Language) for orchestrating business processes.
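A minimal FastAPI sketch combining the two headline skills above: one endpoint that serves object metadata from S3 via boto3. The bucket name is a hypothetical placeholder; run with uvicorn (e.g., `uvicorn app:app`):

```python
import boto3
from botocore.exceptions import ClientError
from fastapi import FastAPI, HTTPException

app = FastAPI()
s3 = boto3.client("s3")
BUCKET = "example-reports-bucket"  # hypothetical bucket name

@app.get("/reports/{key}")
def report_metadata(key: str):
    """Return size and last-modified time of an S3 object, or 404."""
    try:
        head = s3.head_object(Bucket=BUCKET, Key=key)
    except ClientError:
        raise HTTPException(status_code=404, detail="report not found")
    return {
        "key": key,
        "size": head["ContentLength"],
        "modified": head["LastModified"].isoformat(),
    }
```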

Posted Date not available

Apply

4.0 - 9.0 years

17 - 27 Lacs

Hyderabad

Work from Office

Job Title: Data Engineer

Mandatory Skills: Data Engineer, Python, AWS, SQL, Glue, Lambda, S3, SNS, ML, SQS

Job Summary: We are seeking a highly skilled Data Engineer (SDET) to join our team, responsible for ensuring the quality and reliability of complex data workflows, data migrations, and analytics solutions across both cloud and on-premises environments. The ideal candidate will have extensive experience in SQL, Python, AWS, and ETL testing, along with a strong background in data quality assurance, data science platforms, DevOps pipelines, and automation frameworks. This role involves close collaboration with business analysts, developers, and data architects to support end-to-end testing, data validation, and continuous integration for data products. Expertise in tools like Redshift, EMR, Athena, Jenkins, and various ETL platforms is essential, as is experience with NoSQL databases, big data technologies, and cloud-native testing strategies.

Role and Responsibilities:
- Work with business stakeholders, business systems analysts, and developers to ensure quality delivery of software.
- Interact with key business functions to confirm data quality policies and governed attributes.
- Follow quality management best practices and processes to bring consistency and completeness to integration service testing.
- Design and manage AWS testing environments for data workflows during development and deployment of data products.
- Assist the team with test estimation and test planning; design and develop reports and dashboards.
- Analyze and evaluate data sources, data volume, and business rules.
- Proficiency with SQL; familiarity with Python, Scala, Athena, EMR, Redshift, and AWS; NoSQL and unstructured data experience.
- Extensive experience with programming tools from MapReduce to HiveQL.
- Experience with data science platforms like SageMaker, Machine Learning Studio, or H2O.
- Well versed in data flow and test strategy for cloud and on-prem ETL testing.
- Interpret and analyze data from various source systems to support data integration and data reporting needs.
- Test database applications to validate source-to-destination data movement and transformation (see the sketch after this list).
- Work with team leads to prioritize business and information needs.
- Develop complex SQL scripts (primarily advanced SQL) for cloud and on-prem ETL.
- Develop and summarize data quality analyses and dashboards.
- Knowledge of data modeling and data warehousing concepts, with emphasis on cloud and on-prem ETL.
- Execute testing of data analytics and data integration on time and within budget.
- Troubleshoot and determine the best resolution for data issues and anomalies.
- Experience in functional, regression, system, integration, and end-to-end testing.
- Deep understanding of data architecture and data modeling best practices and guidelines for different data and analytics platforms.

Required Skills and Qualifications:
- Extensive experience in data migration is a must (Teradata to Redshift preferred), including both data migration and data transformation testing.
- Extensive testing experience with SQL and Unix/Linux scripting is a must.
- Extensive experience testing cloud and on-prem ETL tools (e.g., Ab Initio, Informatica, SSIS, DataStage, Alteryx, Glue).
- Extensive experience with DBMSs such as Oracle, Teradata, SQL Server, DB2, Redshift, Postgres, and Sybase.
- Extensive experience with Python scripting and AWS/cloud technologies, including Athena, EMR, and Redshift.
- Experienced in large-scale application development testing across cloud/on-prem data warehouses, data lakes, and data science platforms; experience with multi-year, large-scale projects.
- Expert technical skills with hands-on testing experience using SQL queries.
- API/REST Assured automation, building reusable frameworks, and good technical expertise/acumen.
- Java/JavaScript: core Java, integration, and API implementation.
- Functional/UI/Selenium: BDD/Cucumber, SpecFlow, data validation, Kafka, big data; automation experience using Cypress.
- AWS/cloud: Jenkins, GitLab, EC2, S3; building Jenkins CI/CD pipelines; Sauce Labs.

Preferred Skills:
- REST APIs and microservices using JSON; SoapUI.
- Extensive experience in the DevOps/DataOps space; strong experience working with DevOps and build pipelines.
- Strong experience with AWS data services including Redshift, Glue, Kinesis, Kafka (MSK), EMR/Spark, SageMaker, etc.
- Experience with technologies like Kubeflow, EKS, and Docker.
- Extensive experience with NoSQL and unstructured data stores such as MongoDB, Cassandra, Redis, and ZooKeeper.
- Extensive experience in MapReduce using tools like Hadoop, Hive, Pig, Kafka, S4, and MapR.
- Experience using Jenkins and GitLab.
- Experience with both Waterfall and Agile methodologies.
- Experience testing storage tools like S3 and HDFS.
- Experience with one or more industry-standard defect or test case management tools.
- Great communication skills (regularly interacts with cross-functional team members).
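The source-to-destination validation called out above often reduces to count-and-checksum reconciliation run against both ends of a migration. A minimal, driver-agnostic sketch; the cursors are assumed to be DB-API cursors from whatever source (e.g., Teradata) and target (e.g., Redshift) drivers are in use, and the amount column is a placeholder:

```python
# Portable reconciliation query run against both databases.
RECON_SQL = "SELECT COUNT(*), SUM(amount) FROM {table}"

def fetch_one(cursor, sql):
    cursor.execute(sql)
    return cursor.fetchone()

def reconcile(src_cur, tgt_cur, src_table, tgt_table):
    """Compare row count and a column checksum between source and target."""
    src = fetch_one(src_cur, RECON_SQL.format(table=src_table))
    tgt = fetch_one(tgt_cur, RECON_SQL.format(table=tgt_table))
    names = ("row count", "amount checksum")
    # An empty list means source and target agree on both measures.
    return [name for name, s, t in zip(names, src, tgt) if s != t]
```

A fuller harness would add per-column hash comparisons and sample-row diffs, but this shape is the core of most migration sign-off checks.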

Posted Date not available

Apply

4.0 - 8.0 years

6 - 10 Lacs

Pune

Work from Office

BASIC PURPOSE:
The primary responsibility of the DevOps Engineer is to convert the Company's technical requirements into cloud-based solutions and strategy. The DevOps Engineer will deploy, automate, maintain, and manage cloud-based application systems to ensure the availability, performance, scalability, and security of production systems. The DevOps Engineer also develops cloud adoption plans, cloud application designs, and cloud management and monitoring.

ESSENTIAL FUNCTIONS:
- Build, deploy, automate, maintain, manage, and support AWS cloud-based infrastructure (servers, databases, services, networks, monitoring, reporting) to ensure the availability, performance, scalability, and security of development, test, and production systems.
- Design and build solutions using AWS services such as S3, CloudFront, Route53, API Gateway, EC2, Lambda, SSM, SNS, SQS, EventBridge, Glue, ALB, VPC, IAM, and RDS.
- Test, validate, and implement performance and resource optimization improvements in consultation with AWS DevOps teams.
- Support the Continuous Integration and Continuous Delivery (CI/CD) process; implement the CI/CD process for the IT organization (GitLab CI) and maintain the CI/CD pipeline by enabling suitable DevOps channels across the organization.
- System troubleshooting and problem solving across platform and application domains.
- Ensure critical system security using best-in-class cloud security solutions.
- Integrate with other on-premises systems and third-party cloud applications; configure and support on-premises-to-cloud connectivity.
- Analyze, execute, and streamline DevOps practices and facilitate the deployment process and automation (a cleanup-automation sketch follows this listing).
- Experience with Terraform for IaC.
- Evaluate new cloud technology options and products, suggest architecture improvements, and recommend process improvements.
- Work closely with architects and engineers to design networks, systems, data models, and storage environments that effectively reflect business needs, security requirements, and service-level requirements in AWS.
- Design hybrid cloud architectures between Azure and AWS, supporting interoperability between the two cloud environments.
- Design highly available BC/DR strategies for all cloud resources.
- Must be able to perform the essential functions of the job, with or without reasonable accommodation.

POSITION SCOPE:
- Develop and maintain scalable and automated cloud infrastructure using AWS and Azure.
- Ensure robust CI/CD pipelines to support continuous integration and continuous delivery practices.
- Design and implement effective cloud security strategies and operations to protect data and maintain compliance with industry standards.
- Collaborate with various stakeholders to assess functional needs and translate them into robust and scalable infrastructure solutions.
- Stay updated on emerging cloud technologies and enhancements, and advocate for the adoption of new technologies that will benefit the organization.
- Troubleshoot and resolve complex cloud infrastructure issues across multiple domains and platforms.

REPORTING RELATIONSHIPS:
Assistant Manager, IT Infra-DevOps

QUALIFICATIONS:
- Bachelor's degree in computer science, information technology, engineering, or a related field; a master's degree is a plus.
- Proficient with AWS cloud services including EC2, S3, RDS, Lambda, CloudFront, Route53, API Gateway, and IAM.
- Strong knowledge of infrastructure as code (IaC) tools, particularly Terraform.
- Hands-on experience with scripting languages such as Python, Bash, or similar.
- Deep understanding of network architectures and cloud security protocols.
- Knowledge of Linux and Windows server environments.
- Certifications in AWS, Azure, or other relevant areas are highly desirable.
- Experience with containerization technologies such as Docker, Kubernetes, or similar.
- Knowledge of additional programming languages like Java, .NET, or PHP is a plus.
- Strong analytical and troubleshooting skills.
- Ability to work independently and as part of a team in a dynamic and fast-paced environment.
- Excellent communication and interpersonal skills.

CRITICAL COMPETENCIES FOR SUCCESS:
- Technical proficiency: expert-level knowledge in managing and configuring cloud environments, especially AWS and Azure; skilled in various DevOps tools and practices, capable of designing and executing solutions that enhance operational efficiency.
- Problem-solving skills: quickly identifies and troubleshoots issues across platforms and applications, proposing and implementing robust solutions; exhibits a strong analytical mindset and a systematic approach to problem resolution.
- Innovation and continuous improvement: continuously seeks ways to improve processes and implement best practices in infrastructure management and automation; keeps abreast of the latest industry trends and technologies and integrates new tools and techniques to maintain competitive advantage.
- Communication skills: effectively communicates technical information to non-technical stakeholders and clearly articulates problem statements and solutions to team members; capable of writing clear and comprehensive documentation and reports.
- Collaboration and teamwork: works cooperatively with team members across departments to achieve common goals; supports and mentors junior staff and contributes positively to the team culture.
- Adaptability: thrives in a fast-paced, evolving environment; quickly adapts to changes in technology, processes, and team structures; handles pressure and tight deadlines with professionalism.
- Security awareness: maintains a high standard of security within all cloud-based infrastructure and understands the best practices and compliance requirements related to cloud security.

WORK CONDITIONS:
Flexible work environment with a combination of remote and on-site work. Regular interaction with team members via virtual communication tools.
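One common example of the automation work described above is cost cleanup. A minimal boto3 sketch that lists unattached EBS volumes, a frequent source of waste; the region is an illustrative assumption, and a real job would also weigh tags and volume age:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

# "available" status means the volume is not attached to any instance.
paginator = ec2.get_paginator("describe_volumes")
for page in paginator.paginate(
    Filters=[{"Name": "status", "Values": ["available"]}]
):
    for vol in page["Volumes"]:
        print(vol["VolumeId"], f'{vol["Size"]} GiB', vol["CreateTime"].date())
```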

Posted Date not available

Apply

1.0 - 6.0 years

1 - 6 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Work from Office

The candidate will be responsible for designing, building, and maintaining scalable, secure, high-performance back-end systems and APIs, with hands-on framework experience, and for implementing solutions using microservices and RESTful/GraphQL APIs.

Required candidate profile: Optimize application performance, database queries, and system scalability; collaborate with cross-functional teams including frontend, DevOps, QA, and product.

Posted Date not available

Apply

14.0 - 20.0 years

30 - 45 Lacs

Gurugram, Bengaluru

Hybrid

Job Title: Senior Principal Data Engineer
Location: Gurgaon, Bangalore
Work Schedule: 12:00 PM to 8:30 PM IST
Job Type: Full-Time

Position Overview: We are seeking a Senior Principal Data Engineer with expertise in cloud-native AI/ML architecture and Generative AI. This role will be pivotal in designing innovative solutions, guiding development teams, and ensuring alignment with architectural best practices while leveraging cutting-edge technologies and AWS cloud services.

Key Responsibilities:
- Architect Generative AI solutions using AWS services: Bedrock, SageMaker, Kendra, S3, PGVector (a minimal Bedrock call is sketched after this list).
- Lead end-to-end solution design for AI/ML initiatives and complex data systems.
- Collaborate with cross-functional teams (onshore and offshore) to implement solutions.
- Conduct technical reviews, guide developers, and ensure code quality and best practices.
- Navigate and manage governance and architectural approval processes.
- Present solutions and provide technical advice to senior leadership and stakeholders.
- Mentor junior engineers and promote a culture of innovation and continuous learning.
- Evaluate and integrate emerging technologies like LangChain.
- Work closely with data scientists and data analysts to optimize model performance and data usage.
- Design and implement ETL pipelines and data flows, and support diverse data types (structured/unstructured).
- Lead technical workshops, documentation, and architecture planning.

Required Qualifications:
- 12-15 years of experience in software engineering, data architecture, and solution design.
- Proven track record in building and deploying AI/ML, analytics, and data-driven platforms.
- Expert-level knowledge of AWS services: Bedrock, SageMaker, Kendra, S3, PGVector.
- Strong hands-on experience in Python (primary), with additional skills in Java or Scala.
- Deep understanding of data structures, algorithms, and software design patterns.
- Strong background in data pipelines, ETL, and working with structured, semi-structured, and unstructured data.
- Familiarity with DevOps, CI/CD pipelines, and infrastructure-as-code principles.
- Awareness of AI ethics, bias mitigation, and responsible AI frameworks.
- Experience with tools/frameworks like LangChain is highly desirable.
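At its simplest, the Bedrock work above is a single runtime call to a foundation model. A minimal boto3 sketch using the model-agnostic converse API; the model ID and region are illustrative and assume the account has been granted access to that model:

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize the key fields of an orders table."}],
    }],
    inferenceConfig={"maxTokens": 256},
)
print(response["output"]["message"]["content"][0]["text"])
```

Production solutions layer retrieval (e.g., Kendra or PGVector lookups) and orchestration frameworks such as LangChain on top of this basic call.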

Posted Date not available

Apply

5.0 - 8.0 years

5 - 7 Lacs

Bengaluru

Work from Office

Seeking a highly skilled Java Developer with experience in cloud-native development, mainframe integration, and modern Java frameworks. The ideal candidate will have hands-on expertise in Spring Boot, Hibernate, AWS services, and containerization technologies, along with a solid understanding of message queues, inter-process communication, and system integration.

Key Responsibilities:
- Design, develop, and maintain scalable backend applications using Java, Spring, Spring Boot, and Hibernate.
- Build and integrate RESTful APIs and services with AWS cloud infrastructure (EC2, ECS, S3, SQS, Lambda).
- Implement serverless and containerized solutions using the AWS Serverless Framework and Docker.
- Collaborate with infrastructure teams to integrate with mainframe systems (e.g., ICL VME) and legacy platforms.
- Work with SQL and NoSQL databases for data querying and persistence.
- Ensure secure and efficient system interfaces, data exchange, and message passing.
- Participate in code reviews, performance tuning, and DevOps practices.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 5-8 years of experience in Java development with strong knowledge of: Spring Boot, Hibernate, and Java frameworks; AWS services (EC2, ECS, S3, SQS, Lambda); SQL, S3 programming, and data querying; containerization and serverless computing.
- Experience with mainframe systems, inter-process communication, and message queues.
- Familiarity with multi-paradigm programming, software development best practices, and system integration.

Preferred Qualifications:
- Experience with virtualization, cloud-native architecture, or microservices.
- Exposure to mainframe operations, middleware, or legacy modernization projects.
- Knowledge of DevOps tools, CI/CD pipelines, and infrastructure as code (IaC).

Posted Date not available

Apply

5.0 - 8.0 years

13 - 18 Lacs

Chennai

Hybrid

Responsibilities:
- Design, develop, and implement scalable machine learning models and algorithms to solve complex problems related to claims processing, fraud detection, risk stratification, member engagement, and predictive analytics within the payer landscape.
- Collaborate closely with data scientists, product managers, and other engineering teams to translate business requirements into technical specifications and deliver end-to-end ML solutions.
- Develop and optimize ML model training pipelines, ensuring data quality, feature engineering, and efficient model iteration.
- Conduct rigorous model evaluation, hyperparameter tuning, and performance optimization using statistical analysis and best practices.
- Integrate ML models into existing applications and systems, ensuring seamless deployment and operation.
- Write clean, well-documented, production-ready code, adhering to high software engineering standards.
- Participate in code reviews, contribute to architectural discussions, and mentor junior engineers.
- Stay abreast of the latest advancements in machine learning, healthcare technology, and industry best practices, actively proposing innovative solutions.
- Ensure all ML solutions comply with relevant healthcare regulations and data privacy standards (e.g., HIPAA).

Required Technical Skills:
- Programming language: expert proficiency in Python.
- Machine learning libraries: strong experience with PyTorch and scikit-learn.
- Version control: proficient with Git and GitHub.
- Testing: solid understanding of and experience with the Python unittest framework and pytest for unit, integration, and API testing (a small example follows this list).
- Deployment: hands-on experience with Dockerized deployment on AWS or Azure cloud platforms.
- CI/CD: experience with CI/CD pipelines using AWS CodePipeline or similar alternatives (e.g., Jenkins, GitLab CI).
- Cloud platforms: experience with AWS or Azure services relevant to ML workloads (e.g., SageMaker, EC2, S3, Azure ML, Azure Functions).
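A minimal sketch of the pytest expectation above: a shape and output-range check on a toy PyTorch model. The RiskScorer model is a placeholder for illustration, not a claims-domain architecture; run with `pytest` against a file named like test_model.py:

```python
import torch
from torch import nn

class RiskScorer(nn.Module):
    """Toy binary risk model: 12 features in, one probability out."""
    def __init__(self, n_features: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 8), nn.ReLU(),
            nn.Linear(8, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def test_forward_shape_and_range():
    model = RiskScorer()
    model.eval()
    batch = torch.randn(4, 12)  # batch of 4 synthetic members
    with torch.no_grad():
        scores = model(batch)
    assert scores.shape == (4, 1)
    # Sigmoid output must be a valid probability.
    assert torch.all((scores >= 0) & (scores <= 1))
```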

Posted Date not available

Apply

5.0 - 10.0 years

10 - 18 Lacs

Noida, Hyderabad, Gurugram

Work from Office

The Team: Cloud Solutions is a horizontal team within Market Intelligence. We provide common services to business lines within Market Intelligence and across other divisions within S&P Global. Specifically, Cloud Solutions provides:
- cloud engineering expertise to fast-track our product teams from on-premise hosting to cloud-native architectures
- support in implementing divisional guardrails to ensure the Market Intelligence cloud estate is secure and cost-efficient
- enablement through upskilling programs to educate our technologists on cloud best practices and corporate technology standards

We use the open-source tool Cloud Custodian to monitor resources in AWS and take corrective action where appropriate. This tool is key to ensuring consistent guardrails across the organisation (a Python sketch of such a guardrail check follows this listing).

Job Summary: We are looking for an experienced Python developer to help support and further develop the MI Cloud Custodian framework and the internal reporting tied to it.

Key Responsibilities:
- Collaborate with cross-functional teams to understand cloud governance needs and translate them into actionable policies
- Engage with product teams to define and implement policies aligned with Market Intelligence standards
- Develop and maintain new features and enhancements within the Python framework to improve its functionality and performance
- Design and improve internal reporting to deliver actionable insights from policy execution
- Create and manage GitHub workflows and automation pipelines to improve development and deployment processes

What We're Looking For:

Required Qualifications:
- A bachelor's or master's degree (or equivalent) in (but not necessarily limited to) Computer Science or Engineering
- Strong critical thinking and problem-solving skills
- 5+ years of experience in Python programming and experience with Python libraries
- Excellent collaboration and communication skills in a cross-functional environment
- Hands-on experience with EC2, S3, RDS, Lambda, and other AWS services

Preferred (Nice to Have):
- Experience with GitHub workflows and Actions
- Knowledge of infrastructure as code (IaC) tools such as Terraform or CloudFormation
- Understanding of cloud cost optimization and security best practices
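Cloud Custodian policies themselves are written in YAML, but the guardrail logic they encode is easy to picture in plain boto3. A sketch of one such check, flagging S3 buckets with no public-access-block configuration; this illustrates the idea only, not the team's actual policies:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        # Raises if the bucket has never had a public-access block set.
        s3.get_public_access_block(Bucket=name)
    except ClientError as err:
        code = err.response["Error"]["Code"]
        if code == "NoSuchPublicAccessBlockConfiguration":
            print(f"guardrail violation: {name} has no public access block")
        else:
            raise
```

In Custodian proper, the same intent is one policy declaration with a filter and a notify or remediate action, which is what makes the tool attractive for fleet-wide guardrails.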

Posted Date not available

Apply

5.0 - 10.0 years

20 - 35 Lacs

Noida, Hyderabad

Hybrid

Mandatory skills:
- AWS machine learning services
- Data science SaaS tools (Dataiku, Indico, H2O.ai, or similar platforms)
- Knowledge of AWS data engineering services (S3, Glue, Athena, Lambda)
- Python and common data manipulation libraries

Experience: 5+ years of experience in machine learning engineering or a related field.

Technical Skills:
- Programming languages: proficient in Python; experience with other languages (e.g., Java, Scala, R) is a plus.
- Machine learning libraries: strong experience with machine learning libraries and frameworks such as scikit-learn, TensorFlow, PyTorch, Keras, etc.
- Data processing: experience with data manipulation and processing using libraries like Pandas, NumPy, and Spark.
- Model deployment: experience with model deployment frameworks and platforms (e.g., TensorFlow Serving, TorchServe, Seldon, AWS SageMaker, Google AI Platform, Azure Machine Learning).
- Databases: experience with relational and NoSQL databases (e.g., SQL, MongoDB, Cassandra).
- Version control: experience with Git and other version control systems.
- DevOps: familiarity with DevOps practices and tools.
- Strong understanding of machine learning concepts and algorithms: regression, classification, clustering, deep learning, etc.

Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Ability

Posted Date not available

Apply

7.0 - 12.0 years

10 - 20 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Hi Team, we are looking for candidates with Python and AWS experience. Please respond only if you have at least 4+ years of experience in Python and AWS; mail your updated CV to Kanishk.mittal@thehrsolution.in.

Job Summary: As a Python Developer with AWS, you will be responsible for developing cloud-based applications, building data pipelines, and integrating with various AWS services. You will work closely with DevOps, Data Engineering, and Product teams to design and deploy solutions that are scalable, resilient, and efficient in an AWS cloud environment.

Notice Period: Immediate to 45 days

Key Responsibilities:
- Python development: design, develop, and maintain applications and services using Python in a cloud environment.
- AWS cloud services: leverage AWS services such as EC2, S3, Lambda, RDS, DynamoDB, and API Gateway to build scalable solutions.
- Data pipelines: develop and maintain data pipelines, including integrating data from various sources into AWS-based storage solutions (a minimal Lambda-based step is sketched after this listing).
- API integration: design and integrate RESTful APIs for application communication and data exchange.
- Cloud optimization: monitor and optimize cloud resources for cost efficiency, performance, and security.
- Automation: automate workflows and deployment processes using AWS Lambda, CloudFormation, and other automation tools.
- Security & compliance: implement security best practices (e.g., IAM roles, encryption) to protect data and maintain compliance within the cloud environment.
- Collaboration: work with DevOps, cloud engineers, and other developers to ensure seamless deployment and integration of applications.
- Continuous improvement: participate in the continuous improvement of development processes and deployment practices.

Required Qualifications:
- Python expertise: strong experience in Python programming, including libraries like Pandas, NumPy, and Boto3 (the AWS SDK for Python), and frameworks like Flask or Django.
- AWS knowledge: hands-on experience with AWS services such as S3, EC2, Lambda, RDS, DynamoDB, CloudFormation, and API Gateway.
- Cloud infrastructure: experience in designing, deploying, and maintaining cloud-based applications using AWS.
- API development: experience in designing and developing RESTful APIs, integrating with external services, and managing data exchanges.
- Automation & scripting: experience with automation tools and scripts (e.g., using AWS Lambda, Boto3, CloudFormation).
- Version control: proficiency with version control tools such as Git.
- CI/CD pipelines: experience building and maintaining CI/CD pipelines for cloud-based applications.

Preferred Qualifications:
- Familiarity with serverless architectures using AWS Lambda and other AWS serverless services.
- AWS certification (e.g., AWS Certified Developer Associate, AWS Certified Solutions Architect Associate) is a plus.
- Knowledge of containerization tools like Docker and orchestration platforms such as Kubernetes.
- Experience with Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.

Skills & Attributes:
- Strong analytical and problem-solving skills.
- Ability to work effectively in an agile environment.
- Excellent communication and collaboration skills to work with cross-functional teams.
- Focus on continuous learning and staying up to date with emerging cloud technologies.
- Strong attention to detail and a commitment to high-quality code.
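A minimal sketch of the Lambda-based pipeline step mentioned above: a handler triggered by an S3 upload that writes a derived summary object next to the original. The bucket layout and field names are illustrative assumptions, and the input is assumed to be a JSON array of rows:

```python
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Standard S3 event shape: bucket name and object key of the upload.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    rows = json.loads(body)

    # Derive a tiny summary and store it under a separate prefix.
    summary = {"source": key, "row_count": len(rows)}
    s3.put_object(
        Bucket=bucket,
        Key=f"summaries/{key}.summary.json",
        Body=json.dumps(summary).encode("utf-8"),
    )
    return summary
```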

Posted Date not available

Apply

9.0 - 14.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Position Purpose: The Senior Storage Engineer plays a critical role in supporting and advancing the enterprise storage infrastructure. This position exists to ensure the stability, scalability, and performance of SAN, NAS, and S3 storage platforms, directly contributing to the organization's data availability, business continuity, and digital transformation objectives. The engineer will work closely with cross-functional teams to deliver secure, efficient, and highly available storage solutions aligned with business and IT strategies.

Key Responsibilities

Direct Responsibilities (Essential):
- Administer and optimize SAN storage systems, including VMAX/PMAX, Pure Storage, and Brocade environments.
- Manage and support NAS technologies such as NetApp and VNX.
- Expert-level experience with S3 object storage platforms like Cleversafe and Dell EMC ECS.
- Collaborate across teams to deliver appropriate storage solutions for strategic and business-critical initiatives.
- Maintain uptime and performance across enterprise-wide storage environments.
- Provide high-level support for incidents, problem resolution, and root cause analysis.
- Ensure projects are delivered on time and aligned with enterprise architecture.
- Demonstrate a customer-centric approach in managing service delivery.

Contributing Responsibilities:
- Actively participate in the automation of storage operations using Python, Bash, or PowerShell scripting.
- Use DevOps practices, including version control (Git) and CI/CD pipelines, to streamline operations and infrastructure management.

Desirable:
- Hands-on experience with EMC Unity.
- Exposure to Big Data ecosystems, including the ELK Stack and distributed workloads.
- Knowledge of stateful containers, Kubernetes, and persistent storage for containerized environments.

Technical & Behavioral Competencies

Technical skills:
- SAN technologies: VMAX/PMAX, Pure Storage, Brocade
- NAS technologies: NetApp, VNX
- Object storage: S3 (Cleversafe, ECS)
- Scripting: Python, Bash, PowerShell
- DevOps: Git, CI/CD tools
- Platforms: EMC Unity, ELK, Kubernetes (desirable)

Behavioral skills:
- Strong analytical and problem-solving skills.
- Self-driven and capable of learning new technologies independently.
- Excellent communication and interpersonal skills.
- Team-oriented mindset with a collaborative approach.
- Ability to perform under pressure in a fast-paced environment.
- Curious, proactive, and open to innovation.

Qualifications:
- Bachelor's degree in Computer Science or equivalent (preferred but not mandatory).
- ITIL Foundation certification (desirable).

Languages: Fluent English (spoken and written), level B2/C or higher (mandatory).

Interested candidates can share their resume at subashini.gopalan@kiya.ai.

Posted Date not available

Apply

2.0 - 5.0 years

4 - 7 Lacs

Pune

Work from Office

...about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

ZS's Platform Development team designs, implements, tests, and supports ZS's ZAIDYN platform, which helps drive superior customer experiences and revenue outcomes through integrated products and analytics. Whether writing distributed optimization algorithms or advanced mapping and visualization interfaces, you will have an opportunity to solve challenging problems, make an immediate impact, and contribute to better health outcomes.

What you'll do:
- As part of our full-stack product engineering team, build multi-tenant cloud-based software products/platforms and internal assets that leverage cutting-edge technologies on the Amazon AWS cloud platform.
- Pair program, write unit tests, lead code reviews, and collaborate with QA analysts to ensure you develop the highest-quality multi-tenant software that can be productized.
- Work with junior developers to implement large features that are on the cutting edge of Big Data.
- Be a technical leader for your team, and help them improve their technical skills.
- Stand up for engineering practices that ensure quality products: automated testing, unit testing, agile development, continuous integration, code reviews, and technical design.
- Work with product managers and architects to design product architecture and work on POCs.
- Take immediate responsibility for project deliverables.
- Understand client business issues and design features that meet client needs.
- Undergo on-the-job and formal trainings and certifications, constantly advancing your knowledge and problem-solving skills.

What you'll bring:
- 1-3 years of experience in developing software, ideally building SaaS products and services.
- Bachelor's degree in CS, IT, or a related discipline.
- Strong analytic, problem-solving, and programming ability.
- Good hands-on experience with AWS services (EC2, EMR, S3, serverless stack, RDS, SageMaker, IAM, EKS, etc.).
- Experience coding in an object-oriented language such as Python, Java, or C#.
- Hands-on experience with Apache Spark, EMR, Hadoop, HDFS, or other big data technologies.
- Experience with development on the AWS (Amazon Web Services) platform is preferable.
- Experience in Linux shell or PowerShell scripting is preferable.
- Experience in HTML5, JavaScript, and JavaScript libraries is preferable.
- Good to have: Pharma domain understanding.
- Initiative and drive to contribute.
- Excellent organizational and task-management skills.
- Strong communication skills.
- Ability to work in global cross-office teams.
- ZS is a global firm; fluency in English is required.

Posted Date not available

Apply

1.0 - 4.0 years

3 - 7 Lacs

Pune

Work from Office

As a Cloud Engineer, you will be an individual contributor and subject matter expert who maintains and participates in the design and implementation of technology solutions. The engineer will collaborate within a team of technologists to produce enterprise-scale solutions for our clients' needs. This position will work with the latest Amazon Web Services technologies around cloud architecture, infrastructure automation, and network security.

What You'll Do:
- Identify and test prototype solutions and proofs of concept on public clouds.
- Help develop architectural standards and guidelines for scalability, performance, resilience, and efficient operations while adhering to necessary security and compliance standards.
- Work with application development teams to select and automate repeatable tasks, and participate and assist in root cause analysis activities.
- Architect cloud solutions using industry-leading DevSecOps best practices and technologies.
- Review software product designs to ensure consistency with architectural best practices; participate in regular implementation reviews to ensure consistent quality and adherence to internal standards.
- Partner closely with cross-functional leaders (platform engineers, software development, product management, business leaders) to ensure a clear understanding of business and technical needs; jointly select the best strategy after evaluating the benefits and costs associated with different approaches.
- Closely collaborate with implementation teams to ensure understanding and utilization of the most optimal approach.

What You'll Bring:
- 1-4 years in an Infrastructure Engineering, Software Engineering, or DevOps role, deploying and maintaining SaaS applications.
- 1-4 years of experience with AWS/Azure/GCP cloud technologies; at least one cloud proficiency certification is required.
- Hands-on experience with AWS services like Lambda, S3, RDS, EMR, CloudFormation (or Terraform), CodeBuild, Config, Systems Manager, and Service Catalog.
- Experience building automation using scripting languages like Bash, Python, or PowerShell.
- Experience working on and contributing to software application development, deployment, and management processes.
- Experience architecting and implementing cloud-based solutions with robust business continuity and disaster recovery requirements.
- Experience working in agile teams with short release cycles.
- Strong verbal, written, and team presentation communication skills. ZS is a global firm; fluency in English is required.
- Healthy doses of initiative and the ability to remain flexible and responsive in a very dynamic environment.
- Ability to work around unknowns and develop robust solutions; experience delivering quality work on defined tasks with limited oversight.
- Ability to quickly learn new platforms, cloud technologies, languages, tools, and techniques as needed to meet project requirements.

Posted Date not available

Apply

1.0 - 6.0 years

10 - 20 Lacs

Chennai

Hybrid

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent work experience.
- Experience working on the AWS cloud platform.
- Data engineer with expertise in developing big data and data warehouse platforms.
- Experience working with structured and semi-structured data.
- Expertise in developing big data solutions and ETL/ELT pipelines for data ingestion, data transformation, and optimization techniques.
- Experience working directly with technical and business teams.
- Able to create technical documentation.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.

Skillsets (must have):
- AWS (big data services): S3, Glue, Athena, EMR (an Athena query sketch follows this listing)
- Programming: Python, Spark, SQL, MuleSoft, Talend, dbt
- Data warehouse: ETL, Redshift/Snowflake

Skillset (good to have):
- Experience in data modeling.
- AWS certification for data engineering skills.
- Experience with ITSM processes/tools such as ServiceNow and Jira.
- Understanding of Spark, Hive, Kafka, Kinesis, Spark Streaming, and Airflow.
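Athena turns S3 data into SQL-queryable tables, which is the heart of the must-have stack above. A minimal boto3 sketch that runs a query and prints the first page of results; the database, table, and results bucket are illustrative placeholders:

```python
import time
import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) FROM raw.events GROUP BY 1",
    QueryExecutionContext={"Database": "raw"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Athena is asynchronous: poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=qid)
    for row in results["ResultSet"]["Rows"]:  # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```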

Posted Date not available

Apply

14.0 - 20.0 years

40 - 50 Lacs

Gurugram, Bengaluru

Hybrid

Job Title: Senior Principal Data Engineer
Location: Gurgaon
Work Schedule: 12:00 PM to 8:30 PM IST
Job Type: Full-Time

Job Summary: We are seeking an experienced Senior Principal Data Engineer with a strong background in AI/ML, Generative AI, and cloud-native architectures. This role involves leading the design and implementation of advanced data and AI solutions using AWS, mentoring technical teams, and driving innovation through emerging technologies.

Key Responsibilities:
- Architect and design Generative AI solutions leveraging AWS services (e.g., Bedrock, S3, PGVector, Kendra, SageMaker).
- Collaborate with engineering teams throughout the software development lifecycle to ensure robust and scalable solutions.
- Lead technical decision-making and resolve complex AI/ML challenges.
- Conduct solution reviews and ensure alignment with best practices and security policies.
- Guide solution governance and secure necessary architectural approvals.
- Integrate emerging technologies and frameworks (e.g., LangChain) into solution designs.
- Deliver technical presentations, workshops, and knowledge-sharing sessions.
- Create and maintain architectural documentation and design specifications.
- Mentor junior engineers and contribute to a culture of continuous learning.
- Partner with data scientists and analysts to enable effective model development and deployment.
- Coordinate with stakeholders and clients to align data architecture with business objectives.
- Stay current with industry trends in AI, machine learning, data engineering, and cloud technologies.

Required Qualifications:
- 12-15 years of experience in software development and architecture.
- Proven expertise in designing and delivering AI/ML and data-driven solutions.
- Deep understanding of AWS cloud services, especially Bedrock, SageMaker, Kendra, S3, and PGVector.
- Strong programming skills in Python (required), with additional experience in Java or Scala preferred.
- Solid foundation in data structures, algorithms, and software design patterns.
- Experience building ETL/data pipelines and working with diverse data types (structured, unstructured, semi-structured).
- Understanding of DevOps practices and CI/CD pipelines.
- Familiarity with Generative AI frameworks such as LangChain.
- Knowledge of AI ethics, bias mitigation, and responsible AI principles.

Nice to Have:
- Experience working with large-scale enterprise data systems.
- Exposure to cloud governance and architectural review boards.
- Certifications in AWS or AI/ML technologies.

Posted Date not available

Apply

5.0 - 10.0 years

0 - 2 Lacs

Hyderabad, Chennai

Hybrid

Job Title: AWS DevOps + Cloud Engineer
Experience: 5-8 Years
Location: Hyderabad/Chennai
Employment Type: Hybrid

Job Summary: We are seeking a skilled and experienced AWS DevOps + Cloud Engineer to join our team. The ideal candidate will have hands-on experience in managing cloud infrastructure, implementing CI/CD pipelines using Jenkins, and working with AWS services like EC2 and ECR. You will play a key role in automating deployments, optimizing cloud resources, and ensuring high availability and scalability of applications.

Key Responsibilities:
- Design, implement, and manage scalable, secure, and highly available AWS infrastructure.
- Configure and maintain EC2 instances, ECR repositories, and other AWS services.
- Develop and maintain CI/CD pipelines using Jenkins for automated deployments.
- Monitor system performance and troubleshoot issues across cloud environments.
- Collaborate with development and QA teams to streamline release processes.
- Implement infrastructure as code (IaC) using tools like Terraform or CloudFormation.
- Ensure compliance with security best practices and policies.
- Optimize cost and performance of cloud resources.

Required Skills:
- 5-8 years of experience in DevOps and cloud engineering.
- Strong expertise in AWS services, especially EC2, ECR, IAM, VPC, S3, and CloudWatch.
- Proficiency in CI/CD tools, particularly Jenkins.
- Experience with containerization (Docker) and orchestration tools (Kubernetes is a plus).
- Familiarity with scripting languages (Python, Bash, etc.).
- Knowledge of infrastructure as code (Terraform, CloudFormation).
- Good understanding of networking, security, and monitoring in cloud environments.

Preferred Qualifications:
- AWS Certified DevOps Engineer or equivalent certification.
- Experience with Git, GitHub/GitLab, and version control best practices.
- Exposure to Agile/Scrum methodologies.
- Strong problem-solving and communication skills.

Posted Date not available

Apply

7.0 - 10.0 years

12 - 22 Lacs

Pune, Chennai

Hybrid

Role & responsibilities

Architectural Design and Implementation:
- Design and deploy scalable, highly available, and fault-tolerant systems on AWS.
- Develop and implement cloud infrastructure solutions using AWS services such as EC2, S3, VPC, RDS, Lambda, etc.
- Utilize Infrastructure as Code (IaC) tools like Terraform, CloudFormation, and the AWS CDK to automate the provisioning and management of AWS resources.

Kubernetes and Containerization:
- Deploy, manage, and scale Kubernetes clusters on AWS (EKS).
- Design container orchestration solutions and manage containerized applications.
- Implement best practices for Kubernetes resource management, networking, and security.

Observability and Monitoring:
- Implement comprehensive monitoring, logging, and alerting solutions using tools like Prometheus, Grafana, the ELK Stack, and AWS CloudWatch.
- Develop strategies for proactive performance monitoring and incident response.

DevOps and CI/CD:
- Build and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, AWS CodePipeline, etc.
- Collaborate with development teams to ensure smooth integration and deployment of applications.
- Implement automation scripts and tools to streamline operations and improve efficiency.

Security and Compliance:
- Ensure cloud infrastructure security by implementing best practices for IAM, network security, and data protection.
- Conduct regular security assessments and audits to maintain compliance with industry standards and regulations.

Collaboration and Leadership:
- Work closely with cross-functional teams, including developers, system administrators, and product managers.
- Provide technical guidance and mentorship to junior team members.
- Stay abreast of emerging technologies and industry trends, making recommendations for adoption as appropriate.

Preferred candidate profile

Mandatory skills:
1. AWS expertise: expertise in AWS services and architecture; proficiency with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, and the AWS CDK; strong knowledge of Kubernetes, Docker, and container management; experience with monitoring and observability tools like Prometheus, Grafana, the ELK Stack, and AWS CloudWatch; solid understanding of CI/CD processes and tools; familiarity with security best practices in cloud environments.
2. Security architecture: proven experience in designing and implementing secure cloud architectures; ability to assess and enhance the security posture of existing AWS infrastructure.
3. Identity and Access Management (IAM): strong understanding of IAM principles and hands-on experience implementing IAM policies and procedures; proficient in managing user access, roles, and permissions in AWS environments (a minimal policy-creation sketch follows this list).
4. Network security: expertise in configuring and managing AWS Virtual Private Clouds (VPCs), security groups, and network ACLs; experience designing and implementing network security controls for AWS.
5. Data security: knowledge of encryption mechanisms for data in transit and at rest in AWS; experience defining and enforcing data classification and handling policies.
6. Scripting and automation: proficient in scripting languages such as Python or shell scripting for automating security tasks; experience with Infrastructure as Code (IaC) tools for automating security configurations.
7. Compliance and best practices: strong understanding of cloud security best practices, industry standards, and compliance frameworks; knowledge of regulatory requirements related to cloud security.
8. Communication and collaboration: excellent verbal and written communication skills; ability to collaborate effectively with cross-functional teams, including developers and operations teams.
9. Continuous learning: commitment to staying current with emerging trends, technologies, and best practices in cloud security.

Desired skills:
- Ability to learn quickly, perform R&D, build POCs, and propose end-to-end solutions.
- Exceptionally good communication and interpersonal skills.
- Experience with Agile/Scrum-based project execution.
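A minimal sketch of the IAM work in point 3 above: creating a least-privilege policy that grants read-only access to a single S3 prefix via boto3. The policy name, bucket, and prefix are hypothetical:

```python
import json
import boto3

iam = boto3.client("iam")

# Least-privilege document: read objects under one prefix only.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-data-bucket/reports/*",
    }],
}

iam.create_policy(
    PolicyName="reports-read-only",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_doc),
    Description="Read-only access to the reports/ prefix",
)
```

The created policy would then be attached to a role or group rather than to individual users, keeping access auditable.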

Posted Date not available

Apply