
1084 S3 Jobs - Page 4

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Java Developer specializing in legacy code optimization, modernization, and migration of on-premises systems to AWS, you will play a crucial role in designing scalable, cost-effective, and efficient cloud solutions. Your expertise in AWS services such as ALB, ECS, S3, ElastiCache, IAM, CloudWatch, and S3 Glacier will be essential for designing cloud-native applications. You will be responsible for deploying infrastructure through automation tools like AWS CloudFormation and Terraform, as well as migrating existing on-premises applications to AWS while modernizing legacy systems.

Your proven expertise in Java development, particularly Java 11+, and experience building scalable backend systems will be valuable assets in this role, as will your experience modernizing legacy systems to optimize performance, scalability, and cost-efficiency. Proficiency in CI/CD pipelines using tools like Jenkins will ensure the efficient delivery of high-quality software. Strong database expertise, including experience with relational and NoSQL databases along with skills in database design, optimization, and management, is also required. Nice-to-have skills include experience with AI tools that enhance productivity. You will have the opportunity to work with the latest technologies, including AI, to further enhance your skills and contribute to innovative solutions.

At GlobalLogic, we prioritize a culture of caring, where you will experience an inclusive environment focused on acceptance and belonging. Continuous learning and development opportunities will be available to support your growth and advancement. You will have the chance to work on interesting and meaningful projects that make a real impact for clients worldwide. We believe in the importance of balance and flexibility, offering various work arrangements to help you achieve a harmonious work-life balance. Joining GlobalLogic means becoming part of a high-trust organization that values integrity, where truthfulness, candor, and a safe, reliable, ethical work environment are fundamental to everything we do. GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner known for collaborating with forward-thinking companies to create innovative digital products and experiences. By joining our team, you will have the opportunity to work on cutting-edge solutions that shape the world today.
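
As a rough illustration of the infrastructure-automation side of this role (stack name, template path, and capabilities are invented placeholders, not from the posting), deploying a CloudFormation stack programmatically might look like this:

```python
# Minimal sketch: deploying a CloudFormation stack with boto3, the kind of
# infrastructure automation this role describes. Names are illustrative.
import boto3

def deploy_stack(stack_name: str, template_path: str) -> str:
    cfn = boto3.client("cloudformation")
    with open(template_path) as f:
        template_body = f.read()
    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # needed if the template creates IAM resources
    )
    # Block until the stack finishes creating (the waiter raises on failure)
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
    return cfn.describe_stacks(StackName=stack_name)["Stacks"][0]["StackStatus"]

if __name__ == "__main__":
    print(deploy_stack("legacy-migration-dev", "infra/app-stack.yaml"))
```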

Posted 5 days ago

Apply

10.0 - 17.0 years

20 - 35 Lacs

Hyderabad, Bengaluru

Hybrid

Mandatory: Strong experience with Spring Boot, Microservices, APIs (integration, versioning), AWS (Fargate, Lambda, EC2, SNS, SQS, S3), design patterns, and exception handling.

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain scalable microservices using Spring Boot on the AWS platform.
- Collaborate with cross-functional teams to identify requirements and design solutions that meet business needs.
- Implement API integrations with versioning and integrate services such as SNS, SQS, S3, EC2, Fargate, and Lambda.
- Ensure high availability of applications by implementing exception-handling mechanisms and monitoring systems.
- Participate in code reviews to ensure adherence to coding standards and best practices.

Desired Candidate Profile:
- 10+ years of experience in Java development with expertise in the Spring Boot framework.
- Strong understanding of the AWS ecosystem, including EC2, SNS, SQS, S3, etc.
- Experience with containerization using Docker/Kubernetes or similar technologies (desired).
- Proficiency in design patterns (e.g., microservices) and ability to write clean, modularized code.
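
The posting targets Java/Spring Boot, but as a quick, hedged illustration of the SNS-to-SQS fan-out pattern it mentions, sketched in Python with boto3 for brevity (topic ARN and queue URL are placeholders):

```python
# Illustrative only: publish events to SNS and drain the subscribed SQS queue.
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:order-events"  # placeholder
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/order-worker"  # placeholder

def publish_event(order_id: str) -> None:
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({"orderId": order_id}))

def consume_batch() -> None:
    # Long-poll for up to 10 messages, process, then delete each one
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        envelope = json.loads(msg["Body"])  # SNS wraps the payload; it sits in envelope["Message"]
        print("processing", envelope.get("Message"))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```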

Posted 5 days ago

Apply

6.0 - 10.0 years

20 - 25 Lacs

Hyderabad, Bengaluru

Hybrid

Mandatory: Strong experience with Spring Boot, Microservices, APIs (integration, versioning), AWS (Fargate, Lambda, EC2, SNS, SQS, S3), design patterns, and exception handling.

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain scalable microservices using Spring Boot on the AWS platform.
- Collaborate with cross-functional teams to identify requirements and design solutions that meet business needs.
- Implement API integrations with versioning and integrate services such as SNS, SQS, S3, EC2, Fargate, and Lambda.
- Ensure high availability of applications by implementing exception-handling mechanisms and monitoring systems.
- Participate in code reviews to ensure adherence to coding standards and best practices.

Desired Candidate Profile:
- 6-10 years of experience in Java development with expertise in the Spring Boot framework.
- Strong understanding of the AWS ecosystem, including EC2, SNS, SQS, S3, etc.
- Experience with containerization using Docker/Kubernetes or similar technologies (desired).
- Proficiency in design patterns (e.g., microservices) and ability to write clean, modularized code.

Posted 5 days ago

Apply

1.0 - 4.0 years

3 - 5 Lacs

Bengaluru

Work from Office

Role Overview: As a Junior AWS & Cloud Applications Engineer, you will help design, deploy, and maintain AWS-based infrastructure and SaaS applications. You will support day-to-day operations across cloud platforms, security, cost governance, and vendor coordination, ensuring reliability, scalability, security, and fiscal discipline.

Key Responsibilities

AWS Engineering & Operations
- Assist in the design, development, and deployment of cloud-based applications and infrastructure on AWS.
- Learn and utilize AWS services such as EC2, S3, Lambda, RDS, EKS, and related tooling.
- Troubleshoot and resolve technical issues related to AWS services, applications, and infrastructure.
- Assist in automation using tools such as Terraform and CloudFormation.
- Monitor and optimize cloud resource utilization, performance, and costs.
- Participate in code reviews and adhere to best practices and security standards.
- Create and maintain technical documentation for AWS infrastructure, policies, and processes.

SaaS / Cloud Application Management
- Administer, onboard, and offboard users for enterprise SaaS tools (e.g., collaboration, productivity, security, and IT ops platforms).
- Maintain application configurations, license allocation, role/permission hygiene, and integration health (SSO/IdP, SCIM, webhooks, APIs).
- Track renewals, plan upgrades/downgrades, and coordinate change windows with stakeholders.

Billing, Budgeting & Vendor Coordination
- Track cloud and SaaS spend; maintain budgets and forecasts with guidance from seniors.
- Reconcile usage, licenses, and invoices; raise POs and GRNs as applicable; follow up with Accounts/Finance and service providers on payments and credits.
- Prepare monthly spend summaries and right-sizing recommendations; proactively surface anomalies and savings opportunities (e.g., RI/SP commitments, cleanup).
- Maintain vendor/support portals, contracts, SLAs, and renewal calendars.

Cloud Security & Compliance
- Maintain cloud security baselines for AWS and SaaS apps, including identity, least-privilege access, MFA/SSO, encryption, and secure configurations.
- Support security for email and cloud servers (e.g., SPF/DKIM/DMARC hygiene, anti-phishing, mailbox and server hardening, backup/retention).
- Assist with vulnerability management, patching cycles, and compliance evidence (policies, SOPs, asset inventory).

Endpoint Security, SIEM & Threat Operations
- Operate CrowdStrike (central management) for sensor health, policy assignments, detections, and response actions.
- Triage alerts, investigate suspicious activity, and contribute to detection/prevention rules under supervision.
- Integrate CrowdStrike and other telemetry with a SIEM; tune rules and basic dashboards; escalate incidents per playbooks.
- Maintain runbooks for incident response and participate in tabletop exercises as needed.

Collaboration & Support
- Work with IT Support and application owners to resolve operational incidents and user requests.
- Gather requirements from teams, propose practical solutions, and track delivery to closure with clear communication.

Candidate Profile
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 1-2 years of experience with AWS or other cloud platforms.
- Foundational knowledge of AWS services (compute, storage, networking, security).
- Basic understanding of Linux and/or Windows administration.
- Familiarity with scripting (Python, Bash, or PowerShell).
- Strong analytical, troubleshooting, and documentation skills.
- Curiosity to learn, attention to detail, and a security-first mindset.
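
By way of example only, the cost-governance duties above often start with small audit scripts like the boto3 sketch below; the "owner" tag convention and region are assumptions, not requirements from the posting:

```python
# Flag EC2 instances missing an "owner" tag, a common starting point for
# cleanup and right-sizing reviews.
import boto3

def untagged_instances(region: str = "ap-south-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    flagged = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                if "owner" not in tags:
                    flagged.append(inst["InstanceId"])
    return flagged

if __name__ == "__main__":
    print("Instances missing an owner tag:", untagged_instances())
```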

Posted 5 days ago

Apply

5.0 - 10.0 years

0 - 1 Lacs

Hyderabad

Work from Office

Role & Responsibilities
- Experience in full-stack software development.
- Proficiency in front-end technologies such as JavaScript, TypeScript, and React.
- Strong experience in back-end development using Java.
- Experience working with databases (SQL and NoSQL) such as PostgreSQL, MySQL, or MongoDB.
- Expertise in AWS cloud services, including EC2, S3, Lambda, API Gateway, RDS, and DynamoDB.
- Familiarity with RESTful APIs, GraphQL, and microservices architecture.
- Knowledge of version control systems like Git and development workflows (CI/CD).
- Strong understanding of cloud security, scalability, and performance optimization on AWS.
- Experience with infrastructure as code (IaC) using Terraform or AWS CloudFormation.
- Strong problem-solving skills, attention to detail, and ability to work both independently and collaboratively.

Preferred Candidate Profile
- Experience: 5 to 10 years in full-stack software development, with strong expertise in Java (Spring Boot) and ReactJS.
- Technical Skills: Proficient in Java, Spring Boot, React, JavaScript, and TypeScript; experience with SQL/NoSQL databases like PostgreSQL, MySQL, or MongoDB; hands-on experience with AWS cloud services (EC2, Lambda, S3, RDS, API Gateway); familiarity with RESTful APIs, GraphQL, and microservices architecture; knowledge of CI/CD pipelines, version control (Git), and infrastructure as code (Terraform or CloudFormation).
- Work Location: Candidates currently in Hyderabad or willing to relocate/work in a hybrid model.
- Industry Background: Preferably from IT Services, Software Product, or E-Commerce/Internet companies.
- Education: B.E./B.Tech in Computer Science, IT, or a related field (MCA/M.Tech preferred but not mandatory).
- Soft Skills: Strong analytical skills, collaborative mindset, attention to detail, and ability to work independently.

Posted 5 days ago

Apply

5.0 - 7.0 years

20 - 25 Lacs

Pune

Work from Office

Data Management: Proficiency in data architectures such as data warehouses, data lakes, and data hubs, along with supporting processes like data integration, governance, and metadata management. DevOps practices.

Required Candidate Profile
Data Pipeline Construction: designing and building data pipelines, with 5 years of experience in Python, Apache Spark, DBT (1-2 years), AWS (EMR, Lambda, EKS, EC2, RDS, S3, VPC), and Snowflake.
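
To make the stack concrete, here is a hedged PySpark sketch of a daily S3-to-S3 rollup of the kind such pipelines perform; bucket paths and column names are invented:

```python
# Read raw JSON events from S3, aggregate per day, write partitioned Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-events-rollup").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/events/")  # placeholder path

daily = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .agg(F.count("*").alias("event_count"))
)

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/daily_rollup/"  # placeholder path
)
```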

Posted 5 days ago

Apply

3.0 - 8.0 years

7 - 12 Lacs

Thiruvananthapuram

Work from Office

• Design, deploy, and manage AWS cloud infrastructure (ECS/EKS, Lambda, S3, RDS, API Gateway).
• Implement CI/CD pipelines for ML, Data, and Engineering workflows.
• Enforce security standards (encryption, IAM, RBAC).

Posted 5 days ago

Apply

3.0 - 5.0 years

0 - 0 Lacs

Pune, Jaipur

Work from Office

About the Role
We are looking for a skilled DevOps Engineer with strong expertise in AWS, Terraform, CI/CD, Kubernetes, and GitOps practices. The ideal candidate should be proficient in automating deployments using GitOps tools like ArgoCD and have experience managing infrastructure as code (IaC) for highly available and secure systems.

Key Responsibilities
- Design and manage AWS infrastructure using Terraform, following Infrastructure as Code best practices.
- Implement and maintain GitOps workflows for application deployment and infrastructure changes.
- Configure and manage ArgoCD for continuous delivery to Kubernetes clusters.
- Build and optimize CI/CD pipelines using GitHub Actions (or similar tools).
- Deploy, monitor, and maintain Kubernetes clusters and containerized applications.
- Manage AWS resources including EC2, RDS, S3, VPC networking, and CloudWatch monitoring.
- Implement and manage AWS SSO for secure identity and access management.
- Collaborate with development teams to ensure smooth application releases using GitOps principles.
- Ensure system reliability, security compliance, and cost optimization.

Required Technical Skills
- Cloud Platforms: AWS (EC2, RDS, S3, VPC, CloudWatch)
- Infrastructure as Code: Terraform
- GitOps & CD Tools: ArgoCD (mandatory), Helm
- CI/CD: GitHub Actions, Jenkins, or GitLab CI
- Containers & Orchestration: Kubernetes, Docker
- Identity & Access Management: AWS IAM, AWS SSO
- Networking: VPC, Security Groups, Route 53
- Monitoring & Logging: CloudWatch

Preferred Skills
- Knowledge of Kustomize, FluxCD, or similar GitOps tools.
- Experience with Prometheus and Grafana for observability.
- Familiarity with Athena, Glue, or Redshift for data workflows.
- Multi-cloud exposure (Azure, GCP).

Tools: GitHub, AWS CLI, Terraform, Docker, Kubernetes CLI (kubectl), ArgoCD, Helm, AWS SSO, CloudWatch Logs.
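
ArgoCD applications and Terraform modules are declared in YAML/HCL, but a small post-deployment health check like the sketch below often sits alongside such a GitOps pipeline. This is an assumption-laden example: the namespace is invented and it requires the `kubernetes` Python client plus a valid kubeconfig.

```python
# Report deployments whose ready replica count lags the desired count.
from kubernetes import client, config

def unready_deployments(namespace: str = "default") -> list[str]:
    config.load_kube_config()  # use load_incluster_config() when running inside a pod
    apps = client.AppsV1Api()
    problems = []
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            problems.append(f"{dep.metadata.name}: {ready}/{desired} ready")
    return problems

if __name__ == "__main__":
    for line in unready_deployments("production"):  # namespace is hypothetical
        print(line)
```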

Posted 5 days ago

Apply

4.0 - 8.0 years

10 - 17 Lacs

Navi Mumbai

Work from Office

Hi, we are hiring an AWS Data Engineer for one of our manufacturing-industry clients based in Airoli, Navi Mumbai. Work from office strictly: Monday to Friday, day shift, no hybrid mode.

Location: Airoli, Navi Mumbai

Python, PySpark, and AWS are mandatory (not GCP or Azure), and all three should be recent experience; the candidate should be actively coding in Python. Only BE/BTech candidates from IIT or Tier I/II colleges will be considered; no MCA. Early joiners or candidates serving notice are preferred. Local candidates only, or candidates whose base location is Mumbai and who want to move back to Mumbai; otherwise a strict no for outstation candidates. Please don't apply if you are not based in Mumbai/Navi Mumbai.

About Client
We are the leading partner for sustainable construction, creating value across the built environment, from infrastructure and industry to buildings. We offer high-value end-to-end Building Materials and Building Solutions, from foundations and flooring to roofing and walling, powered by premium brands. More than 45,000 talented employees in 45 attractive markets across Europe, Latin America and Asia, Middle East & Africa are driven by our purpose to build progress for people and the planet, with sustainability and innovation at the core of everything we do.

About The Role
The Data Engineer will play an important role in enabling the business for data-driven operations and decision making in an Agile, product-centric IT environment.

Education / Qualification
- BE / B.Tech from IIT or Tier I/II colleges
- Certification in cloud platforms (AWS)

Experience
- Total experience of 4-8 years.
- Hands-on experience in Python coding is a must.
- Hands-on experience in big data cloud platforms like AWS (Redshift, Glue, Lambda), data lakes, data warehouses, data integration, and data pipelines.
- Experience in SQL and in writing code on the Spark engine using Python/PySpark.
- Experience with data pipeline and workflow management tools (such as Azkaban, Luigi, Airflow, etc.).

Key Personal Attributes
- Business focused, customer and service minded.
- Strong consultative and management skills.
- Good communication and interpersonal skills.

We are an equal opportunity employer and consider all qualified applicants without regard to race, color, religion, gender, sexual orientation, gender identity, age, disability, or any other characteristic protected by law. Please share your resume at saumya@hr-central.in
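
As a hedged illustration of the workflow-management skills listed above, a minimal Airflow 2.x DAG of the extract-transform-load shape might look like this; the task bodies are stubs and the DAG id and schedule are invented:

```python
# Skeleton DAG: extract from S3, transform (e.g., a PySpark job), load to Redshift.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull raw files from S3")

def transform(**_):
    print("run the PySpark job (e.g., via EMR or Glue)")

def load(**_):
    print("COPY curated data into Redshift")

with DAG(
    dag_id="s3_to_redshift_daily",   # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```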

Posted 5 days ago

Apply

6.0 - 8.0 years

15 - 18 Lacs

Noida, Pune, Bengaluru

Hybrid

We're Hiring: Data Engineer (Python + Databricks)

Experience: 5-6 Years
Work Mode: Hybrid, 3 days in office (Bangalore, Pune, Mumbai, Hyderabad, Noida)
Level: Mid to Senior

Tech Stack & Skills:
- Databricks, Python, Spark, SQL
- AWS (S3, Lambda)
- Airflow, DataStage (or similar ETL)
- Healthcare data processing systems
- Batch processing frameworks

We're looking for a dynamic Data Engineer who can work with minimal guidance and deliver scalable solutions. Interested? Share your CV at:

Posted 5 days ago

Apply

9.0 - 12.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Key Responsibilities
- Lead the end-to-end design, development, and deployment of Java-based applications and RESTful APIs.
- Collaborate with product managers and architects to define technical solutions and translate business requirements into scalable software.
- Guide and mentor team members in best coding practices, design patterns, and architectural decisions.
- Drive code reviews and technical discussions, and ensure high code quality and performance standards.
- Troubleshoot critical production issues and implement long-term fixes and improvements.
- Advocate for continuous improvement in tools, processes, and systems across the engineering team.
- Stay up to date with modern technologies and recommend their adoption where appropriate.

Required Skills
- 5+ years of experience in Java backend development with expertise in Spring/Spring Boot and RESTful services.
- Solid grasp of object-oriented programming (OOP), system design, and design patterns.
- Proven experience leading a team of engineers or taking ownership of modules/projects.
- Experience with AWS cloud services (EC2, Lambda, S3, etc.) is a strong advantage.
- Familiarity with Agile/Scrum methodologies and working in cross-functional teams.
- Excellent problem-solving, debugging, and analytical skills.
- Strong communication and leadership skills.

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

The role requires you to lead the collaboration with ML Engineers and DevOps Engineers to formulate AI designs that can be built, tested, and deployed through the Route to Live and into Production using continuous integration/deployment.

In this role, you will be responsible for model development and deployment, including model fine-tuning using open-source libraries like DeepSpeed, Hugging Face Transformers, JAX, PyTorch, and TensorFlow to enhance model performance. You will also deploy and manage Large Language Models (LLMs) on cloud platforms, train and refine LLMs, and scale LLMs up and down, ensuring blue/green deployments and rolling back bad releases.

Your tasks will also involve data management and pipeline operations, such as curating and preparing training data, monitoring data quality, transforming and aggregating data, building vector databases, and making data visible and shareable across teams.

Monitoring and evaluation will be a crucial part of your role: you will track LLM performance, identify errors, optimize models, and create model and data monitoring pipelines with alerts for model drift and malicious user behavior.

Infrastructure and DevOps tasks will include continuous integration and delivery (CI/CD), managing infrastructure for distributed model training using tools like SageMaker, Ray, and Kubernetes, and deploying ML models using containerization technologies like Docker.

Required technical skills include proficiency in programming languages like Python, frameworks like PyTorch and TensorFlow, expertise in cloud platforms like AWS, Azure, or GCP, experience with containerization technologies, and familiarity with LLM-specific technologies such as vector databases, prompt engineering, and fine-tuning techniques.

The position is located at DGS India in Pune, Baner, and falls under the brand Merkle. It is a full-time role with a permanent contract.
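
For flavour, here is a tiny Hugging Face Transformers snippet of the kind this role builds serving and monitoring around; the model name is just an example, and production serving would add batching, streaming, and drift monitoring:

```python
# Load an open model and generate a completion with the high-level pipeline API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # example model only

out = generator("Summarize the deployment runbook:", max_new_tokens=40)
print(out[0]["generated_text"])
```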

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

Greetings from ALIQAN Technologies! We are looking for a skilled .NET Developer with at least 8 years of experience to join one of our client MNCs located in Trivandrum. The ideal candidate should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

As a .NET Developer, you will be responsible for software development, with a strong focus on C# and .NET Core. You should have a minimum of 3 years of experience with Angular (v8+ preferred) and proven expertise in architecting and deploying applications on AWS, utilizing services such as Lambda, ECS, RDS, S3, and API Gateway.

In addition to your technical skills, you should possess a solid understanding of RESTful API design, microservices, and distributed systems. Experience with infrastructure-as-code tools like CloudFormation, CDK, or Terraform is highly desirable. Knowledge of software design patterns, SOLID principles, layered architecture, CI/CD practices, automated testing, and code quality tools is essential for this role.

Preferred qualifications include being an AWS Certified Solutions Architect (Associate or Professional) or holding an equivalent AWS certification. Experience with OpenTelemetry, distributed tracing, application monitoring, containerization (Docker), orchestration (Kubernetes/ECS), and Agile/Scrum methodologies would be advantageous.

We are looking for a team player with excellent communication, leadership, and mentoring skills. This is a full-time permanent position that requires you to work in person at the designated location. If you meet the required qualifications and have a passion for software development, we encourage you to apply for this exciting opportunity.

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

Join us as a Senior Test Lead Java at Barclays, where you will be responsible for supporting the successful delivery of location strategy projects to plan, budget, and agreed quality and governance standards. Spearhead the evolution of our digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences.

To be successful as a Senior Test Lead Java, you should have experience with:
- Driving CI/CD pipeline design and automation using tools such as Jenkins, GitLab, and Git.
- Knowledge of industry-wide Release Management CI/CD tool sets and concepts.
- Designing resilient AWS architectures.
- CloudFormation and Service Catalog.
- Analysing new requirements to finalise the most appropriate technical solution.
- As many AWS services as possible: IAM, S3, EC2 auto scaling, containers, Secrets Manager, VPC/load balancing/networking.

Some other highly valued skills may include:
- Exposure to working with CSO and CTO teams.
- An inquisitive nature; able to work independently to take a problem, break it down, recognize additional questions, and find solutions.
- Scripting and programming: proficiency in scripting languages like Bash, Python, or Go for automation and tooling.
- Performance tuning: ability to optimize container performance and resource utilization.
- Monitoring stack: experience with Prometheus, Grafana, the ELK/EFK stack, or Datadog for advanced monitoring and visualization.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based out of Pune.

Purpose of the role: To design, develop, and improve software, utilizing various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues.

Accountabilities:
- Development and delivery of high-quality software solutions using industry-aligned programming languages, frameworks, and tools, ensuring that code is scalable, maintainable, and optimized for performance.
- Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives.
- Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing.
- Staying informed of industry technology trends and innovations, and actively contributing to the organization's technology communities to foster a culture of technical excellence and growth.
- Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions.
- Implementation of effective unit testing practices to ensure proper code design, readability, and reliability.

Assistant Vice President Expectations:
- Advise and influence decision-making, contribute to policy development, and take responsibility for operational effectiveness; collaborate closely with other functions and business divisions.
- Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives, and determine reward outcomes.
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L, Listen and be authentic; E, Energize and inspire; A, Align across the enterprise; D, Develop others.
- For an individual contributor, lead collaborative assignments and guide team members through structured assignments; identify the need to include other areas of specialization to complete assignments; identify new directions for assignments and/or projects, combining cross-functional methodologies or practices to meet required outcomes.
- Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues.
- Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda.
- Take ownership for managing risk and strengthening controls in relation to the work done.
- Perform work that is closely related to that of other areas, which requires an understanding of how areas coordinate and contribute to the achievement of the objectives of the organization sub-function.
- Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy.
- Engage in complex analysis of data from multiple internal and external sources, such as procedures and practices in other areas, teams, and companies, to solve problems creatively and effectively.
- Communicate complex information; "complex" information could include sensitive information or information that is difficult to communicate because of its content or its audience.
- Influence or convince stakeholders to achieve outcomes.

All colleagues are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, our moral compass, helping us do what we believe is right. They are also expected to demonstrate the Barclays Mindset, to Empower, Challenge, and Drive, the operating manual for how we behave.

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

Choosing Capgemini means choosing a company where you will be empowered to shape your career the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Your role will involve designing and managing AWS infrastructure using services like EC2, S3, VPC, IAM, CloudFormation, and Lambda. You will be responsible for implementing CI/CD pipelines using tools such as Jenkins, GitHub Actions, or AWS CodePipeline, and for automating infrastructure provisioning using Infrastructure-as-Code (IaC) tools like Terraform or AWS CloudFormation. Monitoring and optimizing system performance using CloudWatch, Datadog, or Prometheus will be crucial, as will ensuring cloud security and compliance by applying best practices in IAM, encryption, and network security. You will collaborate with development teams to streamline deployment processes and improve release cycles, troubleshoot and resolve issues in development, test, and production environments, maintain documentation for infrastructure, processes, and configurations, and keep up to date with AWS services and DevOps trends to continuously improve systems.

Your profile should include:
- 3+ years of experience in DevOps or cloud engineering roles
- Strong hands-on experience with AWS services and architecture
- Proficiency in scripting languages (e.g., Python, Bash)
- Experience with containerization tools (Docker, Kubernetes)
- Familiarity with CI/CD tools and version control systems (Git)
- Knowledge of monitoring and logging tools
- Excellent problem-solving and communication skills
- AWS certifications (e.g., AWS Certified DevOps Engineer) are a plus

At Capgemini, we are committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging. You are valued for who you are, and you can bring your original self to work. Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band. You will also get to participate in internal sports events, yoga challenges, or marathons. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges.

Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. With a heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior Developer in AWS Data Engineering, you will be responsible for utilizing your expertise in AWS Glue, Spark, S3, Redshift, Python, and SQL to design, build, and optimize large-scale data pipelines on the AWS platform. This role presents an exciting opportunity to create high-performance analytics and AI-ready data platforms.

Your primary responsibilities will include building and managing data pipelines using Glue, Spark, and S3. You will also be tasked with optimizing Redshift queries, schema, and performance, as well as supporting large-scale data ingestion and transformation processes. Additionally, you will integrate with upstream APIs and external platforms to ensure seamless data flow. Monitoring data reliability, observability, and performance will be a key aspect of your role.

To excel in this position, you should possess strong expertise in AWS Glue, S3, and Redshift, along with proficiency in Python, SQL, and Spark. A solid understanding of data warehouse and data lake concepts is essential, as well as experience with incremental pipelines and orchestration. Knowledge of monitoring and debugging best practices will further enhance your capabilities in this role. This position is based in Pune & Noida.
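
A sketch of the kind of Glue pipeline step described above, with the caveat that database, table, and bucket names are placeholders and the script assumes the AWS Glue runtime (the `awsglue` package is only available there):

```python
# Read a catalog table, drop rows with null keys, write Parquet back to S3.
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

frame = glue_context.create_dynamic_frame.from_catalog(
    database="analytics_db",   # placeholder
    table_name="raw_orders",   # placeholder
)

# Use the Spark DataFrame API for the transform, then convert back
cleaned = DynamicFrame.fromDF(
    frame.toDF().dropna(subset=["order_id"]), glue_context, "cleaned"
)

glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},  # placeholder
    format="parquet",
)
```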

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

The company iQuippo Services Pvt. Ltd. is looking for an experienced AWS freelance consultant with at least 5 years of experience to take charge of migrating AWS workloads from a reseller-managed account to an iQuippo-owned AWS account. This role provides the flexibility to work either onsite or remotely, with a preference for candidates located in the Delhi NCR region.

As an AWS Freelance Consultant, your main responsibilities will include leading the migration of AWS workloads across various projects such as Marketplace, Auction Engine, and Valuation Engine. You will also be tasked with recreating and configuring AWS services like EC2, RDS, S3, Lambda, Fargate, API Gateway, DynamoDB, OpenSearch, and Kafka. Ensuring cloud security with Palo Alto Firewall using a BYOL license, optimizing costs, enhancing architecture, and ensuring zero downtime during the migration process are crucial aspects of this role. Furthermore, collaboration with the Angular (frontend) and Django (backend) teams is essential, along with providing support for VAPT audit, monitoring, backup, and DR strategy.

The ideal candidate should possess proven expertise in AWS migration, hands-on experience with Palo Alto Firewall on AWS, an AWS certification (Solutions Architect Professional preferred), a solid background in DevOps, and proficiency in cloud cost optimization. The ability to work independently and ensure flawless execution is a key requirement for this role.

This freelance position offers a hybrid work model, allowing you to choose between working onsite or remotely. Candidates based in Delhi NCR are preferred. If you meet the requirements and are interested in this opportunity, please share your profile at gaurav.sharma@iquippo.com.

Posted 6 days ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Senior Software Engineer at our company, you will be part of a dynamic and talented engineering team working on cutting-edge IoT and cloud technologies. You will play a crucial role in designing, developing, and maintaining scalable applications and backend services that interface with IoT gateways using MQTT, while leveraging AWS cloud services such as Lambda, Glue, Athena, and S3.

Your key responsibilities will include designing, building, and maintaining scalable backend services using Node.js and Python; developing interactive front-end interfaces using React or Angular; integrating and managing IoT gateways; designing and managing relational databases with PostgreSQL; building and optimizing data pipelines using AWS Glue; querying large datasets using Athena; and managing storage in S3 with Parquet. Additionally, you will create and deploy serverless solutions using AWS Lambda and document system architecture, API contracts, and workflows clearly and effectively.

To excel in this role, you should have 4+ years of hands-on experience in full-stack development, with strong backend development skills in Node.js and Python, solid experience with React.js or Angular, expertise in PostgreSQL and schema design, hands-on experience with the MQTT protocol and IoT gateway integration, a strong understanding of AWS services including Lambda, Glue, Athena, and S3, proficiency in data serialization formats like Parquet, excellent documentation and communication skills, and the ability to work in an Agile/Scrum development process.

Desired qualifications include experience with other cloud platforms such as Azure or GCP, familiarity with containerization using Docker and Kubernetes, experience with CI/CD tools like Jenkins or GitHub Actions, and knowledge of security best practices for IoT and cloud-based systems. Join us for a competitive salary and benefits, as well as opportunities for growth and learning in a fast-paced environment.
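
A minimal sketch of the MQTT side of this role, assuming the paho-mqtt 2.x client; the broker host and topic filter are placeholders:

```python
# Subscribe to gateway telemetry and hand each message to a processing function.
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "gateway.example.local"   # placeholder
TOPIC = "devices/+/telemetry"           # placeholder; "+" matches any device id

def on_connect(client, userdata, flags, reason_code, properties=None):
    client.subscribe(TOPIC)  # (re)subscribe on every successful connect

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    print(f"{msg.topic}: {payload}")  # real code would write to Glue/S3 pipelines

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.loop_forever()
```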

Posted 6 days ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

At eshipjet.ai, we are at the forefront of revolutionizing global logistics by leveraging advanced IoT and RFID-powered supply chain solutions. Our innovative platforms seamlessly integrate edge devices, cloud systems, and data intelligence to drive smarter, faster, and more efficient shipping operations. We are currently seeking a talented AWS IoT & Python Engineer (RFID Integration Specialist) to join our dynamic engineering team and contribute to our growth.

As an AWS IoT & Python Engineer (RFID Integration Specialist) at eshipjet.ai, you will play a crucial role in designing, developing, and optimizing IoT solutions that incorporate RFID technologies, edge devices, and AWS cloud services to create scalable, high-performance systems. This position offers an exciting opportunity to work on cutting-edge IoT applications powering intelligent logistics.

Key Responsibilities:
- Design, develop, and deploy IoT solutions utilizing AWS IoT Core, IoT Greengrass, and related AWS services.
- Integrate RFID systems with cloud platforms to facilitate seamless data ingestion, processing, and analytics.
- Develop microservices and APIs using Python (FastAPI/Flask/Django) to support IoT workflows.
- Implement serverless architectures with AWS Lambda, Step Functions, and EventBridge.
- Work with AWS services such as DynamoDB, S3, API Gateway, Kinesis, and CloudWatch.
- Optimize MQTT/HTTP communications to ensure high reliability and low latency.
- Ensure IoT system security, authentication, and compliance.
- Collaborate with hardware, firmware, and backend teams to deliver end-to-end IoT solutions.
- Troubleshoot RFID hardware/software integration challenges.
- Contribute to CI/CD pipelines and adopt DevOps best practices.

Required Skills & Experience:
- 3-6 years of software engineering experience with a focus on IoT and cloud solutions.
- Expertise in AWS IoT Core, IoT Greengrass, and Lambda.
- Strong Python development skills, including REST APIs and microservices.
- Practical experience with RFID readers, tags, and middleware.
- Knowledge of IoT protocols such as MQTT, WebSockets, and HTTP.
- Hands-on experience with AWS services like API Gateway, DynamoDB, S3, Kinesis, and CloudFormation.
- Understanding of serverless and event-driven architectures.
- Familiarity with Docker, ECS, or EKS is a plus.
- Solid grasp of IoT security best practices.

Preferred Qualifications:
- Experience with edge device integration and real-time data streaming.
- Knowledge of RFID standards (UHF, HF, LF) and industrial IoT.
- Exposure to data pipelines, analytics, or AI/ML on IoT data.
- Familiarity with CI/CD tools like GitHub Actions, Jenkins, and AWS CodePipeline.
- Industry experience in logistics, pharma, or manufacturing IoT projects.

Join us at eshipjet.ai and be a part of shaping the future of intelligent logistics. Collaborate with passionate innovators, leverage cutting-edge technologies, and work on real-world IoT solutions that are making a significant impact on global supply chains.
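
A hedged sketch of the ingestion path this posting outlines; the topic format, table name, and event fields are hypothetical, not taken from eshipjet.ai's actual system:

```python
# Publish an RFID read to AWS IoT Core and persist it to DynamoDB.
import json
import time
import boto3

iot = boto3.client("iot-data")
table = boto3.resource("dynamodb").Table("rfid_reads")  # placeholder table name

def record_read(tag_id: str, reader_id: str) -> None:
    event = {"tagId": tag_id, "readerId": reader_id, "ts": int(time.time() * 1000)}
    # Fan out to any IoT rules/subscribers listening on the reader's topic
    iot.publish(topic=f"rfid/{reader_id}/reads", qos=1, payload=json.dumps(event))
    # Persist the raw read for downstream analytics
    table.put_item(Item=event)

if __name__ == "__main__":
    record_read("E280-1160-6000-0209", "dock-07")  # illustrative values
```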

Posted 6 days ago

Apply

6.0 - 11.0 years

30 - 45 Lacs

Hyderabad, Pune

Work from Office

Hello Candidate,

Greetings from Hungry Bird IT Consulting Services Pvt Ltd. We are hiring a Senior AI Infrastructure Management Engineer for our client.

Job Title: Senior AI Infrastructure Management Engineer
Location: Hyderabad
Job Type: Full-Time
Work Mode: Hybrid
Experience Required: 6+ Years

Job Summary: We are seeking a highly skilled Senior AI Infrastructure Management Engineer with expertise in Azure, AWS, and AI/ML deployment environments. This role demands deep technical knowledge of Linux, DevOps practices, cloud architecture, and AI/ML operations (MLOps/AIOps). The ideal candidate will be responsible for architecting, deploying, and maintaining scalable and secure infrastructure for enterprise AI applications.

Key Responsibilities

Linux System Expertise
- Manage and optimize Linux systems (CentOS, Ubuntu, Red Hat).
- Perform kernel tuning, file system configuration, and network optimization.
- Develop shell scripts for automation and system management.

Cloud Infrastructure (AWS & Azure)
- Design and implement secure, scalable cloud architectures on AWS and Azure.
- Use services like EC2, S3, Lambda, Azure VMs, Blob Storage, and Functions.
- Manage hybrid and multi-cloud environments and ensure seamless integration.

Infrastructure as Code (IaC)
- Automate infrastructure provisioning using Terraform, CloudFormation, or similar tools.
- Maintain infrastructure versioning and ensure traceability of changes.
- Enforce DevSecOps best practices and secure configurations.

AI/ML Infrastructure Management
- Deploy and manage cloud infrastructure for AI/ML workloads, including GPUs.
- Scale resources (GPU/CPU) for training and inference workloads.
- Deploy AI/ML apps using Docker and Kubernetes.
- Ensure high availability, performance, and reliability of AI applications.
- Work on MLOps/AIOps pipelines for model deployment and monitoring.

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6+ years of experience in infrastructure/cloud/DevOps roles.
- Strong experience with AWS, Azure, and Linux systems.
- Experience in AI/ML infrastructure setup and management.
- Proficient in scripting: Python, Bash, PowerShell.
- Hands-on with Kubernetes, Docker, and cloud-native services.
- Experience with DevSecOps principles and CI/CD tools.

Certifications (preferred):
- AWS Solutions Architect Associate / Cloud Practitioner
- Azure DevOps Engineer / Administrator
- Certified Kubernetes Administrator (CKA)

Preferred Skills
- Experience with GPU cluster management for AI workloads.
- Strong knowledge of cloud security and compliance.
- Familiarity with real-time monitoring and logging tools.
- Exposure to modern data stacks and AI lifecycle management.

What We Offer
- Work on high-impact, global AI infrastructure projects.
- Access to continuous learning via online platforms and sponsored certifications.
- Participation in tech talks, hackathons, and R&D initiatives.
- Comprehensive benefits: health insurance, retirement plans, flexible hours.
- A supportive work environment promoting innovation and personal growth.

(Interested candidates can share their CV with us at or reach us at aradhana@hungrybird.in.)

PLEASE MENTION THE RELEVANT POSITION IN THE SUBJECT LINE OF THE EMAIL. Example: KRISHNA, HR MANAGER, 7 YEARS, 20 DAYS NOTICE.

Name:
Position applying for:
Total experience:
Notice period:
Current Salary:
Expected Salary:

Thanks and Regards
Aradhana
+91 9959417171
aradhana@hungrybird.in

Posted 6 days ago

Apply

8.0 - 12.0 years

20 - 35 Lacs

Chennai

Work from Office

Position Overview: We are seeking a highly skilled and proactive Data Engineer with a strong focus on Databricks and PySpark to join our data team. The ideal candidate will be responsible for designing, developing, and optimizing scalable data pipelines that drive high-performance data processing in a cloud environment.

Key Responsibilities:
- Design, develop, and maintain efficient data pipelines using Databricks (PySpark) for large-scale data processing.
- Optimize data workflows to improve performance, scalability, and reliability across the data ecosystem.
- Develop complex SQL queries and manage data transformations to support analytics and reporting needs.
- Take full ownership of tasks and deliverables, ensuring timely and high-quality results.
- Collaborate with cross-functional teams including Data Scientists, Architects, and Business Analysts to understand data requirements.
- Implement best practices in data engineering, focusing on automation, reliability, and performance tuning.
- Manage cloud infrastructure components, preferably AWS services (e.g., S3, Redshift, Lambda).
- Work on data integration with Snowflake or other cloud data warehouses (preferred but not mandatory).

Key Skills Required:
- Strong hands-on experience in Databricks, PySpark, and SQL development.
- Proven experience in building data pipelines in cloud environments.
- Good understanding of AWS cloud services related to data engineering.
- Excellent problem-solving, analytical, and communication skills.
- Experience with Snowflake is a plus.

Posted 6 days ago

Apply

8.0 - 10.0 years

35 - 45 Lacs

Hyderabad, Pune

Hybrid

Notice Period: Immediate
Shift: Regular - Day Shift
Job Type: Contract or Full-Time

Job Description: We are seeking an experienced and proactive Lead Data Engineer to design, develop, and manage scalable data solutions on AWS Cloud. The ideal candidate will play a key role in shaping data strategies, developing cloud-native solutions, and integrating data workflows, with a focus on the Salesforce domain (desirable).

Key Responsibilities:
- Design and implement robust data pipelines on AWS Cloud, ensuring efficient ETL processes.
- Work extensively with data engineering tools and services such as S3, Redshift, Glue, Lambda, and Data Pipeline.
- Develop and maintain data models, storage solutions, and data processing workflows to support analytics and business intelligence needs.
- Collaborate with cross-functional teams, including Data Scientists, Salesforce Developers, and Business Analysts.
- Optimize data solutions for scalability, performance, and security in a cloud environment.
- Mentor and lead a team of data engineers.
- Ensure data quality, governance, and documentation.
- Troubleshoot complex data processing and integration issues.
- Exposure to Salesforce domain integrations is a strong plus.

Key Skills Required:
- Strong experience in data engineering and AWS Cloud solutions.
- Proficiency in AWS services: S3, Redshift, Glue, Lambda, EMR, etc.
- Experience in building scalable ETL pipelines.
- Good knowledge of SQL, Python, Spark, and data modeling.
- Excellent problem-solving and leadership skills.
- Exposure to Salesforce integrations is highly desirable.
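
Pipelines like these are often event-driven. As a hedged sketch (the Glue job name, argument key, and trigger wiring are assumptions), an S3-triggered Lambda entry point might look like this:

```python
# Lambda handler: for each newly created S3 object, start a Glue job run.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # S3 put-notification events carry one or more Records
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="curate-landing-data",  # placeholder job name
            Arguments={"--source_path": f"s3://{bucket}/{key}"},  # hypothetical argument
        )
    return {"status": "started"}
```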

Posted 6 days ago

Apply

5.0 - 10.0 years

10 - 18 Lacs

Noida, Hyderabad, Gurugram

Work from Office

The Team: Cloud Solutions is a horizontal team within Market Intelligence. We provide common services to business lines within Market Intelligence and across other divisions within S&P Global. Specifically, Cloud Solutions provides:
- cloud engineering expertise to fast-track our product teams from on-premise hosting to cloud-native architectures
- support in implementing divisional guardrails to ensure the Market Intelligence cloud estate is secure and cost-efficient
- enablement through upskilling programs to educate our technologists on cloud best practices and corporate technology standards

We use the open-source tool Cloud Custodian to monitor resources in AWS and take corrective action where appropriate. This tool is key to ensuring consistent guardrails across the organisation.

Job Summary: We are looking for an experienced Python developer to help support and further develop the MI Cloud Custodian framework and the internal reporting tied to it.

Key Responsibilities:
- Collaborate with cross-functional teams to understand cloud governance needs and translate them into actionable policies.
- Engage with product teams to define and implement policies aligned with Market Intelligence standards.
- Develop and maintain new features and enhancements within the Python framework to improve its functionality and performance.
- Design and improve internal reporting to deliver actionable insights from policy execution.
- Create and manage GitHub workflows and automation pipelines to improve development and deployment processes.

What We're Looking For

Required Qualifications:
- A bachelor's or master's degree (or equivalent) in Computer Science, Engineering, or a related discipline.
- Strong critical thinking and problem-solving skills.
- 5+ years of experience in Python programming and experience with Python libraries.
- Excellent collaboration and communication skills in a cross-functional environment.
- Hands-on experience with EC2, S3, RDS, Lambda, and other AWS services.

Preferred (Nice to Have):
- Experience with GitHub workflows and Actions.
- Knowledge of infrastructure-as-code (IaC) tools such as Terraform or CloudFormation.
- Understanding of cloud cost optimization and security best practices.
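
Cloud Custodian policies themselves are written in YAML, but the guardrail logic they encode looks roughly like the boto3 sketch below; this is purely illustrative of a policy check, not code from the MI framework:

```python
# Flag S3 buckets that report no default encryption configuration.
import boto3
from botocore.exceptions import ClientError

def unencrypted_buckets() -> list[str]:
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        try:
            s3.get_bucket_encryption(Bucket=bucket["Name"])
        except ClientError as err:
            # This error code means the bucket has no default encryption set
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                flagged.append(bucket["Name"])
    return flagged

if __name__ == "__main__":
    print("Buckets without default encryption:", unencrypted_buckets())
```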

Posted 6 days ago

Apply

3.0 - 8.0 years

5 - 15 Lacs

Bengaluru

Remote

- Optimize AWS cloud infrastructure, choosing the appropriate AWS services for business requirements at minimum cost.
- Troubleshoot problems across a wide array of services and functional areas.

Required Candidate Profile
- Hands-on experience setting up and optimizing AWS.
- Extensive experience with SES and AWS (EC2, AMI).
- Should be able to work independently and execute tasks with precision.

Posted 6 days ago

Apply

1.0 - 2.0 years

4 - 4 Lacs

Mumbai

Work from Office

Develop ERPs and Martech SaaS products. Proficiency in Python (FastAPI) and JavaScript is a must. You will work across React + TypeScript, Node.js/FastAPI, MongoDB/PostgreSQL, Redis, and AWS. Implement sockets, secure APIs, and CI/CD.
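
A minimal sketch of the "sockets + secure APIs" requirement using FastAPI; route names are illustrative, and a real app would add authentication. Run with `uvicorn app:app`:

```python
# One REST endpoint plus a WebSocket echo channel in FastAPI.
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.websocket("/ws/updates")
async def updates(ws: WebSocket):
    await ws.accept()
    try:
        while True:
            msg = await ws.receive_text()
            await ws.send_text(f"ack: {msg}")  # echo; a real app would push live events
    except WebSocketDisconnect:
        pass  # client closed the socket
```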

Posted 6 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
