435 S3 Jobs - Page 13

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 12.0 years

9 - 14 Lacs

Bengaluru

Work from Office

We are looking for a self-motivated individual with an appetite to learn new skills, ready to be part of a fast-paced team delivering cutting-edge solutions that drive new products and features critical to our customers. Our senior software engineers are responsible for designing and developing key systems that provide critical data and algorithms, and for ensuring their quality, reliability, and availability. In this role you will develop new applications and enhance existing ones, working collaboratively with technical leads and architects to design, develop, and test these critical applications.

About the role: Actively participate in the full software delivery life cycle, including analysis, design, implementation, and testing of new projects and features using Hadoop, Spark/PySpark, Scala or Java, Hive, SQL, and other open-source tools and design patterns; Python knowledge is a bonus. Working experience with HUDI, Snowflake, or similar is required. Must-have technologies include Big Data and AWS services such as EMR, S3, Lambda, Elastic, and Step Functions. Actively participate in the development and testing of features for assigned projects with little to no guidance. The position offers opportunities both to work under technical experts and to guide and assist less experienced team members or new joiners over the course of a project. An appetite for learning will be a key attribute for doing well in this role, as the organization is very dynamic and offers tremendous scope across technical landscapes. We consider the use of AI key to excelling in this role; we want dynamic candidates who use AI tools as build partners and share their experiences to energize the organization. Proactively share knowledge and best practices on new and emerging technologies across the development and testing groups. Create, review, and maintain technical documentation of software development and testing artifacts. Work collaboratively in a team-based environment. Identify issues and participate in their resolution with the appropriate technical and business resources. Generate innovative approaches and solutions to technology challenges. Effectively balance and prioritize multiple concurrent projects.

About you: Bachelor's or Master's degree in computer science or a related field. 7+ years of experience in the IT industry; product and platform development preferred. Strong programming skills in Java or Scala. Must-have technologies include Big Data and AWS, with exposure to services like EMR, S3, Lambda, Elastic, and Step Functions. Knowledge of Python is preferred. Experience with Agile methodology, continuous integration, and/or test-driven development. Self-motivated with a strong desire for continual learning. Takes personal responsibility to impact results and deliver on commitments. Effective verbal and written communication skills. Ability to work independently or as part of an agile development team.
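To make the Spark/PySpark-on-EMR work above concrete, here is a minimal sketch of one such pipeline stage: read raw JSON events from S3, filter and date-partition them, and write Parquet back to S3. The bucket names, paths, and columns are hypothetical, not taken from the listing.

```python
# Minimal PySpark job of the kind described above: read raw JSON events
# from S3, derive a date partition, and write back as partitioned Parquet.
# Bucket names, paths, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-events-etl").getOrCreate()

events = spark.read.json("s3://example-raw-bucket/events/")

cleaned = (
    events
    .filter(F.col("event_type").isNotNull())       # drop malformed events
    .withColumn("event_date", F.to_date("event_ts"))  # derive partition key
)

(
    cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/")
)

spark.stop()
```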

Posted 1 month ago

Apply

6.0 - 11.0 years

15 - 20 Lacs

Pune, Chennai, Mumbai (All Areas)

Hybrid

Designation: AWS Java Lead
Experience: 6+ years
Location: Pune, Chennai, or Mumbai
Notice Period: immediate joiners only

JD: Minimum 6+ years of experience in the software industry. Must have experience with Java, data structures, algorithms, Spring Boot, microservices, REST APIs, design patterns, and problem solving, plus knowledge of any cloud. 4+ years of experience with AWS (S3, Lambda, DynamoDB, API Gateway, etc.). Hands-on with engineering excellence, AWS CloudFormation, and the AWS DevOps toolchain and practices. Excellent problem solving and critical thinking. Independent, with strong ownership of business problems and technical solutions. Strong communication and interpersonal skills. Experience with open source (Apache projects, Spring, Maven, etc.). Expert knowledge of the Java language, platform, ecosystem, and underlying concepts and constructs. Knowledge of common design patterns and design principles. Good knowledge of networking and security constructs specific to AWS. An AWS Associate or Professional Solutions Architect certification carries additional weight.

Sincerely, Sonia TS

Posted 1 month ago

Apply

7.0 - 12.0 years

12 - 22 Lacs

Ahmedabad

Work from Office

Expertise in Java, Spring Boot, Spring Framework, and Hibernate/JPA, plus AWS or Azure, including services like EC2, Lambda, and S3. Experienced with CI/CD and an Agile/Scrum practitioner. Skilled in API design, JSON, and integrating external services and APIs.

Posted 1 month ago

Apply

3.0 - 7.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Collaborate on Node.js/Express backend tasks. Optimize application performance and deployment pipelines on AWS EC2/Lambda/S3. Interface with Elastic Cloud / Elasticsearch for search, analytics, and logging solutions.
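As one hedged illustration of the Elasticsearch integration this listing describes (shown in Python for consistency with the other sketches on this page; the role's backend is Node.js, but the client pattern is analogous): index a log document and query recent errors. The endpoint, index, and field names are placeholders.

```python
# Hedged sketch of the Elasticsearch usage described above: index a log
# document, then search for errors. Endpoint, index, and fields are
# placeholders invented for the example.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.index(index="app-logs", document={"level": "error", "msg": "timeout calling S3"})

hits = es.search(index="app-logs", query={"match": {"level": "error"}})
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["msg"])
```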

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Hyderabad

Work from Office

The impact you will have in this role: The Enterprise Application Support role specializes in maintaining and providing technical support for all applications that are beyond the development stage and running in the firm's daily operations. This role works closely with development teams, infrastructure partners, and internal clients to advance and resolve technical support incidents. Three days onsite are mandatory, with two optional remote days (onsite Tuesdays, Wednesdays, and a third day of your choosing). You may be required to work Tuesday through Saturday or Sunday through Thursday on a rotational or permanent basis.

Your primary responsibilities:
- Use ITIL Change, Incident, and Problem management processes.
- Assist on Major Incident calls, engaging the proper parties and helping to determine root cause.
- Troubleshoot and debug system components to resolve technical issues in complex, highly regulated environments comprising ground and cloud applications and services.
- Analyze proposed application designs and provide feedback on potential gaps or recommendations for optimization.
- Apply hands-on experience with monitoring and alerting processes in distributed, cloud, and mainframe environments.
- Apply knowledge of cybersecurity best practices and general security concepts like password rotation, access restriction, and malware detection.
- Take part in Monthly Service Reviews (MSR) with development partners to go over KPI metrics.
- Participate in Disaster Recovery / Loss of Region events (planned and unplanned), executing tasks and collecting evidence.
- Collaborate within the team and across teams to resolve application issues, escalating as needed.
- Support audit requests in a timely fashion, providing needed documentation and evidence.
- Plan and execute certificate creation and renewals as needed.
- Monitor dashboards to catch potential issues early and aid observability.
- Help gather and analyze project requirements and translate them into technical specifications.
- Demonstrate a basic understanding of all lifecycle components (code, test, deploy).
- Communicate openly with team members and others, with good verbal and written communication and interpersonal skills.
- Contribute to a culture where honesty and transparency are expected.
- Provide on-call support with a flexible work arrangement.

NOTE: The primary responsibilities of this role are not limited to the details above.

Qualifications: Minimum of 3 years of relevant production support experience. Bachelor's degree preferred, or equivalent experience.

Talents needed for success:
Technical qualifications (distributed/cloud): hands-on experience in Unix, Linux, Windows, and SQL/PLSQL; familiarity with relational databases (DB2, Oracle, Snowflake); monitoring and data tools experience (Splunk, Dynatrace, ThousandEyes, Grafana, Selenium, IBM Zolda); cloud technologies (AWS services such as S3, EC2, Lambda, SQS, and IAM roles; Azure; OpenShift; RDS Aurora; Postgres); scheduling tool experience (CA AutoSys, Control-M); middleware experience (Solace, Tomcat, Liberty Server, WebSphere, WebLogic, JBoss); messaging queue systems (IBM MQ, Oracle AQ, ActiveMQ, RabbitMQ, Kafka); scripting languages (Bash, Python, Ruby, Shell, Perl, JavaScript); hands-on experience with ETL tools (Informatica Datahub/IDQ, Talend).
Technical qualifications (mainframe): mainframe troubleshooting and support skills (COBOL, JCL, DB2, DB2 stored procedures, CICS, SPUFI, File-AID); mainframe scheduling (job abends, predecessor/successor).

Posted 1 month ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office

In this role, you will play a key part in designing, building, and optimizing scalable data products within the Telecom Analytics domain. You will collaborate with cross-functional teams to implement AI-driven analytics, autonomous operations, and programmable data solutions. This position offers the opportunity to work with cutting-edge Big Data and Cloud technologies, enhance your data engineering expertise, and contribute to advancing Nokia's data-driven telecom strategies. If you are passionate about creating innovative data solutions, mastering cloud and big data platforms, and working in a fast-paced, collaborative environment, this role is for you!

You have:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field, with 8+ years of experience in data engineering focused on Big Data, Cloud, and Telecom Analytics.
- Hands-on expertise in Ab Initio for data cataloguing, metadata management, and lineage.
- Skills in data warehousing, OLAP, and modelling using BigQuery, ClickHouse, and SQL.
- Experience with data persistence technologies like S3, HDFS, and Iceberg.
- Hands-on experience with Python and scripting languages.

It would be nice if you also had:
- Experience with data exploration and visualization using Superset or BI tools.
- Knowledge of ETL processes and streaming tools such as Kafka.
- A background in building data products for the telecom domain and an understanding of AI and machine learning pipeline integration.

Responsibilities:
- Data governance: manage source data within the Metadata Hub and Data Catalog.
- ETL development: develop and execute data processing graphs using Express It and the Co-Operating System.
- ETL optimization: debug and optimize data processing graphs using the Graphical Development Environment (GDE).
- API integration: leverage Ab Initio APIs for metadata and graph artifact management.
- CI/CD implementation: implement and maintain CI/CD pipelines for metadata and graph deployments.
- Team leadership and mentorship: mentor team members and foster best practices in Ab Initio development and deployment.

Posted 1 month ago

Apply

6.0 - 10.0 years

12 - 16 Lacs

Bengaluru

Work from Office

We are looking for a talented and experienced DevOps / Site Reliability Engineer (SRE) with strong proficiency in Python.

Skills & experience:
- Collaborate with development teams to design, develop, and maintain infrastructure for our highly available and scalable applications.
- Automate processes using Python scripting to streamline the deployment and monitoring of our applications.
- Monitor and manage cloud infrastructure on AWS, including EC2, S3, RDS, and Lambda.
- Implement and manage CI/CD pipelines for automated testing and deployment of applications.
- Troubleshoot and resolve production issues, ensuring high availability and performance of our systems.
- Collaborate with cross-functional teams to ensure the security, scalability, and reliability of our infrastructure.
- Develop and maintain documentation for system configurations, processes, and procedures.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6+ years of experience in a DevOps/SRE role, with a strong focus on automation and infrastructure as code.
- Proficiency in Python scripting for automation and infrastructure management.
- Hands-on experience with containerization technologies such as Docker and Kubernetes.
- Strong knowledge of cloud platforms such as AWS, including infrastructure provisioning and management.
- Experience with monitoring and logging tools such as Prometheus, Grafana, and the ELK stack.
- Knowledge of CI/CD tools like Jenkins, GitLab CI, or Travis CI.
- Familiarity with configuration management tools such as Ansible, Puppet, or Chef.
- Strong problem-solving and troubleshooting skills, with the ability to work in a fast-paced, dynamic environment.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
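A small example of the "automate processes using Python scripting" duty above, under assumed conventions: a housekeeping script that reports unattached EBS volumes via boto3. The region is a placeholder.

```python
# Example SRE housekeeping automation: report unattached EBS volumes,
# the kind of small Python script this role calls for.
# The region is a placeholder assumption.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Paginate over volumes whose status is "available" (i.e., not attached).
paginator = ec2.get_paginator("describe_volumes")
pages = paginator.paginate(
    Filters=[{"Name": "status", "Values": ["available"]}]
)

for page in pages:
    for vol in page["Volumes"]:
        print(f'{vol["VolumeId"]}: {vol["Size"]} GiB, created {vol["CreateTime"]}')
```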

Posted 1 month ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Job Title: Cloud Architect AWS
Location: Bangalore
Shift: Rotational
Experience Required: 8-13 years
Type: Full-time

Job Summary: We are looking for a highly skilled and experienced AWS Cloud Architect with a strong foundation in Site Reliability Engineering (SRE) practices to lead cloud transformation initiatives. The ideal candidate will have hands-on experience with AWS infrastructure, DevOps automation, security governance, cost optimization, and infrastructure as code. You will work on high-impact projects in a cloud-first environment, collaborating with cross-functional teams to ensure scalable, secure, and reliable infrastructure.

Key responsibilities:
- Cloud architecture & deployment: design, build, and optimize cloud-native architectures on AWS; lead the migration of on-premises workloads to AWS using best practices; define and enforce cloud governance, tagging policies, and account management standards.
- Security, IAM, and compliance: implement AWS IAM, PIM, and PAM, and manage VPC security groups, NACLs, and encryption policies; conduct cloud security assessments and enforce SOC 2, ISO 27001, or HIPAA compliance; work with AWS Config, AWS CloudTrail, AWS GuardDuty, and Security Hub.
- Infrastructure automation & DevOps: build and maintain CI/CD pipelines using Jenkins, Git, Terraform, CloudFormation, and Ansible; manage containerized workloads using Docker, Kubernetes, and orchestration tools; implement Infrastructure as Code (IaC) and configuration management for consistent deployments.
- Cost optimization & performance tuning: use AWS Cost Explorer, Budgets, and Trusted Advisor to monitor and reduce costs; optimize workloads through auto-scaling, Spot Instances, Savings Plans, and rightsizing; regularly audit and report on cloud spend and performance KPIs (see the sketch after this listing).
- Monitoring & reliability: set up logging and monitoring using CloudWatch, Prometheus, Nagios, or Datadog; define and maintain SLA/SLO/SLI metrics, runbooks, and incident response procedures; implement blue/green deployments, rollback strategies, and chaos engineering principles.
- Documentation & collaboration: maintain architecture diagrams using Lucidchart, draw.io, or Visio; document SOPs for cloud operations, deployments, and recovery scenarios; collaborate with engineering, security, product, and QA teams.

Skills and experience required:
- Cloud platform: AWS (EC2, S3, RDS, CloudFront, Lambda, VPC, CloudFormation)
- DevOps tools: Terraform, Ansible, Jenkins, GitHub Actions, Docker, Kubernetes
- Security & IAM: AWS IAM, PIM/PAM, encryption, CloudTrail, GuardDuty, Security Hub
- Scripting: Python, Bash, shell scripting
- Monitoring & logging: CloudWatch, ELK stack, Datadog, Nagios
- Networking: VPC, subnetting, route tables, NAT Gateway, VPNs
- Certifications (preferred): AWS Certified Solutions Architect Professional; AWS Certified DevOps Engineer Professional

Preferred qualifications: experience in hybrid environments (AWS + on-prem); working knowledge of Azure or GCP; familiarity with microservices architecture; hands-on CI/CD using GitOps or DevOps pipelines; background in ERP, retail, or high-availability enterprise SaaS platforms.

Soft skills: excellent communication and stakeholder management; strong analytical and problem-solving mindset; ability to work in 24x7 environments and rotational shifts; self-motivated, team-oriented approach.
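As a hedged sketch of the cost-reporting side of this role, the snippet below pulls one month's per-service spend through the AWS Cost Explorer API via boto3. The date range is a placeholder.

```python
# Hypothetical cost-reporting snippet using the AWS Cost Explorer API,
# one way to support the cost-optimization duties listed above.
# The date range is a placeholder.
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-05-01", "End": "2025-06-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print spend per AWS service for the period.
for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{service}: ${float(amount):.2f}")
```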

Posted 1 month ago

Apply

2.0 - 5.0 years

3 - 5 Lacs

Bengaluru

Work from Office

Job Title: Senior QA Engineer & Data Steward, Data Team
Location: Bangalore
Experience: 2-4 years
Education: B.E./B.Tech/MCA or equivalent
Employment Type: Full Time, Permanent

Key responsibilities:
- Design and develop test plans, test cases, and detailed QA documentation for data and platform projects.
- Develop and execute automated testing scripts to increase efficiency and minimize manual effort.
- Perform functional, regression, integration, smoke, and user acceptance testing on data-driven systems.
- Conduct in-depth ETL, database, and data warehouse testing (Redshift, S3, MS SQL).
- Participate actively in Agile ceremonies and collaborate with developers, BAs, and business stakeholders.
- Ensure timely and accurate data onboarding in the Enterprise Data Warehouse (EDW) to support dashboards and analytics.
- Identify gaps in testing and drive continuous QA process improvements.
- Execute API and performance testing to ensure system reliability and scalability.
- Act as a data steward to maintain data quality, availability, and governance across key systems.

Key skills required:
- Strong knowledge of advanced SQL and RDBMS concepts.
- Experience with AWS technologies like Redshift and S3, and with data warehousing concepts.
- Proficiency in MS Excel for data validation and reporting.
- Familiarity with QA testing methodologies and the Software Development Life Cycle (SDLC).
- Hands-on experience with test automation tools.
- Experience in client/server testing; web and mobile application testing is a plus.
- Excellent analytical, troubleshooting, and problem-solving skills.
- Strong communication and interpersonal skills; able to coordinate independently or in a team.

Desired candidate profile:
- 2-4 years of relevant experience in QA and/or data engineering roles.
- Proven experience with QA in a data-heavy environment.
- Ability to think creatively and drive testing efficiency and data quality.
- Team player with the ability to work independently when needed.

Interested candidates, kindly share your updated details at amruthaj@titan.co.in

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Job Summary: Synechron is seeking a skilled Node.js Developer to design, develop, and maintain scalable, high-performance applications leveraging Node.js and associated technologies. In this role, you will collaborate with cross-functional teams to deliver innovative solutions that meet business needs while ensuring system reliability, security, and efficiency. Your expertise will contribute to the organization's digital transformation goals, supporting seamless product delivery and operational excellence.

Software requirements. Required skills: hands-on coding experience with Node.js and JavaScript; professional experience with TypeScript for application development; practical knowledge of databases (SQL and NoSQL) relevant to Node.js applications; experience with performance tuning, debugging, and application monitoring; familiarity with CI/CD pipelines and associated tools for automation and deployment. Preferred skills: experience with GraphQL API implementation; knowledge of API gateways such as 3scale or similar; exposure to WebSocket, Pushpin, or message queue systems like Kafka, AWS SQS, or Azure Service Bus; cloud deployment experience, particularly with AWS or Azure.

Overall responsibilities: Develop and sustain scalable, efficient, and reliable backend systems using Node.js and related frameworks. Design, implement, and optimize RESTful APIs and GraphQL endpoints to facilitate client-server communication. Collaborate with project managers, frontend developers, and DevOps to deliver end-to-end solutions. Participate in the full software development lifecycle, including planning, coding, testing, deployment, and maintenance. Troubleshoot, monitor, and enhance application performance and stability in distributed systems. Implement security best practices, including token-based authentication (JWT, OAuth) and Single Sign-On (SSO). Manage message queues and event-driven architectures using tools like Kafka, AWS, or Azure services. Contribute to CI/CD setup and continuous improvement initiatives using tools like Jenkins, Git, and Maven. Stay current on emerging technologies and industry best practices and incorporate them into development activities.

Technical skills (by category): Programming languages (essential): Node.js and JavaScript (required); TypeScript (required for application development). Frameworks & libraries (essential): REST API development with Node.js; GraphQL implementation and integrations; Express.js or comparable web frameworks. Data management & cloud technologies (preferred): databases: SQL (MySQL, PostgreSQL) and NoSQL (MongoDB); cloud platforms: AWS (ECS, S3, Lambda) and Azure (Container Apps, Functions). DevOps & CI/CD (essential): automated build and deployment pipelines with Jenkins, Git, Maven, Harness, or TeamCity; containerization with Docker and Kubernetes (preferred); Infrastructure as Code tools (optional but advantageous). Messaging & queue management (preferred): Kafka, AWS SQS, Azure Service Bus, Pushpin, or similar solutions. Security & authentication (essential): token-based authentication management (JWT, OAuth2); SSO integrations (preferred).

Experience requirements: minimum of 5 years of professional application development experience in Node.js and JavaScript; at least 3 years of direct experience using TypeScript in enterprise settings; proven track record in performance tuning, debugging, and application monitoring; experience in distributed architecture environments, especially with cloud services and container orchestration platforms; demonstrated ability to work within Agile teams and contribute effectively to sprint cycles.

Day-to-day activities: Write, review, and maintain high-quality, scalable backend code using Node.js and TypeScript. Develop and optimize RESTful APIs and GraphQL endpoints according to project specifications. Collaborate daily with cross-disciplinary teams during stand-ups, planning, and review sessions. Troubleshoot application issues, conduct root cause analysis, and apply fixes proactively. Build, test, and deploy containerized applications using Docker and Kubernetes in cloud environments. Monitor system performance, implement improvements, and ensure security and compliance standards. Participate in code reviews, team knowledge sharing, and best-practice implementation. Maintain documentation for system architecture, APIs, deployment processes, and troubleshooting procedures.

Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent industry experience. Certifications in cloud platforms (AWS, Azure), DevOps, or API design are a plus. Strong understanding of scalable, distributed system design and enterprise application architecture.

Professional competencies: strong analytical and problem-solving capabilities with a focus on performance and security; an effective communicator able to articulate complex technical concepts; leadership qualities, with experience guiding development teams and collaborative problem-solving; self-motivated, with a proactive approach to learning and professional growth; adaptable in rapidly evolving technological environments; excellent time management, prioritizing tasks to meet project deadlines.

Posted 1 month ago

Apply

6.0 - 9.0 years

15 - 20 Lacs

Bengaluru

Work from Office

Warm welcome from SP Staffing Services! Reaching out to you regarding a permanent opportunity.

Job description:
Experience: 6-9 years
Location: Bangalore
Skills:
- Cloud Developer: AWS Lambda / S3 (for storage)
- Backend: Node.js
- Orchestration tools: Step Functions or Apache Airflow, Events
- Database: SQL
- Scripting: Python (good if the resource has it)

Interested candidates can share a resume with sangeetha.spstaffing@gmail.com, including the following details inline:
Full name as per PAN:
Mobile no.:
Alt no. / WhatsApp no.:
Total experience:
Relevant experience in AWS/Lambda:
Relevant experience in Python:
Relevant experience in Node.js:
Current CTC:
Expected CTC:
Notice period (official):
Notice period (negotiable) / reason:
Date of birth:
PAN number:
Reason for job change:
Offer in pipeline (current status):
Availability for a virtual interview on weekdays between 10 AM and 4 PM (please mention a time):
Current residential location:
Preferred job location:
Is your educational percentage in 10th, 12th, and UG all above 50%?
Do you have any gaps in your education or career? If so, please mention the duration in months/years:

Posted 1 month ago

Apply

3.0 - 5.0 years

6 - 9 Lacs

Gurugram

Work from Office

Job requirements: Bachelor's/Master's in Computer Science or equivalent. College is important, but your passion for computer science is most important. 3-5 years of experience in the industry solving complex problems from scratch. Design, develop, and maintain scalable backend systems using Node.js, Express.js, and TypeScript, with database design in MySQL. Implement and manage CI/CD pipelines for efficient testing, building, and deployment. Use Git for version control to ensure clean, collaborative, and well-documented code. Leverage AWS services (EC2, S3, RDS, CodeDeploy, CloudFront, Secrets Manager, IAM, etc.) to build secure cloud-based solutions. Work with logging and monitoring to ensure system health and optimize performance. Improve the scalability and performance of backend systems and infrastructure.

Bonus skills: Frontend knowledge: familiarity with React.js, Next.js, and Tailwind CSS is a plus. Performance optimization: understanding of performance optimization strategies on both backend and frontend.

Note: Probo is a technology-first company, and we want to practice and grow a culture of technical excellence. In this endeavor, we are also building an Internal Tooling Team that builds for Probo to optimize work efficiency, transparency, and seamless collaboration. You will be the rockstar developer whose tools are used by everyone at Probo.

Posted 1 month ago

Apply

4.0 - 6.0 years

7 - 9 Lacs

Chennai

Work from Office

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work, and play by connecting them to what brings them joy. We do what we love, driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere, always. Want in? Join the #VTeamLife.

What you'll be doing: You will play a prominent role in supporting middleware products across all business portfolios. You will be involved in engineering activities, joining CMDs to resolve critical blockers, capacity planning, performance fine-tuning, middleware product upgrades, and more. You will focus on developing automated self-healing solutions to make applications more resilient based on root cause analysis. The role requires good problem-solving and automation skills to dig deeper into issues and improve MTTR. Responsibilities include: being responsible for the availability and stability of applications; coordinating with multiple stakeholders to onboard new applications into the cloud, migrate applications from on-prem to cloud, and migrate middleware products; performing application performance fine-tuning; troubleshooting critical issues and performing root cause analysis; performing middleware upgrades as part of maintaining security standards; providing technical recommendations to improve application performance; remediating middleware product vulnerabilities across various applications; and guiding and supporting fellow team members to ensure tasks, activities, and projects are tracked and completed on time.

Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

You'll need to have: a Bachelor's degree or four or more years of work experience; four or more years of relevant experience, demonstrated through work experience; good experience in middleware technologies, including but not limited to WebLogic, Apache HTTPD, Apache Tomcat, and Nginx; strong end-to-end middleware product management knowledge; good experience with DevOps tools like Jenkins, Artifactory, and GitLab; good experience with AWS cloud (EC2, ELB, Auto Scaling, RDS, S3, CloudWatch, IAM, CloudFormation templates, etc.); good knowledge of automation (shell scripting, Ansible); and good experience with all Linux flavors and Solaris.

Even better if you have: a Master's degree.

Posted 1 month ago

Apply

9.0 - 14.0 years

15 - 25 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Job Position: Lead AWS Infrastructure DevOps
Experience: 9-13 years
Location: Pune/Mumbai/Bangalore
Notice Period: only immediate joiners can apply (candidates serving notice period accepted up to 15 June 2025)
PAN number is mandatory; we need it to upload your profile to our portal.
Mandatory skills: AWS infrastructure; DataOps; Amazon Redshift and Databricks; AWS data services: Glue, RDS, S3, EBS, EFS, Glacier, Lambda, Step Functions, API Gateway, Airflow; AWS services.

Interested candidates, please share your CV at rutuja.s@bwbsol.com / 9850368787.

Posted 1 month ago

Apply

3.0 - 6.0 years

4 - 9 Lacs

Chennai

Work from Office

**Position Overview:** We are seeking an experienced AWS Cloud Engineer with a robust background in Site Reliability Engineering (SRE). The ideal candidate will have 3 to 6 years of hands-on experience managing and optimizing AWS cloud environments, with a strong focus on performance, reliability, scalability, and cost efficiency.

**Key Responsibilities:**
* Deploy, manage, and maintain AWS infrastructure, including EC2, ECS Fargate, EKS, RDS Aurora, VPC, Glue, Lambda, S3, CloudWatch, CloudTrail, API Gateway (REST), Cognito, Elasticsearch, ElastiCache, and Athena.
* Implement and manage Kubernetes (K8s) clusters, ensuring high availability, security, and optimal performance.
* Create, optimize, and manage containerized applications using Docker.
* Develop and manage CI/CD pipelines using AWS native services and YAML configurations.
* Proactively identify cost-saving opportunities and apply AWS cost optimization techniques.
* Set up secure access and permissions using IAM roles and policies.
* Install, configure, and maintain application environments, including Python-based frameworks (Django, Flask, FastAPI), PHP frameworks (CodeIgniter 4, Laravel), and Node.js applications.
* Install and integrate AWS SDKs into application environments for seamless service interaction.
* Automate infrastructure provisioning, monitoring, and remediation using scripting and Infrastructure as Code (IaC).
* Monitor, log, and alert on infrastructure and application performance using CloudWatch and other observability tools.
* Manage and configure SSL certificates with ACM and load balancing with ELB.
* Conduct advanced troubleshooting and root-cause analysis to ensure system stability and resilience.

**Technical Skills:**
* Strong experience with AWS services: EC2, ECS, EKS, Lambda, RDS Aurora, S3, VPC, Glue, API Gateway, Cognito, IAM, CloudWatch, CloudTrail, Athena, ACM, ELB, ElastiCache, and Elasticsearch.
* Proficiency in container orchestration and microservices using Docker and Kubernetes.
* Competence in scripting (Shell/Bash), configuration with YAML, and automation tools.
* Deep understanding of SRE best practices, SLAs, SLOs, and incident response.
* Experience deploying and supporting production-grade applications in Python (Django, Flask, FastAPI), PHP (CI4, Laravel), and Node.js.
* Solid grasp of CI/CD workflows using AWS services.
* Strong troubleshooting skills and familiarity with logging/monitoring stacks.

Posted 1 month ago

Apply

1.0 - 5.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Minimum qualifications:
- BA/BSc/B.E./B.Tech degree from a Tier I or II college in Computer Science, Statistics, Mathematics, Economics, or related fields
- 1 to 4 years of experience working with data and conducting statistical and/or numerical analysis
- Strong understanding of how data can be stored and accessed in different structures
- Experience writing computer programs to solve problems
- Strong understanding of data operations such as sub-setting, sorting, merging, aggregating, and CRUD operations
- Ability to write SQL code and familiarity with R/Python and Linux shell commands
- Willingness and ability to quickly learn about new businesses, database technologies, and analysis techniques
- Ability to tell a good story and support it with numbers and visuals
- Strong oral and written communication

Preferred qualifications:
- Experience working with large datasets
- Experience with AWS analytics infrastructure (Redshift, S3, Athena, Boto3)
- Experience building analytics applications leveraging R, Python, Tableau, Looker, or other tools
- Experience in geo-spatial analysis with PostGIS and QGIS
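To illustrate the Athena-plus-Boto3 stack named above, here is a minimal, hedged sketch of running an ad-hoc query from Python; the database, table, and results bucket are hypothetical.

```python
# Sketch of an ad-hoc Athena query from Python (boto3), matching the
# AWS analytics stack (Athena, S3, Boto3) noted above. Database, table,
# and output bucket are placeholders.
import time

import boto3

athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT city, COUNT(*) AS trips FROM rides GROUP BY city",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes, then fetch the first page of results.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```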

Posted 1 month ago

Apply

0.0 - 1.0 years

4 - 8 Lacs

Gurugram

Work from Office

Job Description: Good understanding of the Python / Django / Flask tech stack, with exposure to RDBMS. Understanding of OOP and programming fundamentals. Should be able to write efficient algorithms to solve business problems. Should be flexible enough to cut across programming languages to solve a problem end to end and work with a cross-stack dev team. Should be ready to work in high-availability, complex business systems, with readiness to learn and contribute each day.

Experience: 0 to 2 years
Location: Gurgaon
Qualification: BE / BTech / MCA / MTech in Computer Science or a related stream
Competencies: drive for results, very high aptitude, analytically sharp, and eager to learn new technologies.

Job responsibilities: Passionate about programming and ready to solve real-world challenges with efficient coding using an open-source stack. Ready to work in a challenging environment where technology is no bar. Learn and improvise on the fly, as every day brings new challenges.

Who you are: You understand project requirements as provided in the design documents and develop application modules to meet them. You work with developers and architects to ensure bug-free and timely delivery. You follow coding best practices and guidelines. You support live systems with enhancements, maintenance, and/or bug fixes. You conduct unit testing and implement unit test cases. You are passionate about your work and delivering quality results, with strong programming and problem-solving skills. Good understanding of OOP / Python / Django and/or Flask. Knowledge of the AWS serverless stack (Lambda, DynamoDB, SQS, S3) would be a value add. Knowledge of REST/JSON APIs and/or SOAP/XML web services. Experience with GitHub and advanced GitHub features (good to have).

*Should be available to join within 30 days from the date of offer.
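As a small, hedged illustration of the AWS serverless stack this listing calls a value add (Lambda, DynamoDB, S3), here is a minimal Lambda handler that stores an order from an API Gateway payload. The table and field names are invented for the example.

```python
# Minimal AWS Lambda handler in the Python serverless stack mentioned
# above (Lambda + DynamoDB). Table and key names are hypothetical.
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-orders")

def lambda_handler(event, context):
    """Store an order from an API Gateway JSON payload."""
    order = json.loads(event.get("body", "{}"))
    table.put_item(Item={"order_id": order["id"], "status": "received"})
    return {
        "statusCode": 200,
        "body": json.dumps({"ok": True, "order_id": order["id"]}),
    }
```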

Posted 1 month ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Kolkata, Hyderabad, Pune

Work from Office

JD is below:
- Design, develop, and deploy generative-AI-based applications using AWS Bedrock.
- Proficiency in prompt engineering and RAG pipelines.
- Experience in building agentic generative AI applications.
- Fine-tune and optimize foundation models from AWS Bedrock for various use cases.
- Integrate generative AI capabilities into enterprise applications and workflows.
- Collaborate with cross-functional teams, including data scientists, ML engineers, and software developers, to implement AI-powered solutions.
- Utilize AWS services (S3, Lambda, SageMaker, etc.) to build scalable AI solutions.
- Develop APIs and interfaces to enable seamless interaction with AI models.
- Monitor model performance, conduct A/B testing, and enhance AI-driven products.
- Ensure compliance with AI ethics, governance, and security best practices.
- Stay up to date with advancements in generative AI and AWS cloud technologies.

Required skills & qualifications:
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field.
- 3+ years of experience in AI/ML development, with a focus on generative AI.
- Hands-on experience with AWS Bedrock and foundation models.
- Proficiency in Python and ML frameworks.
- Experience with AWS services such as SageMaker, Lambda, API Gateway, DynamoDB, and S3.
- Experience with prompt engineering, model fine-tuning, and inference optimization.
- Familiarity with MLOps practices and CI/CD pipelines for AI deployment.
- Ability to work with large-scale datasets and optimize AI models for performance.
- Excellent problem-solving skills and ability to work in an agile environment.

Preferred qualifications:
- AWS Certified Machine Learning – Specialty or an equivalent certification.
- Experience in LLMOps and model lifecycle management.
- Knowledge of multi-modal AI models (text, image, and video generation).
- Hands-on experience with other cloud AI platforms (Google Vertex AI, Azure OpenAI).
- Strong understanding of ethical AI principles and bias mitigation techniques.
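A hedged sketch of the core building block for the applications described above: invoking a foundation model through Amazon Bedrock's runtime API. The model ID shown and the Claude-style request body are assumptions; request schemas vary by model provider.

```python
# Hedged sketch of invoking a foundation model via Amazon Bedrock's
# runtime API. The model ID and the Claude-style request body are
# assumptions; schemas differ across Bedrock model providers.
import json

import boto3

bedrock = boto3.client("bedrock-runtime")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize RAG in two sentences."}],
})

resp = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=body,
)

# The response body is a streaming payload; decode and print the text.
print(json.loads(resp["body"].read())["content"][0]["text"])
```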

Posted 1 month ago

Apply

3.0 - 5.0 years

7 - 9 Lacs

Bengaluru

Work from Office

We are looking for a skilled Senior Associate to join our team in Bengaluru, with 3-5 years of experience in AWS infrastructure solutions architecture. The ideal candidate will have a strong background in designing and implementing scalable cloud-based systems.

Roles and responsibilities:
- Design and implement secure, scalable, and highly available cloud-based systems using AWS services such as EC2, S3, EBS, and Lambda.
- Collaborate with cross-functional teams to identify business requirements and develop technical solutions that meet those needs.
- Develop and maintain technical documentation for cloud-based systems, including design documents and implementation guides.
- Troubleshoot and resolve complex technical issues related to cloud-based systems, ensuring minimal downtime and optimal system performance.
- Participate in code reviews to ensure high-quality code standards and adherence to best practices.
- Stay up to date with the latest trends and technologies in cloud computing, applying this knowledge to improve existing systems and processes.

Job requirements:
- Strong understanding of cloud computing concepts, including IaaS, PaaS, and SaaS.
- Proficiency in programming languages such as Python, Java, or C++ is desirable.
- Experience with containerization using Docker and orchestration using Kubernetes is preferred.
- Knowledge of Agile methodologies and version control systems like Git is beneficial.
- Excellent problem-solving skills, with the ability to analyze complex technical issues and develop creative solutions.
- Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams.
- At least 3 years of hands-on AWS cloud IaaS and PaaS experience.
- A seasoned candidate who manages client requirements end to end (discovery, planning, design, implementation, and transition).
- Plan, develop, and configure AWS infrastructure from conceptualization through stabilization using various AWS tools, methodologies, and design best practices.
- Plan for data backup, disaster recovery, data privacy, and security requirements to ensure the solution remains secure and compliant with security standards and frameworks.
- Monitor, troubleshoot, and resolve infrastructure issues in the AWS cloud.
- Experience keeping cloud environments secure and proactively preventing downtime.
- Good knowledge of determining associated security risks and mitigation techniques.
- Ability to work both independently and in a multi-disciplinary team environment.
- Own the design documentation of implemented solutions, i.e., High-Level and Low-Level Design documents.
- Perform routine infrastructure analysis and evaluate the resource requirements necessary to maintain and/or improve SLAs.
- Strong problem-solving, customer service, and people skills.
- Excellent command of the English language (both verbal and written).

Posted 1 month ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Notice Period: Immediate to 30 days
Mandatory Skills: Big Data, Python, SQL, Spark/PySpark, AWS Cloud

JD and required skills & responsibilities:
- Actively participate in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing, roll-out, and support.
- Solve complex business problems by utilizing a disciplined development methodology.
- Produce scalable, flexible, efficient, and supportable solutions using appropriate technologies.
- Analyse the source and target system data, and map the transformations that meet the requirements.
- Interact with the client and onsite coordinators during different phases of a project.
- Design and implement product features in collaboration with business and technology stakeholders.
- Anticipate, identify, and solve issues concerning data management to improve data quality.
- Clean, prepare, and optimize data at scale for ingestion and consumption.
- Support the implementation of new data management projects and the restructuring of the current data architecture.
- Implement automated workflows and routines using workflow scheduling tools (see the Airflow sketch after this listing).
- Understand and use continuous integration, test-driven development, and production deployment frameworks.
- Participate in design, code, and test-plan reviews and dataset implementation performed by other data engineers in support of maintaining data engineering standards.
- Analyze and profile data for the purpose of designing scalable solutions.
- Troubleshoot straightforward data issues and perform root cause analysis to proactively resolve product issues.

Required skills:
- 5+ years of relevant experience developing data and analytics solutions.
- Experience building data lake solutions leveraging one or more of the following: AWS, EMR, S3, Hive, and PySpark.
- Experience with relational SQL.
- Experience with scripting languages such as Python.
- Experience with source control tools such as GitHub and related development processes.
- Experience with workflow scheduling tools such as Airflow.
- In-depth knowledge of AWS Cloud (S3, EMR, Databricks).
- A passion for data solutions.
- A strong problem-solving and analytical mindset.
- Working experience in the design, development, and testing of data pipelines.
- Experience working with Agile teams.
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders.
- Able to quickly pick up new programming languages, technologies, and frameworks.
- Bachelor's degree in computer science.
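The Airflow sketch referenced above: a minimal daily DAG with an ingest step feeding a transform step. The DAG id, task bodies, and paths are placeholders, assuming Airflow 2.4+.

```python
# Illustrative Airflow DAG for the workflow-scheduling requirement above:
# a daily job that ingests a file from S3 and triggers a transform step.
# DAG id, task logic, and paths are placeholders (Airflow 2.4+ assumed).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("copy raw file from s3://example-raw/ ...")

def transform():
    print("run Spark/Hive transform over the ingested data ...")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    ingest_task >> transform_task  # transform runs only after ingest succeeds
```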

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 27 Lacs

Hyderabad

Work from Office

Position: Experienced Data Engineer. We are seeking a skilled and experienced Data Engineer to join our fast-paced and innovative Data Science team. This role involves building and maintaining data pipelines across multiple cloud-based data platforms.

Requirements: A minimum of 5 years of total experience, with at least 3-4 years specifically in data engineering on a cloud platform.

Key skills & experience:
- Proficiency with AWS services such as Glue, Redshift, S3, Lambda, RDS, Amazon Aurora, DynamoDB, EMR, Athena, Data Pipeline, and Batch jobs.
- Strong expertise in SQL and Python; DBT and Snowflake; OpenSearch, Apache NiFi, and Apache Kafka (a minimal Kafka consumer sketch follows this listing).
- In-depth knowledge of ETL data patterns and Spark-based ETL pipelines.
- Advanced skills in infrastructure provisioning using Terraform and other Infrastructure-as-Code (IaC) tools.
- Hands-on experience with cloud-native delivery models, including PaaS, IaaS, and SaaS.
- Proficiency in Kubernetes, container orchestration, and CI/CD pipelines.
- Familiarity with GitHub Actions, GitLab, and other leading DevOps and CI/CD solutions.
- Experience with orchestration tools such as Apache Airflow and serverless/FaaS services.
- Exposure to NoSQL databases is a plus.
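The Kafka sketch referenced above: a minimal consumer using the kafka-python library. The topic, broker address, consumer group, and message schema are placeholders.

```python
# The listing spans many tools; as one narrow, hedged illustration of the
# Apache Kafka piece, a minimal consumer using kafka-python. Topic name,
# broker address, and message schema are placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "example-events",
    bootstrap_servers="localhost:9092",
    group_id="example-pipeline",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Hand each event to the next pipeline stage (e.g., validate, enrich, load).
    print(f"partition={message.partition} offset={message.offset} event={event}")
```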

Posted 1 month ago

Apply

10.0 - 14.0 years

12 - 17 Lacs

Hyderabad

Work from Office

Overview: We are seeking a highly skilled and motivated Associate Manager, AWS Site Reliability Engineering (SRE), to join our team. As an Associate Manager AWS SRE, you will play a critical role in designing, managing, and optimizing our cloud infrastructure to ensure the high availability, reliability, and scalability of our services. You will collaborate with cross-functional teams to implement best practices, automate processes, and drive continuous improvements in our cloud environment.

Responsibilities:
- Design and implement cloud infrastructure: architect, deploy, and maintain AWS infrastructure using Infrastructure-as-Code (IaC) tools such as Terraform or CloudFormation.
- Monitor and optimize performance: develop and implement monitoring, alerting, and logging solutions to ensure the performance and reliability of our systems (a hedged CloudWatch alarm sketch follows this listing).
- Ensure high availability: design and implement strategies for achieving high availability and disaster recovery, including backup and failover mechanisms.
- Automate processes: automate repetitive tasks and processes to improve efficiency and reduce human error using tools such as AWS Lambda, Jenkins, and Ansible.
- Incident response: lead and participate in incident response activities, troubleshoot issues, and perform root cause analysis to prevent future occurrences.
- Security and compliance: implement and maintain security best practices and ensure compliance with industry standards and regulations.
- Collaborate with development teams: work closely with software development teams to ensure smooth deployment and operation of applications in the cloud environment.
- Capacity planning: perform capacity planning and scalability assessments to ensure our infrastructure can handle growth and increased demand.
- Continuous improvement: drive continuous improvement initiatives by identifying and implementing new tools, technologies, and processes.

Qualifications:
- Experience: 10+ years of experience overall, with a minimum of 5 years in a Site Reliability Engineer (SRE) or DevOps role focused on AWS cloud infrastructure.
- Technical skills: proficiency in AWS services such as EC2, S3, RDS, VPC, Lambda, CloudFormation, and CloudWatch.
- Automation tools: experience with Infrastructure-as-Code (IaC) tools such as Terraform or CloudFormation, and configuration management tools like Ansible or Chef.
- Scripting: strong scripting skills in languages such as Python, Bash, or PowerShell.
- Monitoring and logging: experience with monitoring and logging tools such as Prometheus, Grafana, the ELK stack, or CloudWatch.
- Problem-solving: excellent troubleshooting and problem-solving skills, with a proactive and analytical approach.
- Communication: strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment.
- Certifications: AWS certifications such as AWS Certified Solutions Architect, AWS Certified DevOps Engineer, or AWS Certified SysOps Administrator are highly desirable.
- Education: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
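The CloudWatch sketch referenced above: creating a CPU-utilization alarm for an EC2 instance with boto3, one concrete form of the monitoring-and-alerting responsibility. The instance ID and SNS topic ARN are placeholders.

```python
# One concrete monitoring-and-alerting task from the list above, as a
# hedged boto3 sketch: a CloudWatch CPU alarm on an EC2 instance that
# notifies an SNS topic. Instance ID and topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="example-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,              # evaluate 5-minute averages
    EvaluationPeriods=3,     # alarm after 15 minutes above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:example-alerts"],
)
```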

Posted 1 month ago

Apply

4.0 - 9.0 years

11 - 21 Lacs

Bengaluru

Hybrid

Java, Spring Boot, AWS, GraphQL, RDBMS (PostgreSQL), REST APIs; AWS services including EC2, EKS, S3, CloudWatch, Lambda, SNS, and SQS; JUnit/Jest; and AI tools like GitHub Copilot. Desirable: Node.js and Hasura frameworks.

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru, Mumbai (All Areas)

Work from Office

Key responsibilities:
- Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
- Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses); a small pandas + boto3 sketch follows this listing.
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
- Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
- Ensure data quality and consistency by implementing validation and governance practices.
- Follow data security best practices in compliance with organizational policies and regulations.
- Automate repetitive data engineering tasks using Python scripts and frameworks.
- Leverage CI/CD pipelines for deployment of data workflows on AWS.

Required skills and qualifications:
- Professional experience: 5+ years of experience in data engineering or a related field.
- Programming: strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
- AWS expertise: hands-on experience with core AWS services for data engineering, such as AWS Glue for ETL/ELT, S3 for storage, Redshift or Athena for data warehousing and querying, Lambda for serverless compute, Kinesis or SNS/SQS for data streaming, and IAM roles for security.
- Databases: proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
- Data processing: knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
- DevOps: familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
- Version control: proficient with Git-based workflows.
- Problem solving: excellent analytical and debugging skills.

Optional skills:
- Knowledge of data modeling and data warehouse design principles.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
- Exposure to other programming languages like Scala or Java.
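The pandas + boto3 sketch referenced above: pull a CSV from S3, apply basic validation and typing, and write Parquet back (writing Parquet assumes pyarrow is installed). Buckets, keys, and columns are hypothetical.

```python
# Small ETL sketch in the pandas + boto3 style the listing names: pull a
# CSV from S3, validate/transform it, and write Parquet back. Buckets,
# keys, and columns are hypothetical; to_parquet assumes pyarrow.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

obj = s3.get_object(Bucket="example-raw", Key="sales/2025-05.csv")
df = pd.read_csv(obj["Body"])

# Basic validation and typing.
df = df.dropna(subset=["order_id"])
df["order_date"] = pd.to_datetime(df["order_date"])

buf = io.BytesIO()
df.to_parquet(buf, index=False)
s3.put_object(Bucket="example-curated", Key="sales/2025-05.parquet", Body=buf.getvalue())
```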

Posted 1 month ago

Apply