3.0 - 5.0 years
6 - 9 Lacs
Gurugram
Work from Office
Job Requirements: Bachelor's/Master's in Computer Science or equivalent. College is important, but your passion for computer science matters most. 3-5 years of industry experience solving complex problems from scratch. Design, develop, and maintain scalable backend systems using Node.js, Express.js, and TypeScript, with database design in MySQL. Implement and manage CI/CD pipelines for efficient testing, building, and deployment. Use Git for version control to ensure clean, collaborative, and well-documented code. Leverage AWS services (EC2, S3, RDS, CodeDeploy, CloudFront, Secrets Manager, IAM, etc.) to build secure cloud-based solutions. Work with logging and monitoring to ensure system health and optimize performance. Improve the scalability and performance of backend systems and infrastructure. Bonus Skills - Frontend Knowledge: familiarity with React.js, Next.js, and Tailwind CSS is a plus. Performance Optimizations: understanding of performance optimization strategies on both backend and frontend. Note: Probo is a technology-first company and we want to practice and grow a culture of technical excellence. In this endeavor, we are also building an Internal Tooling team that builds for Probo to optimize work efficiency, transparency, and seamless collaboration. You will be the rockstar developer whose tools are used by everyone at Probo.
Posted 1 month ago
4.0 - 6.0 years
7 - 9 Lacs
Chennai
Work from Office
You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife. What you'll be doing... You will play a prominent role in supporting middleware products across all business portfolios. You will be involved in engineering activities, joining CMDs to resolve critical blockers, capacity planning, performance fine-tuning, middleware product upgrades, etc. You will focus on developing automated self-healing solutions to make applications more resilient based on root cause analysis. The role requires good problem-solving and automation skills to dig deeper into issues and improve MTTR. Being responsible for the availability and stability of applications. Coordinating with multiple stakeholders for onboarding new applications into the cloud, application migration from on-prem to cloud, and middleware product migration. Performing application performance fine-tuning. Troubleshooting critical issues and performing root cause analysis. Performing middleware upgrades as part of maintaining security standards. Providing technical recommendations to improve application performance. Remediating middleware product vulnerabilities across various applications. Guiding and supporting fellow team members to ensure tasks/activities/projects are tracked and completed on time. Where you'll be working... In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. You'll need to have: Bachelor's degree or four or more years of work experience. Four or more years of relevant experience, demonstrated through work experience. Good experience in middleware technologies, including but not limited to WebLogic, Apache HTTPD, Apache Tomcat, Nginx, etc. Strong end-to-end middleware product management knowledge. Good experience in DevOps tools like Jenkins, Artifactory, and GitLab. Good experience in AWS cloud (EC2, ELB, Auto Scaling, RDS, S3, CloudWatch, IAM, CloudFormation templates, etc.). Good knowledge in automation (shell scripting, Ansible). Good experience in all Linux flavors and Solaris. Even better if you have: A Master's degree.
Posted 1 month ago
9.0 - 14.0 years
15 - 25 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Hybrid
Job Position - Lead AWS Infrastructure DevOps. Experience - 9-13 years. Location - Pune/Mumbai/Bangalore. Notice Period - Only immediate joiners can apply (candidates serving notice period accepted up to 15 June 2025). PAN number is mandatory - we have a portal where we need to upload your profile. Mandatory skills - AWS infrastructure, DataOps, Amazon Redshift and Databricks, AWS Data Services (Glue, RDS, S3, EBS, EFS, Glacier, Lambda, Step Functions, API Gateway, Airflow), AWS services. Interested candidates, please share your CV at rutuja.s@bwbsol.com / 9850368787.
Posted 1 month ago
3.0 - 6.0 years
4 - 9 Lacs
Chennai
Work from Office
**Position Overview:** We are seeking an experienced AWS Cloud Engineer with a robust background in Site Reliability Engineering (SRE). The ideal candidate will have 3 to 6 years of hands-on experience managing and optimizing AWS cloud environments with a strong focus on performance, reliability, scalability, and cost efficiency. **Key Responsibilities:** * Deploy, manage, and maintain AWS infrastructure, including EC2, ECS Fargate, EKS, RDS Aurora, VPC, Glue, Lambda, S3, CloudWatch, CloudTrail, API Gateway (REST), Cognito, Elasticsearch, ElastiCache, and Athena. * Implement and manage Kubernetes (K8s) clusters, ensuring high availability, security, and optimal performance. * Create, optimize, and manage containerized applications using Docker. * Develop and manage CI/CD pipelines using AWS native services and YAML configurations. * Proactively identify cost-saving opportunities and apply AWS cost optimization techniques. * Set up secure access and permissions using IAM roles and policies. * Install, configure, and maintain application environments including: * Python-based frameworks: Django, Flask, FastAPI * PHP frameworks: CodeIgniter 4 (CI4), Laravel * Node.js applications * Install and integrate AWS SDKs into application environments for seamless service interaction. * Automate infrastructure provisioning, monitoring, and remediation using scripting and Infrastructure as Code (IaC). * Monitor, log, and alert on infrastructure and application performance using CloudWatch and other observability tools. * Manage and configure SSL certificates with ACM and load balancing using ELB. * Conduct advanced troubleshooting and root-cause analysis to ensure system stability and resilience. **Technical Skills:** * Strong experience with AWS services: EC2, ECS, EKS, Lambda, RDS Aurora, S3, VPC, Glue, API Gateway, Cognito, IAM, CloudWatch, CloudTrail, Athena, ACM, ELB, ElastiCache, and Elasticsearch. * Proficiency in container orchestration and microservices using Docker and Kubernetes. * Competence in scripting (Shell/Bash), configuration with YAML, and automation tools. * Deep understanding of SRE best practices, SLAs, SLOs, and incident response. * Experience deploying and supporting production-grade applications in Python (Django, Flask, FastAPI), PHP (CI4, Laravel), and Node.js. * Solid grasp of CI/CD workflows using AWS services. * Strong troubleshooting skills and familiarity with logging/monitoring stacks.
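Much of the monitoring and troubleshooting work in a role like this reduces to small scripts against CloudWatch. Below is a minimal sketch, assuming boto3 credentials are configured in the environment and using a hypothetical instance ID; it is illustrative, not part of the posting:

```python
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

def average_cpu(instance_id: str, minutes: int = 30) -> list:
    """Fetch 5-minute average CPU datapoints for one EC2 instance."""
    now = datetime.datetime.now(datetime.timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - datetime.timedelta(minutes=minutes),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    # Datapoints arrive unordered; sort them for readable output.
    return sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])

if __name__ == "__main__":
    for point in average_cpu("i-0123456789abcdef0"):  # hypothetical instance ID
        print(point["Timestamp"], round(point["Average"], 2))
```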
Posted 1 month ago
1.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Minimum Qualifications: - BA/BSc/B.E./BTech degree from a Tier I or II college in Computer Science, Statistics, Mathematics, Economics or related fields - 1 to 4 years of experience in working with data and conducting statistical and/or numerical analysis - Strong understanding of how data can be stored and accessed in different structures - Experience with writing computer programs to solve problems - Strong understanding of data operations such as sub-setting, sorting, merging, aggregating and CRUD operations (see the sketch below) - Ability to write SQL code and familiarity with R/Python, Linux shell commands - Be willing and able to quickly learn about new businesses, database technologies and analysis techniques - Ability to tell a good story and support it with numbers and visuals - Strong oral and written communication. Preferred Qualifications: - Experience working with large datasets - Experience with AWS analytics infrastructure (Redshift, S3, Athena, Boto3) - Experience building analytics applications leveraging R, Python, Tableau, Looker or other tools - Experience in geo-spatial analysis with PostGIS, QGIS
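For the data-operations bullet above, a toy pandas sketch (hypothetical data, not from the posting) showing sub-setting, sorting, merging, and aggregating in one pass:

```python
import pandas as pd

# Hypothetical toy data illustrating the four operations named above.
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "city": ["Bengaluru", "Pune", "Bengaluru", "Chennai"],
    "amount": [250.0, 120.0, 90.0, 410.0],
})
cities = pd.DataFrame({
    "city": ["Bengaluru", "Pune", "Chennai"],
    "region": ["South", "West", "South"],
})

subset = orders[orders["amount"] > 100]                 # sub-setting
ranked = subset.sort_values("amount", ascending=False)  # sorting
merged = ranked.merge(cities, on="city", how="left")    # merging
summary = merged.groupby("region")["amount"].agg(["count", "sum"])  # aggregating
print(summary)
```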
Posted 1 month ago
0.0 - 1.0 years
4 - 8 Lacs
Gurugram
Work from Office
Job Description: Good understanding of the Python / Django / Flask tech stack with exposure to RDBMS. Understanding of OOPs and programming fundamentals. Should be able to write efficient algorithms to solve business problems. Should be flexible enough to cut across programming languages to solve a problem end to end and work with a cross-stack dev team. Should be ready to work on high-availability, complex business systems, with readiness to learn and contribute each day. Experience: 0 to 2 years. Location: Gurgaon. Qualification: BE / BTECH / MCA / MTECH in Computer Science or a related stream. Competencies: Drive for results, very high on aptitude, analytically sharp and eager to learn new technologies. Job Responsibilities: Passionate about programming. Ready to solve real-world challenges with efficient coding using the open-source stack. Ready to work in a challenging environment where technology is no bar. Learn and improvise on the fly, as every day brings new challenges. Who you are: Understand project requirements as provided in the design documents and develop the application modules to meet those requirements. Work with developers and architects to ensure bug-free and timely delivery. Follow coding best practices and guidelines. Support live systems with enhancements, maintenance and/or bug fixes. Conduct unit testing / implement unit test cases. Should be passionate about work and delivering quality results. Strong programming and problem-solving skills. Good understanding of OOPs / Python / Django and/or Flask (a minimal Flask sketch follows below). Knowledge of the AWS serverless stack (Lambda, DynamoDB, SQS, S3) would be a value add. Knowledge of REST/JSON APIs and/or SOAP/XML web services. Experience with GitHub and advanced GitHub features (good to have). *Should be available to join within 30 days from the date of offer
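As a rough illustration of the Flask side of this stack, a minimal REST/JSON sketch with an in-memory dict standing in for the RDBMS layer; all route and field names are hypothetical:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
_books = {}  # in-memory store standing in for the RDBMS layer

@app.post("/books")
def create_book():
    # Parse the JSON body and assign a simple incrementing ID.
    payload = request.get_json(force=True)
    book_id = len(_books) + 1
    _books[book_id] = {"id": book_id, "title": payload["title"]}
    return jsonify(_books[book_id]), 201

@app.get("/books/<int:book_id>")
def get_book(book_id: int):
    book = _books.get(book_id)
    if book is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(book)

if __name__ == "__main__":
    app.run(debug=True)
```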
Posted 1 month ago
3.0 - 8.0 years
10 - 18 Lacs
Kolkata, Hyderabad, Pune
Work from Office
JD is below: Design, develop, and deploy generative AI-based applications using AWS Bedrock. Proficiency in prompt engineering and RAG pipelines. Experience in building agentic generative AI applications. Fine-tune and optimize foundation models from AWS Bedrock for various use cases. Integrate generative AI capabilities into enterprise applications and workflows. Collaborate with cross-functional teams, including data scientists, ML engineers, and software developers, to implement AI-powered solutions. Utilize AWS services (S3, Lambda, SageMaker, etc.) to build scalable AI solutions. Develop APIs and interfaces to enable seamless interaction with AI models. Monitor model performance, conduct A/B testing, and enhance AI-driven products. Ensure compliance with AI ethics, governance, and security best practices. Stay up to date with advancements in generative AI and AWS cloud technologies. Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field. 3+ years of experience in AI/ML development, with a focus on generative AI. Hands-on experience with AWS Bedrock and foundation models. Proficiency in Python and ML frameworks. Experience with AWS services such as SageMaker, Lambda, API Gateway, DynamoDB, and S3. Experience with prompt engineering, model fine-tuning, and inference optimization. Familiarity with MLOps practices and CI/CD pipelines for AI deployment. Ability to work with large-scale datasets and optimize AI models for performance. Excellent problem-solving skills and ability to work in an agile environment. Preferred Qualifications: AWS Certified Machine Learning – Specialty or equivalent certification. Experience in LLMOps and model lifecycle management. Knowledge of multi-modal AI models (text, image, video generation). Hands-on experience with other cloud AI platforms (Google Vertex AI, Azure OpenAI). Strong understanding of ethical AI principles and bias mitigation techniques.
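As a hedged illustration of the Bedrock plus RAG-pipeline work described above: a minimal prompt-stuffing sketch against boto3's Converse API. The model ID is just one possible choice, and the retrieve() helper is a hypothetical stand-in for a real vector-store lookup:

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

def retrieve(question: str) -> list[str]:
    """Hypothetical stand-in for a real vector-store retrieval step."""
    return ["Invoices are settled nightly.", "Refunds post within two days."]

def answer(question: str) -> str:
    # Stuff retrieved passages into the prompt (the simplest RAG pattern).
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    resp = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # one possible model
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 256},
    )
    return resp["output"]["message"]["content"][0]["text"]

print(answer("When are invoices settled?"))
```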
Posted 1 month ago
3.0 - 5.0 years
7 - 9 Lacs
Bengaluru
Work from Office
We are looking for a skilled Senior Associate to join our team in Bengaluru, with 3-5 years of experience in AWS infrastructure solutions architecture. The ideal candidate will have a strong background in designing and implementing scalable cloud-based systems. Roles and Responsibility: Design and implement secure, scalable, and highly available cloud-based systems using AWS services such as EC2, S3, EBS, and Lambda. Collaborate with cross-functional teams to identify business requirements and develop technical solutions that meet those needs. Develop and maintain technical documentation for cloud-based systems, including design documents and implementation guides. Troubleshoot and resolve complex technical issues related to cloud-based systems, ensuring minimal downtime and optimal system performance. Participate in code reviews to ensure high-quality code standards and adherence to best practices. Stay up-to-date with the latest trends and technologies in cloud computing, applying this knowledge to improve existing systems and processes. Job Requirements: Strong understanding of cloud computing concepts, including IaaS, PaaS, and SaaS. Proficiency in programming languages such as Python, Java, or C++ is desirable. Experience with containerization using Docker and orchestration using Kubernetes is preferred. Knowledge of Agile methodologies and version control systems like Git is beneficial. Excellent problem-solving skills, with the ability to analyze complex technical issues and develop creative solutions. Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams. At least 3 years of AWS cloud IaaS and PaaS hands-on experience. A seasoned candidate managing client requirements end-to-end (discovery, planning, design, implementation, and transition). Plan, develop, and configure AWS infrastructure from conceptualization through stabilization using various AWS tools, methodologies, and design best practices. Plan for data backup, disaster recovery, data privacy, and security requirements to ensure the solution remains secure and compliant with security standards and frameworks. Monitor, troubleshoot, and resolve infrastructure issues in the AWS cloud. Experience in keeping cloud environments secure and proactively preventing downtime. Good knowledge in determining associated security risks and mitigation techniques. Ability to work both independently and in a multi-disciplinary team environment. Own the design documentation of the solution implemented, i.e., High-Level and Low-Level Design documents. Perform routine infrastructure analysis and evaluation of resource requirements necessary to maintain and/or improve SLAs. Strong problem-solving, customer service, and people skills. Excellent command of the English language (both verbal and written).
Posted 1 month ago
6.0 - 10.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Notice Period: Immediate - 30 days. Mandatory Skills: Big Data, Python, SQL, Spark/PySpark, AWS Cloud. JD and required skills & responsibilities: - Actively participate in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing, roll-out, and support. - Solve complex business problems by utilizing a disciplined development methodology. - Produce scalable, flexible, efficient, and supportable solutions using appropriate technologies. - Analyse the source and target system data. Map the transformations that meet the requirements. - Interact with the client and onsite coordinators during different phases of a project. - Design and implement product features in collaboration with business and technology stakeholders. - Anticipate, identify, and solve issues concerning data management to improve data quality. - Clean, prepare, and optimize data at scale for ingestion and consumption (see the PySpark sketch below). - Support the implementation of new data management projects and the re-structuring of the current data architecture. - Implement automated workflows and routines using workflow scheduling tools. - Understand and use continuous integration, test-driven development, and production deployment frameworks. - Participate in design, code, test plans, and dataset implementation performed by other data engineers in support of maintaining data engineering standards. - Analyze and profile data for the purpose of designing scalable solutions. - Troubleshoot straightforward data issues and perform root cause analysis to proactively resolve product issues. Required Skills: - 5+ years of relevant experience developing data and analytics solutions. - Experience building data lake solutions leveraging one or more of the following: AWS EMR, S3, Hive & PySpark. - Experience with relational SQL. - Experience with scripting languages such as Python. - Experience with source control tools such as GitHub and the related dev process. - Experience with workflow scheduling tools such as Airflow. - In-depth knowledge of AWS Cloud (S3, EMR, Databricks). - Has a passion for data solutions. - Has a strong problem-solving and analytical mindset. - Working experience in the design, development, and testing of data pipelines. - Experience working with Agile teams. - Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders. - Able to quickly pick up new programming languages, technologies, and frameworks. - Bachelor's degree in computer science.
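The "clean, prepare, and optimize data at scale" bullet typically looks something like the following PySpark sketch; bucket paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-rollup").getOrCreate()

# Read raw landing data from a hypothetical S3 lake path.
raw = spark.read.parquet("s3://example-lake/raw/orders/")

daily = (
    raw.where(F.col("status") == "COMPLETE")                # clean
       .withColumn("order_date", F.to_date("created_at"))   # prepare
       .groupBy("order_date")                               # aggregate for consumption
       .agg(F.count("*").alias("orders"),
            F.sum("amount").alias("revenue"))
)

# Partitioned parquet keeps downstream Athena/Hive queries cheap.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-lake/curated/orders_daily/"
)
spark.stop()
```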
Posted 1 month ago
5.0 - 10.0 years
20 - 27 Lacs
Hyderabad
Work from Office
Position: Experienced Data Engineer. We are seeking a skilled and experienced Data Engineer to join our fast-paced and innovative Data Science team. This role involves building and maintaining data pipelines across multiple cloud-based data platforms. Requirements: A minimum of 5 years of total experience, with at least 3-4 years specifically in data engineering on a cloud platform. Key Skills & Experience: Proficiency with AWS services such as Glue, Redshift, S3, Lambda, RDS, Amazon Aurora, DynamoDB, EMR, Athena, Data Pipeline, and Batch jobs. Strong expertise in: SQL and Python; dbt and Snowflake; OpenSearch, Apache NiFi, and Apache Kafka. In-depth knowledge of ETL data patterns and Spark-based ETL pipelines. Advanced skills in infrastructure provisioning using Terraform and other Infrastructure-as-Code (IaC) tools. Hands-on experience with cloud-native delivery models, including PaaS, IaaS, and SaaS. Proficiency in Kubernetes, container orchestration, and CI/CD pipelines. Familiarity with GitHub Actions, GitLab, and other leading DevOps and CI/CD solutions. Experience with orchestration tools such as Apache Airflow and serverless/FaaS services. Exposure to NoSQL databases is a plus.
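One common hop in a stack like this is Kafka into S3 landing files. A minimal sketch using the third-party kafka-python package and boto3; the topic, broker address, bucket name, and batch size are all hypothetical assumptions:

```python
import json
import boto3
from kafka import KafkaConsumer  # third-party: pip install kafka-python

consumer = KafkaConsumer(
    "events",                                # hypothetical topic
    bootstrap_servers="localhost:9092",      # hypothetical broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)
s3 = boto3.client("s3")

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 100:  # small, illustrative batch size
        # Use the last offset in the batch to make the object key unique.
        key = f"landing/events-{message.offset}.json"
        s3.put_object(Bucket="example-lake", Key=key,
                      Body=json.dumps(batch).encode("utf-8"))
        batch = []
```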
Posted 1 month ago
10.0 - 14.0 years
12 - 17 Lacs
Hyderabad
Work from Office
Overview: We are seeking a highly skilled and motivated Associate Manager, AWS Site Reliability Engineer (SRE) to join our team. As an Associate Manager, AWS SRE, you will play a critical role in designing, managing, and optimizing our cloud infrastructure to ensure high availability, reliability, and scalability of our services. You will collaborate with cross-functional teams to implement best practices, automate processes, and drive continuous improvements in our cloud environment. Responsibilities: Design and Implement Cloud Infrastructure: Architect, deploy, and maintain AWS infrastructure using Infrastructure-as-Code (IaC) tools such as Terraform or CloudFormation. Monitor and Optimize Performance: Develop and implement monitoring, alerting, and logging solutions to ensure the performance and reliability of our systems (see the boto3 alarm sketch below). Ensure High Availability: Design and implement strategies for achieving high availability and disaster recovery, including backup and failover mechanisms. Automate Processes: Automate repetitive tasks and processes to improve efficiency and reduce human error using tools such as AWS Lambda, Jenkins, and Ansible. Incident Response: Lead and participate in incident response activities, troubleshoot issues, and perform root cause analysis to prevent future occurrences. Security and Compliance: Implement and maintain security best practices and ensure compliance with industry standards and regulations. Collaborate with Development Teams: Work closely with software development teams to ensure smooth deployment and operation of applications in the cloud environment. Capacity Planning: Perform capacity planning and scalability assessments to ensure our infrastructure can handle growth and increased demand. Continuous Improvement: Drive continuous improvement initiatives by identifying and implementing new tools, technologies, and processes. Qualifications: Experience: 10+ years of experience overall, with a minimum of 5 years in a Site Reliability Engineer (SRE) or DevOps role focused on AWS cloud infrastructure. Technical Skills: Proficiency in AWS services such as EC2, S3, RDS, VPC, Lambda, CloudFormation, and CloudWatch. Automation Tools: Experience with Infrastructure-as-Code (IaC) tools such as Terraform or CloudFormation, and configuration management tools like Ansible or Chef. Scripting: Strong scripting skills in languages such as Python, Bash, or PowerShell. Monitoring and Logging: Experience with monitoring and logging tools such as Prometheus, Grafana, ELK Stack, or CloudWatch. Problem-Solving: Excellent troubleshooting and problem-solving skills, with a proactive and analytical approach. Communication: Strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment. Certifications: AWS certifications such as AWS Certified Solutions Architect, AWS Certified DevOps Engineer, or AWS Certified SysOps Administrator are highly desirable. Education: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
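The monitoring-and-alerting responsibility can be codified as code rather than console clicks. A minimal sketch with boto3; the instance ID and SNS topic ARN are hypothetical placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average EC2 CPU stays above 80% for two 5-minute windows.
cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,               # five-minute windows
    EvaluationPeriods=2,      # two consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-example"],
)
```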
Posted 1 month ago
4.0 - 9.0 years
11 - 21 Lacs
Bengaluru
Hybrid
Java, Spring Boot, AWS, GraphQL, RDBMS (PostgreSQL), REST APIs. AWS services including EC2, EKS, S3, CloudWatch, Lambda, SNS, and SQS; JUnit/Jest; and AI tools like GitHub Copilot. Desirable: Node.js, Hasura frameworks.
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Key Responsibilities: Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis. Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses). Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions. Monitor, troubleshoot, and enhance data workflows for performance and cost optimization. Ensure data quality and consistency by implementing validation and governance practices. Work on data security best practices in compliance with organizational policies and regulations. Automate repetitive data engineering tasks using Python scripts and frameworks. Leverage CI/CD pipelines for deployment of data workflows on AWS. Required Skills and Qualifications: Professional Experience: 5+ years of experience in data engineering or a related field. Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3. AWS Expertise: Hands-on experience with core AWS services for data engineering, such as: AWS Glue for ETL/ELT; S3 for storage; Redshift or Athena for data warehousing and querying; Lambda for serverless compute; Kinesis or SNS/SQS for data streaming; IAM roles for security. Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases. Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus. DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline. Version Control: Proficient with Git-based workflows. Problem Solving: Excellent analytical and debugging skills. Optional Skills: Knowledge of data modeling and data warehouse design principles. Experience with data visualization tools (e.g., Tableau, Power BI). Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes). Exposure to other programming languages like Scala or Java.
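For reference, AWS Glue PySpark jobs share a standard skeleton that pipeline work like this builds on. A minimal sketch; the catalog database, table name, and S3 path are hypothetical:

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve arguments, build contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract from the Glue Data Catalog, transform, load back to S3.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"  # hypothetical names
)
df = dyf.toDF().dropDuplicates(["order_id"]).where("amount > 0")
df.write.mode("overwrite").parquet("s3://example-lake/curated/orders/")

job.commit()
```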
Posted 1 month ago
5.0 - 10.0 years
0 - 3 Lacs
Hyderabad
Hybrid
Dear Candidate, Warm greetings from SAIS IT Services! We are hiring a Java Developer for our client. Interested candidates can share their CV at Jyoti.r@saisservices.com. For any queries, kindly reach me at 8360298749 with the details below. Please fill in the following: Total Exp - CTC - ECTC - Notice Period - Current Location - Comfortable with Work from Office - Job Description: We're Hiring: Java Developer. Location: Hyderabad. Job Title: Java Developer. Experience: 5+ years. Work Mode: Hybrid (3 days WFO). Strong hands-on experience in Java + AWS (Lambda - mandatory, S3, EC2) (3 years in AWS). Regards, Jyoti Rani, 8360298749, Jyoti.r@saisservices.com
Posted 1 month ago
7.0 - 12.0 years
10 - 20 Lacs
Hyderabad
Remote
Job Title: Senior Data Engineer. Location: Remote. Job Type: Full-time. Experience Level: 7+ years. About the Role: We are seeking a highly skilled Senior Data Engineer to join our team in building a modern data platform on AWS. You will play a key role in transitioning from legacy systems to a scalable, cloud-native architecture using technologies like Apache Iceberg, AWS Glue, Redshift, and Atlan for governance. This role requires hands-on experience across both legacy (e.g., Siebel, Talend, Informatica) and modern data stacks. Responsibilities: Design, develop, and optimize data pipelines and ETL/ELT workflows on AWS. Migrate legacy data solutions (Siebel, Talend, Informatica) to modern AWS-native services. Implement and manage a data lake architecture using Apache Iceberg and AWS Glue. Work with Redshift for data warehousing solutions, including performance tuning and modelling. Apply data quality and observability practices using Soda or similar tools. Ensure data governance and metadata management using Atlan (or other tools like Collibra, Alation). Collaborate with data architects, analysts, and business stakeholders to deliver robust data solutions. Build scalable, secure, and high-performing data platforms supporting both batch and real-time use cases. Participate in defining and enforcing data engineering best practices. Required Qualifications: 7+ years of experience in data engineering and data pipeline development. Strong expertise with AWS services, especially Redshift, Glue, S3, and Athena. Proven experience with Apache Iceberg or similar open table formats (like Delta Lake or Hudi). Experience with legacy tools like Siebel, Talend, and Informatica. Knowledge of data governance tools like Atlan, Collibra, or Alation. Experience implementing data quality checks using Soda or equivalent. Strong SQL and Python skills; familiarity with Spark is a plus. Solid understanding of data modeling, data warehousing, and big data architectures. Strong problem-solving skills and the ability to work in an Agile environment.
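Querying the lake through Athena from Python is routine in this kind of platform. A minimal polling sketch with boto3; the database, results location, and table name are hypothetical:

```python
import time
import boto3

athena = boto3.client("athena")

def run_query(sql: str) -> list[dict]:
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "example_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"query ended in state {state}")
    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]

for row in run_query("SELECT order_date, count(*) FROM orders GROUP BY 1 LIMIT 10"):
    print([col.get("VarCharValue") for col in row["Data"]])
```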
Posted 1 month ago
9.0 - 14.0 years
20 - 30 Lacs
Kochi, Bengaluru
Work from Office
Senior Data Engineer AWS (Glue, Data Warehousing, Optimization & Security). Experienced Senior Data Engineer (6+ yrs) with deep expertise in AWS cloud data services, particularly AWS Glue, to design, build, and optimize scalable data solutions. The ideal candidate will drive end-to-end data engineering initiatives, from ingestion to consumption, with a strong focus on data warehousing, performance optimization, self-service enablement, and data security. The candidate needs experience in consulting and troubleshooting exercises to design best-fit solutions. Key Responsibilities: Consult with business and technology stakeholders to understand data requirements, troubleshoot, and advise on best-fit AWS data solutions. Design and implement scalable ETL pipelines using AWS Glue, handling structured and semi-structured data. Architect and manage modern cloud data warehouses (e.g., Amazon Redshift, Snowflake, or equivalent). Optimize data pipelines and queries for performance, cost-efficiency, and scalability. Develop solutions that enable self-service analytics for business and data science teams. Implement data security, governance, and access controls. Collaborate with data scientists, analysts, and business stakeholders to understand data needs. Monitor, troubleshoot, and improve existing data solutions, ensuring high availability and reliability. Required Skills & Experience: 8+ years of experience in data engineering on the AWS platform. Strong hands-on experience with AWS Glue, Lambda, S3, Athena, Redshift, IAM. Proven expertise in data modelling, data warehousing concepts, and SQL optimization. Experience designing self-service data platforms for business users. Solid understanding of data security, encryption, and access management. Proficiency in Python. Familiarity with DevOps practices & CI/CD. Strong problem-solving skills. Exposure to BI tools (e.g., QuickSight, Power BI, Tableau) for self-service enablement. Preferred Qualifications: AWS Certified Data Analytics – Specialty or Solutions Architect – Associate.
Posted 1 month ago
7.0 - 12.0 years
22 - 27 Lacs
Hyderabad
Work from Office
Key Responsibilities: Data Pipeline Development: Design, develop, and optimize robust data pipelines to efficiently collect, process, and store large-scale datasets for AI/ML applications. ETL Processes: Develop and maintain Extract, Transform, and Load (ETL) processes to ensure accurate and timely data delivery for machine learning models. Data Integration: Integrate diverse data sources (structured, unstructured, and semi-structured data) into a unified and scalable data architecture. Data Warehousing & Management: Design and manage data warehouses to store processed and raw data in a highly structured, accessible format for analytics and AI/ML models. AI/ML Model Development: Collaborate with Data Scientists to build, fine-tune, and deploy machine learning models into production environments, with a focus on model optimization, scalability, and operationalization. Automation: Implement automation techniques to support model retraining, monitoring, and reporting. Cloud & Distributed Systems: Work with cloud platforms (AWS, Azure, GCP) and distributed systems to store and process data efficiently, ensuring that AI/ML models are scalable and maintainable in the cloud environment. Data Quality & Governance: Implement data quality checks, monitoring, and governance frameworks to ensure the integrity and security of the data being used for AI/ML models. Collaboration: Work cross-functionally with Data Science, Business Intelligence, and other engineering teams to meet organizational data needs and ensure seamless integration with analytics platforms. Required Skills and Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field. Strong proficiency in Python for AI/ML and data engineering tasks. Experience with AI/ML frameworks such as TensorFlow, PyTorch, Scikit-learn, and Keras. Proficient in SQL and working with relational databases (e.g., MySQL, PostgreSQL, SQL Server). Strong experience with ETL pipelines and data wrangling in large datasets. Familiarity with cloud-based data engineering tools and services (e.g., AWS (S3, Lambda, Redshift), Azure, GCP). Solid understanding of big data technologies like Hadoop, Spark, and Kafka for data processing at scale. Experience in managing and processing both structured and unstructured data. Knowledge of version control systems (e.g., Git) and agile development methodologies. Experience with data containers and orchestration tools such as Docker and Kubernetes. Strong communication skills to collaborate effectively with cross-functional teams. Preferred Skills: Experience with Data Warehouses (e.g., Amazon Redshift, Google BigQuery, Snowflake). Familiarity with CI/CD pipelines for ML model deployment and automation. Familiarity with machine learning model monitoring and performance optimization. Experience with data visualization tools like Tableau, Power BI, or Plotly. Knowledge of deep learning models and frameworks. DevOps or MLOps experience for automating deployment of models. Advanced statistics or math background for improving model performance and accuracy.
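On the model-development side of this role, a compact sketch of a scikit-learn pipeline trained on a toy dataset and persisted for deployment; this is illustrative, not the team's actual stack:

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

pipeline = Pipeline([
    ("scale", StandardScaler()),               # keep preprocessing with the model
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)
print("holdout accuracy:", pipeline.score(X_test, y_test))

# Persist the whole pipeline as the artifact handed to deployment.
joblib.dump(pipeline, "model.joblib")
```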
Posted 1 month ago
5.0 - 8.0 years
3 - 7 Lacs
Hyderabad, Bengaluru
Work from Office
Key Responsibilities: Design, implement, and maintain cloud-based infrastructure on AWS. Manage and monitor AWS services, including EC2, S3, Lambda, RDS, CloudFormation, VPC, etc. Develop automation scripts for deployment, monitoring, and scaling using AWS services. Collaborate with DevOps teams to automate build, test, and deployment pipelines. Ensure the security and compliance of cloud environments using AWS security best practices. Optimize cloud resource usage to reduce costs while maintaining high performance. Troubleshoot issues related to cloud infrastructure and services. Participate in capacity planning and disaster recovery strategies. Monitor application performance and make necessary adjustments to ensure optimal performance. Stay current with new AWS features and tools and evaluate their applicability for the organization. Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. Proven experience as an AWS Engineer or in a similar cloud infrastructure role. In-depth knowledge of AWS services, including EC2, S3, RDS, Lambda, VPC, CloudWatch, etc. Proficiency in scripting languages such as Python, Shell, or Bash. Experience with infrastructure-as-code tools like Terraform or AWS CloudFormation. Strong understanding of networking concepts, cloud security, and best practices. Familiarity with containerization technologies (e.g., Docker, Kubernetes) is a plus. Excellent problem-solving, analytical, and troubleshooting skills. Strong communication skills, both written and verbal. AWS certifications (AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are preferred. Preferred Skills: Experience with serverless architectures and services. Knowledge of CI/CD pipelines and DevOps methodologies. Experience with monitoring and logging tools like CloudWatch, Datadog, or Prometheus. Knowledge in AWS FinOps.
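As one concrete example of the cost-optimization duty above, a read-only boto3 sketch that flags unattached EBS volumes, a common source of silent spend; it only lists, never deletes:

```python
import boto3

ec2 = boto3.client("ec2")

# "available" status means the volume is not attached to any instance.
paginator = ec2.get_paginator("describe_volumes")
unattached = []
for page in paginator.paginate(
    Filters=[{"Name": "status", "Values": ["available"]}]
):
    unattached.extend(page["Volumes"])

for vol in unattached:
    print(vol["VolumeId"], vol["Size"], "GiB", vol["AvailabilityZone"])
print(f"{len(unattached)} unattached volumes found")
```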
Posted 1 month ago
5.0 - 10.0 years
15 - 25 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Hello Candidates, We are hiring for a Java Developer. Please find the job description below. Position - Java Developer. Experience - 5+ years. Location - Mumbai / Bengaluru. Skills - Java, SQL, AWS. Please find below the JD for the position: Proven experience in Java development, with a strong understanding of object-oriented programming principles. Experience with AWS services, including ECS, S3, RDS, ElastiCache and CloudFormation. Experience with microservices architecture and RESTful API design. Strong problem-solving skills and attention to detail. Experience in the financial services industry, particularly in trading or risk management, is a plus. Excellent communication and collaboration skills. Important checkpoints for all requirements: the candidate should have all necessary documents, as there is a very strict background verification. All employment: 1) Documents from all the companies the candidate has worked for to date (offer letters, experience letters & relieving letters). 2) PF, UAN number, Form 16 & Form 26AS - mandatory. 3) Educational documents - marksheets & degree certificates. Kindly revert with your acknowledgement and share your updated CV with the following: Total Experience: Relevant Experience: Current Salary: Expected Salary: Current Company / Last Company: Notice Period: Last Working Date: Reason for Job Change: Current Location: Preferred Location: Have you applied to Mphasis before: YES/NO. Alternate Mail ID: Alternate Phone No: PAN CARD No.: NOTE: Interested candidates can share their resume at shrutia.talentsketchers@gmail.com. Regards, Shruti TS
Posted 1 month ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested. Relevant Experience: 6 - 15 yrs. Location: Pan India. Job Description: We are primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc. Should be very proficient in doing large-scale data operations using Databricks and overall very comfortable using Python. Familiarity with AWS compute, storage and IAM concepts. Experience in working with S3 Data Lake as the storage tier. Any ETL background (Talend, AWS Glue, etc.) is a plus but not required. Cloud warehouse experience (Snowflake, etc.) is a huge plus. Carefully evaluates alternative risks and solutions before taking action. Optimizes the use of all available resources. Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices and procedures of the corporation, department and business unit. Skills: Hands-on experience with Databricks Spark SQL and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc. Experience in shell scripting. Exceptionally strong analytical and problem-solving skills. Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses. Strong experience with relational databases and data access methods, especially SQL. Excellent collaboration and cross-functional leadership skills. Excellent communication skills, both written and verbal. Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment. Ability to leverage data assets to respond to complex questions that require timely answers. Working knowledge of migrating relational and dimensional databases to the AWS Cloud platform. Interested candidates can share their resume at sankarspstaffings@gmail.com with the below inline details. Overall Exp: Relevant Exp: Current CTC: Expected CTC: Notice Period:
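The Databricks Spark SQL work described above often boils down to registering lake data as a view and querying it. A minimal sketch; the dataset path is a hypothetical placeholder, and on Databricks the spark session already exists, so the builder line matters only for local runs:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-example").getOrCreate()

# Register a hypothetical curated dataset as a temporary SQL view.
spark.read.parquet("s3://example-lake/curated/orders/") \
     .createOrReplaceTempView("orders")

top_days = spark.sql("""
    SELECT order_date, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
    ORDER BY revenue DESC
    LIMIT 5
""")
top_days.show()
```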
Posted 1 month ago
5.0 - 10.0 years
25 - 30 Lacs
Pune
Remote
- Passionate about TDD (Test First Development) - Have at least 7 to 10 years of development experience with Python. - Have at least 2 to 3 years of experience with React. - Document key business workflows and software designs. Required Candidate profile - Have built complex applications with AWS Serverless technologies (AppSync, DynamoDB, DynamoDB Streams, Lambda, Cognito, S3, CloudFront, Route 53, Amplify) - Strong knowledge of GraphQL.
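In the test-first spirit of this posting, a minimal pytest sketch in which the test is written against a tiny Lambda-style handler; all names and the payload shape are hypothetical:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler under test."""
    body = json.loads(event["body"])
    total = sum(item["price"] * item["qty"] for item in body["items"])
    return {"statusCode": 200, "body": json.dumps({"total": total})}

def test_handler_totals_line_items():
    # In TDD, this test exists (and fails) before the handler above is written.
    event = {"body": json.dumps({"items": [
        {"price": 10.0, "qty": 2},
        {"price": 5.0, "qty": 1},
    ]})}
    resp = handler(event, context=None)
    assert resp["statusCode"] == 200
    assert json.loads(resp["body"])["total"] == 25.0
```

Run with `pytest` from the file's directory; no AWS account is needed since the handler is exercised directly.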
Posted 1 month ago
8.0 - 10.0 years
1 - 2 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Urgent Hiring: Senior Software Engineer Java AWS Location: Bangalore – Domlur (Work from Office – 4 Days/Week) Type: Contract (6 Months, Extendable) Notice Period: Immediate to 15 Days Open Positions: 2 Required Skills: Java (8/11/17) , Spring Boot , Microservices Architecture Experience with Kafka or Apache Camel Minimum 2 Years of AWS Hands-on Experience with: EC2, ECS, S3, SQS, SNS, Lambda, DynamoDB, CloudFormation Experience: 5+ Years
Posted 1 month ago
5.0 - 8.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Develop, test, and maintain applications using Java and Spring Boot. Design and implement microservices architecture. Work with databases to ensure data integrity and performance. Collaborate with cross-functional teams to define, design, Required Candidate profile Proficiency in Java programming. Experience with Spring Boot framework. Knowledge of microservices architecture. Familiarity with databases (SQL/NoSQL). Basic understanding of Kafka and S3.
Posted 1 month ago
3.0 - 8.0 years
5 - 15 Lacs
Pune
Work from Office
P2/P3 Java Full Stack - Angular, 4-6 yrs, DL, 20 Java web developers. Design, develop, and maintain REST-based microservices using Java. Develop intuitive and responsive user interfaces using modern front-end technologies such as Angular, React, and HTML5. Build and optimize robust back-end services, ensuring seamless integration with databases (SQL). Deploy and manage cloud-native applications on AWS infrastructure. Collaborate with cross-functional teams, including UI/UX designers, DevOps, and product owners, to deliver end-to-end solutions. Ensure the application's performance, scalability, and reliability. Write clean, maintainable, and efficient code while following best practices, including unit testing and code reviews. Troubleshoot, debug, and optimize application code and infrastructure. Stay up to date with emerging technologies and industry trends to drive continuous improvement. Required: We are seeking highly skilled software engineers with expertise in full-stack development. The ideal candidate will have experience building scalable, cloud-native applications and a strong understanding of modern software development practices. Microservice Development: Hands-on experience developing and deploying REST-based microservices using Java frameworks (e.g., Spring and Hibernate). Full-Stack Development: Front-End: Proficiency in Angular, React, and HTML5 for building interactive UIs. Back-End: Expertise in Java for business logic and APIs. Database: Strong understanding of SQL and experience with relational databases. Cloud Experience: Hands-on experience with AWS services (e.g., EC2, S3, Lambda, RDS). Familiarity with cloud-native architecture and deployment practices. Experience with CI/CD tools (Jenkins, GitHub, etc.) and containerization technologies (Docker, Kubernetes). Solid understanding of software development principles, including design patterns, clean code, system design, software architecture and agile methodologies. Experience with Advertising, AdTech, Ad Servers (SSP/DSP), OTT or Media Streaming will be preferred. Work Experience: P2: 3 to 5 yrs, P3: 5 to 8 yrs, P4: 8 to 12 yrs. Job Location: Pan India
Posted 1 month ago