3.0 - 5.0 years
9 - 13 Lacs
Pune
Work from Office
Degree in Computer Science (or similar); alternatively, well-founded professional experience in the desired field.

Roles & Responsibilities: A Cloud Engineer (DevOps) in AWS is responsible for designing, implementing, and managing AWS-based solutions. This role involves ensuring the scalability, security, and efficiency of AWS infrastructure to support business operations and development activities. Collaborate with cross-functional teams to optimize cloud services and drive innovation.

Tasks:
- Design and implement scalable, secure, and reliable AWS cloud infrastructure
- Manage and optimize AWS resources to ensure cost-efficiency
- Develop and maintain Infrastructure as Code (IaC) scripts
- Monitor system performance and troubleshoot issues
- Implement security best practices and compliance measures
- Collaborate with development teams to support application deployment
- Automate operational tasks using scripting and automation tools
- Conduct regular system audits and generate reports
- Stay updated on the latest AWS features and industry trends
- Provide technical guidance and support to team members

Requirements:
- At least 5 years of experience as an AWS cloud engineer or AWS architect, preferably in the automotive sector
- Business-fluent English (at least C1)
- Very good communication and presentation skills

Required Skill Set:
- Proficiency in AWS services (S3, ECS, Lambda, Glue, Athena, EC2, SageMaker, Batch Processing, Bedrock, API Gateway, Security Hub, AWS Inspector, etc.)
- Strong understanding of cloud architecture and best practices
- Experience with the infrastructure-as-code (IaC) tool AWS CDK, with a programming language like Python or TypeScript
- Knowledge of networking concepts, security protocols, and SonarQube
- Familiarity with CI/CD pipelines and DevOps practices in GitLab
- Ability to troubleshoot and resolve technical issues
- Scripting skills (Python, Bash, etc.)
- Experience with monitoring and logging tools (CloudWatch, CloudTrail)
- Understanding of containerization (Docker, ECS)
- Excellent communication and collaboration skills
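To make the AWS CDK requirement concrete, here is a minimal sketch of a CDK v2 stack in Python of the kind this role would maintain. The stack, resource, and asset names ("DataPlatformStack", "DataBucket", "IngestFn", the "lambda" directory) are illustrative assumptions, not from the posting.

```python
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3, aws_lambda as _lambda


class DataPlatformStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # Encrypted, versioned, non-public bucket: the security best
        # practices the role calls out, expressed as IaC
        bucket = s3.Bucket(
            self, "DataBucket",
            encryption=s3.BucketEncryption.S3_MANAGED,
            versioned=True,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )
        fn = _lambda.Function(
            self, "IngestFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),  # hypothetical handler dir
        )
        bucket.grant_read(fn)  # CDK generates the least-privilege IAM policy


app = cdk.App()
DataPlatformStack(app, "DataPlatformStack")
app.synth()
```

Running `cdk deploy` against an app like this synthesizes CloudFormation and provisions the resources, which is what "develop and maintain IaC scripts" typically amounts to in CDK shops.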
Posted 3 weeks ago
3.0 - 5.0 years
5 - 9 Lacs
Pune
Work from Office
Qualification: Degree in Computer Science (or similar); alternatively, well-founded professional experience in the desired field.

Roles & Responsibilities: As a Senior Data Engineer, you manage and develop the solutions in close alignment with various business and Spoke stakeholders. You are responsible for the implementation of the IT governance guidelines. Collaborate with the Spoke's Data Scientists, Data Analysts, and Business Analysts, when relevant.

Tasks:
- Create and manage data pipeline architecture for data ingestion, pipeline setup, and data curation
- Experience working with and creating cloud data solutions
- Assemble large, complex data sets that meet functional/non-functional business requirements
- Implement the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using PySpark, SQL, and AWS big-data technologies
- Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Manipulate data at scale: getting data into a ready-to-use state in close alignment with various business and Spoke stakeholders

Must Have:
- Advanced knowledge of ETL, Data Lake, Data Warehouse, and RDS architectures
- Python, SQL (any other OOP language is also valuable)
- PySpark (preferably) or Spark knowledge
- Object-oriented programming, Clean Code, and good documentation skills
- AWS: S3, Athena, Lambda, Glue, IAM, SQS, EC2, QuickSight, etc.
- Git
- Data Analysis & Visualization

Optional:
- AWS CDK (Cloud Development Kit)
- CI/CD knowledge
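As an illustration of the ingestion-and-curation tasks above, a minimal PySpark sketch follows; the bucket paths and column names are placeholder assumptions, not from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curation-pipeline").getOrCreate()

# Ingest raw JSON events from the landing zone
raw = spark.read.json("s3://raw-zone/events/")

# Curate: de-duplicate, drop incomplete records, derive a partition key
curated = (
    raw.dropDuplicates(["event_id"])
       .filter(F.col("event_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write the curated layer as date-partitioned Parquet
(curated.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://curated-zone/events/"))
```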
Posted 3 weeks ago
3.0 - 6.0 years
5 - 9 Lacs
Pune
Work from Office
Educational Qualification: Bachelor's/Master's degree in Computer Science, Engineering, or a related field; 60% or above in academics.

Responsibility:
- Provide sustainable and well-structured solutions along with documentation
- Expertise with cloud services, e.g., AWS, Azure, GCP
- Expertise in Spark
- Expertise in Python and SQL programming
- Experience with BI tools: QuickSight, Plotly Dash, Power BI, Tableau, etc.
- Development and maintenance of machine learning pipelines for existing ML models

Requirements:
- Expertise in AWS services, e.g., Glue, SageMaker, etc.
- Good analytical skills
- Experience working with international clients
- Alignment with counterparts on requirements and results
- Continuous-learning attitude
- Good communication skills in English
Posted 3 weeks ago
4.0 - 8.0 years
20 - 25 Lacs
Bengaluru
Hybrid
Job Title: AWS Engineer
Experience: 4-8 Years
Location: Bengaluru (Hybrid, 2-3 days onsite per week)
Employment Type: Full-Time
Notice Period: Only immediate to 15-day joiners preferred

Job Description: We are looking for an experienced AWS Engineer to join our dynamic data engineering team. The ideal candidate will have hands-on experience building and maintaining robust, scalable data pipelines and cloud-based architectures on AWS.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using AWS services such as Glue, Lambda, S3, Redshift, and EMR
- Collaborate with data scientists and ML engineers to operationalize machine learning models using AWS SageMaker
- Implement efficient data transformation and feature engineering workflows
- Optimize ETL/ELT processes and enforce best practices for data quality and governance
- Work with structured and unstructured data using Amazon Athena, DynamoDB, RDS, and similar services
- Build and manage CI/CD pipelines for data and ML workflows using AWS CodePipeline, CodeBuild, and Step Functions
- Monitor data infrastructure for performance, reliability, and cost-effectiveness
- Ensure data security and compliance with organizational and regulatory standards

Required Skills:
- Strong experience with AWS data and ML services
- Solid knowledge of ETL/ELT frameworks and data modeling
- Proficiency in Python, SQL, and scripting for data engineering
- Experience with CI/CD and DevOps practices on AWS
- Good understanding of data governance and compliance standards
- Excellent collaboration and problem-solving skills
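One small, hedged example of how such a pipeline can be wired together: a Lambda handler that starts a Glue job when a file lands in S3. The Glue job name and argument key are hypothetical.

```python
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # S3 put event -> locate the object that triggered the run
    record = event["Records"][0]["s3"]
    source = f"s3://{record['bucket']['name']}/{record['object']['key']}"
    # Start the (hypothetical) curation job, passing the source path through;
    # Glue job parameters are conventionally prefixed with "--"
    run = glue.start_job_run(
        JobName="curate-orders",
        Arguments={"--source_path": source},
    )
    return {"JobRunId": run["JobRunId"]}
```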
Posted 3 weeks ago
8.0 - 12.0 years
22 - 35 Lacs
Bengaluru
Hybrid
Role & responsibilities: As a Senior Data Engineer and database specialist you will be designing, creating and managing the cloud databases and data pipelines that underpin our decoupled cloud architecture and API-first approach. You have proven expertise in database design, data ingestion, transformation, data writing, scheduling and query management within a cloud environment. You will have proven experience and expertise in working with AWS Cloud Infrastructure Engineers, Software/API Developers and Architects to design, develop, deploy and operate data services and solutions that underpin a cloud ecosystem. You will take ownership and accountability of functional and non-functional design and work within a team of Engineers to create innovative solutions that unlock value and modernise technology designs. You will role model a continuous-improvement mindset in the team, and in your project interactions, by taking technical ownership of key assets, including roadmaps and technical direction of data services running on our AWS environments.

See yourself in our team: The Business Banking Technology Domain works in an Agile methodology with our business banking business to plan, prioritise and deliver on high-value technology objectives with key results that meet our regulatory obligations and protect the community. You will work within the VRM Crew that is working on initiatives such as a Gen AI based cash flow coach to provide relevant data to our regulators. To achieve our objectives, you will use your deep understanding of data modelling and data quality and your extensive experience with SQL to access relational databases such as Oracle and Postgres to identify, transform and validate data required for complex business reporting requirements. You will use your experience in designing and building reliable and efficient data pipelines, preferably using modern cloud services on AWS such as S3, Lambda, Redshift, Glue, etc., to process large volumes of data efficiently. Experience with data-centric frameworks such as Spark with programming knowledge in Scala or Python is highly advantageous, as is experience working on Linux with shell and automation frameworks to manage code and infrastructure in a well-structured and reliable manner. Experience with Pega workflow software as a source or target for data integration is also highly regarded.

We're interested in hearing from people who:
• Can design and implement databases for data integration in the enterprise
• Can performance-tune applications from a database code and design perspective
• Can automate data ingestion and transformation processes using scheduling tools, and monitor and troubleshoot data pipelines to ensure reliability and performance
• Have experience working through performance and scaling via horizontal scaling designs vs database tuning
• Can design application logical database requirements and implement physical solutions
• Can collaborate with business and technical teams in order to design and build critical databases and data pipelines
• Can advise business owners on strategic database direction and application solution design

Tech skills: We use a broad range of tools, languages, and frameworks. We don't expect you to know them all, but having significant experience and exposure with some of these (or equivalents) will set you up for success in this team.
• AWS Data products such as AWS Glue and AWS EMR
• Oracle and AWS Aurora RDS such as PostgreSQL
• AWS S3 ingestion, transformation and writing to databases
• Proficiency in programming languages like Python, Scala or Java for developing data ingestion and transformation scripts
• Strong knowledge of SQL for writing, optimizing, and debugging queries
• Familiarity with database design, indexing, and normalization principles; understanding of data formats (JSON, CSV, XML) and techniques for converting between them; ability to handle data validation, cleaning, and transformation
• Proficiency in automation tools and scripting (e.g., bash scripting, cron jobs) for scheduling and monitoring data processes
• Experience with version control systems (e.g., Git) for managing code and collaboration

Working with us: Whether you're passionate about customer service, driven by data, or called by creativity, a career with CommBank is for you. Our people bring their diverse backgrounds and unique perspectives to build a respectful, inclusive, and flexible workplace with flexible work locations. One where we're driven by our values, and supported to share ideas, initiatives, and energy. One where making a positive impact for customers, communities and each other is part of our every day. Here, you'll thrive. You'll be supported when faced with challenges and empowered to tackle new opportunities. We're hiring engineers from across all of Australia and have opened technology hubs in Melbourne and Perth. We really love working here, and we think you will too. We support our people with the flexibility to balance where work is done, with at least half their time each month connecting in office. We also have many other flexible working options available, including changing start and finish times, part-time arrangements and job share, to name a few. Talk to us about how these arrangements might work in the role you're interested in. If this sounds like the role for you, then we would love to hear from you. Apply today!

If you are interested in this job, please share your details and updated CV at Krishankant@thinkpeople.in: Total Exp.- Rel Exp.- Current Company- CTC- ECTC- Notice Period- DOB- Edu.-
Posted 3 weeks ago
3.0 - 8.0 years
10 - 18 Lacs
Pune
Work from Office
Required Skills & Qualifications:
- Strong hands-on experience with AWS Glue, AWS Lambda, and Azure Data Services
- Experience with Databricks for large-scale data processing and test validation
- Proficiency in PySpark and Python scripting for test automation and data validation
- Strong SQL skills for data validation and transformation testing
- Familiarity with cloud-native monitoring and logging tools (e.g., CloudWatch, Azure Monitor)
- Understanding of data warehousing concepts, data lakes, and batch/streaming architectures
- Experience with CI/CD pipelines and automated testing frameworks is a plus
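For illustration, a minimal sketch of the PySpark data-validation checks this role describes, reconciling a source layer against a curated target; the table paths and column name are assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-validation").getOrCreate()

source = spark.read.parquet("s3://landing/orders/")
target = spark.read.parquet("s3://curated/orders/")

# Row-count reconciliation between the landing and curated layers
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"row counts diverge: {src_count} vs {tgt_count}"

# Transformation check: the curated key column must never be null
null_rows = target.filter(F.col("order_id").isNull()).count()
assert null_rows == 0, f"{null_rows} null order_id rows after transform"
```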
Posted 3 weeks ago
15.0 - 20.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Python (Programming Language), AWS Aurora, PySpark, Oracle Procedural Language Extensions to SQL (PL/SQL)
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality.

Roles & Responsibilities:
- Expected to perform independently and become an SME
- Required active participation/contribution in team discussions
- Contribute to providing solutions to work-related problems
- Develop and implement software solutions using Python
- Collaborate with cross-functional teams to analyze and address technical issues
- Conduct code reviews and provide feedback to enhance code quality
- Stay updated on industry trends and best practices in software development
- Assist in troubleshooting and resolving application issues

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Python (Programming Language), PySpark, Oracle or SQL DB, AWS (Aurora, S3, Glue)
- Strong understanding of software development methodologies
- Experience in developing and maintaining applications
- Knowledge of database management systems
- Familiarity with cloud computing platforms
- Ready to work in shifts (12 PM to 10 PM)

Additional Information:
- The candidate should have a minimum of 6 years of experience in Python (Programming Language)
- This position is based at our Hyderabad office
- A 15 years full time education is required

Qualification: 15 years full time education
Posted 3 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad
Work from Office
Role & responsibilities: As a Senior Spark Engineer (Scala), you'll partner in a team of experienced software engineers, removing impediments and enabling the teams to deliver business value. Ensure team ownership of legacy systems with an emphasis on maintaining operational stability. Be a passionate leader committed to the development and mentorship of your teams. Partner with business and IT stakeholders to ensure alignment with key corporate priorities. Share ideas and work to bring people together to help solve sophisticated problems. Create a positive and collaborative environment by championing open communication and soliciting continuous feedback. Stay current with new technology trends.

Additional Responsibilities:
- Participates in the discussion and documentation of best practices and standards for application development
- Complies with all company policies and procedures
- Remains current in profession and industry trends
- Successfully completes regulatory and job training requirements

Required Experience:
- 6+ years of hands-on software engineering experience with any object-oriented language, ideally Scala
- 5+ years of experience using Spark, EMR, Glue or other serverless compute technology in the Cloud
- 5+ years of experience architecting and enhancing data platforms and service-oriented architectures
- Experience working within Agile/DevSecOps development environments
- Excellent communication, collaboration, and mentoring skills
- More recent experience in Cloud development preferred
- Experience working with modern, web-based architectures, including REST APIs, Serverless, and event-driven microservices
- Bachelor's degree or equivalent in Computer Science, Information Technology, or related discipline

Desired Experience:
- Experience working with financial management stakeholders
- Experience with Workday or other large ERP platforms desired
- Life insurance or financial services industry experience a plus
Posted 3 weeks ago
10.0 - 20.0 years
20 - 35 Lacs
Navi Mumbai
Work from Office
Formulating and synthesizing new products (NPD) to meet market demands in wood adhesives. Design/monitor progress on application and testing of new products and enhancements of existing products. Handle troubleshooting, cost reduction, and product improvement.

Required Candidate profile: Master's/Ph.D. from a reputed institute in Polymer Chemistry. Experience in polymer chemistry and adhesive formulation. Strong leadership skills and technical expertise are a must.
Posted 3 weeks ago
6.0 - 9.0 years
10 - 20 Lacs
Pune
Hybrid
Pattern values data and the engineering required to take full advantage of it. As a Senior Data Engineer at Pattern, you will be working on business problems that have a huge impact on how the company maintains its competitive edge.

Essential Duties and Responsibilities:
- Develop, deploy, and support real-time, automated, scalable data streams from a variety of sources into the data lake or data warehouse
- Develop and implement data auditing strategies and processes to ensure data quality; identify and resolve problems associated with large-scale data processing workflows; implement technical solutions to maintain data pipeline processes and troubleshoot failures
- Collaborate with technology teams and partners to specify data requirements and provide access to data
- Tune application and query performance using profiling tools and SQL or other relevant query languages
- Understand business, operations, and analytics requirements for data
- Build data expertise and own data quality for assigned areas of ownership
- Work with data infrastructure to triage issues and drive to resolution

Required Qualifications:
- Bachelor's degree in Data Science, Data Analytics, Information Management, Computer Science, Information Technology, a related field, or equivalent professional experience
- Overall experience of more than 7 years
- 3+ years of experience working with SQL
- 3+ years of experience in implementing modern data-architecture-based data warehouses
- 2+ years of experience working with data warehouses such as Redshift, BigQuery, or Snowflake and understanding data architecture design
- Excellent software engineering and scripting knowledge
- Strong communication skills (both in presentation and comprehension) along with the aptitude for thought leadership in data management and analytics
- Expertise with data systems working with massive data sets from various data sources
- Ability to lead a team of Data Engineers

Preferred Qualifications:
- Experience working with time-series databases
- Advanced knowledge of SQL, including the ability to write stored procedures, triggers, analytic/windowing functions, and tuning
- Advanced knowledge of Snowflake, including the ability to write and orchestrate streams and tasks
- Background in Big Data, non-relational databases, Machine Learning, and Data Mining
- Experience with cloud-based technologies including SNS, SQS, SES, S3, Lambda, and Glue
- Experience with modern data platforms like Redshift, Cassandra, DynamoDB, Apache Airflow, Spark, or Elasticsearch
- Expertise in Data Quality and Data Governance

Our Core Values:
- Data Fanatics: Our edge is always found in the data
- Partner Obsessed: We are obsessed with partner success
- Team of Doers: We have a bias for action
- Game Changers: We encourage innovation
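As a concrete example of the data-auditing and windowing skills listed above, a small PySpark sketch that flags duplicate records per business key; the paths and column names are illustrative.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("dedup-audit").getOrCreate()
events = spark.read.parquet("s3://lake/events/")

# Rank rows per business key, newest first; anything ranked > 1 is a duplicate
w = Window.partitionBy("event_id").orderBy(F.col("ingested_at").desc())
ranked = events.withColumn("rn", F.row_number().over(w))

duplicates = ranked.filter(F.col("rn") > 1)
print(f"duplicate rows detected: {duplicates.count()}")
```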
Posted 3 weeks ago
8.0 - 13.0 years
27 - 35 Lacs
Kochi, Bengaluru
Work from Office
About Us: DBiz Solution is a Transformational Partner. Digital transformation is intense. We'd like for you to have something to hold on to, whilst you set out bringing your ideas into existence. Beyond anything, we put humans first. This means solving real problems with real people and providing needs with real, working solutions. DBiz leverages a wealth of experience building a variety of software to improve our clients' ability to respond to change and build tomorrow's digital business. We're quite proud of our record of accomplishment. Having delivered over 150 projects for over 100 clients, we can honestly say we leave our clients happy and wanting more. Using data, we aim to unlock value and create platforms/products at scale that can evolve with business strategies using our innovative Rapid Application Development methodologies.

The passion for creating an impact: Our passion for creating an impact drives everything we do. We believe that technology has the power to transform businesses and improve lives, and it is our mission to harness this power to make a difference. We constantly strive to innovate and deliver solutions that not only meet our clients' needs but exceed their expectations, allowing them to achieve their goals and drive sustainable growth. Through our world-leading digital transformation strategies, we are always growing and improving. That means creating an environment where every one of us can strive together for excellence.

Senior Data Engineer, AWS (Glue, Data Warehousing, Optimization & Security): Experienced Senior Data Engineer (8+ yrs) with deep expertise in AWS cloud data services, particularly AWS Glue, to design, build, and optimize scalable data solutions. The ideal candidate will drive end-to-end data engineering initiatives from ingestion to consumption, with a strong focus on data warehousing, performance optimization, self-service enablement, and data security. The candidate needs to have experience in consulting and troubleshooting exercises to design best-fit solutions.

Key Responsibilities:
- Consult with business and technology stakeholders to understand data requirements, troubleshoot, and advise on best-fit AWS data solutions
- Design and implement scalable ETL pipelines using AWS Glue, handling structured and semi-structured data
- Architect and manage modern cloud data warehouses (e.g., Amazon Redshift, Snowflake, or equivalent)
- Optimize data pipelines and queries for performance, cost-efficiency, and scalability
- Develop solutions that enable self-service analytics for business and data science teams
- Implement data security, governance, and access controls
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs
- Monitor, troubleshoot, and improve existing data solutions, ensuring high availability and reliability

Required Skills & Experience:
- 8+ years of experience in data engineering on the AWS platform
- Strong hands-on experience with AWS Glue, Lambda, S3, Athena, Redshift, IAM
- Proven expertise in data modelling, data warehousing concepts, and SQL optimization
- Experience designing self-service data platforms for business users
- Solid understanding of data security, encryption, and access management
- Proficiency in Python
- Familiarity with DevOps practices & CI/CD
- Strong problem-solving skills
- Exposure to BI tools (e.g., QuickSight, Power BI, Tableau) for self-service enablement

Preferred Qualifications:
- AWS Certified Data Analytics – Specialty or Solutions Architect – Associate
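A hedged sketch of the kind of AWS Glue ETL job this role centres on, reading semi-structured JSON and writing Parquet for the consumption layer; the bucket paths are placeholders, not client systems.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read semi-structured JSON from the raw zone as a DynamicFrame
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    format="json",
    connection_options={"paths": ["s3://raw-zone/customers/"]},
)

# Land it as Parquet for the Athena/warehouse consumption layer
glue_context.write_dynamic_frame.from_options(
    frame=raw,
    connection_type="s3",
    format="parquet",
    connection_options={"path": "s3://curated-zone/customers/"},
)

job.commit()
```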
Posted 3 weeks ago
12.0 - 17.0 years
30 - 45 Lacs
Bengaluru
Work from Office
Work Location: Bangalore
Experience: 10+ yrs

Required Skills:
- Experience with AWS cloud and AWS services such as S3 buckets, Lambda, API Gateway, and SQS queues
- Experience with batch job scheduling and identifying data/job dependencies
- Experience with data engineering using the AWS platform and Python
- Familiar with AWS services like EC2, S3, Redshift/Spectrum, Glue, Athena, RDS, Lambda, and API Gateway
- Familiar with software DevOps CI/CD tools, such as Git, Jenkins, Linux, and shell script

Thanks & Regards,
Suganya R
suganya@spstaffing.in
Posted 4 weeks ago
4.0 - 9.0 years
15 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Work Location: Bangalore, Chennai, Hyderabad, Pune, Bhubaneshwar, Kochi
Experience: 4-6 yrs

Required Skills:
- Experience in PySpark
- Experience in AWS/Glue

Please share your updated profile with suganya@spstaffing.in if you are actively looking for a change.
Posted 4 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases
- Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework using Python or Scala and Big Data technologies for various use cases built on the platform
- Experience in developing streaming pipelines
- Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, cloud computing, etc.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on Cloud Data Platforms on AWS
- Experience in AWS EMR / AWS Glue / Databricks, Amazon Redshift, DynamoDB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers like Kafka

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Certified Spark Developer
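To illustrate the streaming-pipeline experience asked for above, a minimal Spark Structured Streaming sketch reading from Kafka and landing to S3; the broker, topic, and paths are assumptions.

```python
from pyspark.sql import SparkSession

# Assumes the spark-sql-kafka connector package is on the classpath
spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

stream = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load())

# Kafka delivers key/value as binary; cast before landing
events = stream.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

query = (events.writeStream
    .format("parquet")
    .option("path", "s3://lake/raw/orders/")
    .option("checkpointLocation", "s3://lake/checkpoints/orders/")
    .start())
query.awaitTermination()
```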
Posted 4 weeks ago
5.0 - 10.0 years
20 - 27 Lacs
Pune
Hybrid
Job Description, Duties and Responsibilities: We are looking for a self-starter to join our Data Engineering team. You will work in a fast-paced environment where you will get an opportunity to build and contribute to the full lifecycle development and maintenance of the data engineering platform.

With the Data Engineering team you will get an opportunity to:
- Design and implement data engineering solutions that are scalable, reliable and secure in the Cloud environment
- Understand and translate business needs into data engineering solutions
- Build large-scale data pipelines that can handle big data sets using distributed data processing techniques, supporting the efforts of the data science and data application teams
- Partner with cross-functional stakeholders including Product Managers, Architects, Data Quality Engineers, Application and Quantitative Science end users to deliver engineering solutions
- Contribute to defining data governance across the data platform

Basic Requirements:
- A minimum of a BS degree in computer science, software engineering, or a related scientific discipline is desired
- 3+ years of work experience in building scalable and robust data engineering solutions
- Strong understanding of object-oriented programming and proficiency with programming in Python (TDD) and PySpark to build scalable algorithms
- 3+ years of experience in distributed computing and big data processing using the Apache Spark framework, including Spark optimization techniques
- 2+ years of experience with Databricks, Delta tables, Unity Catalog, Delta Sharing, Delta Live Tables (DLT) and incremental data processing
- Experience with Delta Lake, Unity Catalog
- Advanced SQL coding and query optimization experience, including the ability to write analytical and nested queries
- 3+ years of experience in building scalable ETL/ELT data pipelines on Databricks and AWS (EMR)
- 2+ years of experience orchestrating data pipelines using Apache Airflow/MWAA
- Understanding and experience of AWS services that include ADX, EC2, S3
- 3+ years of experience with data modeling techniques for structured/unstructured datasets
- Experience with relational/columnar databases (Redshift, RDS) and interactive querying services (Athena/Redshift Spectrum)
- Passion for healthcare and improving patient outcomes
- Analytical thinking with strong problem-solving skills
- Stay on top of emerging technologies and possess a willingness to learn
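As a small example of the incremental Delta processing this role emphasises, a hedged sketch of an upsert with the delta-spark MERGE API; the table paths and join key are illustrative.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

# Assumes a Databricks cluster or a Spark session configured with delta-spark
spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

updates = spark.read.parquet("s3://lake/staging/orders_changes/")
target = DeltaTable.forPath(spark, "s3://lake/silver/orders/")

(target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()      # apply changed rows
    .whenNotMatchedInsertAll()   # add rows seen for the first time
    .execute())
```

MERGE-based upserts like this are the usual building block behind the incremental patterns the posting names (DLT and Delta change processing).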
Posted 4 weeks ago
6.0 - 11.0 years
6 - 12 Lacs
Gurugram
Work from Office
Responsibilities: * Design, develop & maintain backend APIs using a Rust microservices architecture with AWS services like S3, Lambda & Glue.
Posted 4 weeks ago
7.0 - 10.0 years
7 - 10 Lacs
Hyderabad, Telangana, India
On-site
Description: We are seeking an experienced AWS Data Engineer to join our team in India. The ideal candidate will be responsible for designing and implementing data solutions on the AWS cloud platform, ensuring high performance and reliability.

Responsibilities:
- Design, develop, and maintain scalable data processing systems on AWS
- Implement data pipelines using AWS services such as Glue, Lambda, and Kinesis
- Optimize data storage solutions and data lake architectures using S3 and Redshift
- Collaborate with data scientists and analysts to understand data needs and deliver appropriate solutions
- Ensure data quality and integrity by implementing appropriate validation checks and monitoring systems
- Automate data integration processes and develop ETL workflows to streamline data operations

Skills and Qualifications:
- 7-10 years of experience in data engineering or related field
- Strong proficiency in AWS services including S3, Redshift, Glue, Lambda, and Kinesis
- Experience with SQL and NoSQL databases like PostgreSQL, DynamoDB, or similar
- Proficient in programming languages such as Python, Java, or Scala
- Solid understanding of data warehousing concepts and ETL processes
- Experience with data modeling and data architecture design
- Familiarity with data governance and data security best practices
- Ability to work in an Agile environment and collaborate with cross-functional teams
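One hedged sketch of the Kinesis side of such pipelines: publishing records to a stream with boto3. The stream name and partition-key field are hypothetical.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def publish(event: dict) -> str:
    """Push one event onto a (hypothetical) Kinesis stream."""
    resp = kinesis.put_record(
        StreamName="clickstream-events",
        Data=json.dumps(event).encode("utf-8"),  # Kinesis expects bytes
        PartitionKey=str(event["user_id"]),      # controls shard routing
    )
    return resp["SequenceNumber"]
```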
Posted 4 weeks ago
6.0 - 8.0 years
6 - 8 Lacs
Hyderabad, Telangana, India
On-site
Description We are seeking a skilled Data Engineer to join our team in India. The ideal candidate will be responsible for building and maintaining robust data pipelines and architectures that support our data-driven initiatives. Responsibilities Design, construct, install, and maintain large-scale processing systems and data pipelines. Develop data models and architectures to support business intelligence and analytics. Collaborate with data scientists and analysts to understand data requirements and ensure data quality. Implement data integration solutions from various sources to a centralized data warehouse. Monitor and optimize performance of data systems to ensure high availability and reliability. Ensure data governance and compliance with regulations. Skills and Qualifications Proficiency in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB). Experience with ETL tools and data warehousing solutions (e.g., Apache NiFi, Talend, AWS Redshift). Strong programming skills in languages such as Python, Java, or Scala. Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka). Knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Understanding of data modeling concepts and best practices. Ability to work with large datasets and perform data analysis.
Posted 4 weeks ago
8.0 - 12.0 years
15 - 25 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Warm welcome from SP Staffing Services! Reaching out to you regarding a permanent opportunity!

Job Description:
Exp: 8-12 yrs
Location: Chennai/Hyderabad/Bangalore/Pune/Bhubaneshwar/Kochi
Skill: PySpark/AWS Glue
- Implementing data ingestion pipelines from different types of data sources, i.e., databases, S3, files, etc.
- Experience in building ETL / data warehouse transformation processes
- Developing Big Data and non-Big Data cloud-based enterprise solutions in PySpark and SparkSQL and related frameworks/libraries
- Developing scalable, re-usable, self-service frameworks for data ingestion and processing
- Integrating end-to-end data pipelines to take data from source to target data repositories, ensuring the quality and consistency of data
- Processing performance analysis and optimization
- Bringing best practices in the following areas: design & analysis, automation (pipelining, IaC), testing, monitoring, documentation
- Experience working with structured and unstructured data

Good to have (knowledge):
1. Experience in cloud-based solutions
2. Knowledge of data management principles

Interested candidates can share their resume with sangeetha.spstaffing@gmail.com with the below inline details:
Full Name as per PAN:
Mobile No:
Alt No/WhatsApp No:
Total Exp:
Relevant Exp in PySpark:
Rel Exp in Python:
Rel Exp in AWS Glue:
Current CTC:
Expected CTC:
Notice Period (Official):
Notice Period (Negotiable)/Reason:
Date of Birth:
PAN number:
Reason for Job Change:
Offer in Pipeline (Current Status):
Availability for virtual interview on weekdays between 10 AM-4 PM (please mention time):
Current Res Location:
Preferred Job Location:
Whether educational % in 10th std, 12th std, UG is all above 50%?
Do you have any gaps in between your education or career? If so, please mention the duration in months/years:
Posted 1 month ago
5.0 - 10.0 years
10 - 18 Lacs
Bengaluru, Mumbai (All Areas)
Hybrid
About the Role: We are seeking a passionate and experienced Subject Matter Expert and Trainer to deliver our comprehensive Data Engineering with AWS program. This role combines deep technical expertise with the ability to coach, mentor, and empower learners to build strong capabilities in data engineering, cloud services, and modern analytics tools. If you have a strong background in data engineering and love to teach, this is your opportunity to create impact by shaping the next generation of cloud data professionals.

Key Responsibilities:
- Deliver end-to-end training on the Data Engineering with AWS curriculum, including: Oracle SQL and ANSI SQL; Data Warehousing Concepts, ETL & ELT; Data Modeling and Data Vault; Python programming for data engineering; AWS Fundamentals (EC2, S3, Glue, Redshift, Athena, Kinesis, etc.); Apache Spark and Databricks; Data Ingestion, Processing, and Migration Utilities; Real-time Analytics and Compute Services (Airflow, Step Functions)
- Facilitate engaging sessions, virtual and in-person, and adapt instructional methods to suit diverse learning styles
- Guide learners through hands-on labs, coding exercises, and real-world projects
- Assess learner progress through evaluations, assignments, and practical assessments
- Provide mentorship, resolve doubts, and inspire confidence in learners
- Collaborate with the program management team to continuously improve course delivery and learner experience
- Maintain up-to-date knowledge of AWS and data engineering best practices

Ideal Candidate Profile:
- Experience: Minimum 5-8 years in Data Engineering, Big Data, or Cloud Data Solutions; prior experience delivering technical training or conducting workshops is strongly preferred
- Technical Expertise: Proficiency in SQL, Python, and Spark; hands-on experience with AWS services: Glue, Redshift, Athena, S3, EC2, Kinesis, and related tools; familiarity with Databricks, Airflow, Step Functions, and modern data pipelines
- Certifications: AWS certifications (e.g., AWS Certified Data Analytics – Specialty) are a plus
- Soft Skills: Excellent communication, facilitation, and interpersonal skills; ability to break down complex concepts into simple, relatable examples; strong commitment to learner success and outcomes

Email your application to: careers@edubridgeindia.in
Posted 1 month ago
5.0 - 7.0 years
5 - 5 Lacs
Kochi, Hyderabad, Thiruvananthapuram
Work from Office
Key Responsibilities:
- Develop & Deliver: Build applications/features/components as per design specifications, ensuring high-quality code adhering to coding standards and project timelines.
- Testing & Debugging: Write, review, and execute unit test cases; debug code; validate results with users; and support defect analysis and mitigation.
- Technical Decision Making: Select optimal technical solutions, including reuse or creation of components, to enhance efficiency, cost-effectiveness, and quality.
- Documentation & Configuration: Create and review design documents, templates, checklists, and configuration management plans; ensure team compliance.
- Domain Expertise: Understand the customer's business domain deeply to advise developers and identify opportunities for value addition; obtain relevant certifications.
- Project & Release Management: Manage delivery of modules/user stories, estimate efforts, coordinate releases, and ensure adherence to engineering processes and timelines.
- Team Leadership: Set goals (FAST), provide feedback, mentor team members, maintain motivation, and manage people-related issues effectively.
- Customer Interaction: Clarify requirements, present design options, conduct demos, and build customer confidence through timely and quality deliverables.
- Technology Stack: Expertise in Big Data technologies (PySpark, Scala), plus preferred skills in AWS services (EMR, S3, Glue, Airflow, RDS, DynamoDB), CI/CD tools (Jenkins), relational & NoSQL databases, microservices, and containerization (Docker, Kubernetes).
- Soft Skills & Collaboration: Communicate clearly, work under pressure, handle dependencies and risks, collaborate with cross-functional teams, and proactively seek/offer help.

Required Skills: Big Data, PySpark, Scala

Additional Comments:
Must-Have Skills: Big Data (PySpark + Java/Scala)
Preferred Skills:
- AWS (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar)
- CI/CD (Jenkins or another)
- Relational databases experience (any)
- NoSQL databases experience (any)
- Microservices or domain services or API gateways or similar
- Containers (Docker, K8s, or similar)
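Since Airflow appears in the preferred stack, a minimal sketch of a daily DAG follows; the DAG id, task, and callable body are illustrative placeholders.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    # Placeholder body: pull files from S3 and land them in the raw zone
    print("ingesting...")

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="ingest", python_callable=ingest)
```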
Posted 1 month ago
5.0 - 9.0 years
15 - 19 Lacs
Chennai
Work from Office
Senior Data Engineer - Azure
Years of Experience: 5
Job location: Chennai

Job Description: We are looking for a skilled and experienced Senior Azure Developer to join the team! As part of the team, you will be involved in the implementation of ongoing and new initiatives for our company. If you love learning, thinking strategically, innovating, and helping others, this job is for you!

Primary Skills: ADF, Databricks
Secondary Skills: DBT, Python, Databricks, Airflow, Fivetran, Glue, Snowflake

Role Description: The data engineering role requires creating and managing the technological infrastructure of a data platform, being in charge of / involved in architecting, building, and managing data flows/pipelines, and constructing data storages (NoSQL, SQL), tools to work with big data (Hadoop, Kafka), and integration tools to connect sources or other databases.

Role Responsibility:
- Translate functional specifications and change requests into technical specifications
- Translate business requirement documents, functional specifications, and technical specifications into related coding
- Develop efficient code with unit testing and code documentation
- Ensure accuracy and integrity of data and applications through analysis, coding, documenting, testing, and problem solving
- Set up the development environment and configuration of the development tools
- Communicate with all the project stakeholders on the project status
- Manage, monitor, and ensure the security and privacy of data to satisfy business needs
- Contribute to the automation of modules, wherever required
- Be proficient in written, verbal and presentation communication (English)
- Coordinate with the UAT team

Role Requirement:
- Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.)
- Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.)
- Knowledgeable in Shell/PowerShell scripting
- Knowledgeable in relational databases, non-relational databases, data streams, and file stores
- Knowledgeable in performance tuning and optimization
- Experience in data profiling and data validation
- Experience in requirements gathering and documentation processes and performing unit testing
- Understanding and implementing QA and various testing processes in the project
- Knowledge of any BI tools will be an added advantage
- Sound aptitude, outstanding logical reasoning, and analytical skills
- Willingness to learn and take initiative
- Ability to adapt to a fast-paced Agile environment

Additional Requirement:
- Demonstrated expertise as a Data Engineer, specializing in Azure cloud services
- Highly skilled in Azure Data Factory, Azure Data Lake, Azure Databricks, and Azure Synapse Analytics
- Create and execute efficient, scalable, and dependable data pipelines utilizing Azure Data Factory
- Utilize Azure Databricks for data transformation and processing
- Effectively oversee and enhance data storage solutions, emphasizing Azure Data Lake and other Azure storage services
- Construct and uphold workflows for data orchestration and scheduling using Azure Data Factory or equivalent tools
- Proficient in programming languages like Python and SQL, and conversant with pertinent scripting languages
Posted 1 month ago
5.0 - 9.0 years
13 - 19 Lacs
Chennai
Work from Office
Senior Data Engineer - DBT and Snowflake
Years of Experience: 5
Job location: Chennai

Role Description: The data engineering role requires creating and managing the technological infrastructure of a data platform, being in charge of / involved in architecting, building, and managing data flows/pipelines, and constructing data storages (NoSQL, SQL), tools to work with big data (Hadoop, Kafka), and integration tools to connect sources or other databases. Should hold a minimum of 5 years of experience in DBT and Snowflake.

Role Responsibility:
- Translate functional specifications and change requests into technical specifications
- Translate business requirement documents, functional specifications, and technical specifications into related coding
- Develop efficient code with unit testing and code documentation

Role Requirement:
- Proficient in basic and advanced SQL programming concepts (procedures, analytical functions, etc.)
- Good knowledge and understanding of data warehouse concepts (dimensional modeling, change data capture, slowly changing dimensions, etc.)
- Knowledgeable in Shell/PowerShell scripting
- Knowledgeable in relational databases, non-relational databases, data streams, and file stores
- Knowledgeable in performance tuning and optimization
- Experience in data profiling and data validation
- Experience in requirements gathering and documentation processes and performing unit testing
- Understanding and implementing QA and various testing processes in the project

Additional Requirement:
- Design, develop, and maintain scalable data models and transformations using DBT in conjunction with Snowflake; ensure the effective transformation and loading of data from diverse sources into the data warehouse or data lake
- Implement and manage data models in DBT, guaranteeing accurate data transformation and alignment with business needs
- Utilize DBT to convert raw, unstructured data into structured datasets, enabling efficient analysis and reporting
- Write and optimize SQL queries within DBT to enhance data transformation processes and improve overall performance
- Establish DBT best practices to improve performance, scalability, and reliability
- Expertise in SQL and a strong understanding of data warehouse concepts and modern data architectures
- Familiarity with cloud-based platforms (e.g., AWS, Azure, GCP)
- Migrate legacy transformation code into modular DBT data models
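DBT models are typically SQL; to stay consistent with the other examples on this page, here is a hedged sketch of a DBT Python model instead (supported on Snowflake via Snowpark in dbt-core 1.3+). The model and column names are assumptions.

```python
# models/fct_completed_orders.py -- a DBT Python model; dbt supplies the
# `dbt` and `session` objects at run time, so no imports are needed here
def model(dbt, session):
    dbt.config(materialized="table")
    # dbt.ref() returns the upstream staging model as a Snowpark DataFrame
    orders = dbt.ref("stg_orders")
    # Keep only completed orders for the fact table
    return orders.filter(orders["STATUS"] == "completed")
```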
Posted 1 month ago
6.0 - 9.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Your Role:
- Knowledge of Cloud Computing using AWS services like Glue, Lambda, Athena, Step Functions, S3, etc.
- Knowledge of the programming languages Python/Scala
- Knowledge of Spark/PySpark (Core and Streaming) and hands-on experience transforming data using streaming
- Knowledge of building real-time or batch ingestion and transformation pipelines

Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.

Your Profile:
- Working experience and strong knowledge of Databricks is a plus
- Analyze existing queries for performance improvements
- Develop procedures and scripts for data migration
- Provide timely scheduled management reporting
- Investigate exceptions regarding asset movements

What you will love about working at Capgemini: We're committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging at Capgemini. You are valued for who you are, and you can bring your original self to work. Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band. Also get to participate in internal sports events, yoga challenges, or marathons. Capgemini serves clients across industries, so you may get to work on varied data engineering projects involving real-time data pipelines, big data processing, and analytics. You'll work extensively with AWS services like S3, Redshift, Glue, Lambda, and more.
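As a small illustration of the Athena knowledge listed above, a hedged boto3 sketch that runs a query against a curated database; the database, table, and results bucket are placeholders.

```python
import boto3

athena = boto3.client("athena")

# Athena runs asynchronously: start the query, then poll or fetch results
# later using the returned execution id
resp = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS n FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://athena-results/adhoc/"},
)
print("query id:", resp["QueryExecutionId"])
```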
Posted 1 month ago