
572 Glue Jobs - Page 20

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 8.0 years

10 - 20 Lacs

Chennai

Hybrid

Roles & Responsibilities:
• We are looking for a strong Senior Data Engineer who will be primarily responsible for designing, building, and maintaining ETL/ELT pipelines.
• Integrate data from multiple sources or vendors to provide holistic insights from data.
• Build and manage Data Lake and Data Warehouse solutions, design data models, create ETL processes, and implement data quality mechanisms.
• Perform exploratory data analysis (EDA) to troubleshoot data-related issues and assist in their resolution.
• Should have experience in client interaction, both oral and written.
• Experience in mentoring juniors and providing the required guidance to the team.
Required Technical Skills
• Extensive experience in languages such as Python, PySpark, and SQL (basic and advanced).
• Strong experience in Data Warehousing, ETL, Data Modelling, building ETL pipelines, and Data Architecture.
• Must be proficient in Redshift, Azure Data Factory, Snowflake, etc.
• Hands-on experience with cloud services such as AWS S3, Glue, Lambda, CloudWatch, Athena, etc.
• Knowledge of Dataiku and Big Data technologies is good to have; basic knowledge of BI tools like Power BI or Tableau is a plus.
• Sound knowledge of data management, data operations, data quality, and data governance.
• Knowledge of SFDC and Waterfall/Agile methodology.
• Strong knowledge of the Pharma domain / life sciences commercial data operations.
Qualifications
• Bachelor's or Master's degree in Engineering, MCA, or equivalent.
• 4-6 years of relevant industry experience as a Data Engineer.
• Experience working on Pharma syndicated data such as IQVIA, Veeva, Symphony; Claims, CRM, Sales, Open Data, etc.
• High motivation, good work ethic, maturity, self-organization, and personal initiative.
• Ability to work collaboratively and provide support to the team.
• Excellent written and verbal communication skills.
• Strong analytical and problem-solving skills.
Location
• Chennai, India

Posted 2 months ago

Apply

6.0 - 11.0 years

13 - 18 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities
Lead a team to design, develop, test, deploy, maintain and continuously improve software
Mentor the engineering team to develop and perform as highly as possible
Guide and help the team adopt best engineering practices
Support driving modern solutions to complex problems
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications
7+ years of overall IT experience
3+ years of experience - AWS (all services needed for Big Data pipelines, such as S3, EMR, SNS/SQS, EventBridge, Lambda, CloudWatch, MSK, Glue, container services, etc.), Spark, Scala, Hadoop
2+ years of experience - Python, shell scripting, orchestration (Airflow or MWAA preferred), SQL, CI/CD (Git preferred and experience with deployment pipelines), DevOps (including supporting the production stack and working with SRE teams)
1+ years of experience - Infrastructure as code (Terraform preferred)
1+ years of experience - Spark Streaming
Healthcare domain and data standards
Preferred Qualifications
Azure, Big Data and/or Cloud certifications
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 2 months ago

Apply

6.0 - 11.0 years

16 - 20 Lacs

Hyderabad

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together. The Optum Technology Digital team is on a mission to disrupt the healthcare industry, transforming UHG into an industry-leading Consumer brand. We deliver hyper-personalized digital solutions that empower direct-to-consumer, digital-first experiences, educating, guiding, and empowering consumers to access the right care at the right time. Our mission is to revolutionize healthcare for patients and providers by delivering cutting-edge, personalized and conversational digital solutions. We’re Consumer Obsessed, ensuring they receive exceptional support throughout their healthcare journeys. As we drive this transformation, we're revolutionizing customer interactions with the healthcare system, leveraging AI, cloud computing, and other disruptive technologies to tackle complex challenges. Serving UnitedHealth Group's digital technology needs, the Consumer Engineering team impacts millions of lives through UnitedHealthcare & Optum. We are seeking a dynamic individual who embodies modern engineering culture - someone with deep engineering expertise within a digital product model, a passion for innovation, and a relentless drive to enhance the consumer experience. Our ideal candidate thrives in an agile, fast-paced rapid-prototyping environment, embraces DevOps and continuous integration/continuous deployment (CI/CD) practices, and champions the Voice of the Customer. If you are driven by the pursuit of excellence, eager to innovate, and excited to make a tangible impact within a team that embraces modern technologies and consumer-centric strategies, while prioritizing robust cyber-security protocols, we invite you to explore this exciting opportunity with us. Join our team and be at the forefront of shaping the future of healthcare, where your unique skills will not only be recognized but celebrated. 
Primary Responsibilities
Design and implement data models to analyse business, system, and security events for real-time insights and threat detection
Conduct exploratory data analysis (EDA) to understand patterns and relationships across large data sets, and develop hypotheses for new model development
Develop dashboards and reports to present actionable insights to business and security teams
Build and automate near real-time analytics workflows on AWS, leveraging services like Kinesis, Glue, Redshift, and QuickSight
Collaborate with AI/ML engineers to develop and validate data features for model inputs
Interpret and communicate complex data trends to stakeholders and provide recommendations for data-driven decision-making
Ensure data quality and governance standards, collaborating with data engineering teams to build quality data pipelines
Develop data science algorithms and generate actionable insights as per platform needs, working closely with cross-capability teams throughout the solution development lifecycle from design to implementation and monitoring
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications
B.Tech or Master's degree or equivalent experience
12+ years of experience in data engineering roles in data warehousing
3+ years of experience as a Data Scientist with a focus on building models for analytics and insights in AWS environments
Experience with AWS data and analytics services (e.g., Kinesis, Glue, Redshift, Athena, Timestream)
Hands-on experience with statistical analysis, anomaly detection and predictive modelling
Proficiency with SQL, Python, and data visualization tools like QuickSight, Tableau, or Power BI
Proficiency in data wrangling, cleansing, and feature engineering
Preferred Qualifications
Experience in security data analytics, focusing on threat detection and prevention
Knowledge of AWS security tools and understanding of cloud data security principles
Familiarity with deploying data workflows using CI/CD pipelines in AWS environments
Background in working with real-time data streaming architectures and handling high-volume event-based data

Posted 2 months ago

Apply

3.0 - 7.0 years

12 - 17 Lacs

Bengaluru

Work from Office

Job Summary:
Synechron is seeking an experienced Senior Data Engineer with expertise in AWS, Apache Airflow, and DBT to design and implement scalable, reliable data pipelines. The role involves collaborating with data teams and business stakeholders to develop data solutions that enable actionable insights and support organizational decision-making. The ideal candidate will bring data engineering experience, demonstrating strong technical skills, strategic thinking, and the ability to work in a fast-paced, evolving environment.
Software Requirements:
Required:
Strong proficiency in AWS services including S3, Redshift, Lambda, and Glue, with proven hands-on experience
Expertise in Apache Airflow for workflow orchestration and pipeline management
Extensive experience with DBT for data transformation and modeling
Solid knowledge of SQL for data querying and manipulation
Preferred:
Familiarity with Hadoop, Spark, or other big data technologies
Experience with NoSQL databases (e.g., DynamoDB, Cassandra)
Knowledge of data governance and security best practices within cloud environments
Overall Responsibilities:
Lead the design, development, and maintenance of scalable and efficient data pipelines and workflows utilizing AWS, Airflow, and DBT
Collaborate with data scientists, analysts, and business teams to gather requirements and translate them into technical solutions
Optimize Extract, Transform, Load (ETL) processes to enhance data quality, integrity, and timeliness
Monitor pipeline performance, troubleshoot issues, and implement improvements to ensure operational excellence
Enforce data management, governance, and security protocols across all data flows
Mentor junior data engineers and promote best practices within the team
Stay current with emerging data technologies and industry trends, recommending innovations for the data ecosystem
Technical Skills (By Category):
Programming Languages: Essential: SQL, Python (preferred for scripting and automation); Preferred: Spark, Scala, Java (for big data integration)
Databases/Data Management: Extensive experience with data warehousing (Redshift, Snowflake, or similar) and relational databases (MySQL, PostgreSQL); familiarity with NoSQL databases such as DynamoDB or Cassandra is a plus
Cloud Technologies: AWS cloud platform, leveraging services like S3, Lambda, Glue, Redshift, and IAM security features
Frameworks and Libraries: Apache Airflow, dbt, and related data orchestration and transformation tools
Development Tools and Methodologies: Git, Jenkins, CI/CD pipelines, Agile/Scrum environment experience
Security Protocols: Knowledge of data encryption, access control, and compliance standards in cloud data engineering
Experience Requirements:
At least 8 years of professional experience in data engineering or related roles with a focus on cloud ecosystems and big data pipelines
Demonstrated experience designing and managing end-to-end data workflows in AWS environments
Proven success in collaborating with cross-functional teams and translating business requirements into technical solutions
Prior experience mentoring junior engineers and leading data projects is highly desirable
Day-to-Day Activities:
Develop, deploy, and monitor scalable data pipelines using AWS, Airflow, and DBT
Collaborate regularly with data scientists, analysts, and business stakeholders to refine data requirements and deliver impactful solutions
Troubleshoot production data pipeline issues to resolve data quality or performance bottlenecks
Conduct code reviews, optimize existing workflows, and implement automation to improve efficiency
Document data architecture, pipelines, and governance practices for knowledge sharing and compliance
Keep abreast of emerging data tools and industry best practices, proposing enhancements to existing systems
Qualifications:
Bachelor's degree in Computer Science, Data Science, Engineering, or related field; Master's degree preferred
Professional certifications such as AWS Certified Data Analytics - Specialty or related credentials are advantageous
Commitment to continuous professional development and staying current with industry trends
Professional Competencies:
Strong analytical, problem-solving, and critical thinking skills
Excellent communication abilities to effectively liaise with technical and business teams
Proven leadership in mentoring team members and managing project deliverables
Ability to work independently, prioritize tasks, and adapt to changing business needs
Innovative mindset focused on scalable, efficient, and sustainable data solutions
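To make the AWS-plus-Airflow-plus-DBT stack described above concrete, here is a minimal, hypothetical DAG sketch. It assumes a recent Airflow 2.x deployment with the Amazon provider package installed and a dbt CLI available on the worker; the Glue job name, dbt project path, region, and schedule are illustrative placeholders rather than details from this posting.

```python
# Hedged sketch: one task runs a pre-existing AWS Glue ingestion job, a second
# runs dbt transformations. All names and paths are hypothetical assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="daily_ingest_and_transform",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Step 1: trigger a Glue ETL job that lands raw data (hypothetical job name).
    ingest = GlueJobOperator(
        task_id="ingest_raw_data",
        job_name="example-raw-ingest-job",
        region_name="us-east-1",
    )

    # Step 2: run dbt models against the warehouse (hypothetical project directory).
    transform = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --project-dir /opt/airflow/dbt/example_project",
    )

    ingest >> transform
```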

Posted 2 months ago

Apply

5.0 - 10.0 years

18 - 32 Lacs

Hyderabad

Hybrid

Greetings from AstroSoft Technologies!
We are back with an exciting job opportunity for AWS Data Engineer professionals. Join our growing team and explore with us at our Hyderabad office (Hybrid - Gachibowli).
No. of Openings: 10 positions
Role: AWS Data Engineer
Project Domain: USA Client - BFSI, Fintech
Experience: 5+ years
Work Location: Hyderabad (Hybrid - Gachibowli)
Job Type: Full-Time
Company: AstroSoft Technologies (https://www.astrosofttech.com/)
Astrosoft is an award-winning company that specializes in the areas of Data, Analytics, Cloud, AI/ML, Innovation, and Digital. We have a customer-first mindset and take extreme ownership in delivering solutions and projects for our customers, and we have consistently been recognized by our clients as the premium partner to work with. We bring to bear top-tier talent, a robust and structured project execution framework, and our significant experience over the years, and we have an impeccable record in delivering solutions and projects for our clients. Founded in 2004; headquartered in Florida and Texas, USA; corporate office in Hyderabad, India.
Benefits from AstroSoft Technologies: H1B sponsorship (depends on project and performance), lunch and dinner (every day), group health insurance coverage, industry-standard leave policy, skill enhancement certification, hybrid mode.
JOB DETAILS:
Role: Senior AWS Data Engineer
Location: India, Hyderabad, Gachibowli (Vasavi SkyCity)
Job Type: Full Time
Shift Timings: 12.30 PM to 9.30 PM IST
Experience Range: 5+ yrs
Work Mode: Hybrid (Fri & Mon WFH)
Interview Process: 3 tech rounds
Job Summary:
Strong experience and understanding of streaming architecture and development practices using Kafka, Kinesis, Spark, Flink, etc.
Strong AWS development experience using S3, SNS, SQS, MWAA (Airflow), Glue, DMS and EMR.
Strong knowledge of one or more programming languages: Python/Java/Scala (ideally Python).
Experience using Terraform to build IaC components in AWS.
Strong experience with ETL tools in AWS; ODI experience is a plus.
Strong experience with database platforms: Oracle, AWS Redshift.
Strong experience in SQL tuning, tuning ETL solutions, and physical optimization of databases.
Very familiar with SRE concepts, including evaluating and implementing monitoring and observability tools like Splunk, Datadog, CloudWatch and other job, log or dashboard concepts for customer support and application health checks.
Ability to collaborate with our business partners to understand and implement their requirements.
Excellent interpersonal skills and the ability to build consensus across teams.
Strong critical thinking and ability to think out of the box.
Self-motivated and able to perform under pressure.
AWS certified (preferred).
Thanks & Regards
Karthik Kumar
HR-TAG Lead - India
Astrosoft Technologies, Unit 1810, Level 18, Vasavi Sky City, Gachibowli, Hyderabad, Telangana 500081.
Contact: +91-8712229084
Email: karthik.jangam@astrosofttech.com

Posted 2 months ago

Apply

5.0 - 10.0 years

20 - 27 Lacs

Hyderabad

Work from Office

Position: Experienced Data Engineer
We are seeking a skilled and experienced Data Engineer to join our fast-paced and innovative Data Science team. This role involves building and maintaining data pipelines across multiple cloud-based data platforms.
Requirements: A minimum of 5 years of total experience, with at least 3-4 years specifically in Data Engineering on a cloud platform.
Key Skills & Experience:
Proficiency with AWS services such as Glue, Redshift, S3, Lambda, RDS, Amazon Aurora, DynamoDB, EMR, Athena, Data Pipeline, and Batch jobs.
Strong expertise in: SQL and Python; DBT and Snowflake; OpenSearch, Apache NiFi, and Apache Kafka.
In-depth knowledge of ETL data patterns and Spark-based ETL pipelines.
Advanced skills in infrastructure provisioning using Terraform and other Infrastructure-as-Code (IaC) tools.
Hands-on experience with cloud-native delivery models, including PaaS, IaaS, and SaaS.
Proficiency in Kubernetes, container orchestration, and CI/CD pipelines.
Familiarity with GitHub Actions, GitLab, and other leading DevOps and CI/CD solutions.
Experience with orchestration tools such as Apache Airflow and serverless/FaaS services.
Exposure to NoSQL databases is a plus.

Posted 2 months ago

Apply

6.0 - 10.0 years

0 - 2 Lacs

Pune, Chennai, Bengaluru

Hybrid

Primary Skills: Python, PySpark, AWS, Glue
Location: Pan India
Roles and Responsibilities:
Proficient in Python scripting and PySpark for data processing tasks
Strong SQL capabilities, with hands-on experience managing big data using ETL tools like Informatica
Experience with the AWS cloud platform and its data services, including S3, Redshift, Lambda, EMR, Airflow, Postgres, SNS and EventBridge
Skilled in BASH shell scripting
Understanding of data lakehouse architecture, particularly with the Iceberg format, is a plus
Preferred: experience with Kafka and MuleSoft API
Understanding of healthcare data systems is a plus
Experience in Agile methodologies
Strong analytical and problem-solving skills
Effective communication and teamwork abilities
Responsibilities:
Develop and maintain data pipelines and ETL processes to manage large-scale datasets
Collaborate to design and test data architectures that align with business needs
Implement and optimize data models for efficient querying and reporting
Assist in the development and maintenance of data quality checks and monitoring processes
Support the creation of data solutions that enable analytical capabilities

Posted 2 months ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Navi Mumbai

Work from Office

Data Strategy and Planning: Develop and implement data architecture strategies that align with organizational goals and objectives. Collaborate with business stakeholders to understand data requirements and translate them into actionable plans.
Data Modeling: Design and implement logical and physical data models to support business needs. Ensure data models are scalable, efficient, and comply with industry best practices.
Database Design and Management: Oversee the design and management of databases, selecting appropriate database technologies based on requirements. Optimize database performance and ensure data integrity and security.
Data Integration: Define and implement data integration strategies to facilitate a seamless flow of information across systems.
Responsibilities:
Experience in data architecture and engineering
Proven expertise with the Snowflake data platform
Strong understanding of ETL/ELT processes and data integration
Experience with data modeling and data warehousing concepts
Familiarity with performance tuning and optimization techniques
Excellent problem-solving skills and attention to detail
Strong communication and collaboration skills
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
Cloud & Data Architecture: AWS, Snowflake
ETL & Data Engineering: AWS Glue, Apache Spark, Step Functions
Big Data & Analytics: Athena, Presto, Hadoop
Database & Storage: SQL, SnowSQL
Security & Compliance: IAM, KMS, Data Masking
Preferred technical and professional experience:
Cloud Data Warehousing: Snowflake (Data Modeling, Query Optimization)
Data Transformation: DBT (Data Build Tool) for ELT pipeline management
Metadata & Data Governance: Alation (Data Catalog, Lineage, Governance)

Posted 2 months ago

Apply

6.0 - 9.0 years

27 - 42 Lacs

Chennai

Work from Office

Data Analyst (Visualisation Engineer) - Skills and Qualifications
SQL - Mandatory
Proficiency in Tableau, Power BI for data visualization - Mandatory
Strong programming skills in Python, including experience with data analysis libraries - Mandatory
Knowledge of AWS services like S3, Redshift, Glue, and Lambda - Nice to have
Familiarity with orchestration tools like Apache Airflow and AWS Step Functions - Nice to have
Understanding of statistical concepts and methodologies
Excellent communication and presentation skills
Job Summary
We are seeking a highly skilled Sr. Developer with 6 to 9 years of experience to join our dynamic team. The ideal candidate will have extensive experience in Tableau API, Database and SQL, Tableau Cloud, and Tableau; or in Power BI Report Builder, Power BI Service, DAX (Power BI), MS Power BI, Database and SQL. This role is hybrid with day shifts and no travel required. The Sr. Developer will play a crucial role in developing and maintaining our data visualization solutions, ensuring data accuracy and providing actionable insights to drive business decisions.
Responsibilities
Develop and maintain Tableau or Power BI dashboards and reports to provide actionable insights.
Utilize the Tableau API or Power BI to integrate data from various sources and ensure seamless data flow.
Design and optimize database schemas to support efficient data storage and retrieval.
Write complex SQL queries to extract, manipulate, and analyze data.
Collaborate with business stakeholders to understand their data needs and translate them into technical requirements.
Ensure data accuracy and integrity by implementing data validation and quality checks.
Provide technical support and troubleshooting for Tableau- or Power BI-related issues.
Stay updated with the latest Tableau or Power BI features and best practices to enhance data visualization capabilities.
Conduct performance tuning and optimization of Tableau or Power BI dashboards and reports.
Train and mentor junior developers on Tableau or Power BI and SQL best practices.
Work closely with the data engineering team to ensure data pipelines are robust and scalable.
Participate in code reviews to maintain high-quality code standards.
Document technical specifications and user guides for developed solutions.
Qualifications (Tableau)
Must have extensive experience with the Tableau API and Tableau Cloud.
Strong proficiency in databases and SQL for data extraction and manipulation.
Experience with the Tableau work model in a hybrid environment.
Excellent problem-solving skills and attention to detail.
Ability to collaborate effectively with cross-functional teams.
Strong communication skills to convey technical concepts to non-technical stakeholders.
Nice to have: experience in performance tuning and optimization of Tableau solutions.
Qualifications (Power BI)
Possess strong expertise in Power BI Report Builder, Power BI Service, DAX, Power BI and MS Power BI.
Demonstrate proficiency in SQL and database management.
Exhibit excellent problem-solving and analytical skills.
Show ability to work collaboratively in a hybrid work model.
Display strong communication skills to interact effectively with stakeholders.
Have a keen eye for detail and a commitment to data accuracy.
Maintain a proactive approach to learning and adopting new technologies.

Posted 2 months ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru, Mumbai (All Areas)

Work from Office

Key Responsibilities
Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
Ensure data quality and consistency by implementing validation and governance practices.
Work on data security best practices in compliance with organizational policies and regulations.
Automate repetitive data engineering tasks using Python scripts and frameworks.
Leverage CI/CD pipelines for deployment of data workflows on AWS.
Required Skills and Qualifications
Professional Experience: 5+ years of experience in data engineering or a related field.
Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
AWS Expertise: Hands-on experience with core AWS services for data engineering, such as: AWS Glue for ETL/ELT; S3 for storage; Redshift or Athena for data warehousing and querying; Lambda for serverless compute; Kinesis or SNS/SQS for data streaming; IAM roles for security.
Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
Version Control: Proficient with Git-based workflows.
Problem Solving: Excellent analytical and debugging skills.
Optional Skills
Knowledge of data modeling and data warehouse design principles.
Experience with data visualization tools (e.g., Tableau, Power BI).
Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
Exposure to other programming languages like Scala or Java.
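As a rough illustration of the Python-and-AWS workflow described above, the sketch below uses boto3 to start a Glue job, wait for it to finish, and then launch an Athena query over the curated data. The job name, database, table, query, and S3 output location are made-up placeholders, not details from the listing.

```python
# Hedged sketch: orchestrating part of a pipeline from Python with boto3.
import time

import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Start a pre-existing Glue ETL job (hypothetical name) and poll until it completes.
run = glue.start_job_run(JobName="example-etl-job")
run_id = run["JobRunId"]
while True:
    state = glue.get_job_run(JobName="example-etl-job", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)

if state != "SUCCEEDED":
    raise RuntimeError(f"Glue job ended in state {state}")

# Query the curated table with Athena (hypothetical database, table, and output path).
query = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) FROM curated.events GROUP BY event_date",
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
print("Athena query started:", query["QueryExecutionId"])
```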

Posted 2 months ago

Apply

7.0 - 12.0 years

10 - 20 Lacs

Hyderabad

Remote

Job Title: Senior Data Engineer Location: Remote Job Type: Fulltime Experience Level: 7+ years About the Role: We are seeking a highly skilled Senior Data Engineer to join our team in building a modern data platform on AWS. You will play a key role in transitioning from legacy systems to a scalable, cloud-native architecture using technologies like Apache Iceberg, AWS Glue, Redshift, and Atlan for governance. This role requires hands-on experience across both legacy (e.g., Siebel, Talend, Informatica) and modern data stacks. Responsibilities: Design, develop, and optimize data pipelines and ETL/ELT workflows on AWS. Migrate legacy data solutions (Siebel, Talend, Informatica) to modern AWS-native services. Implement and manage a data lake architecture using Apache Iceberg and AWS Glue. Work with Redshift for data warehousing solutions including performance tuning and modelling. Apply data quality and observability practices using Soda or similar tools. Ensure data governance and metadata management using Atlan (or other tools like Collibra, Alation). Collaborate with data architects, analysts, and business stakeholders to deliver robust data solutions. Build scalable, secure, and high-performing data platforms supporting both batch and real-time use cases. Participate in defining and enforcing data engineering best practices. Required Qualifications: 7+ years of experience in data engineering and data pipeline development. Strong expertise with AWS services, especially Redshift, Glue, S3, and Athena. Proven experience with Apache Iceberg or similar open table formats (like Delta Lake or Hudi). Experience with legacy tools like Siebel, Talend, and Informatica. Knowledge of data governance tools like Atlan, Collibra, or Alation. Experience implementing data quality checks using Soda or equivalent. Strong SQL and Python skills; familiarity with Spark is a plus. Solid understanding of data modeling, data warehousing, and big data architectures. Strong problem-solving skills and the ability to work in an Agile environment.

Posted 2 months ago

Apply

9.0 - 14.0 years

20 - 30 Lacs

Kochi, Bengaluru

Work from Office

Senior Data Engineer - AWS (Glue, Data Warehousing, Optimization & Security)
Experienced Senior Data Engineer (6+ years) with deep expertise in AWS cloud data services, particularly AWS Glue, to design, build, and optimize scalable data solutions. The ideal candidate will drive end-to-end data engineering initiatives, from ingestion to consumption, with a strong focus on data warehousing, performance optimization, self-service enablement, and data security. The candidate needs to have experience in consulting and troubleshooting exercises to design best-fit solutions.
Key Responsibilities
Consult with business and technology stakeholders to understand data requirements, troubleshoot, and advise on best-fit AWS data solutions
Design and implement scalable ETL pipelines using AWS Glue, handling structured and semi-structured data
Architect and manage modern cloud data warehouses (e.g., Amazon Redshift, Snowflake, or equivalent)
Optimize data pipelines and queries for performance, cost-efficiency, and scalability
Develop solutions that enable self-service analytics for business and data science teams
Implement data security, governance, and access controls
Collaborate with data scientists, analysts, and business stakeholders to understand data needs
Monitor, troubleshoot, and improve existing data solutions, ensuring high availability and reliability
Required Skills & Experience
8+ years of experience in data engineering on the AWS platform
Strong hands-on experience with AWS Glue, Lambda, S3, Athena, Redshift, IAM
Proven expertise in data modelling, data warehousing concepts, and SQL optimization
Experience designing self-service data platforms for business users
Solid understanding of data security, encryption, and access management
Proficiency in Python
Familiarity with DevOps practices & CI/CD
Strong problem-solving skills
Exposure to BI tools (e.g., QuickSight, Power BI, Tableau) for self-service enablement
Preferred Qualifications
AWS Certified Data Analytics - Specialty or Solutions Architect - Associate

Posted 2 months ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Warm greetings from SP Staffing Services Private Limited!!
We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.
Relevant Experience: 6 - 15 Yrs
Location: Pan India
Job Description:
Candidate must be experienced working in projects involving the areas below; other ideal qualifications include experience in:
Primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc.
Should be very proficient in doing large-scale data operations using Databricks and overall very comfortable using Python
Familiarity with AWS compute, storage and IAM concepts
Experience in working with S3 Data Lake as the storage tier
Any ETL background (Talend, AWS Glue, etc.) is a plus but not required
Cloud warehouse experience (Snowflake, etc.) is a huge plus
Carefully evaluates alternative risks and solutions before taking action; optimizes the use of all available resources
Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices and procedures of the corporation, department and business unit
Skills:
Hands-on experience with Databricks, Spark SQL, and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
Experience with shell scripting
Exceptionally strong analytical and problem-solving skills
Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses
Strong experience with relational databases and data access methods, especially SQL
Excellent collaboration and cross-functional leadership skills
Excellent communication skills, both written and verbal
Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment
Ability to leverage data assets to respond to complex questions that require timely answers
Working knowledge of migrating relational and dimensional databases to the AWS Cloud platform
Interested candidates can share their resume to sankarspstaffings@gmail.com with the below details inline:
Overall Exp:
Relevant Exp:
Current CTC:
Expected CTC:
Notice Period:

Posted 2 months ago

Apply

3.0 - 5.0 years

12 - 14 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role & Responsibilities
Key Responsibilities:
Design, develop, and maintain data pipelines and ETL workflows on the AWS platform
Work with AWS services like S3, Glue, Lambda, Redshift, EMR, and Athena for data ingestion, transformation, and analytics
Collaborate with Data Scientists, Analysts, and Business teams to understand data requirements
Optimize data workflows for performance, scalability, and reliability
Troubleshoot data issues, monitor jobs, and ensure data quality and integrity
Write efficient SQL queries and automate data processing tasks
Implement data security and compliance best practices
Maintain technical documentation and data pipeline monitoring dashboards
Required Skills:
3 to 5 years of hands-on experience as a Data Engineer on AWS Cloud
Strong expertise with AWS data services: S3, Glue, Redshift, Athena, EMR, Lambda
Proficient in SQL, Python, or Scala for data processing and scripting
Experience with ETL tools and frameworks on AWS
Understanding of data warehousing concepts and architecture
Familiarity with CI/CD for data pipelines is a plus
Strong problem-solving and communication skills
Ability to work in an Agile environment and handle multiple priorities

Posted 2 months ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Duration: 8 Months
Job Type: Contract
Work Type: Onsite
Top responsibilities:
Manage AWS resources including EC2, RDS, Redshift, Kinesis, EMR, Lambda, Glue, Apache Airflow, etc.
Build and deliver high-quality data architecture and pipelines to support business analysts, data scientists, and customer reporting needs.
Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
Leadership Principles: Ownership, Customer Obsession, Dive Deep and Deliver Results
Mandatory requirements:
3+ years of data engineering experience
Experience with data modeling, warehousing and building ETL pipelines
Experience with SQL & SQL tuning
Basic to mid-level proficiency in scripting with Python
Education or Certification Requirements: Any Graduation

Posted 2 months ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Interested candidates can share their updated CV at: heena.ruchwani@gspann.com
Join GSPANN Technologies as a Senior AWS Data Engineer and play a critical role in designing, building, and optimizing scalable data pipelines in the cloud. We're looking for an experienced engineer who can turn complex data into actionable insights using the AWS ecosystem.
Key Responsibilities:
Design, develop, and maintain scalable data pipelines on AWS.
Work with large datasets to perform ETL/ELT transformations using tools like AWS Glue, EMR, and Lambda.
Optimize and monitor data workflows, ensuring reliability and performance.
Collaborate with data analysts, architects, and other engineers to build data solutions that support business needs.
Implement and manage data lakes, data warehouses, and streaming architectures.
Ensure data quality, governance, and security standards are met across platforms.
Participate in code reviews, documentation, and mentoring of junior data engineers.
Required Skills & Qualifications:
5+ years of experience in data engineering, with strong hands-on work in the AWS cloud ecosystem.
Proficiency in Python, PySpark, and SQL.
Strong experience with AWS services: AWS Glue, Lambda, EMR, S3, Athena, Redshift, Kinesis, etc.
Expertise in data pipeline development and workflow orchestration (e.g., Airflow, Step Functions).
Solid understanding of data warehousing and data lake architecture.
Experience with CI/CD, version control (GitHub), and DevOps practices for data environments.
Familiarity with Snowflake, Databricks, or Looker is a plus.
Excellent communication and problem-solving skills.
Interested candidates can share their updated CV at: heena.ruchwani@gspann.com

Posted 2 months ago

Apply

5.0 - 7.0 years

2 - 6 Lacs

Bengaluru

Work from Office

Job Title: DevOps Engineer
Experience: 5-7 Years
Location: Bangalore
Description:
Looking for a senior resource with a minimum of 5-7 years of hands-on experience. This resource needs to be hands-on and have worked with GitHub Actions, Terragrunt, and Terraform, have a great background in AWS services, and be very good at designing/implementing HA-DR topologies in AWS. Experience in services like S3, Glue, RDS, etc. Needs to be proficient in MongoDB Atlas; a MongoDB certification is a must-have.
AWS CI/CD or ADO or Azure Pipelines or CodePipeline
GitHub
Terraform
Shell Scripting
Qualification: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering or equivalent.
Skills:
Primary competency: DevOps - AWS / Azure Container Services (51%)
Secondary competency: DevOps - Terraform (29%)
Tertiary competency: Data Eng

Posted 2 months ago

Apply

5.0 - 10.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Job Title: AWS Data Engineer
Experience: 5-10 Years
Location: Bangalore
Technical Skills:
5+ years of experience as an AWS Data Engineer: AWS S3, Glue Catalog, Glue Crawler, Glue ETL, Athena
Write Glue ETLs to convert data in AWS RDS for SQL Server and Oracle DB to Parquet format in S3
Execute Glue crawlers to catalog S3 files; create a catalog of S3 files for easier querying
Create SQL queries in Athena
Define data lifecycle management for S3 files
Strong experience in developing, debugging, and optimizing Glue ETL jobs using PySpark or Glue Studio.
Ability to connect Glue ETLs with AWS RDS (SQL Server and Oracle) for data extraction and write transformed data into Parquet format in S3.
Proficiency in setting up and managing Glue Crawlers to catalog data in S3.
Deep understanding of S3 architecture and best practices for storing large datasets.
Experience in partitioning and organizing data for efficient querying in S3.
Knowledge of the Parquet file format's advantages for optimized storage and querying.
Expertise in creating and managing the AWS Glue Data Catalog to enable structured and schema-aware querying of data in S3.
Experience with Amazon Athena for writing complex SQL queries and optimizing query performance.
Familiarity with creating views or transformations in Athena for business use cases.
Knowledge of securing data in S3 using IAM policies, S3 bucket policies, and KMS encryption.
Understanding of regulatory requirements (e.g., GDPR) and implementing secure data handling practices.
Non-Technical Skills:
Candidate needs to be a good team player
Effective interpersonal, team-building and communication skills.
Ability to communicate complex technology to a non-tech audience in a simple and precise manner.
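For readers unfamiliar with the Glue ETL pattern this posting centres on, here is a minimal sketch of a Glue PySpark job that reads a crawler-catalogued source table and writes partitioned Parquet to S3. The catalog database, table, bucket path, and partition column are hypothetical placeholders, not values from the posting.

```python
# Hedged sketch: a minimal AWS Glue (PySpark) job -- read a table the Glue crawler
# has already catalogued from an RDS source and write it to S3 as partitioned Parquet.
import sys

from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source: a JDBC (RDS) table registered in the Glue Data Catalog by a crawler.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="example_rds_db",      # hypothetical catalog database
    table_name="example_orders",    # hypothetical catalog table
)

# Target: Parquet files in S3, partitioned for efficient Athena queries.
glue_context.write_dynamic_frame.from_options(
    frame=orders,
    connection_type="s3",
    connection_options={
        "path": "s3://example-bucket/curated/orders/",   # hypothetical bucket
        "partitionKeys": ["order_date"],                  # hypothetical partition column
    },
    format="parquet",
)

job.commit()
```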

Posted 2 months ago

Apply

4.0 - 6.0 years

2 - 6 Lacs

Hyderabad, Pune, Gurugram

Work from Office

Job Title: Sr AWS Data Engineer
Experience: 4-6 Years
Location: Pune, Hyderabad, Gurgaon, Bangalore [Hybrid]
Skills: PySpark, Python, SQL, AWS Services - S3, Athena, Glue, EMR/Spark, Redshift, Lambda, Step Functions, IAM, CloudWatch.

Posted 2 months ago

Apply

5.0 - 10.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Job Title: EMR_Spark SME
Experience: 5-10 Years
Location: Bangalore
Technical Skills:
5+ years of experience in big data technologies with hands-on expertise in AWS EMR and Apache Spark.
Proficiency in Spark Core, Spark SQL, and Spark Streaming for large-scale data processing.
Strong experience with data formats (Parquet, Avro, JSON) and data storage solutions (Amazon S3, HDFS).
Solid understanding of distributed systems architecture and cluster resource management (YARN).
Familiarity with AWS services (S3, IAM, Lambda, Glue, Redshift, Athena).
Experience in scripting and programming languages such as Python, Scala, and Java.
Knowledge of containerization and orchestration (Docker, Kubernetes) is a plus.
Responsibilities:
Architect and develop scalable data processing solutions using AWS EMR and Apache Spark.
Optimize and tune Spark jobs for performance and cost efficiency on EMR clusters.
Monitor, troubleshoot, and resolve issues related to EMR and Spark workloads.
Implement best practices for cluster management, data partitioning, and job execution.
Collaborate with data engineering and analytics teams to integrate Spark solutions with broader data ecosystems (S3, RDS, Redshift, Glue, etc.).
Automate deployments and cluster management using infrastructure-as-code tools like CloudFormation, Terraform, and CI/CD pipelines.
Ensure data security and governance in EMR and Spark environments in compliance with company policies.
Provide technical leadership and mentorship to junior engineers and data analysts.
Stay current with new AWS EMR features and Spark versions to recommend improvements and upgrades.
Requirements and Skills:
Performance tuning and optimization of Spark jobs.
Problem-solving skills with the ability to diagnose and resolve complex technical issues.
Strong experience with version control systems (Git) and CI/CD pipelines.
Excellent communication skills to explain technical concepts to both technical and non-technical audiences.
Qualification:
Education qualification: B.Tech, BE, BCA, MCA, M.Tech or equivalent technical degree from a reputed college.
Certifications:
AWS Certified Solutions Architect - Associate/Professional
AWS Certified Data Analytics - Specialty
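As a small illustration of the kind of Spark job an EMR SME would write and tune, the PySpark sketch below reads Parquet from S3, aggregates, and writes partitioned output. The bucket paths, column names, and tuning values are assumptions for illustration only.

```python
# Hedged sketch: read raw events from S3, aggregate per day and type, write back
# partitioned Parquet. Typical tuning knobs (shuffle partitions, AQE) are shown.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("emr-spark-example")
    .config("spark.sql.shuffle.partitions", "200")   # illustrative tuning value
    .config("spark.sql.adaptive.enabled", "true")    # adaptive query execution
    .getOrCreate()
)

# Read raw events from S3 (hypothetical path and schema).
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Aggregate events per day and type; partition output for downstream Athena queries.
daily = (
    events
    .withColumn("event_date", F.to_date("event_time"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"
)

spark.stop()
```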

Posted 2 months ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Mumbai

Work from Office

So, what's the job?
You'll lead the design, development, and optimization of scalable, maintainable, and high-performance ETL/ELT pipelines using Informatica IDMC CDI. You'll manage and optimize cloud-based storage environments, including AWS S3 buckets. You'll implement robust data integration solutions that ingest, cleanse, transform, and deliver structured and semi-structured data from diverse sources to downstream systems and data warehouses. You'll support data integration from source systems, ensuring data quality and completeness. You'll automate data loading and transformation processes using tools such as Python, SQL, and orchestration frameworks. You'll contribute to the strategic transition toward cloud-native data platforms (e.g., AWS S3, Snowflake) by designing hybrid or fully cloud-based data solutions. You'll collaborate with Data Architects to align data models and structures with enterprise standards. You'll maintain clear documentation of data pipelines, processes, and technical standards, and mentor team members in best practices and tool usage. You'll implement and enforce data security, access controls, and compliance measures in line with organizational policies.
And what are we looking for?
You'll have a Bachelor's degree in Computer Science, Engineering, or a related field with a minimum of 5 years of industry experience. You'll be an expert in designing, developing, and optimizing ETL/ELT pipelines using Informatica IDMC Cloud Data Integration (CDI). You'll bring strong experience with data ingestion, transformation, and delivery across diverse data sources and targets. You'll have a deep understanding of data integration patterns, orchestration strategies, and data pipeline lifecycle management. You'll be proficient in implementing incremental loads, CDC (Change Data Capture), and data synchronization. You'll bring strong experience with SQL Server, including performance tuning, stored procedures, and indexing strategies. You'll possess a solid understanding of data modeling, data warehousing concepts (star/snowflake schema), and dimensional modeling. You'll have experience integrating with cloud data warehouses such as Snowflake. You'll be familiar with cloud storage and compute platforms such as AWS S3, EC2, Lambda, Glue, and RDS. You'll design and implement cloud-native data architectures using modern tools and best practices. You'll have exposure to data migration and hybrid architecture design (on-prem to cloud). You'll be experienced with Informatica Intelligent Cloud Services (IICS), especially IDMC CDI. You'll have strong proficiency in SQL, T-SQL, and scripting languages like Python or Shell. You'll have experience with workflow orchestration tools like Apache Airflow, Informatica task flows, or Control-M. You'll be knowledgeable in API integration, REST/SOAP, and file-based data exchange (e.g., SFTP, CSV, Parquet). You'll implement data validation, error handling, and data quality frameworks. You'll have an understanding of data lineage, metadata management, and governance best practices. You'll set up monitoring, logging, and alerting for ETL processes.

Posted 2 months ago

Apply

3.0 - 6.0 years

15 - 25 Lacs

Chennai

Work from Office

Data Engineer with Kafka and Informatica PowerExchange
Skills and Qualifications
SQL - Mandatory
Expertise in AWS services (e.g., S3, Glue, Redshift, Lambda) - Mandatory
Proficiency in Kafka for real-time data streaming - Mandatory
Experience with Informatica PowerExchange CDC for data replication - Mandatory
Strong programming skills in Python, Java - Mandatory
Familiarity with orchestration tools like Apache Airflow and AWS Step Functions - Nice to have
Knowledge of ETL processes and data warehousing
Understanding of data modeling, data governance and security best practices
Job Summary
We are seeking a skilled Developer with 3 to 6 years of experience to join our team. The ideal candidate will have expertise in data engineering with Kafka, Informatica, and SQL. This role offers a hybrid work model and operates during the day shift. The candidate will contribute to our projects by leveraging their technical skills to drive innovation and efficiency.
Responsibilities
Implement and manage continuous integration and continuous deployment (CI/CD) pipelines.
Write clean, maintainable, and efficient code in Python.
Design and optimize SQL queries for data retrieval and manipulation.
Collaborate with cross-functional teams to define, design, and ship new features.
Troubleshoot and resolve issues in development, test, and production environments.
Ensure the performance, quality, and responsiveness of applications.
Conduct code reviews to maintain code quality and share knowledge with team members.
Automate repetitive tasks to improve efficiency and reduce manual effort.
Monitor application performance and implement improvements as needed.
Provide technical guidance and mentorship to junior developers.
Stay updated with the latest industry trends and technologies to ensure our solutions remain cutting-edge.
Contribute to the overall success of the team by meeting project deadlines and delivering high-quality work.
Qualifications
Must have strong experience in AWS DevOps, including setting up and managing CI/CD pipelines.
Should possess excellent programming skills in Python with a focus on writing clean and efficient code.
Must be proficient in SQL, with experience in designing and optimizing queries.
Should have a good understanding of cloud computing concepts and services.
Must have experience working in a hybrid work model and be adaptable to both remote and in-office environments.
Should have strong problem-solving skills and the ability to troubleshoot complex issues.
Must be a team player with excellent communication and collaboration skills.
Should have a proactive attitude and be willing to take initiative in improving processes and systems.
Must be detail-oriented and committed to delivering high-quality work.
Should have experience with version control systems like Git.
Must be able to work independently and manage time effectively.
Should have a passion for learning and staying updated with new technologies and best practices.
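For the real-time Kafka streaming requirement above, a minimal Python consumer sketch using the kafka-python library is shown below. The topic, broker address, and consumer group are placeholders, and CDC-specific handling is deliberately omitted.

```python
# Hedged sketch: consume JSON change events from a Kafka topic and hand them off
# for downstream processing. All connection details are hypothetical placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "example-cdc-events",                      # hypothetical topic
    bootstrap_servers=["localhost:9092"],      # placeholder broker address
    group_id="example-consumer-group",         # placeholder consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    record = message.value
    # In a real pipeline this is where the change record would be validated,
    # transformed, and loaded into S3/Redshift or another target.
    print(f"partition={message.partition} offset={message.offset} payload={record}")
```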

Posted 2 months ago

Apply

3.0 - 6.0 years

15 - 25 Lacs

Chennai

Work from Office

Data Engineer
Skills and Qualifications
SQL - Mandatory
Strong knowledge of AWS services (e.g., S3, Glue, Redshift, Lambda) - Mandatory
Experience working with DBT - Nice to have
Proficiency in PySpark or Python for big data processing - Mandatory
Experience with orchestration tools like Apache Airflow and AWS CodePipeline - Mandatory
Job Summary
We are seeking a skilled Developer with 3 to 6 years of experience to join our team. The ideal candidate will have expertise in AWS DevOps, Python, and SQL. This role involves working in a hybrid model with day shifts and no travel requirements. The candidate will contribute to the company's purpose by developing and maintaining high-quality software solutions.
Responsibilities
Develop and maintain software applications using AWS DevOps, Python, and SQL.
Collaborate with cross-functional teams to design and implement new features.
Ensure the scalability and reliability of applications through effective coding practices.
Monitor and optimize application performance to meet user needs.
Provide technical support and troubleshooting for software issues.
Implement security best practices to protect data and applications.
Participate in code reviews to maintain code quality and consistency.
Create and maintain documentation for software applications and processes.
Stay updated with the latest industry trends and technologies to enhance skills.
Work in a hybrid model, balancing remote and in-office work as needed.
Communicate effectively with team members and stakeholders to ensure project success.
Contribute to the continuous improvement of development processes and methodologies.
Ensure timely delivery of projects while maintaining high-quality standards.
Qualifications
Possess a strong understanding of AWS DevOps, including experience with deployment and management of applications on AWS.
Demonstrate proficiency in Python programming with the ability to write clean and efficient code.
Have experience with SQL for database management and querying.
Show excellent problem-solving skills and attention to detail.
Exhibit strong communication and collaboration skills.
Be adaptable to a hybrid work model and able to manage time effectively.

Posted 2 months ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Bengaluru

Work from Office

Grade: 7
Purpose of your role
This role sits within the ISS Data Platform Team. The Data Platform team is responsible for building and maintaining the platform that enables the ISS business to operate. This role is appropriate for a Lead Data Engineer capable of taking ownership of, and delivering, a subsection of the wider data platform.
Key Responsibilities
Design, develop and maintain scalable data pipelines and architectures to support data ingestion, integration and analytics.
Be accountable for technical delivery and take ownership of solutions.
Lead a team of senior and junior developers, providing mentorship and guidance.
Collaborate with enterprise architects, business analysts and stakeholders to understand data requirements, validate designs and communicate progress.
Drive technical innovation within the department to increase code reusability, code quality and developer productivity.
Challenge the status quo by bringing the very latest data engineering practices and techniques.
Essential Skills and Experience
Core Technical Skills
Expert in leveraging cloud-based data platform (Snowflake, Databricks) capabilities to create an enterprise lakehouse.
Advanced expertise with the AWS ecosystem and experience in using a variety of core AWS data services like Lambda, EMR, MSK, Glue, S3.
Experience designing event-based or streaming data architectures using Kafka.
Advanced expertise in Python and SQL. Open to expertise in Java/Scala, but enterprise experience of Python is required.
Expert in designing, building and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation.
Data Security & Performance Optimization: Experience implementing data access controls to meet regulatory requirements.
Experience using both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (Dynamo, OpenSearch, Redis) offerings.
Experience implementing CDC ingestion.
Experience using orchestration tools (Airflow, Control-M, etc.)
Bonus Technical Skills
Strong experience in containerisation and experience deploying applications to Kubernetes.
Strong experience in API development using Python-based frameworks like FastAPI.
Key Soft Skills
Problem-Solving: Leadership experience in problem-solving and technical decision-making.
Communication: Strong in strategic communication and stakeholder engagement.
Project Management: Experienced in overseeing project lifecycles, working with Project Managers to manage resources.

Posted 2 months ago

Apply

4.0 - 9.0 years

25 - 35 Lacs

Bengaluru

Hybrid

Dodge Position Title: Software Engineer
STG Labs Position Title:
Location: Bangalore, India
About Dodge
Dodge Construction Network exists to deliver the comprehensive data and connections the construction industry needs to build thriving communities. Our legacy is deeply rooted in empowering our customers with transformative insights, igniting their journey towards unparalleled business expansion and success. We serve decision-makers who seek reliable growth and who value relationships built on trust and quality. By combining our proprietary data with cutting-edge software, we deliver to our customers the essential intelligence needed to excel within their respective landscapes. We propel the construction industry forward by transforming data into tangible guidance, driving unparalleled advancement. Dodge is the catalyst for modern construction. https://www.construction.com/
About Symphony Technology Group (STG)
STG is a Silicon Valley (California) based private equity firm that has a long and successful track record of transforming high-potential software and software-enabled services companies, as well as insights-oriented companies, into definitive market leaders. The firm brings expertise, flexibility, and resources to build strategic value and unlock the potential of innovative companies. Partnering to build customer-centric, market-winning portfolio companies, STG creates sustainable foundations for growth that bring value to all existing and future stakeholders. The firm is dedicated to transforming and building outstanding technology companies in partnership with world-class management teams. With over $5.0 billion in assets under management, including a recently raised $2.0 billion fund, STG's expansive portfolio has consisted of more than 30 global companies. STG Labs is the incubation center for many of STG's portfolio companies, building their engineering, professional services, and support delivery teams in India. STG Labs offers an entrepreneurial start-up environment for software and AI engineers, data scientists and analysts, and project and product managers, and provides a unique opportunity to work directly for a software or technology company. Based in Bangalore, STG Labs supports hybrid working. https://stg.com
Roles and Responsibilities
Design, build, and maintain scalable data pipelines and ETL processes leveraging AWS services.
Collaborate closely with data architects, business analysts, and DevOps teams to translate business requirements into technical data solutions.
Apply SDLC best practices, including planning, coding standards, code reviews, testing, and deployment.
Automate workflows and optimize data pipelines for efficiency, performance, and reliability.
Implement monitoring and logging to ensure the health and performance of data systems.
Ensure data security and compliance through adherence to industry and internal standards.
Participate actively in agile development processes and contribute to sprint planning, stand-ups, retrospectives, and documentation efforts.
Qualifications
Hands-on working knowledge and experience is required in:
Data Structures
Memory Management
Basic Algorithms (search, sort, etc.)
Hands-on working knowledge and experience is preferred in:
AWS Data Services: Glue, EMR, Kinesis, Lambda, Athena, Redshift, S3
Scripting & Programming Languages: Python, Bash, SQL
Version Control & CI/CD Tools: Git, Jenkins, Bitbucket
Database Systems & Data Engineering: Data modeling, data warehousing principles
Infrastructure as Code (IaC): Terraform, CloudFormation
Containerization & Orchestration: Docker, Kubernetes
Certifications Preferred: AWS Certifications (Data Analytics Specialty, Solutions Architect Associate).

Posted 2 months ago

Apply