Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
4 - 9 years
12 - 16 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Responsibilities: Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS. Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and big data technologies. Develop streaming pipelines. Work with Hadoop / AWS ecosystem components (Apache Spark, Kafka, and other cloud technologies) to implement scalable solutions that meet ever-increasing data volumes. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years of experience on cloud data platforms on AWS; experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB; good to excellent SQL skills; exposure to streaming solutions and message brokers such as Kafka. Preferred technical and professional experience: certification in AWS and Databricks, or Cloudera Spark certified developer.
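To make the day-to-day work concrete, here is a minimal PySpark sketch of the kind of ingest-transform-curate pipeline described above. The bucket paths, table, and column names are hypothetical placeholders, not part of the actual role.

```python
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("orders_ingest")
         .enableHiveSupport()
         .getOrCreate())

# Ingest raw CSV files landed in S3 (or HDFS)
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Basic cleansing and typing
clean = (raw.dropDuplicates(["order_id"])
            .withColumn("order_ts", F.to_timestamp("order_ts"))
            .withColumn("order_date", F.to_date("order_ts"))
            .withColumn("amount", F.col("amount").cast("double"))
            .filter(F.col("amount") > 0))

# Write curated, partitioned Parquet for downstream Hive/Athena consumers
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))
```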
Posted 2 months ago
4 - 9 years
12 - 16 Lacs
Kochi
Work from Office
As a senior SAP Consultant, you will serve as a client-facing practitioner, working collaboratively with clients to deliver high-quality solutions and acting as a trusted business advisor with a deep understanding of the SAP Accelerate delivery methodology or equivalent and its associated work products. You will work on projects that assist clients in integrating strategy, process, technology, and information to enhance effectiveness, reduce costs, and improve profit and shareholder value. There are opportunities for you to acquire new skills, work across different disciplines, take on new challenges, and develop a comprehensive understanding of various industries. Your primary responsibilities include: Strategic SAP solution focus - working across technical design, development, and implementation of SAP solutions for simplicity, amplification, and maintainability that meet client needs. Comprehensive solution delivery - involvement in strategy development and solution implementation, leveraging your knowledge of SAP and working with the latest technologies. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Total 5-7+ years of experience in data management (DW, DL, data platform, lakehouse) and data engineering skills. Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on cloud data platforms on AWS. Exposure to streaming solutions and message brokers such as Kafka. Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Preferred technical and professional experience: certification in AWS and Databricks, or Cloudera Spark certified developer.
Posted 2 months ago
8 - 13 years
25 - 40 Lacs
Pune, Delhi NCR
Hybrid
Role: Lead Data Engineer. Experience: 8-12 years. Must-have: 8+ years of relevant experience in data engineering and delivery. 8+ years of relevant work experience in big data concepts. Worked on cloud implementations. Experience in Snowflake, SQL, AWS (Glue, EMR, S3, Aurora, RDS, AWS architecture). Good experience with AWS cloud and microservices: AWS Glue, S3, Python, and PySpark. Good aptitude, strong problem-solving abilities, analytical skills, and the ability to take ownership as appropriate. Should be able to do coding, debugging, performance tuning, and deployment of the apps to the production environment. Experience working in Agile methodology. Candidates with experience in requirement gathering, analysis, gap analysis, team leading, ownership, and client communication will be preferred. Ability to learn and help the team learn new technologies quickly. Excellent communication and coordination skills. Good to have: Experience with DevOps tools (Jenkins, Git, etc.) and practices, continuous integration, and delivery (CI/CD) pipelines. Spark, Python, SQL (exposure to Snowflake), big data concepts, AWS Glue. Worked on cloud implementations (migration, development, etc.). Role & responsibilities: Be accountable for the delivery of the project within the defined timelines with good quality. Work with the clients and offshore leads to understand requirements, come up with high-level designs, and complete development and unit-testing activities. Keep all the stakeholders updated about the task status/risks/issues, if there are any. Keep all the stakeholders updated about the project status/risks/issues, if there are any. Work closely with the management wherever and whenever required to ensure smooth execution and delivery of the project. Guide the team technically and give the team directions on how to plan, design, implement, and deliver the projects. Education: BE/B.Tech from a reputed institute.
Posted 2 months ago
5 - 7 years
8 - 10 Lacs
Noida
Work from Office
What you need BS in an Engineering or Science discipline, or equivalent experience 5+ years of software/data engineering experience using Java, Scala, and/or Python, with at least 3 years experience in a data and BI focused role Experience in data integration (ETL/ELT) development using multiple languages (e.g., Python, PySpark, SparkSQL) and data transformation (e.g., dbt) Experience building data pipelines supporting a variety of integration and information delivery methods as well as data modelling techniques and analytics Knowledge and experience with various relational databases and demonstrable proficiency in SQL and data analysis requiring complex queries, and optimization Experience with AWS-based data services technologies (e.g., Glue, RDS, Athena, etc.) and Snowflake CDW, as well as BI tools (e.g., PowerBI) Willingness to experiment and learn new approaches and technology applications Knowledge of software engineering and agile development best practices Excellent written and verbal communication skills
Posted 2 months ago
10 - 14 years
25 - 30 Lacs
Gurgaon
Hybrid
Role: Principal Consultant - Tableau Developer. Experience: 10+ years. Location: Gurugram. Your scope of work / key responsibilities: Design and implement scalable BI architecture and highly performant BI dashboards. Ensure data quality, accuracy, and consistency across BI platforms. Manage the Tableau Server/Cloud environment, including user accounts, permissions, and security settings. Oversee Tableau site configuration, maintenance, and performance optimization. Monitor Tableau server health, usage, and capacity planning. Oversee the development and maintenance of dashboards, reports, and data visualizations. Implement and manage Tableau governance policies and best practices. Proficiency in SQL and experience with major database platforms. Excellent problem-solving and analytical skills. Strong knowledge of data warehousing concepts and ETL processes. Experience working in the Insurance / Finance domain. Review and attest that solution implementations follow the approved architecture and guidelines. Ability to design and present solution architecture / integration patterns at various architecture forums for approval. Collaborate with regional and market architecture teams. Enforce security standards on solutions as per the organization's security directives. Develop and maintain Tableau training materials and documentation. Good to have experience in other data visualization tools like AWS QuickSight and Power BI. Key qualifications and experience: Bachelor's or Master's degree in Computer Science, IT, or a related technical field. Minimum 12 years of professional software development experience. Strong communication skills with the ability to work with business and technology stakeholders. Minimum 8 years of experience designing business intelligence dashboards. Strong hands-on experience with Tableau site administration, including but not limited to site configuration, maintenance, performance, security, HA, and disaster recovery. Strong experience with AWS data services like Glue, Athena, Lake Formation, S3, RDS, Redshift, etc. Interested candidates can share their resume at divya@beanhr.com
Posted 2 months ago
4 - 5 years
10 - 20 Lacs
Mumbai
Work from Office
Role Overview: As a Junior Full Stack Developer at Exponentia.ai, you will work closely with our experienced development team to build and maintain scalable, high-performance web applications. Your primary focus will be on implementing user interface components using Angular and Node, ensuring that our applications are both functional and user-friendly. Job Responsibilities: Backend: Familiar with the AWS platform; knowledge of AWS services like Glue, ECS, application backends, S3, RDS, and other foundational services like IAM and KMS. Frontend: Familiar with the AWS platform; knowledge of Flask and Django, API management. CI/CD: Familiar with the AWS platform; knowledge of Bitbucket and IaC setup - Terraform, CloudFormation. Develop and Maintain Web Applications: Build and enhance web applications using Angular, adhering to best practices and coding standards. Collaborate with Team: Work closely with designers, product managers, and other developers to understand requirements and deliver high-quality solutions. Implement UI Components: Create responsive and reusable UI components that offer a seamless user experience. Debug and Troubleshoot: Identify and resolve issues in existing codebases and applications to ensure optimal performance. Code Quality and Documentation: Write clean, maintainable code and contribute to comprehensive documentation to facilitate team collaboration and future development. Roles and Responsibilities / Technical Skills: Proficiency in Angular, TypeScript, and JavaScript; full stack with Node. Familiarity with HTML5, CSS3, and responsive design principles. Experience with RESTful APIs and integration. Understanding of version control systems, particularly Git. Experience with state management libraries such as NgRx or Akita. Knowledge of front-end build tools and task runners (e.g., Webpack, Gulp). Exposure to Agile development practices and methodologies. Strong problem-solving skills and attention to detail. Excellent communication and teamwork abilities. Eagerness to learn and adapt to new technologies and methodologies. Why join Exponentia: Expand your knowledge and work with cutting-edge technologies. Opportunity to work with some of the best minds and collaborate with them. Learn from industry experts. Get certified in the latest technologies and platforms. Get access to networks of OEM partners and business leaders who are setting new standards at the cutting edge of technology. Exponentia.ai is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Posted 2 months ago
5 - 10 years
15 - 30 Lacs
Bengaluru
Work from Office
Urgent Hiring: AWS Data Engineer, Senior Data Engineers & Lead Data Engineers Apply Now: Send your resume to heena.ruchwani@gspann.com Location: Bangalore (5+ Years Experience) Company: GSPANN Technologies, Inc. GSPANN Technologies is seeking talented professionals with 5+ years of experience to join our team in Bangalore. We are looking for immediate joiners who are passionate about data engineering and eager to take on exciting challenges. Key Skills & Experience: 5+ years of hands-on experience with AWS Data Services (Glue, Redshift, S3, Lambda, EMR, Athena, etc.) Strong expertise in Big Data Technologies (Spark, Hadoop, Kafka) Proficiency in SQL, Python, and Scala Hands-on experience with ETL pipelines, data modeling, and cloud-based data solutions Location: Bangalore Apply Now: Send your resume to heena.ruchwani@gspann.com Immediate Joiners Preferred! If you're ready to contribute to dynamic, data-driven projects and advance your career with GSPANN Technologies, apply today!
Posted 2 months ago
9 - 12 years
30 - 35 Lacs
Mumbai, Hyderabad
Work from Office
About the Role: Grade Level (for internal use): 11. The Role: S&P Dow Jones Indices, a global leader in providing investable and benchmark indices to the financial markets, is looking for a Senior Lead Development Engineer to join our technology team. The Location: Mumbai, Hyderabad, Gurgaon. The Team: You will be part of a global technology team comprising Dev, QA, and BA teams, and will be responsible for analysis, design, development, and testing. The Impact: You will be working on one of the core technology platforms responsible for the end-of-day calculation as well as dissemination of index values. What's in it for you: You will have the opportunity to work on enhancements to the existing index calculation system as well as implement new methodologies as required. Responsibilities: Design and development of Java applications for SPDJI web sites and their feeder systems. Participate in multiple software development processes including coding, testing, debugging, and documentation. Develop software applications based on clear business specifications. Work on new initiatives and support existing index applications. Perform application and system performance tuning and troubleshoot performance issues. Develop web-based applications and build rich front-end user interfaces. Build applications with object-oriented concepts and apply design patterns. Integrate in-house applications with various vendor software platforms. Set up development environments / sandboxes for application development. Check in application code changes to the source repository. Perform unit testing of application code and fix errors. Interface with databases to extract information and build reports. Effectively interact with customers, business users, and IT staff. What we're looking for: Basic Qualification: Bachelor's degree in Computer Science, Information Systems, or Engineering is required, or in lieu, a demonstrated equivalence in work experience. 9 to 12 years of IT experience in application development and support. Strong experience with Java, J2EE, JMS & EJBs, including the Spring Framework. Strong experience with advanced SQL and PL/SQL programming. Basic networking knowledge / Unix scripting. Must have AWS experience (EC2, EMR, Lambda, S3, Glue, etc.). Excellent communication and interpersonal skills are essential, with strong verbal and writing proficiencies. Preferred Qualification: Minimum 1-2 years of experience in at least three of the following: infrastructure / CI/CD / DevOps; big data / AWS cloud / microservices; Spark using Scala/Java and HDFS; good understanding of AWS cloud; Ansible / Fortify / Jenkins; exposure to addressing vulnerabilities.
Posted 2 months ago
3 - 8 years
15 - 25 Lacs
Pune, Delhi NCR, Bengaluru
Hybrid
Key Responsibilities: 1. Design and implement scalable, high-performance data pipelines using AWS services 2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda 3. Build and maintain data lakes using S3 and Delta Lake 4. Create and manage analytics solutions using Amazon Athena and Redshift 5. Design and implement database solutions using Aurora, RDS, and DynamoDB 6. Develop serverless workflows using AWS Step Functions 7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL 8. Ensure data quality, security, and compliance with industry standards 9. Collaborate with data scientists and analysts to support their data needs 10. Optimize data architecture for performance and cost-efficiency 11. Troubleshoot and resolve data pipeline and infrastructure issues Required Qualifications: 1. Bachelor's degree in Computer Science, Information Technology, or a related field 2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS 3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3 4. Experience with data lake technologies, particularly Delta Lake 5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL 6. Proficiency in Python and PySpark programming 7. Strong SQL skills and experience with PostgreSQL 8. Experience with AWS Step Functions for workflow orchestration Technical Skills: - AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions - Big Data: Hadoop, Spark, Delta Lake - Programming: Python, PySpark - Databases: SQL, PostgreSQL, NoSQL - Data Warehousing and Analytics - ETL/ELT processes - Data Lake architectures - Version control: Git - Agile methodologies
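As one illustration of the Athena-based analytics responsibility above, the following is a minimal boto3 sketch that runs a query against the data lake and polls for its result. Database, bucket, table, and column names are hypothetical, and error handling is kept to a minimum.

```python
import time
import boto3

athena = boto3.client("athena")

def run_athena_query(sql: str) -> list:
    """Run a query against a (hypothetical) curated database and return the result rows."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "example_curated_db"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    # Poll until the query reaches a terminal state
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")

    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]

rows = run_athena_query("SELECT order_date, SUM(amount) AS revenue FROM orders GROUP BY order_date")
```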
Posted 2 months ago
5 - 10 years
10 - 15 Lacs
Pune
Work from Office
Meeting with managers to determine the company's big data needs. Developing big data solutions on AWS using Apache Spark, Databricks, Delta Tables, EMR, Athena, Glue, and Hadoop. Familiarity with data warehousing will be a plus, along with NoSQL and RDBMS databases. Required candidate profile: Loading disparate data sets and conducting pre-processing services using Athena, Glue, Spark, etc. Building cloud platforms for the development of company applications. Maintaining production systems.
Posted 2 months ago
5 - 10 years
15 - 30 Lacs
Bengaluru
Work from Office
Urgent Hiring: AWS Data Engineer, Senior Data Engineers & Lead Data Engineers Apply Now: Send your resume to heena.ruchwani@gspann.com Location: Bangalore (5+ Years Experience) Company: GSPANN Technologies, Inc. GSPANN Technologies is seeking talented professionals with 4+ years of experience to join our team in Bangalore. We are looking for immediate joiners who are passionate about data engineering and eager to take on exciting challenges. Key Skills & Experience: 4+ years of hands-on experience with AWS Data Services (Glue, Redshift, S3, Lambda, EMR, Athena, etc.) Strong expertise in Big Data Technologies (Spark, Hadoop, Kafka) Proficiency in SQL, Python, and Scala Hands-on experience with ETL pipelines, data modeling, and cloud-based data solutions Location: Bangalore Apply Now: Send your resume to heena.ruchwani@gspann.com Immediate Joiners Preferred! If you're ready to contribute to dynamic, data-driven projects and advance your career with GSPANN Technologies, apply today!
Posted 2 months ago
3 - 8 years
5 - 10 Lacs
Bengaluru
Work from Office
Your Job: The Data Engineer will be part of an international team that designs, develops, and delivers new applications for Koch Industries. Koch Industries is a privately held global organization with over 120,000 employees around the world, with subsidiaries involved in manufacturing, trading, and investments. Koch Global Services (KGS) is being developed in India as a shared services operation, as well as a hub for innovation across functions. As KGS rapidly scales up its operations in India, its employees will get opportunities to carve out a career path for themselves within the organization. This role will have the opportunity to join on the ground floor and will play a critical part in helping build out Koch Global Services (KGS) over the next several years. Working closely with global colleagues will provide significant international exposure to employees. Our Team: We are seeking a Data Engineer expert to join the KGS Analytics capability. We love passionate, forward-thinking individuals who are driven to innovate. You will have the opportunity to engage with Business Analysts, Analytics Consultants, and internal customers to implement ideas, optimize existing dashboards, and create visualization products using powerful, contemporary tools. This opportunity engages diverse types of business applications and data sets at a rapid pace, and our ideal candidate gets excited when faced with a challenge. What You Will Do: If a candidate is entrepreneurial in the way they approach ideas, Koch is among the most fulfilling organizations they could join. We are growing an analytics capability and looking for entrepreneurially minded innovators who can help us further develop this service of exceptionally high value to our business. Due to the diversity of companies and work within Koch, we are frequently working in new and interesting global business spaces, with data and analytics applications that are unique relative to opportunities from other employers in the marketplace. Who You Are (Basic Qualifications): Work with business partners to understand key business drivers and use that knowledge to experiment and transform Business Intelligence & Advanced Analytics solutions to capture the value of potential business opportunities. Translate a business process/problem into a conceptual and logical data model and a proposed technical implementation plan. Assist in developing and implementing consistent processes for data modeling, mining, and production. Focus on implementing development processes and tools that allow for the collection of and access to metadata, completed in a way that allows for widespread code reuse (e.g., utilization of ETL frameworks, generic metadata-driven tools, shared data dimensions, etc.), enabling impact analysis as well as source-to-target tracking and reporting. Improve data pipeline reliability, scalability, and security. What You Will Need to Bring with You (experience & education required): 5+ years of industry professional experience or a bachelor's degree in MIS, CS, or an industry equivalent. At least 4 years of data engineering experience (preferably AWS) with strong knowledge of SQL and of developing, deploying, and modelling DWHs and data pipelines on the AWS cloud or similar cloud environments.
3+ years of experience with business and technical requirements analysis, elicitation, data modeling, verification, and methodology development, with the ability to communicate complex technical ideas to technical and non-technical team members. Demonstrated experience with Snowflake and AWS Lambda with Python development for provisioning and troubleshooting. Demonstrated experience using git-based source control management platforms (GitLab, GitHub, DevOps, etc.). What Will Put You Ahead: 3+ years of experience with the Amazon Web Services stack, including S3, Athena, Redshift, Glue, or Lambda. 3+ years of experience with cloud data warehousing solutions, including Snowflake, with development and implementation of dimensional modeling. 2+ years of experience with data visualization and statistical tools like Power BI, Python, etc. Experience with Git and CI/CD pipelines. Development experience with Docker and a Kubernetes environment (would be a plus).
Posted 2 months ago
10 - 13 years
27 - 32 Lacs
Bengaluru
Work from Office
Department: ISS. Reports To: Head of Data Platform - ISS. Grade: 7. We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our team and feel like you're part of something bigger. Department Description: The ISS Data Engineering Chapter is an engineering group comprised of three sub-chapters - Data Engineers, Data Platform and Data Visualisation - that supports the ISS Department. Fidelity is embarking on several strategic programmes of work that will create a data platform to support the next evolutionary stage of our Investment Process. These programmes span asset classes and include Portfolio and Risk Management, Fundamental and Quantitative Research, and Trading. Purpose of your role: This role sits within the ISS Data Platform Team. The Data Platform team is responsible for building and maintaining the platform that enables the ISS business to operate. This role is appropriate for a Lead Data Engineer capable of taking ownership of and delivering a subsection of the wider data platform. Key Responsibilities: Design, develop and maintain scalable data pipelines and architectures to support data ingestion, integration and analytics. Be accountable for technical delivery and take ownership of solutions. Lead a team of senior and junior developers, providing mentorship and guidance. Collaborate with enterprise architects, business analysts and stakeholders to understand data requirements, validate designs and communicate progress. Drive technical innovation within the department to increase code reusability, code quality and developer productivity. Challenge the status quo by bringing the very latest data engineering practices and techniques. Essential Skills and Experience - Core Technical Skills: Expert in leveraging cloud-based data platform (Snowflake, Databricks) capabilities to create an enterprise lakehouse. Advanced expertise with the AWS ecosystem and experience using a variety of core AWS data services such as Lambda, EMR, MSK, Glue, S3. Experience designing event-based or streaming data architectures using Kafka. Advanced expertise in Python and SQL; open to expertise in Java/Scala, but enterprise experience of Python is required. Expert in designing, building and using CI/CD pipelines to deploy infrastructure (Terraform) and pipelines with test automation. Data Security & Performance Optimization: Experience implementing data access controls to meet regulatory requirements. Experience using both RDBMS (Oracle, Postgres, MSSQL) and NoSQL (Dynamo, OpenSearch, Redis) offerings. Experience implementing CDC ingestion. Experience using orchestration tools (Airflow, Control-M, etc.). Bonus technical skills: Strong experience in containerisation and deploying applications to Kubernetes. Strong experience in API development using Python-based frameworks like FastAPI. Key Soft Skills: Problem-Solving: Leadership experience in problem-solving and technical decision-making. Communication: Strong in strategic communication and stakeholder engagement. Project Management: Experienced in overseeing project lifecycles, working with Project Managers to manage resources.
Posted 2 months ago
3 - 6 years
10 - 15 Lacs
Pune
Work from Office
Role & responsibilities Requirements- -3+ years of hands-on experience with AWS services including EMR, GLUE, Athena, Lambda, SQS, OpenSearch, CloudWatch, VPC, IAM, AWS Managed Airflow, security groups, S3, RDS, and DynamoDB. -Proficiency in Linux and experience with management tools like Apache Airflow and Terraform. Familiarity with CI/CD tools, particularly GitLab. Responsibilities- -Design, deploy, and maintain scalable and secure cloud and on-premises infrastructure. -Monitor and optimize performance and reliability of systems and applications. -Implement and manage continuous integration and continuous deployment (CI/CD) pipelines. -Collaborate with development teams to integrate new applications and services into existing infrastructure. -Conduct regular security assessments and audits to ensure compliance with industry standards. -Provide support and troubleshooting assistance for infrastructure-related issues. -Create and maintain detailed documentation for infrastructure configurations and processes.
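Since Apache Airflow (including AWS Managed Airflow) is listed among the management tools above, here is a minimal Airflow DAG sketch showing how a small scheduled maintenance workflow might be orchestrated. The DAG id, schedule, and commands are illustrative assumptions only, not part of the actual role.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_infra_healthcheck",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",           # run daily at 02:00
    catchup=False,
) as dag:
    # Two placeholder maintenance tasks run in sequence
    check_disk = BashOperator(task_id="check_disk", bash_command="df -h")
    rotate_logs = BashOperator(task_id="rotate_logs", bash_command="echo 'rotating logs'")

    check_disk >> rotate_logs
```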
Posted 2 months ago
5 - 9 years
9 - 19 Lacs
Bengaluru, Hyderabad
Hybrid
We are seeking a skilled Data Engineer with expertise in PySpark, Databricks, SQL, and experience with the AWS cloud platform. The ideal candidate will design, develop, and maintain scalable data pipelines and processing systems, ensuring data quality, integrity, and security. Responsibilities include implementing ETL processes, collaborating with stakeholders to meet data requirements, and utilizing AWS services such as S3, Lambda, and Glue. 5+ years of data engineering experience, proficiency in SQL, and strong problem-solving and communication skills are required.
Posted 2 months ago
5 - 10 years
20 - 27 Lacs
Pune, Hyderabad, Noida
Hybrid
Looking for AWS Data Engineer (immediate joiners who can join within 15 days). Location: Hyderabad, Chennai, Noida, Pune. Working mode: Hybrid. Mandatory skills: AWS Glue, Python, PySpark, SCD1, SCD2. Proficiency in Python, PySpark architecture, complex SQL, and RDBMS. Hands-on experience with ETL tools (e.g., Informatica) and SCD1, SCD2. 2-6 years of DWH, AWS services, and ETL design knowledge. Hands-on experience using AWS services like Glue (PySpark), Lambda, S3, and Athena; experience implementing different loading strategies like SCD1 and SCD2, table/partition refresh, insert update, and swap partitions.
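To illustrate the SCD2 loading strategy called out as a mandatory skill, below is a simplified PySpark sketch that expires changed dimension rows and inserts new versions. It assumes a hypothetical customer dimension with is_current/valid_from/valid_to columns, tracks a single attribute, and omits handling of brand-new keys for brevity.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

dim = spark.read.parquet("s3://example-dwh/dim_customer/")          # existing dimension (hypothetical)
inc = spark.read.parquet("s3://example-staging/customer_updates/")  # incoming snapshot (hypothetical)

current = dim.filter(F.col("is_current"))        # assumes a boolean current-row flag
history = dim.filter(~F.col("is_current"))

# Keys whose tracked attribute changed in the incoming batch
changed_keys = (inc.alias("s")
    .join(current.alias("c"), "customer_id")
    .filter(F.col("s.address") != F.col("c.address"))
    .select("customer_id"))

# SCD2 step 1: close out the old version of each changed row
closed = (current.join(changed_keys, "customer_id", "left_semi")
    .withColumn("is_current", F.lit(False))
    .withColumn("valid_to", F.current_date()))

unchanged = current.join(changed_keys, "customer_id", "left_anti")

# SCD2 step 2: insert the new version with an open validity window
new_rows = (inc.join(changed_keys, "customer_id", "left_semi")
    .withColumn("is_current", F.lit(True))
    .withColumn("valid_from", F.current_date())
    .withColumn("valid_to", F.lit(None).cast("date")))

result = (history.unionByName(unchanged)
                 .unionByName(closed)
                 .unionByName(new_rows, allowMissingColumns=True))

result.write.mode("overwrite").parquet("s3://example-dwh/dim_customer_new/")
```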
Posted 2 months ago
5 - 10 years
25 - 30 Lacs
Bengaluru
Work from Office
Your opportunity: Do you love the transformative impact data can have on a business? Are you motivated to push for results and overcome all obstacles? Then we have a role for you. New Relic is looking for a Senior Data Engineer to help grow our global engineering team. What you'll do: Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load and curate data from various internal and external systems. Provide leadership to cross-functional initiatives and projects. Influence architecture design and decisions. Build cross-functional relationships with Data Scientists, Product Managers and Software Engineers to understand data needs and deliver on those needs. Improve engineering processes and cross-team collaboration. Provide thought leadership to grow and evolve the DE function and implement SDLC best practices in building internal-facing data products by staying up to date with industry trends, emerging technologies, and best practices in data engineering. This role requires: 5+ years of experience in BI and data warehousing. Experience and knowledge of building data lakes in AWS (i.e. Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling. Demonstrated success leading cross-functional initiatives. Passionate about data quality, code quality, SLAs and continuous improvement. Deep understanding of data system architecture. Deep understanding of ETL/ELT patterns. Development experience in at least one object-oriented language (Python, R, Scala, etc.). Comfortable with SQL and related tooling. Bonus points if you have: Experience with dbt, Airflow and Snowflake. Experience with Apache Iceberg tables. Data observability experience.
Posted 2 months ago
9 - 14 years
35 - 45 Lacs
Hyderabad
Remote
Senior Data Engineer (SQL, Python & AWS) Experience: 9-15 years Salary: INR 35,00,000-45,00,000 / year Preferred Notice Period: Within 30 Days Shift: 5:30AM to 2:30PM IST Opportunity Type: Remote Placement Type: Permanent (*Note: This is a requirement for one of Uplers' Clients) Must-have skills required: Airflow, ETL pipelines, PostgreSQL Aurora database, PowerBI, AWS, Python, RestAPI, SQL Good-to-have skills: Athena, Data Lake Architecture, Glue, Lambda, JSON, Redshift, Tableau Leading Proptech Company (One of Uplers' Clients) is Looking for: Data Engineer (WFH) who is passionate about their work, eager to learn and grow, and who is committed to delivering exceptional results. If you are a team player, with a positive attitude and a desire to make a difference, then we want to hear from you. Role Overview Description We are seeking an experienced Data Engineer to join our team of passionate professionals working on cutting-edge technology. In this role, you will be responsible for the ELT of our Company using Python, Airflow, and SQL within an AWS environment. Additionally, you will create and maintain data visualizations and dashboards using PowerBI, connecting to our SQL Server and PostgreSQL Aurora database through a gateway. This role requires strong critical thinking, the ability to assess data and outcomes, and proactive problem-solving skills. Responsibilities: • Design, develop, and maintain ELT pipelines using Python, Airflow, and SQL in an AWS environment. • Create and manage data lake and data warehouse solutions on AWS. • Develop and maintain data-driven dashboards and reporting solutions in PowerBI. • Connect PowerBI to SQL Server and PostgreSQL Aurora databases using a gateway. • Extract and integrate data from third-party APIs to populate the data lake. • Perform data profiling and source system analysis to ensure data quality and integrity. • Collaborate with business stakeholders to capture and understand data requirements. • Implement industry best practices for data engineering and visualization. • Participate in architectural decisions and contribute to the continuous improvement of data solutions. • Follow agile practices and a Lean approach in project development. • Critically assess the outcomes of your work to ensure they align with expectations before marking tasks as complete. • Optimize SQL queries for performance and ensure efficient database operations. • Perform database tuning and optimisation as needed. • Proactively identify and present alternative solutions to achieve desired outcomes. • Take ownership of end-to-end data-related demands from data extraction (whether from internal databases or third-party apps) to understanding the data, engaging with relevant people when needed, and delivering meaningful solutions. Required Skills and Experience: • At least 9+ years of experience will be preferred. • Strong critical thinking skills to assess outcomes, evaluate results, and suggest better alternatives where appropriate. • Expert-level proficiency in SQL (TSQL, MS SQL) with a strong focus on optimizing queries for performance. • Extensive experience with Python (including data-specific libraries) and Airflow for ELT processes. • Proven ability to extract and manage data from third-party APIs. • Proven experience in designing and developing data warehousing solutions on the AWS cloud platform. • Strong expertise in PowerBI for data visualization and dashboard creation. • Familiarity with connecting PowerBI to SQL Server and PostgreSQL Aurora databases.
• Experience with REST APIs and JSON. • Agile development experience with a focus on continuous delivery and improvement. • Proactive mindset, able to suggest alternative approaches to achieve goals efficiently. • Excellent problem-solving skills and a proactive can-do attitude. • Strong communication skills and the ability to work collaboratively in a team environment. • Ability to independently assess data, outcomes, and potential gaps to ensure results align with business goals. • Ability to perform database tuning and optimization to ensure efficient data operations. Desired Skills: • Exposure to AWS Cloud Data Services such as RedShift, Athena, Lambda, Glue, etc. • Experience with other reporting tools like Tableau. • Knowledge of data lake architectures and best practices. How to apply for this opportunity: Easy 3-Step Process: 1. Click On Apply! And Register or log in on our portal 2. Upload updated Resume & Complete the Screening Form 3. Increase your chances to get shortlisted & meet the client for the Interview! About Our Client: We are a cloud-based residential sales platform designed to bridge the communication gap between clients, sales teams, and construction teams. Our goal is to ensure seamless collaboration, resulting in buildable and well-aligned residential projects. As builders with a strong tech foundation, we bring deep industry expertise to every solution we create. About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 2 months ago
4 - 8 years
12 - 18 Lacs
Bengaluru
Work from Office
Experience in Spark with Scala, Hive, and big data technologies. Experience in Scala and object-oriented concepts. Experience in HDFS, Spark, Hive, and Oozie. Data models, data mining, and partitioning techniques. Experience with SQL databases, CI/CD tools (Maven, Git, Jenkins), and SONAR.
Posted 2 months ago
4 - 6 years
6 - 8 Lacs
Hyderabad
Work from Office
Position Summary: Cigna, a leading health services company, is looking for data engineers/developers in our Data & Analytics organization. The Full Stack Engineer is responsible for the delivery of a business need end-to-end, starting from understanding the requirements to deploying the software into production. This role requires you to be fluent in some of the critical technologies, with proficiency in others, and to have a hunger to learn on the job and add value to the business. Critical attributes of being a Full Stack Engineer, among others, are ownership and accountability. In addition to delivery, the Full Stack Engineer should have an automation-first and continuous-improvement mindset. He/she should drive the adoption of CI/CD tools and support the improvement of the tool sets/processes. Behaviors of a Full Stack Engineer: Full Stack Engineers are able to articulate clear business objectives aligned to technical specifications and work in an iterative, agile pattern daily. They have ownership over their work tasks, embrace interacting with all levels of the team, and raise challenges when necessary. We aim to be cutting-edge engineers, not institutionalized developers. Job Description & Responsibilities: Minimize "meetings" to get requirements and have direct business interactions. Write referenceable and modular code. Design and architect the solution independently. Be fluent in particular areas and have proficiency in many areas. Have a passion to learn. Take ownership and accountability. Understand when to automate and when not to. Have a desire to simplify. Be entrepreneurial / business minded. Have a quality mindset: not just code quality, but also ensuring ongoing data quality by monitoring data to identify problems before they have business impact. Take risks and champion new ideas. Experience Required: 4+ years being part of Agile teams. 3+ years of scripting. 3+ years of database experience (Teradata). 2+ years of AWS services. 1+ years of experience with FHIR (good to have). Experience Desired: Experience with GitHub. DevOps experience - Jenkins, Terraform & Docker. Python/PySpark. SQL experience good to have. Education and Training Required: Knowledge and/or experience with healthcare information domains is a plus. Computer science degree is good to have. Primary Skills (must-haves): InterSystems experience. Python / PySpark experience. Exposure to AWS services - Glue, S3, SNS, SQS, Lambda, Step Functions, OpenSearch, DynamoDB, API Gateway, etc. Additional Skills: Excellent troubleshooting skills. Strong communication skills. Fluent in BDD and TDD development methodologies. Work in an agile CI/CD environment (Jenkins experience a plus).
Posted 2 months ago
2 - 6 years
12 - 16 Lacs
Pune
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Design and develop data solutions - design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift. Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems. Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services. Data integration: Integrate data from multiple sources, including relational databases, third-party APIs, and internal systems, to create a unified data ecosystem. Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance. Automation and optimization: Automate data pipeline processes to ensure efficiency. Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have detection and prevention tools for company products and platform and customer-facing
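As a small example of the automation responsibility described above, here is a hedged sketch of an AWS Lambda handler that starts a Glue job when a file lands in S3. The job name, argument key, and S3 event assumptions are illustrative only, not an actual implementation from this posting.

```python
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Triggered by an S3 put event to kick off a (hypothetical) Glue ETL job."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Start the Glue job, passing the newly arrived object as a job argument
    response = glue.start_job_run(
        JobName="example-etl-job",                         # hypothetical job name
        Arguments={"--source_path": f"s3://{bucket}/{key}"},
    )
    return {"JobRunId": response["JobRunId"]}
```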
Posted 2 months ago
12 - 16 years
30 - 45 Lacs
Pune, Bengaluru
Hybrid
Role: AWS Cloud Architect Location: Pune & Bengaluru Fulltime Key Responsibilities • Develop and maintain scalable and reliable data pipelines to ingest data from various APIs into the AWS ecosystem. • Utilize AWS Redshift for data warehousing tasks, optimizing data retrieval and query performance. • Configure and use AWS Glue for ETL processes, ensuring data is clean, well-structured, and ready for analysis. • Utilize EC2 instances for custom applications and services that require compute capacity. • Implement data lake and warehousing strategies to support analytics and business intelligence initiatives. • Collaborate with cross-functional teams to understand data needs and deliver solutions that align with business goals. • Ensure compliance with data governance and security policies. Qualifications • A solid experience in AWS services, especially S3, Redshift, Glue, and EC2. • Proficiency in data ingestion and integration, particularly with APIs. • A strong understanding of data warehousing, ETL processes, and cloud data storage. • Experience with scripting languages such as Python for automation and data manipulation. • Familiarity with infrastructure as code tools for managing AWS resources. • Excellent problem-solving skills and ability to work in a dynamic environment. • Strong communication skills for effective collaboration and documentation.
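To ground the API-ingestion responsibility listed above, a minimal Python sketch follows that pulls one page from a hypothetical REST API and lands the raw JSON in S3 for downstream Glue/Redshift processing. The endpoint, bucket, and key layout are assumptions for illustration only.

```python
import json
import boto3
import requests

s3 = boto3.client("s3")

def ingest_api_to_s3():
    """Pull one page from a (hypothetical) REST API and land it raw in S3."""
    resp = requests.get(
        "https://api.example.com/v1/orders",  # hypothetical endpoint
        params={"page": 1},
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()

    # Land the raw payload in a date-partitioned raw zone of the data lake
    s3.put_object(
        Bucket="example-raw-bucket",
        Key="orders/ingest_date=2024-01-01/page_1.json",
        Body=json.dumps(payload).encode("utf-8"),
    )

if __name__ == "__main__":
    ingest_api_to_s3()
```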
Posted 2 months ago
2 - 7 years
27 - 42 Lacs
Bangalore Rural
Hybrid
Note: We prefer candidates from product organizations and premium engineering institutes. Data Platform Engineer: Assist team members in designing and building data infrastructure at scale. We handle PBs of data each day through streaming and batch processing. You will be helping to deliver data to our data lake for use by our Data Warehouse team, Analytics teams, and Data Scientists. Work on Data Lakehouse system architecture, ingesting/pipelining of data, and tools to automate and orchestrate, delivering performance, reliability, and operational efficiency. Define both batch and streaming data-parallel processing pipelines and distributed processing back-ends. Build CI/CD pipelines and manage configuration management. Build tools and services that run on k8s and are part of our data ecosystem. Routinely write efficient, legible, and well-commented Python. Clear communication skills to deliver on complex, technical topics. Help scale our data warehouse (we use Snowflake) for clean, data-ready delivery for analysis. Work closely with Analytics Engineers and Data Analysts on the collection/analysis of raw data for models that empower end users. Build and scale our warehouse platform for data ingest, logging, search, aggregation, viewing, and analysis. What we're looking for: 5+ years of professional experience working and developing using Python and/or Java. 3+ years of professional experience with scripting (Unix, bash, Python). AWS certification or equivalent experience. Terraform or other IaC tools (Terraform preferred). Experience with streaming data: Apache Beam, Flink, Spark, and Kafka. Experience with modern data technologies such as Airflow, Snowflake, Redshift, Spark. Knowledge of source control, gitflow, gitlabflow, CI/CD (GitLab, CircleCI). Knowledge/experience working with Kubernetes, Docker, Helm. Experience with automation and orchestration tools. Bachelor's degree or equivalent in computer science, information systems, or a combination of education and related experience required. Data Engineer: What you'll do: Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load, and curate data from various internal and external systems. Provide leadership to cross-functional initiatives and projects. Influence architecture design and decisions. Build cross-functional relationships with Data Scientists, Product Managers, and Software Engineers to understand data needs and deliver on those needs. Improve engineering processes and cross-team collaboration. Ruthlessly prioritize work to align with company priorities. Provide thought leadership to grow and evolve the DE function and implement SDLC best practices in building internal-facing data products by staying up to date with industry trends, emerging technologies, and best practices in data engineering. What we're looking for: 3-12 years of experience in BI and data warehousing. Minimum 3 years of experience leading data teams in a high-volume environment. Minimum 4 years of experience with dbt, Airflow, and Snowflake. Experience with Apache Iceberg tables. Experience and knowledge of building data lakes in AWS (i.e. Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling. Experience mentoring data professionals from junior to senior levels. Demonstrated success leading cross-functional initiatives. Passionate about data quality, code quality, SLAs, and continuous improvement. Deep understanding of data system architecture. Deep understanding of ETL/ELT patterns. Development experience in at least one object-oriented language (Python, R, Scala, etc.). Comfortable with SQL and related tooling.
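For the streaming pipeline work mentioned above, here is a minimal Spark Structured Streaming sketch that reads a Kafka topic and lands micro-batches in an S3 data lake. Broker, topic, and path names are hypothetical, and the job assumes the spark-sql-kafka connector package is available on the cluster.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

# Read a Kafka topic as a stream (broker and topic names are hypothetical)
events = (spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events")
    .load())

# Kafka delivers key/value as binary; cast to strings for downstream use
parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Land micro-batches in the data lake as Parquet, checkpointing for exactly-once sinks
query = (parsed.writeStream
    .format("parquet")
    .option("path", "s3://example-lake/raw/events/")
    .option("checkpointLocation", "s3://example-lake/checkpoints/events/")
    .trigger(processingTime="1 minute")
    .start())

query.awaitTermination()
```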
Posted 2 months ago
2 - 7 years
27 - 42 Lacs
Bengaluru
Hybrid
Note: We prefer candidates from product organizations and premium engineering institutes. Data Platform Engineer: Assist team members in designing and building data infrastructure at scale. We handle PBs of data each day through streaming and batch processing. You will be helping to deliver data to our data lake for use by our Data Warehouse team, Analytics teams, and Data Scientists. Work on Data Lakehouse system architecture, ingesting/pipelining of data, and tools to automate and orchestrate, delivering performance, reliability, and operational efficiency. Define both batch and streaming data-parallel processing pipelines and distributed processing back-ends. Build CI/CD pipelines and manage configuration management. Build tools and services that run on k8s and are part of our data ecosystem. Routinely write efficient, legible, and well-commented Python. Clear communication skills to deliver on complex, technical topics. Help scale our data warehouse (we use Snowflake) for clean, data-ready delivery for analysis. Work closely with Analytics Engineers and Data Analysts on the collection/analysis of raw data for models that empower end users. Build and scale our warehouse platform for data ingest, logging, search, aggregation, viewing, and analysis. What we're looking for: 5+ years of professional experience working and developing using Python and/or Java. 3+ years of professional experience with scripting (Unix, bash, Python). AWS certification or equivalent experience. Terraform or other IaC tools (Terraform preferred). Experience with streaming data: Apache Beam, Flink, Spark, and Kafka. Experience with modern data technologies such as Airflow, Snowflake, Redshift, Spark. Knowledge of source control, gitflow, gitlabflow, CI/CD (GitLab, CircleCI). Knowledge/experience working with Kubernetes, Docker, Helm. Experience with automation and orchestration tools. Bachelor's degree or equivalent in computer science, information systems, or a combination of education and related experience required. Data Engineer: What you'll do: Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load, and curate data from various internal and external systems. Provide leadership to cross-functional initiatives and projects. Influence architecture design and decisions. Build cross-functional relationships with Data Scientists, Product Managers, and Software Engineers to understand data needs and deliver on those needs. Improve engineering processes and cross-team collaboration. Ruthlessly prioritize work to align with company priorities. Provide thought leadership to grow and evolve the DE function and implement SDLC best practices in building internal-facing data products by staying up to date with industry trends, emerging technologies, and best practices in data engineering. What we're looking for: 3-12 years of experience in BI and data warehousing. Minimum 3 years of experience leading data teams in a high-volume environment. Minimum 4 years of experience with dbt, Airflow, and Snowflake. Experience with Apache Iceberg tables. Experience and knowledge of building data lakes in AWS (i.e. Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling. Experience mentoring data professionals from junior to senior levels. Demonstrated success leading cross-functional initiatives. Passionate about data quality, code quality, SLAs, and continuous improvement. Deep understanding of data system architecture. Deep understanding of ETL/ELT patterns. Development experience in at least one object-oriented language (Python, R, Scala, etc.). Comfortable with SQL and related tooling.
Posted 2 months ago
2 - 7 years
27 - 42 Lacs
Bangalore Rural
Hybrid
We prefer candidates from product organisations and premium engineering institutes. We are hiring for our client, an Indian multinational technology services company based in Pune, primarily engaged in cloud computing, internet of things, endpoint security, big data analytics, and software product engineering services. Data Platform Engineer: Assist team members in designing and building data infrastructure at scale. We handle PBs of data each day through streaming and batch processing. You will be helping to deliver data to our data lake for use by our Data Warehouse team, Analytics teams, and Data Scientists. Work on Data Lakehouse system architecture, ingesting/pipelining of data, and tools to automate and orchestrate, delivering performance, reliability, and operational efficiency. Define both batch and streaming data-parallel processing pipelines and distributed processing back-ends. Build CI/CD pipelines and manage configuration management. Build tools and services that run on k8s and are part of our data ecosystem. Routinely write efficient, legible, and well-commented Python. Clear communication skills to deliver on complex, technical topics. Help scale our data warehouse (we use Snowflake) for clean, data-ready delivery for analysis. Work closely with Analytics Engineers and Data Analysts on the collection/analysis of raw data for models that empower end users. Build and scale our warehouse platform for data ingest, logging, search, aggregation, viewing, and analysis. What we're looking for: 5+ years of professional experience working and developing using Python and/or Java. 3+ years of professional experience with scripting (Unix, bash, Python). AWS certification or equivalent experience. Terraform or other IaC tools (Terraform preferred). Experience with streaming data: Apache Beam, Flink, Spark, and Kafka. Experience with modern data technologies such as Airflow, Snowflake, Redshift, Spark. Knowledge of source control, gitflow, gitlabflow, CI/CD (GitLab, CircleCI). Knowledge/experience working with Kubernetes, Docker, Helm. Experience with automation and orchestration tools. Bachelor's degree or equivalent in computer science, information systems, or a combination of education and related experience required. Data Engineer: What you'll do: Lead the building of scalable, fault-tolerant pipelines with built-in data quality checks that transform, load, and curate data from various internal and external systems. Provide leadership to cross-functional initiatives and projects. Influence architecture design and decisions. Build cross-functional relationships with Data Scientists, Product Managers, and Software Engineers to understand data needs and deliver on those needs. Improve engineering processes and cross-team collaboration. Ruthlessly prioritize work to align with company priorities. Provide thought leadership to grow and evolve the DE function and implement SDLC best practices in building internal-facing data products by staying up to date with industry trends, emerging technologies, and best practices in data engineering. What we're looking for: 3-12 years of experience in BI and data warehousing. Minimum 3 years of experience leading data teams in a high-volume environment. Minimum 4 years of experience with dbt, Airflow, and Snowflake. Experience with Apache Iceberg tables. Experience and knowledge of building data lakes in AWS (i.e. Spark/Glue, Athena), including data modeling, data quality best practices, and self-service tooling. Experience mentoring data professionals from junior to senior levels. Demonstrated success leading cross-functional initiatives. Passionate about data quality, code quality, SLAs, and continuous improvement. Deep understanding of data system architecture. Deep understanding of ETL/ELT patterns. Development experience in at least one object-oriented language (Python, R, Scala, etc.). Comfortable with SQL and related tooling.
Posted 2 months ago
In recent years, the demand for professionals with expertise in glue technologies has been on the rise in India. Glue jobs involve working with tools and platforms that help connect various systems and applications together seamlessly. This article aims to provide an overview of the glue job market in India, including top hiring locations, average salary ranges, career progression, related skills, and interview questions for aspiring job seekers.
Here are 5 major cities in India actively hiring for glue roles: 1. Bangalore 2. Pune 3. Hyderabad 4. Chennai 5. Mumbai
The estimated salary range for glue professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals with several years of experience can earn between INR 12-18 lakhs per annum.
In the field of glue technologies, a typical career progression may include roles such as: - Junior Developer - Senior Developer - Tech Lead - Architect
Apart from expertise in glue technologies, professionals in this field are often expected to have or develop skills in: - Data integration - ETL (Extract, Transform, Load) processes - Database management - Programming languages (e.g., Python, Java)
Here are 25 interview questions for glue roles: - What is Glue in the context of data integration? (basic) - Explain the difference between ETL and ELT. (basic) - How would you handle data quality issues in a glue job? (medium) - Can you explain how Glue works with Apache Spark? (medium) - What is the significance of schema evolution in Glue? (medium) - How do you optimize Glue jobs for performance? (medium) - Describe a scenario where you had to troubleshoot a failed Glue job. (medium) - What is a bookmark in Glue and how is it used? (medium) - How does Glue handle schema inference? (medium) - Have you worked with AWS Glue DataBrew? If so, explain your experience. (medium) - Explain how Glue handles schema evolution. (advanced) - How does Glue support job bookmarks for incremental processing? (advanced) - What are the differences between Glue ETL and Glue DataBrew? (advanced) - How do you handle nested JSON structures in Glue transformations? (advanced) - Explain a complex Glue job you have designed and implemented. (advanced) - How does Glue handle dynamic frame operations? (advanced) - What is the role of a Glue DynamicFrame in data transformation? (advanced) - How do you handle schema changes in Glue jobs? (advanced) - Explain how Glue can be integrated with other AWS services. (advanced) - What are the limitations of Glue that you have encountered in your projects? (advanced) - How do you monitor and debug Glue jobs in production environments? (advanced) - Describe your experience with Glue job scheduling and orchestration. (advanced) - How do you ensure security in Glue jobs that handle sensitive data? (advanced) - Explain the concept of lazy evaluation in Glue. (advanced) - How do you handle dependencies between Glue jobs in a workflow? (advanced)
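Several of the questions above (job bookmarks, schema inference, DynamicFrames) can be illustrated with a single small Glue job. The sketch below is hypothetical and only runs inside an AWS Glue job environment where the awsglue libraries are provided; database, table, and path names are placeholders.

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)   # job.init/commit is what enables job bookmarks

# DynamicFrames infer schema per record, so ragged or nested sources don't need
# a fixed schema up front (relates to the schema-inference questions above).
src = glue_context.create_dynamic_frame.from_catalog(
    database="example_db",
    table_name="clickstream",
    transformation_ctx="src",      # bookmarks track processed data per transformation_ctx
)

# resolveChoice handles columns whose type varies across records
resolved = src.resolveChoice(specs=[("user_id", "cast:string")])

glue_context.write_dynamic_frame.from_options(
    frame=resolved,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/clickstream/"},
    format="parquet",
    transformation_ctx="sink",
)

# Committing the job advances the bookmark so the next run reads only new data
job.commit()
```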
As you prepare for interviews and explore opportunities in the glue job market in India, remember to showcase your expertise in glue technologies, related skills, and problem-solving abilities. With the right preparation and confidence, you can land a rewarding career in this dynamic and growing field. Good luck!