5.0 - 10.0 years
20 - 35 Lacs
Chennai
Work from Office
Development:
- Design, build, and maintain robust, scalable, and high-performance data pipelines to ingest, process, and store large volumes of structured and unstructured data.
- Utilize Apache Spark within Databricks to process big data efficiently, leveraging distributed computing to process large datasets in parallel.
- Integrate data from a variety of internal and external sources, including databases, APIs, cloud storage, and real-time streaming data.

Data Integration & Storage:
- Implement and maintain data lakes and warehouses, using technologies like Databricks, Azure Synapse, Redshift, and BigQuery to store and retrieve data.
- Design and implement data models, schemas, and architecture for efficient querying and storage.

Data Transformation & Optimization:
- Leverage Databricks and Apache Spark to perform data transformations at scale, ensuring data is cleaned, transformed, and optimized for analytics.
- Write and optimize Spark SQL, PySpark, and Scala code to process large datasets in real-time and batch jobs.
- Work on ETL processes to extract, transform, and load data from various sources into cloud-based data environments.

Big Data Tools & Technologies:
- Utilize cloud-based big data platforms (e.g., AWS, Azure, Google Cloud) in conjunction with Databricks for distributed data processing and storage.
- Implement and maintain data pipelines using Apache Kafka, Apache Flink, and other data streaming technologies for real-time data processing.

Collaboration & Stakeholder Engagement:
- Work with data scientists, data analysts, and business stakeholders to define data requirements and deliver solutions that align with business objectives.
- Collaborate with cloud engineers, data architects, and other teams to ensure smooth integration and data flow between systems.

Monitoring & Automation:
- Build and implement monitoring solutions for data pipelines, ensuring consistent performance, identifying issues, and optimizing workflows.
- Automate data ingestion, transformation, and validation processes to reduce manual intervention and increase efficiency.
- Document data pipeline processes, architectures, and data models to ensure clarity and maintainability.
- Adhere to best practices in data engineering, software development, version control, and code review.

Required Skills & Qualifications:

Education: Bachelor's degree in Computer Science, Engineering, Data Science, or a related field (or equivalent experience).

Technical Skills:
- Apache Spark: strong hands-on experience with Spark, specifically within Databricks (PySpark, Scala, Spark SQL).
- Experience working with cloud-based platforms such as AWS, Azure, or Google Cloud, particularly in the context of big data processing and storage.
- Proficiency in SQL and experience with cloud data warehouses (e.g., Redshift, BigQuery, Snowflake).
- Strong programming skills in Python, Scala, or Java.

Big Data & Cloud Technologies:
- Experience with distributed computing concepts and scalable data processing architectures.
- Familiarity with data lake architectures and frameworks (e.g., AWS S3, Azure Data Lake).

Data Engineering Concepts:
- Strong understanding of ETL processes, data modeling, and database design.
- Experience with batch and real-time data processing techniques.
- Familiarity with data quality, data governance, and privacy regulations.

Problem Solving & Analytical Skills:
- Strong troubleshooting skills for resolving issues in data pipelines and performance optimization.
- Ability to work with large, complex datasets and perform data wrangling and cleaning.
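As a rough illustration of the batch-transformation pattern this listing centers on, here is a minimal PySpark sketch of a Databricks-style job: read raw data, clean it, and write a partitioned Delta table. The bucket paths, column names, and filters are hypothetical placeholders, not anything from the employer's actual stack.

```python
# Minimal sketch: batch-clean a raw dataset and write it as Delta.
# Paths, columns, and table layout are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_batch_clean").getOrCreate()

# Hypothetical landing zone with raw JSON records.
raw = spark.read.format("json").load("s3://example-bucket/landing/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])                    # de-duplicate on key
    .withColumn("order_ts", F.to_timestamp("order_ts"))  # normalize timestamps
    .withColumn("order_date", F.to_date("order_ts"))     # derive partition column
    .filter(F.col("amount") > 0)                          # drop obviously bad rows
)

# Write optimized for analytics: a date-partitioned Delta table.
(
    cleaned.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("s3://example-bucket/curated/orders/")
)
```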
Posted 2 months ago
5.0 - 10.0 years
15 - 30 Lacs
Ahmedabad
Work from Office
Role & responsibilities

Senior Data Engineer Job Description

GRUBBRR is seeking a mid/senior-level data engineer to help build our next-generation analytical and big data solutions. We strive to build cloud-native, consumer-first, UX-friendly kiosks and online applications across a variety of verticals, supporting enterprise clients and small businesses. Behind our consumer applications, we integrate and interact with a deep stack of payment, loyalty, and POS systems. In addition, we provide actionable insights that enable our customers to make informed decisions. Our challenge and goal is to provide a frictionless experience for our end consumers and easy-to-use, smart management capabilities that help our customers maximize their ROI.

Responsibilities:
- Develop and maintain data pipelines
- Ensure data quality and accuracy
- Design, develop, and maintain large, complex sets of data that meet non-functional and functional business requirements
- Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using cloud technologies
- Build analytical tools that utilize the data pipelines

Skills:
- Solid experience with SQL & NoSQL
- Strong data modeling skills for data lakes, data warehouses, and data marts, including dimensional modeling and star schemas
- Proficient with Azure Data Factory data integration technology
- Knowledge of Hadoop or similar big data technology
- Knowledge of Apache Kafka, Spark, Hive, or equivalent
- Knowledge of Azure or AWS analytics technologies

Qualifications:
- BS in Computer Science, Applied Mathematics, or related fields (MS preferred)
- At least 8 years of experience working with OLAP systems
- Microsoft Azure or AWS Data Engineer certification a plus
Posted 2 months ago
5.0 - 9.0 years
15 - 30 Lacs
Hyderabad
Hybrid
Hi! Greetings of the day!!

We have openings with one of our product-based client companies.

Location: Hyderabad
Notice Period: Only Immediate - 30 Days
Work Mode: Hybrid

Key Purpose Statement - Core Mission

The Senior Data Engineer will play a key role in designing, building, and optimizing our data infrastructure and pipelines. This individual will leverage their deep expertise in Azure Synapse, Databricks, cloud platforms, and Python programming to deliver high-quality data solutions.

RESPONSIBILITIES

Data Infrastructure and Pipeline Development:
- Develop and maintain complex ETL/ELT pipelines using Databricks and Azure Synapse.
- Optimize data pipelines for performance, scalability, and cost-efficiency.
- Implement best practices for data governance, quality, and security.

Cloud Platform Management:
- Design and manage cloud-based data infrastructure on platforms such as Azure.
- Utilize cloud-native tools and services to enhance data processing and storage capabilities.
- Understand and design CI/CD pipelines for data engineering projects.

Programming:
- Develop and maintain high-quality, reusable code in Databricks and Synapse environments for data processing and automation.
- Collaborate with data scientists and analysts to design solutions into data workflows.
- Conduct code reviews and mentor junior engineers in Python, PySpark & SQL best practices.

If interested, please share your resume to aparna.ch@v3staffing.in
Posted 2 months ago
9.0 - 13.0 years
25 - 35 Lacs
Hyderabad
Hybrid
Senior Data Engineer

- You are familiar with AWS and Azure Cloud.
- You have extensive knowledge of Snowflake; SnowPro Core certification is a must-have.
- You have used DBT in at least one project to deploy models in production.
- You have configured and deployed Airflow and integrated various operators in Airflow (especially DBT & Snowflake).
- You can design and build release pipelines and understand the Azure DevOps ecosystem.
- You have an excellent understanding of Python (especially PySpark) and are able to write metadata-driven programs.
- You are familiar with Data Vault (Raw, Business) and concepts like Point in Time and Semantic Layer.
- You are resilient in ambiguous situations and can clearly articulate the problem in a business-friendly way.
- You believe in documenting processes, managing the artifacts, and evolving them over time.
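A minimal sketch of the Airflow-plus-DBT orchestration this listing asks about, assuming a dbt project checked out on the worker with Snowflake credentials supplied via profiles.yml or environment variables; the DAG id, project path, and schedule are hypothetical, and the `schedule` argument is the Airflow 2.4+ spelling (earlier versions use `schedule_interval`).

```python
# Minimal Airflow DAG sketch: run dbt models against Snowflake, then test them.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_snowflake_daily",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        # Assumes the dbt project lives at this (hypothetical) path on the
        # worker; Snowflake credentials come from profiles.yml / env vars.
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )
    dbt_run >> dbt_test  # only test models after they build successfully
```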
Posted 2 months ago
8.0 - 13.0 years
16 - 27 Lacs
Indore, Hyderabad, Ahmedabad
Work from Office
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI. We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/LLM, and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.

Designation: Lead Data Engineer
Location: Hyderabad, Indore, Ahmedabad
Experience: 8 years

Role & responsibilities

What You Will Do:
• Analyze business requirements.
• Analyze the data model, do GAP analysis against business requirements and Power BI, and design and model the Power BI schema.
• Transform data in Power BI/SQL/ETL tools.
• Create DAX formulas, reports, and dashboards; able to write DAX formulas.
• Experience writing SQL queries and stored procedures.
• Design effective Power BI solutions based on business requirements.
• Manage a team of Power BI developers and guide their work.
• Integrate data from various sources into Power BI for analysis.
• Optimize performance of reports and dashboards for smooth usage.
• Collaborate with stakeholders to align Power BI projects with goals.
• Knowledge of Data Warehousing (must); Data Engineering is a plus.

What we need:
• B.Tech in computer science or equivalent
• Minimum 5+ years of relevant experience

Perks and benefits
Posted 2 months ago
4.0 - 7.0 years
5 - 14 Lacs
Mumbai, Navi Mumbai, Mumbai (All Areas)
Work from Office
We are looking for an experienced Data Engineer to design, develop, and maintain our data pipelines, primarily focused on ingesting data into our Snowflake data platform. The ideal candidate will have strong expertise in Snowflake and practical experience with AWS services, particularly using S3 as a landing zone and an entry point to the Snowflake environment. You will be responsible for building efficient, reliable, and scalable data pipelines that are critical for our data-driven decision-making processes.

Role & responsibilities
1. Design, develop, implement, and maintain scalable and robust data pipelines to ingest data from various sources into the Snowflake data platform.
2. Utilize AWS S3 as a primary landing zone for data, ensuring efficient data transfer and integration with Snowflake.
3. Develop and manage ETL/ELT processes, focusing on data transformation, cleansing, and loading within the Snowflake and AWS ecosystem.
4. Write complex SQL queries and stored procedures in Snowflake for data manipulation, transformation, and performance optimization.
5. Monitor, troubleshoot, and optimize data pipelines for performance, reliability, and scalability.
6. Collaborate with data architects, data analysts, data scientists, and business stakeholders to understand data requirements and deliver effective solutions.
7. Ensure data quality, integrity, and governance across all data pipelines and within the Snowflake platform.
8. Implement data security best practices in AWS and Snowflake.
9. Develop and maintain comprehensive documentation for data pipelines, processes, and architectures.
10. Stay up-to-date with emerging technologies and best practices in data engineering, particularly related to Snowflake and AWS.
11. Participate in Agile/Scrum development processes, including sprint planning, daily stand-ups, and retrospectives.

Preferred candidate profile
1. Strong, hands-on proficiency with Snowflake:
   - In-depth knowledge of Snowflake architecture and features (e.g., Snowpipe, Tasks, Streams, Time Travel, Zero-Copy Cloning).
   - Experience in designing and implementing Snowflake data models (schemas, tables, views).
   - Expertise in writing and optimizing complex SQL queries in Snowflake.
   - Experience with data loading and unloading techniques in Snowflake.
2. Solid experience with AWS Cloud services:
   - Proficiency in using AWS S3 for data storage, staging, and as a landing zone for Snowflake.
   - Experience with other relevant AWS services (e.g., IAM for security, Lambda for serverless processing, Glue for ETL, if applicable).
3. Strong experience in designing and building ETL/ELT data pipelines. Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Scala, Java); Python is highly preferred.
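For illustration, a minimal sketch of the S3-landing-zone-to-Snowflake load this listing describes, using the Snowflake Python connector to issue a COPY INTO from an external stage. The account, credentials, stage, and table names are assumptions; a production pipeline would more likely use Snowpipe with an IAM-integrated stage and pull credentials from a secrets manager.

```python
# Hedged sketch: load files staged in S3 into a Snowflake table via COPY INTO.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # hypothetical account identifier
    user="etl_user",
    password="...",              # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # @raw.s3_landing is an assumed external stage pointing at the S3
    # landing zone; ON_ERROR halts the load on the first bad record.
    cur.execute(
        """
        COPY INTO raw.orders
        FROM @raw.s3_landing/orders/
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
        """
    )
finally:
    conn.close()
```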
Posted 2 months ago
7.0 - 11.0 years
20 - 35 Lacs
Gandhinagar, Ahmedabad
Hybrid
Job Title: Senior Data Engineer
Experience: 8 to 10 Years
Location: Ahmedabad & Gandhinagar
Employment Type: Full-time

Our client is a leading provider of advanced solutions for capital markets, specializing in cutting-edge trading infrastructure and software. With a global presence and a strong focus on innovation, the company empowers professional traders, brokers, and financial institutions to execute high-speed, high-performance trading strategies across multiple asset classes. Their technology is known for its reliability, low latency, and scalability, making it a preferred choice for firms seeking a competitive edge in dynamic financial environments.

Role & responsibilities
- Design, develop, and maintain scalable and reliable data pipelines using DBT and Airflow.
- Work extensively with Snowflake to optimize data storage, transformation, and access.
- Develop and maintain efficient ETL/ELT processes in Python to support analytical and operational workloads.
- Ensure high standards of data quality, consistency, and security across systems.
- Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.
- Monitor and troubleshoot data pipelines, resolving issues proactively.
- Optimize performance of existing data workflows and recommend improvements.
- Document data engineering processes and solutions effectively.

Preferred candidate profile
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 8-10 years of experience in data engineering or related roles
- Strong knowledge of SQL and data warehousing principles
- Familiarity with version control (e.g., Git) and CI/CD practices
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration abilities

Preferred Skills
- Experience with cloud platforms like AWS, GCP, or Azure
- Exposure to data governance and security best practices
- Knowledge of modern data architecture and real-time processing frameworks

Competitive Benefits Offered By Our Client:
- Relocation Support: an additional relocation allowance to assist with moving expenses.
- Comprehensive Health Benefits: including medical, dental, and vision coverage.
- Flexible Work Schedule: hybrid work model with an expectation of just 2 days on-site per week.
- Generous Paid Time Off (PTO): 21 days per year, with the ability to roll over 1 day into the following year. Additionally, 1 day per year is allocated for volunteering, 2 training days per year for uninterrupted professional development, and 1 extra PTO day during milestone years.
- Paid Holidays & Early Dismissals: a robust paid holiday schedule with early dismissal on select days, plus generous parental leave for all genders, including adoptive parents.
- Tech Resources: a rent-to-own program offering employees a company-provided Mac/PC laptop and/or mobile phone of their choice, along with a tech accessories budget for monitors, headphones, keyboards, and other office equipment.
- Health & Wellness Subsidies: contributions toward gym memberships and health/wellness initiatives to support your well-being.
- Milestone Anniversary Bonuses: special bonuses to celebrate key career milestones.
- Inclusive & Collaborative Culture: a forward-thinking, culture-based organisation that values diversity and inclusion and fosters collaborative teams.
Posted 2 months ago
8.0 - 10.0 years
15 - 20 Lacs
Pune
Work from Office
Education: Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
Experience: 8-10 years
- 8+ years of experience in data engineering or a related field.
- Strong hands-on experience with Azure Databricks, Spark, Python/Scala, CI/CD, and scripting for data processing.
- Experience working with multiple file formats like Parquet, Delta, and Iceberg.
- Knowledge of Kafka or similar streaming technologies for real-time data ingestion.
- Experience with data governance and data security in Azure.
- Proven track record of building large-scale data ingestion and ETL pipelines in cloud environments, specifically Azure.
- Deep understanding of Azure Data Services (e.g., Azure Blob Storage, Azure Data Lake, Azure SQL Data Warehouse, Event Hubs, Functions, etc.).
- Familiarity with data lakes, data warehouses, and modern data architectures.
- Experience with CI/CD pipelines, version control (Git), Jenkins, and agile methodologies.
- Understanding of cloud infrastructure and architecture principles (especially within Azure).

Technical Skills:
- Expert-level proficiency in Spark and Spark Streaming, including optimization, debugging, and troubleshooting Spark jobs.
- Solid knowledge of Azure Databricks for scalable, distributed data processing.
- Strong coding skills in Python and Scala for data processing.
- Experience working with SQL, especially for large datasets.
- Knowledge of data formats like Iceberg, Parquet, ORC, and Delta Lake.

Leadership Skills:
- Proven ability to lead and mentor a team of data engineers, ensuring adherence to best practices.
- Excellent communication skills, capable of interacting with both technical and non-technical stakeholders.
- Strong problem-solving, analytical, and troubleshooting abilities.
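A minimal Spark Structured Streaming sketch of the Kafka-based real-time ingestion this listing mentions, appending to a Delta table. The broker address, topic, and mount paths are hypothetical, and the cluster is assumed to have the spark-sql-kafka connector and Delta Lake available (both ship with Databricks runtimes).

```python
# Hedged sketch: stream a Kafka topic into a Delta table with checkpointing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
    # Kafka delivers bytes; cast the payload to string for downstream parsing.
    .select(F.col("value").cast("string").alias("payload"), F.col("timestamp"))
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")  # exactly-once bookkeeping
    .outputMode("append")
    .start("/mnt/delta/events")
)
query.awaitTermination()
```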
Posted 2 months ago
5.0 - 10.0 years
20 - 30 Lacs
Gurugram
Hybrid
Hi,

Wishes from GSN!!! Pleasure connecting with you!!!

We have been in corporate search services, identifying and bringing in stellar, talented professionals for our reputed IT / non-IT clients in India, and have been successfully meeting our clients' needs for the last 20 years.

At present, GSN is hiring a PySpark Developer for one of our leading MNC clients. Please find below the details:

~~~~ LOOKING FOR IMMEDIATE JOINERS ~~~~

WORK LOCATION: Gurugram
Job Role: PySpark Developer
EXPERIENCE: 5 Yrs - 10 Yrs
CTC Range: 20 LPA - 28 LPA
Work Type: HYBRID Only

JD:
- Must be strong in advanced SQL (e.g., joins and aggregations)
- Should have good experience in PySpark (at least 4 years)
- Good to have knowledge of AWS services
- Experience across the data lifecycle
- Design & develop ETL pipelines using PySpark on the AWS framework

If interested, kindly APPLY for an IMMEDIATE response.

Thanks & Regards
Sathya K
GSN Consulting
Mob: 8939666794
Mail ID: sathya@gsnhr.net; Web: https://g.co/kgs/UAsF9W
Posted 2 months ago
8.0 - 12.0 years
15 - 27 Lacs
Mumbai, Pune, Bengaluru
Work from Office
Role & responsibilities

Job Description: Primarily looking for a Data Engineer (AWS) with expertise in processing data pipelines using Databricks and PySpark SQL on cloud distributions like AWS. Must have: AWS Databricks. Good to have: PySpark, Snowflake, Talend.

Requirements — the candidate must be experienced working in projects involving the following; other ideal qualifications include:
• Expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc.
• Very proficient in large-scale data operations using Databricks and overall very comfortable using Python.
• Familiarity with AWS compute, storage, and IAM concepts.
• Experience working with S3 Data Lake as the storage tier.
• Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
• Cloud warehouse experience (Snowflake, etc.) is a huge plus.
• Carefully evaluates alternative risks and solutions before taking action.
• Optimizes the use of all available resources.
• Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Skills:
• Hands-on experience with Databricks, Spark SQL, and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
• Experience with shell scripting.
• Exceptionally strong analytical and problem-solving skills.
• Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
• Strong experience with relational databases and data access methods, especially SQL.
• Excellent collaboration and cross-functional leadership skills.
• Excellent communication skills, both written and verbal.
• Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
• Ability to leverage data assets to respond to complex questions that require timely answers.
• Working knowledge of migrating relational and dimensional databases to the AWS Cloud platform.

Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.

Note: Need only immediate joiners / candidates serving notice period. Interested candidates can apply.

Regards,
HR Manager
Posted 2 months ago
5.0 - 10.0 years
8 - 18 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role: Celonis Data Engineer
Skills: Celonis, Celonis EMS, Data Engineer, SQL, PQL, ETL, OCPM
Notice Period: 30-45 Days

Role & responsibilities:
- Hands-on experience with Celonis EMS (Execution Management System).
- Strong SQL skills for data extraction, transformation, and modeling.
- Proficiency in PQL (Process Query Language) for custom process analytics.
- Experience integrating Celonis with SAP, Oracle, Salesforce, or other ERP/CRM systems.
- Knowledge of ETL, data pipelines, and APIs (REST/SOAP).

Process Mining & Analytical Skills:
- Understanding of business process modeling and process optimization techniques.
- At least one OCPM project experience.
- Ability to analyze event logs and identify bottlenecks, inefficiencies, and automation opportunities.
- 6-10 years of experience in the IT industry with data architecture / business process, of which 3-4 years in process mining, data analytics, or business intelligence.
- Celonis certification (e.g., Celonis Data Engineer, Business Analyst, or Solution Consultant) is a plus.
- OCPM experience is a plus.
Posted 2 months ago
13.0 - 20.0 years
30 - 45 Lacs
Bengaluru
Hybrid
Job Description: Java/Big Data/SQL/Architect
- Good understanding of Java/J2EE-based scalable application development.
- Good understanding of data engineering, with hands-on experience in data transfer and data pipeline development.
- Exposure to building enterprise products/tools that improve developer productivity.
- Passionate about Gen AI / impact creation, with hands-on experience.
Posted 2 months ago
13.0 - 15.0 years
37 - 40 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
Role & responsibilities

REQUIREMENTS:
- Total experience 13+ years.
- Proficient in architecting, designing, and implementing data platforms and data applications.
- Strong experience in AWS Glue and Azure Data Factory.
- Hands-on experience with Databricks.
- Experience working with Big Data applications and distributed processing systems.
- Working experience building and maintaining ETL/ELT pipelines using modern data engineering tools and frameworks.
- Lead the architecture and implementation of data lakes, data warehouses, and real-time streaming solutions.
- Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
- Participate and contribute to RFPs, workshops, PoCs, and technical solutioning discussions.
- Ensure scalability, reliability, and performance of data platforms.
- Strong communication skills and the ability to collaborate effectively with cross-functional teams.

RESPONSIBILITIES:
- Writing and reviewing great quality code.
- Understanding the client's business use cases and technical requirements, and converting them into a technical design that elegantly meets the requirements.
- Mapping decisions with requirements and translating them to developers.
- Identifying different solutions and narrowing down the best option that meets the client's requirements.
- Defining guidelines and benchmarks for NFR considerations during project implementation.
- Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers.
- Reviewing architecture and design on aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring all relevant best practices are followed.
- Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it.
- Understanding and relating technology integration scenarios and applying these learnings in projects.
- Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and justifying the decisions taken.
- Carrying out POCs to make sure that the suggested design/technologies meet the requirements.

Preferred candidate profile
Posted 2 months ago
5.0 - 10.0 years
12 - 22 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Job Title:
=======
Microsoft ETL Developer - Microsoft SSIS / Informatica (x4 positions)

Onsite Location:
=============
Dubai, UAE / Doha, Qatar / Riyadh, Saudi Arabia

Onsite Monthly Salary:
==============
10k AED - 15k AED

Offshore Location:
===============
Pune / Hyderabad / Chennai / Bangalore / Mumbai

Offshore Annual Salary:
==============
12 LPA - 20 LPA

Note:
=====
You need to travel onsite (UAE) on a needful basis.

Project duration:
=============
2 years initially

Desired Experience Level Needed:
===========================
5 - 10 years

Qualification:
==========
B.Tech / M.Tech / MCA / M.Sc or equivalent

Experience Needed:
===============
Overall: 5 or more years of total IT experience, including solid 3+ years of experience as an ETL Developer with Microsoft SSIS / Informatica.

Job Responsibilities:
================
- Design and develop ETL data flows
- Design Microsoft ETL packages
- Able to code T-SQL
- Able to create orchestrations
- Able to design batch jobs / orchestration runs
- Familiarity with data models
- Able to develop MDM (Master Data Management) and design SCD-1/2/3 as per client requirements

Experience:
================
- Experience as an ETL Developer with Microsoft SSIS
- Exposure and experience with Azure services, including Azure Data Factory
- Sound knowledge of BI practices and visualization tools such as Power BI / SSRS / QlikView
- Collecting/gathering data from multiple source systems
- Loading the data using ETL
- Creating automated data pipelines
- Configuring Azure resources and services

Skills:
================
- Microsoft SSIS
- Informatica
- Azure Data Factory
- Spark
- SQL

Nice to have:
==========
- Any onsite experience is an added advantage, but not mandatory
- Microsoft certifications are an added advantage

Business Vertical:
==============
- Banking / Investment Banking
- Capital Markets
- Securities / Stock Market Trading
- Bonds / Forex Trading
- Credit Risk
- Payments Card Industry (VISA / MasterCard / Amex)

Job Code:
======
ETL_DEVP_0525

No. of positions:
============
04

Email:
=====
spectrumconsulting1977@gmail.com

If you are interested, please email your CV as an ATTACHMENT with the job ref. code [ ETL_DEVP_0525 ] as the subject.
Posted 2 months ago
5 - 8 years
22 - 30 Lacs
Pune, Chennai
Work from Office
Experience:
- Minimum of 5 years of experience in data engineering, with a strong focus on data pipeline development.
- At least 2 years of experience leading teams or projects in the healthcare, life sciences, or related domains.
- Proficiency in Python, with experience in data manipulation libraries.
- Hands-on experience with AWS Glue, AWS Lambda, S3, Redshift, and other relevant AWS data services.
- Familiarity with data integration tools, ETL (Extract, Transform, Load) frameworks, and data warehousing solutions.
- Proven experience working in an onsite-offshore model, managing distributed teams, and coordinating development across multiple time zones.
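As a hedged illustration of how the Lambda/S3/Redshift pieces named above often fit together, here is a hypothetical S3-triggered Lambda handler that issues a Redshift COPY through the Redshift Data API. The cluster identifier, IAM role ARN, and table names are invented for the example, not taken from any specific employer's setup.

```python
# Hypothetical sketch: on S3 object creation, load the new file into Redshift.
import boto3

redshift_data = boto3.client("redshift-data")

def handler(event, context):
    # Standard S3 event shape: pull bucket and key of the landed object.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # COPY runs inside Redshift; the IAM role (hypothetical ARN) must be
    # attached to the cluster and allowed to read the bucket.
    copy_sql = (
        f"COPY analytics.events FROM 's3://{bucket}/{key}' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-load' "
        "FORMAT AS PARQUET"
    )
    redshift_data.execute_statement(
        ClusterIdentifier="analytics-cluster",  # hypothetical cluster
        Database="analytics",
        DbUser="etl_user",
        Sql=copy_sql,
    )
```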
Posted 2 months ago
6 - 11 years
18 - 33 Lacs
Pune, Bengaluru
Work from Office
Urgent hiring for an AWS Data Engineer

Experience: 6-18 Years
Location: Pune / Bangalore
No. of positions: 9
Notice Period: Immediate joiner

Role & responsibilities
- Requires 5 to 10 years of experience in data engineering on the AWS platform.
- Proficiency in Spark/PySpark/Python/SQL is essential.
- Familiarity with AWS data stores including S3, RDS, DynamoDB, and AWS Data Lake, having utilized these technologies in previous projects.
- Knowledge of AWS services like Redshift, Kinesis Streaming, Glue, Iceberg, Lambda, Athena, S3, EC2, SQS, and SNS.
- Understanding of monitoring and observability toolsets like CloudWatch and Tivoli Netcool.
- Basic understanding of AWS networking components: VPC, SG, Subnets, Load Balancers.
- Collaboration with cross-functional teams to gather technical requirements and deliver high-quality ETL solutions.
- Strong AWS development experience for data ETL, pipeline, integration, and automation work.
- Deep understanding of the Data & Analytics solution development lifecycle.
- Proficient in CI/CD and Jenkins; capable of writing testing scripts and automating processes.
- Experience with IaC (Terraform or CloudFormation); basic knowledge of containers.
- Familiarity with Bitbucket/Git and experience working in an agile/scrum team.
- Experience in the Private Bank/Wealth Management domain.
Posted 2 months ago
5 - 10 years
10 - 20 Lacs
Bengaluru
Work from Office
Job Title: Senior Data Engineer
Location: Bengaluru, India
Experience: 5-10 Years
Notice period: Immediate

Key Responsibilities
- Design, develop, and maintain scalable data pipelines for efficient data processing.
- Build and optimize data storage solutions, ensuring high performance and reliability.
- Implement ETL processes to extract, transform, and load data from various sources.
- Work closely with data analysts and scientists to support their data needs.
- Optimize database structures and ensure data integrity.
- Develop and manage cloud-based data architectures (AWS, Azure, or Google Cloud).
- Ensure compliance with data governance and security standards.
- Monitor and troubleshoot data workflows to maintain system efficiency.

Required Skills & Qualifications
- Strong proficiency in SQL, Python, and R for data processing.
- Experience with big data technologies like Hadoop, Spark, and Kafka.
- Hands-on expertise in ETL tools and data warehousing solutions.
- Deep understanding of database management systems (MySQL, PostgreSQL, MongoDB, etc.).
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Strong problem-solving and communication skills to collaborate with cross-functional teams.
Posted 2 months ago
5 - 10 years
20 - 30 Lacs
Hyderabad
Hybrid
Experience: 5 to 10 Years
Location: Hyderabad
Notice Period: Immediate to 30 Days

Skills Required:
- 5+ years of experience as a Data Engineer or in a similar role working with large data sets and ELT/ETL processes
- 7+ years of industry experience in software development
- Knowledge and practical use of a wide variety of RDBMS technologies such as MySQL, Postgres, SQL Server, or Oracle
- Use of cloud-based data warehouse technologies including Snowflake and AWS Redshift
- Strong SQL experience with an emphasis on analytic queries and performance
- Experience with various "NoSQL" technologies such as MongoDB or Elasticsearch
- Familiarity with either native database or external change-data-capture technologies
- Practical use of various data formats such as CSV, XML, JSON, and Parquet
- Use of data flow and transformation tools such as Apache NiFi or Talend
- Implementation of ELT processes in languages such as Java, Python, or NodeJS
- Use of large, shared data stores such as Amazon S3 or Hadoop File System
- Thorough and practical use of various data warehouse data schemas (Snowflake, Star)

If interested, please share your updated resume to arampally@jaggaer.com with the below details:
- Total years of experience:
- Years of experience as a Data Engineer:
- Years of experience in MySQL:
- Years of experience in Snowflake, AWS Redshift:
- Current CTC:
- Expected CTC:
- Notice Period:
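Since the listing calls out practical use of formats such as CSV, JSON, and Parquet, here is a small hedged sketch of a CSV-to-Parquet conversion with pandas; the paths and column name are placeholders, reading from S3 assumes the s3fs package is installed, and Parquet output assumes pyarrow is available.

```python
# Hedged sketch: convert a landed CSV file to analytics-friendly Parquet.
import pandas as pd

# Hypothetical S3 paths; pandas routes s3:// URLs through s3fs.
df = pd.read_csv("s3://example-bucket/landing/events.csv")

# Normalize a timestamp column (assumed name) before persisting.
df["event_ts"] = pd.to_datetime(df["event_ts"])

# Parquet keeps column types and compresses well for warehouse loads.
df.to_parquet("s3://example-bucket/curated/events.parquet", index=False)
```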
Posted 2 months ago
6 - 10 years
11 - 21 Lacs
Bengaluru
Hybrid
RESPONSIBILITIES:
- Choosing the right technologies for our use cases; deploying and operating them.
- Setting up data stores: structured, semi-structured, and unstructured.
- Securing data at rest via encryption.
- Implementing tooling to securely access multiple data sources.
- Implementing solutions to run real-time analytics.
- Using container technologies.

Required Experience & Skills:
- Experience in one of the following: Elasticsearch, Cassandra, Hadoop, MongoDB
- Experience in Spark and Presto/Trino
- Experience with microservice-based architectures
- Experience with Kubernetes
- Experience with Unix/Linux environments is a plus
- Experience with Agile/Scrum development methodologies is a plus
- Cloud knowledge a big plus (AWS/GCP, Kubernetes/Docker)
- Be nice, respectful, and able to work in a team
- Willingness to learn
Posted 2 months ago
6 - 10 years
10 - 20 Lacs
Hyderabad
Work from Office
We're looking for a Data Engineer to join our team. We need someone who's great at building data pipelines and understands how data works. You'll be using tools like DBT and Snowflake a lot. The most important thing for us is that you've worked with all sorts of data sources, not just files. Think different cloud systems, other companies' databases, and various online tools.

What you'll do:
- Build and manage how data flows into our system using DBT, storing it in Snowflake.
- Design how our data is organized so it's easy to use for reports and analysis.
- Fix any data problems that come up.
- Connect to and get data from many different places, like:
  - Cloud apps (e.g., Salesforce, marketing tools)
  - Various databases (SQL Server, Oracle, etc.)
  - Streaming data
  - Different file types (CSV, JSON, etc.)
  - Other business systems
- Help us improve our data setup.

What you need:
- Experience as a Data Engineer.
- Strong skills with DBT (Data Build Tool).
- Solid experience with Snowflake.
- Must-have: experience working with many different types of data sources, especially cloud systems and other company databases, not just files.
- Good at data modeling (organizing data).
- Comfortable with SQL.
- Good at solving problems.
Posted 2 months ago
11 - 19 years
25 - 40 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Greetings from Wilco Source, a CitiusTech company!!!

Position: Senior Data Engineer
Location: Chennai/Hyderabad/Bangalore/Pune/Gurgaon/Noida/Mumbai

Job Description:
- In-depth knowledge of SQL is a must, along with cloud-based technologies.
- Good understanding of the healthcare and life sciences domain is a must; patient support domain experience is nice to have. Candidates from companies such as Novartis, J&J, Pfizer, or Sanofi are preferable.
- Good data analysis skills are a must.
- Experience with data warehousing concepts, data modeling, and metadata management.
- Design, develop, test, and deploy enterprise-level applications using the Snowflake platform.
- Good communication skills are a must, and the candidate should be able to provide a 4-hour overlap with EST timings (until ~09:30 PM IST).
- Good understanding of Power BI; hands-on Power BI experience is nice to have.
Posted 2 months ago
5 - 10 years
16 - 31 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Hybrid
Greetings from Accion Labs!!!

We are looking for a Sr. Data Engineer.

Location: Bangalore, Mumbai, Pune, Hyderabad, Noida
Experience: 5+ years
Notice Period: Immediate joiners / 15 days

Any references would be appreciated!!!

Job Description / Skill set:
- Python/Spark/PySpark/Pandas
- SQL
- AWS EMR/Glue/S3/RDS/Redshift/Lambda/SQS/AWS Step Functions/EventBridge
- Real-time analytics
Posted 2 months ago
10 - 18 years
12 - 22 Lacs
Pune, Bengaluru
Hybrid
Hi,

We are hiring for the role of AWS Data Engineer with one of the leading organizations for Bangalore & Pune.

Experience: 10+ Years
Location: Bangalore & Pune
CTC: Best in the industry

Job Description - Technical Skills:
- PySpark coding skills
- Proficiency in AWS data engineering services
- Experience in designing data pipelines & data lakes

If interested, kindly share your resume at nupur.tyagi@mounttalent.com
Posted 2 months ago
5 - 10 years
9 - 19 Lacs
Bangalore Rural, Bengaluru
Work from Office
Job Summary:
We are seeking an experienced Data Engineer with expertise in Snowflake and PL/SQL to design, develop, and optimize scalable data solutions. The ideal candidate will be responsible for building robust data pipelines, managing integrations, and ensuring efficient data processing within the Snowflake environment. This role requires a strong background in SQL, data modeling, and ETL processes, along with the ability to troubleshoot performance issues and collaborate with cross-functional teams.

Responsibilities:
- Design, develop, and maintain data pipelines in Snowflake to support business analytics and reporting.
- Write optimized PL/SQL queries, stored procedures, and scripts for efficient data processing and transformation.
- Integrate and manage data from various structured and unstructured sources into the Snowflake data platform.
- Optimize Snowflake performance by tuning queries, managing workloads, and implementing best practices.
- Collaborate with data architects, analysts, and business teams to develop scalable and high-performing data solutions.
- Ensure data security, integrity, and governance while handling large-scale datasets.
- Automate and streamline ETL/ELT workflows for improved efficiency and data consistency.
- Monitor, troubleshoot, and resolve data quality issues, performance bottlenecks, and system failures.
- Stay updated on Snowflake advancements, best practices, and industry trends to enhance data engineering capabilities.

Required Skills:
- Bachelor's degree in Engineering, Computer Science, Information Technology, or a related field.
- Strong experience in Snowflake, including designing, implementing, and optimizing Snowflake-based solutions.
- Hands-on expertise in PL/SQL, including writing and optimizing complex queries, stored procedures, and functions.
- Proven ability to work with large datasets, data warehousing concepts, and cloud-based data management.
- Proficiency in SQL, data modeling, and database performance tuning.
- Experience with ETL/ELT processes and integrating data from multiple sources.
- Familiarity with cloud platforms such as AWS, Azure, or GCP is an added advantage.
- Snowflake certifications (e.g., SnowPro Core, SnowPro Advanced) are a plus.
- Strong analytical skills, problem-solving abilities, and attention to detail.
Posted 2 months ago
6 - 11 years
11 - 20 Lacs
Hyderabad
Work from Office
We are hiring a Data Engineer for the Hyderabad location. Please find the job description below.

Role & responsibilities
- 6+ years of experience in data engineering, specifically in cloud environments like AWS.
- Proficiency in Python and PySpark for data processing and transformation tasks.
- Solid experience with AWS Glue for ETL jobs and managing data workflows.
- Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
- Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.
- Deep understanding of ETL concepts and best practices.
- Strong knowledge of SQL for querying and manipulating relational and semi-structured data.
- Experience with Data Warehousing and Big Data technologies, specifically within AWS.
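A minimal sketch of an AWS Glue ETL job of the kind this listing describes: read a catalogued table as a DynamicFrame, filter with PySpark, and write Parquet to S3. The database, table, and bucket names are hypothetical placeholders.

```python
# Hedged AWS Glue job sketch: catalog read -> PySpark filter -> Parquet write.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Glue passes the job name (and any custom args) on the command line.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a (hypothetical) catalogued source table as a DynamicFrame.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Convert to a Spark DataFrame for standard PySpark transformations.
df = dyf.toDF().filter(F.col("status") == "COMPLETE")

# Persist the curated output as Parquet in a hypothetical bucket.
df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```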
Posted 2 months ago