
214 Data Engineer Jobs - Page 4

JobPe aggregates results for easy access to applications, but you apply directly on the original job portal.

11 - 19 years

25 - 40 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Naukri logo

Greetings from Wilco Source, a CitiusTech company!!! Position: Senior Data Engineer. Location: Chennai/Hyderabad/Bangalore/Pune/Gurgaon/Noida/Mumbai. Job Description: Deep knowledge of SQL and cloud-based technologies is a must. A good understanding of the healthcare and life sciences domain is a must; patient support domain knowledge is nice to have. Candidates from companies such as Novartis, J&J, Pfizer, and Sanofi are preferred. Good data analysis skills are a must. Experience with data warehousing concepts, data modelling, and metadata management. Design, develop, test, and deploy enterprise-level applications using the Snowflake platform. Good communication skills are a must, and the ability to provide a 4-hour overlap with EST timings (until ~09:30 PM IST) is a must. Good understanding of Power BI; hands-on Power BI experience is nice to have.

Posted 1 month ago

Apply

5 - 10 years

16 - 31 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Greetings from Accion Labs!!! We are looking for a Sr Data Engineer. Location: Bangalore, Mumbai, Pune, Hyderabad, Noida. Experience: 5+ years. Notice period: immediate joiners / 15 days. Any references would be appreciated!!! Job Description / Skill set: Python/Spark/PySpark/Pandas; SQL; AWS EMR/Glue/S3/RDS/Redshift/Lambda/SQS/Step Functions/EventBridge; real-time analytics.

Posted 1 month ago

Apply

10 - 18 years

12 - 22 Lacs

Pune, Bengaluru

Hybrid

Hi, We are hiring for the role of AWS Data Engineer with one of the leading organizations, for Bangalore & Pune. Experience: 10+ years. Location: Bangalore & Pune. CTC: best in the industry. Job Description - Technical Skills: PySpark coding skills. Proficiency in AWS data engineering services. Experience in designing data pipelines & data lakes. If interested, kindly share your resume at nupur.tyagi@mounttalent.com.

Posted 1 month ago

Apply

5 - 10 years

9 - 19 Lacs

Bangalore Rural, Bengaluru

Work from Office

Job Summary: We are seeking an experienced Data Engineer with expertise in Snowflake and PL/SQL to design, develop, and optimize scalable data solutions. The ideal candidate will be responsible for building robust data pipelines, managing integrations, and ensuring efficient data processing within the Snowflake environment. This role requires a strong background in SQL, data modeling, and ETL processes, along with the ability to troubleshoot performance issues and collaborate with cross-functional teams.

Responsibilities: Design, develop, and maintain data pipelines in Snowflake to support business analytics and reporting. Write optimized PL/SQL queries, stored procedures, and scripts for efficient data processing and transformation. Integrate and manage data from various structured and unstructured sources into the Snowflake data platform. Optimize Snowflake performance by tuning queries, managing workloads, and implementing best practices. Collaborate with data architects, analysts, and business teams to develop scalable and high-performing data solutions. Ensure data security, integrity, and governance while handling large-scale datasets. Automate and streamline ETL/ELT workflows for improved efficiency and data consistency. Monitor, troubleshoot, and resolve data quality issues, performance bottlenecks, and system failures. Stay updated on Snowflake advancements, best practices, and industry trends to enhance data engineering capabilities.

Required Skills: Bachelor's degree in Engineering, Computer Science, Information Technology, or a related field. Strong experience in Snowflake, including designing, implementing, and optimizing Snowflake-based solutions. Hands-on expertise in PL/SQL, including writing and optimizing complex queries, stored procedures, and functions. Proven ability to work with large datasets, data warehousing concepts, and cloud-based data management. Proficiency in SQL, data modeling, and database performance tuning. Experience with ETL/ELT processes and integrating data from multiple sources. Familiarity with cloud platforms such as AWS, Azure, or GCP is an added advantage. Snowflake certifications (e.g., SnowPro Core, SnowPro Advanced) are a plus. Strong analytical skills, problem-solving abilities, and attention to detail.

Posted 1 month ago

Apply

6 - 11 years

11 - 20 Lacs

Hyderabad

Work from Office

We are hiring a Data Engineer for the Hyderabad location. Please find the job description below. Role & responsibilities: 6+ years of experience in data engineering, specifically in cloud environments like AWS. Proficiency in Python and PySpark for data processing and transformation tasks. Solid experience with AWS Glue for ETL jobs and managing data workflows. Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration. Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2. Deep understanding of ETL concepts and best practices. Strong knowledge of SQL for querying and manipulating relational and semi-structured data. Experience with data warehousing and big data technologies, specifically within AWS.

Posted 1 month ago

Apply

4 - 9 years

10 - 20 Lacs

Bangalore Rural, Bengaluru

Hybrid

We are looking for a skilled and detail-oriented Data Engineer to join our growing data team. You will be responsible for building and maintaining scalable data pipelines, optimizing data systems, and ensuring data is clean, reliable, and ready for analysis. Mandatory skills: Python, AWS (Glue & Lambda), SQL, PySpark, any other cloud. Key Responsibilities: Design, develop, and maintain robust ETL/ELT pipelines. Work with structured and unstructured data from multiple sources. Build and maintain data warehouse/data lake infrastructure. Ensure data quality, integrity, and governance practices. Collaborate with data scientists, analysts, and other engineers to deliver data solutions. Optimize data workflows for performance and scalability. Monitor and troubleshoot data pipeline issues in real time. Required Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Strong experience with SQL and relational databases (e.g., PostgreSQL, MySQL). Proficiency in Python and PySpark. Experience with cloud platforms (e.g., AWS, GCP, Azure). Familiarity with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery).

Posted 1 month ago

Apply

7 - 12 years

13 - 23 Lacs

Hyderabad

Hybrid

Hi, We are hiring for one of our clients for a C2H role. Designation: Data Engineer. Education: Graduate. Experience: 6+ years. Location: Hyderabad. Skills: AWS, data engineering, Python, Scala, Java, Kafka, Databricks, etc. Notice period: 15 days / immediate joiner. Regards, Ashwini.

Posted 1 month ago

Apply

7 - 12 years

11 - 21 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Dear Candidates, We are looking for Databricks, data analytics, data engineering, data lake, AWS, and PySpark professionals for an MNC client. Notice period: immediate / max 15 days. Location: Hyderabad. Please find the job description below: Proficiency in the Databricks Unified Data Analytics Platform. Good-to-have skills: experience with Python (programming language). Strong understanding of data analytics and data processing. Experience in building and configuring applications. Knowledge of the software development lifecycle. Ability to troubleshoot and debug applications. Additional information: the candidate should have a minimum of 7.5 years of experience in the Databricks Unified Data Analytics Platform.

Key Responsibilities: Work on client projects to deliver AWS, PySpark, and Databricks based data engineering & analytics solutions. Build and operate very large data warehouses or data lakes. ETL optimization, design, coding, and tuning of big data processes using Apache Spark. Build data pipelines & applications to stream and process datasets at low latencies. Show efficiency in handling data: tracking data lineage, ensuring data quality, and improving discoverability of data.

Technical Experience: Minimum of 5 years of experience in Databricks engineering solutions on the AWS cloud platform using PySpark, Databricks SQL, and data pipelines using Delta Lake. Minimum of 5 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture & delivery. Minimum of 2 years of experience in real-time streaming using Kafka/Kinesis. Minimum of 4 years of experience in one or more programming languages: Python, Java, Scala. Experience using Airflow for data pipelines in at least 1 project. 1+ years of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform.

Professional Attributes: Ready to work in B shift (12 PM - 10 PM). Client-facing skills: solid experience working in client-facing environments and building trusted relationships with client stakeholders. Good critical thinking and problem-solving abilities. Healthcare knowledge. Good communication skills. Educational Qualification: Bachelor of Engineering / Bachelor of Technology. Additional Information: Data Engineering, PySpark, AWS, Python, Apache Spark, Databricks, Hadoop; certifications in Databricks, Python, or AWS. The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform. This position is based at our Hyderabad office. 15 years of full-time education is required. Kindly mention the above details if you are interested in this position, and share your profile with akdevi@crownsolution.com. Regards, devii

Posted 1 month ago

Apply

5 - 10 years

17 - 32 Lacs

Pune

Hybrid

We are hiring for multiple roles in the Data Engineering function and welcome applications at all levels of experience. Location: Pune.

About bp/team: bp's Technology organization is the central organization for all software and platform development. We build all the technology that powers bp's businesses, from upstream energy production to downstream energy delivery to our customers. We have a variety of teams depending on your areas of interest, from infrastructure and backend services through to customer-facing web and native applications. We encourage our teams to adapt quickly by using native AWS and Azure services, including serverless, and enable them to pick the best technology for a given problem. This is meant to empower our software and platform engineers while allowing them to learn and develop themselves.

Responsibilities: Part of a cross-disciplinary team, working closely with other data engineers, software engineers, data scientists, data managers, and business partners. Architects, designs, implements, and maintains reliable and scalable data infrastructure to move, process, and serve data. Writes, deploys, and maintains software to build, integrate, manage, maintain, and quality-assure data at bp. Adheres to and advocates for software engineering standard methodologies (e.g. technical design, technical design review, unit testing, monitoring & alerting, checking in code, code review, documentation). Responsible for deploying secure and well-tested software that meets privacy and compliance requirements; develops, maintains, and improves the CI/CD pipeline. Responsible for service reliability and following site-reliability engineering best practices: on-call rotations for services they maintain, and defining and maintaining SLAs. Designs, builds, deploys, and maintains infrastructure as code. Containerizes server deployments. Actively contributes to improving developer velocity. Mentors others.
Qualifications: BS degree or equivalent experience in computer science or a related field. Deep, hands-on experience designing, planning, building, productionizing, maintaining, and documenting reliable and scalable data infrastructure and data products in complex environments. Development experience in one or more object-oriented programming languages (e.g. Python, Scala, Java, C#). Sophisticated database and SQL knowledge. Experience designing and implementing large-scale distributed data systems. Deep knowledge and hands-on experience in technologies across all data lifecycle stages. Strong stakeholder management and the ability to lead initiatives through technical influence. A continuous learning and improvement mindset.

Desired: No prior experience in the energy industry is required.

Travel Requirement: Negligible travel should be expected with this role. Relocation Assistance: This role is eligible for relocation within country. Remote Type: This position is a hybrid of office/remote working. Skills: Commercial Acumen, Communication, Data Analysis, Data cleansing and transformation, Data domain knowledge, Data Integration, Data Management, Data Manipulation, Data Sourcing, Data strategy and governance, Data Structures and Algorithms, Data visualization and interpretation, Digital Security, Extract, transform and load, Group Problem Solving.

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status, or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.).
If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.

Posted 1 month ago

Apply

5 - 10 years

15 - 30 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Skills - Mandatory: SQL, Python, Databricks, Spark/PySpark. Good to have: MongoDB, Dataiku DSS, Databricks. Experience in data processing using Python/Scala. Advanced working SQL knowledge and expertise using relational databases. Need early joiners. Required Candidate Profile: ETL development tools like Databricks/Airflow/Snowflake. Expert in building and optimizing big data pipelines, architectures, and data sets. Proficient in big data tools and the ecosystem.

Posted 1 month ago

Apply

4 - 6 years

14 - 22 Lacs

Chennai

Work from Office

Maintain metadata, data dictionaries, and lineage documentation. Build and maintain scalable data pipelines and ETL processes for BI needs. Optimize data models and ensure clean, reliable, and well-structured data for Power BI. Integrate data. Required Candidate Profile: Strong SQL development; experience with Power BI dataset structuring and integration. Experience in Python for ETL, automation, APIs, and scripting for data ingestion. Azure, SSIS, and cloud-based data services.

Posted 1 month ago

Apply

8 - 13 years

12 - 22 Lacs

Gurugram

Work from Office

Data & Information Architecture Lead 8 to 15 years - Gurgaon Summary An Excellent opportunity for Data Architect professionals with expertise in Data Engineering, Analytics, AWS and Database. Location Gurgaon Your Future Employer : A leading financial services provider specializing in delivering innovative and tailored solutions to meet the diverse needs of our clients and offer a wide range of services, including investment management, risk analysis, and financial consulting. Responsibilities Design and optimize architecture of end-to-end data fabric inclusive of data lake, data stores and EDW in alignment with EA guidelines and standards for cataloging and maintaining data repositories Undertake detailed analysis of the information management requirements across all systems, platforms & applications to guide the development of info. management standards Lead the design of the information architecture, across multiple data types working closely with various business partners/consumers, MIS team, AI/ML team and other departments to design, deliver and govern future proof data assets and solutions Design and ensure delivery excellence for a) large & complex data transformation programs, b) small and nimble data initiatives to realize quick gains, c) work with OEMs and Partners to bring the best tools and delivery methods. Drive data domain modeling, data engineering and data resiliency design standards across the micro services and analytics application fabric for autonomy, agility and scale Requirements Deep understanding of the data and information architecture discipline, processes, concepts and best practices Hands on expertise in building and implementing data architecture for large enterprises Proven architecture modelling skills, strong analytics and reporting experience Strong Data Design, management and maintenance experience Strong experience on data modelling tools Extensive experience in areas of cloud native lake technologies e.g. 
AWS Native Lake Solution.

Posted 1 month ago

Apply

12 - 16 years

25 - 35 Lacs

Bengaluru, Delhi / NCR, Mumbai (All Areas)

Hybrid

We're looking for an experienced Data Engineer Architect with expertise in AWS technologies to join our team in India. If you have a passion for analytics and a proven track record of designing and implementing complex data solutions, we want to hear from you. Location: Noida/Gurgaon/Bangalore/Mumbai/Pune. Your Future Employer: Join a dynamic and inclusive organization at the forefront of technology, where your expertise will be valued and your career development will be a top priority. Responsibilities: Designing and implementing robust, scalable data pipelines and architectures using AWS technologies. Collaborating with cross-functional teams to understand data requirements and develop solutions to meet business needs. Optimizing data infrastructure and processes for improved performance and efficiency. Providing technical leadership and mentorship to junior team members, and driving best practices in data engineering. Requirements: 12+ years of experience in data engineering, with a focus on AWS technologies. Strong proficiency in analytics and data processing tools such as SQL, Spark, and Hadoop. Proven track record of designing and implementing large-scale data solutions. Experience in leading and mentoring teams, and driving technical best practices. Excellent communication skills and the ability to collaborate effectively with stakeholders at all levels. What's in it for you: Competitive compensation and benefits package. Opportunity to work with cutting-edge technologies and make a real impact on business outcomes. Career growth and development in a supportive and inclusive work environment. Reach us: If you feel this opportunity is well aligned with your career progression plans, please feel free to reach me with your updated profile at isha.joshi@crescendogroup.in. Disclaimer: Crescendo Global specializes in senior to C-level niche recruitment.
We are passionate about empowering job seekers and employers with an engaging, memorable job search and leadership hiring experience. Crescendo Global does not discriminate on the basis of race, religion, color, origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Note: We receive a lot of applications on a daily basis, so it becomes difficult for us to get back to each candidate. Please assume that your profile has not been shortlisted if you don't hear back from us within 1 week. Your patience is highly appreciated. Profile keywords: Data Engineer, Architect, AWS, Analytics, SQL, Spark, Hadoop, Kafka, Crescendo Global.

Posted 1 month ago

Apply

4 - 9 years

18 - 25 Lacs

Bengaluru

Hybrid

Skill required: Data Engineers - Azure. Designation: Sr Analyst / Consultant. Job Location: Bengaluru. Qualifications: BE/BTech. Years of Experience: 4 - 11 years.

OVERALL PURPOSE OF JOB: Understand client requirements and build ETL solutions using Azure Data Factory, Azure Databricks & PySpark. Build solutions in such a way that they can absorb client change requests very easily. Find innovative ways to accomplish tasks and handle multiple projects simultaneously and independently. Work with data and the appropriate teams to effectively source required data. Identify data gaps and work with client teams to effectively communicate the findings to stakeholders/clients.

Responsibilities: Develop ETL solutions to populate a centralized repository by integrating data from various data sources. Create data pipelines, data flows, and data models according to the business requirements. Implement all transformations according to business needs. Identify data gaps in the data lake and work with relevant data/client teams to get the data required for dashboarding/reporting. Strong experience working on the Azure data platform, Azure Data Factory, and Azure Databricks. Strong experience working on ETL components and scripting languages like PySpark and Python. Experience in creating pipelines, alerts, email notifications, and scheduled jobs. Exposure to development/staging/production environments. Provide support in creating, monitoring, and troubleshooting scheduled jobs. Effectively work with clients and handle client interactions.

Skills Required: Bachelor's degree in Engineering or Science, or equivalent, with at least 4-11 years of overall experience in data management including data integration, modeling & optimization. Minimum 4 years of experience working on Azure cloud, Azure Data Factory, and Azure Databricks. Minimum 3-4 years of experience in PySpark, Python, etc. for data ETL. In-depth understanding of data warehouse and ETL concepts and modeling principles. Strong ability to design, build, and manage data. Strong understanding of data integration. Strong analytical and problem-solving skills. Strong communication & client interaction skills. Ability to design databases to store the huge data volumes necessary for reporting & dashboarding. Ability and willingness to acquire knowledge of new technologies; good analytical and interpersonal skills with the ability to interact with individuals at all levels. Interested candidates can reach Neha at 9599788568 / neha.singh@mounttalent.com.

Posted 1 month ago

Apply

4 - 9 years

18 - 25 Lacs

Pune, Gurugram, Chennai

Hybrid

Skill required: Data Engineers - Python/PySpark. Designation: Sr Analyst / Consultant. Job Location: Bangalore/Gurgaon/Chennai/Pune/Mumbai. Qualifications: BE/BTech. Years of Experience: 4 - 11 years.

What would you do? You will be aligned with the Insights & Intelligence vertical and help us generate insights by leveraging the latest Artificial Intelligence (AI) and Analytics techniques to deliver value to our clients. You will also help us apply your expertise in building world-class solutions, conquering business problems, and addressing technical challenges using AI platforms and technologies. You will be required to utilize existing frameworks, standards, and patterns to create the architectural foundation and services necessary for AI applications that scale from multi-user to enterprise-class, and demonstrate yourself as an expert by actively blogging, publishing research papers, and creating awareness in this emerging area. You will be working as part of the Data Management team, which is accountable for data management including a fully scalable relational database management system and ETL (Extract, Transform and Load): a set of methods and tools to extract data from outside sources, transform it to fit an organization's business needs, and load it into a target such as the organization's data warehouse. The Python Programming Language team focuses on multiple programming paradigms, including procedural, object-oriented, and functional programming. The team is responsible for writing logical code for different projects, taking a constructive and object-oriented approach.

What are we looking for? Problem-solving skills, prioritization of workload, commitment to quality. PySpark, Python, SQL (Structured Query Language).

Roles and Responsibilities: In this role, you need to analyze and solve moderately complex problems. You are required to create new solutions, leveraging and, where needed, adapting existing methods and procedures. You are required to understand the strategic direction set by senior management, clearly communicate team goals and deliverables, and keep the team updated on change. Interested candidates can reach Neha at 9599788568 / neha.singh@mounttalent.com.

Posted 1 month ago

Apply

6 - 8 years

5 - 15 Lacs

Hyderabad

Hybrid

CGI is looking for a talented and motivated Data Engineer with strong expertise in Python, Apache Spark, HDFS, and MongoDB to build and manage scalable, efficient, and reliable data pipelines and infrastructure. You'll play a key role in transforming raw data into actionable insights, working closely with data scientists, analysts, and business teams. Core Duties & Responsibilities: Data Pipeline Development: build and maintain scalable, high-performance data pipelines using Python and Apache Spark; handle ingestion, transformation, and preparation of large-scale datasets from diverse sources. Data Infrastructure Management: manage distributed storage systems like HDFS and MongoDB to ensure reliable and efficient data access; monitor and tune the performance of data infrastructure for speed, scalability, and availability. Quality & Governance: implement data validation checks and monitoring tools to ensure data accuracy and reliability; enforce data governance, security, and privacy best practices in all engineering processes. Cross-Functional Collaboration: work closely with data scientists, analysts, and business stakeholders to understand data needs and deliver solutions; contribute to team discussions, code reviews, and architectural decisions. Code & Process Optimization: write clean, modular, and well-documented code; debug and optimize workflows, addressing performance bottlenecks as they arise.

Posted 1 month ago

Apply

3 - 6 years

10 - 20 Lacs

Gurugram

Work from Office

About ProcDNA: ProcDNA is a global consulting firm. We fuse design thinking with cutting-edge tech to create game-changing Commercial Analytics and Technology solutions for our clients. We're a passionate team of 275+ across 6 offices, all growing and learning together since our launch during the pandemic. Here, you won't be stuck in a cubicle - you'll be out in the open water, shaping the future with brilliant minds. At ProcDNA, innovation isn't just encouraged, it's ingrained in our DNA.

What we are looking for: As the Associate Engagement Lead, you'll leverage data to unravel complexities and devise strategic solutions that deliver tangible results for our clients. We are seeking an individual who not only possesses the requisite expertise but also thrives in the dynamic landscape of a fast-paced global firm.

What you'll do: Design and implement complex, scalable enterprise data processing and BI reporting solutions. Design, build, and optimize ETL pipelines and underlying code to enhance data warehouse systems. Work towards optimizing the overall costs incurred from system infrastructure, operations, change management, etc. Deliver end-to-end data solutions across multiple infrastructures and applications. Coach, mentor, and manage a team of junior associates, helping them plan tasks effectively. Demonstrate overall client stakeholder and project management skills (driving client meetings, creating realistic project timelines, and planning and managing individual and team tasks). Assist senior leadership in business development proposals focused on technology by providing SME support. Build strong partnerships with other teams to create valuable solutions. Stay up to date with the latest industry trends.

Must have: 3-5 years of experience in designing/building data warehouses and BI reporting with a B.Tech/B.E background. Prior experience managing client stakeholders and junior team members. A background in managing life science clients is mandatory. Proficiency in big data processing and cloud technologies like AWS, Azure, Databricks, PySpark, Hadoop, etc.; proficiency in Informatica is a plus. Extensive hands-on experience working with cloud data warehouses like Redshift, Azure, Snowflake, etc. Proficiency in SQL, data modelling, and designing ETL pipelines is a must. Intermediate to expert-level proficiency in Python. Proficiency in Tableau, Power BI, or Qlik is a must. Should have worked on large datasets and complex data modelling projects. Prior experience in business development activities is mandatory. Domain knowledge of the pharma/healthcare landscape is mandatory.

Posted 1 month ago

Apply

4 - 9 years

16 - 27 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Role & responsibilities: 1. Strong experience as an AWS Data Engineer. 2. Experience in Python/PySpark. 3. Experience in EMR, Glue, Athena, Redshift, Lambda.

Posted 1 month ago

Apply

7 - 12 years

10 - 20 Lacs

Bengaluru

Work from Office

Senior Snowflake Data Engineer. Experience: 7+ years. Location: Bangalore (work from office). Notice period: immediate only. Mandatory skills: Data Engineering, Snowflake, Databricks.

Posted 1 month ago

Apply

11 - 21 years

25 - 40 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested. Relevant Experience: 3 - 20 years. Location: Pan India. Job Description: Skills: GCP, BigQuery, Cloud Composer, Cloud Data Fusion, Python, SQL. 5-20 years of overall experience, mainly in the data engineering space, with 2+ years of hands-on experience in GCP cloud data implementation. Experience working in client-facing roles in a technical capacity as an Architect; must have implementation experience on a GCP-based cloud data project/program as a solution architect. Proficiency in using the Google Cloud Architecture Framework in a data context. Expert knowledge and experience of the core GCP data stack, including BigQuery, Dataproc, Dataflow, Cloud Composer, etc. Exposure to the overall Google tech stack of Looker/Vertex AI/Dataplex, etc. Expert-level knowledge of Spark. Extensive hands-on experience working with data using SQL and Python. Strong experience and understanding of very large-scale data architecture, solutioning, and operationalization of data warehouses, data lakes, and analytics platforms (both cloud and on-premise). Excellent communication skills with the ability to clearly present ideas, concepts, and solutions. If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in, or you can reach me @ 8939853050. With Regards, Sankar G, Sr. Executive - IT Recruitment

Posted 1 month ago

Apply

11 - 18 years

35 - 60 Lacs

Pune, Chennai, Delhi / NCR

Hybrid

Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested. Relevant Experience: 8 - 18 years. Location: Pan India. Job Description: Mandatory experience: Data Program Manager with AWS; a Technical Program Manager who has worked on transformation projects on a private cloud. The candidate should manage the overall delivery, handle dependency resolution with stakeholders and across teams, keep stakeholders informed, and manage expectations. Coordinate with the Performance Engineering team for performance analysis. Understand the impact of design decisions on the total cost of ownership. Coordinate with other teams to ensure information going into designs is accurate and complete. Ensure that all designs are taken through the relevant security approvals. Engage with other teams to ensure any identified design issues are remediated accordingly. Able to manage CAB meetings and release-management exception approvals. Well versed with SAFe Agile processes. If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in. With Regards, Sankar G, Sr. Executive - IT Recruitment

Posted 1 month ago

Apply

8 - 13 years

10 - 20 Lacs

Bengaluru

Work from Office


Hi, greetings from Sun Technology Integrators! This is regarding a job opening with Sun Technology Integrators, Bangalore. Please find the job description below for your reference. If interested, share your updated CV to nandinis@suntechnologies.com ASAP with the following details: current CTC, expected CTC, notice period, current location, and whether you are serving notice / available immediately, along with your experience in Snowflake and Matillion.

Shift timings: 2:00 PM - 11:00 PM (free cab drop facility + food)
Interview process: 1 round (virtual) + final round (F2F)
Please note: Work From Office only (no hybrid or Work From Home). Only serving/immediate candidates can apply.

Mandatory skills: Snowflake, SQL, ETL, Data Ingestion, Data Modeling, Data Warehouse, Python, Matillion, AWS S3, EC2
Preferred skills: SSIR, SSIS, Informatica, Shell Scripting

If any of your friends are looking for a job change, kindly share their references.

Venue Details:
Sun Technology Integrators Pvt Ltd
No. 496, 4th Block, 1st Stage
HBR Layout (a stop ahead of Nagawara towards K. R. Puram)
Bangalore 560043
Company URL: www.suntechnologies.com

Thanks and Regards,
Nandini S | Sr. Technical Recruiter
Sun Technology Integrators Pvt. Ltd.
nandinis@suntechnologies.com
www.suntechnologies.com

Posted 1 month ago

Apply

3 - 8 years

10 - 20 Lacs

Bengaluru

Work from Office


Hi, greetings from Sun Technology Integrators! This is regarding a job opening with Sun Technology Integrators, Bangalore. Please find the job description below for your reference. If interested, share your updated CV to nandinis@suntechnologies.com ASAP with the following details: current CTC, expected CTC, notice period, current location, and whether you are serving notice / available immediately, along with your experience in Snowflake and Matillion.

Shift timings: 2:00 PM - 11:00 PM (free cab drop facility + food)
Interview process: 2 rounds (virtual) + final round (F2F)
Please note: Work From Office only (no hybrid or Work From Home). Only serving/immediate candidates can apply.

Mandatory skills: Snowflake, SQL, ETL, Data Ingestion, Data Modeling, Data Warehouse, Python, Matillion, AWS S3, EC2
Preferred skills: SSIR, SSIS, Informatica, Shell Scripting

If any of your friends are looking for a job change, kindly share their references.

Venue Details:
Sun Technology Integrators Pvt Ltd
No. 496, 4th Block, 1st Stage
HBR Layout (a stop ahead of Nagawara towards K. R. Puram)
Bangalore 560043
Company URL: www.suntechnologies.com

Thanks and Regards,
Nandini S | Sr. Technical Recruiter
Sun Technology Integrators Pvt. Ltd.
nandinis@suntechnologies.com
www.suntechnologies.com

Posted 1 month ago

Apply

10 - 20 years

20 - 30 Lacs

Hyderabad

Remote


Note: Looking for immediate joiners; timings 5:30 PM - 1:30 AM IST (remote).

Project Overview: This is one of the workstreams of Project Acuity. The Client Data Platform includes a centralized web application for internal platform users across the Recruitment Business, supporting marketing and operational use cases. Building a database at the patient level will significantly benefit the client's future reporting capabilities and engagement with external stakeholders.

Role Scope / Deliverables: We are looking for an experienced AWS Data Engineer to join our dynamic team, responsible for developing, managing, and optimizing data architectures. The ideal candidate will have extensive experience in integrating large-scale datasets and building scalable, automated data pipelines, and should have experience with AWS ETL services (such as AWS Glue, Lambda, and Data Pipeline) to handle data processing and integration tasks effectively.

Must-have skills:
- Proficiency in programming languages such as Python, Scala, or similar
- Strong experience in data classification, including the identification of PII data entities
- Ability to leverage AWS services (e.g., SageMaker, Comprehend, Entity Resolution) to solve complex data-related challenges
- Strong analytical and problem-solving skills, with the ability to innovate and develop new approaches to data engineering
- Experience with AWS ETL services (such as AWS Glue, Lambda, and Data Pipeline) to handle data processing and integration tasks effectively
- Experience with core AWS services, including IAM, VPC, EC2, S3, RDS, Lambda, CloudWatch, and CloudTrail

Nice-to-have skills:
- Experience with data privacy and compliance requirements, especially related to PII data
- Familiarity with advanced data indexing techniques, vector databases, and other technologies that improve the quality of outputs
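To make the PII-classification requirement concrete, here is a minimal, hypothetical sketch in plain Python. It uses simple regular expressions; a production pipeline would instead call a managed service such as AWS Comprehend, which detects many more entity types with higher accuracy. The patterns and the `classify_pii` helper are illustrative names, not part of any AWS API.

```python
import re

# Hypothetical minimal PII detector. Real pipelines would use a managed
# service (e.g., AWS Comprehend) instead of hand-written regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_pii(text: str) -> list[tuple[str, str]]:
    """Return (entity_type, matched_text) pairs found in `text`."""
    hits = []
    for label, pattern in PII_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((label, match.group()))
    return hits

record = "Contact John at john.doe@example.com or 555-123-4567."
print(classify_pii(record))
```

In an AWS Glue or Lambda job, a function like this would typically run per record inside the transform step, with flagged fields masked or routed to a restricted store.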

Posted 1 month ago

Apply

6 - 10 years

15 - 20 Lacs

Gurugram

Remote


Title: Looker Developer
Team: Data Engineering
Work Mode: Remote
Shift Time: 3:00 PM - 12:00 AM IST
Contract: 12 months

Key Responsibilities:
- Collaborate closely with engineers, architects, business analysts, product owners, and other team members to understand requirements and develop test strategies.
- LookML proficiency: LookML is Looker's proprietary language for defining data models. Looker developers need to be able to write, debug, and maintain LookML code to create and manage data models, explores, and dashboards.
- Data modeling expertise: Understanding how to structure and organize data within Looker is essential. This involves mapping database schemas to LookML, creating views, and defining measures and dimensions.
- SQL knowledge: Looker generates SQL queries under the hood. Developers need to be able to write SQL to understand the data, debug queries, and potentially extend LookML with custom SQL.
- Looker environment: Familiarity with the Looker interface, including the IDE, the LookML Validator, and SQL Runner, is necessary for efficient development.

Education and/or Experience:
- Bachelor's degree in MIS, Computer Science, Information Technology, or equivalent required
- 6+ years of IT industry experience in the data management field
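The relationship between LookML views (dimensions/measures) and the SQL Looker generates can be sketched roughly as follows. This is a simplified Python illustration of the concept, not Looker's actual code generation; the `view` dictionary and `build_query` function are hypothetical stand-ins for a LookML view and an Explore query.

```python
# Hypothetical sketch: a LookML-style view maps column expressions to
# named dimensions and aggregate measures; querying it produces grouped SQL.
view = {
    "name": "orders",
    "sql_table_name": "analytics.orders",
    "dimensions": {"status": "status", "created_date": "DATE(created_at)"},
    "measures": {"order_count": "COUNT(*)", "total_revenue": "SUM(amount)"},
}

def build_query(view: dict, dims: list[str], meas: list[str]) -> str:
    """Render a GROUP BY query from the chosen dimensions and measures."""
    select = [f"{view['dimensions'][d]} AS {d}" for d in dims]
    select += [f"{view['measures'][m]} AS {m}" for m in meas]
    group = ", ".join(str(i + 1) for i in range(len(dims)))
    return (f"SELECT {', '.join(select)} "
            f"FROM {view['sql_table_name']} GROUP BY {group}")

print(build_query(view, ["status"], ["order_count", "total_revenue"]))
```

The point of the sketch is the division of labor the posting describes: the developer defines dimensions and measures once in the model, and the tool assembles the SELECT/GROUP BY for each user query.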

Posted 1 month ago

Apply