4 - 9 years
10 - 18 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
We are looking for a passionate Data Engineer with a strong background in Azure, Python, and SQL. The ideal candidate will have at least 4 years of relevant experience and will be based in Bangalore or Mumbai. You will have the opportunity to work with a leading organization and contribute to its data engineering and analytics initiatives.

Location: Bangalore/Mumbai

Your Future Employer: Our client is a leading organization in the [specific industry/sector], known for its innovative and forward-thinking approach. They offer a collaborative work environment and ample opportunities for professional growth and skill development.

Responsibilities:
- Design, build, and maintain scalable data pipelines and architectures on the Azure platform (a minimal sketch follows below)
- Collaborate with cross-functional teams to understand data requirements and develop solutions
- Optimize and troubleshoot data processes for performance and reliability
- Implement best practices for data security and compliance
- Contribute to the development and maintenance of data models and frameworks
- Stay updated with the latest trends and technologies in data engineering and analytics

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 4+ years of experience in data engineering, with strong proficiency in Azure, Python, and SQL
- Hands-on experience building and optimizing big data pipelines and architectures
- Knowledge of data modeling, ETL processes, and data warehousing concepts
- Strong communication and teamwork skills
- Relevant certifications in Azure and data engineering are a plus

What's in it for you:
- Competitive compensation package
- Opportunity to work with a reputable organization and contribute to impactful projects
- Professional development and training opportunities
- Collaborative and inclusive work culture

Reach us: If you feel this opportunity is well aligned with your career progression plans, please feel free to reach me with your updated profile at isha.joshi@crescendogroup.in

Disclaimer: Crescendo Global specializes in senior to C-level niche recruitment. We are passionate about empowering job seekers and employers with an engaging, memorable job search and leadership hiring experience. Crescendo Global does not discriminate on the basis of race, religion, color, origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Note: We receive a large number of applications daily, so it is difficult for us to get back to each candidate. Please assume your profile has not been shortlisted if you do not hear back from us within one week. Your patience is highly appreciated.

Profile keywords: Analytics, Data Engineering, Data Engineer, Azure Data Factory, Databricks, Logic Apps, SQL, Azure Synapse
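For illustration only, a minimal PySpark sketch of the kind of Azure pipeline step this role describes. The abfss:// paths, container names, and column names are hypothetical, and authentication setup is omitted.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal sketch: read raw CSVs from ADLS Gen2, standardize, write curated Parquet.
# The abfss:// paths and column names are hypothetical placeholders.
spark = SparkSession.builder.appName("curate-orders").getOrCreate()

raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("abfss://raw@examplestore.dfs.core.windows.net/orders/"))

curated = (raw
           .dropDuplicates(["order_id"])                       # basic de-duplication
           .withColumn("order_date", F.to_date("order_date"))  # normalize types
           .filter(F.col("amount") > 0))                       # drop invalid rows

(curated.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("abfss://curated@examplestore.dfs.core.windows.net/orders/"))
```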
Posted 2 months ago
5 - 10 years
3 - 8 Lacs
Chennai, Pune, Noida
Work from Office
Candidates can share their resumes at deepali.rawat@rsystems.com

Data Engineer / Developer with 6+ years of relevant working experience in the following skills:
- Azure Data Factory
- Azure Databricks
- Microsoft Fabric
- Complex SQL query development (a small sketch follows below)
- DAX
- Data warehouse design and development
- Experience handling complex data flows, pipelines, and API call requests
- Excellent communication skills
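To make the "complex SQL query development" line concrete, a small hedged example: a CTE plus a window function in Spark SQL, run PySpark-style as it would be in Databricks. Table and column names are invented for the example.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cte-demo").getOrCreate()

# Toy data standing in for a warehouse table; names are hypothetical.
spark.createDataFrame(
    [(1, "2024-01-05", 120.0), (1, "2024-02-11", 80.0), (2, "2024-01-20", 200.0)],
    ["customer_id", "order_date", "amount"],
).createOrReplaceTempView("orders")

# A CTE plus window function: each customer's months ranked by spend.
monthly_rank = spark.sql("""
    WITH monthly AS (
        SELECT customer_id,
               date_trunc('MONTH', CAST(order_date AS DATE)) AS month,
               SUM(amount) AS total
        FROM orders
        GROUP BY customer_id, date_trunc('MONTH', CAST(order_date AS DATE))
    )
    SELECT customer_id, month, total,
           RANK() OVER (PARTITION BY customer_id ORDER BY total DESC) AS spend_rank
    FROM monthly
""")
monthly_rank.show()
```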
Posted 2 months ago
5 - 10 years
22 - 30 Lacs
Chennai, Mumbai, Bengaluru
Work from Office
- Design and implement data models and data architecture for both structured and unstructured data
- Build data quality rules, data governance practices, and tooling from the start (a small sketch follows below)
- Model complex business and functional processes into logical and physical data models
- Oversee the design, development, and maintenance of ETL and ELT processes
- Work closely with business units and other technology teams to gather data integration and reporting requirements
- Continuously assess and optimize existing data pipelines for performance, reliability, and cost-effectiveness
- Evaluate and implement new tools and technologies that can improve data engineering processes
- Ensure thorough documentation of data processes, systems, and architecture

- Proficiency in SQL, and experience with programming languages like Python or Scala
- Familiarity with data warehousing solutions (e.g., Snowflake) and related data technologies (e.g., Apache Spark, dbt)
- Experience with cloud platforms (preferably Azure)
- Strong understanding of data modeling techniques and principles
- Ability to manage multiple projects, prioritize tasks, and meet deadlines
- Strong verbal and written communication skills to articulate complex concepts
- Ability to work collaboratively with technical and non-technical stakeholders
- Proficiency in troubleshooting data issues and optimizing data workflows
- Experience working with buy-side financial services firms
- Familiarity with financial products like Equities and Fixed Income, and business functions like Accounting, Risk Management, Regulatory Reporting, etc.
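One concrete reading of the "build data quality rules" item, sketched in PySpark: lightweight assertions evaluated before data is published. The column names are assumptions, and production teams often use a dedicated framework (e.g., dbt tests) instead.

```python
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

def run_quality_checks(df: DataFrame) -> dict:
    """Evaluate simple, illustrative data quality rules; column names are assumed."""
    total = df.count()
    return {
        "row_count_nonzero": total > 0,
        "trade_id_unique": df.select("trade_id").distinct().count() == total,
        "no_null_notional": df.filter(F.col("notional").isNull()).count() == 0,
        "notional_positive": df.filter(F.col("notional") <= 0).count() == 0,
    }

trades = spark.createDataFrame(
    [(1, 1_000_000.0), (2, 250_000.0)], ["trade_id", "notional"]
)
results = run_quality_checks(trades)
failed = [name for name, ok in results.items() if not ok]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
```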
Posted 2 months ago
7 - 11 years
30 - 35 Lacs
Bengaluru
Work from Office
1. The resource should have knowledge of data warehouses and data lakes
2. Should be able to build data pipelines using PySpark
3. Should have strong SQL skills
4. Should have exposure to the AWS environment and services like S3, EC2, EMR, Athena, Redshift, etc. (a small Athena sketch follows below)
5. Good to have: programming skills in Python
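As a sketch of point 4 (AWS exposure), submitting a query to Athena with boto3 and polling for completion. The region, database, table, and S3 result location are placeholders, and credentials are assumed to come from the environment.

```python
import time

import boto3

# Hypothetical names; Athena writes query results to the given S3 location.
athena = boto3.client("athena", region_name="ap-south-1")

response = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS n FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query finishes (simplified; real code should back off and time out).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)
print(query_id, state)
```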
Posted 2 months ago
5 - 10 years
20 - 35 Lacs
Chennai
Work from Office
Development: Design, build, and maintain robust, scalable, and high-performance data pipelines to ingest, process, and store large volumes of structured and unstructured data. Utilize Apache Spark within Databricks to process big data efficiently, leveraging distributed computing to process large datasets in parallel. Integrate data from a variety of internal and external sources, including databases, APIs, cloud storage, and real-time streaming data.

Data Integration & Storage: Implement and maintain data lakes and warehouses, using technologies like Databricks, Azure Synapse, Redshift, and BigQuery to store and retrieve data. Design and implement data models, schemas, and architecture for efficient querying and storage.

Data Transformation & Optimization: Leverage Databricks and Apache Spark to perform data transformations at scale, ensuring data is cleaned, transformed, and optimized for analytics. Write and optimize Spark SQL, PySpark, and Scala code to process large datasets in real-time and batch jobs. Work on ETL processes to extract, transform, and load data from various sources into cloud-based data environments.

Big Data Tools & Technologies: Utilize cloud-based big data platforms (e.g., AWS, Azure, Google Cloud) in conjunction with Databricks for distributed data processing and storage. Implement and maintain data pipelines using Apache Kafka, Apache Flink, and other data streaming technologies for real-time data processing (a streaming sketch follows below).

Collaboration & Stakeholder Engagement: Work with data scientists, data analysts, and business stakeholders to define data requirements and deliver solutions that align with business objectives. Collaborate with cloud engineers, data architects, and other teams to ensure smooth integration and data flow between systems.

Monitoring & Automation: Build and implement monitoring solutions for data pipelines, ensuring consistent performance, identifying issues, and optimizing workflows. Automate data ingestion, transformation, and validation processes to reduce manual intervention and increase efficiency. Document data pipeline processes, architectures, and data models to ensure clarity and maintainability. Adhere to best practices in data engineering, software development, version control, and code review.

Required Skills & Qualifications:

Education: Bachelor's degree in Computer Science, Engineering, Data Science, or a related field (or equivalent experience).

Technical Skills: Strong hands-on experience with Apache Spark, specifically within Databricks (PySpark, Scala, Spark SQL). Experience working with cloud-based platforms such as AWS, Azure, or Google Cloud, particularly in the context of big data processing and storage. Proficiency in SQL and experience with cloud data warehouses (e.g., Redshift, BigQuery, Snowflake). Strong programming skills in Python, Scala, or Java.

Big Data & Cloud Technologies: Experience with distributed computing concepts and scalable data processing architectures. Familiarity with data lake architectures and frameworks (e.g., AWS S3, Azure Data Lake).

Data Engineering Concepts: Strong understanding of ETL processes, data modeling, and database design. Experience with batch and real-time data processing techniques. Familiarity with data quality, data governance, and privacy regulations.

Problem Solving & Analytical Skills: Strong troubleshooting skills for resolving issues in data pipelines and performance optimization. Ability to work with large, complex datasets and perform data wrangling and cleaning.
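To ground the streaming bullet above, a minimal Spark Structured Streaming sketch that reads JSON events from Kafka and lands them as Parquet. The broker, topic, schema, and paths are assumptions, and the job also needs the spark-sql-kafka connector on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Requires the spark-sql-kafka connector package when submitted.
spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Hypothetical event schema for JSON messages on the topic.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
          .option("subscribe", "payments")                   # placeholder topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "/tmp/payments_out")              # placeholder sink
         .option("checkpointLocation", "/tmp/payments_ckpt")
         .outputMode("append")
         .start())
query.awaitTermination()
```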
Posted 2 months ago
4 - 7 years
14 - 20 Lacs
Bengaluru, Gurgaon
Hybrid
Role & responsibilities

Data Engineering & ETL Development: Develop, optimize, and maintain ETL workflows using PySpark, Sqoop, and Hadoop. Implement data ingestion and transformation pipelines for structured and unstructured data. Integrate and migrate data between Hadoop, Teradata, and SQL-based systems.

Big Data & Distributed Systems: Work with the Hadoop ecosystem (HDFS, Hive, Sqoop, YARN, MapReduce). Optimize PySpark-based distributed computing workflows for performance and scalability. Handle large-scale batch processing and near-real-time data pipelines.

Database & SQL Development: Write, optimize, and debug complex SQL queries on Teradata, Hive, and other RDBMS systems. Ensure data consistency, quality, and performance tuning of databases.

Automation & Scripting: Develop reusable Python scripts for automation, data validation, and process scheduling. Work with Airflow, Oozie, or similar workflow schedulers (an Airflow sketch follows below).

Performance Optimization & Troubleshooting: Monitor and tune PySpark jobs and Hadoop cluster performance. Debug and optimize SQL queries, data pipelines, and ETL processes.

Collaboration & Stakeholder Engagement: Work with data analysts, data scientists, and business teams to understand requirements. Provide guidance on data best practices, architecture, and governance.

Preferred candidate profile:
- Strong expertise in PySpark and the Hadoop ecosystem (HDFS, YARN, Sqoop, Hive, MapReduce)
- Proficiency in SQL, Teradata, and other relational databases
- Hands-on experience with Python scripting for ETL, automation, and data processing
- Experience with big data performance tuning, optimization, and debugging
- Familiarity with job scheduling tools like Airflow, Oozie, or Control-M
- Strong knowledge of data warehousing concepts, data modeling, and ETL frameworks
- Ability to work in Agile/Scrum environments and collaborate with cross-functional teams
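A sketch of the scheduling piece referenced above, using Airflow (one of the schedulers named): a daily DAG that submits a PySpark script via spark-submit. The DAG id, script path, and schedule are illustrative, and the `schedule` argument assumes Airflow 2.4+.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Illustrative daily DAG that runs a PySpark ETL script via spark-submit.
with DAG(
    dag_id="daily_ingest",            # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="spark_etl",
        # {{ ds }} is Airflow's templated execution date, passed to the job.
        bash_command="spark-submit /opt/jobs/ingest.py --run-date {{ ds }}",
    )
```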
Posted 2 months ago
8 - 13 years
20 - 35 Lacs
Navi Mumbai, Mumbai, Bengaluru
Work from Office
Key Points
- Fundamentals of DevOps, DevSecOps, and CI/CD pipelines using Azure DevOps (ADO)
- Good understanding of MPP architecture and of MySQL, RDS, MS SQL, Oracle, and Postgres databases
- Would need to interact with software integrators on a day-to-day basis
- ELT: Trino, Azure Data Factory, Azure Databricks, PySpark, Python, Iceberg, Parquet (a Trino sketch follows below)
- CDC tools like Qlik, GoldenGate, Debezium, or IBM CDC; Kafka/Solace
- Scripting: Shell, Python, Java
- Good understanding of Azure cloud engineering: ADLS, Iceberg, Databricks, AKS, RHEL
- Good understanding of MS Project
- Development skills using Trino, PySpark, and Databricks
- Understanding of security basics, encryption/decryption
- Understanding of IT hardware basics: Unix/Windows servers, RAM/CPU utilization, storage on cloud
- Basic project management skills for preparing a high-level project plan
- Understanding of DNS and load balancing, and their use
- Conceptual understanding of DR/BCP/recovery/backup for database and app servers

Experience
- Experience with data integration concepts, cloud, and modern application development methodologies and technologies
- Proven track record of leading multiple software implementation projects and meeting customer and business requirements
- Strong analytical, critical thinking, and problem-solving skills
- Ability to conduct an analysis of a business need, including scheduling meetings, planning agendas, and conferring with business line leaders
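For the Trino item above, a minimal hedged sketch using the trino Python client's DB-API interface; the host, catalog, schema, and table are placeholders, and authentication/TLS configuration is omitted.

```python
import trino

# Hypothetical coordinator and catalog; real deployments add auth and TLS.
conn = trino.dbapi.connect(
    host="trino.example.internal",
    port=8080,
    user="etl_user",
    catalog="iceberg",
    schema="analytics",
)
cur = conn.cursor()
cur.execute("SELECT trade_date, COUNT(*) FROM trades GROUP BY trade_date")
for row in cur.fetchall():
    print(row)
```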
Posted 2 months ago
6 - 10 years
22 - 30 Lacs
Chennai, Mumbai, Bengaluru
Work from Office
We are seeking a highly skilled Power BI and Data Engineer with expertise in Python, ETL, Spark, Azure Databricks, Azure Data Factory, Azure Synapse, and SQL. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, ETL processes, and analytical dashboards to support business intelligence and data-driven decision-making.

Responsibilities:
- Design, develop, and maintain ETL pipelines to process structured and unstructured data efficiently (a small ETL sketch follows below)
- Build and optimize data models and workflows in Azure Synapse Analytics, Azure Databricks, and Azure Data Factory
- Develop interactive and insightful Power BI dashboards and reports to support business intelligence
- Work with big data technologies (Spark, Databricks) to process and analyze large datasets
- Implement data transformation and data integration solutions using Python and SQL
- Ensure data quality, integrity, and governance across the data lifecycle
- Optimize database performance and query efficiency in SQL-based environments
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions
- Monitor and troubleshoot data pipelines, ensuring high availability and performance
- Implement security best practices and role-based access control (RBAC) in Azure services

Requirements:
- Strong experience in Power BI (DAX, Power Query, data modeling, report development)
- Proficiency in Python for data engineering and automation
- Experience in ETL development and working with large-scale data pipelines
- Strong understanding of Spark (PySpark, Scala, or Spark SQL) for big data processing
- Expertise in SQL development, query optimization, and data modeling
- Familiarity with CI/CD pipelines, DevOps for data engineering, and version control (Git)
- Knowledge of data governance, security, and compliance best practices
- Experience with Azure Machine Learning or other AI/ML tools is a plus
- Knowledge of streaming technologies (Kafka, Event Hub, etc.)
- Familiarity with scripting languages for automation (Shell, PowerShell)
- Certification in Azure Data Engineering or Power BI is a plus
- Hands-on expertise in Azure Data Factory, Azure Databricks, and Azure Synapse Analytics
- Experience with Azure cloud services for data storage, processing, and analytics
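To picture the Python-for-BI part of this role, a small pandas transform that prepares a tidy extract a Power BI dataset could sit on. The file names and columns are invented for the example.

```python
import pandas as pd

# Hypothetical raw extract; in practice this might come from ADF or Synapse.
raw = pd.read_csv("sales_raw.csv", parse_dates=["order_date"])

# Drop unusable rows and derive a reporting month column.
clean = (
    raw.dropna(subset=["order_id", "amount"])
       .assign(month=lambda d: d["order_date"].dt.to_period("M").astype(str))
)

# Aggregate to the grain the report needs, then write a tidy extract.
summary = (
    clean.groupby(["month", "region"], as_index=False)["amount"].sum()
         .rename(columns={"amount": "total_sales"})
)
summary.to_csv("sales_for_powerbi.csv", index=False)
```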
Posted 2 months ago
3 - 7 years
20 - 35 Lacs
Bengaluru
Work from Office
As a Risk Modeling Data Programming Associate within the CCB Portfolio Risk Modeling team, you are expected to support critical statistical development projects and related analysis. You will have the chance to collaborate with modeling teams, understand model monitoring requirements, and construct feasible end-to-end solutions.

Job responsibilities:
- Support end-to-end credit-risk model development efforts within Regulatory Modeling
- Migrate the existing code base to the cloud
- Develop and build tools to support prototyping, evaluating, and deploying models
- Support model development efforts and liaise with different teams
- Manage quality control within development projects, assuring accurate results
- Assist in the development and monitoring of regulatory models
- Efficiently design and produce programs to streamline and create repeatable procedures for model development, validation, and reporting
- Proactively communicate and collaborate with line-of-business partners and model end-users to analyze and meet analysis and reporting needs
- Invent creative and innovative ways to answer key business questions by leveraging existing data assets

Required qualifications, capabilities, and skills:
- Minimum 4 years of programming experience in SAS or Python
- Experience using SQL in a relational database environment such as DB2, Oracle, or Teradata (a small sketch follows below)
- Experience in BI development and reporting would be an added advantage
- A degree in Computer Science, Engineering, or Information Technology
- Experience in developing and implementing solutions in a production environment
- Ability to deliver high-quality results under tight deadlines, and comfort with the manipulation, analysis, and summarization of large quantities of data
- Well-developed oral and written communication skills
- Ability to contribute to the group's knowledge base by proposing new and creative ways of approaching analytic problems and project design

Preferred qualifications, capabilities, and skills:
- Formal training or certification in AWS
- Exposure to risk at a financial services institution would be an added advantage
- Knowledge of a big data environment (AWS) will be an added plus
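To ground the "SQL in a relational database environment" requirement, a self-contained sketch that uses in-memory SQLite as a stand-in for DB2/Oracle/Teradata: summarize a large table in SQL, then hand a small frame to Python. The schema is invented.

```python
import sqlite3

import pandas as pd

# In-memory SQLite stands in for DB2/Oracle/Teradata; the schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (loan_id INTEGER, segment TEXT, balance REAL)")
conn.executemany(
    "INSERT INTO loans VALUES (?, ?, ?)",
    [(1, "prime", 120000.0), (2, "subprime", 45000.0), (3, "prime", 80000.0)],
)

# Summarize in SQL, then pull only the small aggregate into Python for analysis.
summary = pd.read_sql(
    "SELECT segment, COUNT(*) AS n, AVG(balance) AS avg_balance "
    "FROM loans GROUP BY segment",
    conn,
)
print(summary)
```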
Posted 3 months ago
8 - 13 years
20 - 30 Lacs
Pune, Bengaluru
Hybrid
Role: Data Engineer

Responsibilities:
- Work on collecting, storing, processing, and analyzing large sets of data
- Choose optimal solutions for these purposes, then maintain, implement, and monitor them
- Integrate those solutions with the architecture used across the company, and help build out core services that power Machine Learning and analytics systems

Role Requirements:
- Lead and work closely with all teams (including virtual teams based in non-UK locations), creating a strong culture of transparency and collaboration
- Ability to process and rationalize structured data, message data, and semi/unstructured data, and to integrate multiple large data sources and databases into one system
- Proficient understanding of distributed computing principles and of the fundamental design principles behind a scalable application
- Strong knowledge of the Big Data ecosystem; experience with Hortonworks/Cloudera platforms
- Strong self-starter with strong technical skills who enjoys the challenge of delivering change within tight deadlines
- Knowledge of one or more of the following domains (including market data vendors): Party/Client, Trade, Settlements, Payments, Instrument and Pricing, Market and/or Credit Risk
- Practical expertise in developing applications and using querying tools on top of Hive and Spark (PySpark); a sketch follows below
- Strong Scala skills
- Experience in Python, particularly the Anaconda environment and Python-based ML model deployment
- Experience with Continuous Integration/Continuous Deployment (Jenkins/Hudson/Ansible)
- Experience working in teams using Agile methods (Scrum) and Confluence/JIRA
- Knowledge of at least one Python web framework (preferably Flask, Tornado, and/or Twisted)
- Basic understanding of front-end technologies such as JavaScript, HTML5, and CSS3 would be a plus
- Good understanding of global markets, markets macrostructure, and macroeconomics
- Knowledge of the Elastic Search Stack (ELK)
- Experience using HDFS and GIT/GITLAB as a version control system
- Good communication skills (written and spoken), with the ability to engage different stakeholders and to synthesise different opinions and priorities
- Good knowledge of the SDLC and formal Agile processes, a bias towards TDD, and a willingness to test products as part of the delivery cycle
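A sketch of "querying tools on top of Hive/Spark": a Hive-enabled SparkSession querying a warehouse table. The database and table names are placeholders, and a configured Hive metastore is assumed.

```python
from pyspark.sql import SparkSession

# enableHiveSupport assumes a reachable Hive metastore; names are placeholders.
spark = (SparkSession.builder
         .appName("hive-query")
         .enableHiveSupport()
         .getOrCreate())

# Query a (hypothetical) settlements table for the trailing week.
settlements = spark.sql("""
    SELECT trade_id, status, settle_date
    FROM ops.settlements
    WHERE settle_date >= date_sub(current_date(), 7)
""")
settlements.groupBy("status").count().show()
```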
Posted 3 months ago
0 - 1 years
3 Lacs
Pune
Work from Office
Jade Global Software Pvt. Ltd. is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed
- Reformulating existing frameworks to optimize their functioning
- Testing such structures to ensure that they are fit for use
- Preparing raw data for manipulation by data scientists
- Detecting and correcting errors in your work
- Ensuring that your work remains backed up and readily accessible to relevant coworkers
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs
Posted 3 months ago
12 - 14 years
25 - 30 Lacs
Gurgaon
Work from Office
- Spearhead development projects from inception to go-live using Agile development methodology
- Create the project roadmap, lay out a phased delivery approach, and manage inter-team dependencies while executing the project
- Perform resource coordination and set up the best-structured project team to ensure optimal performance
- Identify project risks and dependencies and track them to closure
- Ensure best software development practices are used in project delivery
- Perform project reporting to both internal and external stakeholders
- Responsible for keeping track of project PnL
- Work with the organization's growth team to identify new opportunities
- Support the organization on RFIs and RFPs
- Ownership of high ESAT and CSAT
- Responsible for career progression, upskilling, and mentoring of technical consultants
- Openness to travel both within India and abroad for project needs

Desired qualifications:
- Experience of technical development early in the career, preferably in Java, front end, .NET, DevOps, Data Engineering/Data Science, etc.
- Good understanding of project architecture and software development methodology
- CSM / CSPO / PMP certification

CANDIDATES BASED OUT OF GURUGRAM OR WILLING TO RELOCATE WILL BE PREFERRED.
Posted 3 months ago
7 - 12 years
15 - 30 Lacs
Hyderabad
Work from Office
Job Description:

Note: Mode of interview: F2F on Saturday, 22nd March '25 (only female candidates should apply)

We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering and be proficient in the following technologies:
- SQL
- Stored procedures
- Spark
- Python
- BigQuery
- Experience with distributed databases
- Kafka
- Data testing with GCP or AWS cloud

Responsibilities:
- Design, develop, and maintain data pipelines and ETL processes
- Implement and optimize SQL queries and stored procedures
- Utilize Spark for large-scale data processing
- Develop and maintain Python scripts for data manipulation and analysis
- Work with BigQuery to manage and analyze large datasets
- Ensure data integrity and consistency across distributed databases
- Integrate Kafka for real-time data streaming
- Perform data testing and validation on GCP or AWS cloud platforms
- Collaborate with cross-functional teams to understand data requirements and deliver solutions

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Proven experience in data engineering, with a minimum of 7 years in the industry
- Strong problem-solving skills and attention to detail
- Excellent communication and teamwork abilities

Experience required: 7 to 17 years
Location: Hyderabad

For more details, please share your resume to muskan@hnssolution.in or monica.srivastava@hnssolution.in.
Posted 3 months ago
3 - 5 years
15 - 18 Lacs
Bengaluru, Hyderabad
Work from Office
The ideal candidate will have a solid foundation in SQL and proficiency in Python.

Data Management & Analysis: Utilize SQL and programming skills to manage, analyse, and extract insights from large datasets. Ensure data integrity and accuracy across all platforms.

Programming & Automation: Develop and maintain scripts using Python or other programming languages to automate repetitive tasks, optimize processes, and support data-related projects (a small sketch follows below).

Cloud Exposure: Leverage cloud technologies (e.g., AWS, Azure, GCP) to design, implement, and manage data storage and processing solutions. Prior cloud experience is an advantage, but not mandatory.
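As a tiny example of the Programming & Automation bullet, a script that validates and archives daily CSV drops. The folder paths and the required column set are assumptions.

```python
import shutil
from pathlib import Path

import pandas as pd

INBOX = Path("/data/inbox")      # hypothetical landing folder
ARCHIVE = Path("/data/archive")  # hypothetical archive folder
REQUIRED = {"id", "timestamp", "value"}

def process_daily_files() -> None:
    """Check each CSV's header, archive good files, and report bad ones."""
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    for path in sorted(INBOX.glob("*.csv")):
        # nrows=0 reads only the header row, which is cheap for large files.
        cols = set(pd.read_csv(path, nrows=0).columns)
        if REQUIRED <= cols:
            shutil.move(str(path), str(ARCHIVE / path.name))
        else:
            print(f"{path.name}: missing columns {REQUIRED - cols}")

if __name__ == "__main__":
    process_daily_files()
```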
Posted 3 months ago
7 - 12 years
15 - 30 Lacs
Hyderabad
Work from Office
Data Engineer, SQL, Stored Procedures, Cloud, etc.
Posted 3 months ago
5 - 10 years
8 - 18 Lacs
Delhi NCR, Jaipur
Hybrid
Job Summary: We are seeking a highly skilled Lead Data Engineer with expertise in PySpark, Spark, Databricks, and cloud technologies to join our dynamic team. The ideal candidate will be responsible for designing, developing, and optimizing large-scale data pipelines while leading a team of data engineers. This role requires a deep understanding of big data processing, distributed computing, and cloud-based data solutions.

Key Responsibilities:
- Lead and mentor a team of data engineers in designing, building, and maintaining scalable data pipelines
- Develop, optimize, and maintain ETL/ELT processes using PySpark and Apache Spark (an optimization sketch follows below)
- Architect and implement Databricks-based solutions for big data processing and analytics
- Work with cloud platforms (AWS, Azure, or GCP) to build robust, scalable, and cost-effective data solutions
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs and deliver high-quality solutions
- Ensure data security, governance, and compliance with industry standards
- Optimize data processing performance and troubleshoot data pipeline issues
- Drive best practices in data engineering, including CI/CD, automation, and monitoring

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 5+ years of experience in data engineering with a focus on big data technologies
- Strong expertise in PySpark, Apache Spark, and Databricks
- Hands-on experience with cloud platforms such as AWS (Glue, EMR, Redshift), Azure (Data Factory, Synapse, Databricks), or GCP (BigQuery, Dataflow)
- Proficiency in SQL, Python, and Scala for data processing
- Experience in building scalable ETL/ELT data pipelines
- Knowledge of CI/CD for data pipelines and automation tools
- Strong understanding of data governance, security, and compliance
- Experience in leading and mentoring data engineering teams

Preferred Qualifications:
- Experience with Kafka, Airflow, or other data orchestration tools
- Knowledge of machine learning model deployment in a big data environment
- Familiarity with containerization (Docker, Kubernetes)
- Certifications in cloud technologies (AWS, Azure, or GCP)

Why Join Us?
- Opportunity to work with cutting-edge big data technologies
- Competitive salary and benefits
- Collaborative and innovative work environment
- Career growth and professional development opportunities
- On-site opportunity at the client location

If you are passionate about data engineering and want to lead high-impact projects, we encourage you to apply!
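One concrete optimization of the kind this lead role oversees: broadcasting a small dimension table in a PySpark join so the large fact table is not shuffled. The data and names are illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-opt").getOrCreate()

# Illustrative fact/dimension frames; real ones would come from storage.
fact = spark.range(1_000_000).withColumnRenamed("id", "customer_id")
dim = spark.createDataFrame([(0, "IN"), (1, "US")], ["customer_id", "country"])

# Broadcasting the small side ships it to every executor, skipping a shuffle
# of the large fact table -- a common Spark performance win.
joined = fact.join(broadcast(dim), "customer_id", "left")
joined.explain()  # the plan should show a BroadcastHashJoin
```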
Posted 3 months ago
8 - 11 years
10 - 13 Lacs
Bengaluru
Work from Office
- 6+ years of overall IT experience in Telecom OSS, especially in the Assurance domain (solution, design, and implementation)
- Strong knowledge of the Telecom OSS domain, with excellent experience in ServiceNow for Assurance
- Knowledge and experience of Big Data and data lake solutions, Kafka, and Hadoop/Hive
- Experience in Python (PySpark) is essential
- Implementation experience in continuous integration and delivery philosophies and practices, specifically Docker, Git, and Jenkins
- Self-driven and highly motivated candidate for a client-facing role in a challenging environment
Posted 3 months ago
6 - 7 years
10 - 18 Lacs
Mumbai
Work from Office
Role & responsibilities

About the Role: We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with extensive experience in data visualization using Power BI, advanced Python programming, and cloud infrastructure on Azure. Additionally, expertise in using Databricks for large-scale data processing is essential.

Work Location: Mumbai
Notice Period: Immediate or 15 days
Experience: 6+ years

Required Qualifications:
- Experience: 6+ years of professional experience in data engineering, with a proven track record of working on large-scale data projects
- Power BI: Expert in building advanced dashboards, reports, and custom visuals, including the use of DAX and Python
- Python: Proficient in Python for data manipulation, ETL processes, and integration with BI tools
- Azure Cloud: Extensive experience with Azure services, including Azure Data Lake, Azure SQL Database, Azure Data Factory, and Azure Synapse Analytics
- Databricks: Deep understanding of Databricks, including notebook development, cluster management, and performance tuning
- SQL: Advanced knowledge of SQL for querying and data transformation
- Data Modelling: Strong experience in designing and implementing data models for both operational and analytical purposes

Preferred Qualifications:
- Certifications: Relevant Azure certifications (e.g., Azure Data Engineer, Azure Solutions Architect) are a plus
- Experience with other tools: Familiarity with other BI tools (e.g., Tableau) and experience with big data technologies (e.g., Hadoop, Spark) is advantageous
- Industry experience: Prior experience in [specific industry, e.g., Manufacturing, retail] is preferred but not required

Perks and benefits: As per industry

Regards,
HR Manager
Posted 3 months ago
2 - 5 years
4 - 7 Lacs
Bengaluru
Work from Office
Work Experience
- 4-5 years of experience working with MS Purview

Technical / Professional Skills (please provide at least 3):
- 5+ years of experience as a software developer or data engineer
- Hands-on experience with Microsoft Purview (formerly Azure Information Protection and Microsoft Cloud App Security)
- Proficient in C#, PowerShell, and Azure Resource Manager templates
- Strong understanding of data governance, compliance, and risk management concepts
- Knowledge of data classification, sensitivity labeling, and retention policies
- Familiarity with Azure Data Factory, Azure Synapse Analytics, and other Azure data services
- Ability to automate deployment and configuration using Infrastructure as Code (IaC)
- Excellent problem-solving and troubleshooting skills
Posted 3 months ago
5 - 10 years
14 - 24 Lacs
Pune, Greater Noida, Gurgaon
Hybrid
Role: AWS Data Engineer
Experience: 5+ years
Location: Gurugram, Noida & Pune (hybrid, 3 days work from office)

Job Description: The candidate should provide technical expertise in needs identification, data modeling, data movement, and translating business needs into technical solutions, with adherence to established data guidelines and approaches from a business-unit or project perspective. Good knowledge of conceptual, logical, and physical data models, and the implementation of RDBMS, operational data stores (ODS), data marts, and data lakes on target platforms (SQL/NoSQL). Oversee and govern the expansion of existing data architecture and the optimization of data query performance via best practices. The candidate must be able to work both independently and collaboratively.

Requirements:
- 5+ years of experience as a Data Engineer
- Strong technical expertise in SQL is a must
- Strong knowledge of joins and common table expressions (CTEs); a sketch follows below
- Strong experience with Python
- Experience in Databricks and PySpark
- Strong expertise in ETL processes and various data model concepts
- Knowledge of star schema and snowflake schema
- Good to know: AWS services such as S3, Athena, Glue, and EMR/Spark, with a major emphasis on S3 and Glue
- Experience with big data tools and technologies

Key Skills:
- Good understanding of data structures and data analysis using SQL or Python
- Knowledge of the insurance domain is an addition
- Knowledge of implementing ETL/ELT for data solutions end-to-end
- Understanding requirements and data solutions (ingest, storage, integration, processing)
- Knowledge of analyzing data using SQL
- Conducting end-to-end verification and validation for the entire application

Responsibilities:
- Understand and translate business needs into data models supporting long-term solutions
- Perform reverse engineering of physical data models from databases and SQL scripts
- Analyze data-related system integration challenges and propose appropriate solutions
- Assist with and support setting the data architecture direction (including data movement approach, architecture/technology strategy, and any other data-related considerations to ensure business value)
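To make the joins/CTEs and star-schema items concrete, a self-contained SQL sketch run through Python's sqlite3 module; the fact and dimension tables are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales (sale_id INTEGER, product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'auto'), (2, 'home');
    INSERT INTO fact_sales VALUES (10, 1, 500.0), (11, 1, 300.0), (12, 2, 200.0);
""")

# A CTE aggregates the fact table, then joins out to the dimension (star schema).
rows = conn.execute("""
    WITH product_totals AS (
        SELECT product_id, SUM(amount) AS total
        FROM fact_sales
        GROUP BY product_id
    )
    SELECT d.category, t.total
    FROM product_totals t
    JOIN dim_product d ON d.product_id = t.product_id
    ORDER BY t.total DESC
""").fetchall()
print(rows)
```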
Posted 3 months ago
5 - 10 years
15 - 25 Lacs
Bengaluru, Hyderabad
Hybrid
Roles and Responsibilities
- Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure storage solutions such as Blobs, Files, and SQL Databases
- Develop complex SQL queries to optimize database performance and troubleshoot issues in Microsoft Azure SQL Database
- Collaborate with cross-functional teams to gather requirements for new projects and provide technical guidance on data engineering best practices

Desired Candidate Profile
- 6-11 years of experience in designing and developing large-scale data architectures on cloud platforms like AWS or Azure
- Strong expertise in working with Azure services including ADF, SQL Database, and Blob/File storage solutions
- Proficiency in writing complex SQL queries for query optimization and troubleshooting purposes
Posted 3 months ago
5 - 10 years
8 - 18 Lacs
Bengaluru, Hyderabad
Work from Office
Hiring for a top IT company

Designation: Azure Data Engineer
Skills: ADF, Data Lake, Data Services with Azure
Location: Hyderabad/Bangalore
Experience: 5+ years
Call: 7375057507 / 7733995078
Apply to: conversedataengineer@gmail.com

Thanks,
Team Converse
Posted 3 months ago
0 - 5 years
10 - 20 Lacs
Bengaluru
Work from Office
Hi,

Greetings from Sun Technology Integrators!!

This is regarding a job opening with Sun Technology Integrators, Bangalore. Please find below the job description for your reference. Kindly let me know your interest and share your updated CV to nandinis@suntechnologies.com with the below details ASAP:
- C.CTC
- E.CTC
- Notice period
- Current location
- Are you serving notice period / immediate?
- Experience in Snowflake
- Experience in Matillion

Shift timings: 2:00 PM-11:00 PM (free cab facility for drop) + food

Please let me know if any of your friends are looking for a job change, and kindly share references. Only serving/immediate candidates can apply.

Interview process: 2 rounds (virtual) + final round (F2F)

Please note: WFO - Work From Office (no hybrid or work from home)

Mandatory skills: Snowflake, SQL, ETL, Data Ingestion, Data Modeling, Data Warehouse, Python, Matillion, AWS S3, EC2
Preferred skills: SSIR, SSIS, Informatica, Shell Scripting

Venue Details:
Sun Technology Integrators Pvt Ltd
No. 496, 4th Block, 1st Stage, HBR Layout (a stop ahead from Nagawara towards K. R. Puram)
Bangalore 560043

Company URL: www.suntechnologies.com

Thanks and Regards,
Nandini S | Sr. Technical Recruiter
Sun Technology Integrators Pvt. Ltd.
nandinis@suntechnologies.com
www.suntechnologies.com
Posted 3 months ago
0 - 3 years
2 - 3 Lacs
Hyderabad
Work from Office
BTech / MCA / MSc; year of passing out should be 2022/2023. Minimum 60% in academics (10th, 12th & Engineering). Good analytical and reasoning skills. Communication skills need to be good. We look for candidates who have completed some technical certification. Minimum knowledge of Core Java, Spring, JavaScript, and SQL.

Interview Process:
- Online aptitude test & coding test
- F2F manager round - technical and coding
- HR round

Job notification start date: 18th March 2025
Last date to apply through Naukri and other social media: 5th April 2025
Interview start date: 10th April 2025
Online aptitude test dates: 10th April to 20th April
F2F manager round interview dates: 25th April to 15th May
HR interviews: 20th May to 15th June 2025
Joining date: 30th June / 1st July 2025

Note: The HR and recruitment team will coordinate with shortlisted applicants through email and phone communication only; no physical presence is required until 25th April. We request that you please do not come to the office for resume submission.
Posted 3 months ago
The data engineer job market in India is rapidly growing as organizations across various industries are increasingly relying on data-driven insights to make informed decisions. Data engineers play a crucial role in designing, building, and maintaining data pipelines to ensure that data is accessible, reliable, and secure for analysis.
The average salary range for data engineer professionals in India varies based on experience and location. Entry-level data engineers can expect to earn anywhere between INR 4-6 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.
The typical career progression for a data engineer in India may include roles such as Junior Data Engineer, Data Engineer, Senior Data Engineer, Lead Data Engineer, and eventually Chief Data Engineer. As professionals gain more experience and expertise in handling complex data infrastructure, they may move into management roles such as Data Engineering Manager.
In addition to strong technical skills in data engineering, professionals in this field are often expected to have knowledge of programming languages such as Python, SQL, and Java. Familiarity with cloud platforms like AWS, GCP, or Azure, as well as proficiency in data warehousing technologies, is also beneficial for data engineers.
As you explore data engineer jobs in India, remember to showcase your technical skills, problem-solving abilities, and experience in handling large-scale data projects during interviews. Stay updated with the latest trends in data engineering and continuously upskill to stand out in this competitive job market. Prepare thoroughly, apply confidently, and seize the opportunities that come your way!