6.0 - 12.0 years
0 Lacs
Maharashtra
On-site
Role Overview:
As an Azure Data Engineer, you will apply proven data engineering experience with a focus on Azure cloud technologies, using Azure SQL Database, Azure Data Lake, Azure Data Factory, and Azure Databricks to create and manage data pipelines for data transformation and integration. Proficiency in programming languages such as Python is essential for this role.

Key Responsibilities:
- Use Azure Data Factory to create efficient data pipelines for seamless data transformation and integration.
- Demonstrate strong hands-on knowledge of Azure cloud services for data and analytics, including Azure SQL, Data Lake, Data Factory, and Databricks.
- Apply experience with SQL, data modeling, and NoSQL databases to optimize data storage and retrieval.
- Use data warehousing and data lake concepts and technologies to deliver effective data management and analytics solutions.

Qualifications Required:
- 6-12 years of proven experience as a Data Engineer, with a specific focus on Azure cloud technologies.
- Proficiency in programming languages such as Python to support data engineering tasks efficiently.
- Strong understanding of Azure SQL Database, Azure Data Lake, Azure Data Factory, and Azure Databricks for effective data processing.
- Experience with SQL, data modeling, and NoSQL databases to ensure robust data storage and retrieval mechanisms.
- Knowledge of data warehousing and data lake concepts and technologies to implement best practices in data management.

Note: No additional details about the company were provided in the job description.
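For readers gauging the day-to-day work, here is a minimal, hypothetical PySpark sketch of the kind of Databricks pipeline step this posting describes: reading raw files from Azure Data Lake, cleaning them, and writing a Delta table. The storage path, table, and column names are illustrative assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read raw CSVs from an ADLS Gen2 container (hypothetical path).
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")
)

# Basic cleaning: deduplicate, cast types, drop bad rows.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
)

# Aggregate for analytics and persist as a managed Delta table.
daily = cleaned.groupBy("order_date").agg(F.sum("amount").alias("total_amount"))
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_orders")
```

In Azure Data Factory, a step like this would typically run as a Databricks notebook or job activity inside a scheduled pipeline.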
Posted 14 hours ago
2.0 - 6.0 years
5 - 15 Lacs
Chennai
Work from Office
Data Engineer
Company: Blackstraw.ai
Location: Chennai
Job Type: Full-time
Experience: 3+ years

Job Summary
We are looking for a Data Engineer with knowledge of ML concepts to design and build scalable data pipelines and solutions that align with business goals. The role involves working with large datasets, cloud platforms, and advanced data engineering tools to ensure efficiency, reliability, and performance.

Mandatory Skills
- Python and PySpark programming.
- Hands-on experience with Databricks.
- Exposure to Machine Learning concepts and methods.
- Strong knowledge of SQL and RDBMS architecture.
- Experience with Azure (preferred), AWS, or GCP cloud services.
Posted 1 day ago
9.0 - 13.0 years
20 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Greetings from Wilco Source, a CitiusTech company!

Role: Senior SAP CRM Data Engineer
Experience: 9+ Years
Location: Chennai, Hyderabad, Bangalore, Mumbai, Pune, Noida, Gurugram

Summary
As a Senior SAP CRM Data Engineer, you will lead the design, development, and maintenance of data pipelines and integrations supporting SAP CRM systems. You will collaborate with cross-functional teams to ensure data accuracy, compliance, and efficient data flow across the organization. This role demands deep technical expertise, leadership capabilities, and a strategic mindset to drive data initiatives aligned with business goals.

Responsibilities
- Lead the design and development of scalable data pipelines for SAP CRM systems.
- Architect and implement integrations between SAP CRM and enterprise data platforms.
- Ensure high standards of data quality, consistency, and compliance.
- Collaborate with business analysts, developers, and stakeholders to gather and translate data requirements.
- Optimize data processes for performance, scalability, and reliability.
- Monitor and maintain the health of data pipelines, proactively resolving issues.
- Support advanced reporting and analytics initiatives using SAP CRM data.
- Mentor junior data engineers and contribute to team development.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in data engineering with a strong focus on SAP CRM systems.
- Expertise in SAP Data Services, SAP HANA, and SAP BW.
- Strong understanding of ETL processes, data modeling, and integration techniques.
- Proficiency in SQL, Python, and data transformation tools.
- Experience with CRM modules such as Sales, Service, and Marketing.
- Excellent problem-solving, analytical, and communication skills.
- Proven ability to lead technical initiatives and collaborate across teams.

Preferred Skills
- Experience with SAP CRM Web UI, ABAP, and Adobe Forms.
- Familiarity with cloud platforms (e.g., Azure, AWS) and big data technologies.
- Knowledge of SAP C/4HANA or S/4HANA CRM solutions.
- SAP certifications in CRM or Data Services.
- Experience in Agile/Scrum environments and DevOps practices.

Thanks,
Abdul.jabbar@citiustech.com
Posted 1 day ago
2.0 - 6.0 years
0 Lacs
Maharashtra
On-site
Job Description:
At 3Pillar, the focus is on leveraging cutting-edge technologies to revolutionize industries by enabling data-driven decision-making. As a Data Engineer, you will hold a crucial position within the dynamic team, actively contributing to exciting projects that reshape data analytics for clients and give them a competitive advantage in their industries. If you have a passion for data analytics solutions that make a real-world impact, consider this your pass to the captivating world of Data Science and Engineering!

Key Responsibilities:
- Actively contribute to projects that reshape data analytics for clients
- Enable data-driven decision-making through cutting-edge technologies

Qualifications Required:
- Strong background in data analytics
- Proficiency in data engineering tools and technologies
Posted 2 days ago
5.0 - 10.0 years
0 Lacs
Kolkata, West Bengal
On-site
As a Senior Data Engineer at EY GDS Data and Analytics (D&A) MS Fabric, you will have the opportunity to showcase your strong technology and data understanding in the big data engineering space, with proven delivery capability. By joining our leading firm and growing Data and Analytics team, you will be a key player in shaping a better working world.

**Key Responsibilities:**
- Design, develop, and manage data solutions using Microsoft Fabric, such as Lakehouse, Data Engineering, Pipelines, Spark, Notebooks, and KQL Database.
- Implement ETL/ELT processes to ensure efficient data integration and transformation.
- Create and maintain data models to support business intelligence and analytics initiatives.
- Utilize Azure data storage services (e.g., Azure SQL Database, Azure Blob Storage, Azure Data Lake Storage) for effective data management.
- Collaborate with cross-functional teams to gather requirements and deliver data solutions that meet business needs.
- Write and optimize code in SQL, Python, PySpark, and Scala for data processing and analysis.
- Implement DevOps practices and CI/CD pipelines for seamless data deployment and version control.
- Monitor and troubleshoot data pipelines to ensure reliability and performance.

**Qualifications Required:**
- 5-10 years of experience in data warehousing, ETL/ELT processes, data modeling, and cloud-based technologies, particularly Azure and Fabric.
- Proficiency in various programming languages.

As part of EY, you will have the opportunity to work on inspiring and meaningful projects, receive support, coaching, and feedback from engaging colleagues, and have the freedom and flexibility to handle your role in a way that suits you best. Additionally, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange, offering opportunities for skills development and career progression.

Join EY in building a better working world by leveraging data, AI, and advanced technology to help shape the future with confidence and develop solutions for pressing issues of today and tomorrow across assurance, consulting, tax, strategy, and transactions services. You will be part of a globally connected, multi-disciplinary network that can provide services in more than 150 countries and territories.
Posted 3 days ago
7.0 - 11.0 years
8 - 15 Lacs
Hyderabad
Hybrid
Role Overview
We are seeking a highly skilled Senior Data Engineer to architect, develop, and manage robust, scalable data solutions in a cloud-first environment. This role requires deep expertise in the AWS ecosystem, ETL/ELT frameworks, and data warehousing, along with strong leadership and collaboration capabilities. You will work closely with cross-functional teams to deliver high-performance data infrastructure that drives business insights and operational excellence.

Key Responsibilities
- Design, architect, and manage scalable, secure, and reliable data platforms leveraging AWS services (e.g., Redshift, S3, Glue, Lambda)
- Build and maintain real-time and batch data pipelines for seamless data ingestion and processing
- Develop and optimize high-performance data warehouses and data marts
- Write and tune complex SQL queries for analytics, reporting, and data transformation
- Collaborate with stakeholders across engineering, analytics, and business units to translate requirements into data solutions
- Establish best practices for data architecture, governance, security, and quality
- Monitor and ensure high availability, performance, and scalability of data platforms
- Mentor junior data engineers and promote a culture of learning and innovation
- Stay current with the latest trends in data engineering, cloud technologies, and machine learning integrations

Required Experience & Qualifications
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related discipline
- 7-11 years of experience in data engineering or similar roles, with a strong cloud focus
- Advanced proficiency in AWS services, especially Redshift, S3, Glue, Lambda, RDS, and Athena
- Strong expertise in SQL, with a focus on query optimization and performance tuning
- Proven experience designing and implementing ETL/ELT pipelines and scalable data solutions
- Excellent problem-solving, analytical, and communication skills
- Demonstrated leadership and mentoring capabilities

Technical Skills
- Cloud & Big Data: AWS (Redshift, S3, Glue, Lambda, RDS, Athena, Kinesis, EMR); Data Lake architectures and real-time data processing tools (e.g., Kafka, Flink, Spark Streaming); Monitoring & Logging: CloudWatch
- Programming & Scripting: Advanced SQL, Python, Scala
- ETL/ELT & Data Integration Tools: AWS Glue, Apache Airflow, Talend, Informatica
- Data Modeling: Dimensional Modeling, Star and Snowflake Schemas
- Big Data Frameworks: Apache Spark, Hadoop
- CI/CD & DevOps: Git, GitHub, Bitbucket, Jenkins, AWS CodePipeline, Terraform
- Governance & Compliance: Working knowledge of GDPR, HIPAA, metadata management, and audit logging

Certifications (Preferred)
- AWS Certified Data Analytics or AWS Solutions Architect certification

Additional Requirements
- Experience supporting production environments (BAU)
- Familiarity with change control processes, incident management, and prioritization
- Exposure to both new development and ongoing maintenance/support cycles
- Ability to engage with stakeholders at all organizational levels, including external vendors and partners
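As a point of reference for the orchestration side of this role, below is a hedged boto3 sketch that starts an AWS Glue job and polls it to completion. The job name and region are placeholders; production code would add retries and alerting.

```python
import time

import boto3

glue = boto3.client("glue", region_name="ap-south-1")

# Kick off a (hypothetical) Glue ETL job and wait for a terminal state.
run_id = glue.start_job_run(JobName="orders-etl")["JobRunId"]

while True:
    state = glue.get_job_run(JobName="orders-etl", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print(f"Glue run {run_id} finished with state {state}")
        break
    time.sleep(30)  # poll interval; tune for job duration
```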
Posted 3 days ago
6.0 - 11.0 years
0 - 0 Lacs
Chennai
On-site
Permanent job opening for Data Engineer with a US MNC organization at Chennai.

Job Description
Years of experience: 6-9 / 9-12 years
Notice period: Immediate to 15 days
Work Timings: 12.30 PM to 9.30 PM IST
Interview: F2F technical interview on 13th Sept (Saturday) at the Chennai office

Skills
Primary: Azure, Databricks, ADF, PySpark/Python
Secondary: Data warehouse, SAS/Alteryx

- IT experience in data warehousing and ETL
- Hands-on data experience with cloud technologies on Azure: ADF, Synapse, PySpark/Python
- Ability to understand design and source-to-target mapping (STTM) and create specification documents
- Flexibility to operate from client office locations
- Able to mentor and guide junior resources, as needed

Nice to Have
- Any relevant certifications
- Banking experience in Risk & Regulatory, Commercial, or Credit Cards/Retail

Please fill in the details mentioned below and share on amishdelta@gmail.com
- Total Work Experience:
- Relevant Work Experience:
- Current CTC:
- Expected CTC:
- Current Location:
- Notice period (negotiable to how many days):
- If serving/served notice period (last working day):
- Current Company:
- Current payroll Organization:
- Alternate No:
- Date of Birth:
- Reason for Job Change:
- Alternate email:
- Alternate contact:
Posted 5 days ago
7.0 - 11.0 years
12 - 20 Lacs
Hyderabad, Pune, Chennai
Work from Office
Responsibilities:
- DBE engineer with 7-9 yrs experience
- Hands-on MS SQL, ETL, SSIS
- Hands-on experience on the cloud side, with migration experience from AWS and GCP
- Hands-on experience with a cloud data platform: Snowflake

Skills: MS SQL Server, SQL Server Performance Tuning, ANSI SQL, Data Modeling, ETL, CI/CD
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer with more than 6 years of experience, your role will involve building and managing data processing pipelines for analytics usage. You will be responsible for the following key tasks:
- Understanding the business use case and translating it into technical data requirements.
- Building data pipelines that can clean, transform, and aggregate data from disparate sources.
- Maintaining and re-engineering existing ETL processes to enhance data accuracy, stability, and pipeline performance.
- Keeping abreast of advancements in data persistence and big data technologies, and conducting pilots to design data architecture that can scale with the growing data sets from consumer experience.
- Automating tasks wherever possible.
- Reviewing and ensuring the quality of the deliverables.

Qualifications required for this role include:
- At least 6 years of experience as a Data Engineer.
- Proficiency in building and managing data processing pipelines for analytics.
- Strong understanding of ETL processes and data transformation.
- Ability to adapt to new technologies and methodologies in the field of big data.
- Attention to detail and commitment to delivering high-quality work.
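To make the "clean, transform, and aggregate data from disparate sources" bullet concrete, here is a small illustrative pandas sketch; the file names and columns are assumptions for the example, not from the posting.

```python
import pandas as pd

# Two disparate sources: a CSV extract and an API-style JSON feed.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
customers = pd.read_json("customers.json")

# Clean: deduplicate and normalise the join key on both sides.
orders = orders.drop_duplicates(subset="order_id")
orders["customer_id"] = orders["customer_id"].str.strip().str.upper()
customers["customer_id"] = customers["customer_id"].str.strip().str.upper()

merged = orders.merge(customers, on="customer_id", how="left")

# Aggregate into an analytics-ready monthly revenue table.
summary = (
    merged.groupby(["region", pd.Grouper(key="order_date", freq="M")])["amount"]
    .sum()
    .reset_index(name="monthly_revenue")
)
print(summary.head())
```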
Posted 5 days ago
7.0 - 12.0 years
17 - 27 Lacs
Hyderabad
Work from Office
Job Title: Data Engineer

Mandatory Skills: Data Engineer, Python, AWS, SQL, Glue, Lambda, S3, SNS, ML, SQS

Job Summary:
We are seeking a highly skilled Data Engineer (SDET) to join our team, responsible for ensuring the quality and reliability of complex data workflows, data migrations, and analytics solutions across both cloud and on-premises environments. The ideal candidate will have extensive experience in SQL, Python, AWS, and ETL testing, along with a strong background in data quality assurance, data science platforms, DevOps pipelines, and automation frameworks. This role involves close collaboration with business analysts, developers, and data architects to support end-to-end testing, data validation, and continuous integration for data products. Expertise in tools like Redshift, EMR, Athena, Jenkins, and various ETL platforms is essential, as is experience with NoSQL databases, big data technologies, and cloud-native testing strategies.

Role and Responsibilities:
- Work with business stakeholders, Business Systems Analysts, and Developers to ensure quality delivery of software.
- Interact with key business functions to confirm data quality policies and governed attributes.
- Follow quality management best practices and processes to bring consistency and completeness to integration service testing.
- Design and manage AWS testing environments for data workflows during development and deployment of data products.
- Assist the team in test estimation and test planning.
- Design and develop reports and dashboards.
- Analyze and evaluate data sources, data volume, and business rules.
- Apply proficiency with SQL and familiarity with Python, Scala, Athena, EMR, Redshift, and AWS.
- Work with NoSQL and unstructured data.
- Use programming tools ranging from MapReduce to HiveQL.
- Work with data science platforms such as SageMaker, Machine Learning Studio, or H2O.
- Be well versed in data flow and test strategy for cloud and on-prem ETL testing.
- Interpret and analyze data from various source systems to support data integration and data reporting needs.
- Test database applications to validate source-to-destination data movement and transformation.
- Work with team leads to prioritize business and information needs.
- Develop complex SQL scripts (primarily advanced SQL) for cloud and on-prem ETL.
- Develop and summarize data quality analyses and dashboards.
- Apply knowledge of data modeling and data warehousing concepts, with emphasis on cloud/on-prem ETL.
- Execute testing of data analytics and data integration on time and within budget.
- Troubleshoot and determine the best resolution for data issues and anomalies.
- Perform functional, regression, system, integration, and end-to-end testing.
- Bring a deep understanding of data architecture and data modeling best practices and guidelines for different data and analytics platforms.

Required Skills and Qualifications:
- Extensive experience in data migration is a must (Teradata to Redshift preferred).
- Extensive testing experience with SQL/Unix/Linux scripting.
- Extensive experience testing cloud/on-prem ETL tools (e.g., Ab Initio, Informatica, SSIS, DataStage, Alteryx, Glue).
- Extensive experience with DBMSs such as Oracle, Teradata, SQL Server, DB2, Redshift, Postgres, and Sybase.
- Extensive experience using Python scripting and AWS cloud technologies, including Athena, EMR, and Redshift.
- Experience in large-scale application development testing across cloud/on-prem data warehouse, data lake, and data science platforms.
- Experience with multi-year, large-scale projects.
- Expert technical skills with hands-on testing experience using SQL queries.
- Extensive experience with both data migration and data transformation testing.
- API/REST Assured automation, building reusable frameworks, and good technical expertise/acumen.
- Java/JavaScript: core Java, integration, and API implementation.
- Functional/UI testing: Selenium with BDD/Cucumber, SpecFlow; data validation with Kafka and Big Data; automation experience using Cypress.
- AWS/Cloud: Jenkins, GitLab, EC2, S3; building Jenkins CI/CD pipelines; Sauce Labs.
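To illustrate the source-to-destination validation this role centers on, here is a hedged, database-agnostic sketch using DB-API cursors (e.g., a Teradata source and a Redshift target); table and column names are placeholders.

```python
def validate_migration(src_cursor, tgt_cursor, table: str, amount_col: str) -> bool:
    """Compare row counts and a simple column checksum between source and target."""
    checks = {
        "row_count": f"SELECT COUNT(*) FROM {table}",
        "amount_sum": f"SELECT COALESCE(SUM({amount_col}), 0) FROM {table}",
    }
    for name, sql in checks.items():
        src_cursor.execute(sql)
        tgt_cursor.execute(sql)
        src_val, tgt_val = src_cursor.fetchone()[0], tgt_cursor.fetchone()[0]
        if src_val != tgt_val:
            print(f"{table}.{name} mismatch: source={src_val} target={tgt_val}")
            return False
    print(f"{table}: all checks passed")
    return True

# Usage: pass DB-API cursors from any two connections, e.g.
# validate_migration(teradata_cur, redshift_cur, "sales.orders", "amount")
```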
Posted 5 days ago
6.0 - 11.0 years
12 - 22 Lacs
Gurugram
Hybrid
Job description:
Iris Software has been a trusted software engineering partner to several Fortune 500 companies for over three decades. We help clients realize the full potential of technology-enabled transformation by bringing together a unique blend of domain knowledge, best-of-breed technologies, and experience executing essential and critical application development engagements. www.irissoftware.com

Title: Data Engineer
Location: Gurgaon
Notice: 15 days or immediate joiners only
Mode: Hybrid
Shift: 8:00 AM - 5:00 PM IST

Key Responsibilities:
- 6-9 yrs experience as a Data developer/Data engineer
- Expert-level development experience in SQL, Python, PySpark
- Experience in Airflow and DBT will be an added advantage

Sounds like you? Send your CV to me at prerna.sharma@irissoftware.com
Posted 5 days ago
3.0 - 6.0 years
10 - 20 Lacs
Pune
Remote
Work closely with clients to understand business needs, design data solutions, and deliver insights through end-to-end data management. Lead project execution, handle communication and documentation, and guide team members throughout.

Required Candidate Profile
Must have hands-on experience with Python, ETL tools (Fivetran, StitchData), databases, and cloud platforms (AWS, GCP, Azure, Snowflake, Databricks). Familiarity with REST/SOAP APIs is essential.
Posted 5 days ago
1.0 - 6.0 years
0 - 2 Lacs
Mumbai, Navi Mumbai, Mumbai (All Areas)
Work from Office
Responsibilities
- Automate manual CAT modeling processes using Python, SQL, and Azure services.
- Build data pipelines and frameworks for efficient data handling and reporting.
- Collaborate with CAT Modeling teams in India and the UK.
- Deliver scalable, reliable automation solutions to improve operational efficiency.

Key Skills Required
- Python (Advanced)
- SQL & Azure Cloud (Intermediate)
- Experience in automation, data pipelines, and reporting tools
- Strong communication and collaboration skills
Posted 5 days ago
7.0 - 12.0 years
6 - 11 Lacs
Bengaluru
Work from Office
Key Responsibilities:
- Lead the design, development, and deployment of end-to-end data solutions on the Azure Databricks platform.
- Work with data scientists, data engineers, and business analysts to design and implement data pipelines and machine learning models.
- Develop efficient, scalable, and high-performance data processing workflows and analytics solutions using Databricks, Apache Spark, and Azure Synapse.
- Manage and optimize Databricks clusters and data pipelines.
- Collaborate with cross-functional teams to gather requirements and deliver optimal solutions.
- Design and implement ETL processes using Databricks Notebooks, Azure Data Factory, and other Azure services.
- Ensure high availability, performance, and security of cloud-based data solutions.
- Implement best practices for data quality, security, and governance.
- Monitor system performance and troubleshoot any issues related to Databricks clusters or data pipelines.
- Stay up to date with the latest advancements in cloud computing, big data, and machine learning technologies.
Posted 5 days ago
6.0 - 11.0 years
15 - 30 Lacs
Chennai
Work from Office
Looking for Sr AWS Data Engineer, Tech Lead, and Architect. 4 to 15 yrs of exp.
Posted 6 days ago
5.0 - 10.0 years
10 - 20 Lacs
Pune
Remote
Role & responsibilities: Python Data Engineer with strong Azure Databricks and SQL experience.
Posted 6 days ago
5.0 - 10.0 years
10 - 20 Lacs
Noida, Gurugram, Delhi / NCR
Hybrid
Data Engineer (GCP)
Company: Moptra Infotech Pvt. Ltd.
Location: Noida, Sector 63
Website: https://moptra.com/
Work Schedule: 5 days working (Saturday & Sunday fixed off)

About Moptra Infotech
Moptra Infotech Pvt. Ltd. is a fast-growing IT services and consulting company headquartered in Noida, India. We specialize in delivering data engineering, cloud solutions, business intelligence, and advanced analytics services to global enterprises. With a strong focus on innovation, scalability, and client satisfaction, Moptra partners with leading organizations across industries to build reliable, future-ready digital ecosystems.

Job Responsibilities
- Develop and maintain robust data pipelines and ETL/ELT processes using Python.
- Design and implement scalable, high-performance applications.
- Collaborate with cross-functional teams to define requirements and deliver solutions.
- Build and manage near real-time data streaming solutions (Pub/Sub, Apache Beam).
- Participate in code reviews, architecture discussions, and continuous improvement.
- Monitor and troubleshoot production systems to ensure reliability and performance.

Basic Qualifications
- 5+ years of professional software development experience with Python.
- Strong understanding of software engineering best practices (testing, version control, CI/CD).
- Hands-on experience in building and optimizing ETL/ELT pipelines.
- Proficiency in SQL and strong database concepts.
- Experience with data processing frameworks (Pandas, etc.).
- Solid understanding of software design patterns and architectural principles.
- Experience with unit testing and test automation.
- Hands-on experience with Google Cloud Platform (GCP).
- Experience with CI/CD pipelines and Infrastructure as Code (IaC).
- Hands-on experience with containerization technologies (Docker, Kubernetes).
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- Proven track record of delivering complex software projects.
- Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications
- Experience with GCP services (Cloud Run, Dataflow, Pub/Sub).
- Exposure to big data technologies (Airflow).
- Knowledge of data visualization tools and libraries.
- Experience with CI/CD (GitLab) and IaC (Terraform).
- Familiarity with Snowflake, BigQuery, or Databricks.
- GCP Data Engineer Certification is a plus.

Preferred Candidate Profile
Good communication skills, with a minimum of 5 years of experience as a Data Engineer working with GCP services (Cloud Run, Dataflow, Pub/Sub).

Note: Need immediate joiner or max 30 days.
Call / WhatsApp resume: 9718978697
Email: Siddharth Mathur, Manager Talent Acquisition, Moptra Infotech
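For a feel of the "near real-time data streaming solutions (Pub/Sub, Apache Beam)" responsibility, here is a hedged Apache Beam sketch that windows Pub/Sub events into per-user counts. The topic, project, and output table are placeholders, and a real Dataflow deployment would also set runner options and an output schema.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(topic="projects/my-proj/topics/events")
        | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "KeyByUser" >> beam.Map(lambda e: (e["user_id"], 1))
        | "Window" >> beam.WindowInto(beam.window.FixedWindows(60))  # 1-minute windows
        | "CountPerUser" >> beam.CombinePerKey(sum)
        | "Format" >> beam.Map(lambda kv: {"user_id": kv[0], "events": kv[1]})
        | "WriteBQ" >> beam.io.WriteToBigQuery(
            "my-proj:analytics.user_event_counts",  # hypothetical table
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```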
Posted 6 days ago
7.0 - 10.0 years
7 - 17 Lacs
Pune
Hybrid
Hiring: Data Engineer. Creating data pipelines in GCP BigQuery and Airflow DAGs (Python nice to have).
Job Location: Pune & Hyderabad
Posted 6 days ago
7.0 - 12.0 years
15 - 27 Lacs
Gurugram
Hybrid
Work Location: Gurugram (Hybrid)
Notice: Immediate to short notice
Experience: 7+ years

Required Skills and Experience
- Proven experience working with GA4 data, including understanding its data model, event-based tracking, and integration with BigQuery.
- 6+ years of hands-on experience with Google BigQuery or Snowflake, including advanced SQL techniques (CTEs, window functions, aggregate functions), schema design, and performance optimization.
- 5+ years of strong experience in Python programming (object-oriented/functional programming, Pandas, PySpark).
- 5+ years working with cloud platforms.
- Experience designing star schemas, analyzing data warehouses, and applying data warehouse methodologies, particularly in the context of digital analytics data.
- Essential experience in loading data into warehouse/ODS environments from diverse sources and formats.
- Ability to design strategies for new data ingestion requests, including logical and physical data modeling.
- Hands-on experience with version control and CI/CD pipelines for data engineering workflows.
- Proven ability to perform unit testing (SQL scripts, ETL modules), integration testing, and performance/load/stress testing.

Roles & Responsibilities
- Design, build, and deploy robust ETL and data management processes specifically for ingesting, transforming, and loading high-volume digital analytics data from Google Analytics 4 (GA4) into BigQuery/Snowflake.
- Develop and optimize BigQuery/Snowflake datasets, tables, and views to support various analytical needs, ensuring efficient querying and data integrity.
- Design, build, and deploy ETL job workflows with reliable error/exception handling and rollback frameworks, including data pipelines that feed data models for subsequent consumption.
- Monitor and optimize data processing and storage resources, with a focus on performance and cost efficiency.
- Troubleshoot and resolve data pipeline issues and performance bottlenecks, particularly those related to large-scale digital data processing.
- Write complex, customized SQL queries to manipulate data, generate automatic periodic reports, and support ad-hoc analytical requests.
- Build applications and scripts using Python to automate data processes, integrate systems, and enhance data quality.

Interested candidates can mail your resume to manjula.balaraman@orioninc.com
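As a concrete example of the GA4-in-BigQuery SQL this role calls for (a CTE, UNNEST over event_params, and a window function), here is a hedged sketch using the official BigQuery client; the project, dataset, and date range are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

sql = """
WITH page_views AS (
  SELECT
    user_pseudo_id,
    event_timestamp,
    (SELECT value.string_value
     FROM UNNEST(event_params)
     WHERE key = 'page_location') AS page_location
  FROM `my-proj.analytics_123456789.events_*`  -- hypothetical GA4 export dataset
  WHERE _TABLE_SUFFIX BETWEEN '20240101' AND '20240131'
    AND event_name = 'page_view'
)
SELECT
  user_pseudo_id,
  page_location,
  ROW_NUMBER() OVER (
    PARTITION BY user_pseudo_id ORDER BY event_timestamp
  ) AS view_rank
FROM page_views
"""

for row in client.query(sql).result():
    print(row.user_pseudo_id, row.view_rank, row.page_location)
```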
Posted 6 days ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer at Walmart Global Tech, you will be responsible for architecting, designing, and implementing high-performance data ingestion and integration processes in a complex, large-scale data environment. Your role will involve developing and implementing databases, data collection systems, data analytics, and other strategies to optimize statistical efficiency and quality. You will also oversee and mentor the data engineering team's practices to ensure data privacy and security compliance.

Collaboration is key in this role, as you will work closely with data scientists, data analysts, and other stakeholders to understand data needs and deliver on those requirements. Additionally, you will collaborate with all business units and engineering teams to develop a long-term strategy for data platform architecture. Your responsibilities will also include developing and maintaining scalable data pipelines, building new API integrations, and monitoring data quality to ensure accurate and reliable production data.

To be successful in this role, you should have a Bachelor's degree or higher in Computer Science, Engineering, Mathematics, or a related field, along with at least 12 years of proven experience in data engineering, software development, or a similar data management role. You should have strong knowledge and experience with Big Data technologies such as Hadoop, Spark, and Kafka, as well as proficiency in scripting languages like Python, Java, Scala, etc. Experience with SQL and NoSQL databases, a deep understanding of data structures and algorithms, and familiarity with machine learning algorithms and principles are also preferred. Excellent communication and leadership skills are essential for this role, along with hands-on experience in data processing and manipulation. Expertise with GCP cloud and GCP data processing tools like GCS, Dataproc, DPaaS, BigQuery, and Hive, as well as experience with orchestration tools like Airflow, Automic, and Autosys, are highly valued.

Join Walmart Global Tech, where you can make a significant impact by leveraging your expertise to innovate at scale, influence millions, and shape the future of retail. With a hybrid work environment, competitive compensation, incentive awards, and a range of benefits, you'll have the opportunity to grow your career and contribute to a culture where everyone feels valued and included. Walmart Global Tech is committed to being an Equal Opportunity Employer, fostering a workplace culture where everyone is respected and valued for their unique contributions. Join us in creating opportunities for all associates, customers, and suppliers, and help us build a more inclusive Walmart for everyone.
Posted 6 days ago
4.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
At Solidatus, we are revolutionizing the way organizations comprehend their data. We are an award-winning, venture-backed software company often referred to as the Git for Metadata. Our platform enables businesses to extract, model, and visualize intricate data lineage flows. Through our unique lineage-first approach and active AI development, we offer organizations unparalleled clarity and robust control over their data's journey and significance. As a rapidly growing B2B SaaS business with fewer than 100 employees, your contributions play a pivotal role in shaping our product. Renowned for our innovation and collaborative culture, we invite you to join us as we expand globally and redefine the future of data understanding.

We are currently looking for an experienced Data Pipeline Engineer/Data Lineage Engineer to support the development of data lineage solutions for our clients' existing data pipelines. In this role, you will collaborate with cross-functional teams to ensure the integrity, accuracy, and timeliness of the data lineage solution. Your responsibilities will involve working directly with clients to maximize the value derived from our product and assist them in achieving their contractual objectives.

**Experience:**
- 4-10 years of relevant experience

**Qualifications:**
- Proven track record as a Data Engineer or in a similar capacity, with hands-on experience in constructing and optimizing data pipelines and infrastructure.
- Demonstrated experience working with Big Data and related tools.
- Strong problem-solving and analytical skills to diagnose and resolve complex data-related issues.
- Profound understanding of data engineering principles and practices.
- Exceptional communication and collaboration abilities to work effectively in cross-functional teams and convey technical concepts to non-technical stakeholders.
- Adaptability to new technologies, tools, and methodologies within a dynamic environment.
- Proficiency in writing clean, scalable, and robust code using Python or similar programming languages. A background in software engineering is advantageous.

**Desirable Languages/Tools:**
- Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting.
- Experience with XML in transformation pipelines.
- Familiarity with major database technologies like Oracle, Snowflake, and MS SQL Server.
- Strong grasp of data modeling concepts, including relational and dimensional modeling.
- Exposure to big data technologies and frameworks such as Databricks, Spark, Kafka, and MS Notebooks.
- Knowledge of modern data architectures like lakehouse.
- Experience with CI/CD pipelines and version control systems such as Git.
- Understanding of ETL tools like Apache Airflow, Informatica, or SSIS.
- Familiarity with data governance and best practices in data management.
- Proficiency in cloud platforms and services like AWS, Azure, or GCP for deploying and managing data solutions.
- Proficiency in SQL for database management and querying.
- Exposure to tools like OpenLineage, Apache Spark Streaming, Kafka, or similar for real-time data streaming.
- Experience utilizing data tools in at least one cloud service: AWS, Azure, or GCP.

**Key Responsibilities:**
- Implement robust data lineage solutions utilizing Solidatus products to support business intelligence, analytics, and data governance initiatives.
- Collaborate with stakeholders to comprehend data lineage requirements and translate them into technical and business solutions.
- Develop and maintain lineage data models, semantic metadata systems, and data dictionaries.
- Ensure data quality, security, and compliance with relevant regulations.
- Uphold Solidatus implementation and data lineage modeling best practices at client sites.
- Stay updated on emerging technologies and industry trends to continually enhance data lineage architecture practices.

**Qualifications:**
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Proven experience in data architecture, focusing on large-scale data systems across multiple companies.
- Proficiency in data modeling, database design, and data warehousing concepts.
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Hadoop, Spark).
- Strong understanding of data governance, data quality, and data security principles.
- Excellent communication and interpersonal skills to thrive in a collaborative environment.

**Why Join Solidatus?**
- Participate in an innovative company that is shaping the future of data management.
- Collaborate with a dynamic and talented team in a supportive work environment.
- Opportunities for professional growth and career advancement.
- Flexible working arrangements, including hybrid work options.
- Competitive compensation and benefits package.

If you are passionate about data architecture and eager to make a significant impact, we invite you to apply now and become a part of our team at Solidatus.
Posted 6 days ago
4.0 - 8.0 years
10 - 20 Lacs
Navi Mumbai
Work from Office
Hi, Greetings from HR Central!

Job Title: AWS Data Engineer
Location: Airoli, Navi Mumbai (Work from Office)
Client: A leading global partner in sustainable construction
Experience: 4 to 8 years
Education: BE / B.Tech
Certification: AWS (preferred)

Job Description:
- Strong hands-on experience in Python programming (mandatory)
- Expertise in Data Engineering with exposure to large-scale projects/accounts
- Hands-on experience with AWS Big Data platforms: Redshift, Glue, Lambda, Data Lakes, Data Warehouses
- Strong skills in SQL, Spark, PySpark
- Experience in building data pipelines and data integration workflows
- Exposure to orchestration tools like Airflow, Luigi, Azkaban

Apply Now: Interested candidates can share their updated CV to rajalakshmi@hr-central.in

Thanks & Regards,
Rajalakshmi
HR Central
rajalakshmi@hr-central.in
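For orientation on the pipeline-building and Airflow exposure mentioned above, here is a minimal, hypothetical Airflow DAG wiring an extract-transform-load sequence; the task bodies are stubs and all names are assumptions.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from S3")

def transform():
    print("run the Spark/PySpark job")

def load():
    print("COPY results into Redshift")

with DAG(
    dag_id="orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Run the three steps in sequence once per day.
    t_extract >> t_transform >> t_load
```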
Posted 6 days ago
5.0 - 10.0 years
5 - 15 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Job Title: Data Engineer
Duration: Full-time role
Location: Pune/Bengaluru/Chennai/Hyderabad
Experience Level: 5-7 years

About the Role:
We are seeking a skilled Data Engineer with a strong background in the Microsoft technology stack. The ideal candidate will have solid experience in data analysis, reporting, and working with Power BI. Experience with Microsoft Fabric is a strong plus.

Key Responsibilities:
- Design, develop, and maintain data pipelines and ETL processes using Microsoft technologies.
- Collaborate with analysts and stakeholders to deliver high-quality, actionable data insights.
- Create interactive dashboards and reports in Power BI for various business units.
- Perform data modelling, cleansing, and transformation tasks.
- Ensure data accuracy, consistency, and security across all systems.
- Optimize queries and data flows for performance and scalability.

Required Skills:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of the Microsoft tech stack (e.g., SQL Server, Azure Data Factory, SSIS, etc.).
- Proficiency in Power BI for reporting and dashboard creation.
- Experience in data analysis and working with large datasets.
- Familiarity with data warehousing concepts and practices.
- Excellent problem-solving and communication skills.

Nice to Have:
- Experience with Microsoft Fabric.
- Knowledge of cloud-based data solutions (Azure preferred).
- Background in business intelligence or data science.
Posted 6 days ago
3.0 - 5.0 years
5 - 11 Lacs
Pune
Hybrid
Data Engineer

Job Overview
We are looking for a savvy Data Engineer to manage in-progress and upcoming data infrastructure projects. The candidate will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder using Python and a data wrangler who enjoys optimizing data systems and building them from the ground up. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

Responsibilities for Data Engineer
- Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements using Python and SQL / AWS / Snowflake.
- Identify, design, and implement internal process improvements: automating manual processes using Python, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL / AWS / Snowflake technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Work with data and analytics experts to strive for greater functionality in our data systems.

Qualifications for Data Engineer
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.

Desired Skillset
- 3+ years of experience in a Python scripting and data-specific role, with a Bachelor's degree.
- Experience with data processing and cleaning libraries (e.g., Pandas, NumPy), web scraping/web crawling for process automation, and APIs and how they work.
- Ability to debug code when it fails and find a solution. Basic knowledge of SQL Server job activity monitoring and of Snowflake.
- Experience with relational SQL and NoSQL databases, including PostgreSQL and Cassandra.
- Experience with most or all of the following cloud services: AWS, Azure, Snowflake, Google. Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.

Data Analyst

Summary
In this role, you will be part of the centralised global office based in India and work closely with each of our markets globally to understand clients' communication objectives, access multiple data sources and visualise them using Tableau/Datorama, support the ETL process using MS SQL / Alteryx Flow, and bring sound knowledge of Excel VBA.

Key Responsibilities
- Own the design, development, and maintenance of ongoing metrics, reports, analyses, and dashboards on the key drivers of our business.
- Partner with operations/business teams to consult on, develop, and implement KPIs, automated reporting/process solutions, and data infrastructure improvements to meet business needs.
- Enable effective decision making by retrieving and aggregating data from multiple sources and compiling it into a digestible and actionable format.
- Manage on-time delivery of regular client reports, including: building reports from the data warehouse; reviewing completed reports for anomalies and discrepancies; troubleshooting data issues/discrepancies; ensuring formatting and delivery parameters are met.
- Update Tableau and Excel dashboards as required for daily/weekly client reporting.
- Investigate and understand the opportunities of new data sources in the context of integration into Tableau.
- Support data cleansing and manipulation processes, including but not limited to: taxonomy classification; conversion re-naming and grouping; removal of test/ghost impressions.

Desired Skills
- Minimum 2+ years of experience in analytics.
- Strong verbal/written communication and data presentation skills, including an ability to effectively communicate with both business and technical teams.
- Hands-on experience in creating complex Excel reports and SQL queries joining multiple datasets.
- Data visualization tools such as QuickSight / Tableau / Power BI / Datorama.
- An ability and interest in working in a fast-paced, ambiguous, and rapidly changing environment.
- Experience in developing requirements and formulating business metrics for reporting, and familiarity with data visualization tools, e.g., Tableau, Power BI.
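To ground the Python-plus-Snowflake pipeline work described in the Data Engineer role above, here is a hedged sketch using the Snowflake Python connector; the account, credentials, and table are placeholders, and a production load would normally stage files and use COPY INTO rather than row inserts.

```python
import pandas as pd
import snowflake.connector

df = pd.read_csv("cleaned_orders.csv")  # hypothetical cleaned extract

conn = snowflake.connector.connect(
    account="xy12345",          # placeholder account locator
    user="ETL_USER",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS ORDERS_CLEAN (
        ORDER_ID STRING, ORDER_DATE DATE, AMOUNT FLOAT
    )
""")

# Small-scale insert for illustration; bulk loads belong in a stage + COPY INTO.
cur.executemany(
    "INSERT INTO ORDERS_CLEAN (ORDER_ID, ORDER_DATE, AMOUNT) VALUES (%s, %s, %s)",
    list(df[["order_id", "order_date", "amount"]].itertuples(index=False, name=None)),
)
conn.commit()
cur.close()
conn.close()
```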
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Delhi
On-site
Would you like to be part of an exciting, innovative, and high-growth startup from one of the largest and most well-respected business houses in the country, the Hero Group?

Hero Vired is a premium learning experience offering industry-relevant programs and world-class partnerships to create the change-makers of tomorrow. At Hero Vired, we believe everyone is made of big things. With the experience, knowledge, and expertise of the Hero Group, Hero Vired is on a mission to change the way we learn. Hero Vired aims to give learners knowledge, skills, and expertise through deeply engaging and holistic experiences, closely mapped with industry, to empower them to transform their aspirations into reality. The focus will be on disrupting and reimagining university education and skilling for working professionals by offering high-impact online certification and degree programs.

The illustrious and renowned US$5 billion diversified Hero Group is a conglomerate of Indian companies with primary interests and operations in automotive manufacturing, financing, renewable energy, electronics manufacturing, and education. The Hero Group (BML Munjal family) companies include Hero MotoCorp, Hero FinCorp, Hero Future Energies, Rockman Industries, Hero Electronix, Hero Mindmine, and the BML Munjal University. For detailed information, visit Hero Vired.

Role: Technical Recruiter - Career Services
Location: Delhi (Sultanpur)
Job Type: Full Time (Work from Office)
Experience Level: 2 to 4 years
Function: Career Services

Role Overview:
As a Technical Recruiter - Career Services at Hero Vired, you will play a pivotal role in contributing to the overall success of the Talent Acquisition team. You will be responsible for attracting talent, providing effective thought leadership, setting benchmarks, analyzing data/metrics, and prioritizing/planning to meet business objectives. Working in a collaborative and engaging environment, you will manage a team across multiple locations and work closely with business leaders to influence and deliver high-quality hires and a great candidate experience through all aspects of recruitment.

Primary Responsibilities:
- Quality candidate search from the rich pool of Hero Vired's learners based on job opportunities.
- Mentoring candidates for job opportunities and tracking application status from applying till selection.
- JD posting, data upkeep, reports, and analysis.
- Perform other related duties as assigned.

Qualifications/Skills:
- 2-4 years of experience as an IT Recruiter, preferably in edtech or an established recruitment and staffing firm.
- 2+ years of experience hiring for IT roles in Data Science, Cloud, DevOps, Software Development, Data Engineer, Data Analyst, etc.
- Strong written and verbal English communication skills.
- Bachelor's Degree in Human Resource Management or equivalent experience is required.
- Must have a proven track record in high-volume, entry-level recruitment, recruitment systems, and market research capabilities.
- Ability to work in ambiguity and a constantly changing environment to meet the dynamic needs of the business.
- Efficient time management.
- Ability to express innovative thoughts/ideas about recruiting and the overall learners' experience.
- Experience in delivering reports and hiring/placements dashboards to inform business leaders on the progress of key recruitment initiatives and hiring metrics.
Posted 1 week ago
The data engineer job market in India is rapidly growing as organizations across various industries are increasingly relying on data-driven insights to make informed decisions. Data engineers play a crucial role in designing, building, and maintaining data pipelines to ensure that data is accessible, reliable, and secure for analysis.
The average salary range for data engineer professionals in India varies based on experience and location. Entry-level data engineers can expect to earn anywhere between INR 4-6 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.
The typical career progression for a data engineer in India may include roles such as Junior Data Engineer, Data Engineer, Senior Data Engineer, Lead Data Engineer, and eventually Chief Data Engineer. As professionals gain more experience and expertise in handling complex data infrastructure, they may move into management roles such as Data Engineering Manager.
In addition to strong technical skills in data engineering, professionals in this field are often expected to have knowledge of programming languages such as Python, SQL, and Java. Familiarity with cloud platforms like AWS, GCP, or Azure, as well as proficiency in data warehousing technologies, is also beneficial for data engineers.
As you explore data engineer jobs in India, remember to showcase your technical skills, problem-solving abilities, and experience in handling large-scale data projects during interviews. Stay updated with the latest trends in data engineering and continuously upskill to stand out in this competitive job market. Prepare thoroughly, apply confidently, and seize the opportunities that come your way!