10.0 - 15.0 years
30 - 40 Lacs
Hyderabad, Pune, Greater Noida
Work from Office
Responsibilities:
* Design and build data architecture frameworks leveraging Azure services (Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage, Azure SQL Database, ADLS Gen2, Synapse Engineering, Fabric Notebooks, PySpark, Scala, Python, etc.).
* Define and implement reference architectures and architecture blueprints.
* Demonstrated experience with, and the ability to speak to, a wide variety of data engineering tools and architectures across cloud providers, especially the Azure platform.
* Experience in building data products, data processing frameworks, metadata-driven ETL pipelines (illustrated in the sketch below), data security, data standardization, data quality, and data reconciliation workflows.
* Extensive experience building data products on the Microsoft Azure/Fabric platform: Azure Managed Instance, Microsoft Fabric, Lakehouse, Synapse Engineering, and OneLake.

Requirements:
* 10+ years of experience in Data Warehousing and Azure Cloud technologies.
* Strong hands-on experience with Microsoft Fabric, Synapse, ADF, SQL, and Python/PySpark.
* Proven expertise in designing and implementing data architectures on Azure using Microsoft Fabric, Azure Synapse, ADF, and Fabric notebooks.
* Exposure to Azure DevOps and Business Intelligence.
* Solid understanding of data governance, data security, and compliance.
* Excellent communication and collaboration skills.
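As a minimal sketch of the metadata-driven ETL pattern mentioned above: pipeline definitions live in a configuration structure rather than in code, and a generic runner iterates over them. This is illustrative only; the paths, table names, and helper functions are hypothetical and not taken from the posting.

```python
# Minimal sketch of a metadata-driven ETL loop (hypothetical names throughout).
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class PipelineConfig:
    source_path: str    # e.g. an ADLS Gen2 folder
    target_table: str   # e.g. a Synapse / Fabric Lakehouse table
    transform: str      # key into the TRANSFORMS registry


TRANSFORMS: Dict[str, Callable[[List[dict]], List[dict]]] = {
    "dedupe": lambda rows: list({tuple(sorted(r.items())): r for r in rows}.values()),
    "passthrough": lambda rows: rows,
}

PIPELINES = [
    PipelineConfig("raw/customers/", "silver.customers", "dedupe"),
    PipelineConfig("raw/orders/", "silver.orders", "passthrough"),
]


def extract(path: str) -> List[dict]:
    # Placeholder: a real pipeline would read from ADLS or a source system here.
    return [{"id": 1, "source": path}]


def load(rows: List[dict], table: str) -> None:
    # Placeholder: a real pipeline would write to the warehouse here.
    print(f"loading {len(rows)} rows into {table}")


def run_all() -> None:
    # The runner is generic: adding a pipeline means adding metadata, not code.
    for cfg in PIPELINES:
        rows = extract(cfg.source_path)
        rows = TRANSFORMS[cfg.transform](rows)
        load(rows, cfg.target_table)


if __name__ == "__main__":
    run_all()
```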
Posted 3 weeks ago
3.0 - 5.0 years
3 - 8 Lacs
Bangalore Rural, Bengaluru
Work from Office
Job Title: Data Engineer (Mid-Level)
Experience: 3 to 5 Years
Location: Bangalore
Department: Data Engineering / Analytics / IT

Summary: entomo is an Equal Opportunity Employer. The company promotes and supports a diverse workforce at all levels across the Company. The Company ensures that its associates or potential hires, third-party support staff, and suppliers are not discriminated against, directly or indirectly, as a result of their colour, creed, caste, race, nationality, ethnicity or national origin, marital status, pregnancy, age, disability, religion or similar philosophical belief, sexual orientation, gender or gender reassignment, etc.

We are looking for a skilled and experienced Data Engineer with 3 to 5 years of experience to design, build, and optimize scalable data pipelines and infrastructure. The ideal candidate will work closely with data scientists, analysts, and software engineers to ensure reliable and efficient data delivery throughout our data ecosystem.

Key Responsibilities:
* Design, implement, and maintain robust data pipelines using ETL/ELT frameworks.
* Build and manage data warehousing solutions (e.g., Snowflake, Redshift, BigQuery).
* Optimize data systems for performance, scalability, and cost-efficiency.
* Ensure data quality, consistency, and integrity across various sources.
* Collaborate with cross-functional teams to integrate data from multiple business systems.
* Implement data governance, privacy, and security best practices.
* Monitor and troubleshoot data workflows and conduct root cause analysis on data-related issues.
* Automate data integration and validation processes using scripting languages (e.g., Python, SQL).
* Work with DevOps teams to deploy data solutions using CI/CD pipelines.

Required Skills & Qualifications:
* Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
* 3 to 5 years of experience in data engineering or a similar role.
* Strong proficiency in SQL and at least one programming language (Python, Java, or Scala).
* Experience with cloud platforms (AWS, Azure, or GCP).
* Hands-on experience with data pipeline tools is an added bonus (e.g., Apache Airflow, Luigi, DBT).
* Proficient in working with relational and NoSQL databases.
* Familiarity with big data tools (e.g., Spark, Hadoop) is a plus.
* Good understanding of data architecture, modeling, and warehousing principles.
* Excellent problem-solving and communication skills.

Preferred Qualifications:
* Certifications in cloud platforms or data engineering tools.
* Experience with containerization (Docker, Kubernetes).
* Knowledge of real-time data processing tools (Kafka, Flink).
* Exposure to data privacy regulations (GDPR, HIPAA).
Posted 3 weeks ago
7.0 - 12.0 years
20 - 22 Lacs
Pune, Bangalore Rural, Bengaluru
Hybrid
Job Title: AWS Data Engineer
Experience: 7+ years

* Overall experience of 4-8 years.
* Proven experience with SQL, Python, Amazon Redshift, Apache Spark (PySpark), AWS IAM, Amazon S3, and AWS Glue ETL is mandatory.
* Good to have: data modelling skills.
* Strong communication and collaboration skills, with the ability to work effectively in a team.
* The candidate needs to be very strong in SQL and PySpark/Python; AWS knowledge can be compromised if they are strong in SQL/PySpark.
* Good communication skills.
Posted 3 weeks ago
4.0 - 9.0 years
19 - 34 Lacs
Bengaluru, Mumbai (All Areas)
Hybrid
Experience: 4-7 years
Job Location: Bangalore/Mumbai

Wissen Technology is actively hiring for a highly skilled Data Engineer. The ideal candidate will have substantial experience in managing and optimizing complex data pipelines, with a strong emphasis on Python and SQL expertise. A deep understanding of data engineering principles, along with hands-on experience in Spark applications and Databricks, will make you an ideal fit for this role.

Key Responsibilities:
* Python Expertise: Develop, enhance, and optimize Python code for data processing, ETL pipelines, and automation tasks. (Essential)
* SQL Mastery: Write, understand, and improve complex SQL queries to handle large datasets, optimize performance, and ensure data integrity. (Essential)
* Data Engineering: Build and manage scalable data pipelines using DBT or Apache Spark/Databricks. Optimize data processing tasks to ensure efficiency and scalability. An understanding of Airflow or a similar orchestration tool is useful. (Essential)
* Data Warehouse/Data Modeling: Strong understanding of data models and data warehouses. Demonstrate expertise in data modeling concepts, schema design, and decomposing models. The ability to understand and implement technical and business data quality rules is essential. (Essential)
* Cloud Platforms: Experience with cloud-based platforms (AWS, Google Cloud, Snowflake, etc.). Familiarity with cloud-native data services and tools is desirable. (Good to have)
* Soft Skills: Strong problem-solving abilities and analytical skills. Excellent communication and teamwork skills. Ability to work in an Agile development environment.
Posted 3 weeks ago
6.0 - 11.0 years
15 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Key Responsibilities:
* Design and develop scalable data pipelines to migrate user knowledge objects from Splunk to ClickHouse and Grafana.
* Implement data ingestion, transformation, and validation processes to ensure data integrity and performance (see the ingestion sketch below).
* Collaborate with cross-functional teams to automate and optimize data migration workflows.
* Monitor and troubleshoot data pipeline performance and resolve issues proactively.
* Work closely with observability engineers and analysts to understand data requirements and deliver solutions.
* Contribute to the continuous improvement of the observability stack and migration automation tools.

Required Skills and Qualifications:
* Proven experience as a Big Data Developer or Engineer working with large-scale data platforms.
* Strong expertise with ClickHouse or other columnar databases, including query optimization and schema design.
* Hands-on experience with Splunk data structures, dashboards, and reports.
* Proficiency in data pipeline development using technologies such as Apache Spark, Kafka, or similar frameworks.
* Strong programming skills in Python, Java, or Scala.
* Experience with data migration automation and scripting.
* Familiarity with Grafana for data visualization and monitoring.
* Understanding of observability concepts and monitoring systems.

Nice to Have:
* Experience with Bosun or other alerting platforms.
* Knowledge of cloud-based big data services and infrastructure as code.
* Familiarity with containerization and orchestration tools (Docker, Kubernetes).
* Experience working in agile POD-based teams.
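As a rough illustration of the kind of ingestion pipeline this role describes, the sketch below consumes events from a Kafka topic and batch-inserts them into ClickHouse. It is a minimal, assumption-laden example: the topic name, table schema, and connection details are hypothetical, and the posting does not prescribe these specific libraries (kafka-python and clickhouse-driver are used here purely for illustration).

```python
# Illustrative Kafka -> ClickHouse ingestion sketch (hypothetical topic/table names).
import json

from kafka import KafkaConsumer          # pip install kafka-python
from clickhouse_driver import Client     # pip install clickhouse-driver

consumer = KafkaConsumer(
    "observability-events",              # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
ch = Client(host="localhost")

BATCH_SIZE = 1000
batch = []

for message in consumer:
    event = message.value
    # Basic validation/transformation before loading.
    if "timestamp" not in event or "service" not in event:
        continue
    batch.append((event["timestamp"], event["service"], event.get("level", "INFO")))

    if len(batch) >= BATCH_SIZE:
        # Columnar inserts into ClickHouse are most efficient in batches.
        ch.execute(
            "INSERT INTO observability.events (ts, service, level) VALUES",
            batch,
        )
        batch = []
```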
Posted 3 weeks ago
6.0 - 11.0 years
15 - 30 Lacs
Noida, Pune, Bengaluru
Hybrid
We are looking for a Snowflake Data Engineer with deep expertise in Snowflake and DBT to help us build and scale our modern data platform.

Key Responsibilities:
* Design and build scalable ELT pipelines in Snowflake using DBT.
* Develop efficient, well-tested DBT models (staging, intermediate, and marts layers).
* Implement data quality, testing, and monitoring frameworks to ensure data reliability and accuracy.
* Optimize Snowflake queries, storage, and compute resources for performance and cost-efficiency.
* Collaborate with cross-functional teams to gather data requirements and deliver data solutions.

Required Qualifications:
* 5+ years of experience as a Data Engineer, with at least 4 years working with Snowflake.
* Proficient with DBT (Data Build Tool), including Jinja templating, macros, and model dependency management.
* Strong understanding of ELT patterns and modern data stack principles.
* Advanced SQL skills and experience with performance tuning in Snowflake.

Interested candidates, share your CV at himani.girnar@alikethoughts.com with the details below:
* Candidate's name
* Email and alternate email ID
* Contact and alternate contact no.
* Total exp
* Relevant experience
* Current org
* Notice period
* CCTC
* ECTC
* Current location
* Preferred location
* PAN card no.
Posted 3 weeks ago
3.0 - 6.0 years
15 - 20 Lacs
Bengaluru
Work from Office
About Zscaler

Serving thousands of enterprise customers around the world, including 40% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world's largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler.

Our Engineering team built the world's largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy.

We're looking for an experienced Senior Software Development Engineer to join our team. Reporting to the Engineering Manager, you'll be responsible for:
* Designing, analyzing, and troubleshooting large-scale distributed systems
* Contributing to continuous monitoring, vulnerability scanning, patching, and reporting of the system

What We're Looking For (Minimum Qualifications):
* 2+ years of public cloud experience (AWS, GCP, or Azure) and Kubernetes
* Expertise in designing, analyzing, and troubleshooting large-scale distributed systems
* Experience with Infrastructure as Code and programming languages like Python and Java
* Experience in data engineering (Spark, DBT, Temporal, SQL, etc.) is a plus but not a must-have

What Will Make You Stand Out (Preferred Qualifications):
* Experience in continuous monitoring, vulnerability scanning, patching, and reporting
* Experience with multiple cloud providers (AWS, Azure) and both relational and non-relational databases for microservices
* Bachelor's degree in Science, Engineering, IT, or equivalent

#LI-GL2 #LI-Hybrid

At Zscaler, we are committed to building a team that reflects the communities we serve and the customers we work with. We foster an inclusive environment that values all backgrounds and perspectives, emphasizing collaboration and belonging. Join us in our mission to make doing business seamless and secure.

Our Benefits program is one of the most important ways we support our employees. Zscaler proudly offers comprehensive and inclusive benefits to meet the diverse needs of our employees and their families throughout their life stages, including:
* Various health plans
* Time off plans for vacation and sick time
* Parental leave options
* Retirement options
* Education reimbursement
* In-office perks, and more!

By applying for this role, you adhere to applicable laws, regulations, and Zscaler policies, including those related to security and privacy standards and guidelines.
Zscaler is committed to providing equal employment opportunities to all individuals. We strive to create a workplace where employees are treated with respect and have the chance to succeed. All qualified applicants will be considered for employment without regard to race, color, religion, sex (including pregnancy or related medical conditions), age, national origin, sexual orientation, gender identity or expression, genetic information, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. See more information by clicking on the Know Your Rights: Workplace Discrimination is Illegal link.

Pay Transparency: Zscaler complies with all applicable federal, state, and local pay transparency rules.

Zscaler is committed to providing reasonable support (called accommodations or adjustments) in our recruiting processes for candidates who are differently abled, have long-term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support.
Posted 3 weeks ago
7.0 - 8.0 years
7 - 9 Lacs
Bengaluru
Work from Office
We are seeking an experienced Data Engineer to join our innovative data team and help build the scalable data infrastructure, software consultancy, and development services that power business intelligence, analytics, and machine learning initiatives. The ideal candidate will design, develop, and maintain robust, high-performance data pipelines and solutions while ensuring data quality, reliability, and accessibility across the organization, working with cutting-edge technologies like Python, Microsoft Fabric, Snowflake, Dataiku, SQL Server, Oracle, and PostgreSQL.

Required Qualifications:
* 5+ years of experience in a data engineering role.
* Programming languages: proficiency in Python.
* Cloud platforms: hands-on experience with Azure (Fabric, Synapse, Data Factory, Event Hubs).
* Databases: strong SQL skills and experience with both relational (Microsoft SQL Server, PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra) databases.
* Version control: proficiency with Git and collaborative development workflows.
* Proven track record of building production-grade data pipelines or solutions handling large-scale data.

Desired Qualifications:
* Experience with containerization (Docker) and orchestration (Kubernetes) technologies.
* Knowledge of machine learning workflows and MLOps practices.
* Familiarity with data visualization tools (Tableau, Looker, Power BI).
* Experience with stream processing and real-time analytics.
* Experience with data governance and compliance frameworks (GDPR, CCPA).
* Contributions to open-source data engineering projects.
* Relevant cloud certifications (e.g., Microsoft Certified: Azure Data Engineer Associate, AWS Certified Data Engineer, Google Cloud Professional Data Engineer).
* Specific experience or certifications in Microsoft Fabric, Dataiku, or Snowflake.
Posted 3 weeks ago
7.0 - 12.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Data Engineer – Python, Azure Databricks – 7 Years – Bangalore
Location – Bangalore

Are you a seasoned Data Engineer passionate about turning complex datasets into scalable insights? Here's your chance to build robust data platforms and pipelines that support global decision-making at scale, within a forward-thinking organization that champions innovation and excellence.

Your Future Employer – A global enterprise delivering high-impact technology and operational services to Fortune-level clients. Known for fostering a culture of innovation, agility, and collaboration.

Responsibilities –
1. Architect and implement data models and infrastructure to support analytics, reporting, and data science.
2. Build high-performance ETL pipelines and manage data integration from multiple sources.
3. Maintain data quality, governance, and security standards.
4. Collaborate with cross-functional teams to translate business needs into technical solutions.
5. Troubleshoot, optimize, and document scalable data workflows.

Requirements –
1. 7+ years of experience as a Data Engineer, with at least 4 years in cloud ecosystems.
2. Strong expertise in Azure (ADF, Data Lake Gen2, Databricks) or AWS.
3. Proficiency in Python and SQL; experience with Spark, Kafka, or Hadoop is a plus.
4. Deep understanding of data warehousing, OLAP, and data modelling.
5. Familiarity with visualization tools like Power BI, Tableau, or Looker.

What is in it for you –
* High-visibility projects with real-world impact.
* Access to cutting-edge cloud and big data technologies.
* Flexible hybrid work environment in Bangalore.
* Dynamic and collaborative global work culture.

Reach us: If you think this role aligns with your career, kindly write to me along with your updated CV at parul.arora@crescendogroup.in for a confidential discussion.

Disclaimer: Crescendo Global specializes in senior to C-level niche recruitment. We are passionate about empowering job seekers and employers with a memorable job search and leadership hiring experience. We do not discriminate based on race, religion, gender, or any other protected status.

Note: We receive a lot of applications on a daily basis, so it becomes difficult for us to get back to each candidate. Please assume that your profile has not been shortlisted in case you don't hear back from us in 1 week. Your patience is highly appreciated.

Profile Keywords – Data Engineer Bangalore, Azure Data Factory, Azure Data Lake, Azure Databricks, ETL Developer, Big Data Engineer, Python Data Engineer, SQL Developer, Data Pipeline Developer, Cloud Data Engineering, Data Warehousing, Data Modelling, Spark Developer, Kafka Engineer, Hadoop Jobs, Power BI Developer, Tableau Analyst, CI/CD for Data, Streaming Data Engineer, DataOps
Posted 3 weeks ago
5.0 - 8.0 years
25 - 35 Lacs
Gurugram, Bengaluru
Hybrid
Role & responsibilities
* Work with data product managers, analysts, and data scientists to architect, build, and maintain data processing pipelines in SQL or Python.
* Build and maintain a data warehouse / data lakehouse for analytics, reporting, and ML predictions.
* Implement DataOps and related DevOps practices focused on creating ETL pipelines for data analytics/reporting and ELT pipelines for model training (see the orchestration sketch below).
* Support, optimise, and transition our current processes to ensure well-architected implementations and best practices.
* Work in an agile environment within a collaborative agile product team using Kanban.
* Collaborate across departments and work closely with data science teams and with business (economists/data) analysts in refining their data requirements for various initiatives and data consumption requirements.
* Educate and train colleagues such as data scientists, analysts, and stakeholders in data pipelining and preparation techniques, making it easier for them to integrate and consume the data they need for their own use cases.
* Participate in ensuring compliance and governance during data use, so that data users and consumers use the data provisioned to them responsibly through data governance and compliance initiatives.
* Become a data and analytics evangelist: promote the available data and analytics capabilities and expertise to business unit leaders, and educate them in leveraging these.

Preferred candidate profile

What you'll need to be successful:
* 8+ years of professional experience with data processing environments used in large-scale digital applications.
* Extensive experience with programming in Python, Spark (Spark SQL), and SQL.
* Experience with warehouse technologies such as Snowflake, and data modelling, lineage, and data governance tools such as Alation.
* Professional experience of designing, building, and managing bespoke data pipelines (including ETL, ELT, and lambda architectures), using technologies such as Apache Airflow, Snowflake, Amazon Athena, AWS Glue, Amazon EMR, or other equivalents.
* Strong, fundamental technical expertise in cloud-native technologies, such as serverless functions, API gateways, relational and NoSQL databases, and caching.
* Experience in leading/mentoring data engineering teams.
* Experience in working in teams with data scientists and ML engineers, building automated pipelines for data pre-processing and feature extraction.
* An advanced degree in software/data engineering, computer/information science, or a related quantitative field, or equivalent work experience.
* Strong verbal and written communication skills and ability to work well with a wide range of stakeholders.
* Strong ownership, scrappy and biased for action.

Perks and benefits
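To make the orchestration requirement concrete, here is a minimal Apache Airflow DAG sketch for a daily ETL job. It is an illustrative assumption rather than part of the posting: the task names, extract/transform/load functions, and schedule are hypothetical placeholders.

```python
# Minimal Airflow DAG sketch for a daily ETL pipeline (all names hypothetical).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw data from a source system (API, S3, database, ...).
    return [{"user_id": 1, "spend": 42.0}]


def transform(**context):
    # Placeholder: clean/filter the extracted rows.
    rows = context["ti"].xcom_pull(task_ids="extract")
    return [r for r in rows if r["spend"] > 0]


def load(**context):
    # Placeholder: write the transformed rows to the warehouse (e.g. Snowflake).
    rows = context["ti"].xcom_pull(task_ids="transform")
    print(f"loaded {len(rows)} rows")


with DAG(
    dag_id="daily_spend_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```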
Posted 3 weeks ago
3.0 - 6.0 years
22 - 25 Lacs
Hyderabad
Remote
Company Overview: We are a fast-growing startup revolutionizing the contact center industry with GenAI-powered solutions. Our innovative platform is designed to enhance customer engagement.

Job Description: We are looking for a skilled and experienced Data Engineer to design, build, and optimize scalable data pipelines and architectures that power data-driven decision-making across the organization. The candidate should have a proven track record of writing complex stored procedures and optimizing query performance on large datasets.

Requirements:
* Architect, develop, and maintain scalable and secure data pipelines to process structured and unstructured data from diverse sources.
* Collaborate with data scientists, BI analysts, and business stakeholders to understand data requirements.
* Optimize data workflows and processing for performance; ensure data quality, reliability, and governance.
* Hands-on experience with modern data platforms such as Snowflake, Redshift, BigQuery, or Databricks.
* Strong knowledge of T-SQL and SQL Server Management Studio (SSMS).
* Experience in writing complex stored procedures and views, and in query performance tuning on large datasets.
* Strong understanding of database management systems (SQL, NoSQL) and data warehousing concepts.
* Good knowledge of, and hands-on experience in, tuning the database at the memory level and tweaking SQL queries.
* In-depth knowledge of data modeling principles and methodologies (e.g., relational, dimensional, NoSQL).
* Excellent analytical and problem-solving skills with meticulous attention to detail.
* Hands-on experience with data transformation techniques, including data mapping, cleansing, and validation.
* Proven ability to work independently and manage multiple priorities in a fast-paced environment.
* Work closely with cross-functional teams to gather and analyse requirements, develop database solutions, and support application development efforts.
* Knowledge of cloud database solutions (e.g., Azure SQL Database, AWS RDS).
Posted 3 weeks ago
3.0 - 5.0 years
17 - 22 Lacs
Mumbai, Gurugram
Work from Office
Locations: Gurugram - DLF Building; Mumbai - Hiranandani | Posted Yesterday | End date to apply: June 10, 2025 (10 days left to apply) | Job requisition ID: R_308095

Company: Mercer

Description: We are seeking a talented individual to join our Data Engineering team at Mercer. This role will be based in Gurgaon/Mumbai. This is a hybrid role that has a requirement of working at least three days a week in the office.

Senior Principal Engineer - Data Engineering

We will count on you to:
* Design, develop, and maintain scalable and robust data pipelines on Databricks.
* Collaborate with data scientists and analysts to understand data requirements and deliver solutions.
* Optimize and troubleshoot existing data pipelines for performance and reliability.
* Ensure data quality and integrity across various data sources.
* Implement data security and compliance best practices.
* Monitor data pipeline performance and conduct necessary maintenance and updates.
* Document data pipeline processes and technical specifications.
* Use analytical skills to solve complex problems associated with database development and management.
* Work with other teams, such as data scientists, business analysts, and Qlik developers, to identify organizational needs and design effective solutions.
* Provide technical leadership and guidance to the team. This may include code reviews, mentoring, and helping team members troubleshoot technical issues.
* Align the data engineering strategy with the wider organizational strategy. This might involve deciding which projects to prioritize, making technology choices, and planning for the team's growth and development.
* Ensure that all data engineering activities are compliant with relevant laws and regulations, and that data is stored and processed securely.
* Keep up to date with new technologies and methodologies in the field of data engineering, and foster a culture of innovation and continuous improvement within the team.
* Communicate effectively with both technical and non-technical stakeholders, explaining data infrastructure, strategies, and systems in an understandable way.

What you need to have:
* Bachelor's degree (BE/B.Tech) in Computer Science/IT/ECE, MIS, or a related qualification. A master's degree is always helpful.
* 3-5 years of experience in data engineering.
* Proficiency with Databricks or AWS (Glue, S3), Python, and Spark.
* Strong SQL skills and experience with relational databases.
* Knowledge of data warehousing concepts and ETL processes.
* Excellent problem-solving and analytical skills.
* Effective communication skills.

What makes you stand out:
* Exposure to any BI tool like Qlik (preferred tool), Power BI, Tableau, etc.
* Hands-on experience with SQL or PL/SQL.
* Experience with big data technologies (e.g., Hadoop, Kafka).
* Agile, JIRA, and SDLC process knowledge.
* Teamwork and collaboration skills.
* Strong quantitative and analytical skills.

Why join our team:
* We help you be your best through professional development opportunities, interesting work and supportive leaders.
* We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities.
* Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being.

Mercer, a business of Marsh McLennan (NYSE: MMC), is a global leader in helping clients realize their investment objectives, shape the future of work and enhance health and retirement outcomes for their people.
Marsh McLennan is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit mercer.com, or follow on LinkedIn and X.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law.

Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one anchor day per week on which their full team will be together in person.
Posted 3 weeks ago
7.0 - 12.0 years
9 - 14 Lacs
Gurugram, DLF
Work from Office
Locations: Gurugram - DLF Building | Posted 10 Days Ago | End date to apply: May 31, 2025 (10 hours left to apply) | Job requisition ID: R_295096

Company: Oliver Wyman

Description: Oliver Wyman is a global leader in management consulting. With offices in 60 cities across 29 countries, Oliver Wyman combines deep industry knowledge with specialized expertise in strategy, operations, risk management, and organization transformation. Our 5,000 professionals help clients optimize their business, improve their operations and risk profile, and accelerate their organizational performance to seize the most attractive opportunities. Oliver Wyman's thought leadership is evident in our agenda-setting books, white papers, research reports, and articles in the business press. Our clients are the CEOs and executive teams of the top Global 1000 companies. Visit our website for more details about Oliver Wyman.

Job specification
Job title: Senior Data Engineer
Department: OWG Tech
Office/region: India
Reports to: Director of Data Engineering

Job Overview: The OWG Technology department is seeking a highly skilled and motivated Senior Data Engineer to play a critical role in our data transformation program. In this position, you will lead major projects and workstreams, collaborating closely with stakeholders to ensure the successful implementation of data solutions. You will also mentor and coach junior team members, providing guidance during sprints as a technical lead. Your expertise in cloud data platforms, particularly the Databricks Lakehouse, will be essential in driving innovation and best practices within the team.

Key Responsibilities:
* Lead the design and implementation of processes to ingest data from various sources into the Databricks Lakehouse platform, ensuring alignment with architectural and engineering standards.
* Oversee the development, maintenance, and optimization of data models and ETL pipelines that support the Medallion Architecture (Bronze, Silver, Gold layers) to enhance data processing efficiency and facilitate data transformation.
* Utilize Databricks to integrate, consolidate, and cleanse data, ensuring accuracy and readiness for analysis, while leveraging Delta Lake for versioned data management.
* Implement and manage Unity Catalog for centralized data governance, ensuring proper data access, security, and compliance with organizational policies and regulations.
* Collaborate with business analysts, data scientists, and stakeholders to understand their data requirements and deliver tailored solutions that leverage the capabilities of the Databricks Lakehouse platform.
* Promote available data and analytics capabilities to business stakeholders, educating them on how to effectively leverage these tools and the Medallion Architecture for their analytical needs.

Experience:
* Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
* Minimum of 7+ years of experience in data engineering or a related data role, with a proven track record of leading projects and initiatives.
* Expertise in designing and implementing production-grade Spark-based solutions.
* Expertise in query tuning, performance tuning, troubleshooting, and debugging Spark or other big data solutions.
* Proficient in big data technologies such as Spark/Delta, Hadoop, NoSQL, MPP, and OLAP.
* Proficient in cloud architecture, systems, and principles, particularly in AWS.
* Proficient in programming languages such as Python, R, Scala, or Java.
* Expertise in scaling ETL pipelines for performance and cost-effectiveness.
* Experience in building and scaling streaming data pipelines.
* Strong understanding of DevOps tools and best practices for data engineering, including CI/CD, unit and integration testing, automation, and orchestration.
* Cloud or Databricks certification is highly desirable.

Skills and Attributes:
* Full professional proficiency in both written and spoken English.
* Strong problem-solving and troubleshooting skills.
* Excellent communication skills, both verbal and written, with the ability to articulate complex concepts clearly and engage effectively with diverse audiences.
* Proven ability to lead and mentor junior team members, fostering a collaborative and high-performing team environment.
* Neutral toward technology, vendor, and product choices, prioritizing results over personal preferences.
* Resilient and composed in the face of opposition to ideas, demonstrating a collaborative spirit.
* Lead the migration of existing ETL processes from Informatica IICS and SSIS to cloud-based data pipelines within the Databricks environment, ensuring minimal disruption and maximum efficiency.
* Act as a technical lead during sprints, providing guidance and support to team members, and ensuring adherence to best practices in data engineering.
* Engage with clients and stakeholders to support architectural designs, address technical queries, and provide strategic guidance on utilizing the Databricks Lakehouse platform effectively.
* Stay updated on industry trends and emerging technologies in data engineering, particularly those related to Databricks, cloud data solutions, and ETL migration strategies, continuously enhancing your skills and knowledge.
* Demonstrate excellent problem-solving skills, with an ability to see and solve issues before they affect business productivity.
* Demonstrate thought leadership by contributing to the development of best practices, standards, and documentation for data engineering processes within the organization.

Why join our team:
* We help you be your best through professional development opportunities, interesting work and supportive leaders.
* We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities.
* Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being.

Marsh McLennan (NYSE: MMC) is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit marshmclennan.com, or follow on LinkedIn and X.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law.

Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one anchor day per week on which their full team will be together in person.
Posted 3 weeks ago
3.0 - 5.0 years
13 - 18 Lacs
Mumbai, Pune
Work from Office
Locations: Mumbai - Hiranandani; Pune - Business Bay | Posted 3 Days Ago | End date to apply: June 30, 2025 (30 days left to apply) | Job requisition ID: R_307045

Company: Marsh

Description: We are seeking a talented individual to join our Benefit Analytics Team at Marsh. This role will be based in Mumbai. This is a hybrid role that has a requirement of working at least three days a week in the office.

Principal Engineer - Data Science

We will count on you to:
* Design and implement data analytics products that utilize web-based technologies to solve complex business problems and drive strategic outcomes.
* Utilize strong conceptual skills to explore the "Art of the Possible" in analytics, integrating data, market trends, and cutting-edge technologies to inform business strategies.
* Manage and manipulate large datasets from diverse sources, ensuring data quality through cleaning, consolidation, and transformation into meaningful insights.
* Conduct exploratory data analysis (EDA) to identify patterns and trends, reporting key metrics and synthesizing disparate datasets for comprehensive insights.
* Perform rigorous quality assurance (QA) on datasets, ensuring accuracy, logical consistency, and alignment with analytical dashboards.
* Automate data capture processes from various sources, streamlining data cleaning and insight generation workflows.
* Apply knowledge of insurance claims, policies, terminologies, health risks, and wellbeing to enhance analytical models and insights.
* Collaborate with cross-functional teams to develop and deploy machine learning models and predictive analytics solutions.
* Utilize SQL for database management and data manipulation, with a focus on optimizing queries and data retrieval processes.
* Develop ETL automation pipelines using tools such as Python, GenAI, and ChatGPT APIs, ensuring efficient and optimized code.
* Communicate complex data-driven solutions clearly and effectively, translating technical findings into actionable business recommendations.
* Knowledge of LLM/RAG/Power BI/Tableau is preferred.

What you need to have:
* Educational background: A Bachelor's or Master's degree in Computer Science, Information Technology, Mathematics, Statistics, or a related field is essential. A strong academic foundation will support your analytical and technical skills.
* Experience: 3-5 years of progressive experience in a Data Science or Data Analytics role, demonstrating a solid track record of delivering impactful data-driven insights and solutions.
* Technical proficiency:
  - Programming skills: Advanced proficiency in Python is required, with hands-on experience in data engineering and ETL processes. Familiarity with exploratory data analysis (EDA) techniques is essential.
  - API knowledge: Intermediate experience with ChatGPT APIs or similar technologies is a plus, showcasing your ability to integrate AI solutions into data workflows.
  - Business intelligence tools: A good understanding of BI tools such as Qlik Sense, Power BI, or Tableau is necessary for effective data visualization and reporting.
  - Data extraction expertise: Proven ability to extract and manipulate data from diverse sources, including web platforms, PDFs, Excel files, and various databases. A broad understanding of analytics methodologies is crucial for transforming raw data into actionable insights.
* Analytical mindset: Strong analytical and problem-solving skills, with the ability to interpret complex data sets and communicate insights effectively to stakeholders.
* Adaptability to new technologies: A keen interest in AI and emerging technologies, with a willingness to learn and adapt to new tools and methodologies in the rapidly evolving data landscape.

What makes you stand out:
* Degree or certification in Data Management, Statistics, Analytics, or BI tools (Qlik Sense & Tableau) would be preferred.
* Experience in the healthcare sector, working with multinational clients.

Why join our team:
* We help you be your best through professional development opportunities, interesting work and supportive leaders.
* We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities.
* Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being.

Mercer, a business of Marsh McLennan (NYSE: MMC), is a global leader in helping clients realize their investment objectives, shape the future of work and enhance health and retirement outcomes for their people.

Marsh McLennan is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit mercer.com, or follow on LinkedIn and X.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law.

Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one anchor day per week on which their full team will be together in person.
Posted 3 weeks ago
6.0 - 10.0 years
16 - 25 Lacs
Hyderabad
Work from Office
Key Responsibilities:
* Architect and implement modular, test-driven ELT pipelines using dbt on Snowflake.
* Design layered data models (e.g., staging, intermediate, and mart layers / medallion architecture) aligned with dbt best practices.
* Lead ingestion of structured and semi-structured data from APIs, flat files, cloud storage (Azure Data Lake, AWS S3), and databases into Snowflake.
* Optimize Snowflake for performance and cost: warehouse sizing, clustering, materializations, query profiling, and credit monitoring.
* Apply advanced dbt capabilities including macros, packages, custom tests, sources, exposures, and documentation using dbt docs.
* Orchestrate workflows using dbt Cloud, Airflow, or Azure Data Factory, integrated with CI/CD pipelines.
* Define and enforce data governance and compliance practices using Snowflake RBAC, secure data sharing, and encryption strategies.
* Collaborate with analysts, data scientists, architects, and business stakeholders to deliver validated, business-ready data assets.
* Mentor junior engineers, lead architectural/code reviews, and help establish reusable frameworks and standards.
* Engage with clients to gather requirements, present solutions, and manage end-to-end project delivery in a consulting setup.

Required Qualifications:
* 5 to 8 years of experience in data engineering roles, with 3+ years of hands-on experience working with Snowflake and dbt in production environments.

Technical Skills:
* Cloud data warehouse & transformation stack:
  - Expert-level knowledge of SQL and Snowflake, including performance optimization, storage layers, query profiling, clustering, and cost management.
  - Experience in dbt development: modular model design, macros, tests, documentation, and version control using Git.
* Orchestration and integration:
  - Proficiency in orchestrating workflows using dbt Cloud, Airflow, or Azure Data Factory.
  - Comfortable working with data ingestion from cloud storage (e.g., Azure Data Lake, AWS S3) and APIs.
* Data modelling and architecture:
  - Dimensional modelling (Star/Snowflake schemas) and slowly changing dimensions.
  - Knowledge of modern data warehousing principles.
  - Experience implementing Medallion Architecture (Bronze/Silver/Gold layers).
  - Experience working with Parquet, JSON, CSV, or other data formats.
* Programming languages:
  - Python: for data transformation, notebook development, and automation.
  - SQL: strong grasp of SQL for querying and performance tuning.
  - Jinja (nice to have): exposure to Jinja for advanced dbt development.
* Data engineering & analytical skills:
  - ETL/ELT pipeline design and optimization.
  - Exposure to AI/ML data pipelines, feature stores, or MLflow for model tracking (good to have).
  - Exposure to data quality and validation frameworks.
* Security & governance:
  - Experience implementing data quality checks using dbt tests.
  - Data encryption, secure key management, and security best practices for Snowflake and dbt.

Soft Skills & Leadership:
* Ability to thrive in client-facing roles with competing/changing priorities and fast-paced delivery cycles.
* Stakeholder communication: collaborate with business stakeholders to understand objectives and convert them into actionable data engineering designs.
* Project ownership: end-to-end delivery including design, implementation, and monitoring.
* Mentorship: guide junior engineers and establish best practices; build new skills in the team.
* Agile practices: work in sprints, participate in scrum ceremonies, and contribute to story estimation.

Education:
* Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
Certifications such as Snowflake SnowPro Advanced, dbt Certified Developer are a plus.
Posted 3 weeks ago
5.0 - 8.0 years
7 - 12 Lacs
Pune
Hybrid
We are looking for a highly skilled Senior Python Developer for a 6-month contractual role. The position involves designing and implementing data-oriented and scalable backend solutions using Python and related technologies. The candidate must have 5-8 years of experience and be well-versed in distributed systems, cloud platforms (AWS/GCP), and data pipelines. Strong expertise in Airflow, Kafka, SQL, and modern software development practices (TDD, CI/CD, DevSecOps) is essential. Exposure to AdTech, ML/AI, SaaS, and container technologies (Docker/Kubernetes) is a strong plus. The position is hybrid, based in Pune, and only immediate joiners are eligible.
Posted 3 weeks ago
3.0 - 5.0 years
10 - 12 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Technical Requirements:
* 3 to 6 years of experience with IT and Azure data engineering technologies.
* Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services.
* Working experience in Python, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats like JSON and Parquet.
* Experience in creating ADF pipelines to source and process data sets.
* Experience in creating Databricks notebooks to cleanse, transform, and enrich data sets (see the illustrative sketch after this posting).
* Development experience in the orchestration of pipelines.
* Good understanding of SQL, databases, and data warehouse systems, preferably Teradata.
* Experience in deployment and monitoring techniques.
* Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources.
* Experience in handling operations/integration with the source repository.
* Must have good knowledge of data warehouse concepts and data warehouse modelling.
* Working knowledge of ServiceNow, including resolving incidents, handling change requests/service requests, and reporting on metrics to provide insights.
* Collaborate with the project team to understand tasks, model tables using data warehouse best practices, and develop data pipelines to ensure the efficient delivery of data.

Non-technical requirements:
* Work with project leaders to model tables using data warehouse best practices and develop data pipelines to ensure the efficient delivery of data.
* Think and work agile, from estimation to development, including testing, continuous integration, and deployment.
* Manage numerous project tasks concurrently and strategically, prioritizing when necessary.
* Proven ability to work as part of a virtual team of technical consultants working from different locations (including onsite) around project delivery goals.

Technologies:
* Azure Data Factory
* Azure Databricks
* Azure Synapse
* PySpark/SQL
* ADLS, Blob Storage
* Azure DevOps with CI/CD implementation

Nice-to-have skill sets:
* Business Intelligence tools (preferably Power BI)
* DP-203 certified

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
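For illustration only, here is a minimal PySpark snippet of the cleanse/transform/enrich step a Databricks notebook in this role might contain. The storage paths, column names, and reference table are hypothetical assumptions, not specified by the posting.

```python
# Minimal PySpark cleanse/transform sketch (hypothetical paths and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cleanse_orders").getOrCreate()

# Source: raw JSON landed in the data lake.
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/")

cleansed = (
    raw.dropDuplicates(["order_id"])                     # remove duplicate events
       .filter(F.col("amount").isNotNull())              # drop incomplete records
       .withColumn("order_date", F.to_date("order_ts"))  # standardize the date column
)

# Enrich with a small reference table (e.g., country codes).
countries = spark.read.parquet("abfss://ref@examplelake.dfs.core.windows.net/countries/")
enriched = cleansed.join(countries, on="country_code", how="left")

# Target: curated Parquet zone for downstream Synapse / reporting use.
enriched.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/"
)
```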
Posted 3 weeks ago
8.0 - 12.0 years
16 - 27 Lacs
Bengaluru
Work from Office
Role & responsibilities

We are looking for a skilled Data Engineer with strong expertise in Java and hands-on experience with Hadoop or Spark. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and processing systems.

Key Responsibilities:
* Develop and maintain data pipelines using Java.
* Work with big data technologies such as Hadoop or Spark to process large datasets.
* Optimize data workflows and ensure high performance and reliability.
* Collaborate with data scientists, analysts, and other engineers on data-related initiatives.

Requirements:
* Strong programming skills in Java.
* Hands-on experience with Hadoop or Spark.
* Experience with data ingestion, transformation, and storage solutions.
* Familiarity with distributed systems and big data architecture.

Preferred candidate profile:
* Hadoop
* Spark
* Java

If interested, connect with shravani.m@genxhire.in or 7710889351.
Posted 3 weeks ago
3.0 - 6.0 years
25 - 33 Lacs
Bengaluru
Work from Office
Overview

Annalect is currently seeking a Senior Data Engineer to join our Technology team. In this role you will build Annalect products which sit atop cloud-based data infrastructure. We are looking for people who have a shared passion for technology, design and development, and data, and for fusing these disciplines together to build cool things. In this role, you will work on one or more software and data products in the Annalect Engineering Team. You will participate in technical architecture, design, and development of software products as well as research and evaluation of new technical solutions.

Responsibilities:
* Designing, building, testing, and deploying data transfers across various cloud environments (Azure, GCP, AWS, Snowflake, etc.).
* Developing data pipelines, along with monitoring, maintenance, and tuning.
* Writing at-scale data transformations in SQL and Python.
* Performing code reviews and providing leadership and guidance to junior developers.

Qualifications:
* Curiosity in learning the business requirements that are driving the engineering requirements.
* Interest in new technologies and eagerness to bring those technologies and out-of-the-box ideas to the team.
* 3+ years of SQL experience.
* 3+ years of professional Python experience.
* 3+ years of professional Linux experience.
* Preferred familiarity with Snowflake, AWS, GCP, and Azure cloud environments.
* Intellectual curiosity and drive; self-starters will thrive in this position.
* Passion for technology: excitement for new technology, bleeding-edge applications, and a positive attitude towards solving real-world challenges.

Additional Skills:
* BS, MS, or PhD in Computer Science, Engineering, or equivalent real-world experience.
* Experience with big data and/or infrastructure. Bonus for having experience in setting up petabytes of data so they can be easily accessed.
* Understanding of data organization, i.e., partitioning, clustering, file sizes, file formats.
* Experience working with classical relational databases (Postgres, MySQL, MSSQL).
* Experience with Hadoop, Hive, Spark, Redshift, or other data processing tools (lots of time will be spent building and optimizing transformations).
* Proven ability to independently execute projects from concept to implementation to launch, and to maintain a live product.

Perks of working at Annalect:
* We have an incredibly fun, collaborative, and friendly environment, and often host social and learning activities such as game night, speaker series, and so much more!
* Halloween is a special day on our calendar since it is our Founding Day. We go all out with decorations, costumes, and prizes!
* Generous vacation policy. Paid time off (PTO) includes vacation days, personal days, and a Summer Friday program. Extended time off around the holiday season: our office is closed between Xmas and New Year to encourage our hardworking employees to rest, recharge, and celebrate the season with family and friends.
* As part of Omnicom, we have the backing and resources of a global billion-dollar company, but also the flexibility and pace of a "startup": we move fast, break things, and innovate.
* Work with a modern stack and environment to keep on learning and improving, helping to experiment with and shape the latest technologies.
Posted 3 weeks ago
3.0 - 8.0 years
0 - 3 Lacs
Hyderabad
Work from Office
Job Summary: We are looking for a Machine Learning Engineer with strong data engineering capabilities to support the development and deployment of predictive models in a smart manufacturing environment. This role involves building robust data pipelines, developing high-accuracy ML models for defect prediction, and implementing automated control systems for real-time corrective actions on the production floor.

Key Responsibilities:

Data Engineering & Integration:
* Validate and ensure the correct flow of data from InfluxDB/CDL to Smart box/Databricks.
* Assist data scientists in the initial modeling phase through reliable data provisioning.
* Provide ongoing support for data pipeline corrections and ad-hoc data extraction.

ML Model Development for Defect Prediction:
* Develop 3 separate ML models for predicting 3 types of defects based on historical data.
* Predict defect occurrence within a 5-minute window using artificial sampling techniques and dimensionality reduction (see the illustrative sketch after this posting).
* Deliver results with accuracy of at least 95%, precision and recall of at least 80%, and feature importance insights.

Closed-Loop Control System Implementation:
* Prescribe machine setpoint changes based on model outputs to prevent defect occurrence.
* Design and implement a closed-loop system that includes: real-time data fetching from production line PLCs (via InfluxDB/CDL); deployment of ML models on Smart box; a pipeline to output recommendations to the appropriate PLC tag; and a retraining pipeline triggered by drift detection (cloud-based retraining when recommendations deviate from centerlines).

Qualifications:

Education:
* Bachelor's or Master's degree in Computer Science, Data Science, Electrical Engineering, or a related field.

Technical Skills:
* Proficient in Python and ML libraries (e.g., scikit-learn, XGBoost, pandas).
* Experience with InfluxDB and CDL for industrial data integration.
* Experience with Smart box and Databricks for model deployment and data processing.
* Experience with real-time data pipelines and industrial control systems (PLCs).
* Experience with model performance tracking and retraining pipelines.

Preferred:
* Experience in manufacturing analytics or predictive maintenance.
* Familiarity with Industry 4.0 principles and edge/cloud hybrid architectures.

Soft Skills:
* Strong analytical and problem-solving abilities.
* Effective communication with cross-functional teams (data science, automation, production).
* Attention to detail and focus on solution reliability.
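To illustrate the modeling approach described above (artificial sampling for class imbalance, dimensionality reduction, and feature-importance reporting), here is a hedged scikit-learn sketch. The synthetic feature matrix, naive oversampling strategy, and random-forest model are assumptions made for demonstration; the posting does not prescribe specific libraries or algorithms.

```python
# Illustrative defect-prediction sketch: oversampling + PCA + random forest.
# All data and parameter choices are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 40))            # 40 sensor features per 5-minute window
y = (rng.random(5000) < 0.05).astype(int)  # rare defect label (~5% positive)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# "Artificial sampling": naive random oversampling of the minority (defect) class.
minority = X_train[y_train == 1]
minority_up = resample(minority, n_samples=int((y_train == 0).sum()), random_state=42)
X_bal = np.vstack([X_train[y_train == 0], minority_up])
y_bal = np.concatenate([np.zeros((y_train == 0).sum()), np.ones(len(minority_up))])

# Dimensionality reduction before fitting the classifier.
pca = PCA(n_components=15, random_state=42)
X_bal_red = pca.fit_transform(X_bal)
X_test_red = pca.transform(X_test)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_bal_red, y_bal)
pred = clf.predict(X_test_red)

print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred, zero_division=0))
print("recall   :", recall_score(y_test, pred, zero_division=0))
# Importances here refer to PCA components; with raw features,
# clf.feature_importances_ maps directly back to sensor columns.
print("top components:", np.argsort(clf.feature_importances_)[::-1][:5])
```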
Posted 3 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Data Engineering
Good to have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will be involved in designing, building, and configuring applications to meet business process and application requirements. Your typical day will revolve around creating innovative solutions to address business needs and ensuring seamless application functionality.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the team in implementing cutting-edge technologies.
- Drive continuous improvement initiatives within the team.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in Data Engineering.
- Strong understanding of ETL processes.
- Experience with cloud-based data platforms such as AWS or Azure.
- Hands-on experience with data modeling and database design.
- Good To Have Skills: Knowledge of big data technologies like Hadoop and Spark.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Data Engineering.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
Qualification: 15 years full time education
Posted 3 weeks ago
5.0 - 8.0 years
9 - 13 Lacs
Mumbai
Work from Office
Skill required: Data Management - PySpark
Designation: Data Eng, Mgmt & Governance Sr Analyst
Qualifications: BE/BTech
Years of Experience: 5 to 8 years
About Accenture: Accenture is a global professional services company with leading capabilities in digital, cloud and security. Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners and communities. Visit us at www.accenture.com
What would you do: Data & AI. Understand the PySpark interface and how it handles the complexities of multiprocessing, such as distributing the data, distributing code, and collecting output from the workers on a cluster of machines (a generic illustration follows this listing).
What are we looking for:
Data Engineering
Python (Programming Language)
Structured Query Language (SQL)
Strong analytical skills
Written and verbal communication
Commitment to quality
Agility for quick learning
Ability to work well in a team
Roles and Responsibilities: In this role you are required to analyze and solve increasingly complex problems. Your day-to-day interactions are with peers within Accenture. You are likely to have some interaction with clients and/or Accenture management. You will be given minimal instruction on daily work/tasks and a moderate level of instruction on new assignments. Decisions that you make impact your own work and may impact the work of others. In this role you would be an individual contributor and/or oversee a small work effort and/or team.
Qualification: BE, BTech
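As referenced above, here is a small, generic illustration (not Accenture-specific) of the PySpark behavior the posting describes: data is partitioned across workers, the transformation code is shipped to them, and only the small aggregated result is collected back to the driver.

```python
# Generic PySpark distribution sketch; dataset and app name are arbitrary examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pyspark-distribution-sketch").getOrCreate()

# The DataFrame is split into partitions that live on the cluster's workers.
df = spark.range(0, 1_000_000, numPartitions=8)

# This transformation is serialized and executed on each worker against its partitions.
squared = df.withColumn("square", F.col("id") * F.col("id"))

# The aggregation runs in parallel on the workers; only the single-row result
# is collected back to the driver.
result = squared.agg(F.sum("square").alias("sum_of_squares")).collect()
print(result[0]["sum_of_squares"])

spark.stop()
```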
Posted 3 weeks ago
7.0 - 12.0 years
20 - 35 Lacs
Mumbai
Work from Office
Job Summary: We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks. The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance optimization techniques in cloud environments. This role requires hands-on technical expertise, including live coding during the interview process.
Key Responsibilities:
Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming (a minimal sketch follows this listing).
Architect and maintain a Medallion Architecture with well-defined Bronze, Silver, and Gold layers.
Implement efficient ingestion using Databricks Autoloader for high-throughput data loads.
Work with large volumes of structured and unstructured data, ensuring high availability and performance.
Apply performance tuning techniques such as partitioning, caching, and cluster resource optimization.
Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions.
Establish best practices for code versioning, deployment automation, and data governance.
Required Technical Skills:
Strong expertise in Azure Databricks and Spark Structured Streaming, including processing modes (append, update, complete), output modes (append, complete, update), and checkpointing and state management.
Experience with Kafka integration for real-time data pipelines.
Deep understanding of Medallion Architecture.
Proficiency with Databricks Autoloader and schema evolution.
Deep understanding of Unity Catalog and foreign catalogs.
Strong knowledge of Spark SQL, Delta Lake, and DataFrames.
Expertise in performance tuning (query optimization, cluster configuration, caching strategies).
Data management strategies, with strong governance and access management.
Strong data modelling and data warehousing concepts, and Databricks as a platform.
Solid understanding of window functions.
Proven experience in merge/upsert logic, implementing SCD Type 1 and Type 2, and handling CDC (Change Data Capture) scenarios.
Industry expertise in at least one of Retail, Telecom, or Energy.
Real-time use case execution.
Data modelling.
Location: Mumbai
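As referenced above, the following is a hedged sketch of the first hop of the Medallion pattern this posting describes: a Structured Streaming job landing raw Kafka events into a Bronze Delta table with checkpointing. It assumes a Databricks-style environment with Delta Lake and a reachable Kafka cluster; the broker, topic, paths, and table names are hypothetical.

```python
# Hedged Kafka-to-Bronze sketch; broker, topic, and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-bronze-sketch").getOrCreate()

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # hypothetical broker
    .option("subscribe", "plant-telemetry")              # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Bronze keeps the payload as-is, plus ingestion metadata for later auditing;
# parsing and conforming happen downstream in the Silver layer.
bronze = raw.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    F.col("topic"), F.col("partition"), F.col("offset"),
    F.col("timestamp").alias("event_ts"),
    F.current_timestamp().alias("ingest_ts"),
)

query = (
    bronze.writeStream
    .format("delta")
    .outputMode("append")
    .option("checkpointLocation", "/mnt/checkpoints/bronze_plant_telemetry")
    .toTable("bronze.plant_telemetry")
)
query.awaitTermination()
```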
Posted 3 weeks ago
3.0 - 8.0 years
13 - 18 Lacs
Bengaluru
Work from Office
Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage and integration.
Must have skills: Data Engineering
Good to have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education
Summary: As a Data Architect, you will define the data requirements and structure for the application. Your typical day will involve modeling and designing the application data structure, storage, and integration, ensuring that the data architecture aligns with the overall business objectives and technical specifications. You will collaborate with various stakeholders to gather requirements and translate them into effective data solutions, while also addressing any challenges that arise during the development process. Your role will be pivotal in establishing a robust data framework that supports the application's functionality and performance.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Engage in continuous learning to stay updated with industry trends and best practices in data architecture.
- Collaborate with cross-functional teams to ensure data integration and consistency across various platforms.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in Data Engineering.
- Strong understanding of data modeling techniques and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud data services and big data technologies.
- Ability to analyze and optimize data storage solutions for performance and scalability.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Data Engineering.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
Qualification: 15 years full time education
Posted 3 weeks ago
15.0 - 20.0 years
13 - 18 Lacs
Bengaluru
Work from Office
Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage and integration.
Must have skills: Data Engineering
Good to have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education
Summary: As a Data Architect, you will define the data requirements and structure for the application. Your typical day will involve modeling and designing the application data structure, storage, and integration, ensuring that the data architecture aligns with the overall business objectives and technical specifications. You will collaborate with various teams to ensure that the data architecture is robust, scalable, and efficient, while also addressing any challenges that arise during the development process. Your role will be pivotal in shaping the data landscape of the organization, enabling data-driven decision-making and fostering innovation through effective data management practices.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge sharing and mentoring within the team to enhance overall performance.
- Continuously assess and improve data architecture practices to align with industry standards.
Professional & Technical Skills:
- Must To Have Skills: Proficiency in Data Engineering.
- Strong understanding of data modeling techniques and best practices.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud-based data storage solutions and architectures.
- Ability to design and implement data governance frameworks.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Data Engineering.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.
Qualification: 15 years full time education
Posted 3 weeks ago