5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Senior Data Engineer at our company, you will play a crucial role in leading the design, development, and optimization of scalable, secure, and high-performance data solutions in Databricks. Your responsibilities will include:
- Leading data architecture and development by designing and optimizing data solutions in Databricks
- Building and maintaining ETL/ELT pipelines, integrating Databricks with Azure Data Factory or AWS Glue (a minimal sketch follows this listing)
- Implementing machine learning models and AI-powered solutions for business innovation
- Collaborating with data scientists, analysts, and business teams to translate requirements into technical designs
- Enforcing data validation, governance, and security best practices for trust and compliance

Additionally, you will mentor junior engineers, conduct code reviews, and promote continuous learning. Staying updated with the latest trends in Databricks, Azure Data Factory, cloud platforms, and AI/ML to suggest and implement improvements is also a key part of your role.

Qualifications required for this position include:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- 5+ years of experience in data engineering, with a minimum of 3 years of hands-on experience in Databricks
- Expertise in data modeling, Data Lakehouse architectures, ELT/ETL processes, SQL, Python or Scala, and integration with Azure Data Factory or AWS Glue
- Strong knowledge of cloud platforms (Azure preferred) and containerization (Docker)
- Excellent analytical, problem-solving, and communication skills
- Demonstrated experience in mentoring or leading technical teams

Preferred skills for this role include experience with Generative AI technologies, familiarity with other cloud platforms like AWS or GCP, and knowledge of data governance frameworks and tools. This posting was referenced from hirist.tech.
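By way of illustration, here is a minimal PySpark ELT sketch of the kind of Databricks pipeline step this role describes, reading raw data and writing a curated Delta table. The paths, table names, and columns are hypothetical assumptions, not taken from the posting:

```python
# Minimal PySpark ELT sketch for Databricks: raw CSV -> curated Delta table.
# All paths, table names, and columns are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

raw = (
    spark.read.option("header", True)
    .csv("/mnt/raw/sales/")  # hypothetical mount point
)

curated = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)  # basic validation, per the data-quality duties above
)

curated.write.format("delta").mode("overwrite").saveAsTable("analytics.sales_curated")
```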
Posted 18 hours ago
3.0 - 4.0 years
10 - 20 Lacs
Bengaluru
Hybrid
Data Engineer
Experience: 3-4 years | Salary: Competitive | Preferred Notice Period: Within 30 days | Opportunity Type: Hybrid (Bengaluru) | Placement Type: Permanent (Note: This is a requirement for one of Uplers' clients)

Must-have skills: PostgreSQL or MongoDB or Airflow or Kafka or Python or Spark or SQL or ETL; Azure or Data Lakehouse; Data Mesh architectures.

Living Things (one of Uplers' clients) is looking for a Data Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview
Job Title: Data Engineer
Organization: Living Things Pvt. Ltd
Location: Bangalore
Job Type: Full-Time
Experience Level: Mid-Level (3-4 years of experience)

About Us: Living Things is a pioneering IoT platform by iCapotech Pvt Ltd, dedicated to accelerating the net-zero journey towards a sustainable future. Our platform brings mindfulness to energy usage. The solution integrates seamlessly with existing air conditioners, empowering businesses and organizations to optimize and reduce energy usage, enhance operational efficiency, reduce carbon footprints, and drive sustainable practices. We also analyze electricity consumption across all locations from electricity bills. By harnessing the power of real-time data analytics and intelligent insights, our energy-saving algorithm delivers a minimum of 15% savings on an air conditioner's energy consumption.

About the Role: We are seeking a highly skilled and motivated data engineer to join our growing data team. You will play a critical role in designing, building, and maintaining our data infrastructure, enabling data-driven decision-making across the organization.

Job Responsibilities:
- Manage and optimize relational (PostgreSQL, MySQL) and NoSQL (MongoDB) databases, including performance tuning and schema evolution management.
- Leverage cloud platforms (AWS, Azure, GCP) for data storage, processing, and analysis, with a focus on optimizing cost, performance, and scalability using cloud-native services.
- Design, build, and maintain robust, scalable, and fault-tolerant data pipelines using modern orchestration tools (Apache Airflow, Apache Flink, Dagster); a minimal Airflow sketch follows this listing.
- Implement and manage real-time data streaming solutions (Apache Kafka, Kinesis, Pub/Sub).
- Apply knowledge of BI tools (Metabase, Power BI, Looker, QuickSight) and design data models that support efficient querying for analytical purposes.
- Collaborate closely with data scientists, analysts, and business stakeholders to understand data requirements and translate them into technical data solutions.
- Stay updated on the latest data engineering technologies and best practices, and advocate for their adoption where appropriate.
- Contribute to the development and improvement of data infrastructure and processes, including embracing DataOps principles for automation and collaboration.
- Work with containerization (e.g., Docker) and orchestration tools (e.g., Kubernetes) for deploying and managing data services.
- Implement data governance policies and practices, including data lineage and metadata management.

Skills and Qualifications:
Essential:
- Strong proficiency in Python, SQL, and MongoDB.
- Experience with relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB).
- Understanding of database internals, indexing, and query optimization.
- Knowledge of data modeling, data warehousing principles, and ETL/ELT methodologies.
- Proficiency with cloud platforms (AWS, Azure, GCP), including data storage (S3, ADLS Gen2, GCS), data warehousing services (e.g., Redshift, Snowflake, BigQuery), and managed services for data processing (AWS Glue, Azure Data Factory, Google Cloud Dataflow).
- Experience with data quality and validation techniques and implementing automated data quality frameworks.
- Strong analytical and problem-solving abilities; ability to troubleshoot complex data pipeline issues.
- Experience with BI tools (Metabase, Power BI, Looker, QuickSight) from a data provisioning perspective.
Preferred:
- Experience with Data Lake, Data Lakehouse, or Data Mesh architectures.
- Hands-on experience with data processing frameworks like Apache Spark, Apache Kafka, and stream processing technologies (Spark Streaming, Flink).
- Experience with workflow orchestration tools like Apache Airflow and Dagster.
- Understanding of DataOps and MLOps concepts and practices.
- Experience with data observability and monitoring tools.
- Excellent communication and presentation skills.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of being shortlisted and meeting the client for an interview!

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
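For orientation, a minimal Apache Airflow sketch of the kind of scheduled pipeline this role calls for. The DAG id, schedule, and task callables are hypothetical placeholders (Airflow 2.4+ `schedule` parameter assumed):

```python
# Minimal Airflow DAG sketch: daily extract -> transform -> load.
# DAG id, schedule, and task logic are illustrative assumptions only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():  # placeholder: pull from PostgreSQL/MongoDB/an API
    print("extracting...")

def transform():  # placeholder: cleanse and model the data
    print("transforming...")

def load():  # placeholder: write to the warehouse/lakehouse
    print("loading...")

with DAG(
    dag_id="energy_usage_daily",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```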
Posted 5 days ago
7.0 - 12.0 years
15 - 27 Lacs
Bengaluru
Work from Office
Note: This opportunity is on behalf of one of our esteemed clients we're assisting with the hiring process. They are looking to fill this position immediately (within this week). Only candidates from Bengaluru or nearby (commutable) locations will be considered.

Role Overview
We are looking for an experienced AI Data Architect to design and implement robust, scalable, and secure data architectures that power AI/ML solutions. The role involves defining data strategies, enabling advanced analytics, ensuring data quality and governance, and optimizing infrastructure to support modern AI-driven applications.

Key Responsibilities
- Design and implement end-to-end data architectures to support AI/ML workloads.
- Define data strategy, governance, and frameworks for structured, semi-structured, and unstructured data.
- Architect scalable data pipelines, warehouses, and lakehouses optimized for AI/ML.
- Collaborate with Data Scientists, ML Engineers, and business teams to translate requirements into data architecture solutions.
- Ensure data security, compliance, lineage, and metadata management.
- Optimize data platforms for performance, scalability, and cost efficiency.
- Guide teams on best practices for integrating data platforms with AI/ML model training, deployment, and monitoring.
- Evaluate emerging tools and technologies in the Data & AI ecosystem.

Required Skills & Experience
- Proven experience as a Data Architect with a focus on AI/ML workloads.
- Strong expertise in cloud platforms (AWS, Azure, GCP) and cloud-native data services.
- Hands-on experience with data lakehouse architectures (Databricks, Snowflake, Delta Lake, BigQuery, Synapse).
- Proficiency in data pipeline frameworks (Apache Spark, Kafka, Airflow, DBT).
- Strong understanding of the ML data lifecycle: feature engineering, data versioning, training pipelines, MLOps.
- Knowledge of data governance frameworks, security, and compliance standards.
- Experience with SQL, Python, and distributed data systems.
- Familiarity with AI/ML platforms (SageMaker, Vertex AI, Azure ML) is a plus.
- Excellent problem-solving and stakeholder management skills.
Posted 6 days ago
7.0 - 12.0 years
7 - 17 Lacs
Hyderabad
Work from Office
About this role: Wells Fargo is seeking a Principal Engineer.

In this role, you will:
- Act as an advisor to leadership to develop or influence applications, network, information security, database, operating systems, or web technologies for highly complex business and technical needs across multiple groups.
- Lead the strategy and resolution of highly complex and unique challenges requiring in-depth evaluation across multiple areas or the enterprise, delivering solutions that are long-term, large-scale, and require vision, creativity, innovation, and advanced analytical and inductive thinking.
- Translate advanced technology experience, an in-depth knowledge of the organization's tactical and strategic business objectives, the enterprise technological environment, the organization structure, and strategic technological opportunities and requirements into technical engineering solutions.
- Provide vision, direction, and expertise to leadership on implementing innovative and significant business solutions.
- Maintain knowledge of industry best practices and new technologies and recommend innovations that enhance operations or provide a competitive advantage to the organization.
- Strategically engage with all levels of professionals and managers across the enterprise and serve as an expert advisor to leadership.

Required Qualifications:
- 7+ years of engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.

Desired Qualifications:
- Overall 16+ years' experience; lead the data technology transformation and implementation initiative for Corporate & Investment Banking.
- Strong experience with real-time, low-latency data pipelines using Apache Flink.
- Strong experience with GCP cloud transformation and implementation; well versed in BigQuery (a minimal sketch follows this listing).
- Experience in cloud migration from HDFS to GCP.
- Experience with Iceberg and object stores (AWS S3/MinIO) is an added advantage.
- Well versed in Data Warehousing/Lakehouse methodologies.
- Strong communication and interpersonal skills.

Data Environment Transformation/Simplification:
- Lead application transformation and rationalization initiatives.
- Analyze performance trends and recommend process improvements; assess changes for risk to production systems and assure quality, security, and compliance requirements.
- Evaluate, incubate, and adopt modern technologies and engineering practices, and recommend innovations that provide a competitive advantage to the organization.
- Develop strategies to improve developer productivity and reduce technology debt.
- Develop a strategy to improve data quality and the maintenance of data lineage.

Architecture Oversight:
- Develop a consistent architecture strategy and deliver safe, secure, and consistent architecture solutions.
- Reduce technology risk by working closely with architects and designing solutions that align to the architecture roadmap, enterprise principles, policies, and standards.
- Partner with the enterprise cloud platform, CI/CD pipelines, platform teams, architects, engineering managers, and the developer community.

Job Expectations:
- Act as a liaison between business and technical organizations by planning, conducting, and directing the analysis of highly complex business problems to be solved with automated systems.
- Provide technical assistance in identifying, evaluating, and developing systems and procedures that are cost-effective and meet business requirements.
- Act as an internal consultant within technology and business groups by using quality tools and process definition/improvement to re-engineer technical processes for greater efficiency.

We are open to both locations - Hyderabad and Bangalore - and the role requires working in the office as per the organization's In-Office Adherence / Return to Office (RTO) policy.
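As a small, hedged illustration of the BigQuery skill listed above, here is a sketch using the google-cloud-bigquery Python client. The project, dataset, and table names are hypothetical assumptions:

```python
# Minimal BigQuery client sketch: run a query and page through results.
# Project, dataset, and table names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="cib-data-platform")  # hypothetical project id

query = """
    SELECT trade_date, COUNT(*) AS trades
    FROM `cib-data-platform.markets.trades`
    GROUP BY trade_date
    ORDER BY trade_date DESC
    LIMIT 7
"""

for row in client.query(query).result():  # result() blocks until the job completes
    print(row.trade_date, row.trades)
```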
Posted 1 week ago
12.0 - 16.0 years
15 - 19 Lacs
Noida, Hyderabad
Work from Office
Experience: 12+ years
Location: Noida, Hyderabad

About the Role: We are looking for a Principal Data Engineer to lead the design and delivery of scalable data solutions using Azure Data Factory and Azure Databricks. This is a consulting-focused role that requires strong technical expertise, stakeholder engagement, and architectural thinking. You will work closely with business, functional, and technical teams to define data strategies, design robust pipelines, and ensure smooth delivery in an Agile environment.

Responsibilities:
- Collaborate with business and technology stakeholders to gather and understand data needs
- Translate functional requirements into scalable and maintainable data architecture
- Design and implement robust data pipelines
- Lead data modeling, transformation, and performance optimization efforts
- Ensure data quality, validation, and consistency
- Participate in Agile ceremonies, including sprint planning and backlog grooming
- Support CI/CD automation for data pipelines and integration workflows
- Mentor junior engineers and promote best practices in data engineering

Must Have:
- 12+ years of IT experience, with at least 5 years in data architecture roles in modern metadata-driven and cloud-based technologies, bringing a software engineering mindset
- Strong analytical and problem-solving skills; ability to determine data patterns and perform root cause analysis to resolve production issues
- Excellent communication skills, with experience leading client-facing discussions
- Strong hands-on experience with Azure Data Factory and Databricks, leveraging custom solutioning and design beyond drag-and-drop capabilities for big data workloads
- Demonstrated proficiency in SQL, Python, and Spark
- Experience with CI/CD pipelines, version control, and DevOps tools
- Experience applying dimensional and Data Vault methodologies (a minimal upsert sketch follows this listing)
- Background in working with Agile methodologies and sprint-based delivery
- Ability to produce clear and comprehensive technical documentation

Nice to Have:
- Experience with Azure Synapse and Power BI
- Experience with Microsoft Purview and/or Unity Catalog
- Understanding of Data Lakehouse and Data Mesh concepts
- Familiarity with enterprise data governance and quality frameworks
- Manufacturing experience within the operations domain
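To make the dimensional-modeling requirement concrete, here is a hedged PySpark/Delta sketch of a simple Type 1 dimension upsert on Databricks. The table and column names are hypothetical, and this is one common pattern rather than a prescribed design:

```python
# Minimal Delta MERGE sketch: Type 1 upsert into a dimension table.
# Requires the delta-spark package; table and column names are illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.table("staging.customer_updates")      # hypothetical staging table
dim = DeltaTable.forName(spark, "gold.dim_customer")   # hypothetical dimension table

(
    dim.alias("d")
    .merge(updates.alias("u"), "d.customer_id = u.customer_id")
    .whenMatchedUpdate(set={"name": "u.name", "segment": "u.segment"})  # overwrite attributes
    .whenNotMatchedInsertAll()                                          # insert new members
    .execute()
)
```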
Posted 1 week ago
8.0 - 13.0 years
10 - 16 Lacs
Hyderabad, Bengaluru
Work from Office
Skill: Databricks Production Support - Senior Level
Experience: 8-12 years
Location: Hyderabad & Bangalore
Notice Period: Immediate to 15 days
Shift Timings: 1 PM - 10 PM

Detailed job description - skill set: We are seeking a highly skilled Databricks Platform Operations engineer to join our team, responsible for daily monitoring and resolution of data load issues, platform optimization, capacity planning, and governance management. This role is pivotal in ensuring the stability, scalability, and security of our Databricks environment while acting as a technical architect for platform best practices. The ideal candidate will bring a strong operational background, potentially with earlier experience as a Linux, Hadoop, or Spark administrator, and possess deep expertise in managing cloud-based data platforms.

Mandatory skills: Terraform, Azure Purview, Apache Spark and SQL, Data Lakehouse
Posted 1 week ago
5.0 - 10.0 years
4 - 8 Lacs
Delhi, India
On-site
Technical Skills & Experience:
- Advanced to expert knowledge of SQL on any database platform.
- Proficient in modern data warehouse design concepts such as Data Warehouse, Data Lakehouse, Data Mesh, and Data Vault.
- Experience working with large and complex datasets.
- Skilled in using design tools such as Enterprise Architect and Power Designer.
- Experience collaborating with Data Architects to design and implement data solutions.

Nice to Have:
- Experience with health care or health insurance data.
- Strong communication skills, both verbal and written.

Qualifications:
- Honours or Master's degree in Computer Science preferred; other qualifications considered if supported by relevant experience.
- 5 to 10 years of professional experience preferred.
Posted 1 week ago
5.0 - 10.0 years
4 - 8 Lacs
Kolkata, West Bengal, India
On-site
Technical Skills & Experience:
- Advanced to expert knowledge of SQL on any database platform.
- Proficient in modern data warehouse design concepts such as Data Warehouse, Data Lakehouse, Data Mesh, and Data Vault.
- Experience working with large and complex datasets.
- Skilled in using design tools such as Enterprise Architect and Power Designer.
- Experience collaborating with Data Architects to design and implement data solutions.

Nice to Have:
- Experience with health care or health insurance data.
- Strong communication skills, both verbal and written.

Qualifications:
- Honours or Master's degree in Computer Science preferred; other qualifications considered if supported by relevant experience.
- 5 to 10 years of professional experience preferred.
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role Overview: We are seeking street-smart and technically strong Senior Data Engineers / Leads who can take ownership of designing and developing cutting-edge data and AI platforms using Azure-native technologies and Databricks. You will play a critical role in building scalable data pipelines, modern data architectures, and intelligent analytics solutions.

Key Responsibilities:
- Design and implement scalable, metadata-driven frameworks for data ingestion, quality, and transformation across both batch and streaming datasets.
- Develop and optimize end-to-end data pipelines to process structured and unstructured data, enabling the creation of analytical data products.
- Build robust exception handling, logging, and monitoring mechanisms for better observability and operational support.
- Take ownership of complex modules and lead the development of critical data workflows and components.
- Provide guidance to data engineers and peers on best practices.
- Collaborate with cross-functional teams, including business consultants, data architects, scientists, and application developers, to deliver impactful analytics solutions.

Preferred Candidate Profile:
- 5+ years of overall technical experience, with a minimum of 2 years of hands-on experience with Microsoft Azure and Databricks.
- Proven experience delivering at least one end-to-end Data Lakehouse solution on Azure Databricks using the Medallion Architecture (a minimal streaming sketch follows this listing).
- Strong working knowledge of the Databricks ecosystem, including PySpark, Notebooks, Structured Streaming, Unity Catalog, Delta Live Tables, Workflows, and SQL Warehouse.
- Advanced programming, unit testing, and debugging skills in Python and SQL.
- Hands-on experience with Azure-native services such as Azure Data Factory, ADLS Gen2, Azure SQL Database, and Event Hub.
- Solid understanding of data modeling techniques, including both Dimensional and Third Normal Form (3NF) models.
- Exposure to developing LLM/Generative AI-powered applications.
- Excellent understanding of CI/CD workflows using Azure DevOps is a must.
- Bonus: knowledge of Azure infrastructure, including provisioning, networking, security, and governance.

Educational Background: Bachelor's degree (B.E/B.Tech) in Computer Science, Information Technology, or a related field from a reputed institute (preferred).

You are important to us, let's stay connected! Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable/unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire, and our packages are among the best in the industry.
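For illustration, a hedged Structured Streaming sketch of a bronze-to-silver step in a Medallion layout on Databricks. The table names, columns, and checkpoint path are assumptions, not the team's actual pipeline:

```python
# Minimal Structured Streaming sketch: stream a bronze Delta table into a cleansed silver table.
# Table names, columns, and the checkpoint path are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.readStream.table("bronze.events")  # hypothetical bronze table

silver = (
    bronze.filter(F.col("event_id").isNotNull())          # drop malformed records
    .withColumn("processed_at", F.current_timestamp())    # audit column
)

(
    silver.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/chk/events_silver")  # hypothetical checkpoint path
    .outputMode("append")
    .toTable("silver.events")
)
```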
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
One of our esteemed clients is a Japanese multinational information technology (IT) service and consulting company headquartered in Tokyo, Japan, which acquired Italy-based Value Team S.p.A. and launched Global One Teams. This dynamic and high-impact firm is currently seeking a Data Quality Engineer specializing in the Informatica Data Quality tool.

The ideal candidate should possess 8+ years of experience with Informatica Data Quality (IDQ) and demonstrate proficiency in using IDQ for data profiling to identify data anomalies and patterns. Strong knowledge of SQL for querying databases is essential, along with the ability to design and implement data cleansing routines using IDQ. Experience with database systems such as Lakehouse platforms, PostgreSQL, Teradata, and SQL Server is preferred.

Key Responsibilities:
- Analyze data quality metrics and generate reports
- Standardize, validate, and enrich data
- Create and manage data quality rules and workflows in IDQ
- Automate data quality checks and validations (a minimal Pandas sketch follows this listing)
- Integrate IDQ with other data management tools and platforms
- Manage data flows and ensure data consistency
- Utilize data manipulation libraries like Pandas and NumPy
- Use PySpark for big data processing and analytics
- Write complex SQL queries for data extraction and transformation

Interested candidates should have relevant experience with the Informatica Data Quality tool and share their updated resume along with details such as total experience, current location, current CTC, expected CTC, and notice period. The company assures strict confidentiality in handling all profiles. If you are ready to take your career to new heights and be part of this incredible journey, apply now to join this innovative team.

Thank you,
Syed Mohammad
syed.m@anlage.co.in
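Alongside IDQ, the posting lists Pandas/NumPy; here is a hedged Pandas sketch of the kind of automated data-quality check involved. The file and column names are hypothetical:

```python
# Minimal Pandas data-quality sketch: profile nulls, duplicates, and out-of-range values.
# The input file and column names are illustrative assumptions.
import numpy as np
import pandas as pd

df = pd.read_csv("customers.csv")  # hypothetical input extract

report = {
    "row_count": len(df),
    "null_counts": df.isna().sum().to_dict(),                       # nulls per column
    "duplicate_ids": int(df["customer_id"].duplicated().sum()),     # key uniqueness check
    "bad_ages": int(((df["age"] < 0) | (df["age"] > 120)).sum()),   # simple range rule
}

failed = report["duplicate_ids"] > 0 or report["bad_ages"] > 0
print(report, "FAIL" if failed else "PASS")
```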
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Haryana
On-site
You will play a pivotal role as a Lead Solution Designer within our evolving Data Engineering team, focusing on the strategic implementation of data solutions on AWS. Your main responsibilities will include driving the technical vision and execution of cloud-based data architectures, ensuring the scalability, security, and performance of the platforms while meeting both business and technical requirements.

Your role will involve spearheading the implementation, performance fine-tuning, development, and delivery of data solutions using AWS core data services. You will be responsible for overseeing all technical aspects of AWS-based data systems and coordinating with various stakeholders to ensure timely implementation and value realization. Additionally, you will continuously enhance the D&A platform, work closely with business partners and cross-functional teams, and develop data management strategies.

To excel in this role, you should have at least 10 years of hands-on experience in developing and architecting data solutions, with a strong background in AWS cloud services. You must possess expertise in designing and implementing AWS data services like S3, Redshift, Athena, and Glue (a minimal Athena sketch follows this listing), as well as building large-scale data platforms including Data Lakehouse, Data Warehouse, Master Data Management, and Advanced Analytics systems.

Effective communication skills are crucial, as you will be required to translate complex technical solutions to both technical and non-technical stakeholders. Experience managing multiple projects in a high-pressure environment, strong problem-solving skills, and proficiency in data solution coding are essential for success in this role. Experience within the Insurance domain and a solid understanding of data governance frameworks and Master Data Management principles are considered advantageous.

This opportunity is ideal for an experienced data architect passionate about leading innovative AWS data solutions while balancing technical expertise with business needs.
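As a small, hedged illustration of the AWS data services named above, here is a boto3 sketch that submits an Athena query over S3 data and polls for completion. The database, table, and bucket names are hypothetical:

```python
# Minimal boto3 Athena sketch: submit a query and poll until it finishes.
# Database, table, region, and bucket names are illustrative assumptions.
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

qid = athena.start_query_execution(
    QueryString="SELECT policy_type, COUNT(*) FROM claims GROUP BY policy_type",
    QueryExecutionContext={"Database": "insurance_lake"},          # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical bucket
)["QueryExecutionId"]

while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)  # simple polling; production code would back off and handle errors

print(state)
```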
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
As a Data Modeller, you will play a crucial role in leading data architecture efforts across enterprise domains such as Sales, Procurement, Finance, Logistics, R&D, and Advanced Planning Systems (SAP/Oracle). Your responsibilities will include designing scalable and reusable data models, constructing data lake foundations, and collaborating with cross-functional teams to deliver robust end-to-end data solutions.

You will work closely with business and product teams to understand processes and translate them into technical specifications. Using methodologies such as Medallion Architecture, EDW, or Kimball, you will design logical and physical data models. It will be essential to source the correct grain of data from authentic source systems or existing DWHs and to create intermediary data models and physical views for reporting and consumption.

In addition, you will be responsible for implementing Data Governance, Data Quality, and Data Observability practices. Developing business process maps, user journey maps, and data flow/integration diagrams will also be part of your tasks. You will design integration workflows utilizing APIs, FTP/SFTP, web services, and other tools to support large-scale implementation programs involving multiple projects (a minimal SFTP sketch follows this listing).

Required technical skills:
- A minimum of 5+ years of experience in data-focused projects
- Strong expertise in data modelling, encompassing logical, physical, dimensional, and Vault modeling
- Familiarity with enterprise data domains such as Sales, Finance, Procurement, Supply Chain, Logistics, and R&D
- Proficiency in Erwin or similar data modeling tools
- Understanding of OLTP and OLAP systems
- Knowledge of Kimball methodology, Medallion architecture, and modern Data Lakehouse patterns
- Knowledge of Bronze, Silver, and Gold layer architecture on cloud platforms, and the ability to read existing data dictionaries and table structures and normalize data tables effectively
- Familiarity with cloud data platforms (AWS, Azure, GCP), DevOps/DataOps best practices, Agile methodologies, and end-to-end integration needs and methods

Preferred experience includes a background in Retail, CPG, or Supply Chain domains, as well as experience with data governance frameworks, quality tools, and metadata management platforms. Your skill set should span FTP/SFTP, physical data models, DevOps, data observability, cloud platforms, APIs, data lakehouse, and vault and dimensional modeling.

In summary, as a Data Modeller you will be a key player in designing and implementing data solutions that drive business success across various domains, collaborating with diverse teams to achieve strategic objectives seamlessly.
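Since the role covers integration workflows over FTP/SFTP, here is a hedged Paramiko sketch of a basic SFTP pull feeding an ingestion process. The host, credentials, and paths are hypothetical:

```python
# Minimal Paramiko SFTP sketch: download a daily partner extract for ingestion.
# Host, credentials, and file paths are illustrative assumptions.
import paramiko

transport = paramiko.Transport(("sftp.partner.example.com", 22))  # hypothetical host
transport.connect(username="ingest_user", password="***")          # use key-based auth/secrets in practice

sftp = paramiko.SFTPClient.from_transport(transport)
sftp.get("/outbound/sales_20240101.csv", "/tmp/sales_20240101.csv")  # remote -> local

sftp.close()
transport.close()
```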
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Maharashtra
On-site
This role is for one of our clients in the Technology, Information, and Media industry at a mid-senior level. It is a full-time position based in Mumbai, requiring a minimum of 8 years of experience.

About the Role: We are seeking an accomplished Assistant Vice President - Data Engineering to spearhead our enterprise data engineering function within the broader Data & Analytics leadership team. The ideal candidate will be a hands-on leader with a strategic mindset, capable of architecting modern data ecosystems, leading high-performing teams, and fostering innovation in a cloud-first, analytics-driven environment.

Responsibilities:
- Team Leadership & Vision: Lead and mentor a team of data engineers, fostering a culture of quality, collaboration, and innovation while shaping the long-term vision for data engineering.
- Modern Data Infrastructure: Design and implement scalable, high-performance data pipelines for batch and real-time workloads using tools like Databricks, PySpark, and Delta Lake, focusing on data lakehouse and data mesh implementations on modern cloud platforms.
- ETL/ELT & Data Pipeline Management: Drive the development of robust ETL/ELT workflows, ensuring ingestion, transformation, cleansing, and enrichment of data from diverse sources, while implementing orchestration and monitoring using tools like Airflow, Azure Data Factory, or Prefect.
- Data Modeling & SQL Optimization: Architect logical and physical data models to support advanced analytics and BI use cases, and write and review complex SQL queries for performance, efficiency, and scalability.
- Data Quality & Governance: Collaborate with governance and compliance teams to implement data quality frameworks, lineage tracking, and access controls, ensuring alignment with data privacy regulations and security standards.
- Cross-Functional Collaboration: Act as a strategic partner to various stakeholders, translating data requirements into scalable solutions and effectively communicating data strategy and progress to both technical and non-technical audiences.
- Innovation & Continuous Improvement: Stay updated on emerging technologies in cloud data platforms, streaming, and AI-powered data ops, lead proof-of-concept initiatives, and drive continuous improvement in engineering workflows and infrastructure.

Required Experience & Skills: The ideal candidate should have 8+ years of hands-on data engineering experience, including 2+ years in a leadership role, deep expertise in Databricks, PySpark, and big data processing frameworks, advanced SQL proficiency, experience building data pipelines on cloud platforms, knowledge of data lakehouse concepts, and strong communication and leadership skills.

Preferred Qualifications: A Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field, industry certifications in relevant platforms, experience with data mesh, streaming architectures, or lakehouse implementations, and exposure to DataOps practices and data product development frameworks would be advantageous.
Posted 2 weeks ago
5.0 - 7.0 years
10 - 15 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Requirements:
- Bachelor's or Master's degree in Computer Science, IT, or a related field.
- 5+ years of data engineering experience, including 1+ years on the Azure Data Platform.
- Strong proficiency in Azure Data Factory, Azure Fabric, and Notebooks (PySpark/Python/SQL).
- Expertise in Delta Lake, Parquet, and modern data lakehouse architectures.
- Experience with Azure Synapse Analytics, Databricks (if applicable), and data visualization tools such as Power BI.
- Skilled in SQL, Python, and PySpark for large-scale data transformations.
- Experience implementing CI/CD for data pipelines and notebooks using Azure DevOps or Git.
- Knowledge of data governance, security best practices, and compliance frameworks.

If you're ready to make an impact with your expertise in data engineering and Azure, this is the role for you!
Posted 3 weeks ago
12.0 - 15.0 years
15 - 19 Lacs
Noida, Hyderabad
Work from Office
About the Role: We are looking for a Principal Data Engineer to lead the design and delivery of scalable data solutions using Azure Data Factory and Azure Databricks. This is a consulting-focused role that requires strong technical expertise, stakeholder engagement, and architectural thinking. You will work closely with business, functional, and technical teams to define data strategies, design robust pipelines, and ensure smooth delivery in an Agile environment.

Responsibilities:
- Collaborate with business and technology stakeholders to gather and understand data needs
- Translate functional requirements into scalable and maintainable data architecture
- Design and implement robust data pipelines
- Lead data modeling, transformation, and performance optimization efforts
- Ensure data quality, validation, and consistency
- Participate in Agile ceremonies, including sprint planning and backlog grooming
- Support CI/CD automation for data pipelines and integration workflows
- Mentor junior engineers and promote best practices in data engineering

Must Have:
- 12+ years of IT experience, with at least 5 years in data architecture roles in modern metadata-driven and cloud-based technologies, bringing a software engineering mindset
- Strong analytical and problem-solving skills; ability to determine data patterns and perform root cause analysis to resolve production issues
- Excellent communication skills, with experience leading client-facing discussions
- Strong hands-on experience with Azure Data Factory and Databricks, leveraging custom solutioning and design beyond drag-and-drop capabilities for big data workloads
- Demonstrated proficiency in SQL, Python, and Spark
- Experience with CI/CD pipelines, version control, and DevOps tools
- Experience applying dimensional and Data Vault methodologies
- Background in working with Agile methodologies and sprint-based delivery
- Ability to produce clear and comprehensive technical documentation

Nice to Have:
- Experience with Azure Synapse and Power BI
- Experience with Microsoft Purview and/or Unity Catalog
- Understanding of Data Lakehouse and Data Mesh concepts
- Familiarity with enterprise data governance and quality frameworks
- Manufacturing experience within the operations domain
Posted 3 weeks ago
6.0 - 8.0 years
0 Lacs
Gurugram
Work from Office
Design and manage Azure-based data pipelines, models, and ETL processes. Ensure governance, security, performance, and real-time analytics while leveraging DevOps, CI/CD, and big data tools for reliable, scalable solutions.

Benefits: health insurance, maternity policy, annual bonus, provident fund, gratuity.
Posted 3 weeks ago
7.0 - 11.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
We are looking for an accomplished Lead Data Engineer with 7 to 10 years of experience in data engineering to join our dynamic team in either Ahmedabad or Pune. Your expertise in Databricks will play a crucial role in enhancing our data engineering capabilities and working with advanced technologies, including Generative AI.

Key Responsibilities:
- Lead the design, development, and optimization of data solutions using Databricks, ensuring scalability, efficiency, and security.
- Collaborate with cross-functional teams to gather and analyze data requirements, translating them into robust data architectures and solutions.
- Develop and maintain ETL pipelines, leveraging Databricks and integrating with Azure Data Factory when necessary.
- Implement machine learning models and advanced analytics solutions, incorporating Generative AI to drive innovation.
- Ensure data quality, governance, and security practices are adhered to, maintaining the integrity and reliability of data solutions.
- Provide technical leadership and mentorship to junior engineers, fostering an environment of learning and growth.
- Stay updated on the latest trends and advancements in data engineering, Databricks, Generative AI, and Azure Data Factory to continually enhance team capabilities.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven expertise in building and optimizing data solutions using Databricks, integrating with Azure Data Factory/AWS Glue.
- Proficiency in SQL and programming languages like Python or Scala.
- Strong understanding of data modeling, ETL processes, Data Warehousing/Data Lakehouse concepts, cloud platforms (particularly Azure), and containerization technologies such as Docker.
- Excellent analytical, problem-solving, and communication skills.
- Demonstrated leadership ability and experience mentoring junior team members.

Preferred qualifications include experience with Generative AI technologies and applications, familiarity with other cloud platforms like AWS or GCP, and knowledge of data governance frameworks and tools.

In return, we offer flexible timings, a 5-day working week, a healthy environment, celebrations, opportunities to learn and grow, community building, and medical insurance benefits. Join us and be part of a team that values innovation, collaboration, and professional development.
Posted 1 month ago
9.0 - 11.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description

Qualifications:
- Overall 9+ years of IT experience.
- Minimum of 5+ years preferred managing Data Lakehouse environments; specific Azure Databricks, Snowflake, or DBT (nice to have) experience a plus.
- Hands-on experience with data warehousing, data lake/lakehouse solutions, data pipelines (ELT/ETL), SQL, Spark/PySpark, and DBT.
- Strong understanding of data modelling, SDLC, Agile, and DevOps principles.
- Bachelor's degree in management/computer information systems, computer science, accounting information systems, or a relevant field.

Knowledge/Skills:
- Tools and technologies: Azure Databricks, Apache Spark, Python, Databricks SQL, Unity Catalog, and Delta Live Tables (a minimal DLT sketch follows this listing). Understanding of cluster configuration and the compute and storage layers.
- Expertise with Snowflake architecture, with experience in design, development, and evolution.
- System integration experience, including data extraction, transformation, and quality-control design techniques.
- Familiarity with data science concepts, as well as MDM, business intelligence, and data warehouse design and implementation techniques.
- Extensive experience with the medallion architecture data management framework as well as Unity Catalog.
- Data modeling and information classification expertise at the enterprise level.
- Understanding of metamodels, taxonomies, and ontologies, as well as the challenges of applying structured techniques (data modeling) to less-structured sources.
- Ability to assess rapidly changing technologies and apply them to business needs.
- Ability to translate the information architecture contribution to business outcomes into simple briefings for use by various data-and-analytics-related roles.

About Us: Datavail is a leading provider of data management, application development, analytics, and cloud services, with more than 1,000 professionals helping clients build and manage applications and data via a world-class tech-enabled delivery platform and software solutions across all leading technologies. For more than 17 years, Datavail has worked with thousands of companies spanning different industries and sizes, and is an AWS Advanced Tier Consulting Partner, a Microsoft Solutions Partner for Data & AI and Digital & App Innovation (Azure), an Oracle Partner, and a MySQL Partner.

About the Team: Datavail's Data Management and Analytics practice is made up of experts who provide a variety of data services, including initial consulting and development, designing and building complete data systems, and ongoing support and management of database, data warehouse, data lake, data integration, and virtualization and reporting environments. Datavail's team is comprised of not just excellent BI and analytics consultants, but great people as well. Datavail's data intelligence consultants are experienced, knowledgeable, and certified in best-in-breed BI and analytics software applications and technologies. We ascertain your business objectives, goals, and requirements, assess your environment, and recommend the tools which best fit your unique situation. Our proven methodology can help your project succeed, regardless of stage. The combination of a proven delivery model and top-notch experience ensures that Datavail will remain the data management experts on demand you desire. Datavail's flexible and client-focused services always add value to your organization.
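For context on the Delta Live Tables skill listed above, here is a hedged sketch of a two-layer DLT pipeline. It runs inside a Databricks DLT pipeline (where `spark` is provided); the landing path, table names, and expectation are hypothetical:

```python
# Minimal Delta Live Tables sketch: bronze ingest plus a silver table with an expectation.
# Runs in a Databricks DLT pipeline; paths and names are illustrative assumptions.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw events ingested as-is")
def events_bronze():
    return spark.read.json("/mnt/landing/events/")  # hypothetical landing path

@dlt.table(comment="Silver: cleansed events")
@dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")  # drop rows failing the rule
def events_silver():
    return dlt.read("events_bronze").withColumn("ingested_at", F.current_timestamp())
```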
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Data Engineer, you will be responsible for developing and maintaining a metadata-driven generic ETL framework to automate ETL code (a minimal config-driven sketch follows this listing). Your primary tasks will include designing, building, and optimizing ETL/ELT pipelines using Databricks (PySpark/SQL) on AWS, and ingesting data from a variety of structured and unstructured sources such as APIs, RDBMS, flat files, and streaming services.

In this role, you will develop and maintain robust data pipelines for both batch and streaming data utilizing Delta Lake and Spark Structured Streaming. Implementing data quality checks, validations, and logging mechanisms will be essential to ensure data accuracy and reliability. You will work on optimizing pipeline performance, cost, and reliability, and collaborate closely with data analysts, BI teams, and business stakeholders to deliver high-quality datasets.

Additionally, you will support data modeling efforts, including star and snowflake schemas and de-normalized table approaches, and assist in data warehousing initiatives. Your responsibilities will also involve working with orchestration tools like Databricks Workflows to schedule and monitor pipelines effectively.

To excel in this role, you should have hands-on experience in ETL/data engineering roles and strong expertise in Databricks (PySpark, SQL, Delta Lake). Experience with Spark optimization, partitioning, caching, and handling large-scale datasets is crucial. Proficiency in SQL and scripting in Python or Scala is required, along with a solid understanding of data lakehouse/medallion architectures and modern data platforms. Knowledge of cloud storage systems like AWS S3, familiarity with DevOps practices (Git, CI/CD, Terraform, etc.), and strong debugging, troubleshooting, and performance-tuning skills are also essential, as is following best practices for version control, CI/CD, and collaborative development.

If you are passionate about data engineering, enjoy working with cutting-edge technologies, and thrive in a collaborative environment, this role offers an exciting opportunity to contribute to the success of data-driven initiatives within the organization.
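Here is a hedged sketch of what a metadata-driven ingestion loop like the one described might look like in PySpark. The config structure, paths, and table names are assumptions for illustration, not the team's actual framework:

```python
# Minimal metadata-driven ingestion sketch: one generic loop, many sources, driven by config.
# The config entries, paths, and table names are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# In a real framework this metadata would live in a control table or config store.
sources = [
    {"name": "orders",    "format": "json", "path": "s3://raw/orders/",    "target": "bronze.orders"},
    {"name": "customers", "format": "csv",  "path": "s3://raw/customers/", "target": "bronze.customers"},
]

for src in sources:
    df = (
        spark.read.format(src["format"])
        .option("header", True)          # needed for CSV, harmless for JSON
        .load(src["path"])
    )
    df.write.format("delta").mode("append").saveAsTable(src["target"])
    print(f"loaded {src['name']} -> {src['target']}")
```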
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
Karnataka
On-site
FICO is a leading global analytics software company, assisting businesses in 100+ countries to make informed decisions. Join our esteemed team today to unleash your career potential.

As part of the product development team, you will play a crucial role in providing innovative thought leadership. This position offers you the chance to gain a profound understanding of our business and to collaborate closely with product management to design, architect, and develop a highly feature-rich product. You will be reporting to the VP, Software Engineering.

Your responsibilities will include designing, developing, testing, deploying, and supporting the capabilities of an enterprise-level platform. You will create scalable microservices focusing on high performance, availability, interoperability, and reliability. Additionally, you will contribute designs and technical proofs of concept, adhere to standards set by the architecture team, and collaborate with senior engineers and product management to create epics and stories while defining technical acceptance criteria.

We are seeking candidates with a Bachelor's/Master's degree in computer science or a related field and a minimum of 7 years of experience in software architecture, design, development, and testing. Proficiency in Java (Java 17 and above), Spring, Spring Boot, Maven/Gradle, Docker, Git, and GitHub is essential. Expertise in data structures, algorithms, multi-threading, and memory management, along with experience with data engineering services, is highly desirable.

The ideal candidate should possess a strong understanding of microservices architecture, RESTful and gRPC APIs, and cloud engineering areas like Kubernetes and AWS/Azure/GCP. Knowledge of databases such as MySQL, PostgreSQL, MongoDB, and Cassandra is also required, as is experience with Agile or Scaled Agile software development and excellent communication and documentation skills.

Join us at FICO to be part of a culture that reflects our core values and offers an inclusive work environment. You will have the opportunity to contribute to impactful projects, develop professionally, and be rewarded for your hard work. We provide competitive compensation, benefits, and rewards programs, as well as a people-first work environment that promotes work/life balance and employee engagement.

Make a move to FICO and be part of a leading organization in the Big Data analytics field. You will have the chance to make a difference by helping businesses leverage data to enhance decision-making. Join us in our commitment to innovation and collaboration, and be part of a diverse and inclusive environment that fosters growth and development. Explore how you can fulfill your potential at www.fico.com/Careers.

Please note that information submitted with your application is subject to the FICO Privacy Policy, available at https://www.fico.com/en/privacy-policy.
Posted 1 month ago
6.0 - 11.0 years
15 - 27 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
We are hiring at Straive!

Job Title: Business Analyst
Experience: 6-10 years
Location: Bangalore/Hyderabad/Chennai/Noida/Gurgaon
Mode: Hybrid

About Straive: Straive is a market-leading content and data technology company providing data services, subject matter expertise, and technology solutions to multiple domains. Data Analytics & AI Solutions, Data/AI-Powered Operations, and Education & Learning form the core pillars of the company's long-term vision. The company is a specialized solutions provider to business information providers in finance, insurance, legal, real estate, life sciences, and logistics. Straive continues to be the leading content services provider to research and education publishers.

Our Data Solutions business has become critical to our clients' success. We use technology and AI, with human experts in the loop, to create data assets that our clients use to power their data products and their end customers' workflows. As our clients expect us to become their future-fit analytics and AI partner, they look to us for help in building data analytics and AI enterprise capabilities. With a client base spanning 30 countries worldwide, Straive's multi-geographical resource pool is strategically located in eight countries: India, the Philippines, the USA, Nicaragua, Vietnam, the United Kingdom, and the company headquarters in Singapore.

Position Overview: We are seeking a highly skilled Business Analyst with a strong technical background to join our team. The ideal candidate will excel in creating technical documentation, performing data analysis using SQL and advanced Excel, and conducting root cause analyses to support engineering teams. This role is critical to bridging the gap between business needs and technical execution, ensuring smooth and efficient processes.

Key Responsibilities:
- Act as the primary liaison between business stakeholders, product owners, and technology teams.
- Lead requirements elicitation sessions and document functional and non-functional specifications.
- Analyze business processes, identify areas for improvement, and design solutions.
- Work with the client's internal stakeholders to define and capture fulfilment requirements such as outbound data deliveries, reporting, and metrics.
- Prioritize and manage ad-hoc requests in parallel with ongoing sprints.
- Apply Scrum and Agile methodologies to coordinate global delivery teams, run scrum ceremonies, manage backlog items, and handle escalations.

Required Skill Sets:
- Prior exposure to data ingestion and curation work (such as working with a Data Lakehouse).
- Strong knowledge of SQL for data analysis and validation, and advanced Excel.
- Well versed in stakeholder management.
- Excellent skills in requirements gathering, process modeling, and documentation.
- Strong communication and stakeholder management skills.
- Proficient in tools like JIRA, Confluence, and Visio.

Preferred Qualifications:
- Master's degree in computer science, statistics, or a related discipline.
- 6+ years as a business analyst.
- Comfortable making decisions and leading.

"Straive is an Equal Opportunity Employer. Our policy is clear: there shall be no discrimination based on age, disability, sex, race, religion or belief, gender reassignment, marriage/civil partnership, pregnancy/maternity, or sexual orientation. We are an inclusive organization and actively promote equality of opportunity for all with the right mix of talent, skills, and potential. We welcome all applications from a wide range of candidates. Selection for roles will be based on individual merit alone."
Posted 2 months ago
7.0 - 11.0 years
0 Lacs
Karnataka
On-site
FICO is a leading global analytics software company that assists businesses in over 100 countries in making informed decisions. By joining the world-class team at FICO, you will have the opportunity to realize your career potential.

As a part of the product development team, you will play a crucial role in providing thought leadership and driving innovation. Reporting to the VP, Software Engineering, you will collaborate closely with product management to architect, design, and develop a highly feature-rich product.

Your responsibilities will include designing, developing, testing, deploying, and supporting the capabilities of a large enterprise-level platform. You will create scalable microservices with a focus on high performance, availability, interoperability, and reliability. Additionally, you will contribute to technical designs, participate in defining technical acceptance criteria, and mentor junior engineers to uphold quality standards.

To be successful in this role, you should hold a Bachelor's or Master's degree in computer science or a related field and possess a minimum of 7 years of experience in software architecture, design, development, and testing. Expertise in Java, Spring, Spring Boot, Maven/Gradle, Docker, Git, and GitHub, as well as experience with data structures, algorithms, and system design, is essential.

Furthermore, you should have a strong understanding of microservices architecture, RESTful and gRPC APIs, cloud engineering technologies such as Kubernetes and AWS/Azure/GCP, and databases like MySQL, PostgreSQL, MongoDB, and Cassandra. Experience with Agile software development, data engineering services, and software design principles is highly desirable.

At FICO, you will have the opportunity to work in an inclusive culture that values core principles like acting like an owner, delighting customers, and earning respect. You will benefit from competitive compensation, benefits, and rewards programs while enjoying a people-first work environment that promotes work/life balance and professional development.

Join FICO and be part of a leading organization at the forefront of Big Data analytics, where you can contribute to helping businesses leverage data to enhance decision-making processes. Your role at FICO will make a significant impact on global businesses, and you will be part of a diverse and inclusive environment that fosters collaboration and innovation.
Posted 2 months ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
The ideal candidate for this position should have advanced proficiency in Python, with a solid understanding of classes and inheritance. Additionally, the candidate should be well versed in EMR, Athena, Redshift, AWS Glue, IAM roles, CloudFormation (CFT is optional), Apache Airflow, Git, SQL, PySpark, Open Metadata, and Data Lakehouse concepts. Experience with metadata management is highly desirable, particularly with AWS services such as S3.

The candidate should possess the following key skills:
- Creation of ETL pipelines
- Deploying code in EMR
- Querying in Athena
- Creating Airflow DAGs for scheduling ETL pipelines
- Knowledge of AWS Lambda and the ability to create Lambda functions (a minimal sketch follows this listing)

This role is for an individual contributor; as such, the candidate is expected to autonomously manage client communication and proactively resolve technical issues without external assistance.
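Given the Lambda requirement, here is a hedged sketch of an S3-triggered Lambda that kicks off a Glue ETL job. The Glue job name, bucket, and argument names are hypothetical assumptions:

```python
# Minimal AWS Lambda sketch: trigger a Glue ETL job when a file lands in S3.
# The Glue job name and argument names are illustrative assumptions.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    record = event["Records"][0]["s3"]          # standard S3 put-event shape
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    run = glue.start_job_run(
        JobName="raw_to_curated",               # hypothetical Glue job
        Arguments={"--input_path": f"s3://{bucket}/{key}"},
    )
    return {"glue_run_id": run["JobRunId"]}
```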
Posted 2 months ago
10.0 - 20.0 years
50 - 75 Lacs
Bengaluru
Work from Office
A leading player in cloud-based enterprise solutions is expanding its analytics leadership team in Bangalore. This pivotal role calls for a seasoned professional to drive the evolution of data products and analytics capabilities across international markets. The ideal candidate will possess the strategic vision, technical expertise, and stakeholder savvy to lead in a fast-paced, innovation-driven environment.

Key Responsibilities:
- Lead and mentor a dynamic team of product managers to scale enterprise-grade data lake and analytics platforms
- Drive program execution and delivery with a focus on performance, prioritization, and business alignment
- Define and execute the roadmap for an analytical data platform, ensuring alignment with strategic and user-centric goals
- Collaborate cross-functionally with engineering, design, and commercial teams to launch impactful BI solutions
- Translate complex business needs into scalable data models and actionable product requirement documents for multi-tenant SaaS products
- Champion AI-enabled analytics experiences to deliver smart, context-aware data workflows
- Maintain high standards in performance, usability, trust, and documentation of data products
- Ensure seamless execution of global data strategies through on-the-ground leadership in India
- Promote agile methodologies, metadata governance, and product-led thinking across teams

Ideal Candidate Profile:
- 10+ years in product leadership roles focused on data products, BI, or analytics in SaaS environments
- Deep understanding of modern data architectures, including dimensional modeling and cloud-native analytics tools
- Proven expertise in building multi-tenant data platforms serving external customer use cases
- Skilled in simplifying complex inputs into clear, scalable requirements and deliverables
- Familiarity with platforms like Delta Lake, dbt, ThoughtSpot, and similar tools
- Strong communicator with demonstrated stakeholder management and team leadership capabilities
- Experience launching customer-facing analytics products is a definite plus
- A passion for intuitive, scalable, and intelligent user experiences powered by data
Posted 2 months ago
5.0 - 12.0 years
5 - 12 Lacs
Bengaluru, Karnataka, India
On-site
Requirements:
- Experience in ETL and data warehousing
- Excellent leadership and communication skills
- Strong hands-on experience with Data Lakehouse architecture
- Proficient in GCP BigQuery, Cloud Storage, Airflow, Dataflow, Cloud Functions, Pub/Sub, and Cloud Run
- Built solution automations using various ETL tools
- Delivered at least 2 GCP cloud data warehousing projects
- Worked on at least 2 Agile/SAFe methodology-based projects
- Experience with PySpark and Teradata
- Skilled in using DevOps tools like GitHub, Jenkins, and cloud-native tools
- Experienced in handling semi-structured data formats like JSON, Parquet, and XML
- Written complex SQL queries for data analysis and extraction
- Deep understanding of data warehousing, data analysis, data profiling, data quality, and data mapping
- Global delivery model experience (15+ team members)
- Collaborated with product/project managers, developers, DBAs, and data governance teams on requirements, design, and deployment

Responsibilities:
- Design and implement data pipelines using GCP services
- Manage deployments and ensure efficient orchestration of services
- Implement CI/CD pipelines using Jenkins or native tools
- Guide a team of data engineers in building scalable data pipelines
- Develop ETL/ELT pipelines using Python, Beam, and SQL (a minimal Beam sketch follows this listing)
- Continuously monitor and optimize data workflows
- Integrate data from various sources using GCP services and orchestrate with Cloud Composer (Airflow)
- Set up monitoring and alerting using Cloud Monitoring, Datadog, etc.
- Mentor junior developers and data engineers
- Collaborate with developers, architects, and stakeholders on robust data solutions
- Lead data migration from legacy systems (Oracle, Teradata, SQL Server) to GCP
- Facilitate Agile ceremonies (sprint planning, scrums, backlog grooming)
- Interact with clients on analytics programs and ensure governance and communication with program leadership
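To ground the Python/Beam requirement, here is a hedged Apache Beam sketch of a simple pipeline of the kind that would run on Dataflow. The bucket paths and filter logic are hypothetical:

```python
# Minimal Apache Beam sketch: read, filter, and write text on GCS.
# Bucket paths and the filter logic are illustrative assumptions.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run():
    opts = PipelineOptions(runner="DirectRunner")  # swap for DataflowRunner on GCP
    with beam.Pipeline(options=opts) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://my-bucket/events/*.csv")  # hypothetical bucket
            | "Parse" >> beam.Map(lambda line: line.split(","))
            | "Filter" >> beam.Filter(lambda cols: len(cols) > 2 and cols[2] == "IN")
            | "Format" >> beam.Map(lambda cols: ",".join(cols[:3]))
            | "Write" >> beam.io.WriteToText("gs://my-bucket/curated/events")  # hypothetical output
        )

if __name__ == "__main__":
    run()
```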
Posted 2 months ago