6.0 - 11.0 years
10 - 20 Lacs
Chennai
Hybrid
Job Description: SAC/SAP Databricks Consultant
Experience: 6+ years
Job Type: Full Time
Location: Hybrid / Chennai
Work Timing: 4:30 PM to 12:30 AM IST
Skills:
- Experience in Databricks for data processing and analytics
- Hands-on knowledge of SAP Datasphere for data modeling and integration
- Good understanding of SAP Analytics Cloud (SAC) for building reports and dashboards
- Strong background in data and analytics
- Ability to build accelerators and innovative products in the analytics space
- Able to work on end-to-end solutions, from data to insights
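For context, a minimal PySpark sketch of the kind of Databricks processing this role describes: reading data extracted from a source system and aggregating it into a reporting-ready table an SAC dashboard could consume. All paths, table names, and columns are hypothetical assumptions, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales-analytics").getOrCreate()

# Hypothetical sales data landed from SAP Datasphere into cloud storage.
sales = spark.read.format("delta").load("/mnt/datasphere/sales_orders")

# Aggregate to a summary table suitable for an SAC report or dashboard.
summary = (sales
    .groupBy("region", "product_line")
    .agg(F.sum("net_value").alias("total_net_value"),
         F.countDistinct("order_id").alias("order_count")))

summary.write.format("delta").mode("overwrite").saveAsTable("analytics.sales_summary")
```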
Posted 1 month ago
5.0 - 10.0 years
35 - 40 Lacs
Bengaluru
Work from Office
Key Requirements:
- A minimum of 5+ years of experience as an ML Engineer is a must.
- Experience in ML model development applying data science algorithms and GenAI.
- Proficient with Python and Databricks, including open-source data libraries (e.g. Pandas, NumPy, scikit-learn).
- Strong understanding and experience of DevOps technologies such as Docker/Terraform, and of MLOps practices and platforms/frameworks like MLflow (or Amazon SageMaker/Azure Machine Learning).
- Strong experience deploying machine learning models in production.
- Working experience in at least one of predictive modelling, optimisation, or recommendation systems.
- Hands-on experience with agile delivery methodologies and CI/CD.
- Hands-on experience with unit testing.
- Broad understanding of data extraction, data manipulation, and feature engineering techniques.
- Familiarity with statistical and mathematical methodologies.
- An advanced degree in Computer Science, Mathematics, or a similar quantitative discipline.
- Excellent communication skills.
Good to Have:
- PySpark/Ray for processing big data.
- dbt for data management.
- Understanding of NLP algorithms and techniques.
- Applied experience using graphs.
- Experience with Large Language Models (fine-tuning, RAG, agents, LangChain).
- Experience with cloud infrastructure.
- A master's degree in ML Engineering.
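As a rough illustration of the MLflow experiment-tracking workflow such a role involves, here is a minimal sketch using scikit-learn. The experiment name, dataset, and parameter values are illustrative assumptions; it assumes `mlflow` and `scikit-learn` are installed.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data; a real pipeline would load curated features.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

mlflow.set_experiment("demo-experiment")  # hypothetical experiment name
with mlflow.start_run():
    params = {"n_estimators": 100, "learning_rate": 0.1}
    model = GradientBoostingClassifier(**params).fit(X, y)
    mlflow.log_params(params)
    mlflow.log_metric("cv_accuracy", cross_val_score(model, X, y, cv=5).mean())
    mlflow.sklearn.log_model(model, "model")  # logged artifact for later deployment
```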
Posted 1 month ago
3.0 - 6.0 years
10 - 15 Lacs
Pune
Hybrid
Role & Responsibilities
As a C++ developer in the Simulation Environment unit you will work with:
- Enablement of advanced large-scale simulation testing for the development of autonomous vehicles.
- Integration of third-party state-of-the-art tools in our simulation platform.
- Support of software developers and testers throughout the organization with the simulation tools.
- Contributing to the team's planning and roadmaps.
In your role, you will learn how our autonomous vehicle software stack works. Your contributions will be one of the keys to securing the efficient and safe deployment of Scania's next-generation autonomous software. We place high value on cooperation, supporting each other, and knowledge sharing; spontaneous mob sessions happen frequently.
Preferred candidate profile
To succeed in your tasks, we believe you have:
- A degree in Computer Science, Electrical Engineering, or Mechanical Engineering.
- More than 5 years of experience working with C++ on Linux.
- Experience using debugging tools.
- Git experience.
Meritorious to have:
- Real-time operating system experience.
- Basic CI/CD experience.
- Basic containerized tool experience.
- Cloud experience: AWS, Databricks, etc.
- Basic knowledge of robotics or autonomous technologies.
- Working experience on a product with a large code base.
Posted 1 month ago
3.0 - 5.0 years
20 - 25 Lacs
Bengaluru
Hybrid
Company name: PulseData Labs Pvt Ltd (captive unit for URUS, USA)
About URUS: We are the URUS family (US), a global leader in products and services for agritech.
SENIOR DATA ENGINEER
This role is responsible for the design, development, and maintenance of data integration and reporting solutions. The ideal candidate will possess expertise in Databricks, strong skills in SQL Server, SSIS, and SSRS, and experience with other modern data engineering tools such as Azure Data Factory. This position requires a proactive and results-oriented individual with a passion for data and a strong understanding of data warehousing principles.
Responsibilities
Data Integration
- Design, develop, and maintain robust and efficient ETL pipelines and processes on Databricks.
- Troubleshoot and resolve Databricks pipeline errors and performance issues.
- Maintain legacy SSIS packages for ETL processes; troubleshoot and resolve SSIS package errors and performance issues.
- Optimize data flow performance and minimize data latency.
- Implement data quality checks and validations within ETL processes.
Databricks Development
- Develop and maintain Databricks pipelines and datasets using Python, Spark, and SQL.
- Migrate legacy SSIS packages to Databricks pipelines.
- Optimize Databricks jobs for performance and cost-effectiveness.
- Integrate Databricks with other data sources and systems.
- Participate in the design and implementation of data lake architectures.
Data Warehousing
- Participate in the design and implementation of data warehousing solutions.
- Support data quality initiatives and implement data cleansing procedures.
Reporting and Analytics
- Collaborate with business users to understand data requirements for department-driven reporting needs.
- Maintain the existing library of complex SSRS reports, dashboards, and visualizations.
- Troubleshoot and resolve SSRS report issues, including performance bottlenecks and data inconsistencies.
Collaboration and Communication
- Comfortable in an entrepreneurial, self-starting, and fast-paced environment, working both independently and with our highly skilled teams.
- Collaborate effectively with business users, data analysts, and other IT teams.
- Communicate technical information clearly and concisely, both verbally and in writing.
- Document all development work and procedures thoroughly.
Continuous Growth
- Keep abreast of the latest advancements in data integration, reporting, and data engineering technologies.
- Continuously improve skills and knowledge through training and self-learning.
This job description reflects management's assignment of essential functions; it does not prescribe or restrict the tasks that may be assigned.
Requirements
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 2+ years of experience in data integration and reporting.
- Extensive experience with Databricks, including Python, Spark, and Delta Lake.
- Strong proficiency in SQL Server, including T-SQL, stored procedures, and functions.
- Experience with SSIS (SQL Server Integration Services) development and maintenance.
- Experience with SSRS (SQL Server Reporting Services) report design and development.
- Experience with data warehousing concepts and best practices.
- Experience with the Microsoft Azure cloud platform and Microsoft Fabric desirable.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Experience with Agile methodologies.
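For illustration, a hedged sketch of the kind of Databricks ETL step (with a simple data quality gate) that an SSIS-to-Databricks migration like this might produce. All paths, thresholds, and table names are hypothetical assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.format("csv").option("header", True).load("/mnt/landing/invoices/")

clean = (raw
    .withColumn("invoice_date", F.to_date("invoice_date", "yyyy-MM-dd"))
    .filter(F.col("invoice_id").isNotNull()))

# Data quality check: fail the job rather than load bad data downstream.
null_ratio = clean.filter(F.col("amount").isNull()).count() / max(clean.count(), 1)
if null_ratio > 0.01:
    raise ValueError(f"amount null ratio {null_ratio:.2%} exceeds 1% threshold")

clean.write.format("delta").mode("append").saveAsTable("warehouse.fact_invoice")
```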
Posted 1 month ago
7.0 - 10.0 years
20 - 30 Lacs
Bengaluru
Hybrid
Company name: PulseData Labs Pvt Ltd (captive unit for URUS, USA)
About URUS: We are the URUS family (US), a global leader in products and services for agritech.
SENIOR DATA ENGINEER
This role is responsible for the design, development, and maintenance of data integration and reporting solutions. The ideal candidate will possess expertise in Databricks, strong skills in SQL Server, SSIS, and SSRS, and experience with other modern data engineering tools such as Azure Data Factory. This position requires a proactive and results-oriented individual with a passion for data and a strong understanding of data warehousing principles.
Responsibilities
Data Integration
- Design, develop, and maintain robust and efficient ETL pipelines and processes on Databricks.
- Troubleshoot and resolve Databricks pipeline errors and performance issues.
- Maintain legacy SSIS packages for ETL processes; troubleshoot and resolve SSIS package errors and performance issues.
- Optimize data flow performance and minimize data latency.
- Implement data quality checks and validations within ETL processes.
Databricks Development
- Develop and maintain Databricks pipelines and datasets using Python, Spark, and SQL.
- Migrate legacy SSIS packages to Databricks pipelines.
- Optimize Databricks jobs for performance and cost-effectiveness.
- Integrate Databricks with other data sources and systems.
- Participate in the design and implementation of data lake architectures.
Data Warehousing
- Participate in the design and implementation of data warehousing solutions.
- Support data quality initiatives and implement data cleansing procedures.
Reporting and Analytics
- Collaborate with business users to understand data requirements for department-driven reporting needs.
- Maintain the existing library of complex SSRS reports, dashboards, and visualizations.
- Troubleshoot and resolve SSRS report issues, including performance bottlenecks and data inconsistencies.
Collaboration and Communication
- Comfortable in an entrepreneurial, self-starting, and fast-paced environment, working both independently and with our highly skilled teams.
- Collaborate effectively with business users, data analysts, and other IT teams.
- Communicate technical information clearly and concisely, both verbally and in writing.
- Document all development work and procedures thoroughly.
Continuous Growth
- Keep abreast of the latest advancements in data integration, reporting, and data engineering technologies.
- Continuously improve skills and knowledge through training and self-learning.
This job description reflects management's assignment of essential functions; it does not prescribe or restrict the tasks that may be assigned.
Requirements
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 7+ years of experience in data integration and reporting.
- Extensive experience with Databricks, including Python, Spark, and Delta Lake.
- Strong proficiency in SQL Server, including T-SQL, stored procedures, and functions.
- Experience with SSIS (SQL Server Integration Services) development and maintenance.
- Experience with SSRS (SQL Server Reporting Services) report design and development.
- Experience with data warehousing concepts and best practices.
- Experience with the Microsoft Azure cloud platform and Microsoft Fabric desirable.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Experience with Agile methodologies.
Posted 1 month ago
5.0 - 8.0 years
1 - 2 Lacs
Gurugram
Work from Office
Urgent requirement for a Tech Consultant (Data Engineering) with 5+ years of experience. Strong knowledge of Big Data technologies (Hadoop, Spark, Snowflake, Databricks, Airflow, AWS), Python, SQL, and cloud platforms (AWS, Azure).
Posted 1 month ago
5.0 - 10.0 years
14 - 24 Lacs
Bengaluru
Remote
Detailed job description - Skill set:
- Strong knowledge of Databricks, including creating scalable ETL (Extract, Transform, Load) processes and data lakes.
- Strong knowledge of Python and SQL.
- Strong experience with the AWS cloud platform is a must.
- Good understanding of data modeling principles and data warehousing concepts.
- Strong knowledge of optimizing ETL and batch processing jobs to ensure high performance and efficiency.
- Implementing data quality checks, monitoring data pipelines, and ensuring data consistency and security.
- Hands-on experience with Databricks features like Unity Catalog.
Mandatory skills: Databricks, AWS
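As one concrete example of the batch-job optimization this posting alludes to, here is a hedged sketch of broadcasting a small dimension table to avoid a shuffle join, then writing partitioned output for downstream file pruning. The bucket, tables, and columns are hypothetical assumptions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = spark.read.format("delta").load("s3://example-bucket/events/")        # large fact table
countries = spark.read.format("delta").load("s3://example-bucket/countries/")  # small dimension

# Broadcast the small side so the join avoids shuffling the large table.
joined = events.join(F.broadcast(countries), on="country_code", how="left")

# Partition by date so downstream jobs read only the files they need.
(joined.write.format("delta").mode("overwrite")
       .partitionBy("event_date")
       .save("s3://example-bucket/enriched_events/"))
```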
Posted 1 month ago
6.0 - 10.0 years
7 - 14 Lacs
Bengaluru
Hybrid
Roles and Responsibilities
- Architect and implement an effective data framework enabling end-to-end data solutions.
- Understand business needs, use cases, and drivers for insights, and translate them into detailed technical specifications.
- Create epics, features, and user stories with clear acceptance criteria for execution and delivery by the data engineering team.
- Create scalable and robust data solution designs that incorporate governance, security, and compliance aspects.
- Develop and maintain logical and physical data models, and work closely with data engineers, data analysts, and data testers for their successful implementation.
- Analyze, assess, and design data integration strategies across various sources and platforms.
- Create project plans and timelines while monitoring and mitigating risks and controlling the progress of the project.
- Conduct daily scrum with the team, with a clear focus on meeting sprint goals and timely resolution of impediments.
- Act as a liaison between technical teams and business stakeholders.
- Guide and mentor the team on best practices for data solutions and delivery frameworks.
- Actively work with and support stakeholders/clients to complete User Acceptance Testing, and ensure strong adoption of the data products after launch.
- Define and measure KPIs/KRAs for features, ensuring the data roadmap is verified through measurable outcomes.
Prerequisites
- 5 to 8 years of professional, hands-on experience building end-to-end data solutions on cloud-based data platforms, including 2+ years in a Data Architect role.
- Proven hands-on experience building pipelines for data lakes, data lakehouses, data warehouses, and data visualization solutions.
- Sound understanding of modern data technologies like Databricks, Snowflake, Data Mesh, and Data Fabric.
- Experience managing the data life cycle in a fast-paced Agile/Scrum environment.
- Excellent spoken and written communication, receptive listening skills, and the ability to convey complex ideas in a clear, concise fashion to technical and non-technical audiences.
- Ability to collaborate and work effectively with cross-functional teams, project stakeholders, and end users for quality deliverables within stipulated timelines.
- Ability to manage, coach, and mentor a team of data engineers, data testers, and data analysts.
- Strong process driver with expertise in the Agile/Scrum framework on tools like Azure DevOps, Jira, or Confluence.
- Exposure to Machine Learning, GenAI, and modern AI-based solutions.
Experience: Technical Lead, Data Analytics, with 6+ years of overall experience, of which 2+ years is in data architecture.
Education: Engineering degree from a Tier 1 institute preferred.
Compensation: The compensation structure will be as per industry standards.
Posted 1 month ago
5.0 - 7.0 years
1 - 1 Lacs
Hyderabad
Remote
Databricks and Informatica Intelligent Data Management Cloud (IDMC) consultant
Posted 1 month ago
7.0 - 12.0 years
15 - 22 Lacs
Bengaluru
Hybrid
Job Summary: We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You'll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.
Key Responsibilities:
- Design and implement ETL/ELT pipelines using Databricks and PySpark.
- Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets.
- Develop high-performance SQL queries and optimize Spark jobs.
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
- Ensure data quality and compliance across all stages of the data lifecycle.
- Implement best practices for data security and lineage within the Databricks ecosystem.
- Participate in CI/CD, version control, and testing practices for data pipelines.
Required Skills:
- Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits).
- Strong hands-on skills with PySpark and Spark SQL.
- Solid experience writing and optimizing complex SQL queries.
- Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
- Experience with cloud platforms like Azure or AWS.
- Understanding of data governance, RBAC, and data security standards.
Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional.
- Experience with tools like Airflow, Git, Azure Data Factory, or dbt.
- Exposure to streaming data and real-time processing.
- Knowledge of DevOps practices for data engineering.
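As a rough illustration of Unity Catalog-style access control, the grants below use Databricks SQL executed from a notebook. This runs only on a Databricks workspace with Unity Catalog enabled (and sufficient metastore privileges); `spark` is provided by the notebook, and the catalog, schema, and group names are hypothetical.

```python
# Hypothetical catalog/schema and principals, for illustration only.
spark.sql("CREATE CATALOG IF NOT EXISTS sales_prod")
spark.sql("CREATE SCHEMA IF NOT EXISTS sales_prod.core")

# Read access for analysts: they need USE on the containers plus SELECT.
spark.sql("GRANT USE CATALOG ON CATALOG sales_prod TO `analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA sales_prod.core TO `analysts`")
spark.sql("GRANT SELECT ON SCHEMA sales_prod.core TO `analysts`")

# Full access for the engineering group that owns the pipelines.
spark.sql("GRANT ALL PRIVILEGES ON SCHEMA sales_prod.core TO `data-engineers`")
```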
Posted 1 month ago
9.0 - 14.0 years
8 - 18 Lacs
Bhopal, Hyderabad, Pune
Hybrid
Urgent opening for a Data Architect position
Experience: Min 9 years
Salary: As per industry
Notice period: Immediate joiners are preferred
Skills: Databricks, ADF, Python, Synapse
Job Description
Work you'll do
This role is responsible for data architecture and support in a fast-paced, cross-cultural, diverse environment leveraging the Agile methodology. It requires a solid understanding of how to take business requirements and define data models and the underlying data structures that support a data architecture's design. The person who fills this role is expected to work closely with product owners and a cross-functional team comprising business analysts, software engineers, functional and non-functional testers, operations engineers, and project management.
Key Responsibilities
- Collaborate with product managers, designers, and fellow developers to design, develop, and maintain web-based applications and software solutions.
- Write clean, maintainable, and efficient code, adhering to coding standards and best practices.
- Perform code reviews to ensure code quality and consistency.
- Troubleshoot and debug software defects and issues, providing timely solutions.
- Participate in the entire software development lifecycle, from requirements gathering to deployment and support.
- Stay up to date with industry trends and emerging technologies, incorporating them into the development process when appropriate.
- Mentor junior developers and provide technical guidance when needed.
Qualifications
Technical Skills:
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Strong experience in data engineering; well versed in data architecture.
- 10+ years of professional experience in data engineering and architecture.
- Advanced understanding of data modelling and design.
- Strong database management and design; SQL Server preferred.
- Strong understanding of Databricks and Azure Synapse.
- Understanding of data pipeline/ETL frameworks and libraries.
- Experience with Azure Cloud Components (PaaS) and DevOps is required.
- Experience working in Agile and SAFe development processes.
- Excellent problem-solving and analytical skills.
Other Skills:
- Strong organizational and communication skills.
- Flexibility, energy, and the ability to work well with others in a team environment.
- The ability to effectively manage multiple assignments and responsibilities in a fast-paced environment.
- Expert problem solver, finding simple answers to complex questions or problems.
- Able to learn and upskill on new technologies.
- Drive for results: partner with product owners to deliver on short- and long-term milestones.
- Experience working with product owners and development teams to document and clarify business and user requirements, and to manage the scope of defined features and functions during the project lifecycle.
- Critical thinking: able to think outside the box, using knowledge gained through prior experience, education, and training to resolve issues and remove project barriers.
- Strong written and verbal communication skills, with the ability to present to and collaborate with business leaders.
- Experience interfacing with external software design and development vendors preferred.
- A team player who can deliver in a high-pressure, high-demand environment.
If interested, kindly share your resume at vidya.raskar@newvision-software.com
Regards,
Vidya
Posted 1 month ago
7.0 - 12.0 years
20 - 35 Lacs
Pune, Bengaluru
Hybrid
Hi all, we have a senior position for a Databricks expert.
Job location: Pune and Bangalore (hybrid)
Perks: pick-up and drop provided
Kindly note: overall experience should be 7+ years, immediate joiners only.
Role & Responsibilities
- Data engineering: data pipeline development using Azure Databricks (5+ years).
- Optimizing data processing performance, efficient resource utilization, and execution time.
- Workflow orchestration (5+ years): Databricks features like Databricks SQL, Delta Lake, and Workflows to orchestrate and manage complex data workflows.
- Data modelling (5+ years).
Nice to haves: knowledge of PySpark, good knowledge of data warehousing.
Posted 1 month ago
6.0 - 8.0 years
15 - 19 Lacs
Pune
Work from Office
Responsibilities
- Design, build, and maintain scalable and efficient data pipelines using PySpark, Spark SQL, and optionally Scala.
- Develop and manage data solutions on the Databricks platform, utilizing Workspaces, Jobs, Delta Live Tables (DLT), Repos, and Unity Catalog.
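For context, a minimal sketch of a Delta Live Tables (DLT) pipeline of the kind this role mentions. Note that `import dlt` resolves only when the code runs inside a Databricks DLT pipeline (where `spark` is also predefined); the paths, table names, and expectation are hypothetical.

```python
import dlt  # available only inside a Databricks DLT pipeline
from pyspark.sql import functions as F

@dlt.table(comment="Raw events ingested from cloud storage (hypothetical path).")
def raw_events():
    return spark.read.format("json").load("/mnt/raw/events/")

@dlt.table(comment="Cleaned events with an ingestion timestamp.")
@dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")  # drop rows failing the check
def clean_events():
    return dlt.read("raw_events").withColumn("ingested_at", F.current_timestamp())
```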
Posted 2 months ago
5.0 - 10.0 years
15 Lacs
Noida, Chennai, Bengaluru
Work from Office
Responsibilities
- Lead the design, development, and implementation of big data solutions using Apache Spark and Databricks.
- Architect and optimize data pipelines and workflows to process large volumes of data efficiently.
- Utilize Databricks features such as Delta Lake, Databricks SQL, and Databricks Workflows to enhance data processing and analytics capabilities.
- Collaborate with data engineers, data scientists, and business stakeholders to understand data requirements and deliver high-quality data solutions.
- Implement best practices for data engineering, including data quality, data governance, and data security.
- Monitor and troubleshoot performance issues in Spark jobs and Databricks clusters.
- Mentor and guide junior engineers in the team, promoting a culture of continuous learning and improvement.
- Stay up-to-date with the latest advancements in Spark and Databricks technologies and incorporate them into the team's practices.
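As a small example of the Delta Lake maintenance such pipeline-optimization work typically includes, the commands below compact files and co-locate data for faster lookups. They assume a Databricks notebook (where `spark` is predefined); the table and column names are hypothetical.

```python
# Compact small files and cluster data by a frequently filtered column.
spark.sql("OPTIMIZE warehouse.fact_clicks ZORDER BY (user_id)")

# Remove files no longer referenced by the table, keeping 7 days of history.
spark.sql("VACUUM warehouse.fact_clicks RETAIN 168 HOURS")
```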
Posted 2 months ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad
Work from Office
Job Description: In this vital role you will be responsible for the development and implementation of our data strategy. The ideal candidate possesses a strong blend of technical expertise and data-driven problem-solving skills. As a Data Engineer, you will play a crucial role in building and optimizing our data pipelines and platforms in a SAFe Agile product team.
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Deliver data pipeline projects from development to deployment, managing timelines and risks.
- Ensure data quality and integrity through meticulous testing and monitoring.
- Leverage cloud platforms (AWS, Databricks) to build scalable and efficient data solutions.
- Work closely with the product team and key collaborators to understand data requirements.
- Adhere to data engineering industry standards and best practices.
- Experience developing in an Agile development environment, comfortable with Agile terminology and ceremonies.
- Familiarity with code versioning using Git and code migration tools; familiarity with Jira.
- Stay up to date with the latest data technologies and trends.
What we expect of you
Basic Qualifications:
- Doctorate degree, OR Master's degree and 4 to 6 years of Information Systems experience, OR Bachelor's degree and 6 to 8 years of Information Systems experience, OR Diploma and 10 to 12 years of Information Systems experience.
- Demonstrated hands-on experience with cloud platforms (AWS, Azure, GCP).
- Proficiency in Python, PySpark, SQL.
- Development knowledge in Databricks.
- Good analytical and problem-solving skills to address sophisticated data challenges.
Preferred Qualifications:
- Experience with data modeling.
- Experience working with ETL orchestration technologies.
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
- Familiarity with SQL/NoSQL databases.
Soft Skills:
- Skilled in breaking down problems, documenting problem statements, and estimating efforts.
- Effective communication and interpersonal skills to collaborate with multi-functional teams.
- Excellent analytical and problem-solving skills.
- Strong verbal and written communication skills.
- Ability to work successfully with global teams.
- High degree of initiative and self-motivation.
- Team-oriented, with a focus on achieving team goals.
Posted 2 months ago
3.0 - 8.0 years
0 - 3 Lacs
Hyderabad
Work from Office
Job Overview: We are seeking a skilled and proactive Machine Learning Engineer to join our smart manufacturing initiative. You will play a pivotal role in building data pipelines, developing ML models for defect prediction, and implementing closed-loop control systems to improve production quality.
Responsibilities:
Data Engineering & Pipeline Support:
- Validate and ensure correct data flow from InfluxDB/CDL to Smart box/Databricks platforms.
- Collaborate with data scientists to support model development through accurate data provisioning.
- Provide ongoing support in resolving data pipeline issues and performing ad-hoc data extractions.
ML Model Development:
- Develop three distinct ML models to predict different types of defects using historical production data.
- Predict short-term outcomes (next 5 minutes) using techniques like artificial sampling and dimensionality reduction.
- Ensure high model performance: accuracy ≥ 95%, precision and recall ≥ 80%.
- Extract and present feature importance to support model interpretability.
Closed-loop Control Architecture:
- Implement end-to-end ML-driven automation to proactively correct machine settings based on model predictions.
- Key architecture components include: real-time data ingestion from PLCs via InfluxDB/CDL; model deployment and inference on Smart box; an output pipeline to share actionable recommendations via PLC tags; and an automated retraining pipeline in the cloud, triggered by model drift or recommendation deviations.
Qualifications:
- Proven experience with real-time data streaming from industrial systems (PLCs, InfluxDB/CDL).
- Hands-on experience in building and deploying ML models in production.
- Strong understanding of data preprocessing, dimensionality reduction, and synthetic data techniques.
- Familiarity with cloud-based retraining workflows and model performance monitoring.
- Experience in smart manufacturing or predictive maintenance is a plus.
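For illustration, a minimal scikit-learn sketch of an imbalanced defect-prediction model evaluated against accuracy/precision/recall targets like those above, including the feature-importance output the posting asks for. The synthetic data, class balance, and model choice are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical production data with rare defects (~5%).
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95, 0.05], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# class_weight="balanced" compensates for the rarity of defect samples.
model = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                               random_state=42).fit(X_tr, y_tr)
pred = model.predict(X_te)

print(f"accuracy={accuracy_score(y_te, pred):.3f}")
print(f"precision={precision_score(y_te, pred):.3f}")
print(f"recall={recall_score(y_te, pred):.3f}")

# Feature importances support the interpretability requirement.
top = np.argsort(model.feature_importances_)[::-1][:5]
print("top feature indices:", top)
```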
Posted 2 months ago
5.0 - 10.0 years
10 - 20 Lacs
Chennai
Hybrid
The Operations Engineer will work in collaboration with, and under the direction of, the Manager of Data Engineering, Advanced Analytics to provide operational services, governance, and incident management solutions for the Analytics team. This includes modifying existing data ingestion workflows, releases to QA and Prod, working closely with cross-functional teams, and providing production support for daily issues.
Essential Job Functions:
- Takes ownership of reported customer issues and sees problems through to resolution.
- Researches, diagnoses, troubleshoots, and identifies solutions to resolve customer issues.
- Follows standard procedures for proper escalation of unresolved issues to the appropriate internal teams.
- Provides prompt and accurate feedback to customers.
- Ensures proper recording and closure of all issues.
- Prepares accurate and timely reports.
- Documents knowledge in the form of knowledge base tech notes and articles.
Other Responsibilities:
- Be part of the on-call rotation.
- Support QA and production releases, off-hours if needed.
- Work with developers to troubleshoot issues.
- Attend daily standups.
- Create and maintain support documentation (Jira/Confluence).
Minimum Qualifications and Job Requirements:
- Proven working experience in enterprise technical support.
- Basic knowledge of systems, utilities, and scripting.
- Strong problem-solving skills.
- Excellent client-facing skills.
- Excellent written and verbal communication skills.
- Experience with Microsoft Azure, including Azure Data Factory (ADF), Databricks, and ADLS (Gen2).
- Experience with system administration and SFTP.
- Experience leveraging analytics team tools such as Alteryx or other ETL tools.
- Experience with data visualization software (e.g. Domo, Datorama).
- Experience with SQL programming.
- Experience automating routine data tasks using various software tools (e.g., Jenkins, Nexus, SonarQube, Rundeck, Task Scheduler).
Posted 2 months ago
4.0 - 7.0 years
5 - 15 Lacs
Bengaluru
Hybrid
Job Description: PySpark Developer / Data Engineer
- Must have experience with PySpark.
- Strong programming experience; Python, PySpark, or Scala preferred.
- Experience in designing and implementing CI/CD, build management, and development strategy.
- Experience with SQL and SQL analytical functions; experience participating in key business, architectural, and technical decisions.
- Proficient in leveraging Spark for distributed data processing and transformation.
- Skilled in optimizing data pipelines for efficiency and scalability.
- Experience with real-time data processing and integration.
- Familiarity with Apache Hadoop ecosystem components.
- Strong problem-solving abilities in handling large-scale datasets.
- Ability to collaborate with cross-functional teams and communicate effectively with stakeholders.
Primary Skills: PySpark + SQL + Databricks
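As a small example of the SQL analytical (window) functions this posting names, applied through PySpark: ranking each customer's orders and computing a running total. The DataFrame contents and column names are illustrative.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("window-demo").getOrCreate()

orders = spark.createDataFrame(
    [("c1", "2024-01-01", 120.0), ("c1", "2024-02-01", 80.0), ("c2", "2024-01-15", 200.0)],
    ["customer_id", "order_date", "amount"],
)

# Window per customer, ordered chronologically.
w = Window.partitionBy("customer_id").orderBy("order_date")

result = (orders
    .withColumn("order_rank", F.row_number().over(w))
    .withColumn("running_total", F.sum("amount").over(w)))
result.show()
```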
Posted 2 months ago
5.0 - 10.0 years
20 - 30 Lacs
Pune, Chennai, Bengaluru
Hybrid
Roles & Responsibilities:
- Lead and manage end-to-end data pipeline projects leveraging Databricks and Azure Data Factory.
- Collaborate with cross-functional teams, including data engineers, analysts, and business stakeholders, to gather requirements and deliver data products.
- Ensure proper unit testing and system testing of data pipelines to capture all files/transactions.
- Monitor and troubleshoot data ingestion processes, ensuring accuracy and completeness of the data.
- Facilitate Agile ceremonies (sprint planning, standups, retrospectives) and ensure the team adheres to Agile best practices.
- Define, prioritize, and manage product backlogs based on stakeholder inputs and data insights.
- Drive continuous improvement by evaluating emerging technologies and optimizing data integration workflows.
Must-Have Skills:
- Databricks: minimum 3 years of hands-on experience.
- Azure Data Factory: minimum 3 years of development and orchestration experience.
- CQL (Cassandra Query Language): minimum 5 years of strong understanding and practical application.
- Strong verbal communication skills, with the ability to collaborate effectively with technical and non-technical teams.
- Deep understanding of the Agile development process.
Good-to-Have Skills:
- Experience with Azure Data Lake, Synapse Analytics, or Power BI.
- Familiarity with DevOps and CI/CD pipelines in Azure.
- Exposure to data governance and data quality frameworks.
- Experience with cloud security best practices related to data handling.
- Background in Business Intelligence (BI) and reporting systems.
Posted 2 months ago
10.0 - 15.0 years
25 - 37 Lacs
Pune
Work from Office
Looking for a Java Fullstack Developer for our client.
Posted 2 months ago
3 - 6 years
12 - 15 Lacs
Hyderabad
Remote
Job Title: Data Engineer
Job Summary: Are you passionate about building scalable data pipelines, optimizing ETL processes, and designing efficient data models? We are looking for a Databricks Data Engineer to join our team and play a key role in managing and transforming data in Azure cloud environments. In this role, you will work with Azure Data Factory (ADF), Databricks, Python, and SQL to develop robust data ingestion and transformation workflows. You'll also be responsible for integrating data sources, optimizing performance, and ensuring data quality and governance. If you have strong experience in big data processing, distributed computing (Spark), and data modeling, we'd love to hear from you!
Key Responsibilities:
1. Develop & optimize ETL pipelines: build robust and scalable data pipelines using ADF, Databricks, and Python for data ingestion, transformation, and loading.
2. Data modeling & semantic layer modeling: design logical, physical, and semantic data models for structured and unstructured data.
3. Database management: develop and optimize SQL queries, stored procedures, and indexing strategies to enhance performance.
4. Big data processing: work with Azure Databricks for distributed computing, Spark for large-scale processing, and Delta Lake for optimized storage.
5. Data quality & governance: implement data validation, lineage tracking, and security measures for high-quality, compliant data.
6. Collaboration: work closely with business analysts, data scientists, and DevOps teams to ensure data availability and usability.
7. Testing and debugging: write unit tests and perform debugging to ensure the implementation is robust and error-free; conduct performance optimization and security audits.
Required Skills and Qualifications:
- Azure cloud expertise: strong experience in Azure Data Factory (ADF), Databricks, and Azure Synapse.
- Programming: proficiency in Python for data processing, automation, and scripting.
- SQL & database skills: advanced knowledge of SQL, T-SQL, or PL/SQL for data manipulation.
- Data modeling: hands-on experience in dimensional modeling, semantic layer modeling, and entity-relationship modeling.
- Big data frameworks: strong understanding of Apache Spark, Delta Lake, and distributed computing.
- Performance optimization: expertise in query optimization, indexing, and performance tuning.
- Data governance & security: knowledge of RBAC, encryption, and data privacy standards.
Preferred Qualifications:
- Experience with CI/CD for data pipelines using Azure DevOps.
- Knowledge of Kafka/Event Hub for real-time data processing.
- Experience with Power BI/Tableau for data visualization (not mandatory but a plus).
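For context, a hedged sketch of the incremental ingestion pattern such a role typically implements on Databricks: upserting a new batch into a Delta table with MERGE. The paths and table/column names are hypothetical, and the snippet assumes a Spark environment with Delta Lake support.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical batch of changed records landed by an ADF copy activity.
updates = spark.read.format("parquet").load("/mnt/landing/customers/")

target = DeltaTable.forName(spark, "analytics.dim_customer")
(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # update existing customers in place
    .whenNotMatchedInsertAll()   # insert customers seen for the first time
    .execute())
```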
Posted 2 months ago
5 - 8 years
8 - 18 Lacs
Bengaluru
Remote
We are seeking a Databricks SQL Engineer with a Pharma or Life Sciences background to join our offshore Data Engineering team. This role focuses on building efficient, scalable SQL-based data models and pipelines using Databricks SQL, Spark SQL, and Delta Lake.
Posted 2 months ago
5 - 10 years
0 Lacs
Chennai, Coimbatore, Bengaluru
Hybrid
Open & direct walk-in drive event | Hexaware Technologies - Azure Data Engineer/Architect in Chennai, Tamil Nadu on 10th May (Saturday) 2025 - Azure Databricks / Data Factory / SQL & PySpark
Dear Candidate,
I hope this email finds you well. We are thrilled to announce an exciting opportunity for talented professionals like yourself to join our team as an Azure Data Engineer. We are hosting an open walk-in drive in Chennai, Tamil Nadu on 10th May (Saturday) 2025, and we believe your skills in Databricks, Data Factory, SQL, and PySpark align perfectly with what we are seeking.
Details of the Walk-in Drive:
Date: 10th May (Saturday) 2025
Experience: 5 to 12 years
Time: 9.00 AM to 5 PM
Venue: HEXAWARE TECHNOLOGIES, H-5, SIPCOT IT Park, Post, Navalur, Siruseri, Tamil Nadu 603103
Point of contact: Azhagu Kumaran Mohan / +91-9789518386
Key Skills and Experience: As an Azure Data Engineer, we are looking for candidates who possess expertise in Databricks, Data Factory, SQL, and PySpark/Spark.
Roles and Responsibilities: As part of our dynamic team, you will be responsible for:
- Designing, implementing, and maintaining data pipelines.
- Collaborating with cross-functional teams to understand data requirements.
- Optimizing and troubleshooting data processes.
- Leveraging Azure data services to build scalable solutions.
What to Bring: updated resume, photo ID, and a passport-size photo.
How to Register: To express your interest and confirm your participation, please reply to this email with your updated resume attached. Walk-ins are also welcome on the day of the event.
This is an excellent opportunity to showcase your skills, network with industry professionals, and explore the exciting possibilities that await you at Hexaware Technologies. If you have any questions or require further information, please feel free to reach out to me at AzhaguK@hexaware.com / +91-9789518386.
We look forward to meeting you and exploring the potential of having you as a valuable member of our team.
Note: Candidates with less than 4 years of total experience will not be screen-selected to attend the interview.
Posted 2 months ago